Jeremy Kashel

Jeremy Kashel's Blog

SQL Server Denali CTP 1 Released

I’ve just found out that the first CTP for SQL Server codename Denali is now available for download. The link to download both the 32-bit and 64-bit versions is:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6a04f16f-f6be-4f92-9c92-f7e5677d91f9

On a related note, Simon Sabin has recently posted here about the Tech-Ed Europe keynote that shows a glimpse of a new Silverlight reporting tool for SQL that’s part of Denali. Well worth watching the keynote video…

Master Data Services Subscription Views

Subscription Views are one of the ways to extract data from Master Data Services. Created via the ‘Data Integration’ section of the MDS Master Data Manager front end, they result in the creation of a physical SQL Server view in the underlying database.

All this you will probably know from the MSDN article. The purpose of this post, however, is to explain a few of the more subtle differences between the different types of subscription views that get created, and to look at how they can be used to feed a data warehouse.

Derived Hierarchies

You can create a subscription view from a derived hierarchy, which will return all the levels from your hierarchy in a de-normalised form. If you choose the sample Product model, you will see that one of the Derived Hierarchies that you can pick from is called Category. As your Product dimension in a data warehouse would include attributes such as Product Category and Sub Category, on the surface this view looks ideal.

Unfortunately there is a slight issue with subscription views based on derived hierarchies. Users could have altered one of the members within the hierarchy before the data warehouse’s nightly load takes place. If the MDS validation of that member hasn’t yet occurred (which doesn’t happen automatically in some situations), then the member could contain incorrect attribute values that we don’t want to expose to the data warehouse.

For example, a user has edited a product, as shown below, but chosen a different Sub Category to that of its adjacent members:

[Image: a product member edited with a different Sub Category, flagged with a yellow question mark]

As indicated by the yellow question mark above, the member is awaiting validation. However, when I run a select statement against the view that’s based on the Derived Hierarchy format, the member marked above still appears in the view results. We would not want to load this change into the data warehouse, as until the business rules have been run, we don’t know if the new Sub Category is valid in the context of the other attributes. For reference, the same behaviour exists for members that have explicitly failed validation.

Leaf Attributes

You can also create a subscription view by picking an entity, and then choosing the format of Leaf Attributes, as shown below:

[Image: creating a subscription view with the Leaf Attributes format]

In my case, I’ve set up a subscription view called ProductLeafAttributes, and when I run a select statement against this view in SQL Server, I get the following results:

[Image: results of a select statement against the ProductLeafAttributes view]

As with the subscription view that’s based on the Derived Hierarchy format, I still get all the members returned, but there’s also a column called ValidationStatus, which shows that one of the members is awaiting revalidation. This column can therefore be filtered on in the data warehouse load, to take through only the changes that have definitely passed validation - see the sketch below.
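As the subscription view is just a normal SQL Server view, this filter can sit directly in the warehouse extract query. Here’s a minimal sketch, assuming a view called ProductLeafAttributes in the default mdm schema and a status literal of ‘Validation Succeeded’ - both are worth verifying against your own MDS database:

SELECT p.Code,
       p.Name,
       p.ValidationStatus
FROM   mdm.ProductLeafAttributes AS p
-- Only take through members that have definitely passed validation;
-- check the exact status strings that appear in your own environment
WHERE  p.ValidationStatus = 'Validation Succeeded';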

Conclusion

Creating subscription views based on entities, rather than Derived Hierarchies, exposes the validation status of a member, and is therefore the preferred subscription view format to use in this case. The one catch is that the Leaf Attributes format will not return attributes of any related entities, e.g. Product Category. It will return the Sub Category code, so it’s necessary to join to separate subscription views for both Sub Category and Category in order to pick up the missing attributes, along the lines of the sketch below.
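To illustrate those joins, here’s a hedged sketch; all of the view and column names are hypothetical, as they depend on how the entities and their subscription views have been set up (in particular, the domain attribute columns that hold the Sub Category and Category codes):

SELECT p.Code  AS ProductCode,
       p.Name  AS ProductName,
       sc.Name AS SubCategoryName,
       c.Name  AS CategoryName
FROM   mdm.ProductLeafAttributes AS p
-- Assumed domain attribute column holding the Sub Category code
JOIN   mdm.SubCategoryLeafAttributes AS sc ON sc.Code = p.SubCategory_Code
-- Assumed domain attribute column holding the Category code
JOIN   mdm.CategoryLeafAttributes AS c ON c.Code = sc.Category_Code
WHERE  p.ValidationStatus = 'Validation Succeeded';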

SQL Bits 7 Agenda

The SQL Bits 7 agenda was published recently, and I’m pleased to say that I will be delivering a session titled ‘End to End Master Data Management with SQL Server Master Data Services’.

The heart of this session will be an end-to-end technical demo that will show the role of MDS in extracting master data from source systems, before demonstrating how data is fed via MDS to subscribing downstream systems. I’ll cover areas such as the MDS staging process, business rules and notifications amongst others.

I’ve also seen from Simon Sabin that the SQLCAT team will be showing some of the features of SQL 11, codename Denali. Well worth going for that alone, surely?!

You can register for SQL Bits here.

Master Data Services Learning Resources

Back to a bit of Master Data Services this week – a few learning resources for MDS have appeared here and there over the past couple of months. I thought that it would be worth rounding up the ones that I’ve noticed:

  • Installing and Configuring Master Data Services – Starting right at the beginning, this video is useful if you haven’t installed the product yet. It takes everything step by step, showing you how to do the basic install then configure the database and website.
  • Implementing MDM Using Master Data Services – This is one of the recent videos from 24 Hours of PASS, which are now available online.
  • Microsoft E-Learning Master Data Services – One of the free clinics available on the MS E-Learning site. The content delivery is a good mix between articles, video tutorials and self tests, so really it’s a good couple of hours of decent content, covering everything from business rules to subscription views.
  • SQL Server 2008 R2 Labs – There are hands-on labs for all parts of R2 here, and the MDS one is definitely worth taking a look at.
  • Analysis Services Dimension Management with Master Data Services – A video from the recent Tech Ed conference in New Orleans. Another one that’s really worth watching as it covers how MDS can be utilised in conjunction with Analysis Services.

Analysis Services KPIs and Reporting Services 2008 R2

There was a comment on my last blog post from Peter Eb asking whether Reporting Services 2008 R2 knows how to render SSAS KPIs automatically. Unfortunately the short answer is that it doesn't, but only a few steps are needed to get it working.

For this blog post I've built a simple report that uses the Adventure Works DW 2008 R2 Analysis Services database as a data source. The first step is to build a data set that contains a query that references the Month attribute and the Channel Revenue KPI:

[Image: dataset query referencing the Month attribute and the Channel Revenue KPI]

I'm interested in reporting on the status and trend part of the KPI, so I've added the month and value parts of the KPI, plus blank columns for the status and the trend:

[Image: report layout with month and value columns, plus blank status and trend columns]

For the status and the trend columns, the next step is then to drag on (or right click and insert) an indicator in each of the cells. This is when the manual steps come in:

  1. Right click on the indicator to open the properties window;
  2. Set the Value of the indicator to be the Status part of the Analysis Services KPI;
  3. Change the States Measurement Unit to Numeric;
  4. Configure the Start and End properties of the indicator states so that they cover the values -1, 0 and 1. SSAS KPIs can have an unlimited number of status values, so it may be necessary to add more states here if SSAS returns more than 3 values for the Status.

The changes that are needed to the indicator properties window are shown below:

[Image: the indicator properties window with the configured states]

Applying the same method to the Trend part of the KPI produces the following simple report:

[Image: the finished report showing the KPI value, status and trend]

This post wasn't really supposed to compare SSRS vs PerformancePoint Services, but I've found that PerformancePoint Services does pick up the SSAS KPI metadata (with only a slight mismatch on the colours), saving you from having to set up the indicator yourself:

[Image: the same KPI rendered automatically by PerformancePoint Services]

It's a shame that the KPIs aren't picked up automatically by SSRS, but then again the above process isn't too difficult to master.

Reporting Services KPIs Or PerformancePoint Services KPIs?

Although you've always been able to deliver KPIs in Reporting Services by using images and expressions, Reporting Services in SQL Server 2008 R2 now includes a feature called Indicators, which assists with producing KPIs out of the box.

This got me thinking how producing KPIs and/or scorecards in Reporting Services compares to doing the same in PerformancePoint Services. This blog post gives a short overview of creating KPIs in each technology, before comparing the pros and cons of each.

Reporting Services 2008 R2 Indicators

The indicators in Reporting Services come as an extra item in your toolbox, and can be dragged onto a Tablix, for example. Once you add the indicator to your report, you'll be prompted to select the shape that you want for your indicator:

[Image: the indicator shape selection dialog]

So no more messing about with images! Once you click OK your indicator will be added to your report. If you then right click to open the properties of the indicator, you can choose what to base the indicator's value on, and also set its states, both of which essentially control the points at which your indicator changes its colour and shape. What I quite like here is that it's easy to change the colour for each state, and it's also easy to add new states, if you need four colours rather than three, for example:

[Image: the indicator value and states properties]

You seem to be able to add a huge number of indicator states here, certainly more than enough to deal with KPIs. Taking some data from AdventureWorks, based around employee sales targets, I've created the following very basic report using the Indicators:

[Image: a basic report of employee sales targets using Indicators]

PerformancePoint Services KPIs

KPIs and Scorecards have had a few decent changes in SharePoint 2010 (e.g. drill up/down), but Dashboard Designer is still the tool that's used to create KPIs in PerformancePoint Services, and it's a different experience to developing in Visual Studio.

Assuming that you get to the point where you have a data source set up, the first thing to do in order to create a KPI is to pick an indicator. The choice here is very similar to the choice in SSRS:

[Image: the PerformancePoint indicator selection]

The alternative here is to pick a blank indicator, where you'll have to decide yourself how many levels you want for your indicator, and what the colours and shapes should be. You can pick an image for an indicator state (as you can with SSRS), although you are limited to a maximum of 10 states. Again, that should be more than enough for KPIs.

The objects within PerformancePoint Services are very modularised, meaning that you create indicators, KPIs and scorecards separately. If you need to re-use an indicator in a new KPI, then you can just point to an existing indicator, or alternatively you can create a new one.

Again, taking similar sample data from AdventureWorks, I've produced the following basic scorecard:

[Image: a basic PerformancePoint scorecard]

Comparison

The following comparison lists some KPI/Scorecard features and looks at how PPS and SSRS deal with each:

Parent KPIs

  • PerformancePoint Services: out of the box parent KPIs.
  • Reporting Services 2008 R2: no built-in parent KPIs.

Scoring

  • PerformancePoint Services: two built-in scoring methods, plus a weighting concept to assist with scoring.
  • Reporting Services 2008 R2: no built-in scoring capability.

Data Sources

  • PerformancePoint Services: Analysis Services; SQL Server; SharePoint List; Excel.
  • Reporting Services 2008 R2: SQL Server; Analysis Services; Oracle; OLE DB; ODBC; XML; SharePoint List; SAP NetWeaver BI; Teradata; Essbase.

Positioning

  • PerformancePoint Services: KPIs can only exist within Scorecards, and a scorecard can only contain KPIs and their associated Indicator and data items.
  • Reporting Services 2008 R2: Indicators can be embedded in other objects or exist by themselves, and other objects (e.g. spark lines) can co-exist alongside Indicators in the same Tablix.

Formatting

  • PerformancePoint Services: colours and number formats of the Scorecard grid can be changed, but colours/fonts in data cells cannot be changed.
  • Reporting Services 2008 R2: if the Indicator is embedded in a Tablix, then each cell can be altered in terms of colours and number formats.

Ease of Use

  • PerformancePoint Services: as long as your data is in good shape, it's possible for power users to build KPIs and Scorecards with Dashboard Designer relatively easily.
  • Reporting Services 2008 R2: the indicator windows are very intuitive, but I'd say that building an SSRS report is harder for a power user than building a PPS dashboard.

Other functionality

  • PerformancePoint Services: Scorecards allow users to enter comments against KPI values, which is a great feature.
  • Reporting Services 2008 R2: the standard SSRS expressions allow you to fine-tune the indicators, states, values and formatting of your KPIs.

In summary, PerformancePoint Services in SharePoint 2010 will allow you to get some great looking KPIs and Scorecards up and running much quicker, as that's what it's designed to do. It is, however, more restrictive than Reporting Services, so SSRS is the better choice if your KPI requirements are complex in terms of layout or formatting.

MDM White Papers

With the release of SQL Server 2008 R2 nearly upon us, it's a safe bet that the number of technical articles for MDS will start to increase a bit. In the meantime, I felt it was worth mentioning a few older MDM White Papers that may come in useful:

Allocations in PowerPivot Using DAX

Although I didn't mention it, the inspiration for my last post was the excellent MDX Solutions 2nd Edition. Whilst flicking through the book a while back, I thought it would be interesting to see how DAX and PowerPivot could handle the so-called 'Common Calculations and Selections in MDX'.

This post continues that theme, focusing on one calculation in particular from the aforementioned MDX book, namely 'Unweighted Allocations down the Hierarchy'. Essentially this business problem deals with allocating data that is entered at a higher level (e.g. Quarter) down to a lower level (e.g. Day). It varies from business to business what the users actually want to see at the day level in this situation. Some expect to see the Quarter amount, some want to see a blank value, and some users want the amount in the quarters allocated down to the day level.

If the expectation is that data should be allocated, then one way of doing the allocation is to use a ratio. In the MDX book, this is achieved by the following MDX:

1.0 / 
Descendants(
            Ancestor(
                    [Date].[Calendar].CurrentMember,
                    [Date].[Calendar].[Calendar Quarter]
                    ),
            [Date].[Calendar].CurrentMember.Level,
            SELF
            ).Count

If we put this into a query and run it against the Calendar hierarchy in AdventureWorksDW we get the following results:

[Image: query results showing the ratio at each level of the Calendar hierarchy]

Therefore our quarter-entered value can simply be multiplied by this ratio to get the correct value at each level in the Calendar hierarchy.

PowerPivot

So the challenge is how to be able to do this in PowerPivot. I started off by looking for some data in AdventureWorksDW that would fit the bill. A good candidate seems to be the FactSalesQuota table, which I imported into PowerPivot along with its related dimensions:

[Image: the FactSalesQuota table and its related dimensions in PowerPivot]

Once in the pivot table, I wanted to be able to calculate the correct ratio at the daily and monthly levels. For the daily level this is 1/90, as there are 90 days in Q1 of that year, and for the monthly level it is 1/3, as there are 3 months in the quarter.

Given that there's no MDX-style Ancestor() function in DAX, I ended up having to use a different calculation at each level. Something that I learnt from this forum post is that you need to use IF() in order to isolate a calculation at a particular level. Therefore the DAX that I came up with was:

=IF( COUNTROWS( VALUES( 'DimDate'[FullDateAlternateKey] ) ) = 1,
     1 / CALCULATE( COUNTROWS( 'DimDate' ), ALL( 'DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName] ) ),
     1 / CALCULATE( COUNTROWS( DISTINCT( 'DimDate'[EnglishMonthName] ) ), ALL( 'DimDate'[EnglishMonthName] ) )
   )

The COUNTROWS(VALUES('DimDate'[FullDateAlternateKey])) = 1 test allows me to check whether we're at the day level; if we are, the number of days in the quarter is counted. Otherwise, at the month level, the number of months in the current quarter is calculated. When this DAX is put into a new measure we get the following results:

[Image: pivot table showing the ratio measure at the day and month levels]

This produces the same results as the MDX statement. The next step was then to apply this ratio to the Sales Amount Quota for the current quarter. This was achieved using the following DAX:

=IF( COUNTROWS( VALUES( 'DimDate'[FullDateAlternateKey] ) ) < 90,
     CALCULATE( SUM( 'FactSalesQuota'[SalesAmountQuota] ),
                ALL( 'DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName] ) ) * 'FactSalesQuota'[Ratio],
     SUM( 'FactSalesQuota'[SalesAmountQuota] )
   )

The < 90 test at the start of the statement allows me to check whether we're below the quarter level, i.e. at the day or month level. If we're at either of these levels, then we want to pick up the quarter-level SalesAmountQuota and multiply it by the ratio. Otherwise, at the Quarter and Year levels, we just apply the original SalesAmountQuota value. This produces the following results:

[Image: pivot table showing the allocated Sales Amount Quota at all levels]

Conclusion

The MDX Scope statement would have been nice here to pinpoint exactly where I wanted the calculation to occur, but then again, the use of IF() keeps things quite simple, which is better for an Excel power user. Whilst MDX is very powerful, can you imagine a business user getting to grips with MDX statements such as Scope or Freeze?

The bottom line is that the DAX calculation produces exactly what I want - the new AllocatedSalesQuota measure is correct at all levels in the Calendar hierarchy. Even with an IF() statement, like everything else in PowerPivot, the calculation performs very quickly.

DAX Closing Balances

One of the often required calculations for users in reports is to be able to calculate a point-in-time closing balance, which is especially relevant when dealing with stock levels or financial data.

In the Analysis Services world, at least in the Enterprise Edition, we're lucky to have a variety of aggregation functions that can deal with semi-additive measures such as a closing balance. Or we can always turn to MDX, which will achieve the same thing.

In PowerPivot, we don't have Aggregation Functions, but there is DAX, which has a variety of time intelligence functions up its sleeve. The idea behind this post is to see how PowerPivot would deal with the requirement of reporting a closing balance measure.

DAX Time Intelligence Functions

One of the DAX functions that we can use for our closing balance is called ClosingBalanceMonth(), which will simply evaluate an expression at the end of a given month. There's also the similar ClosingBalanceQuarter() and ClosingBalanceYear().

Having noticed these functions within PowerPivot, my first port of call was to set up a simple PowerPivot model by taking data from the FactInternetSales table in AdventureWorksDW, plus all its related dimensions of course. What I ended up with was the following simple pivot table in Excel 2010:

[Image: a simple pivot table based on FactInternetSales]

Sales Amount obviously isn't a semi-additive measure, but I'm treating it as one in this example. My first attempt was to use ClosingBalanceMonth() to get the value for the month to be equal to the value for the last day in the month. I came up with the following formula:

=CLOSINGBALANCEMONTH(Sum([SalesAmount]), 'DimDate'[FullDateAlternateKey], ALL('DimDate'[FullDateAlternateKey]))

This produces the results that are shown below, in the measure called ClosingBalanceFormula:

[Image: results of the ClosingBalanceFormula measure]

It does a good job of calculating a closing balance for each month, but it presents that closing monthly balance on each day. Therefore, I started looking for an alternative approach and found the LastDate() function. It returns the last date that's in context for the passed date column. The last date that's in context at the month level will be the last date in the month, but thankfully the last date that's in context at the day level will be the current day. Therefore I came up with the following formula:

='FactInternetSales'[Sum of SalesAmount](LASTDATE('DimDate'[FullDateAlternateKey]))

This produced the results that I wanted - the daily closing balance at the day level and the monthly closing balance at the month level:

[Image: daily closing balances at the day level and monthly closing balances at the month level]

A bit more digging led me to find that, as an alternative, a combination of the Calculate() and LastDate() functions gets to the same result. That formula is:

=CALCULATE(Sum('FactInternetSales'[SalesAmount]), LASTDATE('DimDate'[FullDateAlternateKey]))

Conclusion

It's a shame that the ClosingBalanceMonth() formula didn't quite work, but perhaps getting that approach to work will come out in the wash with more DAX experience. However, as with MDX or any kind of expression language, there's always going to be a load of different ways to get to the same result. It's much more efficient to work knowing that you've got several ways around a particular problem, rather than being forced down a particular path, and in fact, I think the range of calculation functions available shows the richness of the DAX language.

PowerPivot and Data Warehousing

A consultant that I know has recently been through the process of reviewing his company's management information requirements, and also the process of evaluating what technology platform can deliver those requirements.

Naturally, a data warehouse was one of the options. A data warehouse, in my opinion, will provide the enterprise-level platform for any organisation to deliver their management information.

In this case though, due diligence flagged up some alternative approaches to a data warehouse, namely the raft of self service BI tools that are out there, including PowerPivot. Other tools that can take data in from a variety of data sources are Tableau (which is also a good front end for Analysis Services) and Spotfire for example.

I think that all of these tools have their merits. PowerPivot is incredibly fast and easy to use, whereas Tableau and Spotfire, for example, have some handy data visualisation capabilities. Some vendors actually claim that self service tools can replace a data warehouse. Can this really be the case? While these tools do a good job of presenting data in a dimensional model, my personal view is that a data warehouse also does this, but in addition delivers the following benefits, to name a few:

  • Data Cleansing - The power of SQL in the backend and also features like Fuzzy Matching in SSIS mean that you can address the huge problem of data quality. The bonus here is that all data quality issues could be logged, meaning that you can go back to the owners of source systems and let them know that their data is in a mess.
  • Control - If you do encounter data quality issues, you may decide that they are actually too bad to load into the data warehouse and present to your users.
  • Transparency - if you want, you can choose to expose the data quality level of your facts to users, via the Kimball-style audit dimension. E.g. we may have noticed that within our retail data, one store has abnormally low sales. Therefore, we can assign this fact a lower data quality score than the other sales by store for the day, which lets the users know that the data quality steward is aware of the issue, and is looking into it.
  • Slowly Changing Dimensions - A lot of source systems don't behave. E.g. if we're tracking a customer's Marital Status of 'single' or 'married', you often find that source systems only record the latest change to a given field. Without addressing slowly changing dimension issues in a data warehouse, we may never know that Customer X is now single, but once was married.
  • Conformed Dimensions - When we have multiple data sources for a single dimension, the goal is to ensure that each dimension available to end users represents a single, consistent view of that dimension. Due to the often complex matching and de-duplication that's required, it's difficult to see how this would be possible without a data warehouse.
  • Late Arriving Dimension Rows - Going back to the 'Marital Status' attribute, we can get the situation where a source system only updates a particular field a long time after the event actually occurred. E.g. we might think that a customer called John Doe is single, but then today we're told that he's married, and has actually been married for the last two years. If Marital Status is an important piece of management information to the business, then we need to update all data associated with John Doe over the last two years, in order to reflect his correct Marital Status.
  • Complex Transformations - I recently encountered a source system where one of the columns in a SQL table held data in a varying-length format of ‘#Product Type#Currency#Value, #Product Type#Currency#Value’. So what you basically have here is potentially, say, 25 records held in one row of a SQL table, whereas another row might hold just 7 records. Rows could also be badly formed! We ended up using a bit of C# within SSIS to split this data out so that the sub-elements could be analysed correctly - a rough T-SQL sketch of this kind of split appears after this list.
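For illustration only, here's a hedged T-SQL sketch of that kind of split (our actual solution used C# in SSIS, and the sample data and names below are invented). It shreds the packed string into one row per triplet via XML, then uses PARSENAME to pull out the three parts:

WITH SourceRows AS (
    -- Invented sample: triplets packed into one column, comma separated
    SELECT 1 AS RowId, '#Bikes#GBP#100, #Helmets#USD#250' AS Packed
)
SELECT s.RowId,
       PARSENAME(t.Triplet, 3) AS ProductType,
       PARSENAME(t.Triplet, 2) AS Currency,
       PARSENAME(t.Triplet, 1) AS [Value]
FROM   SourceRows AS s
-- Convert the comma-separated list to XML so that each triplet becomes a node
CROSS APPLY (SELECT CAST('<i>' + REPLACE(s.Packed, ',', '</i><i>') + '</i>' AS xml) AS Doc) AS x
-- One row per triplet node
CROSS APPLY x.Doc.nodes('/i') AS n(i)
-- Strip the leading '#' and swap the remaining '#'s for '.'s, so that
-- PARSENAME can split the parts (assumes the data itself contains no '.')
CROSS APPLY (SELECT REPLACE(STUFF(LTRIM(n.i.value('.', 'varchar(200)')), 1, 1, ''), '#', '.') AS Triplet) AS t;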

I could go on, but the above will do as a few simple examples.

Whilst I'm an advocate of data warehousing, I also think that the self service tools, such as PowerPivot, can be complementary to a data warehouse. You'll often find, for example, that there's some data that's not in the data warehouse yet - after all, some organisations have a lot of data, and it will take time before all the business analysis etc. can be carried out to get this data into the warehouse. Another example is prototyping: analysts can very quickly build a prototype model, which can then be productionised at a later stage. I think this video from Chris Webb is really worth watching - it covers these examples and a few more.

In summary there's always going to be a need for users to go digging for data and producing their own reports - the data warehouse can't always deliver everything immediately. PowerPivot is going to be great in this situation, but it will work best when it's used alongside cleaned and conformed data, which is exactly what a data warehouse provides.