Adatis BI Blogs

Allocations in PowerPivot Using DAX

Although I didn't mention it, the inspiration for my last post was the excellent MDX Solutions 2nd Edition. Whilst flicking through the book a while back, I thought it would be interesting to see how DAX and PowerPivot could handle the so-called 'Common Calculations and Selections in MDX'. This post continues that theme, focusing on one calculation in particular from the aforementioned MDX book, namely 'Unweighted Allocations down the Hierarchy'.

Essentially this business problem deals with allocating data that is entered at a higher level (e.g. Quarter) down to a lower level (e.g. Day). What users actually want to see at the day level in this situation varies from business to business: some expect to see the Quarter amount, some want to see a blank value, and some want the amount in the quarters allocated down to the day level. If the expectation is that data should be allocated, then one way of doing the allocation is to use a ratio. In the MDX book, this is achieved by the following MDX:

1.0 / Descendants(
    Ancestor([Date].[Calendar].CurrentMember, [Date].[Calendar].[Calendar Quarter]),
    [Date].[Calendar].CurrentMember.Level, SELF
).Count

If we put this into a query and run it against the Calendar hierarchy in AdventureWorksDW, we get the ratio at each level. Our quarter-entered value can then simply be multiplied by this ratio to get the correct value at each level in the Calendar hierarchy.

PowerPivot

So the challenge is how to do this in PowerPivot. I started off by looking for some data in AdventureWorksDW that would fit the bill. A good candidate is the FactSalesQuota table, which I imported into PowerPivot along with its related dimensions. Once in the pivot table, I wanted to be able to calculate the correct ratio at the daily and monthly levels. For the daily level this is 1/90, as there are 90 days in the Q1 of that year, and for the month level it is 1/3, as there are 3 months in the quarter.
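As a sanity check on the arithmetic, here is a minimal Python sketch of the unweighted allocation idea (the 90-day/3-month quarter matches the example above; the quota figure is made up for illustration):

```python
def unweighted_ratio(member_count):
    """Unweighted allocation: every member at a level receives an
    equal share of the ancestor quarter's value."""
    return 1.0 / member_count

# Q1 in the example has 90 days and 3 months
day_ratio = unweighted_ratio(90)    # 1/90 at the day level
month_ratio = unweighted_ratio(3)   # 1/3 at the month level

# Allocating a hypothetical quarter quota down a level and re-summing
# recovers the original quarter total
quarter_quota = 90000.0
assert abs(quarter_quota * day_ratio * 90 - quarter_quota) < 1e-6
assert abs(quarter_quota * month_ratio * 3 - quarter_quota) < 1e-6
```

The key property of an unweighted allocation is exactly this: the shares are equal, and they sum back to the entered amount.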
Given that there's no MDX-style Ancestor() function in DAX, I ended up having to have a different calculation at different levels. Something that I learnt from this forum post is that you need to use IF() in order to isolate a calculation at a particular level. Therefore the DAX that I came up with was:

=IF(
    COUNTROWS(VALUES('DimDate'[FullDateAlternateKey])) = 1,
    1 / CALCULATE(COUNTROWS('DimDate'), ALL('DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName])),
    1 / CALCULATE(COUNTROWS(DISTINCT('DimDate'[EnglishMonthName])), ALL('DimDate'[EnglishMonthName]))
)

The COUNTROWS(VALUES('DimDate'[FullDateAlternateKey])) = 1 test allows me to check that we're at the day level, and if we are, we count the number of days in the quarter. Otherwise, if we're at the month level, we calculate the number of months in the current quarter. When this DAX is put into a new measure it produces the same results as the MDX statement.

The next step was then to apply this ratio to the Sales Amount Quota for the current quarter. This was achieved using the following DAX:

=IF(
    COUNTROWS(VALUES('DimDate'[FullDateAlternateKey])) < 90,
    CALCULATE(SUM('FactSalesQuota'[SalesAmountQuota]), ALL('DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName])) * 'FactSalesQuota'[Ratio],
    SUM('FactSalesQuota'[SalesAmountQuota])
)

The < 90 test at the start of the statement allows me to check whether we're at the day level or the month level. If we're at either of these levels, then we want to pick up the Quarter-level SalesAmountQuota and multiply it by the ratio. Otherwise, at the Quarter and Year levels, we just use the original SalesAmountQuota value.

Conclusion

The use of the MDX Scope statement would have been nice to pinpoint exactly where I wanted the calculation to occur, but then again, the use of IF() keeps it quite simple, which is better for an Excel power user.
Whilst MDX is very powerful, can you imagine a business user getting to grips with MDX statements such as Scope or Freeze? The bottom line is that the DAX calculation produces exactly what I want: the new AllocatedSalesQuota measure is correct at all levels in the Calendar hierarchy. Even with an IF() statement, like everything else in PowerPivot, the calculation performs very quickly.
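For readers who want to see the level-detection logic outside of DAX, here is a rough Python emulation of the two measures above, using a toy 90-day quarter as the date table (hypothetical data; the structure mirrors the two IF() checks, not the DAX engine itself):

```python
from datetime import date, timedelta

# Toy date table: one quarter of 90 days across 3 months (hypothetical data)
quarter_dates = [date(2003, 1, 1) + timedelta(days=i) for i in range(90)]
quarter_quota = 90000.0  # hypothetical quarter-entered quota

def ratio(dates_in_context):
    """Mirrors the first measure: a single visible date means day level
    (1 / days in quarter); otherwise month level (1 / months in quarter)."""
    if len(dates_in_context) == 1:
        return 1.0 / len(quarter_dates)
    return 1.0 / len({d.month for d in quarter_dates})

def allocated_quota(dates_in_context):
    """Mirrors the second measure: below quarter level, take the
    quarter-level total times the ratio; at quarter level, the raw total."""
    if len(dates_in_context) < len(quarter_dates):
        return quarter_quota * ratio(dates_in_context)
    return quarter_quota

january = [d for d in quarter_dates if d.month == 1]
assert allocated_quota([quarter_dates[0]]) == quarter_quota / 90  # one day
assert allocated_quota(january) == quarter_quota / 3              # one month
assert allocated_quota(quarter_dates) == quarter_quota            # quarter
```

Note that, as in the DAX, a month is recognised simply by having more than one date but fewer than 90 in context.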

DAX Closing Balances

One of the calculations often required by users in reports is a point-in-time closing balance, which is especially relevant when dealing with stock levels or financial data. In the Analysis Services world, at least in the Enterprise Edition, we're lucky to have a variety of Aggregation Functions that can deal with semi-additive measures such as a closing balance; we can also always turn to MDX, which will achieve the same thing. In PowerPivot we don't have Aggregation Functions, but there is DAX, which has a variety of time intelligence functions up its sleeve. The idea behind this post is to see how PowerPivot deals with the requirement of reporting a closing balance measure.

DAX Time Intelligence Functions

One of the DAX functions that we can use for our closing balance is called ClosingBalanceMonth(), which simply evaluates an expression at the end of a given month. There are also the similar ClosingBalanceQuarter() and ClosingBalanceYear(). Having noticed these functions within PowerPivot, my first port of call was to set up a simple PowerPivot model by taking data from the FactInternetSales table in AdventureWorksDW, plus all its related dimensions of course, ending up with a simple pivot table in Excel 2010. Sales Amount obviously isn't a semi-additive measure, but I'm treating it as one in this example. I then used ClosingBalanceMonth() to try to get the value for the month to be equal to the value for the last day in the month, and managed to come up with the following formula:

=CLOSINGBALANCEMONTH(SUM([SalesAmount]), 'DimDate'[FullDateAlternateKey], ALL('DimDate'[FullDateAlternateKey]))

The resulting measure, ClosingBalanceFormula, does a good job of calculating a closing balance for each month, but it presents that closing monthly balance on each day.
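A sketch of what a monthly closing balance means, in plain Python over a small made-up balance series (the dates and amounts are illustrative only):

```python
from datetime import date

# Hypothetical daily balances
balances = {
    date(2003, 1, 30): 110.0,
    date(2003, 1, 31): 120.0,
    date(2003, 2, 27): 95.0,
    date(2003, 2, 28): 100.0,
}

def closing_balance_month(balances, year, month):
    """The month's closing balance is the balance on the last recorded
    day of that month, not the sum over the days (semi-additive)."""
    month_days = [d for d in balances if (d.year, d.month) == (year, month)]
    return balances[max(month_days)]

assert closing_balance_month(balances, 2003, 1) == 120.0
assert closing_balance_month(balances, 2003, 2) == 100.0
```

The point is that the month total ignores every day but the last one, which is exactly why a plain SUM is the wrong aggregation here.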
Therefore, I started looking for an alternative approach and found the LastDate() function. It returns the last date that's in context for the passed date column. The last date in context at the month level will be the last date in the month, but thankfully the last date in context at the day level will be the current day. I therefore came up with the following formula:

='FactInternetSales'[Sum of SalesAmount](LASTDATE('DimDate'[FullDateAlternateKey]))

This produced the results that I wanted: the daily closing balance at the day level and the monthly closing balance at the month level. A bit more digging led me to find that a combination of the Calculate() and LastDate() functions gets to the same result, as an alternative. That formula is:

=CALCULATE(SUM('FactInternetSales'[SalesAmount]), LASTDATE('DimDate'[FullDateAlternateKey]))

Conclusion

It's a shame that the ClosingBalanceMonth() formula didn't quite work, but perhaps getting that approach to work will come out in the wash with more DAX experience. However, as with MDX or any kind of expression language, there are always going to be a load of different ways to get to the same result. It's much more efficient to work knowing that you've got several ways around a particular problem, rather than being forced down a particular path, and in fact I think the range of calculation functions available shows the richness of the DAX language.
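The "last date in context" behaviour that makes the LastDate() approach work can be shown with a tiny Python sketch (illustrative stand-in, not the DAX engine):

```python
from datetime import date

def last_date(dates_in_context):
    """LASTDATE-style behaviour: return the last date of whatever set
    of dates is in the current filter context."""
    return max(dates_in_context)

january = [date(2003, 1, d) for d in range(1, 32)]

# At month grain the context holds the whole month, so we get the month end;
# at day grain the context holds a single day, so we get that day back.
assert last_date(january) == date(2003, 1, 31)
assert last_date([date(2003, 1, 15)]) == date(2003, 1, 15)
```

That is why one formula covers both grains: evaluating the sum over LASTDATE's result naturally yields the month-end balance at month level and the day's own balance at day level.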

PowerPivot and Data Warehousing

A consultant that I know has recently been through the process of reviewing his company's management information requirements, and also the process of evaluating what technology platform can deliver those requirements. Naturally, a data warehouse was one of the options. A data warehouse, in my opinion, will provide the enterprise-level platform for any organisation to deliver their management information. In this case though, due diligence flagged up some alternative approaches to a data warehouse, namely the raft of self-service BI tools that are out there, including PowerPivot. Other tools that can take data in from a variety of data sources are Tableau (which is also a good front end for Analysis Services) and Spotfire, for example. All of these tools have their merits: PowerPivot is incredibly fast and easy to use, whereas Tableau and Spotfire have some handy data visualisation capabilities.

Some vendors actually claim that self-service tools can replace a data warehouse. Can this really be the case? Whilst these tools do a good job of presenting data in a dimensional model, my personal view is that a data warehouse of course also carries this out, but in addition delivers the following benefits, to name a few:

Data Cleansing - The power of SQL in the backend, and features like Fuzzy Matching in SSIS, mean that you can address the huge problem of data quality. The bonus here is that all data quality issues can be logged, meaning that you can go back to the owners of source systems and let them know that their data is in a mess.

Control - If you do encounter data quality issues, you may decide that they are actually too bad to load into the data warehouse and present to your users.

Transparency - If you want, you can choose to expose the data quality level of your facts to users, via the Kimball-style audit dimension. E.g. we may have noticed that within our retail data, one store has abnormally low sales. We can therefore assign this fact a lower data quality score than the other sales by store for the day, which lets the users know that the data quality steward is aware of the issue and is looking into it.

Slowly Changing Dimensions - A lot of source systems don't behave. E.g. if we're tracking a customer's Marital Status of 'single' or 'married', you often find that source systems only record the latest change to a given field. Without addressing slowly changing dimension issues in a data warehouse, we may never know that Customer X is now single, but was once married.

Conformed Dimensions - When we have multiple data sources for a single dimension, the goal is to ensure that each dimension available to end users represents a single, consistent view of that dimension. Due to the often complex matching and de-duplication that's required, it's difficult to see how this would be possible without a data warehouse.

Late Arriving Dimension Rows - Going back to the 'Marital Status' attribute, we can get the situation where a source system only updates a particular field a long time after the event actually occurred. E.g. we may think that a customer called John Doe is single, but then today we're told he's married, and has actually been married for the last two years. If Marital Status is an important piece of management information to the business, then we need to update all data associated with John Doe over the last two years, in order to reflect his correct Marital Status over that period.

Complex Transformations - I recently encountered a source system where one of the columns in a SQL table held data in a varying-length format of '#Product Type#Currency#Value, #Product Type#Currency#Value'. So one row in a SQL table might hold, say, 25 records, whereas another row might hold just 7. Rows could also be badly formed! We ended up using a bit of C# within SSIS to split this data out so that the sub-elements could be analysed correctly.

I could go on, but I think the above will do as a few simple examples. Whilst I'm an advocate of data warehousing, I also think that self-service tools, such as PowerPivot, can be complementary to a data warehouse. You'll often find, for example, that there's some data that's not in the data warehouse yet; after all, some organisations have a lot of data, and it will take time before all the business analysis etc. can be carried out to get this data into the warehouse. Another example is prototyping: analysts can very quickly build a prototype model, which can then be productionised at a later stage. I think this video from Chris Webb is really worth watching; it covers these examples and a few more. In summary, there's always going to be a need for users to go digging for data and producing their own reports; the data warehouse can't always deliver everything immediately. PowerPivot is going to be great in this situation, but it will work best when it's used alongside cleaned and conformed data, which is exactly what a data warehouse provides.
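To make the 'Complex Transformations' example above concrete, here is a rough Python equivalent of the kind of split we did in C# within SSIS (the field layout is as described in the post; the sample row and product names are made up):

```python
def split_packed_row(row_value):
    """Splits a '#Product Type#Currency#Value, #Product Type#Currency#Value'
    style column into individual records, skipping badly formed chunks."""
    records = []
    for chunk in row_value.split(','):
        parts = chunk.strip().lstrip('#').split('#')
        if len(parts) != 3:
            continue  # badly formed chunk: a real load would log this
        product_type, currency, value = parts
        records.append((product_type, currency, float(value)))
    return records

# One physical row holding three well-formed logical records plus one
# malformed chunk (missing its value)
row = '#Bond#GBP#100.50, #Equity#USD#2500.00, #Swap#EUR, #Cash#JPY#900'
assert split_packed_row(row) == [
    ('Bond', 'GBP', 100.5),
    ('Equity', 'USD', 2500.0),
    ('Cash', 'JPY', 900.0),
]
```

The same shape of logic (split, validate, emit one output row per sub-element) is what the SSIS script component implemented.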

PowerPivot CTP3 Released

Following last week's release of the November CTP for SQL Server 2008 R2, the latest CTP for PowerPivot has now also been released! The links for the downloads are:

32 Bit PowerPivot Download
64 Bit PowerPivot Download
PowerPivot Main Download Page

Included in the new CTP3 features are several new DAX functions, especially quite a few around time intelligence. Some look quite comparable to MDX, whereas others possibly give even more functionality than MDX. This is something I hope to blog about soon.

Creating a Custom Gemini/PowerPivot Data Feed – Method 1 – ADO.NET Data Services

There are already a few good Gemini/PowerPivot blogs that provide an introduction to what it is and does, so there is no need for repetition here. What I haven't seen are examples of how existing investments can be harnessed for Gemini/PowerPivot-based self-service analytics. This series of posts focuses on various ways of creating custom data feeds that can be used by Gemini/PowerPivot natively; providing a direct feed from otherwise closed systems opens up new channels of analytics to the end user.

Gemini/PowerPivot supports reading data from Atom-based data feeds, and this post looks at a quick way of creating an Atom-based feed that can be consumed by Gemini/PowerPivot. By far the simplest way to develop an Atom-based data feed is to employ ADO.NET Data Services in conjunction with the ADO.NET Entity Framework. With very few (in fact one and a bit!) lines of code, a data source can be exposed as a feed that Gemini/PowerPivot can read natively. I am going to use the AdventureWorksDW sample hosted by a SQL Server 2008 R2 instance for this. Obviously Gemini/PowerPivot natively reads SQL Server databases, so creating a custom feed over the top may seem a little pointless; however, this technique may be useful for quick wins in several scenarios, including:

- Preventing the need for users to connect directly to the underlying data source.
- Restricting access to various elements of the data source (tables/columns etc.)
- Applying simple business logic to raw data.

ADO.NET Data Services are a form of Windows Communication Foundation (WCF) service, and can therefore be hosted in various environments. Here, I will simply host the ADO.NET Data Service inside an ASP.NET site.
To create a native Gemini/PowerPivot feed, you take seven steps:

1 - Create ASP.NET Web Application
2 - Create Entity Data Model
3 - Create the Schema
4 - Create the Data Service
5 - Load From Data Feed
6 - Create Relationships
7 - Test

Step 1) Create ASP.NET Web Application

I'm using Visual Studio 2008 here to create an ASP.NET Web Application.

Step 2) Create Entity Data Model

Add an ADO.NET Entity Data Model item to the project. These files have a .edmx extension and allow us to create a schema that maps to the underlying database objects.

Step 3) Create the Schema

We simply require a 1:1 mapping, so will 'Generate from Database'. Incidentally, the 'Empty Model' option allows you to build a conceptual model of the database, resulting in custom classes that can optionally be mapped to the database objects later.

Create a Microsoft SQL Server connection to AdventureWorksDW2008, then select the appropriate database objects. I've selected the following tables:

- DimCurrency
- DimCustomer
- DimDate
- DimProduct
- DimPromotion
- DimSalesTerritory
- FactInternetSales

Once the wizard has completed, a new .edmx file and associated .cs file are created that respectively contain an Entity Relationship Diagram and a set of auto-generated classes that represent the database objects.

Due to the way the Entity Framework handles Foreign Key Constraints, we have to apply a workaround to ensure the Foreign Keys on the FactInternetSales table are exposed and brought into Gemini/PowerPivot. A previous post, Exposing Foreign Keys as Properties through ADO.NET Entity Framework, walks through the workaround.

Step 4) Create the Data Service

Add an ADO.NET Data Service item to the project. The service class inherits from a generic version of the System.Data.Services.DataService object, so we need to inform the compiler what class to base the generic object on. We essentially want to base our Data Service on the class representing our newly created Entity Data Model.
The class name is derived from the database name, unless changed when the Entity Data Model was created, so in our case the class name is AdventureWorksDW2008Entities. The auto-generated service class contains a 'TODO' comment that asks you to 'put your data source class name here'. The comment needs replacing with AdventureWorksDW2008Entities.

The final step is to expose the resources in the Entity Data Model. For security reasons, a data service does not expose any resources by default; resources need to be explicitly enabled. To allow read-only access to the resources in the Entity Data Model, the InitializeService method needs updating with a single line of code. The code snippet below details the final class implementation; notice the AdventureWorksDW2008Entities reference in the class declaration and the explicit resource enablement in InitializeService.

public class GeminiDataService : DataService<AdventureWorksDW2008Entities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}

That's all that's needed. By default, ADO.NET Data Services conform to the Atom standard, so in theory the service is ready to be consumed by Gemini/PowerPivot. Before we try, it's worth giving the service a quick test: building and running the solution (F5) launches Internet Explorer, navigating to the service hosted by the ASP.NET Development Server. You are first presented with an XML document containing elements that represent database objects, and you can further drill into the objects by amending the URL.
For example, if you want to see the contents of the DimPromotion table then append DimPromotion to the end of the URL: http://localhost:56867/GeminiDataService.svc/DimPromotion (case sensitive).

Note: You may need to turn off Feed Reader View in Internet Explorer to see the raw XML (Tools -> Internet Options -> Content -> Settings -> Turn On Feed Reader View; make sure this is unchecked).

As a slight aside, the URL can be further enhanced to filter, return the top n rows, extract certain properties, etc. Here are a couple of examples:

http://localhost:56867/GeminiDataService.svc/DimCustomer?$top=5 - Returns the top 5 Customers
http://localhost:56867/GeminiDataService.svc/DimCustomer(11002) - Returns the Customer with id 11002
http://localhost:56867/GeminiDataService.svc/DimCustomer(11002)/FirstName - Returns the First Name of Customer 11002
http://localhost:56867/GeminiDataService.svc/DimProduct(310)?$expand=FactInternetSales - Returns the Product with id 310 and all related Internet Sales records

Confident that the feed is working, we can now deploy the service and start using the feed in Gemini/PowerPivot.

Step 5) Load From Data Feed

Open up Excel 2010 and launch the Gemini/PowerPivot client (by selecting 'Load & Prepare Data'). Select 'From Data Feed' from the 'Get External Data' section of the Gemini/PowerPivot Home ribbon to launch the Table Import Wizard. Specify the URL from the ADO.NET Data Services feed created earlier, in my case http://localhost:56867/GeminiDataService.svc, as the 'Data Feed Url' and click Next. Incidentally, you can use the majority of the enhanced URLs here too, for example to select only the DimProduct table should you so wish; however, by specifying the root URL for the service you have access to all objects exposed by the service. From the Table Import Wizard select the required tables; in my case I'll select them all. (You can optionally rename and filter the feed objects here too.)
Following the summary screen, the Gemini/PowerPivot client then gets to work importing the data from the ADO.NET Data Service. Once completed, Gemini/PowerPivot displays all the data from all of the feed objects as if it came directly from the underlying database.

Step 6) Create Relationships

There is one final step before we can test our model using an Excel pivot table: we need to create the relationships between the tables we have imported. The Gemini/PowerPivot client provides a simple, if a little onerous, way of creating relationships; the 'Create Relationship' action on the Relationships section of the Home ribbon launches the Create Relationship wizard. Each table needs relating back to the primary fact table.

Step 7) Test

We are now ready to start our analysis. Selecting PivotTable from the View section of the Gemini/PowerPivot client Home ribbon creates a pivot table in the underlying Excel workbook, attached to your custom-fed Gemini/PowerPivot data model.

So, to allow fast access through Gemini/PowerPivot to, for example, potentially sensitive data, you can quickly build a custom data feed that can be consumed natively by the Gemini/PowerPivot client data feed functionality.
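As a footnote, the query-option URLs used when testing the service compose mechanically from a handful of parts. A small hypothetical Python helper (not part of ADO.NET Data Services; just string assembly) shows the pattern:

```python
def odata_url(service_root, entity_set, key=None, prop=None, top=None, expand=None):
    """Assembles the kind of addressing URLs shown in this post:
    /EntitySet, /EntitySet(key), /EntitySet(key)/Property, ?$top=, ?$expand=."""
    url = service_root.rstrip('/') + '/' + entity_set
    if key is not None:
        url += '({0})'.format(key)      # entity key, e.g. DimCustomer(11002)
    if prop is not None:
        url += '/' + prop               # drill to a single property
    options = []
    if top is not None:
        options.append('$top={0}'.format(top))
    if expand is not None:
        options.append('$expand=' + expand)
    if options:
        url += '?' + '&'.join(options)
    return url

root = 'http://localhost:56867/GeminiDataService.svc'
assert odata_url(root, 'DimCustomer', top=5) == root + '/DimCustomer?$top=5'
assert odata_url(root, 'DimCustomer', key=11002, prop='FirstName') == \
    root + '/DimCustomer(11002)/FirstName'
assert odata_url(root, 'DimProduct', key=310, expand='FactInternetSales') == \
    root + '/DimProduct(310)?$expand=FactInternetSales'
```

Handy when scripting checks against the feed before pointing Gemini/PowerPivot at it.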

HACK: Exposing Foreign Keys as Properties through the ADO.NET Entity Framework

First post for months; the PerformancePoint Planning announcement forced some redirection and rebuilding. We've grieved, we've moaned, but at some point you just have to move on. ----------------- I'm not a fan of hacks: a hack normally means you are doing something wrong. But in this case, where I'm after a quick win, I've had to work out and resort to a bit of a hack. It actually looks like the issue I'm facing may be addressed in Entity Framework v2 (Microsoft .NET 4.0), so maybe it's more of a workaround than a hack after all ;o)

I'm using the ADO.NET Entity Framework and ADO.NET Data Services to expose a subsection of a database for consumption by Gemini. In order to relate the exposed database objects together in Gemini, I need to apply this hack to ensure I have Foreign Keys available in my Gemini models to support creating the relationships. By default, the Entity Framework exposes Foreign Keys as Navigation Properties rather than Scalar Properties, and Gemini does not consume Navigation Properties.

Let's take the scenario where I want to create an Entity Framework model based on the following tables from the AdventureWorksDW2008 sample database:

- FactInternetSales
- DimCustomer
- DimProduct
- DimSalesTerritory

Step 1) Identify the table(s) that contain Foreign Keys. In this case, FactInternetSales.

Step 2) Load those table(s) into the Entity Framework model on their own. This ensures the Foreign Keys are set as Scalar Properties. If you load in all the tables at once, the Foreign Keys are not exposed as Scalar Properties.

Step 3) Load in the related tables (DimCustomer, DimProduct, DimSalesTerritory). At this point a bunch of Navigation Properties will have been set up, along with relationships between the related tables, but the trouble now is that the project will no longer build.
If you try, you receive the following error for each relationship:

Error 3007: Problem in Mapping Fragments starting at lines 322, 428: Non-Primary-Key column(s) [CustomerKey] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified.

Step 4) Manually remove the relationships between tables. Clicking on the relationship line on the diagram and hitting delete removes the relationship.

Step 5) Remove all Association Sets. By editing the .edmx file manually in a text or XML editor, you need to remove all <AssociationSet>…</AssociationSet> occurrences from the <EntityContainer> section:

<EntityContainer Name="AdventureWorksDW2008Model1StoreContainer">
    <EntitySet Name="DimCustomer" EntityType="AdventureWorksDW2008Model1.Store.DimCustomer" … />
    <EntitySet Name="DimProduct" EntityType="AdventureWorksDW2008Model1.Store.DimProduct" … />
    <EntitySet Name="DimSalesTerritory" EntityType="AdventureWorksDW2008Model1.Store.DimSalesTerritory" … />
    <EntitySet Name="FactInternetSales" EntityType="AdventureWorksDW2008Model1.Store.FactInternetSales" … />
    <AssociationSet Name="FK_FactInternetSales_DimCustomer" Association="AWDW08.FK_FactInternetSales_DimCustomer">
        <End Role="DimCustomer" EntitySet="DimCustomer" />
        <End Role="FactInternetSales" EntitySet="FactInternetSales" />
    </AssociationSet>
    <AssociationSet Name="FK_FactInternetSales_DimProduct" Association="AWDW08.FK_FactInternetSales_DimProduct">
        <End Role="DimProduct" EntitySet="DimProduct" />
        <End Role="FactInternetSales" EntitySet="FactInternetSales" />
    </AssociationSet>
    <AssociationSet Name="FK_FactInternetSales_DimSalesTerritory" Association="AWDW08.FK_FactInternetSales_DimSalesTerritory">
        <End Role="DimSalesTerritory" EntitySet="DimSalesTerritory" />
        <End Role="FactInternetSales" EntitySet="FactInternetSales" />
    </AssociationSet>
</EntityContainer>

The project should now build, with the foreign keys exposed as Scalar Properties. Obviously no inherent relationships exist, so this could be dangerous in certain applications. For Gemini however, providing you set up the relationships manually, it works a treat.
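Step 5 can also be scripted rather than edited by hand. Here is a sketch in Python using ElementTree over a simplified, namespace-free stand-in for the <EntityContainer> section (a real .edmx carries XML namespaces and multiple sections, so treat this as the idea rather than a drop-in tool):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the <EntityContainer> section of an .edmx
edmx_fragment = """<EntityContainer Name="StoreContainer">
  <EntitySet Name="DimCustomer" />
  <EntitySet Name="FactInternetSales" />
  <AssociationSet Name="FK_FactInternetSales_DimCustomer">
    <End Role="DimCustomer" EntitySet="DimCustomer" />
    <End Role="FactInternetSales" EntitySet="FactInternetSales" />
  </AssociationSet>
</EntityContainer>"""

container = ET.fromstring(edmx_fragment)

# Remove every <AssociationSet>...</AssociationSet> child of the container
for assoc in container.findall('AssociationSet'):
    container.remove(assoc)

# Only the EntitySet elements remain
assert [child.tag for child in container] == ['EntitySet', 'EntitySet']
```

The same find-and-remove pass, pointed at the real file and its namespaces, saves repeating the manual deletion every time the model is regenerated.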