
Adatis BI Blogs

DAX Calculated Tables in Power BI

Calculated Tables have been around for a while in Power BI, but today I found a real-life scenario for using them. I connected to Manchester United’s Facebook page and plugged the data into Microsoft’s Cognitive Services. In essence, I want to measure Sentiment Analysis and find out how many times a supporter has mentioned one of Manchester United’s rival football teams. You are probably wondering what this has to do with a DAX Calculated Table, so let me explain. I have pulled down the Facebook data (from the API), but when trying to undertake a GROUP BY for Likes, Loves, etc. in the Query Editor, Power BI hangs and the query never resolves. Whilst I cannot pinpoint exactly why this happens, I would guess that the number of API calls to Facebook is exceeded and some form of timeout occurs. This blog will walk you through how to create a DAX Calculated Table and apply a Group By to get the count of reaction types. There are a number of articles already out there showing examples of a Calculated Table, and I have provided the links at the bottom of the post.

Existing Query

Currently, my query looks like the below. The only remaining task is to apply a COUNT of all records, GROUPED BY Reactions.Type and id. If I try to use the Query Editor functionality within the UI, the transformation step never completes; I am left with the following message in the bottom right-hand side of the Query Editor. After waiting two hours for the GROUP BY query to resolve, I gave up. The alternative is to use a DAX Calculated Table, and I will show you how I achieved this.

Calculated Table

In order to create a Calculated Table, come out of the Query Editor, navigate to the Modeling tab and select New Table. Now we can write some DAX. Pasting the below syntax into the new table will achieve the Group By on the ‘Reaction Man United’ query.
ReactionTotalsManUnited =
GROUPBY (
    ReactionsManUnited,
    ReactionsManUnited[id],
    ReactionsManUnited[reactions.type],
    "TotalReactions", COUNTX ( CURRENTGROUP (), ReactionsManUnited[reactions.type] )
)

Let me break down the code:

- The Calculated Table is named ‘ReactionTotalsManUnited’.
- The GROUPBY function groups all reaction Ids (‘id’) and types (‘reactions.type’).
- The COUNTX function is applied over reaction type, using the CURRENTGROUP() function to ensure the count is made by Id and Type within the ‘ReactionsManUnited’ table.

Finally, to test that the new DAX table works, I have created a basic KPI Card. It is aggregating exactly as expected.

Conclusion

Let’s recap. I have shown you how to use three DAX expressions, albeit nested together in one statement, which demonstrates how powerful and flexible the language is:

- GROUPBY
- COUNTX
- CURRENTGROUP

I made use of the Calculated Table functionality due to poorly performing queries made to the Facebook API. There are many other reasons for using them, with some good examples provided in Chris Webb’s blog. Where possible, you should always use the Query Editor (and the M language) for ad hoc transformations, although a DAX expression can sometimes get around slow-performing queries. DAX measures are evaluated at run time and in memory, whereas the Query Editor needs to pull down and refresh data after every applied step. I would strongly recommend that all budding Power BI developers learn DAX, in order to get the most out of their Power BI reports. GROUPBY is just one of over 200 different expressions within DAX.
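As an aside, the same grouping could in principle be written with SUMMARIZECOLUMNS instead of GROUPBY. This is only a sketch, assuming the same table and column names as above, and I have not benchmarked this variant:

```dax
-- Hypothetical alternative: group by id and reaction type,
-- counting the rows in each group rather than using CURRENTGROUP()
ReactionTotalsManUnitedAlt =
SUMMARIZECOLUMNS (
    ReactionsManUnited[id],
    ReactionsManUnited[reactions.type],
    "TotalReactions", COUNTROWS ( ReactionsManUnited )
)
```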
Further Reading

- Microsoft MSDN – http://bit.ly/2l34vsW
- Power BI Blog – http://bit.ly/2lBWRJc
- Reza Rad’s Blog – http://bit.ly/2lBKjkW
- Chris Webb’s Blog – http://bit.ly/2m3IDlg
- List of DAX Expressions (Paul Turley’s blog) – http://bit.ly/2mfBZ8y

Contact Me

If you would like a copy of the workbook or have any questions about this blog, please leave a comment below or contact me on Twitter (@CallumGAdatis).

New Features in SQL Server 2016 – Part 2: Analysis Services

This blog solely focuses on the new Analysis Services features of SQL Server 2016 CTP2. For anyone who may have missed it – click here to view my opening blog on the Database Engine. Although there have not been any major Analysis Services (SSAS) enhancements in CTP2, it is pleasing that Microsoft are still looking to improve this part of the BI stack. The majority of the enhancements seem to be geared towards Tabular and DAX, although there are plans to release further Multidimensional functionality in CTP2 and beyond. There are five key enhancements for SSAS:

1. Tabular Model Partitions (Tabular).
2. Many to Many (Tabular).
3. Analysis Services PowerPivot mode.
4. New DAX functions (Tabular).
5. Enable/disable attribute hierarchies.

Tabular Model Partitions

SQL Server SSAS 2016 CTP2 includes new parallel processing functionality for tables with two or more partitions, increasing processing performance. There are no configuration settings for this. More information on this feature is given below:

1. In SSAS Tabular, partitioning big data marts will help drastically when processing new data. Instead of having to process all the data, you can just process the relevant partitions.
2. In order to create, manage, and process partitions in SQL Server Management Studio, you must have the appropriate Analysis Services permissions defined in a security role.
3. Parallel processing is also available in SSAS. This happens automatically when processing a table with more than one partition, although you can choose to independently process a partition if required.

Many to Many Dimensions (Tabular)

There isn’t too much to show or say here yet, apart from the fact that this feature is FINALLY HERE! Gone are the complications of workarounds (thanks to Marco Russo and Alberto Ferrari); we can now just use the Many to Many functionality in Tabular.
Multidimensional already has this feature, which is one of the reasons why developers have yet to buy in to Tabular. Unfortunately, Microsoft have yet to reveal much detail on how this feature will work. I just hope it performs as well as the current workarounds.

Analysis Services PowerPivot Mode

Installing PowerPivot for SharePoint is now a lot easier. If you already know how to configure SharePoint 2013, this will help you greatly. Taken from the MSDN Microsoft website, you simply use the Install Wizard and do the following:

1. Select ‘Analysis Services’ from the standard Instance Feature.
2. Choose ‘Analysis Services server mode’ and configure administrators (screenshot below).

New DAX Functions

There are many new DAX functions, which are all detailed on the MSDN website – click here for further detail. I have picked out five functions that will be highly beneficial when creating an SSAS Tabular cube. I am looking at this from a very BI-focused background, so I would recommend looking at all of the new functions to see if they benefit your needs.

1. TOPN
   a. This is not actually a new function but has been updated.
   b. Now allows TRUE/FALSE/ASC/DESC to specify sorting direction.
2. CALENDAR
   a. Returns a table with a single column named "Date" that contains a contiguous set of dates, from the specified start date to the specified end date.
   b. See similar – CALENDARAUTO.
3. MEDIAN
   a. Very simple – returns the median of the numbers in a column.
   b. See similar – MEDIANX (uses an expression).
4. NATURALINNERJOIN
   a. Inner join of a table with another table. The tables are joined on common columns in the two tables. If the two tables have no common column names, an error is returned.
   b. See similar – NATURALLEFTOUTERJOIN.
5. SUMMARIZECOLUMNS
   a. Enables you to group data together and return a summary table.
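To illustrate a couple of these, here are some minimal sketches. The table and column names (FactSales, DimCountry) are hypothetical, purely for illustration:

```dax
-- CALENDAR: returns a one-column "Date" table covering the whole of 2015
EVALUATE CALENDAR ( DATE ( 2015, 1, 1 ), DATE ( 2015, 12, 31 ) )

-- MEDIAN: a model measure returning the median of a numeric column
MedianUnits := MEDIAN ( FactSales[Units] )

-- SUMMARIZECOLUMNS: group by country and return a summary table
EVALUATE
SUMMARIZECOLUMNS (
    DimCountry[CountryName],
    "TotalUnits", SUM ( FactSales[Units] )
)
```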
Enable/Disable Attribute Hierarchies

Again, this functionality is tailored towards SSAS Tabular. There is a new setting that ties back to the Tabular mode metadata:

1. ColumnUsage property
   a. DAXUsage
      i. Attribute hierarchies can only be used in measures.
   b. UnrestrictedUsage
      i. Can be set in the XMLA (in SSDT) or by using an ALTER statement in the Tabular model.
      ii. An example (fragment) taken from MSDN:

<Alter>
  <ObjectDefinition...>
    <Database>
      <Dimensions>
        <Dimension>
          <Attributes>
            <Attribute>
              <ddl500:ColumnUsage value="ddl500_500">
                DAXUsage | UnrestrictedUsage
              </ddl500:ColumnUsage>

Multidimensional Features

The full SQL Server 2016 release will include some key enhancements to Multidimensional SSAS. These include:

1. Netezza as a Data Source (Netezza Data Warehouse | IBM - NDM Technologies).
2. General performance improvements.
3. Unnatural hierarchies.
4. Distinct counts.
5. DBCC support – checks the logical and physical integrity of objects in the specified database.
6. Exposing on-premises Multidimensional cubes in the cloud with Power BI.

I will provide further information on the improvements as and when Microsoft announce them.

References

For more information on all of the new SSAS SQL Server 2016 features, the below resources/blogs are highly recommended:

- Official Microsoft Page – https://msdn.microsoft.com/en-us/library/bb522628(v=sql.130).aspx
- Brew your own Business Intelligence Blog – http://byobi.com/blog/2015/05/ssas-related-enhancements-in-sql-server-2016/
- Jorg Klein SQL Server Blog – http://sqlblog.com/blogs/jorg_klein/archive/2015/05/22/bi-on-your-terms-with-sql-server-2016.aspx

SQL PASS Summit 2014 – Kick Off

Day 1 has kicked off in Seattle, a remarkable city. Having arrived a week early, I’ve had plenty of time to check out the sights and the food, and have enjoyed it immensely - a brilliant venue for PASS!

There were a few announcements at this morning’s keynote, mostly regarding Azure. Azure SQL Databases are gaining larger index handling, parallel queries, extended events and in-memory columnstore for data marts. Joseph Sirosh gave a talk about Machine Learning & Information Management, showing a cool example of Azure Stream Analytics using Microsoft Kinect sensor information of customer interactions in a shop being uploaded straight into Power Map! I am looking forward to hearing more on Machine Learning. There are also a handful of great improvements for Power BI. I am most looking forward to the new Live Ops Dashboards and drill-through actions! Combo Reports also look promising…

Moving on to the first session, I chose to attend ‘What’s new in Microsoft Power Query for Excel’. As it turns out, there’s not a massive amount of new stuff – some new data sources and a tick box when you do a merge to remove name prefixes. However, one of these new sources is the long-awaited Analysis Services data source. The new ODBC data source is a great addition also. There was a mention regarding the possibility of a decoupled M-query SSIS component! We probably won’t hear of anything until later in 2015, unfortunately. I would say this was not a level 300 session, more like 100/200.

The second session was ‘SQL Server 2014 Always On (High Availability and Disaster Recovery)’: a nice overview of what was brought in in 2012 and the newer 2014 features – these including an increased maximum number of secondary replicas, increased availability of readable secondaries and the Add Azure Replica Wizard. Despite not being a DBA and it being a level 300 session, I found it easy to follow and absorb.
I feel many of the DBAs in the room may not have taken away any information they did not already know, however.

Niko Neugebauer gave a fantastic, in-depth session on ‘ETL Patterns with Clustered Columnstore Indexes’. It was a fast-moving talk, despite the time spent waiting for some executions. It demanded your full attention! Definitely worthy of its 400 level. It left me a little tired for Marco Russo’s ‘DAX Patterns’ session, which showed some examples of workarounds for common Tabular weaknesses, like distinct counts for Type 2 slowly changing dimensions and cumulative quantities.

Overall it was a strong day. I am looking forward to tomorrow. More to follow…

KPIs in Power View and DAX Queries for KPIs

I was recently approached by a client where we have assisted in implementing a Microsoft Analysis Services Tabular solution and a supporting management pack of SSRS reports. They were asking about utilising some of the KPIs defined in their Tabular cube in the SSRS reports using DAX queries. I was at a bit of a loss, and most web help seemed to suggest it couldn’t be done: Blog comment from 2012

However, with a bit of time to investigate, it appears Power View does now support Tabular KPIs, and as such a DAX query should be able to pull them out. My KPI base measure is named “Highest Ranking” and I am able to view its value, status or goal in Power View. A little work with every SQL developer’s best friend, Profiler, and I had what I was looking for. It appears that when you create a KPI, SSAS defines some measures in the background which do some of the work of the MDX KPI functions. Therefore the following DAX query against my cycling demo cube returns the expected results and could therefore be used in SSRS reporting:

EVALUATE
ADDCOLUMNS (
    VALUES ( 'DimRider'[Name] ),
    "Highest_Ranking", 'FactRanking'[Highest Ranking],
    "Highest_Ranking_Goal", 'FactRanking'[_Highest Ranking Goal],
    "Highest_Ranking_Status", 'FactRanking'[_Highest Ranking Status]
)
ORDER BY 'DimRider'[Name]

I also tested for the existence of other KPI functions, such as Trend and Weight, but these do not appear to be present. It is also interesting that the use of a KPI over a measure does not change the measure name but just groups it in the field list, and as such there is no need for a value function. For more info on KPIs in Power View there is additional documentation here. I am currently unsure of the required versions for this functionality, so if it is missing from your environment I would love to hear from you.

RANKX–Analysis Services 2012 Tabular

RANKX was a new function made available in DAX at the launch of SQL Server 2012; Ranking is a widely used function in most BI projects and was very much anticipated. Having worked extensively with Tabular on a recent project, we had the need to rank over large sets of data, with models based on billions of rows. Whilst we were extremely impressed with the performance of Tabular, we found something not quite right with RANKX.

Suppose we had the following very simple format of data, and wanted to perform a simple rank over it:

CountryKey  Date        Units  SalesValue
53          24/01/2012  62     81.290619
156         03/01/2012  3575   5186.565208
48          07/05/2012  321    362.836524
157         16/04/2012  7      8.005637
134         19/08/2012  20     44.12
232         29/07/2012  40     49.313156
178         05/03/2012  47     61.25
63          26/03/2012  51     73.369509
153         17/06/2012  3061   6026.4
31          08/07/2012  3622   5005.038257
140         13/02/2012  9870   7713.968564
14          27/05/2012  29220  46728.26302
98          20/08/2012  297    377.653459
131         30/07/2012  172    182.591931
48          27/03/2012  348    411.801792
157         06/03/2012  3      8.01422

In order to rank the above data by country over all time, we would probably write a simple DAX query as follows:

DEFINE
  MEASURE FactSales[UnitCount] = SUM ( FactSales[Units] )
  MEASURE FactSales[Rank] = RANKX ( ALL ( FactSales[CountryKey] ), FactSales[UnitCount] )
EVALUATE
  SUMMARIZE (
    FactSales,
    FactSales[CountryKey],
    "ProductSalesUnits", FactSales[UnitCount],
    "Rank", FactSales[Rank]
  )
ORDER BY
  FactSales[UnitCount] DESC

Now you can see from the query above that, in order to avoid any doubt, we only rank over a single table and have no joins to other tables at all. This works fine, resulting in the data below.
FactSales[CountryKey]  [ProductSalesUnits]  [Rank]
227                    91818113             1
226                    21487836             2
39                     9033389              3
14                     7597054              4
82                     4656585              5
109                    3678302              6
75                     3371681              7
140                    2511239              8
107                    1850520              9
208                    1443241              10
31                     1092047              11
153                    997860               12
201                    914827               13
59                     870366               14
163                    840927               15

Now assume we rank over a lot more data and wish to apply a simple CALCULATETABLE in order to filter some data out. We may write a DAX query as follows:

DEFINE
  MEASURE FactSales[UnitCount] = SUM ( FactSales[Units] )
  MEASURE FactSales[Rank] = RANKX ( ALL ( FactSales[CountryKey] ), FactSales[UnitCount] )
EVALUATE
  CALCULATETABLE (
    SUMMARIZE (
      FactSales,
      FactSales[CountryKey],
      "ProductSalesUnits", FactSales[UnitCount],
      "Rank", FactSales[Rank]
    ),
    DATESBETWEEN ( 'Date'[CalDate], DATE ( 2012, 1, 1 ), DATE ( 2012, 8, 31 ) )
  )
ORDER BY
  FactSales[UnitCount] DESC

In this instance we are joining to a Date dimension, but that is it. The above query yields the result below:

FactSales[CountryKey]  [ProductSalesUnits]  [Rank]
227                    83147875             1
226                    19446567             2
39                     8137335              3
14                     6769390              4
82                     4100168              5
109                    3321496              6
75                     3010890              7
140                    2165554              8
107                    1655472              9
208                    1274740              10
31                     987122               11
153                    908938               12
201                    853322               13
59                     771677               14

That works; we get results as we would expect, within seconds. Now, instead of ranking over an integer field, let’s apply the RANKX function to a real number.
In this example we get a worrying result set, using the DAX as follows:

DEFINE
  MEASURE FactSales[SalesValue] = FactSales[Sales Value]
  MEASURE FactSales[Rank] = RANKX ( ALL ( FactSales[CountryKey] ), FactSales[UnitCount] )
EVALUATE
  CALCULATETABLE (
    SUMMARIZE (
      FactSales,
      FactSales[CountryKey],
      "ProductSalesValue", FactSales[Sales Value],
      "Rank", FactSales[Rank]
    ),
    DATESBETWEEN ( 'Date'[CalDate], DATE ( 2012, 1, 1 ), DATE ( 2012, 8, 31 ) )
  )
ORDER BY
  FactSales[UnitCount] DESC

Which outputs:

FactSales[CountryKey]  [ProductSalesValue]  [Rank]
227                    84074007.25          1
226                    29928143.25          3
14                     10859628.74          4
39                     8451588.111          4
109                    7964922.769          6
82                     6254219.85           6
75                     4730390.37           7
107                    2466064.97           9
208                    1904009.18           10
140                    1862708.961          11
153                    1311217.35           11
22                     1207366.72           13
59                     1182179.95           15

Now let’s be clear: all we have done is change the measure from an integer to a float; the rest of the data is the same. You will notice that there are ties in the data where there should not be. Having scratched our heads for hours, rebuilt the model, rewrote the DAX, and had a number of colleagues check it over, we found that when no CALCULATETABLE is applied we get the correct answer again, as follows:

DEFINE
  MEASURE FactSales[SalesValue] = FactSales[Sales Value]
  MEASURE FactSales[Rank] = RANKX ( ALL ( FactSales[CountryKey] ), FactSales[UnitCount] )
EVALUATE
  SUMMARIZE (
    FactSales,
    FactSales[CountryKey],
    "ProductSalesValue", FactSales[Sales Value],
    "Rank", FactSales[Rank]
  )
ORDER BY
  FactSales[UnitCount] DESC

FactSales[CountryKey]  [ProductSalesValue]  [Rank]
227                    92885561.31          1
226                    33237033.3           2
14                     12253005.68          3
39                     9414266.358          4
109                    8928147.606          5
82                     7120811.54           6
75                     5296490.13           7
107                    2756500.54           8
208                    2146627.18           9
140                    2115750.609          10
153                    1479858.53           11
22                     1339255.82           12
59                     1324799.98           13
105                    1320651.83           14
31                     1277065.779          15

So what we have learnt here is that RANKX seems to give the wrong answer, but only when ranking over real numbers, and only when we filter the data set in some
way using CALCULATETABLE. Not being able to find a clear reason for this behaviour, we eventually gave in and raised it with Microsoft. Having spent a week or so working it through with the great support team at Microsoft, it seems that this is a current “feature” of RANKX. It is believed to be a floating-point arithmetic issue that is driven by how floating-point numbers are stored. It is documented here: http://support.microsoft.com/kb/78113/EN-US

There is also some further information documented here:

http://technet.microsoft.com/en-us/library/gg492146.aspx
http://www.microsoft.com/en-us/download/details.aspx?id=4106

I have been told by Microsoft that this is the same as it is in Excel and Analysis Services; however, that still does not explain why the CALCULATETABLE makes a difference. I am still working with Microsoft to see if we can get to the bottom of it.

However, if precision beyond 15 significant figures within the rank is not important to you (up to 99 billion with 2DP), this issue is very easily worked around.
Take the following DAX query:

DEFINE
  MEASURE FactSales[UnitCount] = FactSales[Royalty Value]
  MEASURE FactSales[Rank] = RANKX ( ALL ( FactSales[CountryKey] ), ROUND ( FactSales[UnitCount], 2 ) )
EVALUATE
  CALCULATETABLE (
    SUMMARIZE (
      FactSales,
      FactSales[CountryKey],
      "ProductSalesValue", FactSales[UnitCount],
      "Rank", FactSales[Rank]
    ),
    DATESBETWEEN ( 'Date'[CalDate], DATE ( 2012, 1, 1 ), DATE ( 2012, 8, 31 ) )
  )
ORDER BY
  FactSales[UnitCount] DESC

Which now correctly outputs:

FactSales[CountryKey]  [ProductSalesValue]  [Rank]
227                    84074007.25          1
226                    29928143.25          2
14                     10859628.74          3
39                     8451588.111          4
109                    7964922.769          5
82                     6254219.85           6
75                     4730390.37           7
107                    2466064.97           8
208                    1904009.18           9
140                    1862708.961          10
153                    1311217.35           11
22                     1207366.72           12
59                     1182179.95           14
105                    1196551.61           13
31                     1132926.109          15

You will notice that all I have done is ROUND the measure in the RANKX function to two decimal places. This stops the calculation engine from ranking over an approximated value and forces it to rank on a decimal with fewer than 15 significant figures, which in turn fixes the ranking. Further to this, rounding the values prior to loading them into the model will also work; just be sure to keep below the 15 significant figures if possible.

So, in summary, it seems that the calculation engine, when using the RANKX function, does not handle the float data type as we may expect. This can easily be rectified by losing some precision on the RANK measure you define. We have been informed that using the CURRENCY function will also fix the problem. I hope this stops others going through the same pain. As and when I get any further information on this issue, I will post it up.
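For completeness, here is a sketch of the CURRENCY-based alternative mentioned above. This is untested here and simply assumes the same measure names as the queries in this post:

```dax
DEFINE
  MEASURE FactSales[UnitCount] = FactSales[Royalty Value]
  -- CURRENCY casts the float to the fixed-decimal (currency) type,
  -- so RANKX no longer ranks over an approximated floating-point value
  MEASURE FactSales[Rank] =
    RANKX ( ALL ( FactSales[CountryKey] ), CURRENCY ( FactSales[UnitCount] ) )
```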

Allocations in PowerPivot Using DAX

Although I didn't mention it, the inspiration for my last post was the excellent MDX Solutions 2nd Edition. Whilst flicking through the book a while back, I thought it would be interesting to see how DAX and PowerPivot could handle the so-called 'Common Calculations and Selections in MDX'. This post continues that theme, focusing on one calculation in particular from the aforementioned MDX book, namely 'Unweighted Allocations down the Hierarchy'.

Essentially this business problem is about allocating data that is entered at a higher level (e.g. Quarter) down to a lower level (e.g. Day). It varies from business to business what the users actually want to see at the day level in this situation. Some expect to see the Quarter amount, some want to see a blank value, and some users want the amount in the quarters allocated down to the day level. If the expectation is that data should be allocated, then one way of doing the allocation is to use a ratio. In the MDX book, this is achieved by the following MDX:

1.0 /
Descendants(
    Ancestor( [Date].[Calendar].CurrentMember, [Date].[Calendar].[Calendar Quarter] ),
    [Date].[Calendar].CurrentMember.Level,
    SELF
).Count

If we put this into a query and run it against the Calendar hierarchy in AdventureWorksDW, we get the following results. Therefore our quarter-entered value can simply be multiplied by this ratio to get the correct value at each level in the Calendar hierarchy.

PowerPivot

So the challenge is how to do this in PowerPivot. I started off by looking for some data in AdventureWorksDW that would fit the bill. A good candidate seems to be the FactSalesQuota table, which I imported into PowerPivot along with its related dimensions. Once in the pivot table, I wanted to be able to calculate the correct ratio at the daily and monthly levels. For the daily level this is 1/90, as there are 90 days in the Q1 of that year, and for the month level it is 1/3, as there are 3 months in the quarter.
Given that there's no MDX-style Ancestor() function in DAX, I ended up having to have a different calculation at different levels. Something that I learnt from this forum post is that you need to use IF() in order to isolate a calculation at a different level. Therefore the DAX that I came up with was:

=IF(
    COUNTROWS( VALUES( 'DimDate'[FullDateAlternateKey] ) ) = 1,
    1 / CALCULATE( COUNTROWS( 'DimDate' ), ALL( 'DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName] ) ),
    1 / CALCULATE( COUNTROWS( DISTINCT( 'DimDate'[EnglishMonthName] ) ), ALL( 'DimDate'[EnglishMonthName] ) )
)

The COUNTROWS(VALUES('DimDate'[FullDateAlternateKey])) = 1 allows me to check that we're at the day level, and if we are at the day level, then count the number of days in the quarter. Otherwise, if we're at the month level, we calculate the number of months in the current quarter. When this DAX is put into a new measure we get the following results. This produces the same results as the MDX statement.

The next step was then to apply this ratio to the Sales Amount Quota for the current quarter. This was achieved using the following DAX:

=IF(
    COUNTROWS( VALUES( 'DimDate'[FullDateAlternateKey] ) ) < 90,
    CALCULATE( SUM( 'FactSalesQuota'[SalesAmountQuota] ),
        ALL( 'DimDate'[FullDateAlternateKey], 'DimDate'[EnglishMonthName] ) ) * 'FactSalesQuota'[Ratio],
    SUM( 'FactSalesQuota'[SalesAmountQuota] )
)

The < 90 at the start of the statement allows me to check if we're at the day level or the month level. If we're at either of these levels, then we want to pick up the Quarter-level SalesAmountQuota and multiply it by the ratio. Otherwise, at the Quarter and Year levels, we just apply the original SalesAmountQuota value. This produces the following results:

Conclusion

The use of the MDX Scope statement would have been nice to pinpoint exactly where I wanted the calculation to occur, but then again, the use of IF() keeps it quite simple, which is better for an Excel power user.
Whilst MDX is very powerful, can you imagine a business user getting to grips with MDX statements such as Scope or Freeze? The bottom line is that the DAX calculation produces exactly what I want - the new AllocatedSalesQuota measure is correct at all levels in the Calendar hierarchy. Even with an IF() statement, like everything else in PowerPivot, the calculation performs very quickly.

DAX Closing Balances

One of the calculations often required by users in reports is a point-in-time closing balance, which is especially relevant when dealing with stock levels or financial data. In the Analysis Services world, at least in the Enterprise Edition, we're lucky that we have a variety of aggregation functions that can deal with semi-additive measures such as a closing balance. Or we can always turn to MDX if we need to, which will achieve the same thing. In PowerPivot we don't have aggregation functions, but there is DAX, which has a variety of time intelligence functions up its sleeve. The idea behind this post is to see how PowerPivot would deal with the requirement of reporting a closing balance measure.

DAX Time Intelligence Functions

One of the DAX functions that we can use for our closing balance is called ClosingBalanceMonth(), which will simply evaluate an expression at the end of a given month. There are also the similar ClosingBalanceQuarter() and ClosingBalanceYear(). Having noticed these functions within PowerPivot, my first port of call was to set up a simple PowerPivot model by taking data from the FactInternetSales table in AdventureWorksDW, plus all its related dimensions of course. What I ended up with was the following simple pivot table in Excel 2010. Sales Amount obviously isn't a semi-additive measure, but I'm treating it as one in this example.

My first attempt was to use ClosingBalanceMonth() to try and get the value for the month to be equal to the value for the last day in the month. I managed to come up with the following formula:

=CLOSINGBALANCEMONTH(
    SUM( [SalesAmount] ),
    'DimDate'[FullDateAlternateKey],
    ALL( 'DimDate'[FullDateAlternateKey] )
)

This produces the results that are shown below, in the measure called ClosingBalanceFormula. It does a good job of calculating a closing balance for each month, but it presents that closing monthly balance on each day.
Therefore, I started looking for an alternative approach and found the LastDate() function. It returns the last date that's in context for the passed date column. The last date that's in context at the month level will be the last date in the month, but thankfully the last date that's in context at the day level will be the current day. Therefore I came up with the following formula:

='FactInternetSales'[Sum of SalesAmount]( LASTDATE( 'DimDate'[FullDateAlternateKey] ) )

This produced the results that I wanted - the daily closing balance at the day level and the monthly closing balance at the month level.

A bit more digging led me to find out that a combination of the Calculate() and LastDate() functions gets to the same result, as an alternative. That formula is:

=CALCULATE(
    SUM( 'FactInternetSales'[SalesAmount] ),
    LASTDATE( 'DimDate'[FullDateAlternateKey] )
)

Conclusion

It's a shame that the ClosingBalanceMonth() formula didn't quite work, but perhaps getting that approach to work will come out in the wash with more DAX experience. However, as with MDX or any kind of expression language, there are always going to be a load of different ways to get to the same result. It's much more efficient to work knowing that you've got several ways around a particular problem, rather than being forced down a particular path, and in fact I think the range of calculation functions available shows the richness of the DAX language.