Simon Whiteley's Blog

PowerBI Optimisation P3 – Extracting and Source Controlling PowerBI Data Models

Source Control – once seen as “something proper developers do” – has been an integral part of the way business intelligence developers work for a long time now. The very idea of building a report, data model or database without applying some kind of source control actually pains me slightly.

However, there has been a push for “Self-Serve” reporting tools to strip out anything that looks remotely like a technical barrier for business users - this includes the ability to properly track changes to code.

We find ourselves in a very familiar situation – versions of PowerBI desktop files are controlled by including version numbers in file names. I’ve seen several copies of “Finance Dashboard v1.2.pbix”. This is obviously dangerous – who’s to say that someone didn’t open up the file, edit it and forget to increment the name? Once a file has been shared, there’s no controlling what changes happen to it from that point. If this happened to an SSIS package, for example, we would still be able to perform a code comparison. This would highlight differences between the two packages so we could accurately see what caused the changes. This is not currently possible with PBIX files in their entirety.

We can, however, compare the data model behind the file. This allows us to check for changes in business logic, amendments to DAX calculations, additions of new fields etc. If the performance of two PBIX files differs drastically even though they were meant to be the same “version”, then this is a reasonable starting point!

Extracting the Data Model from a PowerBI PBIX File

Firstly, we need to extract the JSON that describes the Tabular model embedded in the file (technically, this is TMSL, the Tabular Model Scripting Language, but it’s still JSON…).

We can do this by connecting to the model via SSMS. I’ve talked about the steps required to do this here.

So, assuming you have found your temporary SSAS port and connected via SSMS, you should see something like this:

image

As with any other Tabular model, you can right-click and script out the database like so:

image

If we script this to a new query window, you’ll see the various JSON objects that describe your PowerBI model:

image

This script contains the details for all tables, attributes, DAX measures etc required for your data model.
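To give a flavour of what comes out, the script is a single TMSL command wrapping the model definition – something along these lines (heavily abridged, with illustrative table, column and measure names rather than anything from a real model):

{
  "create": {
    "database": {
      "name": "PowerBI Model",
      "model": {
        "tables": [
          {
            "name": "Sales",
            "columns": [
              { "name": "SalesAmount", "dataType": "decimal", "sourceColumn": "SalesAmount" }
            ],
            "measures": [
              { "name": "Total Sales", "expression": "SUM ( Sales[SalesAmount] )" }
            ]
          }
        ]
      }
    }
  }
}

It’s this text that we can diff and source control.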

Comparing PowerBI Data Models

What if someone has been using a specific version of my PowerBI desktop file, but they’ve modified it and it has stopped working? For a Tabular model, I’d compare the model definition to the version in source control, which would automatically highlight any changes.

Now that we can script out our PowerBI model, we can apply the same principles. Say, for example, I make a couple of changes to my sample PowerBI report and want to figure out how it has changed compared to a baseline script I exported previously.

The easiest option is to use a tool like TextPad – here you can compare two text documents and it will highlight any differences it finds between the two. For example, after I changed the name of a table and removed a column, the text comparison highlighted these changes as below:

image

I can now be confident that if someone sends me a PBIX file, I can check to see if there are any data model changes without having to manually eyeball the two side by side. This alone is a huge leap forward in manageability of models.

The next step would be to add this file to an actual Source Control provider, such as Visual Studio Team Services. This tool is free for the first 5 users and can be used with Visual Studio 2015 Community Edition – which is also free!

Essentially you would add this exported script to your source control directory each time you updated the model. By checking in each new version of the model, you can compare it against previous versions, much like with the TextPad comparison above.

Final Thoughts

In the end, this isn’t real, true Source Control. If you make a mistake, you can only view what the previous configuration was; you cannot roll the code back directly into your PowerBI model. It is, however, a step towards managing PowerBI with a bit more discipline and rigour.

I don’t see this as a huge drawback as rumours on the wind are hinting at larger steps in this direction coming with future releases. Let’s hope we’re not having to work around these problems for much longer!

 

 

PowerBI Optimisation P2 – What’s using all my memory?

If you're a regular user of PowerBI, you're probably aware of the size limitations around datasets and it's very likely you've hit them more than once whilst writing reports on top of large datasets. It's difficult to see where size savings can be made directly through PowerBI, but we can use traditional tabular optimisation techniques to help us!

For those not in the know, a single dataset can be up to 1 GB in size, with Excel files limited to 250 MB. Each user also has a storage limit as follows:

  • Free users have a maximum 1 GB data capacity.
  • Power BI Pro users have a maximum 10 GB data capacity.
  • Pro users can create groups, with a maximum 10 GB data capacity each.

For more information about the limits themselves and how to view your current usage, there’s a PowerBI blog post about it here: https://powerbi.microsoft.com/en-us/documentation/powerbi-admin-manage-your-data-storage-in-power-bi/

But what if you’re hitting that 1 GB dataset limit? There’s very little within PowerBI itself to help you understand which tables are the largest, where you could make savings, or generally anything else about your model. The answer is to connect to the model via SSMS and take advantage of the Tabular system views, as described here.

What determines Tabular model size?

It’s worth discussing this briefly before going into the details. Put very simply, the xVelocity engine used by the Tabular model will hold more data if there are more unique values in a column. The key to avoiding large models is, therefore, to avoid columns with huge numbers of distinct values. Text fields will generally be pretty bad for this, although there are common design patterns to avoid the worst offenders.

A simple example is to look at a DateTime column – the combination of date and time means that each minute of each day is a unique value. Even if we ignore seconds, we’re adding 1,440 new, distinct values for every day within the system.

If we split this into two fields, a date and a time, this problem goes away. Each new date adds just a single value, whilst the time column can never hold more than 1,440 hour-and-minute combinations, so that’s a controllable field.
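Ideally you’d make this split in the source query or in Power Query, so the original DateTime column never reaches the model at all, but the logic is easy to express. As a sketch in DAX, assuming a hypothetical 'Sales'[OrderDateTime] column:

Order Date = DATE ( YEAR ( 'Sales'[OrderDateTime] ), MONTH ( 'Sales'[OrderDateTime] ), DAY ( 'Sales'[OrderDateTime] ) )

Order Time = TIME ( HOUR ( 'Sales'[OrderDateTime] ), MINUTE ( 'Sales'[OrderDateTime] ), 0 )  // seconds dropped, so at most 1,440 distinct values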

There are a few techniques to avoid these problems if you find them – I’d advise heading over to Russo & Ferrari for some general tips here and some more detailed techniques here.

Accessing Memory Usage Data

So - following the above instructions, connect to your data model and open a new DMX query:

clip_image001

Here you can use SQL syntax to query several DMVs behind the model - not all of them will be relevant in the cut-down Tabular instance that PowerBI uses, but there is one in particular that will help us manage our model size - DISCOVER_OBJECT_MEMORY_USAGE.
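The DMV can be queried as if it were a table; sorting by the non-shrinkable memory column puts the biggest objects at the top:

SELECT *
FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE
ORDER BY OBJECT_MEMORY_NONSHRINKABLE DESC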

Admittedly, on its own this is pretty incomprehensible. We can filter down the results slightly into something that makes a little sense, but you’ll generally get a big list of model entities with numbers against them – OK as a starter but not great as an actual model optimisation tool:

image

Stopping here we would at least have a hit-list of the worst-offending columns and we can use this to start tackling our model. But there are much better ways to approach this problem!

Tabular Memory Reports

There are several free tools made available within the SSAS community for people to analyse their current SSAS memory usage. These tools simply query this same data but apply a bit of data modelling and make the data much more accessible.

For straight Tabular, I would tend to use Kasper de Jonge’s old Excel spreadsheet, which pulls in data quite reliably; however, there is an updated PowerBI model found here.

However, this doesn’t play nicely with the PowerBI flavour of tabular just yet, so I would advise using the SQLBI.com Vertipaq Analyser.

Following their instructions and pointing it at my temporary tabular instance, we can refresh successfully and use their categorisations to explore the model. I’ve added some conditional formatting to help see where the issues are.

I can see, for example, which of the tables in my model are the worst offenders, and what’s causing it:

image

Interestingly, the Customer dimension is pretty huge in my example. It has far less data than my fact table, but the dictionaries required are pretty hefty. Dictionaries are built using string lookups and are heavily affected by high volumes of unique values – so I can presume I’ve got some pretty big text strings in this dimension.
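If you want to sanity-check that suspicion against the raw numbers, the dictionary sizes are exposed directly through another DMV – a rough query (with the column list trimmed right down) being:

SELECT DIMENSION_NAME, COLUMN_ID, DICTIONARY_SIZE
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
ORDER BY DICTIONARY_SIZE DESC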

Looking at the Column breakdown, I can see where the offenders are:

image

This tells a slightly different story – my main offenders are from one of the hidden date dimension tables (a sign that relying on PowerBI’s inbuilt date functionality can be a memory drain) and the Sales Order Number – a unique identifier for my fact table, which is obviously going to have a large number of distinct values.

The other columns we can do more about – Email Address is the next offender. We can assume that each of the 18,000 customers will have a unique email address. However, it’s very rare that we would want to do analysis on the email address specifically, so this is a good candidate to remove from the model. At the very least, we could consider keeping only the domain, which will yield far fewer unique values.
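Again, the trimming is best done before the data hits the model, but as an illustration of the logic – assuming a hypothetical Customer[EmailAddress] column – a calculated column like this keeps just the domain:

Email Domain = MID ( Customer[EmailAddress], FIND ( "@", Customer[EmailAddress] ) + 1, LEN ( Customer[EmailAddress] ) )  // everything after the @

In practice you’d do this transformation in the source query or Power Query and then remove the original column, since a DAX calculated column still needs its source column to exist in the model.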

 

Hopefully the above will help you move forward in reducing your PowerBI data model size – I’ll be posting about Performance Analysis & Source Control over the next couple of days.

PowerBI Optimisation P1 – Connecting Via Management Studio

I recently gave a talk to the London PowerBI UserGroup and I kicked things off with a confession - "I don't do much report building in PowerBI". Perhaps an odd way to qualify myself to speak to that particular audience. I am, however, a cloud solution architect - I spend my time designing large, scalable cloud systems to process vast amounts of data, and PowerBI is a common tool used on top of these systems.

Why then, do we accept the lack of controls available within PowerBI? Given any other end-user system I'd want to know about performance bottlenecks, about data model efficiency and, more than anything, I'd want it in source control.

First and foremost, the talk is available here.

The key to it all is realising that PowerBI Desktop, when running, starts a SQL Server Analysis Services process in the background. It doesn't just use the same engine as Tabular; it literally runs Tabular in the background without telling you.

Open up a PowerBI Desktop file and, after you've seen the "initialising model…" window, you'll see this process in the background - one for each PBID session.

clip_image001

So - if the model is using Tabular in the background, we must be able to actually connect to the model!

First - Find your Temporary SSAS Port

There are two straightforward ways we can achieve this:

1. By far the easiest is to open up DaxStudio, if you have it installed.

When you open DaxStudio, it gives you a Connect window, which lists all of the PowerBI processes you have running in the background, as well as any Tabular services:

clip_image002

When you connect to a PBI file here, you'll see the port listed:

clip_image003

In this case, my port is 5524 - be aware that this will change every time you open PowerBI Desktop, so you can't hardcode anything looking for your "PowerBI port".

2. Alternatively, you can find the "msmdsrv.port.txt" file related to your specific instance.

Take a look in your user AppData folder and you should find a Microsoft/Power BI Desktop folder with some Analysis Services details:

C:\Users\<YourUser>\AppData\Local\Microsoft\Power BI Desktop\AnalysisServicesWorkspaces\

You'll see a workspace folder for each of your PBI Desktop instances; I've only got one at the moment:

clip_image004

Inside this folder, in another folder called "Data", you'll find the file we're looking for:

clip_image005

Opening this file, we see:

clip_image006

Pretty straightforward, and no DAX required. Obviously, if you have multiple instances, you'll need to figure out which of these relates to the one you're after.

Connect via SSMS

Now that we know our port, we can simply open up Management Studio, connect to Analysis Services and enter "localhost:" followed by the port number from earlier.

clip_image007

 

Once connected, you'll see a model connection - each PBIX file will have a GUID for this instance, but you can drill down and see the objects underneath, exactly as you would with a Tabular model:

clip_image008

You can now write queries, browse the model and basically treat it as a Tabular instance. The database itself will use a generated GUID, and several internal tables will do the same - you can see above that a hidden date table has been created for every date key included in my model.
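For example, a quick way to prove the point is to open an MDX query window against the instance and run a trivial DAX query - the table name below is obviously specific to whatever happens to be in your model:

EVALUATE
TOPN ( 10, 'Sales' )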

We'll discuss the applications of this in my next post - namely how this unlocks performance tuning, monitoring and source control.

Power BI Visual Studio Online Content Packs – Analysing BI Team Performance

I spend a fair amount of time championing the use of data and analytics around my clients’ companies, convincing people from all walks of life that making their data more visible and available around the company will be hugely beneficial.

I was recently challenged on this – if I’m such a firm believer in sharing data and improvement through analysis, why am I not sharing my own data? It’s a very good point, and not one I had a ready answer for.

I’m currently engaged on a project which uses Visual Studio Online for both ALM and source control. Fortunately, Microsoft have recently released the PowerBI VSO Content Pack, making the data held within your account available for dashboarding.

The above link describes the steps to connect, but I found the dashboards required a little setting up before they were completely useful.

You are first presented with a mixed bag of charts and metrics, many of which will contain no data. This is because the data model has different entities depending on the project template (Agile, Scrum or CMMI) chosen within TFS, as well as the source control binding (TFS or Git).

I removed many of the charts from the default dashboard and then went exploring in the exposed report model – I was very happy with what I found: the VSO object model exposed pretty much every metric I could think of to report on the activity of a BI development team, including report templates for each project template/source control combination you could be using.

I gave myself 15 minutes to see if I could pull out a reasonable dashboard and was pleasantly surprised by what could be done in so little time.

image

So – how do you analyse an analysis team?

How is the Project Going?

Firstly, we’re an agile team. We run iterations loosely based around Scrum principles, manage our client projects through story backlogs, and report daily on blockers, impediments etc. This information is key to us operationally, but it also tells a useful story.

How many new features were added in the sprint? How many individual user stories, each representing a distinct piece of business value, were delivered? How much effort is remaining in the backlog (and therefore how many additional sprints would be required to deliver all known functionality)? How many bugs have been raised – and how effective are we at dealing with them?

clip_image003

What’s the current Sprint Status?

The day to day metrics also tell a valuable story – was the sprint smooth and predictable, or was it a rush to deliver towards the end? How much work is still remaining in the current sprint? Are there any blocked tasks or impediments that may be seen as a risk to delivery?

clip_image005

What actual work has been done?

Stories and tasks only tell one side of the story – a task may represent a change to a single piece of code, or a large update that touches much of the system. Simply counting tasks therefore limits our understanding of how productive we were during a sprint.

Fortunately, we can also analyse the source control history, looking at the changesets committed and their contents. This provides some insight into the complexity of those completed tasks – it’s not a perfect measure but gets us a little closer. We can now ask questions such as:

How many individual changesets were committed? Who commits most regularly? What kind of work was done – what is the most common file amended? Is there someone who hoards changes and causes potential problems by not regularly committing their work? Is our development behaviour changing over time as we review our practices and learn from them?

clip_image007

Finally, it’s also worth noting that the content pack has been fully set up with synonyms to allow for the Q&A Natural Language query bar to be activated. So if there’s a metric not included in the dashboards, users can simply type their question into the query bar.

For example, I want to better understand the type of changes we’re doing – are we creating new entities or modifying existing code? For this, I tried the following, with the relevant chart appearing before I’d even finished typing:

clip_image009

There’s a whole lot more content in the packs under the individual report tabs, but this gave me a good point to start that conversation. I can now provide weekly dashboard updates to my project sponsors, showing just how much progress we’re making.

This is a huge boost to my ability to champion data and I’m expecting it to actually improve some of our working habits. Now, if anyone interrupts me mid-flow I can simply grab my phone, load up the Power BI app and pull out some insights from the team’s current performance, wherever I am.