
Adatis BI Blogs

Data Data Revolution – The Results

This blog will take you through the Power BI dashboard, Data Data Revolution – The Results, which is the product of the data collected from the demo presented at the last SQLBits conference (for further details, please check my previous blog http://blogs.adatis.co.uk/josemendes/post/Data-Data-Revolution). This dashboard provides a breakdown of the players' preferences and performance, split by different indicators. In the following video, I'll show some of the conclusions we can draw from the analysis of the data.

Data Data Revolution

Following the DISCO theme, Adatis decided to present all the SQLBits attendees with a challenge based on the game Dance Dance Revolution. At the end of the game, the players were presented with two Power BI dashboards, one that streamed the data in near real time and the other representing historical data. This blog will detail the different components used in the demo.

(High Level Architecture)

The starting point

The first requirement was to have a game that could run on a laptop and store the output data in a file. Based on the theme of the conference, we chose the game Stepmania 5 (https://www.stepmania.com/download/). After understanding how it worked and what type of details we wanted to capture, we adapted the program so it was possible to save the output to a TXT file every time a key was pressed. The following is an example of how the data was structured.

{"Player": "0", "Row": "768", "Direction": "Left", "NoteType": "Tap", "Rating": "OKAY", "Health": "Alive", "Combo": "0", "Score": "0", "Artist": "Katrina feat. Sunseaker", "Song": "1 - Walking On Sunshine", "Difficulty": "Easy"}

Capturing player details

To complement the game output, we decided to create an MVC application that had two functions: capturing the player details in an Azure SQL DB, and uploading a new Game ID along with the player details to a reference BLOB stored in an Azure Storage container.

Sending the data to an Event Hub

Since we wanted to stream the data in near real time, we needed an application that could read the data from the output file as soon as it was updated. To achieve this, we built a C# application that sent the data to an Event Hub. To make sure we didn't upload duplicate data, we implemented logic that compared the latest row with the previous one. If they were different, the row was uploaded; if not, the program would wait for the next input.

Distributing the data

To distribute the data between the Azure SQL DB and the Power BI dataset, we used two separate Stream Analytics jobs. The first job used the Event Hub and the reference BLOB as inputs and the Azure SQL DB as output, while the second job used the same inputs but had a Power BI dataset as an output. Due to the dataset limitations, we ensured that all the formatting was applied in the Stream Analytics query (e.g. casts between varchar and bigint, naming conventions, etc.); a sketch of this kind of query is included at the end of this post.

Power BI streaming datasets

In this scenario, the streaming datasets only work properly when created by the Stream Analytics job. Any of the following actions invalidates the connection between the jobs and the dataset:

· Creating the dataset in Power BI
· Changing column names
· Changing column types
· Disabling the option Historic data analysis

When the dataset crashes, the only solution is to delete and re-create it. As a result, all the linked reports and dashboards are deleted.

Representing the data

By the time the demo was built, connectivity to live datasets from Power BI Desktop was not available, which means the live streaming dashboard was built using the online interface. It is important to note that it is impossible to pin an entire page as a dashboard when using live datasets, since it won't refresh as soon as the data is transmitted. Instead, each individual element must be pinned to the dashboard, adding some visual limitations.

The performance of the players could be followed by checking the dashboard streaming the results in near real time.
The word "near" was used several times in this blog because the streaming is limited not only by the internet connection but also by Power BI's concurrency and throughput constraints, meaning the results were not refreshed immediately. The second report was built using Power BI Desktop and was connected to the Azure SQL DB. At the end of the game, the players could obtain the following information:

· Who was the winner
· How they performed during the game
· The number of hits for each rating
· In which direction they were most proficient
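
For illustration, below is a minimal sketch of the kind of Stream Analytics query that joins the Event Hub stream to the reference BLOB and applies the casts and renames before writing to the Power BI output. The input/output aliases and column list are assumptions rather than the exact query used in the demo.

-- Hypothetical Stream Analytics query: join the game events to the player
-- reference data and cast the string values to bigint for the Power BI dataset.
SELECT
    R.GameId                 AS GameId,
    R.PlayerName             AS PlayerName,
    E.Direction              AS Direction,
    E.Rating                 AS Rating,
    E.Song                   AS Song,
    CAST(E.Score AS bigint)  AS Score,
    CAST(E.Combo AS bigint)  AS Combo
INTO
    [powerbi-output]
FROM
    [eventhub-input] E
JOIN
    [reference-blob] R
    ON E.Player = R.Player

Keeping all of the shaping in the query like this avoids the dataset rejecting rows whose column names or types drift from its definition.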

Power BI Mobile Feature Review

In March, Microsoft released 3D Touch integration for iOS amongst several other improvements to the Power BI Mobile application. This blog will look at a few of those features and examine some of the areas that still need work.

So far it seems Microsoft are doing a pretty good job of developing Power BI for the mobile platform, and this is most apparent when they exploit some of the built-in functionality that makes mobiles so handy! One of the best features of the latest iOS is 3D Touch integration, and Power BI has fully grasped this bull by the horns. Using a deep press, you can launch a pop-up menu offering some of the most useful features, such as search and notifications, but also quick access to your recently accessed dashboards.

Another big issue that the Power BI team have tackled head on is how to make rich visualisations mobile optimised. For this they have two solutions, the first being the desktop and mobile view options within Power BI Desktop.

Desktop view of Report
Mobile view of Report

The mobile view essentially de-constructs your report and lets you drag and drop your visualisations into the mobile template. By default, this view will always be displayed when viewing the report on a mobile device, unless you rotate the device into landscape mode, in which case the desktop version loads. I have mixed feelings about this feature. On the one hand, I like that I can see both views, but if the report were to remain in mobile view and expand to fill the horizontal space as well, this could open up a lot more opportunities for mobile reporting.

However, despite gaining some control over how the report looks on a mobile device, there are some pretty major limitations for the time being. Firstly, you cannot specify different properties, such as text size, for the desktop and mobile views. This means that you need to consider both views when creating a report that will be used on both mobile and desktop, otherwise your visual will be sacrificed, as seen above in the deliveries and fragile items cards. Another drawback is that each visual element has to be snapped to the prescribed grid, and this includes the transparent grab handle/border that is used to select the visual. This border is half a grid square tall in the mobile view, which means that you get a lot of white space, something you want to reduce in a mobile view. Finally, visuals cannot be overlaid. Where I have circles around some of my cards in the desktop view, this is not possible in the mobile view. Whilst you can add an image or logo, you cannot make use of any backgrounds, whether they be an image or just a flat colour. Thankfully, all custom visuals will work in mobile view, and any cross filtering, auto play or auto zoom features (maps) are preserved perfectly.

Microsoft's second solution is focussed around dashboards. From within the Power BI service you can arrange your pinned visuals into a mobile dashboard by switching the view as shown below. However, the best part is that if you access a dashboard that doesn't already have a mobile view, the app will automatically optimise the visuals into a mobile view so you don't have to!

One of Power BI's most notable features is Q&A – a method to query your data using natural language. Using a recognisable messenger format, this feature is really well replicated in the mobile app and adds another layer of accessibility for non-techie, non-deskbound users.

A relatively new feature in Power BI is the ability to add custom links to a dashboard visual.
This can be utilised quite nicely in the mobile app as it makes use of the deep linking technology in iOS, so that I can launch a relevant app instead of just a web page. Here I have set a Google Maps URL as the custom URL for the map visual. Clicking on this in the mobile app launches my Google Maps app, not a webpage!

Overall, I can see that the Power BI mobile team have not tried to just recreate the desktop version but have embraced the mobile platform and made use of existing features within iOS to present a feature-rich app that has a familiar feel to it. Whilst there are some limitations, my next blog will look at how to create a mobile-optimised report right from the start so that your users can benefit from mobile BI straight away!

Hierarchy Navigation In Power BI

Until recently, the overall functionality of the Matrix visual in Power BI has been limited. However, this all changed when Microsoft announced the Power BI March update, which gives users access to the Matrix Preview. This can currently be used alongside the old matrix. In this blog, I will be comparing the old Matrix and the new Matrix Preview. The updates featured in the latter are as follows:

Performance improvements
Drilling into hierarchies
Stepped layout
Cross highlighting and filtering from matrix headers and cell contents

This article will only focus on the drilling into hierarchies functionality. Click here to find more information on the Matrix updates, along with the extra features not covered in this blog.

Hierarchy Drill Through

One of the visible changes in the new Matrix Preview is the ability to show all the headers collapsed by default, making the user experience easier when dealing with large datasets. The image below shows this new feature compared between the old and new Matrix. There is also the capability to show both levels of the hierarchy simultaneously, which is again done using the hierarchy navigation buttons, as illustrated in the image below. You can also drill up and down on individual columns using the right-click function, as shown in the image below. The benefit of this is that it gives the user a more detailed drill down of a desired column.

Further drill down options are available, for example, the ability to drill down on row category headers. In normal mode (without drill mode turned on), data in other row category headers will be faintly visible. Turning on drill down mode allows users to work on a specific row category header in isolation. The following images show the differences between the two views.

Conclusion

The Matrix Preview has brought about interesting and useful tools, making it more interactive. The ability to drill up and down within a report particularly stands out for me. It is also worth mentioning that other features not covered in this blog give users increased customisation when working on reports – showing how impressive the Matrix Preview is.

April's Power BI update includes more features for the Matrix Preview. My next blog will look at the following two features added to the Matrix Preview:

Rename axis titles
New matrix visual enhancements: column sorting, column resizing, and word wrap

Further Reading (Power BI Blogs)

https://powerbi.microsoft.com/en-us/blog/power-bi-desktop-march-feature-summary/#matrix
https://powerpivotpro.com/2017/03/two-great-new-power-bi-features/

IoT Hub, Device Explorer, Stream Analytics, Visual Studio 2015 and Power BI

As we saw in my previous blog, the IoT Hub allows us to collect millions of telemetry events and establish bi-directional communication with the devices; however, more than quantity, what we need are valuable insights that will lead to smart decisions. But how can we do that?

Collecting the data

There are thousands of sensors we can use, depending on the purpose. If we check the Microsoft documentation we will find tutorials for the Raspberry Pi, Arduino, Intel Edison or even simulators created with .NET, Java or Node. The first step is always the creation of the IoT Hub on the Azure Portal. Next, we have to add the devices, which can be done either using C# and the IoT Hub Extension for VS 2015 or using the Device Explorer. This last tool, provided by Microsoft, makes it easy to register new devices in the IoT Hub and check the communication between the device and the cloud. Once the devices are properly configured, we will need to store the data, which can be done using an Azure SQL Database (a sketch of a possible table is included at the end of this post).

Representing the data

Now that we have collected the data, we want to be able to represent it. One of the best ways to do that is by creating some Power BI reports and dashboards, which will be populated via Stream Analytics. A good example of a similar architecture and example dashboards can be found on Piotr's blog Using Azure Machine Learning and Power BI to Predict Sporting Behaviour. Note that in his example, he used Event Hubs instead of the IoT Hub.

Insights and actions

Let's imagine a transportation company is collecting the telemetry from a food truck equipped with speed, location, temperature and braking sensors. In order to assist their delivery process, they have a report being refreshed with real-time data that triggers alerts when certain values are reached. One of the operators receives an alert from the temperature sensor, and after checking the dashboard he realizes the temperature is too high and will affect the quality of the products being transported. Instead of calling the driver to make him aware of the situation, because the sensors are connected to an IoT Hub, he can simply send a command to the sensor to reduce the temperature.

More info:

https://github.com/Azure/azure-iot-sdks/commit/ed5b6e9b16c6a16be361436d3ecb7b3f8772e943?short_path=636ff09
https://github.com/Azure/connectthedots
https://sandervandevelde.wordpress.com/2016/02/26/iot-hub-now-available-in-europe/
https://powerbi.microsoft.com/en-us/blog/monitor-your-iot-sensors-using-power-bi/
https://blogs.msdn.microsoft.com/mvpawardprogram/2016/12/06/real-time-temperature-webapp/
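
As a footnote to the storage step above, a table along these lines could hold the telemetry pushed out by Stream Analytics before it is surfaced in Power BI. The table and column names simply mirror the food truck example and are an assumption rather than a prescribed schema.

-- Hypothetical Azure SQL Database table for the food truck telemetry.
CREATE TABLE dbo.FoodTruckTelemetry
(
    DeviceId     NVARCHAR(50)  NOT NULL,  -- IoT Hub device id
    EventTime    DATETIME2(0)  NOT NULL,  -- time the reading was taken
    SpeedKmh     DECIMAL(5, 1) NULL,      -- speed sensor
    Latitude     DECIMAL(9, 6) NULL,      -- location sensor
    Longitude    DECIMAL(9, 6) NULL,
    TemperatureC DECIMAL(5, 2) NULL,      -- fridge temperature
    BrakingEvent BIT           NULL       -- braking sensor flag
);

Stream Analytics can then use this table as its SQL output, while the same job (or a second one) feeds the Power BI dataset that drives the alerts.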

Power BI Streaming Datasets – An Alternative PowerShell Push Script

I attended the London Power BI Meetup last night. The guest speaker was Peter Myers, on the topic of "Delivering Real-Time Power BI Dashboards With Power BI". It was a great session. Peter showed off three mechanisms for streaming data to a real-time dashboard:

The Power BI REST API
Azure Stream Analytics
Streaming Datasets

We've done a fair bit at Adatis with the first two and, whilst I was aware of the August 2016 feature, Streaming Datasets, I'd never got round to looking at them in depth. Now, having seen them in action, I wish I had - they are much quicker to set up than the other two options and require little to no development effort to get going - pretty good for demo scenarios or when you want to get something streaming quickly at low cost. You can find out more about Streaming Datasets and how to set them up here: https://powerbi.microsoft.com/en-us/documentation/powerbi-service-real-time-streaming/

If you create a new Streaming Dataset using 'API' as the source, Power BI will provide you with an example PowerShell script to send a single row of data into the dataset. To extend this, I've hacked together a PowerShell script that loops and sends 'random' data to the dataset. If you create a Streaming Dataset that matches the schema below, the PowerShell script further below will work immediately (subject to you replacing the endpoint information). If you create a different target streaming dataset, you can easily modify the PowerShell script to continually push data into that dataset too. I've shared this here mainly as a repository for me, when I need it, but hopefully it will benefit others too.

Streaming Dataset Schema

Alternative PowerShell Script

Just remember to copy the Power BI endpoint to the relevant location in the script. You can find the endpoint (or Push URL) for the dataset by navigating to the API Info area within the Streaming Dataset management page within the Power BI Service:

# Initialise Stream
$sleepDuration = 1    # Power BI seldom updates realtime dashboards faster than once per second.
$eventsToSend = 500   # Change this to determine how many events are part of the stream
$endpoint = "[INSERT YOUR ENDPOINT HERE]"

# Initialise the Payload
$payload = @{EventDate = ''; EventValue = 0; EventSource = ''}

# Initialise Event Sources
$eventSource = @('Source1', 'Source2', 'Source3')

# Iterate until $eventsToSend events have been sent
$index = 1
do
{
    # Update payload
    $payload.EventDate = Get-Date -format s
    $source = Get-Random -Minimum 0 -Maximum 3
    $payload.EventSource = $eventSource[$source]
    $value = Get-Random -Minimum 0.00 -Maximum 101.00
    $payload.EventValue = $value

    # Send the event
    Invoke-RestMethod -Method Post -Uri "$endpoint" -Body (ConvertTo-Json @($payload))

    # Report what has been sent
    "`nEvent {0}" -f $index
    $payload

    # Sleep for a second
    Start-Sleep $sleepDuration

    # Ready for the next iteration
    $index++
} While ($index -le $eventsToSend)

# Finished
"`n{0} Events Sent" -f $eventsToSend

PowerBI Optimisation P3 – Extracting and Source Controlling PowerBI Data Models

Source control – once seen as "something proper developers do" – has been an integral part of the way business intelligence developers work for a long time now. The very idea of building a report, data model or database without applying some kind of source control actually pains me slightly. However, there has been a push for "self-serve" reporting tools to strip out anything that looks remotely like a technical barrier for business users - this includes the ability to properly track changes to code.

We find ourselves in a very familiar situation – versions of PowerBI desktop files are controlled by including version numbers in file names. I've seen several copies of "Finance Dashboard v1.2.pbix". This is obviously dangerous – who's to say that someone didn't open up the file, edit it and forget to increment the name? Once a file has been shared, there's no controlling what changes happen from that point. If this happened to an SSIS package, for example, we would still be able to perform a code comparison. This would highlight differences between the two packages so we could accurately see what caused the changes. This is not currently possible with PBIX files in their entirety. We can, however, compare the data model behind the file. This allows us to check for changes in business logic, amendments to DAX calculations, additions of new fields and so on. If the performance of two PBIX files differs drastically even though they were meant to be the same "version", then this is a reasonable starting point!

Extracting the Data Model from a PowerBI PBIX File

Firstly, we need to extract the JSON that describes the embedded Tabular model (technically this is TMSL, the Tabular Model Scripting Language, but it's still JSON…). We can do this by connecting to the model via SSMS. I've talked about the steps required to do this here. So, assuming you have found your temporary SSAS port and connected via SSMS, you should see something like this:

As we would with any other Tabular model, you can right-click and script out the database as so:

If we do this to a new query window, you'll see the various JSON objects that describe your PowerBI model:

This script contains the details for all tables, attributes, DAX measures etc. required for your data model.

Comparing PowerBI Data Models

What if someone has been using a specific version of my PowerBI desktop file, but they've modified it and it has stopped working? For a Tabular model, I'd compare the model definition to source control, which would automatically highlight any changes. Now that we can script out our PowerBI model, we can apply the same principles. Say, for example, I make a couple of changes to my sample PowerBI report and want to figure out how it has changed compared to a baseline script I exported previously. The easiest option is to use a tool like TextPad – here you can compare two text documents and it will highlight any differences it finds between the two. For example, I changed the name of a table and removed a column, and the text comparison highlights this change as below:

I can now be confident that if someone sends me a PBIX file, I can check to see if there are any data model changes without having to manually eyeball the two side by side. This alone is a huge leap forward in the manageability of models.

The next step would be to add this file to an actual source control provider, such as Visual Studio Team Services. This tool is free for the first 5 users and can be used with Visual Studio 2015 Community Edition – which is also free! Essentially you would add this exported script to your source control directory each time you updated the model. By checking in your new model, you can compare previous versions, much like with the TextPad comparison above.

Final Thoughts

In the end, this isn't real, true source control. If you make a mistake, you can only view what the previous configuration was; you cannot roll back code directly into your PowerBI model. It is, however, a step towards managing PowerBI with a bit more discipline and rigour. I don't see this as a huge drawback, as rumours on the wind are hinting at larger steps in this direction coming with future releases. Let's hope we're not having to work around these problems for much longer!

PowerBI Optimisation P2 – What's using all my memory?

If you're a regular user of Power BI, you're probably aware of the size limitations around datasets, and it's very likely you've hit them more than once whilst writing reports on top of large datasets. It's difficult to see where size savings can be made directly through Power BI, but we can use traditional tabular optimisation techniques to help us!

For those not in the know, a single dataset can be up to 1 GB in size, with Excel files limited to 250 MB. Each user also has a storage limit as follows:

Free users have a maximum 1 GB data capacity.
Power BI Pro users have a 10 GB maximum capacity.
Pro users can create groups, with a maximum 10 GB data capacity each.

For more information about the limits themselves and how to view your current usage, there's a Power BI blog about it here: https://powerbi.microsoft.com/en-us/documentation/powerbi-admin-manage-your-data-storage-in-power-bi/

But what if you're hitting that 1 GB data limit? There's very little within Power BI itself to help you understand which tables are the largest, where you could make some savings, or generally anything about your model itself. The answer is to connect to the model via SSMS and take advantage of the Tabular system views, as described here.

What determines Tabular model size?

It's worth discussing this briefly before going into the details. Put very simply, the xVelocity engine used by the tabular model will hold more data if there are more unique values for a column. The key to avoiding large models is, therefore, to avoid columns with huge numbers of distinct values. Text fields will generally be pretty bad for this, although there are common design patterns to avoid the worst offenders. A simple example is to look at a DateTime column – this combination of date and time means that each minute of each day is a unique value. Even if we ignore seconds, we're adding 1,440 new, distinct records for every day within the system. If we split this into two fields, a date and a time, this problem goes away. Each new date adds just a single record, whilst we will never have any new hour and minute combinations, so that's a controllable field. There are a few techniques to avoid these problems if you find them; I'd advise heading over to Russo & Ferrari for some general tips here and some more detailed techniques here.

Accessing Memory Usage Data

So - following the above instructions, connect to your data model and open a new DMX query. Here you can use SQL syntax to query several DMVs behind the model - not all of them will be relevant in the cut-down tabular instance that Power BI uses, but there is one in particular that will help us manage our model size - DISCOVER_OBJECT_MEMORY_USAGE. Admittedly, on its own this is pretty incomprehensible. We can filter down the results slightly into something that makes a little sense, but you'll generally get a big list of model entities with numbers against them – OK as a starter but not great as an actual model optimisation tool. Stopping here, we would at least have a hit-list of the worst-offending columns and we could use this to start tackling our model. But there are much better ways to approach this problem!

Tabular Memory Reports

There are several free tools made available within the SSAS community for people to analyse their current SSAS memory usage. These tools simply query this same data but apply a bit of data modelling to make the data much more accessible.
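
Before moving on to those tools, here is a minimal sketch of the kind of raw DMV query described above, run from the DMX query window; the column selection and ordering are just one way of surfacing the worst offenders, not an official recipe.

-- List objects in the model by memory consumed, largest first.
SELECT
    OBJECT_PARENT_PATH,
    OBJECT_ID,
    OBJECT_MEMORY_NONSHRINKABLE
FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE
ORDER BY OBJECT_MEMORY_NONSHRINKABLE DESC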
For straight tabular, I would tend to use Kasper de Jonge's old Excel workbook, which pulls in the data quite reliably, and there is also an updated Power BI model found here. However, this doesn't play nicely with the Power BI flavour of tabular just yet, so I would advise using the SQLBI.com VertiPaq Analyzer. Following their instructions and pointing it at my temporary tabular instance, we can refresh successfully and use their categorisations to explore the model. I've added some conditional formatting to help see where the issues are. I can see, for example, which of the tables in my model are the worst offenders, and what's causing it:

Interestingly, the Customer dimension is pretty huge in my example. It has a lot less data than my fact table, but the dictionaries required are pretty hefty. Dictionaries are built using string lookups and are heavily affected by high volumes of unique values – so I can presume I've got some pretty big text strings in this dimension. Looking at the column breakdown, I can see where the offenders are:

This tells a slightly different story – my main offenders are from one of the hidden date dimension tables (a sign that relying on Power BI's inbuilt date functionality can be a memory drain) and the Sales Order Number – a unique identifier for my fact, which is obviously going to have a large number of distinct values. The other columns we can do more about – email address is the next offender. We can assume that each of the 18,000 customers will have a unique email address. However, it's very rare that we would want to do analysis on the email address specifically, so this is a good candidate to remove from the model. At the very least, we could consider keeping only the domain, which will yield far fewer unique values.

Hopefully the above will help you move forward in reducing your Power BI data model size – I'll be posting about Performance Analysis & Source Control over the next couple of days.

PowerBI Optimisation 1 – Connecting Via Management Studio

I recently gave a talk to the London PowerBI UserGroup and I kicked things off with a confession - "I don't do much report building in PowerBI". Perhaps an odd way to qualify myself to speak to that particular audience. I am, however, a cloud solution architect - I spend my time designing large, scalable cloud systems to process vast amounts of data, and PowerBI is a common tool used on top of these systems. Why then do we accept the lack of controls available within PowerBI? Given any other end-user system, I'd want to know about performance bottlenecks, about data model efficiency and, more than anything, I'd want it in source control. First and foremost, the talk is available here.

The key to it all is realising that PowerBI Desktop, when running, starts a SQL Server Analysis Services process in the background. It doesn't just use the same engine as Tabular, it literally runs Tabular in the background without telling you. Open up a PowerBI Desktop file and, after you've seen the "initialising model…" window, you'll see this process in the background - one for each PBID session. So - if the model is using Tabular in the background, we must be able to actually connect to the model!

First - Find your Temporary SSAS Port

There are two straightforward ways we can achieve this:

1. By far the easiest is to open up DAX Studio, if you have it installed. When you open DAX Studio, it gives you a Connect window, which lists all of the PowerBI processes you have running in the background, as well as any Tabular services. When you connect to a PBI file here, you'll see the port listed. In this case, my port is 5524 - be aware that this will change every time you open PowerBI Desktop, so you can't hardcode anything looking for your "powerbi port".

2. Alternatively, you can find the "msmdsrv.port.txt" file related to your specific instance. Take a look in your user appdata folder; you should find a Microsoft/Power BI Desktop/ folder with some Analysis Services details: C:\Users\<YourUser>\AppData\Local\Microsoft\Power BI Desktop\AnalysisServicesWorkspaces\ You'll see an instance for each of your PBI Desktop instances; I've only got one at the moment. Inside this folder, in another folder called "Data", you'll find the file we're looking for. Opening this file, we see the port number. Pretty straightforward, and no DAX required. Obviously, if you have multiple instances, you'll need to figure out which of these relates to the instance you're after.

Connect via SSMS

Now that we know our port, we can simply open up Management Studio, connect to Analysis Services and enter "localhost:" followed by the port number from earlier. Once connected, you'll see a model connection - each PBIX file will have a GUID for this instance, but you can drill down and see the objects underneath, exactly as you would with a Tabular model. You can now write queries, browse the model and basically treat it as a Tabular instance (a quick example query is included at the end of this post). The database itself will use a generated GUID, and several internal tables will do the same - you can see above that a hidden date table has been created for every date key included in my model.

We'll discuss the applications of this in my next post - namely how this unlocks performance tuning, monitoring and source control.
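
To sanity-check the connection, a quick DMV query against the hidden instance will list the tables in the model. This is just a sketch; the TMSCHEMA DMVs apply to tabular models at compatibility level 1200 or above, which is what PowerBI Desktop runs in the background.

-- Run from an MDX/DMX query window connected to localhost:<port>
-- to list the tables PowerBI Desktop has created in its background model.
SELECT [ID], [Name]
FROM $SYSTEM.TMSCHEMA_TABLES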

Power BI Maps Handling Duplicate City Names

The Bing map engine behind the map visualisation in Power BI is very intuitive, allowing users to provide textual data such as city, country or postcode to map metrics, instead of just latitude and longitude as most other applications require. However, one thing which is not immediately obvious is how to get around the issue of duplicate city/town names. In this blog I will explain how to map your metrics when your data source contains duplicate cities/towns.

To start with, we have a simple data set with quarterly sales for 6 different cities based in 5 different states, which is being loaded from a CSV into Power BI. Straight away you can see that we only have 2 distinct city names. As soon as we try to map the sales data by city, we get an obvious problem: all of the Bristol sales are being assigned to Bristol, England, while the Georgetown sales are appearing in Guyana. Adding state to the Location field does nothing to help the problem, as Power BI only reads a single input in the Location field.

So the solution is to create a new column containing both City and State data. To do this you need to complete the following steps:

1. Click "Edit Queries".
2. Select the data source in question.
3. Select the two or more columns which contain the data we want to merge, e.g. City and State. If additional geographical data is available, such as Country, then this can be included in the merged column.
4. Navigate to the "Add Columns" menu and select "Merge Columns".
5. Choose the separator value and name the new column.

For simplicity I have just called this "Merged" and separated the values using only a space (if your data lived in SQL rather than a CSV, the same column could be built at source; see the sketch at the end of this post). Once the new column has been created, it can be dropped into the Location field of the map visualisation. As you can see from the screenshot below, I now have 6 data points, showing all three variations of Bristol and all three variations of Georgetown.

One final tip is to ensure you have set the Data Category value for the column in question. In this case I have set the Data Category to City to help Bing identify the type of data I believe I am providing. The only problem with this is that if you set the Data Category value incorrectly, no data will be displayed, as shown in this final screenshot where I have changed the Data Category to "Continent".
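
As an aside, if the data were coming from a SQL source rather than a CSV, the same merged column could be produced at source instead of in the query editor. This is just an alternative sketch with hypothetical table and column names, using the same space separator as above:

-- Hypothetical source query producing a single location column for Bing to resolve.
SELECT
    City,
    [State],
    City + ' ' + [State] AS Merged,   -- include Country here too if it is available
    SalesAmount
FROM dbo.QuarterlySales;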

Setting Up The Power BI Analysis Services Connector

The Power BI Analysis Services Connector is used to expose a Tabular model to Power BI, allowing end users to consume data from the model directly for building reports and ad-hoc analysis. The setup of the connector is very straightforward; however, you should bear the following in mind:

- Only Tabular models are supported by the connector – you will not be able to use this to enable reporting from a multidimensional database.
- The Analysis Services Connector performs best if it is hosted on the same server that hosts the Tabular model.
- The speed of the internet connection between the server running the Analysis Services Connector and the Power BI service is crucial to performance.
- You can't run the Analysis Services Connector on a server also running either the Data Management Gateway or the Power BI Personal Gateway.

Installation Steps

1. Download the connector from http://www.microsoft.com/en-us/download/details.aspx?id=45333
2. Run the Analysis Services Connector setup program on the machine hosting the Tabular model.
3. Once the installation has completed, you will be given the option to launch the connector.
4. Enter the login details required to connect to Power BI.
5. If the details are correct, you will see the below screen:
6. Enter the account details required to connect to the Tabular instance – clearly this needs to be an account with access to read from the Tabular model.
7. Give the connection a suitable name and a friendly error message to be displayed to users in the case that the connection fails.

This completes the wizard. The next step is to log onto the Power BI site – if all has gone well, you should see the model as per the below.

Power BI MDS Dashboard

Master Data Services tables such as the staging tables have been well blogged and documented, but there are a host of other system tables and views that can help you with an MDM project. By querying these tables and views, we can gather MDS metrics such as the number of members, the number of validation errors and the number of staging errors, all of which can then be exposed to a data steward via a dashboard. Given all of the recent updates around Power BI, I decided to use Power BI Desktop to build an MDM dashboard for this blog post. The goal in this case is to produce a dashboard that allows a data steward to get an overview of metrics on an MDS model-by-model basis.

Power BI Desktop

The first step when working with Power BI is to import some data. Once we start Power BI Desktop, the welcome screen gives you a shortcut to get data, or we can do this via the ribbon. In this case we need to take data from SQL Server, so clicking the SQL Server option requires us to enter a server name and optionally a database name. Once we do this and pick the MDS database, we get a list of tables to choose from. Clicking on Load will load the tables into Power BI.

Master Data Services Objects

The question is: what tables do we need? If you navigate the MDS database you will see that there is a mix of master data tables (e.g. post-fixed with _EN for entity), application tables and metadata views (prefixed with viw_SYSTEM). It's the system views that we need to start with, as we want to allow a breakdown by model, entity and version. To do this we need to load in the following views:

--Models
SELECT ID AS ModelId, Name AS ModelName
FROM MDM.viw_SYSTEM_SCHEMA_MODEL

--Entities
SELECT E.Id AS EntityId, E.Name AS EntityName, E.Model_ID, E.Model_Name, E.Model_MUID
FROM MDM.viw_SYSTEM_SCHEMA_ENTITY E

--Get a count of versions
--This is used as a total and by version status
SELECT V.ID AS VersionId, V.Name AS VersionName, V.Model_MUID, V.Model_Name, V.Status AS VersionStatus
FROM [mdm].[viw_SYSTEM_SCHEMA_VERSION] V

Next, it would be useful to report on the member count in each entity. This is a little harder as it's not stored in the metadata tables, but we can combine the MDS metadata views with the SQL Server sys.dm_db_partition_stats dynamic management view in order to return the row counts per entity:

--This gives the row count per entity for all models
SELECT E.Name AS EntityName, M.Name AS ModelName, M.MUID, M.ID AS ModelId,
       'mdm.' + E.EntityTable AS TableName, P.row_count
FROM MDM.viw_SYSTEM_SCHEMA_ENTITY E
INNER JOIN MDM.viw_SYSTEM_SCHEMA_MODEL M ON M.MUID = E.Model_MUID
INNER JOIN sys.dm_db_partition_stats P ON P.object_id = OBJECT_ID('mdm.' + E.EntityTable)
    AND P.index_id = 1

Finally, a few more metrics that would be useful are the number of staging errors and a breakdown of the members by validation status (Validation Succeeded, Validation Failed, Awaiting Validation).
To do this we can use the following tables and stored procedures:

--Get the current count of errors that have occurred during staging
SELECT Entity_ID AS EntityId, ErrorMemberCount, Version_ID
FROM mdm.tblStgBatch

--Pass in the appropriate version id
EXEC [mdm].[udpValidationStatusSummaryGet]

--This gets the validation issues by model/entity
SELECT ValidationIssue_ID, Version_ID, Model_ID, ModelName, Entity_ID AS EntityId, EntityName, Member_ID, BusinessRuleName
FROM [mdm].[viw_SYSTEM_USER_VALIDATION]

Power BI Relationships and Visualisations

Once we import the data, Power BI Desktop will detect relationships automatically for us, but we can alter them by clicking on Manage Relationships on the ribbon if we wish. The following shows the relationships between the various MDS objects mentioned above:

Once we're happy with the model and the relationships, we need to start dragging and dropping to build the Power BI report. As an example, we can create a simple card visualisation to show the count of various metrics: the count of entities, versions, validation issues and staging errors. Equally, we can create a simple column chart using the [mdm].[viw_SYSTEM_USER_VALIDATION] view, with the count of issues in the Value section of the chart and the Business Rule Name on the Axis. Putting it all together, we get the following report, shown for the sample Product model:

Conclusion

This is just a start; there are other metrics that you could put onto an MDM dashboard – as an example, you could include MDM ETL metrics if you've logged them. But overall, the combination of the rich MDS metadata and the intuitive Power BI interface means it's relatively easy to produce simple dashboards.

Power BI Visual Studio Online Content Packs – Analysing BI Team Performance

I spend a fair amount of time championing the use of data and analytics around my clients' companies, convincing people from all walks of life that making their data more visible and available around the company will be hugely beneficial. I was recently challenged on this – if I'm such a firm believer in sharing data and improvement through analysis, why am I not sharing my own data? It's a very good point, and not one that I had an answer for readily available.

I'm currently engaged on a project which uses Visual Studio Online for both ALM and source control. Fortunately, Microsoft have recently released the Power BI VSO Content Pack, making the data held within your account available for dashboarding. The above link describes the steps to connect, but I found the dashboards required a little setting up before they were completely useful. You are first presented with a mixed bag of charts and metrics, many of which will contain no data. This is because the data model has different entities depending on the project template (Agile, Scrum or CMMI) chosen within TFS, as well as the source control binding (TFS or Git). I removed many of the charts from the default dashboard then went exploring in the exposed report model – I was very happy with what I found; the VSO object model exposed pretty much every metric I could think of to report on the activity of a BI development team, including report templates for each template/source control version you could be using. I gave myself 15 minutes to see if I could pull out a reasonable dashboard and was pleasantly surprised by what could be done in so little time.

So – how do you analyse an analysis team?

How is the Project Going?

Firstly, we're an agile team. We run iterations loosely based around Scrum principles, we manage our client projects through story backlogs and we report daily on blockers, impediments etc. This information is key to us operationally, but it also tells a useful story. How many new features were added in the sprint? How many individual user stories, each representing a distinct piece of business value, were delivered? How much effort is remaining in the backlog (and therefore how many additional sprints would be required to deliver all known functionality)? How many bugs have been raised – and how effective are we at dealing with them?

What's the current Sprint Status?

The day-to-day metrics also tell a valuable story – was the sprint smooth and predictable, or was it a rush to deliver towards the end? How much work is still remaining in the current sprint? Are there any blocked tasks or impediments that may be seen as a risk to delivery?

What actual work has been done?

Stories and tasks only tell one side of the story – a task may represent a change to a single piece of code, or a large update that touches much of the system. Simply counting tasks therefore limits our understanding of how productive we were during a sprint. Fortunately, we can also analyse the source control history, looking at the changesets committed and their contents. This provides some insight into the complexity of those completed tasks – it's not a perfect measure, but it gets us a little closer. We can now ask questions such as: How many individual changesets were committed? Who commits most regularly? What kind of work was done – what is the most common file amended? Is there someone who hoards changes and causes potential problems by not regularly committing their work?
Is our development behaviour changing over time as we review our practices and learn from them?

Finally, it's also worth noting that the content pack has been fully set up with synonyms to allow the Q&A natural language query bar to be activated. So if there's a metric not included in the dashboards, users can simply type their question into the query bar. For example, I wanted to better understand the type of changes we're making – are we creating new entities or modifying existing code? For this, I tried the following, with the relevant chart appearing before I'd even finished typing:

There's a whole lot more content in the packs under the individual report tabs, but this gave me a good point to start that conversation. I can now provide weekly dashboard updates to my project sponsors, showing just how much progress we're making. This is a huge boost to my ability to champion data, and I'm expecting it to actually improve some of our working habits. Now, if anyone interrupts me mid-flow, I can simply grab my phone, load up the Power BI app and pull out some insights into the team's current performance, wherever I am.

Adatis Hackathon Jan 2015 – Power BI Designer Preview

In the January Adatis hackathon we were split into 3 teams and got the chance to play with some bleeding-edge technology. I was captain of the Gleam team, which meant we got our hands on the new Power BI Designer preview. To check out the other teams, see here: STREAM (Stream Analytics), DREAM (Azure ML).

We tackled two main dashboards in our hack build, first creating Excel mock-ups and then reproducing them inside Power BI Designer. This was pretty challenging, as some of the chart types simply weren't there and there was some awkward mashing of data to get it into different charts. The end results weren't quite as pretty as we were hoping, but it's still very quick to put something together. The move towards a "dashboard", as opposed to large, single charts, means we can create richer, more informative systems using the tool.

However, our main focus was the designer itself - this is the start of the Power BI suite leaving the clutches of Excel. Don't get me wrong, when Power BI first started coming about it was right at home as a series of add-ons for Excel, but as it has grown, and SSRS has become more dated, the corporate market has started to look towards it for reporting. Power BI as a completely Excel-based system sits outside of the development ecosystem; it's not source controlled or versioned. For this alone I feel it is justified for it to leave the nest and head out into the real world as a standalone system.

That said… the preview designer is currently lacking functionality; it actually has less functionality than it did in Excel. There were certainly fewer chart types available than in its Excel counterpart and we did come across a few (very annoying) bugs. However, this is only a preview, and since the hack day there have been several updates that add various bits of functionality as well as bug fixes. So even if we are not 100% there yet, it does seem that this new product is heading in the right direction.

There's still a question of where it's heading and how it will develop – but the Power BI Dashboards currently previewing in the US, and new features such as the Power BI API, are really moving the technology towards a mature, usable state. We will be keeping a close eye on this as it develops, to see what new features get added in future updates of the preview and eventually in the real thing!

SQL PASS Summit 2014 – Kick Off

Day 1 has kicked off in Seattle, a remarkable city. Having arrived a week early, I've had plenty of time to check out the sights and the food and have enjoyed it immensely - a brilliant venue for PASS!

There were a few announcements at this morning's keynote, mostly regarding Azure. Azure SQL Databases are gaining larger index handling, parallel queries, extended events and in-memory columnstore for data marts. Joseph Sirosh gave a talk about machine learning and information management, showing a cool example of Azure Stream Analytics using Microsoft Kinect sensor information of customer interactions in a shop being uploaded straight into Power Map! I am looking forward to hearing more on machine learning. There are also a handful of great improvements for Power BI. I am most looking forward to the new Live Ops Dashboards and drill-through actions! Combo Reports also look promising…

Moving onto the first session, I chose to attend 'What's new in Microsoft Power Query for Excel'. As it turns out, there's not a massive amount of new stuff – some new data sources and a tick box when you do a merge to remove name prefixes. However, one of these new sources is the long-awaited Analysis Services data source. The new ODBC data source is a great addition also. There was a mention of the possibility of a decoupled M-query SSIS component! We probably won't hear anything until later in 2015, unfortunately. I would say this was not a level 300 session, more like 100/200.

The second session was 'SQL Server 2014 Always On (High Availability and Disaster Recovery)': a nice overview of what was brought in in 2012 and the newer 2014 features – these including an increased maximum number of secondary replicas, increased availability of readable secondaries and the Add Azure Replica Wizard. Despite not being a DBA and it being a level 300 session, I found it easy to follow and absorb. I feel many of the DBAs in the room may not have taken away much they did not already know, however.

Niko Neugebauer gave a fantastic, in-depth session on 'ETL Patterns with Clustered Columnstore Indexes'. It was a fast-moving talk, despite the time spent waiting for some executions. It demanded your full attention! Definitely worthy of its 400 level. It left me a little tired for Marco Russo's 'DAX Patterns' session, which showed some examples of workarounds for common tabular weaknesses, like distinct counts for Type 2 slowly changing dimensions and cumulative quantities.

Overall it was a strong day. I am looking forward to tomorrow. More to follow…

KPIs in Power View and DAX Query for KPIs

I was recently approached by a client where we have assisted in implementing a Microsoft Analysis Services Tabular solution and a supporting management pack of SSRS reports. They were asking about utilising some of the KPIs defined in their Tabular cube in the SSRS reports using DAX queries. I was at a bit of a loss, and most web help seemed to suggest it couldn't be done: Blog comment from 2012

However, with a bit of time to investigate, it appears Power View does now support Tabular KPIs, and as such a DAX query should be able to pull them out. My KPI base measure is named "Highest Ranking" and I am able to view its value, status or goal in Power View. A little work with every SQL developer's best friend, Profiler, and I had what I was looking for. It appears that when you create a KPI, SSAS defines some measures in the background which do some of the work of the MDX KPI functions. Therefore, the following DAX query against my cycling demo cube returns the expected results and could be used in SSRS reporting.

EVALUATE
ADDCOLUMNS(
    VALUES('DimRider'[Name]),
    "Highest_Ranking", 'FactRanking'[Highest Ranking],
    "Highest_Ranking_Goal", 'FactRanking'[_Highest Ranking Goal],
    "Highest_Ranking_Status", 'FactRanking'[_Highest Ranking Status]
)
ORDER BY 'DimRider'[Name]

I also tested for the existence of other KPI functions such as Trend and Weight, but these do not appear to be present. It is also interesting that the use of a KPI over a measure does not change the measure name but just groups it in the field list, so there is no need for a value function. For more info on KPIs in Power View there is additional documentation here. I am currently unsure of the versions required for this functionality, so if it is missing from your environment I would love to hear from you.

Power BI Preview – Mobile Integration

I have had the invite through to view the Power BI preview today (get yours here). The first thing I wanted to do was get some reports up and start displaying them through my Surface and its Power BI app.

The app comes pre-installed with samples, but to link it up with your own site you need to hit Browse from the app menu. From there you can add a location. What caught me out was: where do I navigate to in order to get my Power BI preview reports? Well, it is hidden away in the documentation here: don't try to navigate directly to your Power BI site as I did; instead, give the app the SharePoint site directly and it will pick up the Power BI app from there. From there you can navigate to the reports you have uploaded to your preview site and favourite the ones you want to appear on the home screen. The couple I have on there can be seen here:

You can control which sheets get displayed by changing the browser view options of the workbook, as described here.

More to come on Power BI shortly…