Adatis BI Blogs

Power BI Composite Models and Aggregations

So something which feels like it's gone under the radar a bit is the addition of composite models and aggregations in Power BI. These have been around a couple of months now in Preview but I've not seen much buzz around the features. After recently attending PASS Summit and seeing Christian Wade's 1 trillion row demo, I thought it was worth blogging about – especially as I expect parts of this functionality to be expanded to Azure AS in the near future. If you think about it, the majority of BI queries are done at some form of aggregation level, but users will still want the detail, and so these features essentially unlock the ability to report against giant datasets at both an aggregated and granular level at the same time – something that was not physically possible beforehand. Power BI is now able to work with multiple petabytes of data over trillions of rows with pretty much instant response times – so this is a bit of a game changer.

Composite Models

Previously with Power BI, you were restricted to either DirectQuery or Import for a single data source. With composite models that has all changed. A report can now include data from more than one connection in any combination, so you are able to use both DirectQuery AND Import in the same model and combine the data. This opens up a world of possibilities that were not available before. With this functionality, we also gain the ability to create many-to-many relationships, but I won't be going into detail on that in this blog. Alongside this comes some functionality called Storage Mode, which unlocks table-level storage features. This is the next part of the jigsaw.

Storage Mode

This allows you to specify whether tables are either imported or queried on the fly. There is now a third option, "Dual", which acts as either a cached or non-cached table depending on the context of the query that's submitted at runtime.
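One way to picture Dual mode (a conceptual sketch only – this is not Power BI's actual engine logic): a Dual table answers from the in-memory cache when the rest of the query only touches cached tables, and folds back to the source when a DirectQuery table participates in the query.

```python
def source_for_query(table_mode: str, other_table_modes: set) -> str:
    """Conceptual sketch of how a table's storage mode interacts with a query.
    A 'dual' table behaves as cached unless the query also involves a
    DirectQuery table, in which case it folds to the source."""
    if table_mode == "import":
        return "cache"
    if table_mode == "directquery":
        return "source"
    # dual: adapt to the context of the rest of the query
    return "source" if "directquery" in other_table_modes else "cache"

print(source_for_query("dual", {"import"}))       # cache
print(source_for_query("dual", {"directquery"}))  # source
```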
Setting the correct storage mode has many advantages, such as:

- Better query performance (no need to push real-time queries to the dataset for relatively static tables such as those used for filters).
- Ability to use larger datasets (interactive analysis is better for datasets you don't want to cache into memory, such as tables with significant data volume).
- Data refresh optimisation (only cache data that's necessary, meaning quicker refresh times).
- Reduced latency in real-time data (no need to re-read those static tables each time).

To access the storage mode, you either select the table in the report pane and click Properties, or navigate to the table in the new Modelling View (which needs enabling in Options as it's in preview). Changing a table's storage mode will then prompt us to change the related tables to Dual. This propagation logic is designed to help with models that contain many tables!

Modelling View

Before I get to aggregations, I also need to introduce the new modelling view. This will be changing to fall in line with what you may be familiar with in SSAS / Azure AS. It feels like this is one of the first steps to integrate SSAS further into the Power BI spectrum and facilitate enterprise datasets. The updated modelling view gives you the ability to create additional diagrams, allowing you to break out particularly complex models by subject area rather than trying to join it all up within one model. For instance, if you are working with 5 tables in your model, this isn't a particularly big deal – now multiply that by 10 and all of a sudden it becomes a pain to manage, and this new feature will help alleviate that. The feature also allows you to multi-select objects in one go and update properties through a new side pane. An example of this new feature can be seen below.

Aggregations

Finally, the most exciting feature of the lot – aggregations.
Without the new modelling view, without composite models, without the new storage modes – this would not be possible. Aggregations allow us to create a newly defined table within the model, with all fields of the original table it is created from aggregated or grouped in some way. The available summarisations include count, group by, max, min, sum, and count rows. The aggregation table can be set either to Import mode (with or without incremental refresh), or to DirectQuery and optimised by using columnstore indexes. This unlocks faster query performance over huge datasets via the cache at the aggregated level, using a fraction of the resource compared to the detailed level. You then have the flexibility to set the aggregate table to Import while leaving the granular table in DirectQuery, which will speed up performance when navigating the report. The aggregate table can also be hidden so that users will not even be aware of the implementation. You can also have more than one aggregation table – potentially one for a particular set of reports and then a lower-grain aggregate table for analysts – and this can be managed through the precedence option. The tool will then query the aggregation table with the highest precedence level first to see if it can resolve the runtime query before moving down the levels. Aggregations don't just work for simple measures such as SUM or COUNT. They also work for more complex measures – all of the components of a measure are folded down to the sum, min, max, or count level, and then those sub-queries work out whether they can hit the cache or not. It's also worth mentioning that you can check whether a query has hit the cache via the DAX editor.

Conclusion

For me, all the features mentioned above will be a bit of a game changer in the right scenario. I've run into scaling problems before, and re-working the logic at the warehouse level for both granular and aggregated datasets certainly added some overhead.
While the functionality obviously doesn't work with Live Connection into SSAS cubes, it opens up options for using Power BI to do some of the activities which historically would have only been done in SSAS. As Christian Wade also alluded to at PASS, Power BI will soon become a superset of SSAS, and so we may well be doing less and less with SSAS and more with Power BI in this area as time goes by.
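As a footnote to the aggregations section above: the precedence behaviour can be pictured with a small sketch (illustrative only – this is not Power BI's actual implementation, and the table and column names are invented). Candidate aggregation tables are tried from highest precedence down, and the detail table is only hit when no aggregation covers the query.

```python
def resolve_table(query_columns, agg_tables, detail_table):
    """Pick the highest-precedence aggregation table whose columns cover
    the query, falling back to the detail table. Conceptual sketch only."""
    for table in sorted(agg_tables, key=lambda t: t["precedence"], reverse=True):
        if set(query_columns) <= set(table["columns"]):
            return table["name"]
    return detail_table

# Invented example model: a daily and a monthly aggregate over a Sales detail table
agg_tables = [
    {"name": "SalesAggByDay",   "columns": {"Date", "SalesAmount"},  "precedence": 10},
    {"name": "SalesAggByMonth", "columns": {"Month", "SalesAmount"}, "precedence": 20},
]

print(resolve_table({"Month", "SalesAmount"}, agg_tables, "Sales"))        # SalesAggByMonth
print(resolve_table({"CustomerKey", "SalesAmount"}, agg_tables, "Sales"))  # Sales
```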

Tabular Automation and NuGet

In a recent blog post, I wrote about processing an Azure Analysis Services tabular model using Azure Functions. In it, there's a lengthy process of downloading some DLLs and uploading them to the Azure Function. Handily, the Analysis Services team at Microsoft have released the Analysis Services NuGet package, which means that the necessary DLLs can be automatically installed to an Azure Function without much hassle. This blog is going to go through the steps of adding the NuGet package to your Azure Function.

1) Add a new file to your function called project.json

2) Input the following code in the newly created file:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.AnalysisServices.retail.amd64": "15.0.2"
      }
    }
  }
}

3) Then save the Azure Function to proceed with the NuGet restore and compile your function. You should see the following logs in your log window.

That is the entire process. Much easier than documented previously!

SQL Server Data Tools Deployment Method

The Deploy command in SQL Server Data Tools (SSDT) provides a simple and intuitive method to deploy a tabular model project from the SSDT authoring environment. However, this method should not be used to deploy to production servers, as it can overwrite certain properties in an existing model.

Deploying a tabular model using SSDT is a simple process; however, certain steps must be taken to ensure your model is deployed to the correct Analysis Services instance and with the correct configuration options. Deployment in SSDT requires the properties page of the tabular model project to be configured properly. The properties and options are as follows:

Processing Option (default: Default)
This property specifies the type of processing required when changes to objects are deployed. It has the following options:
- Default – Metadata will be deployed and unprocessed objects will be processed, including any necessary recalculation of relationships, hierarchies and calculated columns.
- Do Not Process – Only the metadata will be deployed. After deploying, it may be necessary to run a process operation on the deployed database to update and recalculate data.
- Full – Both the metadata is deployed and a Process Full operation is performed. This ensures that the deployed database has the most recent updates to both metadata and data.

Transactional Deployment (default: False)
This property specifies whether or not the deployment is transactional. By default, the deployment of all or changed objects is not transactional with the processing of those deployed objects: deployment can succeed and persist even though processing fails. You can change this to incorporate deployment and processing in a single transaction.

Server (default: localhost)
This property, set when the project is created, specifies the Analysis Services instance by name to which the model will be deployed. By default, the model will be deployed to the default instance of Analysis Services on the local computer. However, you can change this setting to specify a named instance on the local computer, or any instance on any remote computer on which you have permission to create Analysis Services objects.

Edition (default: the edition of the instance where the workspace server is located)
This property specifies the edition of the Analysis Services server to which the model will be deployed. The server edition defines the various features that can be incorporated into the project. By default, the edition will be that of the local Analysis Services server. If you specify a different Analysis Services server, for example a production Analysis Services server, be sure to specify the edition of that server.

Database (default: the project name)
This property specifies the name of the Analysis Services database in which model objects will be instantiated upon deployment. This name will also be specified in a reporting client data connection or a .bism data connection file. You can change this name at any time when you are authoring the model.

Model Name (default: Model)
This property specifies the model name as shown in client tools, such as Excel and Power BI, and in AMO.

The Processing Option is where you set what type of processing the tabular model does once it has been deployed. For a development environment, it is best practice to set this to Do Not Process due to the frequency of deployments. For deployments to more stable environments, such as UAT or Pre-Production, the Processing Option can be changed away from Do Not Process. The deployment server, version of SQL Server and database name can all be configured from this properties page. Then, by right-clicking on the project, you can deploy your database to your server.

Process an Azure Analysis Services Tabular Model from an Azure Function

A couple of weeks ago I wrote a blog post detailing how to process a tabular model from a stored procedure. The challenge there was to have a way of processing a tabular model on demand from a web application. The challenge still exists, but how do you achieve the same in Azure using Platform as a Service (PaaS) objects, which do not have the benefit of a full server and items such as msdb and SQL Agent to aid in the process?

In this post, I'm going to show you how to process a tabular model using only Azure PaaS offerings. Not only am I going to show you how to process a tabular model on demand, but also how to process one on a schedule. This post has taken inspiration and part of its code base from a Microsoft blog: Automating Azure Analysis Services processing with Azure Functions.

Azure Functions

Before we begin properly, it's worth spending some time introducing Azure Functions. According to Microsoft, Azure Functions are:

…a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Java, or PHP. Pay only for the time your code runs and trust Azure to scale as needed. Azure Functions lets you develop serverless applications on Microsoft Azure.

They are super useful for extending the capabilities of any solution, and not just limited to what we're going to cover here.

On-Demand Refresh

Azure Functions can provide the trigger for the on-demand refresh of a tabular model from a web application or webhook; this is to make sure that selected elements of data in a tabular model, for example hot partitions, are always up to date.
The following describes the process that Azure Functions will be involved in for this scenario. The steps needed to create an Azure Function for on-demand refresh are as follows:

1) Create an Azure Function App

Navigate to the Azure Portal and create a Function App (the name changes quite a bit, so don't be concerned if it's not exactly displayed as it is below).

2) Create a new Function

After you've created the Function App, we need to add a new Webhook + API function, which we'll use as the basis for our on-demand refresh. Click on the + button next to Functions, select Webhook + API, choose C# as your language and click Create this function.

3) Configure the Function

Download the latest client libraries for Analysis Services. This needs to be done on your local machine so you can then copy the files to your Azure Function App. After you've downloaded the client libraries, the DLLs can be found in C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies. The two files you need are:

C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.AnalysisServices.Core.DLL
C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.AnalysisServices.Tabular.DLL

The documentation references the 130 assemblies, which is not correct and will not work. You need the assemblies in 140.

In order to add these assemblies to your function, you'll need to add a folder called "bin". To do this, select your Function App, click Platform features, and under Development Tools, click Advanced tools (Kudu). In Kudu, click Debug console and select Cmd. Navigate to the site\wwwroot\OnDemandHttpTrigger folder and add the "bin" folder here by clicking the + button. Once you've added the "bin" folder, go back over to the Azure portal and select your function OnDemandHttpTrigger. On the right, under View files, navigate to the bin folder.
Then click the Upload button to add the two previously mentioned DLLs to the bin folder. You should now see the two DLLs in your bin folder.

4) Add the Azure Analysis Services Connection String to the Function App

This step adds the connection string for the Azure Analysis Services (AAS) instance to the entire Function App, not just individual functions. Click the name of your Function App, then select Platform features. Select Application settings under General Settings. Now we need to add our connection string under the Connection strings section. You'll need your AAS server name and a user ID and password that has access to the AAS database. You can find your AAS server name by navigating to your AAS database in the Azure portal and copying the value shown after clicking Show server connection strings. Your connection string should look like this:

Provider=MSOLAP;Data Source=<your aas server>;Initial Catalog=<aas database name>;User ID=<your username>;Password=<your password>

Back in the screen for the Function App, fill in the Name textbox with a name for your connection string and paste your connection string in the Value text box. Click Save at the top to save these settings for your Functions.

5) Time for Code

Our Function App has been configured; now we need to add code to the function. The function comes with a working sample, which you can use to test the functionality, but we don't need everything that is on offer. We're going to programmatically process the tabular model. In doing so, we'll leverage Analysis Services Management Objects (AMO).
If you're new to AMO, start here. Paste in the following code (all code referenced can also be downloaded from my GitHub Repo):

#r "Microsoft.AnalysisServices.Tabular.DLL"
#r "Microsoft.AnalysisServices.Core.DLL"
#r "System.Configuration"

using System;
using System.Configuration;
using Microsoft.AnalysisServices.Tabular;
using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string status = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "status", true) == 0)
        .Value;

    if (status == null)
    {
        // Get request body
        dynamic data = await req.Content.ReadAsAsync<object>();
        status = data?.status;
    }

    if (status == "execute")
    {
        log.Info($"C# trigger function started at: {DateTime.Now}");
        try
        {
            Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();
            var connStr = ConfigurationManager.ConnectionStrings["AASTabular"].ConnectionString;
            asSrv.Connect(connStr);
            Database db = asSrv.Databases["azureadventureworks"];
            Model m = db.Model;
            m.RequestRefresh(RefreshType.Full);                   // Mark the model for refresh
            //m.Tables["Date"].RequestRefresh(RefreshType.Full);  // Mark only one table for refresh
            db.Model.SaveChanges();                               // Commit, which will execute the refresh
            asSrv.Disconnect();
        }
        catch (Exception e)
        {
            log.Info($"C# trigger function exception: {e.ToString()}");
        }
        log.Info($"C# trigger function finished at: {DateTime.Now}");
    }

    return status == "execute"
        ? req.CreateResponse(HttpStatusCode.OK, "Successfully Processed Tabular Model")
        : req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a status on the query string or in the request body");
}

Click the Save button at the top.

6) Test, Test, Test

Click the Run button at the top to test the function. The function can also be tested in a web browser, and
be called by a Web App using the POST HTTP method.

Now we have a fully functioning method of refreshing an Azure Analysis Services tabular model on demand.

Scheduled Refresh

Azure Functions can also provide the trigger for the scheduled refresh of a tabular model; this is to make sure that the entire tabular model has the latest data and is always up to date. The following describes the process that Azure Functions will be involved in for this scenario. The steps needed to create an Azure Function for scheduled refresh are as follows:

1) Create a Function

We've created our Function App, and now we need to add a new Timer Trigger function, which we'll use as the basis for our scheduled refresh. Click on the + button next to Functions, select Timer, choose C# as your language and click Create this function.

2) Configure the Timer

What use is a timer without a schedule? To give the timer a schedule, click Integrate, set the schedule and click Save. The Schedule text box expects a CRON expression to define the days and times that the function should execute. Click the little Documentation button on the screen above to read about CRON expressions. The schedule I've set is to run once every day at 09:30 (as a CRON expression: 0 30 9 * * *).

3) Configure the Function

See step 3 of the on-demand function for detailed steps. You'll need to create the bin folder and upload the DLLs to it.

4) Time for Code

We've configured our function, so now it's time to add the code. The code base is much simpler than the on-demand code base, mainly because it's doing fewer tasks, but the AMO section is exactly the same.
Paste in the following code:

#r "Microsoft.AnalysisServices.Tabular.DLL"
#r "Microsoft.AnalysisServices.Core.DLL"
#r "System.Configuration"

using System;
using System.Configuration;
using Microsoft.AnalysisServices.Tabular;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function started at: {DateTime.Now}");
    try
    {
        Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();
        var connStr = ConfigurationManager.ConnectionStrings["AASTabular"].ConnectionString;
        asSrv.Connect(connStr);
        Database db = asSrv.Databases["azureadventureworks"];
        Model m = db.Model;
        m.RequestRefresh(RefreshType.Full);                   // Mark the model for refresh
        //m.Tables["Date"].RequestRefresh(RefreshType.Full);  // Mark only one table for refresh
        db.Model.SaveChanges();                               // Commit, which will execute the refresh
        asSrv.Disconnect();
    }
    catch (Exception e)
    {
        log.Info($"C# Timer trigger function exception: {e.ToString()}");
    }
    log.Info($"C# Timer trigger function finished at: {DateTime.Now}");
}

Click the Save button at the top.

5) Test, Test, Test

Click the Run button at the top to test the function.

Now we have a fully functioning method of refreshing an Azure Analysis Services tabular model on a schedule.

Conclusion

I have shown you how simple it is to invoke two methods of refreshing a tabular model using Azure Functions: an on-demand refresh and a refresh by schedule. I hope that you take inspiration and use these methods in your own use of both Azure Analysis Services and Azure Functions.
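To round off the on-demand scenario: the HTTP-triggered function above can be invoked from any client that can issue a POST. A rough sketch of building such a call follows (the function URL and key here are placeholders, not real values, and the request is only constructed, never sent):

```python
import json
import urllib.request

def build_refresh_request(function_url: str, function_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request asking the on-demand
    function to process the tabular model."""
    body = json.dumps({"status": "execute"}).encode("utf-8")
    return urllib.request.Request(
        url=f"{function_url}?code={function_key}",  # the function key goes in the 'code' query parameter
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder URL and key -- illustration only
req = build_refresh_request(
    "https://myfunctionapp.azurewebsites.net/api/OnDemandHttpTrigger",
    "my-function-key",
)
print(req.get_method())  # POST
```

Sending it with `urllib.request.urlopen(req)` (or any HTTP client) would then trigger the refresh, matching the "status": "execute" check in the function body.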

Generating Usage Statistics from a SSAS Tabular Cube

Once you have users accessing your cube, it's almost inevitable that at some point someone will ask you to generate usage statistics from it – and there are a number of methods to achieve this. In this quick blog post, I'll detail them and my experiences with each, and then use this data to create a Power BI report.

Native Tabular Properties

The first method is natively through the tabular cube properties. This also has the added bonus (read: impact) that it will optimise future aggregations based on usage – in fact, that's its main purpose. This can be done by setting CreateQueryLogTable to true, setting up the QueryLogConnectionString (to point to the database where the usage table will be hosted), setting the QueryLogSampling rate (10 means every 10th query will be logged), and finally the name of the QueryLog table.

The advantages of this method are that it's very easy to set up with limited knowledge required, and it could potentially improve performance if you have an environment where users submit repetitive queries. Unfortunately, there are also a number of disadvantages which led me to find other methods. Firstly, it creates a degree of overhead on the cube if it's sampling too often; we actually had visible performance-related complaints once we turned it on – either through the sampling or the change to the "optimised" aggregations. Depending on the sampling rate, you could also find that users who rarely use the cube are not picked up as part of the stats. As well as this, any changes to the cube structure will cause the logging table to be reset. The table is also limited in terms of what it actually logs (as you can see below) – useful if you just want the user and timestamp info, but not much else, and with no real ability to configure it.

AS Trace

To that extent, I looked for other tools to do the same task but better, and I found AS Trace.
Originally built for SQL Server 2012, it works fine on 2014 – and provides the ability to run a trace against the cube activities (and log to a table) exactly like SQL Profiler, but without the overhead of the GUI, which adds unnecessary memory/processor load. It also runs as a Windows service, allowing it to restart automatically when the server reboots; when this happens, the tool logs the existing data to a History table and truncates the logging table. Exactly what I was after.

The tool collects information based on a preconfigured Analysis Services Profiler template, which can be optimised depending on which events you are interested in. I initially ran it with most events selected, and with a limited user set it was generating in the region of 25,000 rows a day. This was clearly not maintainable for a long period of time. I then used the following blog post to understand what each event of the profiler was giving me, and created a lightweight trace definition file with just what I wanted. I limited it to Query Begin, Query End (for DAX/MDX statements) and Audit Logon/Logout (for session data).

The setup is very straightforward: just run the install.bat as an account with escalated privileges, and check it installs the service correctly. Next, add your SSAS service account to the Log On properties of the service; make sure the account has the "Log on as a service" right, membership to the database you are writing to in the form of DDL and DML access (i.e. able to create tables and write to tables), and lastly admin rights to the instance of SSAS you intend to use. Next, configure the ASTrace.exe.config file with the parameters you want the tool to use. This includes the location of the cube (it can handle multiple cubes), the location of the trace definition file, the location of the database instance and table you want to log to, and lastly whether you want to preserve history on restart.
The only thing I couldn't do here is set the schema of the logging table, which defaults to dbo. All that's left is to start the service and check the log file to see if it has raised any errors on start-up. If not, the table should be created correctly and awaiting input. I also saw another method while researching, using Extended Events (XEvents), but did not implement this once AS Trace provided me with the information I needed.

View / Power BI Report

I initially used the data to run a limited set of queries to extract total users and total queries for a given time period. This was useful to a degree, but from the data collected I realised I could be doing so much more. This led me to do some analysis across the type of metrics being logged, and allowed me to create a view on top of the tables of what I thought might be useful for a report. I removed all the redundant columns it was tracking, and created some friendly names for the EventSubclass and other columns. I used the PATINDEX function to check the query statement for the existence of some important values – while not an exact science, it gives me a good picture of the split between certain user groups and KPIs being run. I've included the view definition below. I ended up limiting the data to EventClass 10 as this seemed to capture all the necessary data. The only downside I have seen so far is that users querying through the Power BI web service are anonymised under the service account name. I'm currently looking into options to resolve this, which I've seen as configuration options in Power BI – to pass through the username as long as it can be matched at the other end.
SELECT
    RowNumber AS ID,
    SPID AS SessionID,
    CurrentTime AS DateQueried,
    NTUserName AS Username,
    CASE EventSubClass
        WHEN 0 THEN 'MDX Query (Excel)'
        WHEN 3 THEN 'DAX Query (Power BI)'
        WHEN 1 THEN 'METADATA Query'
    END AS QueryType,
    CASE Success WHEN 1 THEN 'Successful Query' ELSE 'Query Error' END AS SuccessfulQuery,
    CONVERT(DECIMAL(10,2), CONVERT(DECIMAL(18,3), CPUTime) / 1000) AS CPUTimeSec,
    CONVERT(DECIMAL(10,2), CONVERT(DECIMAL(18,3), Duration) / 1000) AS DurationSec,
    TextData AS Query,
    CASE PATINDEX('%Mexico%', TextData) WHEN 0 THEN 0 ELSE 1 END AS MexicoMarket,
    CASE PATINDEX('%Colombia%', TextData) WHEN 0 THEN 0 ELSE 1 END AS ColombiaMarket,
    CASE PATINDEX('%CS4%', TextData) WHEN 0 THEN 0 ELSE 1 END AS CS4,
    ServerName
FROM [dbo].[ASTraceTable]
WHERE EventClass = 10

Once I had the view, creating the report was relatively straightforward, and it can be seen below. I included metrics for the number of queries by user (blurred out), which also doubled as a filter; the % split of queries for things such as Excel/Power BI; a measure of queries by timeframe; a logarithmically scaled display of queries by query duration; and lastly a split of queries by KPI. I intend to tweak these once I receive more data from the trace, but was relatively happy with the information they were providing. Please let me know if you have any comments.
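As an aside, the per-user and per-type splits shown on the report boil down to simple counts over the view's output. A toy sketch of the same roll-ups outside Power BI (column names taken from the view above, the row data entirely invented):

```python
from collections import Counter

# Invented sample rows shaped like the view's output
rows = [
    {"Username": "user1", "QueryType": "DAX Query (Power BI)", "DurationSec": 1.52},
    {"Username": "user1", "QueryType": "MDX Query (Excel)",    "DurationSec": 0.41},
    {"Username": "user2", "QueryType": "DAX Query (Power BI)", "DurationSec": 3.10},
]

queries_per_user = Counter(r["Username"] for r in rows)   # queries by user
split_by_type = Counter(r["QueryType"] for r in rows)     # Excel vs Power BI split

print(queries_per_user["user1"])              # 2
print(split_by_type["DAX Query (Power BI)"])  # 2
```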

Extracting Users from AD using SSIS

I've recently been working on a project which required KPI-level security alongside the traditional row-level security secured at a geography level. This would limit what financial data a user could see within a cube, without having to create multiple cubes or use perspectives (which would not actually secure the data). To achieve this, I needed to populate a set of 'KPI User/Role' tables stored in Master Data Services (MDS) with a list of users who were stored in a particular AD group. I would need these tables updated on a regular basis to grant/revoke access. We could then use these names along with the USERNAME() function in DAX to filter.

The Solution

One method to solve my problem is SSIS. The package can be set up to run as part of a SQL Agent job, either on a schedule or on demand. My list of users was stored in an AD group called LH_FIN. To start with, you will need to truncate and clear the MDS staging tables that you are about to populate. You can then use the data flow to process the majority of the logic, by creating a script component task. The purpose of this is to loop through Active Directory and pick up the user details that belong to the specified AD group – or set of AD groups if dealing with multiple roles. A number of variables are defined which the task uses to complete the lookup:

- strLDAP – the LDAP directory on which to perform the lookup
- strDomain – the domain on which the AD group(s) belong
- strADPrefix – the AD group prefix from which to return user information
- strADParent – the parent group which contains the AD groups being looked up (may not be needed if only looking up a single AD group)

To extract users from multiple groups, make sure the prefix stored in the variable strADPrefix covers both groups. Once the rows are extracted, it is then a case of using SSIS to split the data accordingly on the AD group name.
The following code can be used in the script. The first section sets up the objects required to interrogate the directory, and the fields we expect to return from the accounts – the most important of which is memberOf, which is used to check against our AD prefix. It also filters out items such as service accounts and disabled accounts to speed up the interrogation process. (Note: the filter clauses must be combined into a single LDAP filter – assigning dirSearch.Filter several times in a row would simply overwrite the previous value each time.)

Public Overrides Sub CreateNewOutputRows()

    Dim domain As String = Variables.strDomain
    Dim searchRoot As New DirectoryEntry(Variables.strLDAP, Nothing, Nothing, AuthenticationTypes.Secure)
    Dim dirSearch As New DirectorySearcher(searchRoot)
    dirSearch.SearchScope = SearchScope.Subtree

    'LogonName, GroupsUserBelongsTo, Department, JobTitle, MailAddress, DisplayName
    dirSearch.PropertiesToLoad.Add("samaccountname")
    dirSearch.PropertiesToLoad.Add("memberof")
    dirSearch.PropertiesToLoad.Add("department")
    dirSearch.PropertiesToLoad.Add("title")
    dirSearch.PropertiesToLoad.Add("mail")
    dirSearch.PropertiesToLoad.Add("displayname")

    'Filter to user objects, excluding disabled accounts and
    'password-never-expires accounts (i.e. service accounts)
    dirSearch.Filter = "(&(objectCategory=person)(objectClass=user)" & _
                       "(!userAccountControl:1.2.840.113556.1.4.803:=2)" & _
                       "(!userAccountControl:1.2.840.113556.1.4.803:=65536))"

    'Sets chunk size for retrieving items
    dirSearch.PageSize = 1000

The next section of code performs the search and, for any LDAP objects it finds within the filter set, returns the properties requested. These properties are then stored in key/value pairs.
    Dim props As ResultPropertyCollection
    Dim values As ResultPropertyValueCollection
    Dim key As String
    Dim userAccountName As String
    Dim departmentHome As String
    Dim jobtitle As String
    Dim GroupName As String
    Dim email As String
    Dim displayName As String
    Dim groups As New ArrayList

    Using searchRoot
        'Returns all LDAP objects, e.g. LDAP://acl/CN=Tristan Robinson,OU=Employees,DC=ACL,DC=local
        'CN = Common Name, OU = Organisational Unit, DC = Domain Component
        Using results As SearchResultCollection = dirSearch.FindAll()
            For Each result As SearchResult In results
                'For each object, return the requested properties, i.e. displayname, memberof, etc.
                props = result.Properties
                For Each entry As DictionaryEntry In props
                    key = CType(entry.Key, String)
                    'For each property, inspect the property and record its value
                    'Logon Name
                    If key = "samaccountname" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        userAccountName = CType(values.Item(0), String)
                    End If
                    'Department
                    If key = "department" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        departmentHome = CType(values.Item(0), String)
                    End If
                    'Job Title
                    If key = "title" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        jobtitle = CType(values.Item(0), String)
                    End If
                    'E-Mail
                    If key = "mail" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        email = CType(values.Item(0), String)
                    End If
                    'Display Name
                    If key = "displayname" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        displayName = CType(values.Item(0), String)
                    End If
                    'Groups User Belongs To (array/collection)
                    If key = "memberof" Then
                        values = CType(entry.Value, ResultPropertyValueCollection)
                        groups = GetGroups(values)
                    End If
                Next

The final section writes the data to the output buffer if the array list we've extracted above contains strings matching our original AD prefix variable. It then resets and loops round for the next account.
                'Export user details to the buffer if they pass the logical test
                For Each item As String In groups
                    'Avoids computer accounts, i.e. ending with $
                    If userAccountName.EndsWith("$") = False And item.ToString.StartsWith(Variables.strADPrefix) Then 'And item.ToString <> (Variables.strADParent)
                        Output0Buffer.AddRow()
                        If String.IsNullOrEmpty(userAccountName) Then
                            Output0Buffer.UserAccountName_IsNull = True
                        Else
                            Output0Buffer.UserAccountName = userAccountName
                        End If
                        If String.IsNullOrEmpty(domain) Then
                            Output0Buffer.Domain_IsNull = True
                        Else
                            Output0Buffer.Domain = domain
                        End If
                        If String.IsNullOrEmpty(item.ToString) Then
                            Output0Buffer.GroupName_IsNull = True
                        Else
                            Output0Buffer.GroupName = item.ToString
                        End If
                        If String.IsNullOrEmpty(jobtitle) Then
                            Output0Buffer.JobTitle_IsNull = True
                        Else
                            Output0Buffer.JobTitle = jobtitle
                        End If
                        If String.IsNullOrEmpty(email) Then
                            Output0Buffer.Email_IsNull = True
                        Else
                            Output0Buffer.Email = email
                        End If
                        If String.IsNullOrEmpty(displayName) Then
                            Output0Buffer.DisplayName_IsNull = True
                        Else
                            Output0Buffer.DisplayName = displayName
                        End If
                    End If
                Next
                groups.Clear()
                userAccountName = ""
                departmentHome = ""
                jobtitle = ""
                GroupName = ""
                email = ""
                displayName = ""
            Next
        End Using
    End Using
End Sub

I also required a function to split the list of groups a user belonged to and store them in another array list.

Private Function GetGroups(ByVal values As ResultPropertyValueCollection) As ArrayList
    Dim valueList As ArrayList = New ArrayList()
    For Each Item As Object In values
        Dim memberof As String = Item.ToString()
        Dim pairs As String() = memberof.Split(",".ToCharArray)
        Dim group As String() = pairs(0).Split("=".ToCharArray)
        valueList.Add(group(1))
    Next
    Return valueList
End Function

End Class

Once a list of users has been extracted, you will need to do a lookup against the existing list and only stage those that are new. This can be achieved through a simple lookup component.
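The GetGroups helper simply takes the first RDN value of each distinguished name in memberof. The same parsing can be sketched in Python – the sample DNs below are invented for the illustration:

```python
def get_groups(member_of):
    """Take the group name (first RDN value) from each distinguished name."""
    groups = []
    for dn in member_of:
        first_pair = dn.split(",")[0]            # e.g. "CN=LH_FIN"
        groups.append(first_pair.split("=")[1])  # -> "LH_FIN"
    return groups

print(get_groups([
    "CN=LH_FIN,OU=Groups,DC=ACL,DC=local",
    "CN=LH_SALES,OU=Groups,DC=ACL,DC=local",
]))
# ['LH_FIN', 'LH_SALES']
```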
You can then move the rows into the MDS staging table ready for the load into MDS. After the data flow has processed successfully, the next stage is to sweep the records into MDS using one of the built-in stored procedures in the product. The results of this can be seen in the Import View on the MDS site. The following entities can then be used to power the DAX:

- User (Username, Team, AD Login – populated from the script above)
- Role (Role – populated manually)
- User Role (User, Role – joins users to roles, populated manually)
- KPI (KPI – populated manually)
- KPI Role (Role, KPI – joins roles to KPIs, populated manually)

These are then processed into the DW by ETL, and a UserKPI view is written across the tables to provide an AD Login to KPI pair. To filter the measures in the cube, you can then apply the following piece of DAX to the original measure – Gross Sales in the example below:

Gross Sales :=
IF (
    CONTAINS (
        CALCULATETABLE ( 'UserKPI', UserKPI[KPIName] = "Gross Sales" ),
        UserKPI[ADLogin], USERNAME ()
    ),
    [Gross Sales (ACTUAL)],
    BLANK ()
)

This concludes this blog post – hopefully it will be useful for anyone who wants to extract users from AD to control access/security within an application. The data could also be loaded directly into database tables rather than MDS if required. Please feel free to comment!
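As a footnote, the DAX pattern amounts to a simple lookup: show the measure only if the current user's login is paired with the KPI in the UserKPI view. A rough Python model of that check follows – the table contents are invented for the example:

```python
# Invented rows standing in for the UserKPI view (ADLogin, KPIName)
user_kpi = {
    ("ACL\\trobinson", "Gross Sales"),
    ("ACL\\jsmith", "Net Sales"),
}

def secured_measure(username, kpi_name, actual_value):
    """Return the measure value only if the user is mapped to the KPI;
    None stands in for DAX BLANK()."""
    return actual_value if (username, kpi_name) in user_kpi else None

print(secured_measure("ACL\\trobinson", "Gross Sales", 1000.0))  # 1000.0
print(secured_measure("ACL\\jsmith", "Gross Sales", 1000.0))     # None
```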

UK Power BI Summit 2017: What’s next for Power BI?

During the keynote at the UK Power BI Summit, Microsoft announced some new features coming to Power BI. Chris Webb also provided a session on Azure Analysis Services, hinting at what may be next and, more importantly, how it may impact Power BI. Without further ado, I will dive straight in.

Power BI Visuals

Some exciting visuals were demoed by Will Thompson (Program Manager for Power BI). They were shown in an exclusive preview of Power BI, not available to the general public:

- Themes
  o Finally! Companies can now easily 'skin' a suite of reports to use standardised logos, fonts, etc.
  o Themes will be driven by a JSON file.
  o All themes can be uploaded to and shared via the 'Themes Gallery'.
- Slicer Improvements
  o A numeric, sliding-scale slicer will be available as a new configuration option.
  o Currently only designed for attributes.
  o Measure slicers are not an immediate focus, but Microsoft are giving it thought.
- Matrix/Table Visuals
  o Whilst these currently exist, a new visual will be added. This includes:
    - Drill-down capabilities.
    - Include/Exclude – like what you see in charts currently.
    - The ability to highlight rows and columns, which also cross-filters and highlights the selections on accompanying charts.

Analysis Services

All of the features discussed are for Azure Analysis Services, but I am sure they will also be made available on-prem soon after GA release:

- Live Connect to Analysis Services
  o Add ad hoc measures within the Power BI model.
  o It does not update your actual Analysis Services model; therefore, adding measures in Power BI needs to be done with caution.
  o Currently Tabular only, but should eventually come to Multidimensional too.
  o Available in both Azure (cloud) and on-prem.
- Power Query/M Integration
  o This is the missing piece of the Power BI/Analysis Services jigsaw.
- Table Level Security
  o The ability to implement this and row-level security together.
- Migrating existing Power BI Models
  o This will be awesome! Imagine being able to move your complex Power BI model (which is now performing badly) into an Azure Analysis Services model.
  o At the moment, you have to create the tabular model again.

Conclusion

Whilst no official dates have been given, I hope that March's release will include the visual features, such as themes. The ones specific to Azure Analysis Services are ideas that have come from many blogs (professional experts) and the Microsoft roadmap. If there is a feature that your customer needs, or just something you believe should be in Power BI, the Ideas forum is a great place to submit your suggestion.

Further Reading

For up-to-date Microsoft announcements, I would definitely advise following the below blogs and Twitter users:

- Blogs
  o Power BI Ideas -
  o Power BI Blog -
  o Power BI Blog Announcements -
  o Chris Webb's blog -
- Twitter
  o Will Thompson (Power BI Program Manager) - @Will_MI77
  o Riccardo Muti (SSRS Program Manager) - @RiccardoMuti

Contact Me

If you have any questions, please comment below or contact me directly on Twitter (@DataVizWhizz).

SSAS Tabular Tables Losing Structure of Measures

What is the issue?

While recently working on a SQL Server 2012 Tabular cube in Visual Studio, I came across an extremely frustrating issue where the visual layout of my measures was re-ordered. You are most probably aware that the actual structure or ordering of the measures within the workspace of a Tabular table in Visual Studio holds no relevance to how Visual Studio sees or uses the measures. However, if you are like me then you will naturally organise your measures into some kind of pattern, making them easier to use, maintain or locate. In this instance, I had all of my visible 'switching' measures listed in the first column and then their hidden, dependent measures – different date analysis calculations – listed in adjacent columns. For example, all of my 'Year to Date' calculations were in the second column, my 'Month to Date' calculations were in the third column, etc. Eventually I had over 20 visible switching measures in the first column, each with 8 separate hidden date analysis calculations, giving me a total of 160 measures which were all neatly organised in the table's workspace. An example of what this may look like is shown in the following diagram.

At a later point I opened the table to add a new measure and, to my surprise, all of the existing measures had moved position and been restructured, as shown in the following diagram.

We were using TFS as our source control, and by going back through the version history of the SSAS Tabular project I found that a change to a completely different table, checked in weeks earlier, had the changes to this table's layout checked in with it. Unfortunately, as there had been several other changes and check-ins since then, rolling back wasn't an option. Here I had two options: painstakingly cut and paste each measure back into its original location, or leave the table in its new format and move on.
Although I have a slightly OCD side to me when it comes to things like this, for the sake of project time I left it how it was; as I mentioned earlier, the structure of the measures on a table holds no real relevance in SQL Server 2012 or 2014 – it purely helps with the management and maintenance of groups of measures. A colleague has since run into this issue too; their table was restructured differently, but Visual Studio seems to either add an additional column or take an existing one away and restructure the measures accordingly.

Potential solutions

I am unaware of the exact reason for this happening and therefore can't give a suitable way to structure your tables for prevention. I found only one way to correct the issue once it has occurred, other than manually cutting and pasting your measures back to the way you want. Luckily, one of our team had not worked on the project since the initial check-in of the restructured table and had not got the latest code of the project. I copied their Model.bim.layout file, which is located in the project folder in File Explorer, and replaced my version with it. After restarting Visual Studio and re-opening my project I found that my table structure was back to its original layout. The issue here is that we have multiple developers working on the same Tabular project but different child tables; simply going through every table to check it hasn't been restructured before every check-in would be inefficient, and a layout change could easily be missed.
The solution that our development team adopted was to attach the Model.bim.layout file separately to the solution in a Solution Items folder and have this stored in source control, as shown in the diagram below. This file should then be used to replace the existing Model.bim.layout file in the developer's local Tabular project whenever they start working on the Tabular project, and should replace the file in Solution Items after they have finished. Checking this file in at the same time as the changes to the Tabular project gives the file an independent version history, and the separation of this file from the Tabular project prompts the developer to manually check the layout of the tables before checking in the Model.bim.layout file, hopefully adding an extra check to prevent this issue from happening. Although this extra step, needed whenever changes to the Tabular project are made, will add time to the check-in process, it is in my opinion worth it to ensure the structure of the tables in a Tabular project is kept consistent and easily maintainable, especially when they start to get large and complex.

I have only witnessed or heard of this happening in Visual Studio 2013 with Tabular projects in both SQL Server 2012 and SQL Server 2014. I am hoping that, with the many improvements and fixes that Microsoft has released for SQL Server 2016 Tabular projects, this will no longer happen – but if you have witnessed it in SQL Server 2016 then please comment below and let me know. Also, if you know of a better way to prevent this issue from occurring, or have more information on why it happens, please comment below.
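As a footnote, the file-swap workflow described above can be sketched as two copy steps. The folder names below are hypothetical stand-ins for your own solution layout (the mkdir/echo lines just create a dummy file so the sketch is self-contained):

```shell
# Hypothetical folder names - adjust to your own solution layout
SOLUTION_ITEMS="SolutionItems"   # source-controlled copy of the layout file
PROJECT_DIR="TabularProject"     # local Tabular project folder

mkdir -p "$SOLUTION_ITEMS" "$PROJECT_DIR"
echo 'layout-contents' > "$SOLUTION_ITEMS/Model.bim.layout"   # stand-in file

# 1. Before starting work: take the canonical layout into the local project
cp "$SOLUTION_ITEMS/Model.bim.layout" "$PROJECT_DIR/Model.bim.layout"

# 2. After finishing work (and visually checking the table layouts):
#    copy the local layout back into Solution Items and check both in together
cp "$PROJECT_DIR/Model.bim.layout" "$SOLUTION_ITEMS/Model.bim.layout"
```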

Microsoft BI – Coming Soon to SQL Server 2016

After attending the PASS Summit 2016 a couple of weeks ago, I sat in on a number of sessions that provided an insight into the direction Microsoft are heading with BI. I thought I'd share this with the community. Looking back to October 2015, the official Reporting Roadmap blog from Microsoft stated their intent to dramatically improve the visualisations, integration and harmonisation of both the on-prem (SSRS) and cloud-based (Power BI) services. Whilst reporting appears to be a renewed focus, they are constantly driving other areas of BI, such as Analysis Services and database/data warehousing development in the cloud. Now, for the interesting bit. Here is what we can expect in the SQL Server 2016 BI stack in the near future:

- SQL Server 2018 CTP (to be released within the next month).
  o This seems very early, considering 2016 has only just been released! Let's wait for an official Microsoft announcement.
- Azure DW
  o Auto Create Statistics
    - Currently statistics have to be generated and updated manually.
  o Replicated Distributed Table
    - One that will excite any cloud-based SQL Server developer.
    - This will reduce data transfer between distributions/nodes and consequently improve performance.
- Azure Data Lake
  o General Availability (GA) is imminent.
  o Future features:
    - Polybase, so that you can connect the two big MPP platforms (ADL and DW).
    - SSIS connectors (released with GA) for the Store.
    - Python and R in U-SQL.
- SSIS
  o Lots of new connectors, including:
    - ADLS.
    - CRM.
- SSAS
  o Migrating Power BI models into Tabular.
    - This is coming very soon apparently, but will be developed in Azure SSAS first.
  o Object-level security in Tabular.
    - We currently have row-level security, but there is talk of securing a physical object, not just a row.
    - Even better news – Microsoft want to integrate the two together, which will make security awesome in Tabular.
- SSRS
  o Supporting (not pinning) Excel reports in an RS report.
    - This will come, but Power BI is the focus right now and we may have to wait a while.
- Power BI
  o Additional and better pivot table functionality.
  o Integrating Active Directory dynamically.
  o Potential to use DirectQuery and Import modes together – as a hybrid.
    - The functionality is possible, but the performance trade-offs need to be weighed up by Microsoft before anything will emerge.
  o Quick Calcs.
    - Only 'Percent of Current Total' is currently available.
    - The potential is to offer lots more – such as YTD, MAT, Current Year vs. Previous Year, etc.
    - This is for users who aren't familiar with DAX.
  o Template organisational Content Packs.
    - The ability to give the user the means to personalise colours, fonts, etc. within a structured (organisational) content pack.
- Power BI Embedded
  o Application developers will be able to limit user sessions and therefore reduce the per-hour charges that come with them.

There are some features/issues Microsoft do not plan to change. Although, the good thing about Microsoft is that they are community driven, so if you feel strongly about anything (and get support from your peers), they may change their minds.

- SSRS
  o Q&A, Query Editor (Power Query), R integration, etc. are not being developed.
    - Pretty obvious really. Whilst they are going to introduce almost everything from Power BI, some elements of functionality are just not needed for on-prem purposes.
    - R scripts may come one day, but are not a focus right now.
- Power BI
  o Source Control
    - No immediate plans to integrate with TFS or modularise the pbix files (for a more developer-based solution).
    - Not surprising, as this is a self-service tool rather than a developer tool.
    - A workaround is to upload pbix files into OneDrive and use its versioning as an element of source control, or add the file into Visual Studio.
    - Keep voting if you want this!
(Currently 278 votes.)

- Power BI Embedded
  o Licence model
    - 'Speak to Marketing' is what we were told.

Is everyone excited? I certainly am. Microsoft are openly admitting that any new BI-related feature will be developed in the cloud first, but this was pretty obvious. For all you on-prem fanatics (me included), at least the bugs/undocumented features will be ironed out before we get to use them! My personal favourite has to be the direction SSRS is going in. It was recently labelled a 'mature' product, but now RS has suddenly become a cool and current tool to use. The fact that Power BI and SSRS will become almost the same product tells us that on-prem is still an important part of the Microsoft licensing strategy. I am sure there are lots of other features coming into the BI stack over the next six months to a year, and these were just the ones I had heard about. Feel free to comment if you have additional ones to share.

New Features in SQL Server 2016 – Part 2: Analysis Services

This blog solely focuses on the new Analysis Services features of SQL Server 2016 CTP2. For anyone who may have missed it – click here to view my opening blog on the Database Engine. Although there have not been any major Analysis Services (SSAS) enhancements in CTP2, it is pleasing that Microsoft are still looking to improve this part of the BI stack. The majority of the enhancements are geared towards Tabular and DAX, although there are plans to release further Multidimensional functionality in CTP2 and beyond. There are five key enhancements for SSAS:

1. Tabular model partitions (Tabular).
2. Many-to-many relationships (Tabular).
3. Analysis Services PowerPivot mode.
4. New DAX functions (Tabular).
5. Enable/disable attribute hierarchies.

Tabular Model Partitions

SQL Server SSAS 2016 CTP2 includes new parallel processing functionality for tables with two or more partitions, increasing processing performance. There are no configuration settings for this. More information on this feature:

1. In SSAS Tabular, partitioning big data marts will help drastically when processing new data. Instead of having to process all the data, you can just process the relevant partitions.
2. In order to create, manage and process partitions in SQL Server Management Studio, you must have the appropriate Analysis Services permissions defined in a security role.
3. Parallel processing happens automatically when processing a table with more than one partition, although you can choose to process a partition independently if required.

Many-to-Many Dimensions (Tabular)

There isn't too much to show or say here yet, apart from the fact that this feature is FINALLY HERE! Gone are the complications of workarounds (thanks to Marco Russo and Alberto Ferrari); we can now just use the many-to-many functionality in Tabular.
Multidimensional already has this feature, which is one of the reasons why developers have yet to buy in to Tabular. Unfortunately, Microsoft have yet to reveal much detail on how this feature will work. I just hope it performs as well as the current workarounds.

Analysis Services PowerPivot Mode

Installing PowerPivot for SharePoint is now a lot easier. If you already know how to configure SharePoint 2013, this will help you greatly. Taken from the MSDN Microsoft website, you simply use the Install Wizard and do the following:

1. Select 'Analysis Services' from the standard instance features.
2. Choose 'Analysis Services server mode' and configure administrators (screenshot below).

New DAX Functions

There are many new DAX functions, which are all detailed on the MSDN website – click here for further detail. I have picked out five functions that will be highly beneficial when creating an SSAS Tabular cube. I am looking at this from a very BI-focused background, so I would recommend looking at all of the new functions to see if they benefit your needs.

1. TOPN
   a. This is not actually a new function but has been updated.
   b. Now allows TRUE/FALSE/ASC/DESC to specify sorting direction.
2. CALENDAR
   a. Returns a table with a single column named "Date" that contains a contiguous set of dates. The range of dates is from the specified start date to the specified end date.
   b. See similar – CALENDARAUTO.
3. MEDIAN
   a. Very simple – returns the median of the numbers in a column.
   b. See similar – MEDIANX (uses an expression).
4. NATURALINNERJOIN
   a. Inner join of a table with another table. The tables are joined on common columns in the two tables. If the two tables have no common column names, an error is returned.
   b. See similar – NATURALLEFTOUTERJOIN.
5. SUMMARIZECOLUMNS
   a. Enables you to group data together and return a summary table.
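To give a feel for what the grouping functions above do, here is a rough Python sketch of SUMMARIZECOLUMNS-style grouping and a MEDIAN-style aggregate. The table and column names are invented sample data, not DAX syntax:

```python
from collections import defaultdict
from statistics import median

# Invented sample rows standing in for a fact table
sales = [
    {"colour": "Red", "amount": 10},
    {"colour": "Red", "amount": 30},
    {"colour": "Blue", "amount": 5},
]

# SUMMARIZECOLUMNS-style: group by a column and return a summary table
totals = defaultdict(int)
for row in sales:
    totals[row["colour"]] += row["amount"]
print(dict(totals))  # {'Red': 40, 'Blue': 5}

# MEDIAN-style: median of a numeric column
print(median(row["amount"] for row in sales))  # 10
```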
Enable/Disable Attribute Hierarchies

Again, this functionality is tailored towards SSAS Tabular. There is a new setting that ties back to the Tabular model metadata:

1. ColumnUsage property
   a. DAXUsage
      i. Attribute hierarchies can only be used in measures.
   b. UnrestrictedUsage
      i. Can be set in the XMLA (in SSDT) or by using an ALTER statement against the Tabular model.
      ii. An example taken from MSDN:

<Alter>
  <ObjectDefinition...>
    <Database>
      <Dimensions>
        <Dimension>
          <Attributes>
            <Attribute>
              <ddl500:ColumnUsage value="ddl500_500">
                DAXUsage | UnrestrictedUsage
              </ddl500:ColumnUsage>

Multidimensional Features

The full SQL Server 2016 release will include some key enhancements to Multidimensional SSAS. These include:

1. Netezza as a data source (Netezza Data Warehouse | IBM - NDM Technologies).
2. General performance improvements.
3. Unnatural hierarchies.
4. Distinct counts.
5. DBCC support – checks the logical and physical integrity of objects in the specified database.
6. Exposing on-premises Multidimensional cubes in the cloud with Power BI.

I will provide further information on the improvements as and when Microsoft announce them.

References

For more information on all of the new SSAS SQL Server 2016 features, the below resources/blogs are highly recommended:

- Official Microsoft Page -
- Brew your own Business Intelligence Blog -
- Jorg Klein SQL Server Blog -

Maintaining Hierarchy State On Report Refresh

I recently had a request from a client who wanted to use a parent-child hierarchy to navigate their report. The issue with this, however, is that when you click the hierarchy the report refreshes, the data is updated in the charts, but the hierarchy collapses to its original state, as below. This wasn't suitable for the client's needs, as it was a large hierarchy with several levels of depth. What was required was to maintain the state of the hierarchy so it would stay expanded during report refresh. After a little thought I came up with the following solution:

1. Set up a standard report with an indented hierarchy with expandable sections. This has been covered in many places, so I'll only give a brief overview. For this example report I have used the Adventure Works Multidimensional model, with the following MDX query as the source for the hierarchy data. Add the fields in a matrix. Right-click the Organizations text box, select 'Textbox Properties' and, in the Alignment tab, set the Left padding property to the following value (change the number to adjust indentation). Following this, you need to set up groupings for the expanding hierarchy. Add a grouping on the UniqueName property. On the visibility section, set the toggle property to Hide (ignore the expression for now) and set the "Display can be toggled by this report item" property as per the below. In the advanced section, set the recursive parent property to [Organisations.ParentUniqueName]; this should give a working report with an expandable hierarchy.

2. Add a hidden parameter to the report called OrganisationLevel. Set the default value to the top of the hierarchy. Add another column to the matrix and add text to it for the link. Select the text, right-click it, navigate to properties and add an action to go to the original report, passing the OrganisationLevel parameter with a value of [Organisations.UniqueName].

3.
Add a dataset dsHierarchyVisibleMembers with the below MDX query. This query uses the OrganisationLevel parameter to work out which level in the hierarchy the user has clicked on, and the entities at the same level, which we can then use to control the state of the expanding hierarchy. Now set the visibility property on the row grouping to the following value. The last step is to set the hierarchy collapsed/expanded state, which we do by setting the 'InitialToggleState' property of the text box to the following formula. Once these steps are completed, clicking one of the elements in the report will pass the parameter, but the hierarchy state is maintained between clicks. Clicking View on 'Northeast Division' in the left report displays the following on the right – note the chart has been updated but the hierarchy expansion state remains as it was before the user clicked. That's all for now; hope this is of use.
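As a footnote, the expansion logic being recreated in the InitialToggleState expression boils down to: a node should start expanded if it lies on the path from the root to the member passed in the OrganisationLevel parameter. A rough Python model of that check, with invented member paths:

```python
# Hypothetical member paths; a real report would derive these from the
# member unique names returned by the MDX dataset.
selected = ["All", "Northeast Division", "New England"]

def initially_expanded(node_path, selected_path):
    """A node starts expanded if it lies on the path to the selected member."""
    return node_path == selected_path[:len(node_path)]

print(initially_expanded(["All"], selected))                        # True
print(initially_expanded(["All", "Northeast Division"], selected))  # True
print(initially_expanded(["All", "Southwest Division"], selected))  # False
```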

KPIs in Power View and DAX Queries for KPIs

I was recently approached by a client where we had assisted in implementing a Microsoft Analysis Services Tabular solution and a supporting management pack of SSRS reports. They asked about utilising some of the KPIs defined in their Tabular cube in the SSRS reports using DAX queries. I was at a bit of a loss, and most web help seemed to suggest it couldn't be done (see, for example, a blog comment from 2012). However, with a bit of time to investigate, it appears Power View does now support Tabular KPIs, and as such a DAX query should be able to pull them out. My KPI base measure is named "Highest Ranking" and I am able to view its value, status or goal in Power View. A little work with every SQL developer's best friend, Profiler, and I had what I was looking for. It appears that when you create a KPI, SSAS defines some measures in the background which do some of the work of the MDX KPI functions. Therefore the following DAX query against my cycling demo cube contains the expected results and could be used in SSRS reporting:

EVALUATE
ADDCOLUMNS (
    VALUES ( 'DimRider'[Name] ),
    "Highest_Ranking", 'FactRanking'[Highest Ranking],
    "Highest_Ranking_Goal", 'FactRanking'[_Highest Ranking Goal],
    "Highest_Ranking_Status", 'FactRanking'[_Highest Ranking Status]
)
ORDER BY 'DimRider'[Name]

I also tested for the existence of other KPI functions, such as Trend and Weight, but these do not appear to be present. It is also interesting that the use of a KPI over a measure does not change the measure name but just groups it in the field list, and as such there is no need for a value function. For more info on KPIs in Power View there is additional documentation here. I am currently unsure of the required versions for this functionality, so if it is missing from your environment I would love to hear from you.
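As a footnote, the naming pattern observed in Profiler is simple to state: for a KPI built on measure X, SSAS appears to create hidden measures _X Goal and _X Status. A small sketch of that pattern (as observed above, not officially documented):

```python
def kpi_measure_names(base_measure):
    """Names of the background measures SSAS appears to define for a KPI."""
    return {
        "value": f"[{base_measure}]",
        "goal": f"[_{base_measure} Goal]",
        "status": f"[_{base_measure} Status]",
    }

print(kpi_measure_names("Highest Ranking"))
# {'value': '[Highest Ranking]', 'goal': '[_Highest Ranking Goal]', 'status': '[_Highest Ranking Status]'}
```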

SSRS Language parameter using SSAS Translations

Problem

SSAS translations are known to deal with warehousing language requirements well. We recently had a requirement for a multi-language report suite on top of a user-browsed cube. The first thought was SSAS translations: these pick up the user's local language settings and allow an elegant solution to the problem. However, a requirement was put forward for a language picker in the SSRS report suite to override the user's locale if they so desire. This was not so obvious.

Data source

We started with a list of languages to build a picker from. This was built by importing the table available here into SQL Server and building it as a dataset in our report. It is important to bring through the decimal value in column 4 and the short string in column 2, as we will see later.

Parameter Build

Use the dataset on MSDN to build a language-select parameter. Use the language name in the label field and the decimal locale identifier in the value field. The setup should look something like this:

Default value setup

If you just need English as the default, use the UK code 2057 as your default value; however, a more elegant final solution is a little more complex. Reporting Services provides the built-in variable User!Language, which holds the user's language in the short-string 'en-gb' style format. So although we can access the user's language, it needs translating into a locale identifier before we can pass it to SSAS. Fortunately for us, the same table we used above can be used to translate between short string and locale. I achieved this using a further hidden parameter defaulted to the built-in User!Language variable, set up as detailed below, using the dataset where LanguageCode = the short string in our source table. If this parameter is run first (it must be earlier in the report parameters list) it will contain the user's language in the form 'en-gb' in the value field and '2057' in the label field.
It is then simple to set the default of the first dropdown parameter to the value of the hidden parameter's label using the following syntax:

=Parameters!UserLanguage.Label

I am open to suggestions of a way to get the default without a hidden parameter or a large switch statement, so if you find something more elegant please let me know! It is also worth noting that if a user has a language that your source table has no entry for, you are obviously going to lose the defaulting behaviour.

Parameter Ordering

It is essential to make sure that the parameters are ordered in your parameter list in the sequence they need to be evaluated in. In this case the hidden user-language parameter needs to be first, followed by the visible language dropdown parameter. Only then should other report parameters be added.

Passing the Parameter Value to SSAS

In order for SSAS to provide us the data in the language requested, we need to pass the locale to Analysis Services. We can do this by overriding the locale in the connection string. Go to your report data source and double-click for properties, then edit the connection string expression as circled. Here we need to make sure our connection to SSAS is provided with the locale from the parameter. We use the following expression to build the connection string:

="Data Source=localhost;Initial Catalog=" & CHR(34) & "Adventure Works DW 2008R2" & CHR(34) & ";Locale Identifier=" & Parameters!Language.Value

(CHR(34) is the " character, escaped properly.) This builds the string detailed below for the parameter selection Spanish:

Data Source=localhost;Initial Catalog="Adventure Works DW 2008R2";Locale Identifier=3082

This should now preview and allow you to select the language, and your datasets will return any translations you have present in the cube.
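The whole translation step can be modelled outside SSRS. Here is a Python sketch of the short-code-to-LCID lookup and the connection string assembly – the LCIDS dictionary holds just a handful of entries from the full mapping table referenced above:

```python
# A handful of locale identifiers from the standard Windows LCID table
LCIDS = {"en-gb": 2057, "es-es": 3082, "fr-fr": 1036, "de-de": 1031}

def connection_string(catalog, short_code):
    """Mirror of the SSRS expression: translate the short language code
    and override the locale on the SSAS connection string."""
    lcid = LCIDS[short_code.lower()]
    return f'Data Source=localhost;Initial Catalog="{catalog}";Locale Identifier={lcid}'

print(connection_string("Adventure Works DW 2008R2", "es-ES"))
# Data Source=localhost;Initial Catalog="Adventure Works DW 2008R2";Locale Identifier=3082
```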
Notes

If you are using a shared data source reference, this is not going to work for you, as it is only possible to use an expression on a data source held in the report. In this case perhaps you can use a solution along the lines of the one detailed by Mosha here. Another warning: finish your report design first, because you can't refresh datasets and metadata once the expression is set. Tested on 2008R2 only. Hope this helps everyone and allows you to select your existing translations in a flexible way through SSRS. Your comments are welcome.