Adatis BI Blogs

Azure Functions Key Vault Integration

Introduction

A really useful feature that recently came into public preview for Azure Functions is native integration with Key Vault when retrieving application settings. This feature allows you to source application settings directly from Key Vault without making any changes to your function code and without having to implement any additional infrastructure code to handle connecting to Key Vault and retrieving secrets. The original blog post announcing the feature has more details.

Key Vault integration leverages another recent Azure Functions feature called Managed Identities. When creating a function app you can easily create a system-assigned managed identity by enabling it through the Azure Portal or by including a property within the ARM template (see the Azure documentation for more information); this allows you to interact with many Azure services using Azure AD authentication without the need to worry about managing service principals and the inevitable downtime that occurs when their keys expire.

As the feature is currently in preview there is a limitation in that it doesn't support rotation of secrets, meaning only a single version of a secret is supported; however, I'll run through a workaround for that issue in this post.

Now that we've covered the basics of what the feature does, I'll run through a quick demo to show how it can help us when we're developing function apps. This post isn't intended to be an introduction to Azure Functions or Azure Key Vault, so I have assumed a basic knowledge of each and only covered the elements of the setup that relate to the Key Vault integration.

Demo

The first step is to create our resources in Azure.
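As an aside, if you prefer deploying via ARM templates rather than the portal, enabling the system-assigned managed identity mentioned above amounts to adding an identity block to the function app resource. This is only a sketch: the resource name is a placeholder and the apiVersion may differ in your template.

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2018-11-01",
  "kind": "functionapp",
  "name": "my-function-app",
  "location": "[resourceGroup().location]",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {}
}
```

The key part is the `"identity": { "type": "SystemAssigned" }` block; everything else is standard function app boilerplate.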
For the demo we need to create a new resource group that contains a function app with its associated storage account and an instance of Azure Key Vault.

Next, we need to enable the system-assigned managed identity on our function app. We do this by navigating to the function app in the portal, clicking on the "Platform Features" tab and then clicking on "Identity". From there we can set the status of the system-assigned managed identity to "On" and click save to apply the change.

Next, we need to grant our function app permission to retrieve secrets from Key Vault. We do this by navigating to Key Vault in the portal and clicking on "Access Policies", then clicking on "Add New" and clicking on the "Select Principal" panel that is displayed, which brings up a new blade where we can search Azure AD for the managed identity created for our function app.

Once we've selected our principal we need to give it the appropriate permissions. To retrieve secrets our function app only needs "Get" permissions on secrets, so we can select that option and click OK. The Key Vault UI isn't the most intuitive here and this is the step most people forget: we need to click the save button to commit the changes to our access policies.

Now that we've created our Azure resources and set up Azure Functions so that it can communicate with Key Vault, we need to add a secret to Key Vault and get our function app to retrieve it.
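Under the covers, the integration relies on an application setting whose value is a Key Vault reference rather than the secret itself. As a sketch (the vault and secret names here are illustrative), the setting in the function app's configuration looks like this, with the secret version forming the last segment of the URI:

```json
{
  "MySecret": "@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/MySecret/<secret-version>)"
}
```

At runtime the platform resolves the reference and surfaces the secret as an ordinary application setting, which is why no function code changes are needed.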
To do this I have created a script that will add a given secret to Key Vault and then set the required application setting on our function app to allow the value to be retrieved. Once we run the script our secret is added to Key Vault and the relevant application setting, which acts as a pointer to the secret, is added to our function app. Because the setting is recreated each time, if we always use this script to deploy our secrets the function app will always point at the latest version of the secret, which works around the current limitation that the URL in the application setting must include the secret version. After running the script we can take a look at the application settings for our function app and see that the reference to the secret has been added.

Now that we've added our secret to Key Vault and created the reference to it in our function app, we can test that our functions are able to retrieve the secret. To do this we can create a new HTTP-triggered function that uses Environment.GetEnvironmentVariable to retrieve the value of our application setting. When we run the function the result returned matches the value we added to Key Vault! Obviously, in the real world we wouldn't want to expose the value of our secret outside of our function, but it allows us to see the value that was returned by the app setting.

Conclusion

To conclude, we've shown how easy it is to integrate Azure Functions with Azure Key Vault. With the newly released integration we can leverage managed identities to access Key Vault without the need to write any additional code or take on the overhead of managing service principals. This means we can ensure that our secrets are stored safely in Key Vault, improving the security of our serverless application.

Analysis of Spatial Data Using Cosmos DB

Introduction

Recently, while researching Cosmos DB, I came across its in-built capabilities for managing spatial data. Cosmos DB is Microsoft's globally distributed, multi-model database. It can store various types of data, such as document, graph and key-value, and can elastically scale to cope with varying needs. The piece of Cosmos DB that this post will be discussing is the spatial capability of the document model.

The problem I have chosen to solve using the spatial functionality is working out which airfields are within the range of an aircraft, given its true airspeed and fuel endurance in hours; the range is calculated by multiplying the airspeed by the endurance.

The Data

The first step was to find a data set containing a list of all of the world's airfields; one was found in a repository on GitHub. The data set contains the details we need, which are:

- ICAO code – the unique identifier for an airport
- Airport name
- Latitude and longitude of the airport

The next step was to create a Cosmos DB account in the Azure Portal and write a C# app to do some transformations on the data and load the documents into our Cosmos DB collection. I first created a C# object that matched the structure of the JSON data:

using Newtonsoft.Json;

namespace LoadAirportData
{
    public class Airport
    {
        [JsonProperty("id")]
        public string Id => this.Icao;

        [JsonProperty("icao")]
        public string Icao { get; set; }

        [JsonProperty("iata")]
        public string Iata { get; set; }

        [JsonProperty("name")]
        public string Name { get; set; }

        [JsonProperty("city")]
        public string City { get; set; }

        [JsonProperty("state")]
        public string State { get; set; }

        [JsonProperty("country")]
        public string Country { get; set; }

        [JsonProperty("elevation")]
        public int Elevation { get; set; }

        [JsonProperty("lat")]
        public double Latitude { get; set; }

        [JsonProperty("lon")]
        public double Longitude { get; set; }

        [JsonProperty("tz")]
        public string TimeZone { get; set; }
    }
}

I then created a C# object that matched the
structure of the document I wanted to insert into Cosmos DB. The "Point" data type is used to serialize the latitude and longitude into the GeoJSON structure that Cosmos DB supports:

using Microsoft.Azure.Documents.Spatial;
using Newtonsoft.Json;

namespace LoadAirportData
{
    public class AirportDocument
    {
        public AirportDocument(Airport airport)
        {
            Id = airport.Icao;
            Iata = airport.Iata;
            Name = airport.Name;
            Location = new Point(airport.Longitude, airport.Latitude);
        }

        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("iata")]
        public string Iata { get; set; }

        [JsonProperty("name")]
        public string Name { get; set; }

        [JsonProperty("location")]
        public Point Location { get; set; }
    }
}

Finally I created a method that drops the Cosmos DB database, recreates the database and the document collection, and then loads the documents into the collection:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;

namespace LoadAirportData
{
    public class CosmosDbImporter
    {
        private const string ENDPOINT_URL = "<YOUR ENDPOINT>";
        private const string PRIMARY_KEY = "<YOUR KEY>";
        private const string DATABASE_NAME = "airports";
        private const string COLLECTION_NAME = "airportData";
        private const string IMPORT_PATH = @"C:\Temp\Airports.json";

        public async Task ImportData()
        {
            var documentClient = new DocumentClient(new Uri(ENDPOINT_URL), PRIMARY_KEY);

            // Delete the database if it exists
            try
            {
                await documentClient.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(DATABASE_NAME));
            }
            catch (DocumentClientException ex)
            {
                if (ex.StatusCode != HttpStatusCode.NotFound)
                    throw;
            }

            // Create the database
            await documentClient.CreateDatabaseIfNotExistsAsync(new Database() { Id = DATABASE_NAME });

            // Create the collection and switch on spatial indexing
            DocumentCollection collection = new DocumentCollection() { Id = COLLECTION_NAME };
            collection.IndexingPolicy = new IndexingPolicy(new SpatialIndex(DataType.Point));
            await documentClient.CreateDocumentCollectionIfNotExistsAsync(UriFactory.CreateDatabaseUri(DATABASE_NAME), collection);

            // Read the file and deserialize to our Airport object
            var data = System.IO.File.ReadAllText(IMPORT_PATH);
            var airports = JsonConvert.DeserializeObject<Dictionary<string, Airport>>(data);

            // Upload documents to Cosmos DB in parallel
            await Task.WhenAll(
                from partition in Partitioner.Create(airports.Values).GetPartitions(50)
                select Task.Run(async delegate
                {
                    using (partition)
                    {
                        while (partition.MoveNext())
                        {
                            Console.WriteLine($"Processing {partition.Current.Icao}");
                            var airportDocument = new AirportDocument(partition.Current);
                            await documentClient.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DATABASE_NAME, COLLECTION_NAME), airportDocument);
                        }
                    }
                }));
        }
    }
}

One thing to note is that the above code enables spatial indexing when creating the collection; without this, performance is extremely poor when performing spatial queries.

The beauty of Cosmos DB is that it is able to elastically scale to the performance level specified by the user through the number of RUs (request units) allocated to the collection. While loading the data into Cosmos DB I wanted to scale up my database to take advantage of the multithreading in my C# app and speed up my processing, so I just went into the Azure Portal and adjusted the number of RUs allocated to the collection; the change was almost instant and my process immediately sped up. Once I had finished importing the data I was able to scale my database back down so I'm no longer paying for any unused capacity.

Querying

Now the data is in Cosmos DB we can begin to play around with some spatial queries. To query airfields within a certain distance of a specified point I can run the following query, which returns all of the airfields within 25km of Blackbushe airport.
As you can see, the SQL syntax for querying Cosmos DB is very similar to T-SQL, meaning it's very easy to re-use your SQL Server skills:

SELECT airports.id AS ICAO,
       airports.name AS Name,
       ST_DISTANCE(airports.location, {"type": "Point", "coordinates": [-0.8475000262, 51.3238983154]}) AS Distance
FROM airports
WHERE ST_DISTANCE(airports.location, {"type": "Point", "coordinates": [-0.8475000262, 51.3238983154]}) < 25000

The above query returns the following results, which are the 7 airfields within 25km of Blackbushe:

[
  { "ICAO": "EGHL", "Name": "Lasham Airport", "Distance": 19964.7890768588 },
  { "ICAO": "EGVO", "Name": "RAF Odiham", "Distance": 11985.957064869535 },
  { "ICAO": "EGTF", "Name": "Fairoaks Airport", "Distance": 20229.321025944442 },
  { "ICAO": "EGLF", "Name": "Farnborough Airport", "Distance": 7286.035340157135 },
  { "ICAO": "EGLK", "Name": "Blackbushe Airport", "Distance": 0 },
  { "ICAO": "EGLM", "Name": "White Waltham Airfield", "Distance": 20312.693531316185 },
  { "ICAO": "EGLP", "Name": "Brimpton Airfield", "Distance": 23311.94703537874 }
]

The App

The next step is to create an application that uses this functionality to find the airfields within the range of an aircraft.
To do this I created a basic ASP.NET MVC application with a form where the user enters the base airfield, the aircraft's true airspeed and its endurance. When the form is submitted the following C# code is executed:

[HttpPost]
public async Task<ActionResult> Index(AirportFinderModel model)
{
    var documentClient = new DocumentClient(new Uri(ENDPOINT_URL), PRIMARY_KEY);

    var baseAirfield = await documentClient.ReadDocumentAsync<AirportDocument>(
        UriFactory.CreateDocumentUri(DATABASE_NAME, COLLECTION_NAME, model.BaseAirfield));

    var availableDistanceMeters = (model.EnduranceHours * model.TrueAirspeed) * 1852;

    var result = documentClient
        .CreateDocumentQuery<AirportDocument>(UriFactory.CreateDocumentCollectionUri(DATABASE_NAME, COLLECTION_NAME))
        .Where(a => a.Location.Distance(baseAirfield.Document.Location) <= availableDistanceMeters)
        .ToList();

    return View("Result", new AirportFinderResultModel()
    {
        MapPoints = JsonConvert.SerializeObject(result.Select(a => new MapPoint()
        {
            Title = a.Id,
            Description = a.Name,
            Longitude = a.Location.Position.Longitude,
            Latitude = a.Location.Position.Latitude
        }))
    });
}

The above code connects to Cosmos DB and retrieves the details for the base airfield that was specified. It then calculates the range of the aircraft in meters by multiplying the endurance (in hours) by the true airspeed in knots (nautical miles per hour) and then multiplying that by 1852 (the number of meters in a nautical mile). A LINQ query is then run against Cosmos DB using the built-in spatial functions to find airfields within the specified distance. The result is then converted into a JSON array that can be understood by the Google Maps API that is being used on the client side.

The client side uses the Google Maps API to plot the airfields on a map, giving us a view like the one below when given a base airfield of Blackbushe (EGLK), a true airspeed of 100kts and an endurance of 4.5 hours.

The current functionality of the app is extremely basic but could easily be expanded to make the range calculation more accurate by looking at wind and other factors that can affect range.
This could be done by creating a polygon representing our range and then using the ST_WITHIN function to find all airfields within that polygon. The functionality could also be enhanced to take into account other attributes of the airfield, deciding whether an airfield is suitable based on runway length and other facilities.

Conclusion

As you can see, Cosmos DB makes it extremely easy to analyse spatial data, and the elastic scaling capability allows you to easily increase the capacity of your application to cope with increased usage and large amounts of data. The spatial capabilities of Cosmos DB can make it a viable alternative to other databases such as SQL Server that aren't always as easy to scale on demand and don't have the flexibility that a document model can give.

As always, if you have any questions or comments then let me know.

Extraction and Analysis of GeoSpatial data with Azure Data Lake Analytics

I recently had to implement a solution to prove it was possible to integrate a shape file (.SHP) in Azure Data Lake Store (ADLS) for subsequent geospatial analysis using Azure Data Lake Analytics (ADLA). A shape file is a data set used by geographic analysis applications that stores a collection of geographic features, such as streets or zip code boundaries, in the form of points, lines or area features.

As you have already figured, storing a shape file in ADLS is not a difficult goal to achieve; however, how can you possibly use ADLA to obtain the geographic data from the file? In this blog I'll explain how we can extract the data to a supported format, such as CSV, and use it to run geospatial analysis in ADLA, with the support of the spatial data types introduced in SQL Server 2008 (details here).

As always, whenever we face a limitation of ADLA, C# is our best friend. In order to read the content of a shape file, we need to start by adding a geospatial assembly to our solution, which, in my case, was the "Catfood" ESRI Shapefile Reader (details here).

The shape file used in this example contains a list of parks in London. The following code demonstrates how to extract the metadata and the geographic shapes to a CSV file. The only shapes extracted are polygons, although it is possible to add more if needed.

public static void CreateWorkForThreads()
{
    // Create a new dataset and store the data in a table
    DataSet ds = CreateNewDataSet();
    DataTable dt = ds.Tables[0];

    int i;
    int count = 0;

    // Parse the shapefile and select the columns we are interested in
    using (Shapefile shapefile = new Shapefile(@"path\file.shp"))
    {
        foreach (Shape shape in shapefile)
        {
            string[] metadataNames = shape.GetMetadataNames();
            string geometry = "";
            int countParts = 0;
            int countShape = 0;

            DataRow dr = dt.NewRow();

            // Extract the metadata. The first iteration will extract the names of the columns
            if (metadataNames != null)
            {
                foreach (string metadataName in metadataNames)
                {
                    if (count == 0)
                        dr[metadataName] = metadataName;
                    else
                        dr[metadataName] = shape.GetMetadata(metadataName);
                }
            }

            // Shape is not part of the metadata, so manually define the name of the column
            if (count == 0)
            {
                dr["shape"] = "shape";
            }
            else
            {
                // Cast shape based on the type
                switch (shape.Type)
                {
                    case ShapeType.Point:
                        // A point is just a single x/y point
                        ShapePoint shapePoint = shape as ShapePoint;
                        MessageBox.Show("Point (" + shapePoint.Point.X.ToString() + ", " + shapePoint.Point.Y.ToString() + ")");
                        break;

                    case ShapeType.Polygon:
                        // A polygon contains one or more parts - each part is a list of points which
                        // are clockwise for boundaries and anti-clockwise for holes
                        ShapePolygon shapePolygon = shape as ShapePolygon;
                        foreach (PointD[] part in shapePolygon.Parts)
                        {
                            countShape = 0;

                            if (countParts == 0)
                                geometry = "(";
                            else
                                geometry = geometry + " | (";

                            foreach (PointD point in part)
                            {
                                if (part.Length - 1 != countShape)
                                    geometry = geometry + point.X + " " + point.Y + " |";
                                else
                                    geometry = geometry + point.X + " " + point.Y + " )";

                                countShape++;
                            }

                            countParts++;
                        }
                        break;

                    default:
                        break;
                }

                // Build our polygon, e.g.
                // POLYGON((-122.358 47.653, -122.348 47.649| -122.348 47.658, -122.358 47.658, -122.358 47.653))
                dr["shape"] = "POLYGON(" + geometry + ")";
            }

            dt.Rows.Add(dr);
            count++;
        }
    }

    // Extract the data to a csv file
    using (System.IO.StreamWriter sw = new System.IO.StreamWriter(@"path\filename.csv"))
    {
        foreach (DataRow row in dt.Rows)
        {
            object[] array = row.ItemArray;

            for (i = 0; i < array.Length - 1; i++)
            {
                sw.Write(array[i].ToString() + ",");
            }

            sw.WriteLine(array[i].ToString());
        }
    }
}

public static DataSet CreateNewDataSet()
{
    DataSet dsTemp = new DataSet();
    DataTable dtTemp = new DataTable("londonparks");
    dtTemp.Columns.Add("id", typeof(string));
    dtTemp.Columns.Add("parkname", typeof(string));
    dtTemp.Columns.Add("street", typeof(string));
    dtTemp.Columns.Add("postcode", typeof(string));
    dtTemp.Columns.Add("shape", typeof(string));
    dsTemp.Tables.Add(dtTemp);
    return dsTemp;
}

Now that we have a valid file that can be processed by ADLA, we can upload it to ADLS and start performing geospatial analysis. To do so, I simply followed the logic described in Sacha's blog (here).

The following U-SQL works against a dataset that contains details of the trajectory of a courier, tracked on a daily basis. With the following code we identify whether a courier drove by a park, using the Intersect function. Because we have to cross two datasets, a C# function was created to help evaluate the multiple events.

// Reference the assemblies we require in our script.
REFERENCE SYSTEM ASSEMBLY [System.Xml];
REFERENCE ASSEMBLY [SQLServerExtensions].[SqlSpatial];
REFERENCE ASSEMBLY [USQL.Core];

// Once the appropriate assemblies are registered, we can alias them using the USING keyword.
USING Geometry = Microsoft.SqlServer.Types.SqlGeometry;
USING Geography = Microsoft.SqlServer.Types.SqlGeography;
USING SqlChars = System.Data.SqlTypes.SqlChars;
USING [USQL].[Core].[Utilities];

// Extract the list of parks
@parks =
    EXTRACT [ID] string,
            [PARKNAME] string,
            [STREET] string,
            [POSTCODE] string,
            [SHAPE] string
    FROM "RAW/Parks.csv"
    USING Extractors.Text(delimiter : ',', silent : false, quoting : true, skipFirstNRows : 1);

// Extract data from the file containing the courier trajectory
@trajectories =
    EXTRACT GPSDateTimeUTC DateTime,
            ReceivedDatetimeUTC DateTime,
            VehicleKey string,
            Altitude int,
            Longitude double,
            Latitude double,
            Distance decimal,
            VehicleSpeedMph decimal
    FROM "CURATED/Trajectory/Trajectory.TXT"
    USING Extractors.Text(delimiter : '|', silent : false, quoting : true, skipFirstNRows : 1);

// Get the list of vehicles that drove by the park
@vehicleIntersection =
    SELECT DISTINCT a.*,
           "1" AS VehicleIntersected
    FROM @trajectories AS a
         CROSS JOIN @parks AS b
    WHERE Utilities.Intersect(b.[SHAPE], a.[Longitude], a.[Latitude]).ToString() == "True";

// Get the list of vehicles that didn't drive by the park
@vehicleWithoutIntersection =
    SELECT a.*,
           "0" AS VehicleIntersected
    FROM @trajectories AS a
         LEFT JOIN @vehicleIntersection AS b
              ON b.VehicleKey == a.VehicleKey
                 AND b.GPSDateTimeUTC == a.GPSDateTimeUTC
    WHERE b.VehicleKey IS NULL;

// Union both datasets to get the complete set of data
@finalData =
    SELECT *
    FROM @vehicleIntersection
    UNION ALL
    SELECT *
    FROM @vehicleWithoutIntersection;

// Export the results to a csv file
OUTPUT @finalData
TO "LABORATORY/GeoSpatialIntersection.csv"
USING Outputters.Text(outputHeader : true, delimiter : ',', quoting : true);

And here is the C# function. It accepts three parameters and calculates the intersection of a point with a shape.

public static string Intersect(string shape, double longitude, double latitude)
{
    // Because we had a csv file, the coordinates in the polygon were separated by |
    // It is important to use the .MakeValid() method to validate any invalid shape
    // If the dataset had multipolygon shapes, the function would throw an error without MakeValid()
    var g = Geography.STGeomFromText(
        new SqlChars(shape.Replace('|', ',')), 4326).MakeValid();
    var h = Geography.Point(longitude, latitude, 4326);

    return g.STIntersects(h).ToString();
}

As always, if you have any questions or comments, do let me know.

Debug Custom MDS Workflows

Custom MDS workflows, AKA External Action Business Rules or custom MDS assemblies, are one of the more advanced features of MDS. They allow you to extend the standard MDS business rules by getting MDS to fire entity member data at an external assembly of your choice. There are a lot of moving parts to this feature: MDS business rules, the custom .Net assembly, Service Broker and a Windows service! When a business rule meets your condition, the rule is written to a Service Broker queue. The Windows service interrogates the queue and then passes the information to the custom assembly. I'm not going to show how to do this; it's already covered by Johan Machielse and also on MSDN. What I would like to show is how to debug the use of the custom business rules, just due to the number of moving parts and also a couple of quirks that I've encountered when using them.

Custom Assembly Deployment

One of the first things that may catch you out is the deployment of the assembly. To troubleshoot this, I recommend the "-console" argument of the workflow service. If you run it in command line mode you may encounter a few errors:

"Could not load file or assembly 'YourAssembly' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded."

This means that you need to go to the properties of your assembly in Visual Studio and change the Target Framework from .NET Framework 4 to 3.5.

Another error that you may get is "ERROR: Could not create type. Ignored." Assuming that you have actually put the assembly in the bin directory, this is most likely to be that either the namespace or the class name in your DLL doesn't match what you have in your config file. Check that the contents of Microsoft.MasterDataServices.Workflow.exe.config match your DLL.
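Since the namespace and class name in the config file must line up with the extender class in your DLL, it helps to see the rough shape of that class. The following is a minimal sketch only: the interface lives in the MDS workflow assemblies, and the exact namespace and member signature may vary between MDS versions, so treat these names as assumptions rather than a definitive implementation.

```csharp
using System.Xml;
// Assumption: IWorkflowTypeExtender is defined in an MDS workflow assembly
// shipped alongside Microsoft.MasterDataServices.Workflow.exe.
using Microsoft.MasterDataServices.WorkflowTypeExtender;

namespace MDS.CustomWorkflow
{
    // The namespace and class name must match the WorkflowTypeExtenders config entry,
    // e.g. <value>MDSTEST=MDS.CustomWorkflow, CustomWorkflow</value>
    public class CustomWorkflow : IWorkflowTypeExtender
    {
        // Called by the workflow service when a message for our tag is taken off the queue;
        // workflowType is the configured tag (e.g. "MDSTEST") and dataElement holds the member data
        public void StartWorkflow(string workflowType, XmlElement dataElement)
        {
            // Process the member data here
        }
    }
}
```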
You may also get "ERROR: Type does not implement IWorkflowTypeExtender." This means that your assembly has been picked up correctly, but it does not implement IWorkflowTypeExtender.

If the assembly does start up OK without errors, you may still get errors when the exe tries to take a message off the Service Broker queue. An error such as "Could not dispatch due to missing extender <tag>" means that the tag that you've given to the assembly in the config file does not match the "Workflow Type" value of the MDS business rule:

<setting name="WorkflowTypeExtenders" serializeAs="String">
  <value>MDSTEST=MDS.CustomWorkflow, CustomWorkflow</value>
</setting>

Debugging the Custom Assembly

Once you do get the assembly working, you may find that the code isn't quite doing what you want. A good way to troubleshoot this is to choose "Attach To Process" from the Debug menu in Visual Studio. If you do this, then you can add breakpoints to your .Net code as normal and step through in order to troubleshoot issues.

To conclude, this probably isn't an exhaustive list of all the sorts of errors that can occur with custom workflows, but with any luck it will save someone a few hours of troubleshooting.

Managing SSAS Named Sets with Master Data Services Collections

Master Data Services Collections are probably not the first feature that comes to mind when you think of the MDS product. The hierarchies and member grids tend to be the core functionality but, as this post will hopefully show, MDS Collections are useful too. Collections are essentially managed lists of members that can come from multiple different MDS explicit hierarchies, or from another collection. The idea is that this "subset of members" can be maintained in MDS by a business user and then fed to external applications for reporting or other purposes.

Analysis Services Named Sets

One example of how collections can be used is to maintain Analysis Services named sets. Some named sets, such as a Top 50 Customers, don't require any maintenance, as it's just the top 50 of all customers based on a particular measure. On the other hand, sometimes named sets are static lists, e.g. a list of core products that the user wants to see for reporting. In the latter example, if a user wants the definition of the named set to change, they have to get IT to change the MDX script. MDS collections can help by allowing the user to control the named set definition, reducing the burden on IT.

Example Scenario

Here's an example of how this is done. First of all, the end game is that I have a named set in Analysis Services that currently contains just three products, meaning a user can easily drop the set into an Excel report to get quick access to the products that are important to them. The challenge is that we need to find some way of extending this named set without doing it manually. This is where MDS starts to come in: using the sample Product model that comes with MDS, I've created a new collection called Favourite Products. If I go and edit the Favourite Products collection in MDS, I can drag and drop more products into this collection, or remove some existing members. In this case, I've chosen to add two new products.
In this case, I’ve chosen to add two new products: C#, AMO and the MDS API So the named set represents the target that we need to get to, whereas the MDS collection that’s shown is the source. To get the contents of the MDS collection to update the named set, one way of doing it is to use the MDS API to pick up the collection members, and then to use AMO in order to write the named set into the MDX script. I’m just doing this in a C# windows application, but you could do it via a batch process, such as SSIS. For this post I’m just going to show the code, so here goes: This post is already starting to feel too long so I’m not going to show the basics of the MDS API. For that, take a look at a good posting by the MDS Team blog here. Also, as anyone who knows me will no doubt agree, I don’t tend to get involved in doing C#, so don’t consider this to be production ready! It should give you an idea of the basics though. Anyway, assuming that we’re now connected to the MDS Web Service, I’ve created the following method that will return the members from the collection: private HierarchyMembers ObtainHierarchyMembers(string entityId, string hierarchyId, string modelId, string versionId) { HierarchyMembersGetRequest request = new HierarchyMembersGetRequest(); HierarchyMembersGetResponse response = new HierarchyMembersGetResponse(); request.HierarchyMembersGetCriteria = new HierarchyMembersGetCriteria(); //State that our hierarhcy type is a collection and that we want collection members request.HierarchyMembersGetCriteria.HierarchyType = HierarchyType.Collection; //Pass in the key search criteria to identify the correct collection in MDS request.HierarchyMembersGetCriteria.ParentEntityId = new Identifier { Name = entityId }; request.HierarchyMembersGetCriteria.HierarchyId = new Identifier { Name = hierarchyId }; request.HierarchyMembersGetCriteria.ModelId = new Identifier { Name = modelId }; request.HierarchyMembersGetCriteria.VersionId = new Identifier { Name = versionId }; 
request.HierarchyMembersGetCriteria.RowLimit = 50; request.International = new International(); OperationResult result = new OperationResult(); //Return the hierarchy members from the service return service.HierarchyMembersGet(request.International, request.HierarchyMembersGetCriteria, out result); } Before we use the above method, we need to connect to SSAS using AMO. That can be done quite easily with the following code: Server server = new Server(); string connection = "Data Source=.;Catalog=Adventure Works DW 2008R2;"; server.Connect(connection); After we’ve done all the usual error handling associated with connecting to a database, we need to pick up the SSAS database and cube: Database database = server.Databases["Adventure Works DW 2008R2"]; Cube cube = database.Cubes["Adventure Works"]; Now we’re ready to call our ObtainHierarchyMembers method, before looping through it to build the named set: StringBuilder mdx = new StringBuilder("\n//Auto generated named set at " + DateTime.Now.ToString() + "\nCREATE SET CurrentCube.[Favourite Products] AS {"); int count = 1; //Loop through the collection to build the mdx foreach (ParentChild pc in hm.Members) { //Add the members to the MDX string //This references the member by name //It would be possible to reference by member key, but would require more work mdx.Append("[Product].[Product].[" + pc.Child.Name + "]"); if (count < hm.Members.Count()) { mdx.Append(", "); } count++; } mdx.Append("};"); Now we need to insert the named set in the correct place within the existing MDX script, bearing in mind it could already exist: string currentScript = cube.MdxScripts[0].Commands[0].Text; //Find the correct place to insert the named set within the MDX script: int start = currentScript.IndexOf("\n//Auto generated named set at"); int end = 0; StringBuilder newScript = new StringBuilder(currentScript); if (start != -1) { end = currentScript.IndexOf(";", start); //If the named set already exists, remove it 
newScript.Remove(start, end - start + 1); } else { start = currentScript.Length; } //Insert the named set in the correct place newScript.Insert(start, mdx.ToString()); //Update the cube's MDX script cube.MdxScripts[0].Commands[0].Text = newScript.ToString(); Finally we just need to update the cube in order to write the MDX back to the cube: //Call the update methods to update the cube cube.MdxScripts[0].Update(); cube.Update(); User Reports Now that the cube has been updated, if the Excel report is refreshed, then the two new products will appear in the list: Summary Controlling SSAS named sets like this won’t be for everybody. I’ve certainly worked in a few clients where strict process has to be followed to update this sort of thing, but I can think of other companies that I know where this would be really useful. Managing Analysis Services named sets is just one use for collections. Another example might be managing default multi-select parameters for SSRS reports. As collections are just lists of members that can be extracted easily, what you do with them is up to you!

Audit Trail in PerformancePoint Planning

I've noticed that the PPS Technet documentation has been updated recently to include an official Microsoft method to carry out auditing in PPS Planning. PPS will do some basic auditing out of the box, namely to the audit.log file on the server. This will automatically capture key events that occur on the server, e.g. creation of a model, updating of a dimension etc. The audit file does not, however, track changes to the model fact data. There has been a custom solution around for this for a while now - Sacha has written an excellent post here that details what you need to do in order to implement your own PPS audit trail. Like Sacha's method, the Microsoft approach involves creating auditing tables, which should then be populated by running a custom stored procedure. The stored procedure should then be scheduled on a periodic basis (e.g. hourly) to capture any new activity. This is a bit different to Sacha's method, where triggers are used to capture changes in real time as they occur. In both cases the idea is to use something like Reporting Services to view detailed auditing reports on your PPS data. One thing that did catch my eye in the Technet documentation is a method to decode the binary 'change list' column that's held in the dbo.Submissions table. Whereas you can manually export the change list to a CSV file, there has historically been no way to take what's in the change list column and automatically decode it into a useful format.
The following C# code reads the change list for each new submission and inserts it into your newly created auditing table:

DataSet ds = new DataSet();
DataLayer dl = new DataLayer("PPSConnection");
ds = dl.ExecuteDataSetFromSQL("SELECT [SubmissionID] FROM [_AppDB].[dbo].[Submissions] s1 where s1.SubmissionID not in (select SubmissionID from [_StagingDB].[dbo].[SubmissionsAudited]) and s1.[Status] = 0");
string sSQL = "";
foreach (DataRow r in ds.Tables[0].Rows)
{
    // RETRIEVE THE CHANGELIST FOR THIS SUBMISSION
    DataSetWrapper dsw = new DataSetWrapper((Byte[])r["ChangeList"]);
    foreach (DataRow cldr in dsw.DataSet.Tables[0].Rows)
    {
        sSQL = @"INSERT INTO SubmissionsAudited(… ) VALUES(";
        // SUBMISSION ROW DATA
        sSQL += r[0].ToString() + ", " + r[1].ToString() + ", " + r[2].ToString() + ", "
              + r[3].ToString() + ", '" + r[4].ToString() + "'";
        // CHANGELIST ROW DATA
        foreach (object o in cldr.ItemArray)
        {
            sSQL += ", " + o.ToString();
        }
        sSQL += ")";
        // STORE EACH CHANGE TO THE AUDIT TABLE
        dl.ExecuteNonQuery(sSQL);
    }
}

Click here to view the Technet documentation.
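One caveat with building the INSERT statement by string concatenation is that it breaks on values containing quotes and is open to SQL injection. A safer variant uses parameters; the sketch below is illustrative only, since the real SubmissionsAudited column list is elided in the original, so the column and parameter names here are placeholders.

```csharp
using System;
using System.Data.SqlClient;

static class AuditWriter
{
    // Hedged sketch: writes one audit row using parameters instead of
    // concatenated SQL. "SubmissionID" and "ChangeValue" are placeholder
    // column names, not the real SubmissionsAudited schema.
    public static void AuditChange(string connectionString, object submissionId, object changeValue)
    {
        const string sql =
            "INSERT INTO SubmissionsAudited (SubmissionID, ChangeValue) " +
            "VALUES (@SubmissionID, @ChangeValue)";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@SubmissionID", submissionId);
            cmd.Parameters.AddWithValue("@ChangeValue", changeValue ?? DBNull.Value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

The parameterised form also means the periodic audit job won't fail part-way through when a user happens to submit a comment containing an apostrophe.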

C# Coalesce

Although this has been around for a long time, and this is slightly off topic, I needed it this week and think it is worth mentioning.  With objects you occasionally need to know if they are null and, if they are, fall back to something else.  This used to be very convoluted with .NET 1.1:

if (a != null)
{
    return a;
}
else if (b != null)
{
    return b;
}
else if (c != null)
{
    return c;
}
else
{
    return new object();
}

Now (.NET 2.0 and above) you can simply use this:

return a ?? b ?? c ?? new object();

Note that you cannot use this with non-nullable value types, such as int or bool, but it is still very useful.
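A quick self-contained sketch of both cases - reference types and nullable value types (where ?? also unwraps back to the underlying type):

```csharp
using System;

class CoalesceDemo
{
    static void Main()
    {
        // ?? evaluates left to right and returns the first non-null operand
        string a = null, b = null, c = "fallback";
        Console.WriteLine(a ?? b ?? c ?? "default"); // prints "fallback"

        // With a nullable value type, ?? can supply a default and the
        // result is the plain (non-nullable) underlying type
        int? maybe = null;
        int definite = maybe ?? -1;
        Console.WriteLine(definite); // prints -1
    }
}
```

The nullable-value-type case is the practical workaround for the limitation mentioned above: declare the variable as int? or bool? and ?? works as expected.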

Maximum Message Size For Web Services (.NET 3.5)

A new introduction in .NET 3.5 is the ability to limit the size of incoming messages when using web services, apparently to help combat Denial of Service (DoS) attacks. However, it is not clear how to change this setting; it's simple when you know how.  In your App.config or Web.config you should have a bindings section for each of your web service references.  Within this there are all sorts of useful settings; however, by default the maximum message size is quite small, so to alter this you must change maxBufferSize and maxReceivedMessageSize.  Now don't go crazy: just up it to what you may need, though this may be quite large if you are building all your internal applications through a web service layer.
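As a sketch, the two attributes sit on the binding element inside the client's bindings section. The binding name below is a placeholder - match it to the bindingConfiguration on your endpoint - and both sizes are in bytes (2097152 = 2 MB; the default is 65536):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- "MyServiceBinding" is a placeholder; use the name generated
           for your service reference -->
      <binding name="MyServiceBinding"
               maxBufferSize="2097152"
               maxReceivedMessageSize="2097152" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```

When transferMode is Buffered (the default), maxBufferSize and maxReceivedMessageSize must be set to the same value.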

LINQ and SQL Server

Some time ago I reported on LINQ whilst Visual Studio 2008 was still in beta. On Monday, 19 November 2007, Microsoft announced that Visual Studio 2008 and the .NET Framework 3.5 had been released to manufacturing (RTM).  Since then I am sure many of you have had many hours playing with VS 2008, WPF, Expression Blend, and all that these new products contain.  Having finally got my hands on it, and found the time, LINQ was the first stop.  We have a database that we need to run complex queries on, where stored procedures just would not be flexible enough.  Further to this, inline SQL is an obvious no-no, so it's a perfect scenario for a LINQ to SQL implementation.  Deciding that a DBML (Database Markup Language) structure would work best, generating this file, and keeping this file updatable, was crucial.  After some research we found a small, little-known command-line application called SqlMetal, which allows you to drop and re-create DBML files based entirely on a given database.  SqlMetal will always produce a consistent data access layer:

SqlMetal is a command-line tool that can generate the data access layer in seconds.
SqlMetal will produce either C# or VB.Net code.
SqlMetal generates a strongly typed data access layer. This is great for reducing runtime errors: those runtime errors now pop up in the development cycle as compile errors, reducing stress on the QA dept, support dept and upper management later in the life cycle.
SqlMetal can generate strongly typed interfaces for stored procedures and user-defined functions. From the developer's point of view, it is now a complete breeze to call either a stored procedure and/or a user-defined function.
To generate source code directly from a SQL database, execute the following:

C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin> SqlMetal /server:baker /database:AdventureWorks /namespace:AdventureWorks /dbml:AdventureWorks.dbml

This creates a DBML file that can be added to your project. This now gives you an entire database structure accessible via LINQ. One thing to be aware of, though: ensure you have a solid database design, as SqlMetal will transform your database design as-is, identifying all keys and relationships. Once you have the database as an object model, you can use LINQ as you wish. LINQ to SQL uses ADO.NET under the covers but offers several additional benefits: it reduces complexity, needs fewer lines of code, and is strongly typed.
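To illustrate what that buys you, here is a hedged sketch of querying the generated model. It assumes SqlMetal produced a DataContext class named AdventureWorksDataContext with a Products table exposing Name and ListPrice - the actual class and property names depend on your schema and the namespace you passed on the command line:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // AdventureWorksDataContext and Product are generated by SqlMetal;
        // the connection string and property names here are illustrative.
        using (var db = new AdventureWorksDataContext(
            "Data Source=baker;Initial Catalog=AdventureWorks;Integrated Security=True"))
        {
            // A strongly typed query: typos in column names become
            // compile errors rather than runtime errors
            var expensive = from p in db.Products
                            where p.ListPrice > 1000
                            orderby p.ListPrice descending
                            select new { p.Name, p.ListPrice };

            foreach (var item in expensive)
                Console.WriteLine("{0}: {1:C}", item.Name, item.ListPrice);
        }
    }
}
```

The query is translated to parameterised SQL by LINQ to SQL at runtime, so you get the flexibility of dynamic queries without hand-written inline SQL.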

Embedding Images into DLL's

I am sure this is already well documented; however, it's something that is really very straightforward with .NET 2.0, and not clearly written about so far on the web, so here is (hopefully) a good example:

Step 1: Embed the resource by setting the image's Build Action to Embedded Resource.
Step 2: Reference the resource with the exact name and namespace.
Step 3: Retrieve the embedded resource:

Image.ImageUrl = Page.ClientScript.GetWebResourceUrl(typeof(_Default), "[YourNamespace].Contributor.png");

It is that easy!!
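Putting the steps together in one sketch - "MyNamespace" and "Contributor.png" are placeholders, and Image1 is assumed to be an asp:Image control on the page. Step 2 is the easy one to miss: the assembly-level WebResource attribute that registers the embedded file so ASP.NET will serve it:

```csharp
using System;
using System.Web.UI;

// Step 2: register the embedded file as a web resource.
// The resource name is the project's default namespace plus the file name.
[assembly: WebResource("MyNamespace.Contributor.png", "image/png")]

namespace MyNamespace
{
    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Step 3: resolve a URL that serves the embedded image at runtime
            Image1.ImageUrl = Page.ClientScript.GetWebResourceUrl(
                typeof(_Default), "MyNamespace.Contributor.png");
        }
    }
}
```

If the resource name doesn't match exactly (namespace, folder segments and file name are all case-sensitive), GetWebResourceUrl returns a URL that 404s rather than throwing, which makes the mismatch easy to miss.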

The PerformancePoint WebServices - Retrieving Data

In a previous post, I discussed retrieving information from the PPS web services, namely the system information that returns details of the applications stored within PPS.  To retrieve any further data from the PPS web services, we need to know the application ID, which is returned from the GetSystem() method. Once you have retrieved the applications, you may notice that there are not a huge number of web service methods available for use.  This is because one web service is used to retrieve most data from PPS.  Again we are going to use the MetaDataManager.asmx web service, and the method we can use is the Request method, which expects an object called a PerformancePointRequest and in turn returns a PerformancePointResponse.  A PerformancePointRequest can be made to retrieve all sorts of information from PPS, and this is always returned as a PerformancePointResponse.  Here is an example of code that returns cycle instance information:

MetaDataManager.QueryFilter ppQueryFilter = new MetaDataManager.QueryFilter();
ppQueryFilter.ResultDataType = MetaDataManager.ReferenceDataType.CycleInstances;
MetaDataManager.QueryWorkflowData performancePointRequest = new MetaDataManager.QueryWorkflowData();
performancePointRequest.TargetObjectId = ApplicationID;
performancePointRequest.QueryFilter = ppQueryFilter;
MetaDataManager.PerformancePointResponse ppsResp = ppsMDM.Request(performancePointRequest);
return ppsResp;

The PerformancePointResponse is a standard object that can be successfully cast to a number of different objects. I have not used all of them as yet; however, we have most experience with the 'WorkflowDataResponse' object.  Casting the PerformancePointResponse to a 'WorkflowDataResponse', we now have a workflow data object that is a set of data tables that are compressed.  So, to get the data, we need to decompress it.
The following shows how I have gone about decompressing this. First we need to get the table schema, which is compressed:

public static DataTable UnpackTableSchema(byte[] packedTableSchema)
{
    if (packedTableSchema == null)
    {
        return null;
    }
    XmlSerializer dataTableSerializer = new XmlSerializer(typeof(DataTable));
    DataTable table = null;
    using (MemoryStream stream = new MemoryStream(packedTableSchema))
    {
        using (GZipStream stream2 = new GZipStream(stream, CompressionMode.Decompress, true))
        {
            table = dataTableSerializer.Deserialize(stream2) as DataTable;
        }
    }
    return table;
}

Once we have the unpacked schema, we can unpack the data and load it against the schema. There are quite a few methods involved in actually unpacking the data, so if you are interested please do not hesitate to contact me.  Also, once we have finished the product, it will be available for download at  Should you like to take a look at our BETA please email us at 

The PerformancePoint Planning Web Service

So.... you want to use the PerformancePoint web service so you can extend PerformancePoint and integrate it into bespoke applications and SharePoint.  We needed to create a web part that could show a user's PerformancePoint assignments in SharePoint.  This should be simple: we know that Excel uses the web services to get all of its PerformancePoint information, so let's just tap into that and away we go... OR not! We were after creating a web part that did not require any further references and was self-contained.  This is where life got tricky.  We needed to get all the applications within PerformancePoint, and then get all the assignments for each application that are related to the requesting user.  After looking through the available web services, it is not clear how to retrieve the amount of data that is required to run PPS, so that was a complication.  However, for now let's look at how to get the applications. We can get the whole Biz system from the web service using this (where ppsMDM is the MetaDataManager web service):

//Get the System Data
MetaDataManager.BizSystem bizSystem = ppsMDM.GetSystem(false);

This should return the whole system as a BizSystem object; however, as we found, the web service attempts to cast the array of applications into a dataset and then fails to do so, meaning that we cannot get the application data.  After some degree of head scratching it was decided to change the web service Reference.cs so the application list was returned as an object.  This enables us to then manually deserialize the XML into our own object and get access to the entire application object, as we should have had.  It does seem that if you are willing, and able, to use the PPS client DLLs, you can hook straight into them to get this to work less painfully (thanks to Casper and his post here); you need to reference Microsoft.PerformancePoint.Planning.Client.Common and any other dependent DLLs.
This is how we have deserialized this, having created the objects ArrayOfBizApplication and BizApplication:

sb.Append("<ArrayOfBizApplication>");
foreach (XmlNode xmlNode in xmlNodes)
{
    if (xmlNode.Name.ToLower() == "arrayofbizapplication")
    {
        sb.Append(xmlNode.InnerXml.ToString());
    }
}
sb.Append("</ArrayOfBizApplication>");

XmlSerializer serializer;
System.IO.StringReader sr = new System.IO.StringReader(sb.ToString());
XmlTextReader xmlTReader = new XmlTextReader(sr);
serializer = new XmlSerializer(typeof(ArrayOfBizApplication));
arrayOfBizApplicaion = (ArrayOfBizApplication)serializer.Deserialize(xmlTReader);

Now we have the array of applications, we should be able to get the application IDs and then the assignments from PPS, without the need to reference any Microsoft PPS DLLs.
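The wrap-and-deserialize pattern above can be shown end-to-end with a trivial stand-in type (the real BizApplication is of course much richer than this; the XML payload and the Name field here are made up for the demo):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class BizApplication
{
    public string Name;
}

public class DeserializeDemo
{
    public static void Main()
    {
        // Simulates the XML fragment rebuilt from the response nodes above
        string xml = "<ArrayOfBizApplication>" +
                     "<BizApplication><Name>Budget</Name></BizApplication>" +
                     "</ArrayOfBizApplication>";

        // XmlRootAttribute overrides the expected root element name so the
        // serializer accepts <ArrayOfBizApplication> as an array wrapper
        var serializer = new XmlSerializer(
            typeof(BizApplication[]), new XmlRootAttribute("ArrayOfBizApplication"));

        using (var reader = new StringReader(xml))
        {
            var apps = (BizApplication[])serializer.Deserialize(reader);
            Console.WriteLine(apps[0].Name); // prints "Budget"
        }
    }
}
```

The same XmlRootAttribute trick avoids hand-writing an ArrayOfBizApplication wrapper class at all if the only thing you need from it is the array.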