Jeremy Kashel

Jeremy Kashel's Blog

Prototyping Analysis Services Cubes

After running a modelling workshop with end users, my next activity on a project is generally to produce a prototype model with some test data, in order to check with users that the planned star schema design will work. My tool of choice for this job tends to be Power Pivot, but I was presented with a slightly different solution when working with a client recently. This got me thinking that it would be worth contrasting the two approaches…

Power Pivot

First of all, let's start off with Power Pivot. The goal here is to quickly produce a prototype to verify the star schema design. I tend to just request that a business user brings along an Excel workbook with samples of each dimension and fact pre-extracted into Excel, which can then be loaded into Power Pivot. I find that this saves time, although as an alternative Power Pivot could of course extract directly from SQL Server, Oracle etc. if needed.

As an example, we can quickly walk through creating a very simple star schema model:

  • I’ve copied the data that represents my single fact table and related dimensions into Excel. In this case it’s to model a simple Sales Order Transactions fact, therefore my Excel workbook has dimensions such as Product, Customer and Sales Territory, as well as the data that represents the fact table:

image

  • After completing and closing the workbook, we now create a new workbook and open the Power Pivot window.
  • Within the Power Pivot window, the next step is to import the data from the Excel workbook created in step 1. This can be done by choosing From Other Sources->Excel workbook and then browsing to the file.
  • Power Pivot will now present us with all of the sheets that it finds in the workbook, so we select the sheets that represent the dimensions and facts:

image

  • The next step is to set up relationships between the various tables. Once this is done, we have a simple prototype data model, as shown below:

image

After a bit of tidying up (e.g. creating some hierarchies and removing unwanted columns), we can now build a Pivot Table in Excel that will help verify the star schema:

image

The key point here is that we have rapidly built a prototype, without the need for any ETL. This means we can quickly cover the design issues with the users, but this time with their actual data, rather than just on a whiteboard.

Analysis Services

The alternative approach that was presented to me recently is to develop the Analysis Services cube/Tabular model before carrying out the ETL. Essentially this means that the users connect to the cube as if it were a production system, unaware that under the hood the ETL is not yet complete.

How is this achieved? Essentially by putting logic in the SQL view layer. It’s a well-known Analysis Services best practice to bind your SSAS objects (e.g. dimensions, measure groups) to SQL Server views, rather than using objects within the DSV. Rather than each view pulling its data from a complete dimension or fact table, each view would instead pull its data from a staging area or a copy of the source database. The idea is that, over time, the contents of each view would be updated to point to the actual dimensions and facts, once they are built.

Therefore a normal view (in this case for a product dimension) that feeds Analysis Services might look like:

SELECT          ProductKey,
                ProductName,
                ProductCategory,
                ProductSubCategory
FROM            Dim.Product

With the ‘No ETL’ approach, the view looks very different, as any transformation and cleaning will have to be carried out within the view:

SELECT          ROW_NUMBER() OVER(ORDER BY PROD.ProductID) AS ProductKey,
                PROD.Name AS ProductName,
                CAST(ISNULL(SUB.Name, 'Unknown') AS VARCHAR(100)) AS ProductSubCategory,
                CAST(ISNULL(CAT.Name, 'Unknown') AS VARCHAR(100)) AS ProductCategory
FROM            [$(AdventureWorks)].Production.Product PROD
LEFT JOIN       [$(AdventureWorks)].Production.ProductSubcategory SUB ON SUB.ProductSubcategoryID = PROD.ProductSubcategoryID
LEFT JOIN       [$(AdventureWorks)].Production.ProductCategory CAT ON CAT.ProductCategoryID = SUB.ProductCategoryID

This therefore incurs some technical debt, as the cube gets built before the ETL, or even before the physical dimension table. But the idea is that you can get the users using the cube, even in production, and then gradually replace the views with proper ETL and dimension tables once you have some more feedback.

Conclusion

The Power Pivot method is tried and tested for me many times – I find that it’s very quick to get up and running. The Analysis Services approach is not quite as quick to get up and running, but offers the benefit that the cube can continually evolve in a fairly agile manner. It’s early days for me using this approach, but a disadvantage is that the views can get quite complex, impacting performance depending on the data volumes.

Both methods probably have their place. If I wanted to validate my design, I think I would continue to produce a quick Power Pivot model. On the other hand, if quick delivery to production is a must, and the complexity/data volumes are not an issue, then I would use the view approach.

Analysis Services Dimension Processing Errors

I’ve seen a few data quality issues recently in some client data warehouses/marts where I’ve been asked to carry out some Analysis Services work. These data quality issues have often led to the classic SSAS dimension processing error, namely:

“Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: 'dbo_DimProduct', Column: 'SubCategory', Value: 'Other'. The attribute is 'Sub Category'”

Normally this error is easy to track down, but there are a few scenarios that I’ve encountered where SSAS attribute properties actually make it harder for you to troubleshoot.

Scenario 1 – Trimming

When hitting the above error, the first thing to do would be to run a query to see where the duplicate attribute is. In my example, I’ve just produced a very simple dimension that’s loosely based on Adventure Works – here are the attribute relationships:

image

Therefore the first place to start would be to write a query that checks if we have any Sub Categories that are duplicated across Categories, such as:

SELECT        SubCategory
FROM          DimProduct
GROUP BY      SubCategory
HAVING        COUNT(DISTINCT Category) > 1

In my case, this actually yields no results. So what’s happening here? We can’t find any duplicates in SQL, but SSAS thinks there are duplicates. Well… Analysis Services dimension attributes contain a property called Trimming, which, by default, will remove spaces at the end of an attribute key or name. Needless to say, a dimension shouldn’t actually contain leading or trailing spaces, but in this specific scenario, the ETL wasn’t as robust as it could be. In the actual example I encountered, there was a tab at the end of one of the attribute names. To visualise that in the DimProduct example that I’ve created, the highlighted attribute has a tab at the end:

image
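If trimming is the suspect, a query along the following lines will surface the problem rows; SQL treats ‘Other’ and ‘Other’ plus a tab as distinct values, whereas SSAS trims them into duplicates. This is just a sketch against my example table, so adjust the table and column names to suit:

SELECT        SubCategory
FROM          DimProduct
WHERE         SubCategory LIKE '% '             --trailing space
OR            SubCategory LIKE '%' + CHAR(9)    --trailing tab
OR            SubCategory LIKE ' %'             --leading space
OR            SubCategory LIKE CHAR(9) + '%'    --leading tab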

Scenario 2 – NullProcessing

The next scenario I want to highlight is the same error, but a different dimension. Again I’m going to create a simple example to illustrate what actually happened. This time I’ve got a customer dimension table, as follows:

image

The error is: “A duplicate attribute key has been found when processing: Table: 'dbo_DimCustomer', Column: 'CustomerCountryKey', Value: '0'. The attribute is 'Customer Country'”

A quick query on the above data will reveal that a CustomerCountryKey of 0 doesn’t actually exist twice for the same country, so what’s happened here? Again it’s due to a dimension property, this time on the attribute key, called NullProcessing. As its default is Automatic, it means that NULLs will get converted to zero during dimension processing. As we already have a member with a CustomerCountryKey of 0, we get an error.
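To see the collision coming before SSAS does, a quick query along these lines (column names are from my example table) will show the rows that will all end up under a key of 0 once processing has converted the NULLs:

SELECT        CustomerCountryKey,
              COUNT(*) AS MemberRows
FROM          DimCustomer
WHERE         CustomerCountryKey = 0
OR            CustomerCountryKey IS NULL
GROUP BY      CustomerCountryKey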

ETL and Data Quality

These sorts of issues highlight just why handling data quality during the ETL is so important! For the first scenario, some basic trimming/cleaning can fix the issue before it hits Analysis Services. As for the second, this is partly down to data warehouse/mart design. Personally I would never allow a dimension attribute to be null, partly because you shouldn’t present null attributes to a user, but also due to the lack of control that can result, as shown above. Therefore a bit of design work up front to ensure that you have a robust data model is essential. Interestingly, the properties that I’ve mentioned don’t exist for tables in Analysis Services Tabular. While I think there are a few SSAS MD features that need to make it over into the next version of Tabular, I can personally live without the two properties outlined above.
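Going back to the two scenarios, here’s a rough sketch of how both issues could be fixed upstream, in the ETL or in the views that feed SSAS. The table and column names are from my examples, and the -1 unknown member key is an assumption – use whatever convention your warehouse follows:

--Scenario 1: strip tabs and trim spaces from attribute values before SSAS sees them
SELECT        ProductKey,
              LTRIM(RTRIM(REPLACE(SubCategory, CHAR(9), ''))) AS SubCategory,
              LTRIM(RTRIM(REPLACE(Category, CHAR(9), ''))) AS Category
FROM          DimProduct

--Scenario 2: map null keys to an explicit unknown member rather than relying on NullProcessing
SELECT        CustomerKey,
              ISNULL(CustomerCountryKey, -1) AS CustomerCountryKey
FROM          DimCustomer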

Tabular Analysis Services Perspectives and Measure Groups

I’ve always liked how Excel and other client tools deal with measure groups in a multidimensional SSAS cube. Once you connect to the cube, you get a drop down in the Pivot Table fields pane that lets you choose which measure group you want. From there, you get a filtered list of measures and dimensions. For example, if I connect to the Adventure Works 2012 sample cube, I get the following:

image

By picking Internet Orders, I will see only the measures and dimensions that relate to Internet Orders, which provides me with a good way of navigating a large cube.

Tabular

Unfortunately though, if I connect to a Tabular model (e.g. the sample Adventure Works 2012 model that I got from CodePlex) and click the same drop down, I will see every table in the entire model, whether the table is a dimension, a fact or something else. E.g. for a tabular model that contains 3 fact tables, but 15 tables in total, I see all 15 tables:

image

This isn’t ideal in my opinion, as I’d much rather that users had the ability to quickly jump to an area of interest, which measure groups achieve fairly well. As I have 3 fact tables, I would expect to see those 3 fact tables/measure groups in the drop down.

I did accept this as a quirk of tabular until I stumbled across something recently. If you connect to a tabular perspective, then Excel behaves exactly as multidimensional does, i.e. it shows you only measure groups in the drop down. To illustrate, I’ve added a new perspective to the model that contains all tables, and I’ve called this perspective ‘Adventure Works’:

image

Now I can connect to this new perspective via Excel. Remember the perspective contains all my tables, so it shouldn’t be any different from connecting to the model itself:

image

The perspective does give a different result though – now only the actual measure groups are displayed to me in the drop down, which is much more user friendly:

image

Summary

This is completely different to the way that perspectives work in multidimensional. When making MD cubes, I wouldn’t always need perspectives, just because the measure groups provided a ‘natural’ way for users to pick the subset of the cube that was of interest to them. Now with tabular, it seems that using a perspective will actually improve the user experience.

Debug MDX with Generate

On occasion you may find that your MDX queries don’t quite give you the result that you expect, which sometimes may be due to the way in which a certain bit of MDX has changed the context of part of your query.

This can sometimes be tough to figure out, especially with functions like Sum() or Filter(), whereby we are looping over a set and then only returning the end product. For example, take the following query, where I’m trying to find out the count of UK male customers who have bought Road Bikes in both the current and the previous year:

WITH MEMBER [Measures].[Repeat Customer Count] AS
SUM(
    [Customer].[Customer].[Customer].Members,
    IIF([Measures].[Internet Sales Amount] <> 0 AND
    ([Measures].[Internet Sales Amount], [Date].[Calendar Year].PrevMember) <> 0, 1, NULL)
    )

SELECT      {[Measures].[Repeat Customer Count]} ON 0,
            [Date].[Calendar Year].[Calendar Year] ON 1
FROM        [Adventure Works]
            --Road Bikes, UK, Male
WHERE       ([Product].[Subcategory].&[2], [Sales Territory].[Sales Territory Country].&[United Kingdom],
            [Customer].[Gender].&[M])

It returns the following result set, which I know to be incorrect:

image

Unlike debugging either of the axes, where of course we can see what gets returned, debugging examples such as the above might be difficult. The MDX Generate function can help here, as it works in a similar way to Sum(), by iterating over a set and then evaluating an argument once per iteration. The idea is that Generate will build us a new set, by using the looping context of the first set that you pass it. In this case, however, we can use it to return us a concatenated string based on our input set.

What I’ve done here is to use the CurrentMember of the attributes that I think might be causing problems (Customer, Gender) and then just print them out as a string, in order to see if the attributes have the context that I expect. Therefore I’ve replaced the ‘1’ in my original query with the following string:

" | " + [Customer].[Customer].CurrentMember.MEMBER_CAPTION + ", " 
    + [Customer].[Gender].CurrentMember.MEMBER_CAPTION

The full query and results then become:

WITH MEMBER [Measures].[Repeat Customer Count] AS
GENERATE(
        [Customer].[Customer].[Customer].Members,
        IIF([Measures].[Internet Sales Amount] <> 0 AND
        ([Measures].[Internet Sales Amount], [Date].[Calendar Year].PrevMember) <> 0,
        " | " + [Customer].[Customer].CurrentMember.MEMBER_CAPTION + ", "
        + [Customer].[Gender].CurrentMember.MEMBER_CAPTION, NULL)
        )

SELECT      {[Measures].[Repeat Customer Count]} ON 0,
            [Date].[Calendar Year].[Calendar Year] ON 1
FROM        [Adventure Works]
            --Road Bikes, UK, Male
WHERE       ([Product].[Subcategory].&[2], [Sales Territory].[Sales Territory Country].&[United Kingdom],
            [Customer].[Gender].&[M])

image

I can spot straight away that female customers have been returned in 2007, which is due to the fact that sets specified inside the WITH MEMBER section are not evaluated in the context of the slicer. In this case the problem can be solved by using the EXISTING keyword inside the Sum, so that the customers get sliced by the gender attribute:

SUM(
    EXISTING [Customer].[Customer].[Customer].Members,
    IIF([Measures].[Internet Sales Amount] <> 0 AND
    ([Measures].[Internet Sales Amount], [Date].[Calendar Year].PrevMember) <> 0, 1, NULL)
    )

This won’t be applicable in all MDX debugging scenarios, but hopefully it will help someone out.

Analysis Services Member Properties and Excel

When working with users who are browsing a cube using Excel 2010, a common requirement that I find is that the users want to create tabular or grid based reports. E.g. “I have a list of currencies, I want to display the currency code in an adjacent column”. Or “Here are my account codes, I want to also see the account type”.

The common way that I see users attempt this is to stack attribute hierarchies on top of one another. For example, using Adventure Works, to get the currency names and codes together, I can stack the two attribute hierarchies:

image

This produces the following report:

image

Not exactly what we want – ideally we want the currency codes in separate columns, on the same line as the name.

An Alternative Approach

One well-known way to achieve this layout is to right click on the relevant member and display its member properties; another is to set the layout to Tabular Form. I tend to prefer the member properties route as I think it produces slightly better reports, so I’ll be covering a simple member properties design tip shortly. First of all, here’s the sort of report that I’m able to create using member properties:

image

I’m able to do this because the Adventure Works Source Currency Code Attribute Hierarchy has the Source Currency (names) as a member property. Therefore, I get the following option when I right click on the currency code:

image

Displaying member properties will only work if the attribute relationships are set up correctly. The reason that I’m able to see the member property ‘Source Currency’ is because of the following attribute relationship:

image
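Incidentally, member properties can also be checked outside of Excel, with an MDX query that reads the property directly. The following is just a sketch – the exact hierarchy unique names may differ in your cube:

WITH MEMBER [Measures].[Currency Name] AS
    [Source Currency].[Source Currency Code].CurrentMember.Properties("Source Currency")
SELECT      {[Measures].[Currency Name]} ON 0,
            [Source Currency].[Source Currency Code].[Source Currency Code].Members ON 1
FROM        [Adventure Works]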

However, if we just display the currency name, using the Source Currency attribute hierarchy, and then try right clicking to see the currency code, we’ll unfortunately see nothing in the ‘Show Properties in Report’ submenu. This is because the currency name is a property of the code, rather than the other way around. I’d argue that for a user it’s intuitive to get the name from the code, or vice versa, as they sit at the same level. With a few simple changes we can achieve this, essentially by adding an extra attribute to the dimension…

Within the dimension editor in BIDS, the first step is to add the new attribute. In this case it should be based on the Currency Code column that is, of course, already in use. That doesn’t matter; we want to use it twice. Therefore, I’ve created a new attribute called Currency Code and have set AttributeHierarchyEnabled = False, as there’s no need for anyone to browse this attribute. Now on to the all-important attribute relationships. The attribute hierarchy that contains the names is called Source Currency. As it’s not possible to get the currency codes by right clicking on this attribute hierarchy, Source Currency is the attribute that needs to be related to the new Currency Code attribute:

image

Finally, the end result for the user is that they now get the ability to easily get access to the currency codes by right clicking on the names:

image

Power View Default Field Set

Here's another quick tip when creating data models for use with Power View - Default Field Sets, which give the user a shortcut for automatically adding multiple fields to a report, can be created in both Tabular SSAS and PowerPivot.

As an example, here's a screenshot of the Default Field Set in PowerPivot - this will tell Power View that the following fields should be automatically selected when the table is added to a Power View report:

Default Field Set for blog

I'd seen this in the documentation a while back, but hadn't actually used it, nor could I see a way to select the table. In fact, although there's no checkbox, it's just a simple single click on the actual table name, as I’ve highlighted below:

image

When you click on the table name Product above, you will now get the following fields automatically added to a table visualisation:

image

So not exactly rocket science, but worth doing! Here's how to set up the Default Field Set for PowerPivot and Tabular Analysis Services.

Working with Images in Power View

Power View includes several ways to create visualisations with images, ranging from a simple table, to the scrollable tile visualisation. In this post I’m going to explain a bit about the tile visualisation, showing how it can be used in Power View, before showing a couple of tips for working with images in both PowerPivot and Analysis Services Tabular models.

Power View Tiles

The Power View Tiles visualisation allows you to display a scrollable series of images, which, when clicked on, will display data of your choosing. The report below is an example of this:

image

What’s nice about the view above is that the names of the products are automatically displayed underneath the images. This is a sensible place to display the product names, as it means the area below can focus on whatever product level data you want, in this case Revenue by Month for the selected product.

This works fine in the sample models (in my case the Tailspin Toys model that I think came with CTP3), but it won’t work in vanilla PowerPivot / Tabular models unless you configure a few model settings. Trying to reproduce the report above without these settings will give the following tile visualisation, minus the product names:

image

PowerPivot Advanced Settings

To address this in PowerPivot, you first of all need to switch to Advanced Mode, which you can do by selecting this option from the File menu in PowerPivot:

image

This gives you the following advanced tab:

image

Clicking on the highlighted Table Behaviour button in the ribbon gives you the following Table Behaviour window:

image

Here you can “change the default behaviour for different visualisation types and default grouping behaviour in client tools for this table”. The following properties should be set in order to get the tiles working correctly:

  • Row Identifier – Specifies a column that only contains unique values, which allows the column to be used as an internal grouping key in client tools.
  • Default Label – Specifies which column gives a row level display name, e.g. the product name in a Product table. This is key, as when a row level image is used, this property specifies which value to display alongside the image in tiles and other image-based visualisations.
  • Default Image – Specifies which column contains images representing the row level data, e.g. pictures of products.

The full details for all these properties can be found here.

Tabular Models

Tabular Models contain the same properties, which can be edited when you open a tabular project in the new SQL Server Data Tools. The image below shows how the properties have been configured for the column in my model called Image:

image

Back to Power View

If we now build a tile visualisation using the image column, we will see that we get the names of the products returned along with the image. It’s also worth noting that the Image and Product Name columns now have an icon next to them, indicating that the field exhibits “row level” behaviour:

image

Finally, for some examples of what’s new in Power View SQL 2012 RC0 take a look at this post on the Reporting Services Team Blog.

Managing SSAS Named Sets with Master Data Services Collections

Master Data Services Collections are probably not the first feature that comes to mind when you think of the MDS product. The hierarchies and member grids tend to be the core functionality, but as this post will hopefully show, MDS Collections are also useful.

Collections are essentially managed lists of members that can come from multiple different MDS explicit hierarchies, or from another collection. The idea is that this “subset of members” can be maintained in MDS by a business user and then fed to external applications for reporting or other purposes.

Analysis Services Named Sets

One example of how collections can be used is to maintain Analysis Services named sets. Some named sets, such as a Top 50 Customers, don't require any maintenance, as it’s just the top 50 of all customers, based on a particular measure. On the other hand, sometimes named sets can be static lists, e.g. a list of core products that the user wants to see for reporting.

In the latter example, if a user wants the definition of a named set to change, they have to get IT to change the MDX script. MDS collections can help by allowing the user to control the named set definition, reducing the burden on IT.
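To make the difference concrete, here’s roughly what the two kinds of named set look like in a cube’s MDX script. The member names are purely illustrative, loosely based on Adventure Works:

//Self-maintaining: always the current top 50, no manual changes needed
CREATE SET CurrentCube.[Top 50 Customers] AS
    TOPCOUNT([Customer].[Customer].[Customer].Members, 50, [Measures].[Internet Sales Amount]);

//Static list: every change to the definition means editing the MDX script
CREATE SET CurrentCube.[Core Products] AS
    {[Product].[Product].[Mountain-100 Silver, 38], [Product].[Product].[Road-150 Red, 62]};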

Example Scenario 

Here’s an example of how this is done. First of all, the end game is that I have a named set in Analysis Services that is currently just for 3 products. Therefore, a user can easily drop this set into an Excel report to get quick access to the products that are important to them:

image

So the challenge is that we need to find some way of extending this named set without doing it manually. This is where MDS starts to come in, so using the sample Product model that comes with MDS, I’ve created a new collection called Favourite Products, as shown in the image below:

image

If I go and edit the Favourite Products collection in MDS, then I can drag and drop more products into this collection, or remove some existing members. In this case, I’ve chosen to add two new products:

image

C#, AMO and the MDS API

So the named set represents the target that we need to get to, whereas the MDS collection that’s shown is the source. To get the contents of the MDS collection to update the named set, one way of doing it is to use the MDS API to pick up the collection members, and then to use AMO to write the named set into the MDX script. I’m just doing this in a C# Windows application, but you could do it via a batch process, such as SSIS. For this post I’m just going to show the code, so here goes:

This post is already starting to feel too long, so I’m not going to show the basics of the MDS API. For that, take a look at a good post on the MDS Team blog here. Also, as anyone who knows me will no doubt agree, I don’t tend to get involved in doing C#, so don’t consider this to be production ready! It should give you an idea of the basics though. Anyway, assuming that we’re now connected to the MDS Web Service, I’ve created the following method that will return the members from the collection:

private HierarchyMembers ObtainHierarchyMembers(string entityId, string hierarchyId, string modelId, string versionId)
{
    HierarchyMembersGetRequest request = new HierarchyMembersGetRequest();
    request.HierarchyMembersGetCriteria = new HierarchyMembersGetCriteria();

    //State that our hierarchy type is a collection and that we want collection members
    request.HierarchyMembersGetCriteria.HierarchyType = HierarchyType.Collection;
    //Pass in the key search criteria to identify the correct collection in MDS
    request.HierarchyMembersGetCriteria.ParentEntityId = new Identifier { Name = entityId };
    request.HierarchyMembersGetCriteria.HierarchyId = new Identifier { Name = hierarchyId };
    request.HierarchyMembersGetCriteria.ModelId = new Identifier { Name = modelId };
    request.HierarchyMembersGetCriteria.VersionId = new Identifier { Name = versionId };

    request.HierarchyMembersGetCriteria.RowLimit = 50;
    request.International = new International();
    OperationResult result = new OperationResult();
    //Return the hierarchy members from the service
    return service.HierarchyMembersGet(request.International, request.HierarchyMembersGetCriteria, out result);
}

Before we use the above method, we need to connect to SSAS using AMO. That can be done quite easily with the following code:

Server server = new Server();
string connection = "Data Source=.;Catalog=Adventure Works DW 2008R2;";
server.Connect(connection);

After we’ve done all the usual error handling associated with connecting to a database, we need to pick up the SSAS database and cube:

Database database = server.Databases["Adventure Works DW 2008R2"];
Cube cube = database.Cubes["Adventure Works"];

Now we’re ready to call our ObtainHierarchyMembers method, before looping through the members that it returns to build the named set:

//Call the ObtainHierarchyMembers method shown above to pick up the collection
//(the entity, hierarchy, model and version names passed in here are illustrative)
HierarchyMembers hm = ObtainHierarchyMembers("Product", "Favourite Products", "Product", "VERSION_1");
StringBuilder mdx = new StringBuilder("\n//Auto generated named set at " + DateTime.Now.ToString() + 
    "\nCREATE SET CurrentCube.[Favourite Products] AS {");
int count = 1;
//Loop through the collection to build the mdx
foreach (ParentChild pc in hm.Members)
{
    //Add the members to the MDX string
    //This references the member by name
    //It would be possible to reference by member key, but would require more work
    mdx.Append("[Product].[Product].[" + pc.Child.Name + "]");
    if (count < hm.Members.Count())
    {
        mdx.Append(", ");
    }
    count++;
}
mdx.Append("};");
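For the collection shown earlier, the string built up in mdx would come out looking something like the following (member names illustrative). Note the comment on the first line – it acts as a marker, which the next step uses to find and replace any previously generated version of the set:

//Auto generated named set at 01/02/2012 09:30:00
CREATE SET CurrentCube.[Favourite Products] AS {[Product].[Product].[Mountain-100 Silver, 38], [Product].[Product].[Road-150 Red, 62], [Product].[Product].[Touring-1000 Blue, 46]};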

Now we need to insert the named set in the correct place within the existing MDX script, bearing in mind it could already exist:

string currentScript = cube.MdxScripts[0].Commands[0].Text;
//Find the correct place to insert the named set within the MDX script:
int start = currentScript.IndexOf("\n//Auto generated named set at");
int end = 0;
StringBuilder newScript = new StringBuilder(currentScript);
if (start != -1)
{
    end = currentScript.IndexOf(";", start);
    //If the named set already exists, remove it
    newScript.Remove(start, end - start + 1);
}
else
{
    start = currentScript.Length;
}
//Insert the named set in the correct place
newScript.Insert(start, mdx.ToString());
//Update the cube's MDX script
cube.MdxScripts[0].Commands[0].Text = newScript.ToString();

Finally, we just need to update the cube in order to write the MDX back to the server:

//Call the update methods to update the cube
cube.MdxScripts[0].Update();
cube.Update();

User Reports

Now that the cube has been updated, if the Excel report is refreshed, then the two new products will appear in the list:

image

Summary

Controlling SSAS named sets like this won’t be for everybody. I’ve certainly worked with a few clients where a strict process has to be followed to update this sort of thing, but I can think of other companies I know where this would be really useful.

Managing Analysis Services named sets is just one use for collections. Another example might be managing default multi-select parameters for SSRS reports. As collections are just lists of members that can be extracted easily, what you do with them is up to you!

Master Data Services Training in the UK

This is just a quick post to announce a range of SQL Server training courses, organised (and in some cases delivered) by Chris Webb. To start off there’s a SQL Server course delivered by Christian Bolton in December, followed by an Analysis Services course delivered by Chris in February. I’ll be delivering a Master Data Services course in February, before Chris delivers an MDX course in March.

The details for all the courses are:

  • SQL Server Internals and Troubleshooting Workshop - Christian Bolton – 6th – 7th December 2011

The Advanced Troubleshooting Workshop for SQL Server 2005, 2008 and R2 provides attendees with SQL Server internals knowledge, practical troubleshooting skills and a proven methodical approach to problem solving. The workshop will enable attendees to tackle complex SQL Server problems with confidence.

Full details and registration here.

  • Real World Cube Design and Performance Tuning with Analysis Services – Chris Webb – February 2012

A two day course that uses real world experience to show you how to build a best practice Analysis Services cube, covering design issues such as data warehouse design and complex cube modelling. Day two then covers performance optimisation for Analysis Services, including MDX optimisation and cube processing.

Full details and registration here.

  • Introduction to Master Data Services with Jeremy Kashel – February 2012

An end to end look inside Master Data Services, this full day course will begin with a synopsis of Master Data Management before moving on to an overview of Microsoft SQL Server 2008 R2 Master Data Services (MDS). The remainder of the course will cover the major MDS topics, such as modelling and business rules, which will include a number of practical exercises.

More details and registration here.

  • Introduction to MDX with Chris Webb – March 2012

The Introduction to MDX course aims to take you from the point of being a complete beginner with no previous knowledge of MDX up to the point where you can write 90% of the MDX calculations and queries you’ll ever need to write. The three day course covers the basics, such as sets, tuples and members, through to more advanced concepts such as scoped assignments and performance tuning.

Full details and registration here.

MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook - Book Review

As you may have already seen, Packt have released a new MDX book, namely MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook by Tomislav Piasevoli.

The book is written as a cookbook, meaning that you can choose the individual ‘recipes’ that apply to whatever problem you’re trying to solve. Each recipe starts off with a section called ‘Getting Ready’, essentially giving you the base query to use, before moving on to ‘How to do it’, which covers the main instructions for the recipe. There are then further sections, which explain how the example works and also suggest other functions/concepts that you may want to consider. This sort of cookbook style makes it really easy to follow; each recipe is displayed very clearly in the book.

A wide range of MDX problems are covered, from time calculations to context-aware calculations. Not every piece of MDX functionality is covered, which is to be expected given the size of the book. It also doesn’t cover the usual introduction to members/sets etc. that MDX books tend to cover, but it’s clearly stated that having a working knowledge of MDX is a pre-requisite for the book.

I found the copy that I’ve been reading in the Adatis office, but I really do like it, so I’ll definitely be ordering my own copy!

image