# Jeremy Kashel

### Jeremy Kashel's Blog

I recently delivered an Analysis Services training course to a client who were a bit surprised to learn that the standard list of Aggregation Functions for an Analysis Services measure doesn’t include Mean, Median and Mode. My answer was of course that all three can be delivered using MDX, so I thought it would make a good blog post to show how it’s done.

Before I begin, just a quick refresher for anyone who needs it: the following article seems to be a good reference on the difference between Mean, Median and Mode. Essentially, the Mean is the average, the Median is the middle value, whereas the Mode is the value that occurs most often.
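As a quick illustration of the three outside of MDX, Python’s statistics module covers them directly (the sample values here are made up purely for the example):

```python
import statistics

values = [2, 3, 3, 5, 7, 10]

print(statistics.mean(values))    # Mean: the average of the values -> 5.0
print(statistics.median(values))  # Median: the middle value (average of the middle two here) -> 4.0
print(statistics.mode(values))    # Mode: the most frequently occurring value -> 3
```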

Mean and Median

The Mean and the Median are both relatively easy to do in MDX, as there are built-in functions for both: AVG() and Median(). As they’re well documented here and here, I won’t spend too long covering them. In the following MDX query I’m producing the Mean and the Median of the product subcategories in Adventure Works:

```
WITH
MEMBER  [Measures].[Mean] AS AVG([Product].[Subcategory].[Subcategory].Members,
[Measures].[Internet Sales Amount])

MEMBER  [Measures].[Median] AS Median([Product].[Subcategory].[Subcategory].Members,
[Measures].[Internet Sales Amount])

SELECT  {[Measures].[Internet Sales Amount], [Measures].[Mean], [Measures].[Median]} ON 0,
NonEmpty([Product].[Subcategory].[Subcategory].Members, [Measures].[Internet Sales Amount]) ON 1
WHERE   ([Date].[Date].&[20070727])
```

If you run the query and copy the data out to Excel, you’ll see that it matches the result of Excel’s AVERAGE and MEDIAN functions.

Mode

Mode is harder to achieve, as there’s no built-in MDX function. Fortunately, Excel is on hand to help out, as Analysis Services allows you to use some of the Excel functions in MDX. There are warnings that come with this approach: you need to have Excel installed on the server and there can also be performance problems. But if you’re comfortable with these caveats, then the Excel Mode function may be an option for you.

If you do use the Excel Mode function then there are a few things to watch out for. Firstly, you’ll have to use the MDX SetToArray() function to pass the set in the format that Excel expects. Secondly, you may get an error returned with the description: “#Error The following system error occurred:  Invalid flags.” This is due to the data type of the Measure – it seems that Currency measures are not supported by this function. Therefore, I’m using Cdbl() on Internet Sales Amount to get this to work:

```
WITH
MEMBER  [Measures].[Demo] AS Cdbl([Measures].[Internet Sales Amount])

MEMBER  [Measures].[ExcelMode] AS
Excel!MODE(SetToArray(NONEMPTY([Product].[Subcategory].[Subcategory].Members, [Measures].[Demo])
* [Measures].[Demo]))
```

Applying this to the example query above, the mode returned is 8.99, as it’s the only number that appears twice.

Bi-Modal Result Sets

There is a bit more complexity to Mode in some situations, as you can get a bi-modal result set, where more than one value occurs the most. E.g. the mode of 1,2,2,3 is just 2, but the mode of 1,2,2,3,3,4 is both 2 and 3, which is why it’s known as bi-modal.
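As an aside, Python’s statistics.multimode() (available from Python 3.8) handles exactly this case, returning every value tied for the highest frequency:

```python
import statistics

# A single mode: 2 occurs the most
print(statistics.multimode([1, 2, 2, 3]))        # [2]

# A bi-modal set: 2 and 3 both occur twice
print(statistics.multimode([1, 2, 2, 3, 3, 4]))  # [2, 3]
```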

The Excel Mode function won’t help in this scenario, as it will just return one of the mode values. Interestingly, Excel 2010 does support this with the Mode.Mult() function, but I’ve been unable to get it to work, presumably because the Excel function returns an array.

All is not lost though; it’s possible to produce the mode using MDX. First of all, the numbers that I’m operating on are shown below. As you can see, there are two sets of two numbers that are the same:

As the following forum thread shows, mode can be achieved using MDX. I’m adapting that approach to give the following MDX:

```
WITH
--Produce a result set that will guarantee bi-modal results
MEMBER    [Measures].[Demo] AS
CASE WHEN [Product].[Subcategory].CurrentMember IS [Product].[Subcategory].&[31]  THEN 65.91
ELSE Cdbl([Measures].[Internet Sales Amount]) END

--Count how often each value appears
MEMBER [Measures].[ValueCount] AS
SUM(
Union([Product].[Subcategory].CurrentMember.Level.Members,
{[Product].[Subcategory].CurrentMember} AS Currentsub)
, IIF(([Product].[Subcategory].CurrentMember, [Measures].[Demo]) =
(Currentsub.Item(0).Item(0), [Measures].[Demo]), 1, null)
)

--Only get the items that appear the most
SET [MaxModes] AS
ORDER(FILTER(NONEMPTY([Product].[Subcategory].[Subcategory].Members, {[Measures].[Demo]}),
[Measures].[ValueCount] = MAX(NONEMPTY([Product].[Subcategory].[Subcategory].Members, [Measures].[Demo]),
[Measures].[ValueCount])), [Measures].[Demo], ASC)

SELECT   {[Measures].[Demo], [Measures].[ValueCount]} on 0,
[MaxModes]
--Filter out the duplicates
HAVING [MaxModes].CurrentOrdinal = 0 OR [Measures].[Demo] <>
([Measures].[Demo], [MaxModes].Item([MaxModes].CurrentOrdinal - 2)) ON 1
WHERE    ([Date].[Date].&[20070727])
```
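The logic of the query above (count how often each value appears, keep the values whose count equals the maximum, then remove duplicates) can be sketched in a few lines of Python, purely to illustrate the algorithm; the sample values here are made up:

```python
from collections import Counter

def modes(values):
    """Return all values tied for the highest frequency, sorted ascending."""
    counts = Counter(values)
    max_count = max(counts.values())
    return sorted(v for v, c in counts.items() if c == max_count)

# A bi-modal data set: 65.91 and 8.99 each appear twice
sales = [65.91, 65.91, 8.99, 8.99, 120.0, 35.0]
print(modes(sales))  # [8.99, 65.91]
```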

This gives the following correct result set, assuming we just want the two bi-modal values:

An alternative is to use an MDX Filter() or HAVING clause to just display all the sub categories that have the mode values, which would just require a small modification to the above code.

I’ve not used these approaches with big data volumes, but they should at least give you a few options if you’ve got to do this sort of calculation in your own environment.

Here's another quick tip when creating data models for use with Power View - Default Field Sets can be created in both Tabular SSAS and PowerPivot that give the user a shortcut for automatically adding multiple fields to a report.

As an example, here's a screenshot of the Default Field Set in PowerPivot - this will tell Power View that the following fields should be automatically selected when the table is added to a Power View report:

I'd seen this in the documentation a while back, but hadn't actually used it, nor could I see a way to select the table. In fact, although there's no checkbox, it's just a simple single click on the actual table name, as I’ve highlighted below:

When you click on the table name Product above, you will now get the following fields automatically added to a table visualisation:

So not exactly rocket science, but worth doing! Here's how to set up the Default Field set for PowerPivot and Tabular Analysis Services.

I’ve been using both PowerPivot and Power View quite a bit recently and, in addition to the post I did a while back, have made a few further observations on getting the two to play nicely together.

Building an Example PowerPivot Model

For this post I’ve created a very simple PowerPivot model, based on the Adventure Works Internet Sales fact table and its related tables. This means that, once the model is built, I can create a variety of Excel reports that summarise the measures in the FactInternetSales Adventure Works fact table, such as Sales Amount, Tax Amount or Order Quantity. In the example below I’ve built a very basic report showing Order Quantity by year:

On to Power View

Having already published the workbook to SharePoint, my task now is to create a Power View report, which can be done via the PowerPivot Gallery. If I wanted to try to create a visual version of the above report in Power View (e.g. a graph), my steps would be first of all to drag Calendar Year into the report, then Order Quantity. However, when doing this recently, the result wasn’t quite what I expected:

Order Quantity has not been summed at all, which is the opposite behaviour to that of PowerPivot. Consequently I can’t create a graph visualisation, because Power View thinks I don’t have any measures in my report:

Yet if we contrast this to measures such as Internet Sales Amount or Tax Amount, they all work fine. Upon looking at the field list for Internet Sales, we will see that Order Quantity is missing the sum symbol, as Power View has not interpreted it as a measure:

We can get around this in Power View with a bit of dragging and dropping, but we may as well look at why it happened in the first place. The reason is the way in which the PowerPivot import wizard interprets SQL data types. The OrderQuantity column in the FactInternetSales SQL table has a data type of Smallint, which will not be summarised by default in Power View. Decimal and Money types are fine, which is why Tax Amount and Sales Amount above look ok.

Rather than changing the data types, we can force Power View to interpret the column as a measure by going to the PowerPivot advanced tab and clicking on Summarize By. If the Summarize By value is changed from Default to Sum, then Power View will interpret the column as a measure:

If we republish the model and then go back to Power View, we will see quite a difference with the same actions as before. First of all the field list is looking correct:

Secondly, dragging just Calendar Year and Order Quantity will now give the following correct results:

Now that we have a measure, we can change the table into a visualisation of our choice:

Power View is the kind of product that power users are hopefully going to find intuitive, but we can make things even easier for users by making small changes like this.

There’s been a whole host of changes to MDS in the SQL Server 2012 RC (Release Candidate) 0 that came out the other week. This blog post gives an overview of the changes, before diving into detail on a few of them. At a high level, the following changes have been made to MDS:

• Improved Master Data Manager front screen UI and navigation paths
• Collections interface updated to Silverlight
• Improved Excel user interface & functionality
• Auto generation of entity code values, without using business rules
• New deployment tool
• SharePoint integration

Let’s take a look at each of these changes:

Improved Master Data Manager UI

Although the UI improvement (shown below) is good, the best thing about the Master Data Manager changes, in my opinion, is that clicking on the Explorer feature no longer takes you into the Model View, but instead takes you straight into your master data for your core entity (e.g. in a Customer model this entity would be Customer). I’m not sure if this would get a bit frustrating if you didn’t want the core entity, but then again everything seems very quick in RC0, so I don’t think it’s really going to matter.

There’s also a new button next to a domain-based attribute that apparently has been designed to help with many-to-many relationships. I can definitely see that working, but it’s useful to have anyway to jump to the member details for the domain-based attribute that you are viewing, many-to-many or not:

Collections Interface Updated to Silverlight

The collections interface is now much slicker, getting the Silverlight makeover seen in other areas of Master Data Manager. Switching between collections, for example, which could take a while in R2, now happens very quickly, making collections far more useable. The screen shot below shows how you edit collection members by picking members from one of the entities and adding those over into the collection by clicking the Add button:

Collections now have the concept of a Weight, meaning that you can alter the weighting value associated with the collection members, which could be useful for reporting purposes. The idea is that you extract the collection members and the weight values in a subscription view. A Weight column is actually included in the 2008 R2 collection subscription views, but there was no front end to modify the weight value, which has of course now changed:

Improved Excel User Interface and Functionality

The MDS Excel Ribbon has been given a makeover, meaning that you now see the following in the ribbon:

It’s interesting to see that the Favourites section has been replaced with the concept of ‘queries’. The idea is that you can send a query file via email to another user, provided that user has the Excel add-in. When the user double-clicks the query file (extension *.mdsqx), Excel will open and make a connection to MDS, using the connection and filter information provided in the file, with the user prompted to confirm that they would like to connect:

It’s also good to see that a few of the domain-based attribute issues have been addressed, namely the display of domain-based attribute names when you filter an entity, and also the display of the names and the codes together in the Excel sheet. Here’s a screenshot of how the attribute names are now visible when filtering on the Country domain-based attribute that exists in the Customer model:

Auto Generation of Entity Code Values

If you wanted the code to be auto-generated in 2008 R2, you had to use a business rule. You can still do that if you want, but the create entity admin screen has changed slightly to include an option to auto generate the code. This works slightly better than business rules in my opinion, at least as far as the Excel add-in is concerned, as the code is returned to the user immediately after publishing, whereas the business rules require you to do a refresh in Excel and of course need more development! Here’s a screenshot of the add entity screen:

New Deployment Tool

Deployment has been altered in SQL 2012, with the addition of a new deployment tool, plus the fact that subscription views now get deployed. It seems that the current R2 deployment method (in the Administration section of Master Data Manager) is still included but now will not deploy data. To do that you need to use MDSModelDeploy.exe, as explained here. As an example, here’s how you can deploy the sample Customer model using MDSModelDeploy.exe, for default installations:

```
MDSModelDeploy deploynew -package "C:\Program Files\Microsoft SQL Server\110\Master Data Services\Samples\customer_en.pkg" -model "Customer" -service "MDS1"
```

SharePoint Integration and Further Details

Master Data Manager now supports a set of parameters that allow the MDS UI to be displayed without the header, menu bar and padding area. This means that MDS can now be incorporated into SharePoint or other websites. For the details on this, as well as more details on the above points, take a look at the following Technet article.

Power View includes several ways to create visualisations with images, ranging from a simple table, to the scrollable tile visualisation. In this post I’m going to explain a bit about the tile visualisation, showing how it can be used in Power View, before showing a couple of tips for working with images in both PowerPivot and Analysis Services Tabular models.

Power View Tiles

The Power View Tiles visualisation allows you to display a scrollable series of images, which, when clicked on, will display data of your choosing. The report below is an example of this:

What’s nice about the View above is that the names of the products are automatically displayed underneath the images. This is a sensible place to display the product names, as it means the area below can focus on whatever product level data you want, in this case Revenue by Month for the selected product.

This works fine in the sample models (in my case the Tailspin Toys model that I think came with CTP3), but it won’t work in vanilla PowerPivot / Tabular models unless you configure a few model settings. Trying to reproduce the report above without these settings will give the following tile visualisation, minus the product names:

To address this in PowerPivot, you first of all need to switch to Advanced Mode, which you can do by selecting this option from the File menu in PowerPivot:

This gives you the following advanced tab:

Clicking on the highlighted Table Behaviour button in the ribbon gives you the following Table Behaviour window:

Here you can “change the default behaviour for different visualisation types and default grouping behaviour in client tools for this table”. The following properties should be set in order to get the tiles working correctly:

• Row Identifier – Specifies a column that only contains unique values, which allows the column to be used as an internal grouping key in client tools.
• Default Label – Specifies which column gives a row level display name, e.g. the product name in a Product table. This is key, as when a row level image is used, this property specifies which value to display alongside the image in tiles and other image-based visualisations.
• Default Image – Specifies which column contains images representing the row level data, e.g. pictures of products.

The full details for all these properties can be found here.

Tabular Models

Tabular Models contain the same properties, which can be edited when you open a tabular project in the new SQL Server Data Tools. The image below shows how the properties have been configured for the column in my model called Image:

Back to Power View

If we now build a tile visualisation using the image column, we will see that we get the names of the products returned along with the image. It’s also worth noting that the Image and Product Name columns now have an icon next to them, indicating that the field exhibits “row level” behaviour:

Finally, for some examples of what’s new in Power View SQL 2012 RC0 take a look at this post on the Reporting Services Team Blog.

Master Data Services Collections are probably not the first feature that comes to mind when you think of the MDS product. The hierarchies and member grids tend to be the core functionality but, as this post will hopefully show, MDS Collections are also useful.

Collections are essentially managed lists of members that can come from multiple different MDS explicit hierarchies, or even from another collection. The idea is that this “subset of members” can be maintained in MDS by a business user and then fed to external applications for reporting or other purposes.

Analysis Services Named Sets

One example of how collections can be used is to maintain Analysis Services named sets. Some named sets, such as a Top 50 Customers, don't require any maintenance, as it’s just the top 50 of all customers, based on a particular measure. On the other hand, sometimes named sets can be static lists, e.g. a list of core products that the user wants to see for reporting.

In the latter example, if a user wants the definition of a named set to change, they have to get IT to change the MDX script. MDS collections can help by allowing the user to control the named set definition, reducing the burden on IT.

Example Scenario

Here’s an example of how this is done. First of all, the end game is that I have a named set in Analysis Services that is currently just for 3 products. Therefore, a user can easily drop this set into an Excel report to get quick access to the products that are important to them:

So the challenge is that we need to find some way of extending this named set without doing it manually. This is where MDS starts to come in, so using the sample Product model that comes with MDS, I’ve created a new collection called Favourite Products, as shown in the image below:

If I go and edit the Favourite Products collection in MDS, then I can drag and drop more products into this collection, or remove some existing members. In this case, I’ve chosen to add two new products:

C#, AMO and the MDS API

So the named set represents the target that we need to get to, whereas the MDS collection shown above is the source. To get the contents of the MDS collection to update the named set, one way of doing it is to use the MDS API to pick up the collection members, and then use AMO to write the named set into the MDX script. I’m just doing this in a C# Windows application, but you could do it via a batch process, such as SSIS. For this post I’m just going to show the code, so here goes:

This post is already starting to feel too long, so I’m not going to show the basics of the MDS API. For that, take a look at a good post on the MDS Team blog here. Also, as anyone who knows me will no doubt agree, I don’t tend to get involved in doing C#, so don’t consider this to be production ready! It should give you an idea of the basics though. Anyway, assuming that we’re now connected to the MDS Web Service, I’ve created the following method that will return the members from the collection:

```
private HierarchyMembers ObtainHierarchyMembers(string entityId, string hierarchyId, string modelId, string versionId)
{
    HierarchyMembersGetRequest request = new HierarchyMembersGetRequest();
    HierarchyMembersGetResponse response = new HierarchyMembersGetResponse();
    request.HierarchyMembersGetCriteria = new HierarchyMembersGetCriteria();

    //State that our hierarchy type is a collection and that we want collection members
    request.HierarchyMembersGetCriteria.HierarchyType = HierarchyType.Collection;
    //Pass in the key search criteria to identify the correct collection in MDS
    request.HierarchyMembersGetCriteria.ParentEntityId = new Identifier { Name = entityId };
    request.HierarchyMembersGetCriteria.HierarchyId = new Identifier { Name = hierarchyId };
    request.HierarchyMembersGetCriteria.ModelId = new Identifier { Name = modelId };
    request.HierarchyMembersGetCriteria.VersionId = new Identifier { Name = versionId };

    request.HierarchyMembersGetCriteria.RowLimit = 50;
    request.International = new International();
    OperationResult result = new OperationResult();
    //Return the hierarchy members from the service
    return service.HierarchyMembersGet(request.International, request.HierarchyMembersGetCriteria, out result);
}
```

Before we use the above method, we need to connect to SSAS using AMO. That can be done quite easily with the following code:

```
Server server = new Server();
string connection = "Data Source=.;Catalog=Adventure Works DW 2008R2;";
server.Connect(connection);
```

After we’ve done all the usual error handling associated with connecting to a database, we need to pick up the SSAS database and cube:

```
Database database = server.Databases["Adventure Works DW 2008R2"];
//Pick up the cube ("Adventure Works" is the cube name in the sample database)
Cube cube = database.Cubes["Adventure Works"];
```

Now we’re ready to call our ObtainHierarchyMembers method, before looping through the results to build the named set:

```
//hm holds the HierarchyMembers returned by the ObtainHierarchyMembers method above
StringBuilder mdx = new StringBuilder("\n//Auto generated named set at " + DateTime.Now.ToString() +
    "\nCREATE SET CurrentCube.[Favourite Products] AS {");
int count = 1;
//Loop through the collection to build the mdx
foreach (ParentChild pc in hm.Members)
{
    //Add the members to the MDX string
    //This references the member by name
    //It would be possible to reference by member key, but would require more work
    mdx.Append("[Product].[Product].[" + pc.Child.Name + "]");
    if (count < hm.Members.Count())
    {
        mdx.Append(", ");
    }
    count++;
}
mdx.Append("};");
```
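For anyone who doesn’t read C#, the string-building step boils down to joining the member names into a CREATE SET statement; here’s a minimal Python sketch of the same idea (the member names are purely illustrative):

```python
def build_named_set(set_name, hierarchy, member_names):
    """Join member names into a CREATE SET statement for the cube's MDX script."""
    members = ", ".join(f"{hierarchy}.[{name}]" for name in member_names)
    return f"CREATE SET CurrentCube.[{set_name}] AS {{{members}}};"

mdx = build_named_set("Favourite Products", "[Product].[Product]",
                      ["Mountain-100", "Road-150"])
print(mdx)
# CREATE SET CurrentCube.[Favourite Products] AS {[Product].[Product].[Mountain-100], [Product].[Product].[Road-150]};
```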

Now we need to insert the named set in the correct place within the existing MDX script, bearing in mind it could already exist:

```
string currentScript = cube.MdxScripts[0].Commands[0].Text;
//Find the correct place to insert the named set within the MDX script:
int start = currentScript.IndexOf("\n//Auto generated named set at");
int end = 0;
StringBuilder newScript = new StringBuilder(currentScript);
if (start != -1)
{
    end = currentScript.IndexOf(";", start);
    //If the named set already exists, remove it
    newScript.Remove(start, end - start + 1);
}
else
{
    start = currentScript.Length;
}
//Insert the named set in the correct place
newScript.Insert(start, mdx.ToString());
//Update the cube's MDX script
cube.MdxScripts[0].Commands[0].Text = newScript.ToString();
```
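The key property of this find-and-replace logic is that it is idempotent: re-running the process replaces the previously generated set rather than appending a duplicate. A small Python sketch, using a simplified marker and script, shows the same behaviour:

```python
MARKER = "\n//Auto generated named set"

def upsert_named_set(script, generated_set):
    """Remove any previously generated set (marker through to its ';'), then insert the new one."""
    start = script.find(MARKER)
    if start != -1:
        end = script.index(";", start)
        script = script[:start] + script[end + 1:]
    else:
        start = len(script)
    return script[:start] + generated_set + script[start:]

base = "CALCULATE;"
generated = MARKER + "\nCREATE SET CurrentCube.[Favourites] AS {[A]};"
once = upsert_named_set(base, generated)
twice = upsert_named_set(once, generated)
print(once == twice)  # True: re-running replaces rather than duplicates
```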

Finally we just need to update the cube in order to write the MDX back to the cube:

```
//Call the update methods to update the cube
cube.MdxScripts[0].Update();
cube.Update();
```

User Reports

Now that the cube has been updated, if the Excel report is refreshed, then the two new products will appear in the list:

Summary

Controlling SSAS named sets like this won’t be for everybody. I’ve certainly worked with a few clients where a strict process has to be followed to update this sort of thing, but I can think of other companies that I know where this would be really useful.

Managing Analysis Services named sets is just one use for collections. Another example might be managing default multi-select parameters for SSRS reports. As collections are just lists of members that can be extracted easily, what you do with them is up to you!

This is just a quick post to announce a range of SQL Server training courses, organised (and in some cases delivered) by Chris Webb. To start off there’s a SQL Server course delivered by Christian Bolton in December, followed by an Analysis Services course delivered by Chris in February. I’ll be delivering a Master Data Services course in February, before Chris delivers an MDX course in March.

The details for all the courses are:

• SQL Server Internals and Troubleshooting Workshop - Christian Bolton – 6th – 7th December 2011

The Advanced Troubleshooting Workshop for SQL Server 2005, 2008 and R2 provides attendees with SQL Server internals knowledge, practical troubleshooting skills and a proven methodical approach to problem solving. The workshop will enable attendees to tackle complex SQL Server problems with confidence.

Full details and registration here.

• Real World Cube Design and Performance Tuning with Analysis Services – Chris Webb – February 2012

A two day course that takes real world experience in showing you how to build a best practice Analysis Services cube, covering design issues such as data warehouse design and complex cube modelling. Day two then covers performance optimisation for Analysis Services, including MDX optimisation and cube processing.

Full details and registration here.

• Introduction to Master Data Services with Jeremy Kashel – February 2012

An end to end look inside Master Data Services, this full day course will begin with a synopsis of Master Data Management before moving on to an overview of Microsoft SQL Server 2008 R2 Master Data Services (MDS). The remainder of the course will cover the major MDS topics, such as modelling and business rules, which will include a number of practical exercises.

More details and registration here.

• Introduction to MDX with Chris Webb – March 2012

The Introduction to MDX course aims to take you from the point of being a complete beginner with no previous knowledge of MDX up to the point where you can write 90% of the MDX calculations and queries you’ll ever need to write. The three-day course covers everything from the basics, such as sets, tuples and members, to more advanced concepts such as scoped assignments and performance tuning.

Full details and registration here.

As you may have already seen, Packt have released a new MDX book, namely MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook by Tomislav Piasevoli. The book is written as a cookbook, meaning that you can choose the individual ‘recipes’ that apply to whatever problem you’re trying to solve. Each recipe starts off with a section called ‘Getting Ready’, essentially giving you the base query to use, before moving on to ‘How to do it', which covers the main instructions for the recipe. There are then further sections, which explain how the example works and also suggest other functions/concepts that you may want to consider. This cookbook style makes it really easy to follow, and each recipe is displayed very clearly in the book. A wide range of MDX problems are covered, from time calculations to context-aware calculations. Not every piece of MDX functionality is covered, which is to be expected given the size of the book. It also doesn’t cover the usual introduction to members/sets etc. that MDX books tend to include, but it’s clearly stated that a working knowledge of MDX is a pre-requisite for the book. I found the copy that I’ve been reading in the Adatis office, but I really do like it, so I’ll definitely be ordering my own copy!

This is a quick blog post to announce that Adatis are running a contest in conjunction with Packt around Master Data Services. The winner will receive an Amazon Kindle pre-loaded with our Microsoft SQL Server 2008 R2 Master Data Services book.

In order to enter, all you need to do is follow @AdatisBI, then copy, paste and retweet our ‘eBook on Kindle’ message using your Twitter account. The message is:

Please mark your retweet with the hashtag (#), so that we can find and consider your entry. Only one entry per person will be accepted.

You can find out the full details of how to enter, as well as terms and conditions, on the following page.

Good luck!

Master Data Services in SQL Server Denali now includes an Excel add-in that allows MDS users to add and update data that exists within Master Data Services. For those of you that haven’t had a chance to download the latest CTP, this post gives an overview of what’s possible in the Excel Add-in.

At a high level, the following functionality is available within the MDS add-in:

• Retrieve members from a master data entity
• Add or Update members (including annotations) and publish the changes back to MDS
• Create entities in MDS if you have sufficient permission

I don’t want this post to go on for ever, so today I’m going to focus on retrieving and updating members.

Retrieving MDS Entity Members

The Excel Add-in comes with a sidebar called the Explorer that can be used to connect to an MDS model and retrieve data from an entity within the model. What I quite like is that there is a filter button that allows a user to specify how to filter the data before loading it. In the example below, I’ve connected to the sample customer model, and have filtered the customer entity to only show customers of type 1 and customers from Seattle:

On the same filtering window it’s possible to choose the attributes to display. Therefore, in the above example, clicking the ‘Load Data’ button will display a filtered list of customer members:

Editing Members

Once the members have been retrieved, editing is just a simple case of typing into the field that you want to change. In the example below I’ve chosen to change the names of two of the customers, which has caused the cells to be highlighted, informing me of the changes that I’ve made:

Domain-based attributes are also picked up nicely by Excel. Not that I imagine it would make too much sense, but it’s possible to change the Sales District from North Western US (WUSSL) to Central US (CEUS), for example. Excel handles this by rendering a drop-down for this domain attribute:

As it is in Master Data Manager, within Excel it’s also possible to display a domain attribute’s name.

Regardless of the type of attribute that gets changed, the changes will just remain in the Excel sheet until I click the publish button:

Clicking the publish button will result in being prompted to make an optional annotation for each of the changes:

The annotations, as well as the member updates, will be committed to the Master Data Services database. The end result is that the transactions can be viewed as per 2008 R2 in Master Data Manager, or by right clicking in Excel and choosing ‘View Transactions’.

Summary

The Excel add-in doesn’t contain all the functionality available within Master Data Manager, but provides an interface that Excel-based data stewards will clearly be very comfortable with. It’s also much easier to do bulk updates in the Excel front-end, when compared to the web front-end.

That’s about it for now. There’s more to the Excel add-in, which I’m aiming to cover at SQLBits 9, plus Master Data Manager has been given a complete overhaul in Denali – hopefully I’ll find the time to cover that soon…