Sacha Tomey

Sacha Tomey's Blog

Convert PivotTable to PerformancePoint Planning Matrix

I spotted a 'Convert to PerformancePoint Matrix' menu option on the context menu
of an Excel PivotTable recently.

I'd not noticed it before - it is documented within the PPS Excel Add-In help but only once, and subtly at that.  I'm glad to report, on first impressions, it appears to do a fairly decent job.

It's a one-way process that cannot be undone, and it creates a 'User defined' MDX-style matrix.  You are therefore limited to making subsequent edits using the Report Properties window, but suffice to say it's a handy little feature that I'll no doubt use more and more now that I've found it.

MDX Stored Procedures (Part 2)

What seems like a lifetime ago I posted about MDX Stored Procedures and how we'd built a helper class that provides a wrapper around an MDX query.

The helper class has been stable for quite some time now and a few people have been asking me when I'm going to release it.  Well, today is the day.

You can download a Visual Studio 2005 C# Quick Start project from here.

The idea behind the class is that text files containing parameterised MDX queries can be executed against a cube with just a couple of lines of code.
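By way of illustration, a query file might look something like the following.  (The placeholder syntax shown here is my assumption for the sketch; since parameter substitution is a simple search and replace, use whatever token convention you've chosen consistently between the file and your parameter names.)

```
-- MDX/Product Category List.txt (hypothetical contents)
SELECT
    { [Measures].[Internet Sales Amount] } ON COLUMNS,
    { [Product].[Category].Members } ON ROWS
FROM [Adventure Works]
WHERE ( [Date].[Calendar Year].&[@CalendarYear] )
```

Here @CalendarYear is the token that a parameter value would be substituted into before the query is executed.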

Here are some notes to complement the quick start to get you going:

Step 1:  Create an instance of the helper class

MDXHelper helper = new MDXHelper(
    "localhost",                                // Analysis Services server (example name)
    "Adventure Works DW Standard Edition",      // database
    "Adventure Works",                          // cube
    MDXHelper.MDXResourceFormat.RelativeURL,    // format of the final parameter
    "MDX/Product Category List.txt");           // relative URL of the query file

There are a couple of overloads on the constructor.  The one above specifies all the options: the first three parameters set the Analysis Services server, the database and the cube, and the last two specify the format and location of the MDX query.  In this case the format is a Relative URL, meaning the final parameter must be a relative URL pointing to a text file containing the desired query.  The alternative to a Relative URL is 'Text', which means the final parameter is the query itself rather than a path to a file.

Step 2:  Add the parameters to the parameter collection.


Parameters are simply name/value pairs and are held internally as a generic Dictionary<string, object>.  Parameters are optional, but you will get an error if you invoke a parameterised query without defining a value for every parameter.
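Something along these lines, for example.  (The AddParameter method name is my assumption for the sketch; check the class in the quick start for the exact member it exposes over the internal dictionary.)

```csharp
// Define a value for every token used in the query file;
// values are objects, so numbers and strings are both fine.
helper.AddParameter("CalendarYear", 2004);
helper.AddParameter("Category", "Bikes");
```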

Step 3: Execute the Query

DataTable categoryList = helper.ExecuteDataTable();

A DataTable or a CellSet can be returned; again, various overloads exist to provide additional flexibility.  For example, you can set the resource format and query path on ExecuteDataTable itself, e.g.

DataTable result = helper.ExecuteDataTable(MDXHelper.MDXResourceFormat.RelativeURL, "MDX/Internet Sales by Country.txt");

Additional Notes

The query definitions are cached in the ASP.NET application cache so if you make any changes to the query definition files you'll need to recycle the application pool.

The helper class will only work for web projects.  There is no support for WinForms/smart client deployments.

If you are returning a data table you can specify some additional options. 

  • Untyped is the fastest and default, returning a plain data table based on a flattened CellSet object. 
  • Typed brings back a Data Table with the Schema column data types set to match the CellSet object it was derived from.  This can be slow for large datasets.
  • UntypedFormatted brings back a plain data table but applies the formatting (e.g. currency formats) held in the cube.

There are some additional properties containing metadata and debugging information that have proved useful.  ExecutedMdxQuery contains the actual MDX that was executed after the parameter replacements have occurred.  DatabaseServerVersion, CubeDescription, CubeLastProcess and CubeLastSchemaUpdate are pulled from the server when a query is executed.

You can use parameters to define any aspect of the query.  For example, you could use parameters to swap rows and columns, define measures, pretty much anything you can achieve with search and replace.  On one hand this provides no end of flexibility; on the other, it provides diddly squat compile-time checking!
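To illustrate the axis-swapping idea, a query file could parameterise the axes themselves.  (Again, the placeholder convention here is my assumption; the file and parameter names are illustrative.)

```
-- MDX/Generic Grid.txt (hypothetical contents)
SELECT
    { @ColumnSet } ON COLUMNS,
    { @RowSet } ON ROWS
FROM [Adventure Works]
```

Supplying "[Product].[Category].Members" as RowSet and "[Measures].[Internet Sales Amount]" as ColumnSet, or the two the other way round, swaps the layout without touching any code.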

The same database connection is maintained between query execution calls.

All feedback gratefully received.

Planning a PerformancePoint Planning Implementation

There is a useful spreadsheet (what other format would it be!?) to assist with requirements gathering and implementation planning for a PerformancePoint Planning application.

It's a strange mix of high- and lower-level detail, but it does ensure you capture, or at least think about, each element of your application and the business requirements it should satisfy.  It's a useful starting point/stake in the ground, although some of the questions will require much more thought and analysis behind the scenes than others.

The main topics it covers are:

  • Completing an Impact Assessment
  • Application, Model, and Model Site Planning Considerations
  • Model Type Planning Considerations
  • Dimension Planning Considerations
  • Currency Translation Planning Considerations
  • Data Loading Planning Considerations
  • Business and Process Planning Considerations
  • Reporting Planning Considerations
  • Business Rule Planning Considerations
  • Model-to-Model Association Planning Considerations
  • Diagramming the Application

PerformancePoint Planning Server - Upgrading from CTP*

The PPS Operations Guide has a good procedure for upgrading from a pre-release version:

It's relatively straightforward to follow, but if you want a quick heads-up, see below:

- Back up all the databases, both server databases and all application staging databases

- Uninstall all the PPS binaries

- Re-install the PPS binaries selecting 'Distributed Installation' to allow you to unselect the PPS Database Installation
(You'll upgrade, keeping all applications/server config in place this way)

- Run ppscmd upgrade /server <Planning Server URL> from the command prompt
(This upgrades the database schemas for all applications to the new version)

- Connect to the Administration console and 'Take online' the applications
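As a rough sketch of the scripted parts only (the database name, backup path and server URL below are placeholders; 46787 is the default Planning Server port, but yours may differ):

```
REM Back up a server/application database before touching the binaries
sqlcmd -S localhost -Q "BACKUP DATABASE [PPSPlanningSystem] TO DISK = 'C:\Backups\PPSPlanningSystem.bak'"

REM After re-installing the binaries (Distributed Installation, databases unselected),
REM upgrade the application schemas to the new version
ppscmd upgrade /server http://localhost:46787
```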

It's a similar but simpler process for Monitoring Server too.


PerformancePoint Server Planning - Business Rules Debugging

In PerformancePoint Server 2007 Planning, you build centralised business rules in a new language called PerformancePoint Expression Language (PEL).  PEL is very MDX-like and relatively straightforward to pick up if you are even a little familiar with MDX.

The beauty of PEL is that it can generate either MDX or T-SQL scripts from the same PEL expression.  This allows the developer/analyst to target either the cube itself, using an MDX script, or the fact table, using T-SQL, depending on which implementation approach is more suitable for the type of calculation and from a performance perspective.
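For a flavour of the language, an assignment rule might look something like the following sketch.  This is illustrative only: the dimension and member names are made up, so treat the shape (scope / this = ... / end scope) as the point rather than the specifics, and check the syntax against your own model.

```
scope ([Scenario].[All Members].[Budget],
       [Time].[Monthly].[Year2008].LeafMembers);
    this = ([Scenario].[All Members].[Actual],
            [Time].CurrentMember.Lag(12)) * 1.1;
end scope;
```

The same expression could then be implemented as either SQL or MdxQuery, as described below.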


Within the Business Rules Editor workspace you can easily select the implementation (SQL or MdxQuery) from the Rules Detail property list.

With the Rule saved (it does not have to be deployed) you use the rule context menu to debug the rule.

If you come from a development background like me, you may expect a little more to happen than actually does!

Depending on which implementation you have selected, and whether or not your PEL is valid, either the MDX or the T-SQL is generated from the PEL and displayed in a window.  It's important to realise that the PEL expression has not been run; the output is for information purposes only.

From this point you can eyeball the resultant query to help determine your issue or, as I tend to do, cut and paste it into a New Query window inside SQL Server Management Studio.  The resultant T-SQL tends to be extremely verbose, so actually debugging the problem using T-SQL will be rare!

If there is a problem with the PEL itself, the reason, in the form of a (normally) reasonably helpful error message, is displayed in the window instead of the resultant MDX/T-SQL.

I was initially a little disappointed with the built-in debugging facilities, but considering the target audience of the Planning Business Modeler, full integration with SSMS or Visual Studio was never going to be a consideration.  To be fair, it is probably just enough: the debugger is primarily aimed at ensuring your PEL is correct.  It stops somewhat short of helping you debug the actual business logic, but that can be handled by cutting, pasting and hacking the resultant MDX or T-SQL in another tool, and even that becomes less and less necessary as you develop your PEL-writing skills.

PerformancePoint Server Planning 2007 - eh?

Despite all the hype, the recent launch, several PPS-related conferences, numerous articles and the pull-no-punches OLAP Report preview, there is still some doubt about what the Planning element actually offers.  Well..

To quote directly from the marketing bumf:

Efficiently build budgets, forecasts, and plans in the interface everyone knows—Microsoft Office Excel. PerformancePoint Server 2007 offers auditing capability, centralized control, enhanced security, and the proven data platform of Microsoft SQL Server 2005.

For a specific example, imagine, without PerformancePoint Server, trying to build an enterprise-wide solution to capture quarterly financial budgets and/or forecasts from every budget-holding manager across the organisation, for consolidation and approval by the CFO.  The issues you would face revolve around data capture, workflow, security, consolidation, business rules, performance, tracking etc.  Even if you were successful, it's likely that you would eventually end up in Excel hell, or be left with no confidence in the results and a huge team on full-time support.

Step forward PerformancePoint Server Planning.  It removes much of the pain associated with all of the above and is not just limited to financial scenarios.  In addition, when integrated with PerformancePoint Monitoring, you can track actuals against the captured plans/targets from Planning to ensure the business stays on track.  When the inevitable deviations occur, the Analytics element of PerformancePoint Server (currently ProClarity) will help analyse why, allowing corrective action to be taken.

For a little more detail..

PerformancePoint Server 2007 Forums - Released

The PerformancePoint Server forums are now part of the official TechNet Forums and no longer limited to Microsoft Connect Beta Forums.

For those that didn't see the relatively active beta forums on Connect, Microsoft separated the product into two groups, and this is reflected in the official versions too:

  • Planning
  • Monitoring and Analytics
At the time of writing, the Planning forum was empty and the M&A forum had just a single topic.  It will be interesting to see how activity picks up through the launch period and into the New Year.

Loading the PerformancePoint Planning Account Dimension

I've been setting up some PerformancePoint Planning demonstrations for both clients and internal knowledge transfer.  As part of these demonstrations I've been loading the Account Dimension from CSV files. 

There are several other ways of loading data into PerformancePoint planning dimensions and models and I'll no doubt post about the alternatives in the future.

There is a small gotcha that I thought I'd share.  The pre-defined Account dimension contains a member property called Account Type.  This member property utilises a lookup table for the various built-in account types, such as Unit, Expense etc.

The PerformancePoint CSV format requires that the first row contains the field (or rather, member property) names, and optionally data types, with the remaining rows containing the actual data.  The Account Type member property is slightly different: as it is a lookup field, you need to specify the key field name instead, in this case AccountTypeMemberId.

With that known, you would be forgiven for thinking that, in order to load data against that field, you need to specify the actual AccountTypeMemberId.  However, that would result in a new member property being created, called 'AccountTypeMemberId', containing the value rather than the description.  The proposed destination field, Account Type, would be left unpopulated.

Instead, to correctly load the member property, rather than using the Id you need to specify the actual description from the lookup table.  (Not the only unintuitive feature of PerformancePoint Planning!)
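To make that concrete, a minimal Account dimension load file might look like this.  (The member rows and the exact set of columns are illustrative; the point is the AccountTypeMemberId heading paired with description values such as Expense or Unit, rather than the underlying Ids.)

```
Label, Name, Description, AccountTypeMemberId
TRAV01, Travel Costs, Travel and subsistence, Expense
HEAD01, Headcount, Number of employees, Unit
```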