
Jeremy Kashel's Blog

Multiple Assumption Models in PerformancePoint

PerformancePoint gives you the flexibility to have multiple assumption models to use as you please in your main models. Although this is great, I've found a problem that occurs when two assumption models use different member sets for the same dimension, so as an extension to my last assumption model post, this post provides a workaround for the issue.

Consider the following example. I've got a main model where I want to use two assumption models, namely:

  • HR Assumptions - Uses the HR member set from the BusinessDriver dimension;
  • Strategic Assumptions - Uses the Strategic member set from the BusinessDriver dimension.

If you add the two assumption models to the main model at the same time, then everything looks normal, as shown in the screenshot below:

[Screenshot: the two assumption models added to the main model]

Once you deploy successfully, you will of course want to write a business rule to pick up the assumption model data. However, when writing the rule and trying to pick from the BusinessDriver member selector, you will see that you can unfortunately only select from one member set, as shown below:

[Screenshot: the BusinessDriver member selector only offering one member set]

If you need to write rules that reference specific members in both member sets, then you will be out of luck. It's not even possible in any kind of native MDX rule, as the main model cube that gets created in Analysis Services only contains the dimension that has been created from the 'HR' member set. It would seem that PerformancePoint just picks the member set that is first alphabetically.

The workaround for this issue is simply to create a single member set that combines the two original member sets. Therefore, each assumption model will contain more members than required, but that's far better than not being able to write the rules that you need.
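As a rough sketch of how this plays out in a rule, suppose the combined member set is called 'Combined' and contains a driver from each of the original member sets (the member set name, the driver members and the scope below are all hypothetical):

scope (
[Scenario].[All Members].[Forecast],
[Account].[Operational].[Sales]) ;

//Both drivers now validate, as each lives in the single 'Combined' member set
this = (MODEL([HR Assumptions]), [BusinessDriver].[Combined].[Headcount Growth]) *
(MODEL([Strategic Assumptions]), [BusinessDriver].[Combined].[Market Growth]);

end scope;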

So just something to be aware of, and catch, at design time - rather than in the middle of your build.


SQL Server 2005 Cumulative Update 8 for Service Pack 2

Microsoft have just released Cumulative Update 8 for SQL Server 2005 Service Pack 2.

Amongst the changes are a few fixes for small problems in Analysis Services, such as:

  • Infinite recursion occurs in the CalculationPassValue function in SQL Server 2005 Analysis Services;
  • A data source view that connects to an Oracle database generates incorrect relationships when you develop a Microsoft SQL Server 2005 Analysis Services project by using BIDS;
  • The MeasureGroupMeasures function does not work correctly in SQL Server 2005 Analysis Services;
  • All the MDX queries that are running on an instance of SQL Server 2005 Analysis Services are canceled when you start or stop a SQL Server Profiler trace for the instance.

Details of the above fixes and many more can be found here, as well as details of how to download the update. As usual, you can only apply the hotfix if you have Service Pack 2 installed.

Details of all SQL Server 2005 builds released after Service Pack 2 can be found here.

XLCubed Version 4 Released

XLCubed Version 4 was recently released, adding support for increased data visualisation via MicroCharts, as well as enhanced web publishing.

I've been using XLCubed since version 1 and have always found it both powerful and easy to use. All the standard functionality that you'd expect from an OLAP/Analysis Services Excel add-in exists and is incredibly intuitive, yet there's plenty more if you want to dive deeper. In particular, I've always liked the following features:

  • Wealth of right-click functionality, such as remove only, keep only, propagate across sheets;
  • Drag and drop pivoting - alter your report quickly without having to open toolbars or menus;
  • Converting grids to formulas - allows you to easily create disjoint reports using Excel formulas;
  • Linking grids - means that an action in one grid (such as a drill down on a dimension) can automatically perform the same operation on multiple linked grids.

As mentioned, version 4 sees the addition of the interesting MicroCharts feature to the XLCubed grids. This essentially means that you can choose to display just numbers, a graphical MicroChart visualisation of the numbers, or both (see below). The MicroCharts do not hinder the core functionality, such as drill down or pivoting, and can be turned on for any of the measures.

[Screenshot: a grid with sparkline MicroCharts]

[Screenshot: a variance analysis grid with MicroCharts]

There's a lot more functionality worth looking at; you can find out more at www.xlcubed.com.

Executing Rule Sets in PerformancePoint to Move Data

As I mentioned in my post a few weeks ago, the PEL Allocate statement is a powerful method of moving data between PerformancePoint models.

Although it's powerful, the end result of an Allocate statement is simply that the destination model will contain data that has been queried from the source model. What if the destination model already contains data for the target scope? If this is the case, then you will have double counting. When creating the Allocate rule, there is unfortunately no option that lets you decide what you want to do with existing data. Ideally you need to have the 'Existing Data' option that you get when running an association, which gives you the 'Append', 'Scoped Replacement' and 'Full Replacement' options.

To get around this, I tend to place two rules inside a rule set. The first rule is a SQL implementation assignment rule with the same scope as the Allocate rule, which deletes existing data by using the PEL statement this = NULL. The second rule is then the Allocate statement, which appends the data. This is shown below:

[Screenshot: a rule set containing the delete rule followed by the Allocate rule]
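For reference, the first (delete) rule is little more than an assignment of NULL over the scope in question. A minimal sketch (the members below are hypothetical; in practice the scope matches your Allocate rule, as described above):

scope (
[Scenario].[All Members].[Forecast],
[Account].[All Members].[Total Cost]) ;

//Clear existing data in the scope so that the subsequent Allocate rule
//can append without double counting
this = NULL;

end scope;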

The idea is then to execute the rule set, which will execute each of its rules in order. The advantage to doing this is that your whole data movement process can be executed in one clean step (perhaps scheduled or via Excel), without the hassle of executing several individual rules.

Assumption Model Design Techniques and Business Rules

PerformancePoint assumption models are very useful if you want the data in one model to drive and affect another model. However, if you’ve been using assumption models, then the chances are that you may have noticed that they can sometimes be a bit awkward to use. This post highlights a couple of issues that I’ve experienced with assumption models, and provides a few workarounds.

Firstly, when you initially link the assumption model to the main model, you may get the warning message ‘the assumption model uses different member sets for the following dimensions’, as shown below:

[Screenshot: the different member sets warning]

Let's presume that you ignore this warning and continue to link to the assumption model. Further down the line, when you come to write your PEL rule to pick up the assumption model data, you may find that you get an error when validating the rule. This will occur if you want to populate a member in one member set by referencing a member in another member set, which resides in the assumption model.

To illustrate this, if we want to populate an account called ‘Sales’ in the ‘Operational’ member set of the main model by using an account called ‘Price’ in the ‘Drivers’ member set in the assumption model, then we would have the following PEL rule:

scope (
[Scenario].[All Members].[Forecast],
[Account].[Operational].[Sales]) ;
     
this = [Account].[Operational].[Volume] * (MODEL([Sales Assumptions]), [Account].[Drivers].[Price]);

end scope;

However, when validating this rule we get told “No hierarchy Drivers exists in dimension Account”. The only account member set that PerformancePoint will let you use in the above rule is 'Operational'. Note: you'll be ok if 'Price' exists in the main model; you just need to refer to it as [Account].[Operational].[Price]. But if 'Price' doesn't exist in the main model, then a different approach is needed...

The simple way around it is to use an approach found in the Alpine Ski House sample application. Add the accounts (if that’s the dimension causing you problems) into another dimension called Business Driver. You can then use the Business Driver dimension in your PEL rule, as PerformancePoint is more than happy for you to reference assumption model specific dimensions. This is shown below:

scope (
[Scenario].[All Members].[Forecast],
[Account].[Operational].[Sales]) ;

//The rule is almost identical, but validates ok as we're using the BusinessDriver dimension.     
this = [Account].[Operational].[Volume] * (MODEL([Sales Assumptions]), [BusinessDriver].[Drivers].[Price]);

end scope;

Another problem that you may experience is when you have a difference in granularity between member sets used in the assumption model and the main model. For example, you may have an Entity member set called ‘Regional’ used in the assumption model, but a more detailed member set used for the Entity dimension in the main model.

So in the above example, we want to populate members in the more detailed 'Company' member set by using a value from the member's parent in the higher level 'Regional' member set. In order to do this, we can just use the Parent function on the current member of the main model's entity member set. Even though we've referenced the main model member set inside the MODEL function, if the current member's parent exists in the assumption model, then it will pick it up without any problems:

scope (
[Scenario].[All Members].[Forecast],
[Account].[Operational].[Sales],
Descendants([Entity].[Company].[TOT], 1073741823, leaves)) ;
     
this = [Account].[Operational].[Volume] * 
      //It's ok here to use [Dimension].[Main Model Member Set].CurrentMember.Parent
      (MODEL([Sales Assumptions]), [Entity].[Company].CurrentMember.Parent,
      [BusinessDriver].[Drivers].[Price]) ;

end scope;

Both are very simple design techniques, but it’s worth being aware of them before you actually start building your models.

Intercompany Eliminations Video for PerformancePoint Planning

Useful post on Norm's PerformancePoint Server Blog showing a short video on how to do intercompany eliminations in PerformancePoint.

The video is actually by Forrest Dermid, who sets the scene by explaining why you would actually want to do an intercompany elimination. He then goes on to show the relevant areas of Planning Business Modeler, demonstrating in detail the changes that you need to make to your account dimension and model properties, amongst others.

Really worth watching in my opinion. You can find it here.

PerformancePoint Planning Performance Tuning White Paper

A PerformancePoint Planning white paper has recently been published that documents the performance and scalability of Planning Server.

Included are some very interesting test results that will come in handy when planning the deployment of PPS for a large number of users. There are also a few tips for the Excel add-in and configuration of the Planning Server.

The following page contains links to both the new Planning white paper and also to a Monitoring white paper that was published a while ago:

http://technet.microsoft.com/en-us/library/bb660521.aspx

PEL Allocate Statement Vs Associations

PerformancePoint provides you with a few options when you want to move data between models, the most obvious of which is Associations. However, there is also the powerful PEL Allocate statement...

So which method should you use to move data? Associations are very easy to set up and will transfer your data for you - all out of the box. However, they do have a few limitations, namely:

  • You can only apply a minimum, maximum or total aggregation to the source data;
  • You can't have parameters to dynamically select members;
  • If you delete a dimension that's used in an association, then you can't view or edit the association. Although there should hopefully be no need to do this in production(!), it can be a bit annoying during development.

By writing a PEL Data Movement rule of type PushCrossModelAllocation or PullCrossModelAllocation, we get a bit more flexibility. With the PEL Allocate statement you can:

  • Use PEL functions or operators to alter the source value that gets written to the target model;
  • Similarly, apply a ratio to the source value so that we only transfer the relevant portion of the whole data value. E.g. we can transfer 20% of head office costs;
  • Use rule parameters to dynamically select members.

So if you find that Associations don't quite do what you need, then how do you use a PEL Allocate rule? The following is an example of a simple PushCrossModelAllocation PEL Allocate statement:

scope 
(
  $TransferScenario$,
  [Time].[Monthly].[All].LeafMembers,
  [Organisation].[Europe].[France],
  [Account].[All Members].[Total Cost]
) ;
     
  Allocate
  (
    //The scope of cells that will receive the data
    //The this keyword indicates the target is the same cells as defined in the scope
    this,
    //The source value. If it's the same as the scope then we just use ()
    //Or we can apply a simple calculation
    [Account].[All Members].[Volume] * [Account].[All Members].[Cost Per Unit],
    //The scaling ratio that gets applied to the source value
    //0.2 indicates that we only want to apply a fraction of the source value
    0.2,
    //The last two arguments are only relevant for Push and PullCrossModelAllocation
    //They specify the mappings between the source and target members
    //Here I want France (in the source) to be mapped to France (in the target)
    (
      [Organisation].[Europe].[France]  
    ),
    (
      [Organisation].[Group].[France]
    )
  ) ;

end scope;

There are a fair few arguments to the Allocate statement, but I've found that a good way to learn what it's doing is to debug the rule in order to see the MDX statement that gets generated. The MDX query that gets generated for the above PEL statement is as follows:

WITH CELL CALCULATION queryCalc 
FOR
 //This part of the MDX comes from the 1st argument in the PEL statement
 '([Account].[All Members].[Account].&[5005],
 [Measures].[Value],
 [Scenario].[All Members].[Scenario].&[1],
 Descendants([Time].[Monthly].[(All)].&[0], 1073741823, LEAVES),
 [Organisation].[Europe].[Level 02].&[11])' 
AS
 //This is arguments 2 and 3 of the PEL Allocate statement
 //Namely the simple A*B calculation and then multiplying the value by the ratio
 ((([Account].[All Members].[Account].&[5003],[Measures].[Value]) *
 ([Account].[All Members].[Account].&[5004],[Measures].[Value])) * 0.2)
SELECT NON EMPTY 
 ([Account].[All Members].[Account].&[5005],
 [Measures].[Value], 
 NonEmpty(({[Scenario].[All Members].[Scenario].&[1]},
 {Descendants([Time].[Monthly].[(All)].&[0], 1073741823, LEAVES)},
 {[Organisation].[Europe].[Level 02].&[11]}))) 
 properties [Scenario].[All Members].Key ,
 [Time].[Monthly].Key ,
 [Organisation].[Europe].Key ,
 [Account].[All Members].Key 
ON COLUMNS 
FROM [Europe]

Here we can clearly see (as marked in the above MDX code) how the first, second and third PEL arguments translate into MDX. If you find that the Allocate statement is not quite doing what you want, then it's quite useful to execute the MDX within SSMS to figure out exactly how the source model is being queried.

Once you're finished coding your business rule, it can be executed via a scheduled job, manually, or via Excel.

To be fair to Associations, you're probably more likely to use them for bulk transfer of data, perhaps to facilitate cross model consolidation - and they're extremely useful for this purpose. The Allocate statement is perhaps more relevant for specific pieces of data that need to be adjusted in some way before being transferred to the destination. Despite this, the Allocate statement is the more flexible method and will allow you to support more sophisticated cross model data transfer - if it’s needed.


PerformancePoint Management Reporter - First Look

PerformancePoint 2007 Management Reporter was launched recently, with the specific intention of providing financial users with the ability to easily create financial reports that can be distributed across the organisation. Management Reporter can connect to PerformancePoint and there are plans to allow it to connect to ERP systems such as Dynamics AX, GP and SL, among others.

On the PerformancePoint side of things, the interesting feature to note is that you won't be able to connect to just any old PerformancePoint model - you can only connect to Financial Models that contain a Currency Dimension. Once you are connected, it's quite easy to create and publish reports. This is partly due to the excellent wizard, which essentially allows you to map your PerformancePoint account dimension members to the relevant section of a Balance Sheet or Income Statement report:

[Screenshot: the account mapping wizard]

In addition, the product uses the idea of 'building blocks' in order to give users flexibility when designing reports. A building block is essentially a section of the report (e.g. the rows or columns) that you can re-use amongst multiple different reports. Therefore, if you have a common set of columns that you want to see on all your reports, then there's no need for you to manually re-create the column definitions for every report. The 'Report Designer' application contains an area where you can simply pick your row and column definitions based on building blocks that you have already created:

[Screenshot: picking row and column definitions from existing building blocks in Report Designer]

Management Reporter is actually based on a product called FRx, which has been around for a while and is able to connect to a lot of the major ERP systems. There are a lot of good resources on the web for FRx, in particular FRxBuzz, which contains a few FRx tips and tricks that can be applied to Management Reporter.

PerformancePoint Dimension Schema Error 225000210

I'm a big fan of schemas in SQL 2005, and like to use them for both organisation and security. However, I've found that PerformancePoint Planning Business Modeler doesn't appear to like schemas as much as I do...

It seems that if you try to build a dimension from a table that is not in the dbo schema, then Planning Business Modeler will throw the following error:

[Screenshot: dimension schema error 225000210]

This will happen whether you've built your data source using the 'Database' or 'Data Source View' options in the Create Data Source window. The same error will also appear if you try to populate a dimension using the 'Load Members' task in the Workspace Actions pane. In addition, the problem doesn't appear to be fixed by applying the recent PerformancePoint Cumulative Update, as detailed by Tim Kent here.

An obvious workaround is to create views in the dbo schema, or perhaps to create synonyms, but this could be a real hassle if you have a lot of affected tables/views.
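For example, assuming a dimension table in a hypothetical 'staging' schema, either of the following would expose it through dbo:

--Option 1: a view in the dbo schema over the non-dbo table
CREATE VIEW dbo.DimEntity
AS
SELECT * FROM staging.DimEntity;

--Option 2: a synonym in the dbo schema pointing at the non-dbo table
CREATE SYNONYM dbo.DimEntity FOR staging.DimEntity;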

In the interest of getting this resolved, and to understand whether a better workaround exists, I've posted this issue on the MS Connect site: https://connect.microsoft.com/feedback/ViewFeedback.aspx?FeedbackID=333236&SiteID=181

I'll provide an update as this issue progresses.