Adatis BI Blogs

Renaming a PerformancePoint Planning Server

I had a strange situation to deal with a few weeks ago - the IT department at one of our clients wanted to rename the server that was running PerformancePoint Planning, and wanted to know if that would be ok! This was of course a bit of a worry - it's not something that I've seen documented, so I wasn't sure what impact it would have on models, form templates, etc. I was aware from the Planning Server Mirroring Technique that there are several references to the server name within the application, system and service databases on a Planning Server. My first step was to follow the aforementioned mirroring technique and update all the necessary tables. The mirroring technique also instructs you to update the config file called PerformancePoint.config, which can be found in Program Files under \Microsoft Office PerformancePoint Server\3.0\Config\. After updating the tables/config files, everything seemed fine at first, but then I noticed a couple of problems. Although you can of course edit it, the server connection text box in PAC always defaulted to the old server name. In addition, and more of a concern, the auditing and tracing had stopped working. Getting the server name to default correctly was taken care of quite easily by updating the following entry in \Program Files\Microsoft Office PerformancePoint Server\3.0\AdminConsole\Web.Config:

<add key="PerformancePointServer" value="http://SERVERNAME:46787/" />

Getting the auditing/tracing to work again required a bit more digging, as you can't change the 'audit writer computer name' that's found on the ManageAuditing.aspx page in PAC.
These locked values for auditing are held in the dbo.Configuration table of PPSPlanningSystem, in an XML column, as follows:

<PerformancePoint.audit highLimit="1048576" pollInterval="60000"
    fileName="D:\Microsoft Office PerformancePoint Server\3.0\Audit Log\audit.log"
    writerMachineName="SERVERNAME" />
<listener maxLength="10" maxBackup="9" writerMachineName="SERVERNAME"
    fileName="D:\Microsoft Office PerformancePoint Server\3.0\Trace Log\server.log"
    format="single" name="FileLog"
    type="Microsoft.PerformancePoint.Planning.Common.Configuration.FileListenerElement, Microsoft.PerformancePoint.Planning.Common, Version=3.0.0000.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

After updating the above values, everything returned to normal, and the auditing/tracing kicked in immediately. Finally, one thing to think about if you ever have to do this: check that no local administrators have objects checked out! As you are renaming the server, your local admin account will belong to a different server name, meaning check-outs to OldServerName\Username will be invalid.
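For the table updates themselves, the mirroring technique lists the affected tables; purely as a hedged illustration of the idea (the Value column name and its data type are assumptions - check the actual schema and the mirroring technique before running anything like this), a search-and-replace over an XML configuration column might look like:

```sql
-- Illustrative sketch only: the column name [Value] and its xml type are
-- assumptions, and the server name may appear in several other tables
-- listed in the mirroring technique.
USE PPSPlanningSystem;

UPDATE dbo.[Configuration]
SET    [Value] = CAST(REPLACE(CAST([Value] AS nvarchar(max)),
                              'OLDSERVERNAME', 'NEWSERVERNAME') AS xml)
WHERE  CAST([Value] AS nvarchar(max)) LIKE '%OLDSERVERNAME%';
```

Run against a backup first; as noted above, the same writerMachineName values also appear in the audit and listener elements, so a blanket replace like this covers both.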

Unofficial PerformancePoint Planning Tips and Tricks

Wavesmash has posted a series of tips and tricks shared at a train-the-trainer event that took place in Denver recently.  As suggested, most of the 'nuggets' are from the attendees themselves rather than the course material, so on the plus side there are some real experience-based tips. However, I wouldn't treat them all as official tips and tricks - I certainly frowned at a couple, though that could be down to the explanation rather than the intent. There's certainly some goodness, and one that made me smile:  Regular Refresh of model = happy modeler

Troubleshooting the PEL Allocate Statement

The PEL Allocate statement can sometimes throw unexpected error messages, especially when you're setting up a pull or push cross model allocation rule. I've been keeping track of a few of the more obscure ones, which I've listed below. Hopefully this will help anyone out there who's struggling to get the cross model rules working.

Error: No hierarchy 'xyz' exists in [dimension]
Solution/Explanation: Check that you have specified the correct member set in the source, target and scope parts of the rule.

Error: Member set is not defined correctly for cross model mapping. The number of members in source and target member sets in cross model mapping must match.
Solution/Explanation: This message can have 2 meanings. Firstly, as it suggests, it can be that the number of members in the source and target mapping arguments do not match. Or, prior to SP1, it can be a rather more serious problem.

Error: Execution of rule 'xyz' failed. This may have occurred because changes have been made to the model structure without redeploying the model site.
Solution/Explanation: This can occur at execution time when you try and map two members twice, e.g. if your source is {2009, 2009} and your target is {Jan, Feb}. It doesn't like the 2009 in there twice.

Error: Each dimension scope in cross model mapping must be a list of concrete dimension members
Solution/Explanation: Unfortunate but true - you can't use relative member functions such as Descendants in the 4th and 5th arguments of an allocate statement.

Error: Some of the model's dimension members are invalid or missing. The model may need to be redeployed.
Solution/Explanation: This is a run time error that can occur when you are trying to write to non-leaf members. The first argument of the allocate statement will contain the problem, as this is where the cells to receive the allocation are specified. Often it's because 'this' has been entered as the first argument, which could result in using non-leafs for the argument, as it will inherit any non-leafs specified in the rule's scope statement.

Error: Allocation statement must have 5 parameters when they appear in Cross Model Allocation Rule.
Solution/Explanation: Cross model allocation rules expect 5 parameters, in the correct format. So if you don't have 5 parameters, then there's the problem. However, be careful how you specify the parameters. For example, if you specify 2 members in the source mappings, such as [Time].[Monthly].[2007], [Time].[Monthly].[2008], then be sure to enclose them in brackets, otherwise the 2nd member will be interpreted as your 5th argument.

Error: Unable to compile the cross model allocation rule. The target contains the dimension 'xyz' but no explicit mapping has been provided.
Solution/Explanation: If an extra dimension (or dimensions) exists in the target model, the allocate statement requires you to specify which member you expect the data to be written to in that dimension.

Using NativeSQL Business Rules for Seeding or What Ifs

You need to give really careful consideration to procedural business rules in PerformancePoint Planning. This is where multi-dimensional knowledge and experience helps, as you need to know whether it's best to use a SQL or MDX implementation for your rules.

User Requirements

One area where I've found that this is particularly relevant is user requests for procedural rules that accept parameters that are member(s) of a dimension. E.g. let's run the rule for the selected member in the entity dimension, perhaps to carry out a what-if on a particular entity. This is fine for a single member, but on occasion I've found that the users want to pick either a) a leaf level member (which is fine) or b) a non-leaf member, and for the rule to apply the logic to all descendants of that member.

The Technical Problem

What technical problem? You can use the Descendants function to achieve this, right? You can with an MDX rule, but you can't with a SQL rule. So, surely just use an MDX rule then? This is where you have to think relational world vs OLAP world. Putting a statement such as Descendants($Entity$, 10, leaves) in the scope of your rule will mean that when the rule is run, it will always operate on the leaves of the dimension, regardless of whether you pick a leaf or non-leaf member. But the issue is that if your scope statement covers cells that have definition rules against them, then an MDX query will pick up the results of those cells and shove them into the fact table! So can you use the Except() function or something similar to filter out those cells that you don't need? Yes you can, but not if the members start to span different levels - the compiler won't let you do that. As I said, you can't use a SQL implementation, because using Descendants($Entity$, 10, leaves) in the scope statement will tell you: "A SQL implementation is forbidden by a parameter or variable reference in a MemberSet expression unless the reference is the entire expression."
The Solution

Rules that operate on fact data should really use SQL; it's quicker and avoids hitting calculated data. I'm sure that you've seen this coming a mile off (!), but one way to go about this is to use a NativeSQL rule. You can reference database objects in your NativeSQL rule, one of which will make it easier for you to carry out descendant/hierarchy type functions in SQL itself. A function will get created in the Calc schema of your Planning application database in the format fnCalc_0_DimensionName_MemberSetName_IsAncestorOrSelf - this will occur when you deploy your model site. Using this function in conjunction with NativeSQL will allow you to get the leaves of a member quite easily. The following NativeSQL rule creates a copy of one scenario's data into another scenario, which is referenced by a parameter:

--Although integers, the parameters are actually 'members of a dimension' parameters
create proc Calc.[sp$procedure] @scenario int, @entity int
as
begin
    begin transaction

    insert into dbo.[tbl$factTable]
        (alldim$except(dim$Entity, dim$Scenario), dim$Entity, dim$Scenario,
         column$ruleId, Value, LoadingControlID, bmo$CreateDateTime,
         bmo$ChangeDateTime, ContextID)
    select alldim$except(dim$Entity, dim$Scenario), dim$Entity, @scenario,
           $ruleId, Value, -9, getutcdate(), getutcdate(), -1
    --No need to hard code the fact table name
    from dbo.[tbl$factTable] t
    inner join
    (
        select e.memberId
        from dbo.d_Entity e
        --Get leaves of the member key that is in @entity
        where Calc.fnCalc_0_Entity_Divisions_IsAncestorOrSelf(@entity, e.memberid) = 1
          and e.memberid not in (select h.parentmemberid from dbo.H_Entity_Divisions h)
    ) lv on lv.memberid = dim$Entity
    where t.dim$Time >= 200701 and t.dim$Time <= 200712
      and (dim$Scenario = 1)

    if @@error <> 0
        rollback transaction
    else
        commit transaction
end

I suppose the above rule may look a little bit daunting, due to all the dollar signs everywhere.
These are just the preprocessor commands, which get replaced when the stored procedure is created. But other than that, it's just an insert statement. In some cases the syntax is actually quite handy, as you can use statements such as alldim$except(), which is a shorthand way of returning fact table dimension column names very easily. The idea behind the preprocessor commands is that you can protect the logic in your rule from minor structural database changes. The key is that when the rule gets run, it has normal parameters, meaning the user will see a standard parameter-entry interface. In the case above, the member key of North America Operations will end up in the stored procedure, as I have a parameter in the stored procedure called @entity. Inside the stored procedure it is then used in combination with the IsAncestorOrSelf function to pick up the leaf level descendants. The only caveat that I will make is that this hasn't come from a production environment, so I've skimped a bit on a few of the operational details, such as deleting the existing data, decent error handling and returning the number of records affected. These are all things that you should add in if you use this kind of thing in a production environment!
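To make the preprocessor commands less daunting, here's roughly what the generated procedure's insert might look like after expansion. Every concrete name below is hypothetical - the real fact table and dimension columns are generated per application - so treat this purely as a sketch of what each token becomes:

```sql
-- Hypothetical expansion of the preprocessor tokens only (names invented):
--   tbl$factTable                -> the model's real fact table
--   alldim$except(dim$Entity,
--                 dim$Scenario)  -> the remaining dimension columns
--   dim$Entity, dim$Scenario    -> Entity_MemberId, Scenario_MemberId
--   column$ruleId / $ruleId     -> the RuleID column / this rule's numeric id
insert into dbo.MG_MainModel_MeasureGroup_default_partition
    (Account_MemberId, Time_MemberId, Entity_MemberId, Scenario_MemberId,
     RuleID, Value, LoadingControlID, CreateDateTime, ChangeDateTime, ContextID)
select Account_MemberId, Time_MemberId, Entity_MemberId, @scenario,
       12345, Value, -9, getutcdate(), getutcdate(), -1
from dbo.MG_MainModel_MeasureGroup_default_partition t
-- ...leaf-filter join and where clause exactly as in the rule above
```

The point of the tokens should now be clearer: if a dimension is added to the model, the expanded column lists change, but the rule text itself does not.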

PerformancePoint Server Planning SP1 - Clear Changes After Workflow Action

There's a new workbook property that was introduced in PPS Service Pack 1.  The 'Clear Changes After Workflow Action' property effectively splats the change list for the workbook once the assignment has been submitted (either draft or final). The property can only be reached through the Report Properties dialog, and is at the workbook level.

This property defaults to false which, under certain circumstances, can hinder performance.  Whenever you change data on a matrix, the slice that you affected is saved to a change list.  You can view what's on the change list by choosing 'View -> Show Current Changes' from the PPS Add-In for Excel. As an example change list, imagine two budget accounts for the same time period and department that have been updated to new values.

The default behaviour (and the behaviour prior to SP1) is that, for the life of the assignment, the change list is maintained for every cell that is updated.  The change list is simply appended to so, as you can imagine, on a large workbook with several matrices spanning several filter slices, the change list can become quite large. Submitting the assignment effectively submits the change list for processing by the server, first updating/inserting the appropriate records into the fact table and subsequently re-processing the Analysis Services partition.  It follows, then, that the larger the change list, the slower the submission process.

Before SP1, this forever-growing change list issue was resolved with a little user training.  As part of the submission process you would invite your users to manually clear the change list. By 'Clearing Current Changes' you throw away the changes to the cells and have to rely on the data being safe and sound on the server.  This process helped keep the change list to a more manageable size, thus gaining on the submission performance.

The new 'Clear Changes After Workflow Action' property in SP1, if set to true, will perform the 'Clear Current Changes' step for you automatically.  This helps keep the change list lightweight (providing, of course, the user regularly submits).  However, as I have already implied, there is one issue to be wary of: with the property set to clear changes, if your submission fails, the change list is lost and there is a real danger of losing data.

PerformancePoint MCPs

I was a little surprised to see that (as of June 2008) there are already 329 people certified in PerformancePoint (i.e. have passed the 70-556 exam). Whilst it's not the hardest exam (IMO, compared to 445 and 446), it's a good sign that PPS uptake is getting stronger and stronger. Mind you, most of the 329 probably work for CapGemini, who should have had 3000 consultants trained in PPS by the end of 2007!! Amazing ;)

PerformancePoint Planning - Loading the account dimension members from CSV

This post is more for my benefit than anything; I can never remember the format of the CSV file used to load the Account dimension.  Below is an example CSV file in its simplest form that should help get me started faster the next time I need it!

Label,Name,Description,AccountTypeMemberId
1000,Salaries Permanent Staff,,Expense
2000,Temporary Staff Workload,,Expense
4000,National Insurance Costs,,Expense
5000,Pension Costs,,Expense

PerformancePoint Planning - Data change tracking / audit trail

PerformancePoint Planning does contain auditing out-of-the-box.  However, it's not data level auditing but instead metadata level auditing.  If you want to track when a user has been added, or a workflow cycle started, then PPS-P keeps track of all these changes, plus a whole host of other system changes, in an XML file located in the directory specified in the 'Audit file name' property of the 'Auditing' tab in PAC. The auditing will answer questions such as:

"When was the staging database last synchronised?"
"Who re-started my assignment?"
"When and who changed these business rules?"

This audit file does capture a lot and, one day, unless I'm beaten to it by Microsoft (hint hint!), I'll either write an XSLT file to transform the XML into a 'nice looking' HTML page, or build some routines to suck the audit files into a usable database where I can stick some SSRS reports over the top.  Until then, Notepad will have to do.

What's missing out-of-the-box is the ability to audit data level changes.  Questions that are actually more pertinent to Business Performance Management, and to requirements under Sarbanes-Oxley (SOX), the Basel Accords and other, related regulations:

"Who increased the salary assumption for the North West sales force?"
"When were the revenue targets last increased?"
"How many times do the sales forecasts get amended before they are approved?"

This post attempts to detail a simple mechanism that could be utilised to implement a basic data change audit trail that could be used to answer all questions relating to data changes.  At this juncture I must point out that the approach described below entails making database level changes to the Planning Application database, and would therefore result in an unsupported environment.  In reality you would leave the planning database well alone and perform these changes on a log shipped (or equivalent) synchronised copy of the database, well away from the main production planning application database.
There, you have been warned!

In this scenario I want to place an audit trail on a simple assumption model.  The assumption model contains five dimensions: Time, Account, Region (Entity), Role (a custom dimension) and Scenario.  I want to track all changes to data within this model. To set up the auditing, the following is required:

Audit Tables

Three new tables are required; these are essentially copies of existing Planning Application tables and are used to keep a history of inserts and updates.  I create these tables in an 'Audit' schema; this keeps them grouped together, and allows me to maintain the same name as the original.  The only difference between each audit table and its original is the addition of a new primary key, 'AuditId'.

MG_Drivers_MeasureGroup_default_partition - This is the main fact table of the assumption model that will contain the audit trail.  A complete history of changes will be maintained in this table.

BizUsers - BizUsers contains all current users of the planning application.  As users can be removed from the application, it is necessary to audit this table to ensure changes by non-current users can be forever traced.

AssignmentsHistory - Stores history information relating to an assignment: information such as when it was submitted (either draft or final), the comments that were added by the submitter, etc.  The key field contained in this table is the UserId, which allows us to know where to point the finger of blame! (Figuratively speaking, of course!)  Like the BizUsers table, when assignments are no more, the associated history records are purged too.  Auditing this table maintains the history.

You could argue that each of the dimension tables also needs auditing, for cases when members are removed from the dimension in the future.  I can't argue against this and, depending on your business process and application, you might be quite right!
Example Audit Table Create Statement:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [Audit].[AssignmentsHistory](
    [AuditID] [int] IDENTITY(1,1) NOT NULL,
    [AssignmentID] [bigint] NOT NULL,
    [ChangeDatetime] [datetime] NOT NULL,
    [CreateDatetime] [datetime] NOT NULL,
    [UserID] [bigint] NOT NULL,
    [Action] [nvarchar](100) NOT NULL,
    [Comments] [nvarchar](2000) NULL,
    CONSTRAINT [PK_AuditAssignmentsHistory] PRIMARY KEY CLUSTERED
    (
        [AuditID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Triggers

Now, one of my colleagues in particular will shoot me down for this as, for years, I've been strongly and completely against triggers.  They cause no end of pain when performance tuning and debugging line of business application databases.  I always argue that all trigger logic can, and should, reside in (transaction based) stored procedures that actually perform the data updates.  I actually do still stand by this but, in this situation, we don't really have general access to the data update stored procedures so, although I'm breaking one of my own 10 commandments, my hand is forced.

For each of the original tables a trigger is required to update the appropriate audit table.  To make the reporting easier I create a trigger for both Update and Insert; this means that the most recent version of the record is in the audit table too.  (C'mon, storage is cheap!)
Example Original Table Trigger Create Statement:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[trig_AssignmentsHistory_Audit]
ON [dbo].[AssignmentsHistory]
FOR UPDATE, INSERT
NOT FOR REPLICATION
AS
SET NOCOUNT ON
INSERT INTO Audit.AssignmentsHistory(
    AssignmentID, ChangeDatetime, CreateDatetime, UserID, Action, Comments)
SELECT AssignmentID, ChangeDatetime, CreateDatetime, UserID, Action, Comments
FROM Inserted

View

In order to report the audit trail, a simple view is required to show the changes over time of the values held in the fact.  This view is primarily based on the audit fact table and resolves all the dimension member ids to give you the dimension member names.

Example Audit Trail View (this view is specific to my fact table; it will need modification for each fact table):

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER VIEW [Audit].[AuditReport]
AS
SELECT AuditFact.ChangeDatetime,
       dbo.D_Scenario.Name AS Scenario,
       dbo.D_Account.Name AS Account,
       dbo.D_Entity.Name AS Region,
       dbo.D_Role.Name AS Role,
       (SELECT TOP 1 UserName
        FROM Audit.AssignmentsHistory ah
        INNER JOIN Audit.BizUsers bu ON ah.UserID = bu.UserId
        WHERE AssignmentId = AuditFact.AssignmentId) AS ChangedBy,
       AuditFact.Value
FROM Audit.MG_Drivers_MeasureGroup_default_partition AS AuditFact
INNER JOIN dbo.D_Role ON dbo.D_Role.MemberId = AuditFact.Role_MemberId
INNER JOIN dbo.D_Account ON dbo.D_Account.MemberId = AuditFact.Account_MemberId
INNER JOIN dbo.D_Scenario ON AuditFact.Scenario_MemberId = dbo.D_Scenario.MemberId
INNER JOIN dbo.D_Entity ON AuditFact.Region_MemberId = dbo.D_Entity.MemberId

The output of the view above will detail each of the changes made to each of the facts over time when queried like this:

SELECT Scenario, Account, Region, Role, ChangedBy, ChangeDateTime, Value
FROM [Audit].[AuditReport]
WHERE Scenario = 'Budget'
  AND Account = 'Annual Salary'
  AND Region = 'South West'
  AND Role = 'Assistant Manager'
ORDER BY ChangeDateTime DESC

Resulting in the
following output (most recent first). Notice the HR Director set the initial salary value for the South West Assistant Manager, then the rather indecisive Finance Director made a few amends, for it to be later adjusted again by the HR Director.  PerformancePoint Planning data change tracking!

Possible Enhancements

The above method describes a very simple way of auditing data level changes in PPS-P.  This can be taken much further, and I've included some possibilities below:

- Develop a parameter driven SSRS report to report the audit trail for a dimension member combination.
- Create some generic scripts that accept the fact table as a parameter and create the appropriate audit tables, triggers and views.
- Link up with the fact table annotations table and/or the submission comments (held in the AssignmentsHistory table) to enrich the reporting.
- The data volume will obviously grow over time - intelligent purging/archiving of audit records would help.
- Update the view to handle certain conditions where adjacent audit trail records contain no apparent change in value (only the change date will be different; this can occur when a contributor submits, makes subsequent changes and submits again without first clearing current changes).
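As a hedged sketch of the purging idea, this deletes aged audit fact records while always keeping the latest record per fact combination. The column names follow the audit view earlier in the post; the two-year retention period is an arbitrary example, and if your fact table carries a Time member column (mine does not appear in the view) it should be added to the correlation too:

```sql
-- Sketch only: retention period and column list must be adapted to your model.
DELETE a
FROM Audit.MG_Drivers_MeasureGroup_default_partition AS a
WHERE a.ChangeDatetime < DATEADD(year, -2, GETUTCDATE())
  AND EXISTS (SELECT 1
              FROM Audit.MG_Drivers_MeasureGroup_default_partition AS n
              WHERE n.Scenario_MemberId = a.Scenario_MemberId
                AND n.Account_MemberId  = a.Account_MemberId
                AND n.Region_MemberId   = a.Region_MemberId
                AND n.Role_MemberId     = a.Role_MemberId
                AND n.ChangeDatetime    > a.ChangeDatetime);  -- a newer record exists
```

Archiving the deleted rows to a separate history database (e.g. via an OUTPUT clause) rather than discarding them would be the safer variant under SOX-style retention requirements.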

An unknown server error occurred when submitting User

We recently had to turn a couple of demo boxes into a 2 client development environment for an overly aggressive PPS-P deadline.  Both demo boxes were connected to the network and both were PPS-P servers.  We wanted to nominate a specific box as the PPS-P server for the build. Typically, the one we nominated, for some reason, would not allow us to add domain users to the Users tab of the PAC, whereas the other one would. We received the following (not particularly helpful) error message when attempting to add a domain level user through PAC:

The following items cannot be saved
The total number of errors during submission is 1.
An unknown server error occurred when submitting User - <domain>/<username>
The server log may contain more information about this error.

In our environment both boxes were using LOCAL (but different) accounts as the Application Pool identity and the Planning Service Account identity.  When we changed the local identity accounts over to DOMAIN level accounts we could add users on both boxes.  It appears that the PPS-P web service needs access to the domain controller to check whether the user entered actually exists.  In fact it does a little more than that, as it retrieves the user SSID to store in the Planning database too.  Note: in addition to changing the user to a domain account, we had to ensure that account existed in the local IIS_WPG (IIS Worker Process Group) too.

But why, when using local accounts, would one server allow the addition of domain users and the other not?  The answer was pass-through authentication!  The server that worked under the local account was using an account and password combination that matched an account and password combination on the domain!

Nick Barclay posted about connection problems with PPS M&A a good while back and gave a great write-up of how to overcome the issues encountered. That's worth a read if you are not familiar with IIS application pools and identity accounts, as the issues we encountered were related.

Multiple Assumption Models in PerformancePoint

PerformancePoint gives you the flexibility to have multiple assumption models to use as you please in your main models. Although this is great, I've found a problem when the two assumption models have different member sets for the same dimension, and so in an extension to my last assumption model post, this post provides a workaround for the issue. Consider the following example. I've got a main model where I want to use 2 assumption models, namely: HR Assumptions - Uses the HR member set from the BusinessDriver dimension; Strategic Assumptions - Uses the Strategic member set from the BusinessDriver dimension. If you go and add the two assumption models to the main model at the same time, then everything looks normal, as shown in the screen shot below: Once you deploy successfully, you will of course want to write a business rule to pick up the assumption model data. However, when writing the rule and trying to pick from the BusinessDriver member selector, you will see that you can unfortunately only select from one member set, as shown below: If you need to write rules that reference specific members in both member sets, then you will be out of luck. It's not even possible in any kind of native MDX rule, as the main model cube that gets created in Analysis Services only contains the dimension that has been created from the 'HR' member set. It would seem that PerformancePoint just picks the member set that is first alphabetically. The workaround for this issue is simply to create a single member set that combines the two original member sets. Therefore, each assumption model will contain more members than required, but that's far better than not being able to write the rules that you need. So just something to be aware of, and catch, at design time - rather than in the middle of your build.  

PerformancePoint Suggestion box

Inspired by Adrian's post on the TechNet Planning Forum, here's my list of 'I wants' for PPS-P.  I've cross posted this to the forum discussion too.  A few of these have been suggested before but there are some new ones in there too - in no particular order:

Application Migration Wizard - There are several methods for doing this currently, but nothing slick.  I was hopeful that the new Data Migration Tool was the answer to this but alas, no.  It only migrates data (don't get me wrong, this is useful) - what we really need is an Application Migration Tool that migrates everything, supporting both initial 'go-live' migrations and 'update release' migrations.  It should support making a copy of the application on the same server too (with a different name, of course!).

Developer Edition - The PerformancePoint Server components can only be installed on Windows Server.  This is a tad annoying for development and demo purposes.  Currently we have a Windows Server built laptop for demos and sales activities.  For development, when connecting to the server is not always feasible, we have a couple of pre-built PPS-P 'development' hard disk drives that we use to swap out with our non-server development boxes and manually migrate to the official servers periodically.  It would be great if, like SQL Server, there was a 'Developer Edition' that could be installed on XP and/or Vista.

Improved Support for Business Rule Re-Use - There are several ways that Business Rule re-use could be implemented, but initially I'd like to see a 'formula' tab where, for example, you can define global formulas and expressions that can be selected (re-used) within various scope statements.  Taking this a step further, how cool would it be if the global formulas were actually functions that accepted parameters, so you could produce a smaller set of generic formulas that can be re-used throughout the application.

Improved Rule Editing - The editor is a little clunky at present.  We need something more akin to the SQL/MDX Query Editor in Management Studio.

Web-Based Data Entry - Whether via Excel Services or some other 'pivot table' style entry form.  A couple of customers are banking on Microsoft providing this in the future.  We at Adatis would love to know if they've actually started something like this in Redmond, as we'd be up for developing something in-house if not!

Extensible PEL - Much like custom tasks can be developed in SSIS and stored procedures can be created for Analysis Services, it would add some real power if PEL keywords could be added via custom .NET assemblies that are registered with the PPS-P Server.  Not only could complex, vertical specific logic be encapsulated, intellectual property could be protected, opening up a third-party plug-in market.

Process Cycle Independent Data Entry Forms - It would really aid productivity if data entry forms could be created without the need to create a Process Management Cycle and associated assignment.

Application/Model Templates - Imagine a 'Retail Budgeting Template' or a 'Transportation Forecasting Template' and how that could potentially wipe weeks off an implementation.  This is a tall order due to the global nature of the product and the diversity of requirements within industry verticals, so it might be better to make lower level component type templates that can be plugged in to a larger solution, e.g. UK Tax and NI and Salary assumption models.

Protect Sensitive Data - Some data that is forecasted and budgeted is extremely sensitive, salary information for example.  Assignments should be marked as sensitive and, as such, should challenge the user opening them for some form of authentication.  Integrated security is not secure!

Spread Across Time - Not having the ability to spread across time, or to enter data at non-leaf level (as you can with the other dimensions), results in more models than you would ideally choose, increasing complexity.

Custom Spread Methods - Even and Ratio Spread don't satisfy every client spreading requirement.  If custom spread logic could be developed on the server and exposed through the Excel add-in for specific assignments, every requirement could be fulfilled.

Unlimited Undo/Versioning - Without the ability to copy applications, proper versioning and unlimited undo/redo (across sessions) would help.  Adopting the Visual SourceSafe approach of versioning, pinning, branching and merging would greatly assist with development/deployment/testing cycles.

Validation - Centralised validation rules that can prevent invalid data from being submitted to the server would help reduce approval/review workload.

Dimension Member Creation - Excel Add-In support for dimension member creation would assist greatly for certain budgeting/forecasting scenarios.  Workflow and data collisions would need to be considered, otherwise the data integrity could be jeopardised.

Multiple Model Data Entry - Allowing an assignment to capture data for multiple models would provide the ability to simplify the implemented process and provide greater support for, amongst others, assumption model based what-ifs.

Submission Triggered Jobs - It's common to want to run a job once all data has been submitted.  Currently the job has to be run manually or scheduled.  Allowing jobs to be triggered via data submissions would provide greater automated support for implemented processes.

Executing Rule Sets in PerformancePoint to Move Data

As I mentioned in my post a few weeks ago, the PEL Allocate statement is a powerful method of moving data between PerformancePoint models. Although it's powerful, the end result of an Allocate statement is simply that the destination model will contain data that has been queried from the source model. What if the destination model already contains data for the target scope? If this is the case, then you will have double counting. When creating the Allocate rule, there is unfortunately no option that lets you decide what to do with existing data. Ideally you would have the 'Existing Data' option that you get when running an association, which gives you 'Append', 'Scoped Replacement' and 'Full Replacement'. To get around this, I tend to place two rules inside a rule set. The first rule is a SQL implementation assignment rule with the same scope as the Allocate rule; it deletes existing data by using the PEL statement this = NULL inside the rule. The second rule is the Allocate statement itself, which appends the data. The idea is then to execute the rule set, which will execute each of its rules in order. The advantage of doing this is that your whole data movement process can be executed in one clean step (perhaps scheduled, or via Excel), without the hassle of executing several individual rules.
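The delete-then-append pattern the rule set implements can be sketched in Python, modelling fact data as a dictionary keyed by dimension members (all names and values are invented for illustration):

```python
# Fact data as {(scenario, account): value}
destination = {("Budget", "Sales"): 500, ("Actual", "Sales"): 400}
source = {("Budget", "Sales"): 650}

def in_scope(key):
    """The shared scope of both rules: Budget / Sales only."""
    scenario, account = key
    return scenario == "Budget" and account == "Sales"

# Rule 1 ('this = NULL'): clear existing data in the target scope,
# leaving everything outside the scope untouched
destination = {k: v for k, v in destination.items() if not in_scope(k)}

# Rule 2 (the Allocate rule): append the data queried from the source model
for key, value in source.items():
    destination[key] = destination.get(key, 0) + value

print(destination)  # {('Actual', 'Sales'): 400, ('Budget', 'Sales'): 650}
```

Without rule 1, the appended 650 would sit alongside the stale 500 and the Budget/Sales total would be double counted.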

Assumption Model Design Techniques and Business Rules

PerformancePoint assumption models are very useful if you want the data in one model to drive and affect another model. However, if you’ve been using assumption models, the chances are that you’ve noticed they can sometimes be a bit awkward to use. This post highlights a couple of issues that I’ve experienced with assumption models, and provides a few workarounds. Firstly, when you initially link the assumption model to the main model, you may get the error message ‘the assumption model uses different member sets for the following dimensions’. Let's presume that you ignore this warning and continue to link to the assumption model. Further down the line, when you come to write your PEL rule to pick up the assumption model data, you may find that you get an error when validating the rule. This will occur if you want to populate a member in one member set by referencing a member in another member set, which resides in the assumption model. To illustrate this, if we want to populate an account called ‘Total Sales’ in the ‘Operational’ member set of the main model by using an account called ‘Price’ in the ‘Drivers’ member set in the assumption model, then we would have the following PEL rule:

scope ([Scenario].[All Members].[Forecast], [Account].[Operational].[Sales]);
    this = [Account].[Operational].[Volume] * (MODEL([Sales Assumptions]), [Account].[Drivers].[Price]);
end scope;

However, when validating this rule we get told “No hierarchy Drivers exists in dimension Account”. The only account member set that PerformancePoint will let you use in the above rule is 'Operational'. Note: you'll be ok if 'Price' exists in the main model - you just need to refer to it as [Account].[Operational].[Price]. But if 'Price' doesn't exist in the main model, then a different approach is needed. The simple way around it is to use an approach found in the Alpine Ski House sample application.
Add the accounts (if that’s the dimension causing you problems) into another dimension called Business Driver. You can then use the Business Driver dimension in your PEL rule, as PerformancePoint is more than happy for you to reference assumption model specific dimensions. This is shown below:

scope ([Scenario].[All Members].[Forecast], [Account].[Operational].[Sales]);
    //The rule is almost identical, but validates ok as we're using the BusinessDriver dimension.
    this = [Account].[Operational].[Volume] * (MODEL([Sales Assumptions]), [BusinessDriver].[Drivers].[Price]);
end scope;

Another problem that you may experience is a difference in granularity between the member sets used in the assumption model and the main model. For example, you may have an Entity member set called ‘Regional’ used in the assumption model, but a more detailed member set used for the Entity dimension in the main model. In this case, we want to populate members in the more detailed 'Company' member set by using a value from each member's parent in the higher level 'Regional' member set. To do this, we can just use the Parent function on the current member of the main model's entity member set. Even though we've referenced the main model member set inside the MODEL function, if the current member's parent exists in the assumption model, then it will be picked up without any problems:

scope ([Scenario].[All Members].[Forecast], [Account].[Operational].[Sales], Descendants([Entity].[Company].[TOT], 1073741823, leaves));
    this = [Account].[Operational].[Volume] *
        //It's ok here to use [Dimension].[Main Model Member Set].CurrentMember.Parent
        (MODEL([Sales Assumptions]), [Entity].[Company].CurrentMember.Parent, [BusinessDriver].[Drivers].[Price]);
end scope;

Both are very simple design techniques, but it’s worth being aware of them before you actually start building your models.
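The parent-lookup technique in the second rule can be sketched in Python (the entity hierarchy and driver values below are invented for illustration):

```python
# Entity hierarchy in the main model: detailed member -> regional parent
parent_of = {"London": "UK", "Leeds": "UK", "Paris": "France"}

# The assumption model holds the Price driver only at the regional level
regional_price = {"UK": 10.0, "France": 12.5}

# The main model holds Volume at the detailed level
volume = {"London": 100, "Leeds": 50, "Paris": 80}

# Equivalent of CurrentMember.Parent inside the MODEL() reference:
# each detailed entity picks up the driver stored against its parent
sales = {entity: volume[entity] * regional_price[parent_of[entity]]
         for entity in volume}
print(sales)  # {'London': 1000.0, 'Leeds': 500.0, 'Paris': 1000.0}
```

The lookup only succeeds because every detailed member's parent exists in the coarser member set, which is exactly the condition the PEL rule relies on.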

PEL Business Rule Re-use and Pre-defined What If Scenarios

You’ve built a rule set to satisfy an original set of business requirements for calculating budget, and all is good with the world. Then a change order is raised and approved that requires, for want of a better description, a set of pre-defined what-if* scenarios to apply to the calculated budget. The pre-defined what-ifs are essentially the same business rule calculations with a slight factor or variance applied, so it makes sense to want to re-use the existing rules. Not an unreasonable request. What’s the best way to implement this change request without introducing the burden of duplicated rules, keeping your model clean and well structured?

Background

The rule set in question utilises an assumption model that contains values such as salary levels, training days and tax assumptions, and calculates various budget lines based on headcount information collected from a variety of regional cost centres. The rules are pretty straightforward, as the Salary rule outlined below shows:

scope(
    [Scenario].[All Members].[Budget],
    [Account].[Detail].[Salary Costs],
    Descendants([Role].[Role].[All Roles], 0, after),
    Descendants([Time].[Quarterly].[Year2008], 0, after),
    Descendants([Region].[Region].[Global], 0, after)
);
this = (
    (Model([Drivers]), [Account].[Detail].[Annual Salary],
     [Time].[Quarterly].CurrentMember.Ancestor([Time].[Quarterly].[Year]),
     [Role].[Role].CurrentMember, [Region].[Region].CurrentMember,
     [Scenario].[All Members].[Budget]) / 4
) * (
    [Account].[Detail].[Headcount], [Time].[Quarterly].CurrentMember,
    [Role].[Role].CurrentMember, [Region].[Region].CurrentMember,
    [Scenario].[All Members].CurrentMember
);
end scope;

Essentially, for each quarter, for each role within each region, we multiply the appropriate annual salary from the assumption model (divided by 4 for quarters) by the appropriate headcount to give us the budgeted salary figure.
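The arithmetic the Salary rule performs can be sketched in Python (the roles, regions and figures are invented for illustration):

```python
# Assumption model: annual salary per (role, region)
annual_salary = {("Developer", "EMEA"): 48000, ("Tester", "EMEA"): 36000}

# Collected data: headcount per (role, region, quarter)
headcount = {("Developer", "EMEA", "Q1"): 10, ("Tester", "EMEA", "Q1"): 4}

# Salary rule: quarterly salary cost = (annual salary / 4) * headcount
salary_cost = {key: (annual_salary[key[:2]] / 4) * count
               for key, count in headcount.items()}
print(salary_cost)
```

So a quarter with 10 developers at 48000 a year yields a 120000.0 salary cost for that cell.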
New Requirement

Our change order requires a view of the budget as if we were to increase or decrease the headcount for each role within each region by 30%. Two new members have been added to the scenario dimension to cope with this requirement - you could argue that these members could exist in a separate ‘what-if’ style dimension that could then be applied to any of the scenarios independently. I actually prefer that approach, but for simplicity let’s limit this explanation to the scenario dimension. Okay, we have our new scenario members; we now need to wire up some rules to calculate these alternate budgets.

Design

Our first problem is how to make the existing rules generic across budget scenarios. We have the budget scenario factors (+30% and -30%) to work in somewhere. We could hard-code these values into the rules, but that would require a rule set per budget scenario, thus duplicating all rules. Nasty. We could store the factors in an assumption model against the appropriate budget scenario; this could then be referenced within the rule. That would potentially allow us to re-use the business rules. However, I’m not keen on this approach at all, as I feel that the budget factors are fixed, scalar values; storing them in an assumption model is overkill and would potentially require much more implementation effort. So, what options are we left with? Not many. In this situation I would be tempted to create a member property on the Scenario dimension that holds the factor against the appropriate member. The business rules could then simply (yeah right, see later!) reference the member property and apply the factor. So, in the case of the main Budget scenario we would have a factor of 1, for Budget +30% Headcount we would store a factor of 1.3, and 0.7 would be set against Budget -30% Headcount.
So, on the face of it, you can then update each rule to multiply every reference to Headcount by the appropriate factor held in a member property. However, there is a problem, and it lies with the PEL function ‘Properties’. Unlike the MDX equivalent, the PEL version always returns a string, irrespective of the data type you assign to the member property itself. You cannot multiply by a string value, and PEL does not contain any data type conversion functions to overcome this.

Implementation

The workaround: use a native MDX implementation. That sounds worse than it is; I’m not suggesting that you re-write all the business rules in MDX, although that is certainly an option. Alternatively, you can add a single MDX business rule to the top of the rule set that re-calculates Headcount using the value of the member property:

scope (
    {[Scenario].[All Members].[Scenario].&[402], [Scenario].[All Members].[Scenario].&[501]},
    Descendants([Time].[Quarterly].[Year].&[2008], 0, AFTER),
    [Account].[Detail].[Level 02].&[5001],
    Descendants([Region].[Region].[Level 02].&[5201], 0, AFTER),
    Descendants([Role].[Role].[Level 02].&[101], 0, AFTER)
);
([Measures].[Value]) =
    ([Scenario].[All Members].[Budget], [Time].[Quarterly].CurrentMember,
     [Account].[Detail].[Headcount], [Region].[Region].CurrentMember,
     [Role].[Role].CurrentMember)
    * [Scenario].[All Members].CurrentMember.Properties('OffsetPercent');
end scope;

Apart from the fact that the MDX rule is very PEL-like, notice the last line of the rule. Here the headcount for each role, region and quarter is multiplied by the Scenario member property OffsetPercent. With this in place, the subsequent rules will calculate the budget scenarios based on the appropriately revised headcount value, due to the natural solve order of the rules in the rule set. The final step is to update each rule's scope to include each of the budget scenarios (leaving the main bulk of the rule in place, untouched).
Here’s the updated version of the Salary rule we looked at earlier (the modification is the expanded scenario set in the scope):

scope(
    {[Scenario].[All Members].[Budget],
     [Scenario].[All Members].[BudgetPlus30PcntHCount],
     [Scenario].[All Members].[BudgetMinus30PcntHCount]},
    [Account].[Detail].[Salary Costs],
    Descendants([Role].[Role].[All Roles], 0, after),
    Descendants([Time].[Quarterly].[Year2008], 0, after),
    Descendants([Region].[Region].[Global], 0, after)
);
this = (
    (Model([Drivers]), [Account].[Detail].[Annual Salary],
     [Time].[Quarterly].CurrentMember.Ancestor([Time].[Quarterly].[Year]),
     [Role].[Role].CurrentMember, [Region].[Region].CurrentMember) / 4
) * (
    [Account].[Detail].[Headcount], [Time].[Quarterly].CurrentMember,
    [Role].[Role].CurrentMember, [Region].[Region].CurrentMember
);
end scope;

For completeness, the modified rule set now contains the new (MDX) rule ‘Headcount’; this will be calculated first, for each of the budget scenarios, ensuring the subsequent rules use the appropriately factored headcount figure. Support for business rule re-use is limited in PPS-P v1, but as this example hopefully illustrates, with a little thought and application re-use is occasionally achievable. I hope there will be better support for re-use in future versions!

---------------------------------------

* Generally, what-ifs are used dynamically; you change a few numbers in a data entry worksheet, re-calculate the sheet and view the results, repeating the process as necessary. This is a great feature and really quite powerful. However, there are some situations where pre-defined what-ifs are required to easily and frequently show standard what-if scenarios.
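The factor mechanism can be sketched in Python; note how the member property arrives as a string and must be converted before multiplying, which is exactly what PEL's Properties function cannot do (the property values below are invented for illustration):

```python
# Scenario member property values; like PEL's Properties() function,
# each factor arrives as a *string*, whatever type the property was given
offset_percent = {"Budget": "1", "Budget +30% Headcount": "1.3",
                  "Budget -30% Headcount": "0.7"}

base_headcount = 100

# The MDX rule's job: recalculate headcount per scenario, converting the
# property to a number first (PEL offers no such conversion function)
revised_headcount = {scenario: base_headcount * float(factor)
                     for scenario, factor in offset_percent.items()}
print(revised_headcount)
```

The subsequent Salary-style rules then read the revised headcount for whichever scenario is in scope, so none of their own logic needs duplicating.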

Intercompany Eliminations Video for PerformancePoint Planning

Useful post on Norm's PerformancePoint Server Blog showing a short video on how to do intercompany eliminations in PerformancePoint. The video is actually by Forrest Dermid, who sets the scene by explaining why you would actually want to do an intercompany elimination. He then goes on to show the relevant areas of Planning Business Modeler, demonstrating in detail the changes that you need to make to your account dimension and model properties, amongst others. Really worth watching in my opinion. You can find it here.

New Microsoft Insider PPS Planning Blog

A great new PPS-P focused blog has been up and running for a good few weeks now. Peter Eb., a developer on the PPS Excel add-in, has started blogging some real gold with a mixture of definitions, 'howtos' and workarounds. He's one of the resident experts on the Planning TechNet Forum and really knows his stuff! Just as a taster, here are the headings of his last few posts:

What does this PPS Planning workflow action mean?
HOWTO: Select a time range that is dynamic (updates over time for a rolling forecast)
HOWTO: Hide filters for dimensions that confuse users
What does this status "xyz" mean for my assignments?
What does the error "filter selection for dimension XYZ is not in the filter scope" mean?

It's what technical blogging should be all about - brilliant!

PerformancePoint Planning Performance Tuning White Paper

A PerformancePoint Planning white paper has recently been published that documents the performance and scalability of Planning Server. Included are some very interesting test results that will come in handy when planning the deployment of PPS for a large number of users. There are also a few tips for the Excel add-in and configuration of the Planning Server. The following page contains links to both the new Planning white paper and also to a Monitoring white paper that was published a while ago: