# Sacha Tomey

### Sacha Tomey's Blog

I've blogged about some of the financial consolidation elements of PerformancePoint Planning before.  Most of the elements related to financial consolidation (e.g. share calculations, intercompany transactions) are specific to consolidation and only implemented when performing some form of statutory consolidation.  Currency conversion, on the other hand, can be common to any element of PerformancePoint Planning, whether it's a budgeting, forecasting, planning or, in fact, statutory consolidation solution.

In its simplest, out-of-the-box form, currency conversion is pretty straightforward.  You need an exchange rate assumption model to hold the exchange rates, and you need a financial model without shares that contains a pre-defined currency conversion rule that utilises the exchange rate assumption model.  You run the pre-defined rule on a set of financial data and currency conversion is calculated for you.  There's little more to it than that.

This post looks a little (and I mean a little) deeper into currency conversion and demonstrates the triangulation features contained within PPS-P.

Consider this set of exchange rates, taken from www.xe.com today:

On the face of it, we have a few rates that allow us to convert from British Pound to Japanese Yen, US Dollar to both British Pound and Euro, and finally a rate for converting Euro to Japanese Yen.

When the in-built currency conversion rule is run against, for example, a set of US financial data, we can determine the British Pound equivalent.  The Net Profit/Loss of 110,500 USD, converted at a rate of 0.535 (53.5 British pence per dollar), gives 59,117.50 GBP - as you would expect:

What would happen if we were to run the currency conversion rule again but this time try and convert the US Dollar figures to Japanese Yen?

The astute among you would point out that we don't have an exchange rate for converting US Dollar to Japanese Yen.  The more astute among you might say that we could determine the JPY figures by first converting to GBP and then converting the GBP figures to JPY.  The even more astute among you might suggest that you could calculate the Japanese Yen two ways, either by first converting to GBP and then to JPY or by first converting to EUR before converting to our desired JPY.

Who's right?  Well, all of you, kind of!

If I run the currency conversion job again but instead of selecting GBP as the reporting currency, I select JPY, I get the following result:

No conversion!  Why?  Well, we don't have an exchange rate set for converting USD to JPY!

This is where triangulation comes in.  With our current set of exchange rates there are two methods for converting USD to JPY.  Either through GBP or EUR.  You set which 'route' the triangulation takes by setting the model property 'Default Currency'.  By default this is not set, hence our failed conversion above.

However, if you were to set it to either GBP or EUR you will get some results, but there is a catch - the 'route' you take returns different results.  Incidentally, you don't have to redeploy the model when you change the property; you just need to save the model.

Triangulate through GBP

110,500 USD at a rate of 0.535 equals 59,117.50 GBP.  Convert the GBP to JPY at a rate of 205.524 and that equates to 12,150,065.07 JPY.

Triangulate through EUR

110,500 USD at a rate of 0.679 equals 75,029.50 EUR.  Convert the EUR to JPY at a rate of 162.068 and that equates to 12,159,881.006 JPY.

The two 'routes' highlight a variance of 9,815.936 JPY (about 90 USD)!  A triangular arbitrage!
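The arithmetic of the two routes can be sketched in a few lines of code.  This is purely an illustration using the rates quoted in this post - the `rates` table and `triangulate` function are my own sketch, not anything in PPS-P:

```python
# Illustration of the two triangulation routes using the rates from this post.
# The rates dictionary and triangulate() are a sketch, not PPS-P itself.
rates = {
    ("USD", "GBP"): 0.535,    # 53.5 pence per dollar
    ("USD", "EUR"): 0.679,
    ("GBP", "JPY"): 205.524,
    ("EUR", "JPY"): 162.068,
}

def triangulate(amount, source, target, via):
    """Convert source -> via -> target using the direct rates above."""
    return amount * rates[(source, via)] * rates[(via, target)]

via_gbp = triangulate(110_500, "USD", "JPY", "GBP")   # 12,150,065.07 JPY
via_eur = triangulate(110_500, "USD", "JPY", "EUR")   # 12,159,881.006 JPY
variance = via_eur - via_gbp                          # ~9,815.94 JPY
```

Either route is 'correct' given the rates supplied; the variance simply shows that the quoted cross rates are not perfectly consistent with each other.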

So, two questions:

1) Which 'route' should you use?
That's a business decision, and if it's going to be a common conversion then direct rates ought to be favoured over triangulation.

2) How do you exploit this model to work out guaranteed triangular arbitrage to make millions through currency trading?
I'm working on it !

I was unable to make the first Microsoft BI Conference in Seattle last year but Tim made it, and reported that it was a great ~~excuse for an extended weekend of partying and snowboarding in Whistler~~ way to meet like-minded people, learn the latest buzz and feel part of the BI community.

Well, this year I can make it and I will be joining Tim and Jeremy for the duration.  Bad luck for me though, it's too early for snow ! ;o)

I've checked out the Tracks and Sessions and there are quite a few I'm up for.  They haven't published the schedules yet, so whether I can jump tracks and still fit everything in remains to be seen, but here are my picks:

Partner Track
- BI Unplugged - a Panel Discussion

Phew!  Well, after that lot I'm definitely gonna need a drink, so if you fancy a bevvy, hunt me down - it'd be great to hook up!

There's a new workbook property that was introduced in PPS Service Pack 1.  The 'Clear Changes After Workflow Action' property effectively clears the change list for the workbook once the assignment has been submitted (either as draft or final).

The property can only be reached through the Report Properties dialog, and is at the workbook level:

This property defaults to false which, under certain circumstances, can hinder performance.

Whenever you change data on a matrix, the slice that you affected is saved to a change list.  You can view what's on the change list by choosing 'View -> Show Current Changes' from the PPS Add-In for Excel.

Here's an example change list; two budget accounts for the same time period and department have been updated with new values.

The default behaviour (and the behaviour prior to SP1) is that, for the life of the assignment, the change list is maintained for every cell that is updated.  The change list is simply appended to so, as you can imagine, on a large workbook with several matrices spanning several filter slices, the change list can become quite large.

Submitting the assignment effectively submits the change list for processing by the server, first updating/inserting the appropriate records into the fact table and subsequently re-processing the Analysis Services partition.  It follows then, that the larger the change list, the slower the submission process.

Before SP1, this forever-growing change list issue was resolved with a little user training.  As part of the submission process you would invite your users to manually clear the change list:

By 'Clearing Current Changes' you throw away the changes to the cells and have to rely on the data being safe and sound on the server.  This process helped keep the change list to a more manageable size, thus improving submission performance.

The new 'Clear Changes After Workflow Action' property in SP1, if set to true, will perform the 'Clear Current Changes' step for you automatically.  This helps keep the change list lightweight (providing of course, the user regularly submits).  However, as I have already implied, there is one issue to be wary of; with the property set to clear changes, if your submission fails, the change list is lost and there is a real danger of losing data.
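To make the two behaviours concrete, here's a hypothetical model of the change list semantics described above.  The `Workbook` class is entirely my own illustration - it is not the PPS Add-In's actual implementation:

```python
# Hypothetical sketch of the workbook change list behaviour described above.
# This models the semantics only; it is NOT PerformancePoint's implementation.
class Workbook:
    def __init__(self, clear_changes_after_workflow_action=False):
        self.clear_changes_after_workflow_action = clear_changes_after_workflow_action
        self.change_list = []                          # one entry per updated cell slice

    def update_cell(self, slice_key, value):
        self.change_list.append((slice_key, value))    # the list is only ever appended to

    def submit(self):
        submitted = list(self.change_list)             # the change list is sent to the server
        if self.clear_changes_after_workflow_action:   # SP1 property set to true
            self.change_list.clear()                   # change list thrown away after submission
        return submitted
```

With the property left at false the list keeps growing across submissions (slower and slower submits); with it set to true each submission starts from an empty list - at the cost of losing the changes if a submission fails.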

We are proud to announce that Adatis Consulting has been awarded Microsoft Gold Certified Partner Status.  This achievement reflects our continued investment in the latest Business Intelligence and Performance Management technologies from Microsoft and is, in no small part, thanks to our customers, partners, and the outstanding team at Adatis.

This post is more for my benefit than anything else; I can never remember the format of the CSV file used to load the Account dimension.

Below is an example CSV file in its simplest form that should help get me started faster the next time I need it!

```
Label,Name,Description,AccountTypeMemberId
1000,Salaries Permanent Staff,,Expense
4000,National Insurance Costs,,Expense
5000,Pension Costs,,Expense
```
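If you're generating the file programmatically, something like the following would do it.  This is just a convenience sketch: the file name is my choice, and only the column set is taken from the example above:

```python
# Generate the Account dimension load file shown above.
# The file name is illustrative; the columns match the example in this post.
import csv

rows = [
    ("1000", "Salaries Permanent Staff", "", "Expense"),
    ("4000", "National Insurance Costs", "", "Expense"),
    ("5000", "Pension Costs", "", "Expense"),
]

with open("account_dimension.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Label", "Name", "Description", "AccountTypeMemberId"])
    writer.writerows(rows)
```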

PerformancePoint Planning does contain auditing out-of-the-box.  However, it's not data-level auditing but metadata-level auditing.  If you want to track when a user has been added, or a workflow cycle started, then PPS-P keeps track of all these changes, plus a whole host of other system changes, in an XML file located in the directory specified in the 'Audit file name' property of the 'Auditing' tab in PAC.

The auditing will answer questions such as:

"When was the staging database last synchronised?"
"Who re-started my assignment?"
"When and who changed these business rules?"

This audit file captures a lot and one day, unless I'm beaten to it by Microsoft (hint hint!), I'll either write an XSLT file to transform the XML into a 'nice looking' HTML page, or build some routines to load the audit files into a usable database where I can stick some SSRS reports over the top.  Until then, Notepad will have to do.
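As a starting point for the 'load the audit files into a database' option, here's a rough sketch.  Note that the element names (`Entry`, `Timestamp`, `User`, `Action`) are hypothetical placeholders - the real PPS-P audit XML schema would need to be inspected and the code adjusted accordingly:

```python
# Rough sketch: load a PPS-P audit XML file into a queryable SQLite table.
# The element names used below (Entry, Timestamp, User, Action) are
# HYPOTHETICAL - check the real audit file's schema before using this.
import sqlite3
import xml.etree.ElementTree as ET

def load_audit(xml_path, db_path="audit.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS AuditLog (ts TEXT, usr TEXT, action TEXT)")
    root = ET.parse(xml_path).getroot()
    for entry in root.iter("Entry"):          # hypothetical element name
        conn.execute(
            "INSERT INTO AuditLog VALUES (?, ?, ?)",
            (entry.findtext("Timestamp"), entry.findtext("User"), entry.findtext("Action")),
        )
    conn.commit()
    return conn
```

Once loaded, the usual SQL/SSRS tooling can answer the "who restarted my assignment?" style questions directly.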

What's missing out-of-the-box is the ability to audit data level changes.  Questions that are actually more pertinent to Business Performance Management and requirements under Sarbanes-Oxley (SOX), the Basel Accords and other, related regulations:

"Who increased the salary assumption for the North West sales force?"
"When were the revenue targets last increased?"
"How many times do the sales forecasts get amended before they are approved?"

This post attempts to detail a simple mechanism that could be utilised to implement a basic data change audit trail, capable of answering questions like those above.  At this juncture I must point out that the approach described below entails making database-level changes to the Planning Application database and would therefore result in an unsupported environment.  In reality you would leave the planning database well alone and make these changes to a log-shipped (or equivalent) synchronised copy of the database, well away from the main production planning application database.  There, you have been warned!

In this scenario I want to place an audit trail on a simple assumption model.  The assumption model contains five dimensions: Time, Account, Region (Entity), Role (a custom dimension) and Scenario.  I want to track all changes to data within this model.

To set up the auditing the following is required:

Audit Tables

Three new tables are required; these are essentially copies of existing Planning Application tables and are used to keep a history of inserts and updates.  I create these tables in an 'Audit' schema; this keeps them grouped together, and allows me to maintain the same name as the original.  The only difference between each audit table and its original is the addition of a new primary key, 'AuditId'.

• MG_Drivers_MeasureGroup_default_partition
This is the main fact table of the assumption model that will contain the audit trail.  A complete history of changes will be maintained in this table.
• BizUsers
BizUsers contains all current users of the planning application.  As users can be removed from the application it is necessary to audit this table to ensure changes by non-current users can be forever traced.
• AssignmentsHistory
Stores history information relating to an assignment, such as when it was submitted (either as draft or final), the comments that were added by the submitter, and so on.  The key field contained in this table is the UserId, which allows us to know where to point the finger of blame (figuratively speaking, of course!).  Like the BizUsers table, when assignments are no more, the associated history records are purged too.  Auditing this table maintains the history.

You could argue that each of the dimension tables also need auditing for cases when members are removed from the dimension in the future.  I can't argue against this and depending on your business process and application you might be quite right !

Example Audit Table Create Statement:

```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [Audit].[AssignmentsHistory](
    [AuditID] [int] IDENTITY(1,1) NOT NULL,
    [AssignmentID] [bigint] NOT NULL,
    [ChangeDatetime] [datetime] NOT NULL,
    [CreateDatetime] [datetime] NOT NULL,
    [UserID] [bigint] NOT NULL,
    [Action] [nvarchar](100) NOT NULL,
    CONSTRAINT [PK_AuditAssignmentsHistory] PRIMARY KEY CLUSTERED
    (
        [AuditID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```

Triggers

Now, one of my colleagues in particular will shoot me down for this as, for years, I've been strongly and completely against triggers.  They cause no end of pain when performance tuning and debugging line-of-business application databases.  I always argue that all trigger logic can, and should, reside in (transaction-based) stored procedures that actually perform the data updates.  I still stand by this but, in this situation, we don't really have general access to the data update stored procedures so, although I'm breaking one of my own ten commandments, my hand is forced.

For each of the original tables a trigger is required to update the appropriate audit table.  To make the reporting easier I create a trigger for both Update and Insert; this means that the most recent version of the record is in the audit table too.  (C'mon, storage is cheap!)

Example Original Table Trigger Create Statement:

```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TRIGGER [dbo].[trig_AssignmentsHistory_Audit] ON [dbo].[AssignmentsHistory]
FOR UPDATE, INSERT
NOT FOR REPLICATION
AS

SET NOCOUNT ON

INSERT INTO Audit.AssignmentsHistory (
    AssignmentID,
    ChangeDatetime,
    CreateDatetime,
    UserID,
    Action)
SELECT
    AssignmentID,
    ChangeDatetime,
    CreateDatetime,
    UserID,
    Action
FROM Inserted
```

View

In order to report the audit trail, a simple view is required to show the changes over time of the values held in the fact table.  This view is primarily based on the audit fact table and resolves the dimension member IDs to give you the dimension member names.

Example Audit Trail View: (This view is specific to my fact table.  It will need modification for each fact table).

```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [Audit].[AuditReport]
AS
SELECT
    AuditFact.ChangeDatetime,
    dbo.D_Scenario.Name AS Scenario,
    dbo.D_Account.Name AS Account,
    dbo.D_Entity.Name AS Region,
    dbo.D_Role.Name AS Role,
    (SELECT TOP 1 UserName
     FROM Audit.AssignmentsHistory ah
     INNER JOIN Audit.BizUsers bu ON ah.UserID = bu.UserId
     WHERE ah.AssignmentId = AuditFact.AssignmentId) AS ChangedBy,
    AuditFact.Value
FROM
    Audit.MG_Drivers_MeasureGroup_default_partition AS AuditFact
    INNER JOIN dbo.D_Role ON dbo.D_Role.MemberId = AuditFact.Role_MemberId
    INNER JOIN dbo.D_Account ON dbo.D_Account.MemberId = AuditFact.Account_MemberId
    INNER JOIN dbo.D_Scenario ON AuditFact.Scenario_MemberId = dbo.D_Scenario.MemberId
    INNER JOIN dbo.D_Entity ON AuditFact.Region_MemberId = dbo.D_Entity.MemberId
```

The output of the view above will detail each of the changes made to each of the facts over time when queried like this:

```sql
SELECT
    Scenario,
    Account,
    Region,
    Role,
    ChangedBy,
    ChangeDatetime,
    Value
FROM
    [Audit].[AuditReport]
WHERE
    Scenario = 'Budget' AND
    Account = 'Annual Salary' AND
    Region = 'South West' AND
    Role = 'Assistant Manager'
ORDER BY
    ChangeDatetime DESC
```

Resulting in the following output (most recent first):

Notice the HR Director set the initial salary value for the South West Assistant Manager, the rather indecisive Finance Director made a few amendments, and it was later adjusted again by the HR Director.  PerformancePoint Planning data change tracking!

Possible Enhancements

The above method describes a very simple way of auditing data level changes in PPS-P.  This can be taken much further and I've included some possibilities below:

• Develop a parameter driven SSRS report to report the audit trail for a dimension member combination
• Create some generic scripts that accept the fact table as a parameter and create the appropriate audit tables, triggers and views.
• Link up with the fact table annotations table and/or the submission comments (Held in the AssignmentsHistory table) to enrich the reporting.
• The data volume will obviously grow over time - intelligent purging/archiving of audit records.
• Update the view to handle certain conditions where adjacent audit trail records contain no apparent change in value (only the change date will be different - this can occur when a contributor submits, makes subsequent changes and submits again without first clearing current changes).

We recently had to turn a couple of demo boxes into a two-client development environment for an overly aggressive PPS-P deadline.  Both demo boxes were connected to the network and both were PPS-P servers.  We wanted to nominate a specific box as the PPS-P server for the build.

Typically, the one we nominated, for some reason, would not allow us to add domain users to the Users tab of the PAC whereas, the other one would.

We received the following (not particularly helpful) error message when attempting to add a domain level user through PAC.

The following items cannot be saved

The total number of errors during submission is 1.

In our environment both boxes were using LOCAL (but different) accounts as the Application Pool identity and the Planning Service Account identity.  When we changed the local identity accounts over to DOMAIN-level accounts we could add users on both boxes.  It appears that the PPS-P web service needs access to the domain controller to check whether the user entered actually exists.  In fact, it does a little more than that: it also retrieves the user's SID to store in the Planning database.

Note: In addition to changing the user to a domain account we had to ensure that account existed in the local IIS_WPG (IIS Worker Process Group) also.

But why, when using local accounts, would one server allow the addition of domain users and the other not?  The answer was pass-through authentication!  The server that worked under the local account was using an account and password combination that matched an account and password combination on the domain!

Nick Barclay posted about connection problems with PPS M&A a good while back and gave a great write-up of how to overcome the issues encountered.  That's worth a read if you are not familiar with IIS application pools and identity accounts, as the issues we encountered were related.

Andrew has posted a great list of community-developed utilities for SQL Server.  There are a few on there that have been on my 'must-do-one-day' list for far too long!

There's a new tool for PerformancePoint Planning that allows you to migrate reference and fact data between applications.
(It is designed to be used for first time migrations only)

Documentation here

Inspired by Adrian's post on the TechNet Planning Forum here's my list of 'I wants' for PPS-P.  I've cross posted this to the forum discussion too.  A few of these have been suggested before but there are some new ones in there too - In no particular order:

Application Migration Wizard
There are several methods for doing this currently but nothing slick.  I was hopeful that the new Data Migration Tool was the answer but, alas, no.  It only migrates data (don't get me wrong, this is useful) - what we really need is an Application Migration Tool that migrates everything, supporting both initial 'go-live' migrations and 'update release' migrations.  It should support making a copy of the application on the same server too (with a different name, of course!).

Developer Edition
The PerformancePoint Server components can only be installed on Windows Server.  This is a tad annoying for development and demo purposes.  Currently we have a laptop built on Windows Server for demos and sales activities.  For development, when connecting to the server is not always feasible, we have a couple of pre-built PPS-P 'development' hard disk drives that we swap out with our non-server development boxes, manually migrating to the official servers periodically.  It would be great if, like SQL Server, there was a 'Developer Edition' that could be installed on XP and/or Vista.

Improved Support for Business Rule Re-Use
There are several ways that Business Rule re-use could be implemented, but initially I'd like to see a 'formula' tab where, for example, you can define global formulas and expressions that can be selected (re-used) within various scope statements.  Taking this a step further, how cool would it be if the global formulas were actually functions that accepted parameters, so you could produce a smaller set of generic formulas that can be re-used throughout the application?

Improved Rule Editing
The editor is a little clunky at present.  We need something more akin to SQL/MDX Query Editor in Management Studio.

Web-Based Data Entry
Whether via Excel Services or some other 'pivot table' style entry form.  A couple of customers are banking on Microsoft providing this in the future.  We at Adatis would love to know if they've actually started something like this in Redmond as we'd be up for developing something in-house if not !

Extensible PEL
Much like custom tasks can be developed in SSIS and stored procedures can be created for Analysis Services, it would add some real power if PEL keywords could be added via custom .NET assemblies registered with the PPS-P server.  Not only could complex, vertical-specific logic be encapsulated, but intellectual property could be protected, opening up a third-party plug-in market.

Process Cycle Independent Data Entry Forms
It would really aid productivity if data entry forms could be created without the need to create a Process Management Cycle and associated assignment.

Application/Model Templates
Imagine a 'Retail Budgeting Template' or a 'Transportation Forecasting Template' and how that could potentially wipe weeks off an implementation.  This is a tall order due to the global nature of the product and the diversity of requirements within industry verticals, so it might be better to produce lower-level component templates that can be plugged in to a larger solution, e.g. UK Tax, NI and Salary assumption models.

Protect Sensitive Data
Some data that is forecasted and budgeted is extremely sensitive - salary information, for example.  It should be possible to mark assignments as sensitive so that they challenge the user opening them for some form of authentication.  Integrated security is not secure!

Improved Spreading
Not having the ability to spread across time, or to enter data at non-leaf level (as you can with the other dimensions), results in more models than you would ideally choose, increasing complexity.

Even and Ratio spread don't satisfy every client spreading requirement.  If custom spread logic could be developed on the server and exposed through the Excel add-in for specific assignments, every requirement could be fulfilled.

Unlimited Undo/Versioning
Without the ability to copy applications, proper versioning and unlimited undo/redo (across sessions) would help.  Adopting the Visual SourceSafe approach of versioning, pinning, branching and merging would greatly assist with development/deployment/testing cycles.

Validation
Centralised validation rules that can prevent invalid data from being submitted to the server would help reduce approval/review workload.

Dimension Member Creation
Excel Add-In support for dimension member creation would assist greatly for certain budgeting/forecasting scenarios.  Workflow and data collisions would need to be considered otherwise the data integrity could be jeopardised.

Multiple Model Data Entry
Allowing an assignment to capture data for multiple models would make it possible to simplify the implemented process and provide greater support for, amongst others, assumption model based what-ifs.

Submission Triggered Jobs
It's common to want to run a job once all data has been submitted.  Currently the job has to be run manually or scheduled.  Allowing jobs to be triggered via data submissions would provide greater automated support for implemented processes.