Sacha Tomey

Sacha Tomey's Blog

Migrating to Native Scoring with SQL Server 2017 PREDICT

Microsoft introduced native predictive model scoring with the release of SQL Server 2017.

The PREDICT function (Documentation) is now a native T-SQL function that removes the need to score using R or Python through the sp_execute_external_script procedure. It's an alternative to sp_rxPredict; in both cases you don't need to install R, but with PREDICT you don't need to enable SQLCLR either - it's truly native.

PREDICT should make predictions much faster as the process avoids having to marshal the data between SQL Server and Machine Learning Services (Previously R Services).

Migrating from the original sp_execute_external_script approach to the new native approach tripped me up so I thought I'd share a quick summary of what I have learned.

Stumble One:

Error occurred during execution of the builtin function 'PREDICT' with HRESULT 0x80004001. 
Model type is unsupported.

Reason:

Not all models are supported. At the time of writing, only the following models are supported:

  • rxLinMod
  • rxLogit
  • rxBTrees
  • rxDTree
  • rxDForest

sp_rxPredict supports additional models, including those available in the MicrosoftML package for R (I was attempting to use rxFastTrees). I presume this limitation will ease over time. The list of supported models is referenced in the PREDICT function (Documentation).

Stumble Two:

Error occurred during execution of the builtin function 'PREDICT' with HRESULT 0x80070057. 
Model is corrupt or invalid.

Reason:

The serialisation of the model needs to be modified for use by PREDICT. Typically you might serialise your model in R like this:

model <- data.frame(model=as.raw(serialize(model, NULL)))

Instead you need to use the rxSerializeModel method:

model <- data.frame(rxSerializeModel(model, realtimeScoringOnly = TRUE))

There's a corresponding rxUnserializeModel method, so it's worth updating the serialisation across the board so models can be used interchangeably in the event all model types are eventually supported.  Until now I'd been doing it the legacy way.
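
For reference, once the model is serialised with rxSerializeModel and stored in a table, the T-SQL call looks roughly like this minimal sketch (the dbo.Models table, its columns and the output column name are illustrative assumptions, not from my original setup):

-- Hypothetical model store; adjust table and column names to suit
DECLARE @model varbinary(max) = (SELECT model FROM dbo.Models WHERE model_name = 'my_realtime_model');

SELECT d.*, p.*
FROM PREDICT(MODEL = @model, DATA = dbo.NewData AS d)
WITH (Score float) AS p;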

That's it.  Oh, apart from the fact that PREDICT is supported in Azure SQL DB, despite the documentation saying otherwise.

Geographic Spatial Analysis with Azure Data Lake Analytics (ADLA)

Whilst working on an Azure Data Lake project, a requirement hit the backlog that could easily be solved with a Geographical Information System (GIS) or even SQL Server - spatial data type support was introduced in SQL Server 2008.

However, Azure Data Lake Analytics (ADLA) does not natively support spatial data analytics, so we'll have to extract the data into another service, right? Wrong! :)

Due to the extensibility of Azure Data Lake Analytics, we can enhance it to do practically anything. In fact, we can lean on existing components and enhance the service without having to develop the enhancement itself.

This blog is a quick run through demonstrating how to enhance ADLA such that it will support Spatial analytics and meet our project requirement.

Problem

For simplicity I've trivialised the problem. Here's the requirement:

Indicate which Bus Stops are within 1.5 km of Southwark Tube Station.

To support this requirement, we have two datasets:

  • A list of all the Bus Stops in London, including their Geo location (circa 20k records)
  • The Geo location record of Southwark Tube Station (a single record!)
    • In fact, the location of the tube station is pretty accurate and is geo located to the entrance pavement outside the tube station:

[Image: Southwark Tube Station marker geo-located to the entrance pavement]

This would be an easy problem for a GIS to solve. You would specify the central point, i.e. our Southwark Tube Station marker, draw a circle (or buffer) with a 1.5 km radius around it, and select all bus stops that fall within or intersect that circle. This spatial analysis is easy for these systems as that's essentially what they are built to do.

SQL Server 2008 introduced spatial data type support, which allows spatial-style analysis to be performed on geo data using T-SQL in conjunction with the supplied Geometry and Geography data types. More info on those can be found here.
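
For comparison, a rough T-SQL sketch of the same buffer/intersect test using the geography type might look like this (the dbo.BusStops table and its columns are assumptions for illustration):

-- Note: the T-SQL static method geography::Point takes (Lat, Long, SRID)
DECLARE @southwarkTube geography = geography::Point(51.503829, -0.104777, 4326);

SELECT s.StopCode, s.StopName
FROM dbo.BusStops AS s
WHERE @southwarkTube.STBuffer(1500).STIntersects(geography::Point(s.Latitude, s.Longitude, 4326)) = 1;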

So, how can we solve our problem in ADLA, without a GIS and without having to export the data to SQL Server?

Solution

You can register existing assemblies with ADLA. It so happens that the SQL Server Data Types and Spatial assemblies are nicely packaged up and can be used directly within ADLA itself - think about that, it's pretty awesome!

Caveat: At the time of writing we have no idea of the licence implications. It will be up to you to ensure you are not in breach :)

Those assemblies can be downloaded from here.  You only need to download and install the following file:

  • ENU\x64\SQLSysClrTypes.msi

This installs two key assemblies, which you'll need to grab and upload to your Data Lake Store:

  • C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SqlServer.Types.dll
  • C:\Windows\System32\SqlServerSpatial130.dll

Once they have been uploaded to your Data Lake Store, you need to register those assemblies with ADLA.

DECLARE @ASSEMBLY_PATH string = "/5.UTILITY/USQL-Extend/SQL-Server/";
DECLARE @TYPES_ASM string = @ASSEMBLY_PATH+"Microsoft.SqlServer.Types.dll";
DECLARE @SPATIAL_ASM string = @ASSEMBLY_PATH+"SqlServerSpatial130.dll";

CREATE DATABASE IF NOT EXISTS SQLServerExtensions;
USE DATABASE SQLServerExtensions;

DROP ASSEMBLY IF EXISTS SqlSpatial;
CREATE ASSEMBLY SqlSpatial
FROM @TYPES_ASM
WITH ADDITIONAL_FILES =
     (
         @SPATIAL_ASM
     );

Following registration of the assemblies, we can see the registration loaded in the ADLA Catalog database we created:

[Image: the registered assemblies visible in the ADLA catalog]

We are now ready to use this U-SQL enhancement in our U-SQL Query - let's go right ahead and solve our problem in one U-SQL Script.

// Reference the assemblies we require in our script.
// System.Xml comes for free as a system assembly, so it didn't need registering; SQLServerExtensions.SqlSpatial is the assembly we registered above.
REFERENCE SYSTEM ASSEMBLY [System.Xml]; 
REFERENCE ASSEMBLY SQLServerExtensions.SqlSpatial; 

// Once the appropriate assemblies are registered, we can alias them using the USING keyword.
USING Geometry = Microsoft.SqlServer.Types.SqlGeometry; 
USING Geography = Microsoft.SqlServer.Types.SqlGeography; 
USING SqlChars = System.Data.SqlTypes.SqlChars; 

// First create the centralised point. 
// In this case it's the pavement outside the entrance of Southwark Tube Station, London. 
// Format is Longitude, Latitude and then SRID. 
// NB: It's Longitude then Latitude - the opposite way round to what you might expect. 
DECLARE @southwarkTube Geography = Geography.Point(-0.104777,51.503829,4326); 

// Next we extract our entire London bus stop data set from the file. 
// There's about 20k of them. 
@busStopInput = 
	EXTRACT 
		[StopCode]	string, 
		[StopName]	string, 
		[Latitude]	double?, 
		[Longitude]	double? 
	FROM @"/1.RAW/OpenData/Transport/bus-stops-narrow-full-london.csv" 
	USING Extractors.Csv(skipFirstNRows:1,silent:true); 

// This is effectively the transform step and where the magic happens 
// Very similar syntax to what you would do in T-SQL. 
// We are returning all the bus stops that fall within 1500m of Southwark Tube 
// Essentially we return all stops that intersect with a 1500m buffer around the central tube point 
@closeBusStops= 
	SELECT 
		* 
	FROM 
		@busStopInput 
	WHERE 
		@southwarkTube.STBuffer(1500).STIntersects(Geography.Point((double)@busStopInput.Longitude,(double)@busStopInput.Latitude,4326)).ToString()=="True"; 

// The results are written out to a csv file. 
OUTPUT 
	@closeBusStops TO "/4.LABORATORY/Desks/Sach/spatial-closebusstops.csv" 
	USING Outputters.Csv(outputHeader: true); 

The query outputs a list of bus stops that are within the specified Spatial distance from Southwark Tube Station. If we have a look at all the bus stops (in red) and overlay all the 'close' bus stops (in green), we can see the results:

[Image: all London bus stops (red) with the 'close' bus stops overlaid (green)]

[image]

Pretty neat.

Azure Data Lake Analytics does not natively support spatial data analytics but by simply utilising the assemblies that ship with SQL Server, we can extend the capability of U-SQL to provide that functionality or practically any functionality we desire.

Friday Fun: GeoFlow does the Great South Run

GeoFlow was released to public preview yesterday; a new 3D visualization tool for Excel which allows users to create, navigate and interact with time-sensitive data applied to a digital map.

Back in October last year, along with 25,000 other people, my good friend and colleague Tim Kent (@TimK_Adatis) and I ran the Great South Run: a 10 mile run around the City of Portsmouth on the south coast of England.  As it happened, we both wore GPS watches and, using the data collected, I've created a simple GeoFlow tour of the race.

Tim is Green - I am Red - who wins...  there's only one way to find out ......

Run Race

Boot to VHD – Demo Environments and more

A few people have asked me about this recently, so I thought I’d share my approach.

Creating demo environments, particularly for the MS BI stack, can be time consuming and a challenge, especially when you need to take the demo with you and can't rely on powerful internal servers and good client internet connectivity.

A bit of history

Initially we used to have a dedicated, decently specced, Demo laptop that would be installed with all the goodies that we would need to demo. This worked until the demo needed to be in two places at once, or you needed to carry around your day-to-day laptop too.

The solution was to use the demo environment as our day to day workstation but it was massive overkill to have full blown Windows Server running SharePoint with multiple instances of SQL Server etc. and, unless you had a high spec machine, everything was a little laggy.

The next approach was to carry around a couple of laptop hard disks that you’d swap in and out depending on whether you were demoing or working. This worked well for a good while but did prevent timely demos (no screwdriver, no demo).

Then we entertained VirtualBox, Hyper-V and other virtualisation tech to run virtual environments. This was all well and good, but the primary downfall of this approach is that you need a really high-spec machine to run both the host and the virtual environment, or performance is going to be a major issue - and for demos, you want performance to be as good as possible.

Then we discovered Boot to VHD. I'm not sure when this first became possible and I definitely believe we were late to the game, but we've been using it for around 12 months - long enough to prove it a solid approach to creating and running demo environments (and more besides).

Boot to VHD

The concept is easy, and “does what it says on the tin”. You create or acquire a VHD and configure your laptop to boot directly from the VHD.

Advantages

1) The VHD can use all the host resources. Under traditional virtualisation approaches you need to split memory and/or processors which impacts performance. So on an 8GB, 2 proc laptop traditionally you would have 4GB, 1 proc for the host and 4GB, 1 proc for the virtual environment. With Boot to VHD the virtual environment can utilise the full 8GB and both processors.

2) It’s flexible. I have a chunky external HDD containing several different virtual environments for different purposes. I can backup, swap out, replace and roll-back environments in the time it takes to copy a VHD from local to external or vice-versa. You can even share demo environments with your colleagues.

3) You always have a demo environment to hand. All it takes is a reboot to load up the appropriate environment for those spontaneous demo opportunities.

Disadvantages

1) You do need to be careful regarding disk space usage and be very disciplined to ensure you always have enough disk space available. If you are running a number of large environments there will be an element of maintenance to ensure everything always fits.

2) Without resorting to a hybrid approach, you can’t demo a distributed system working together.

Setup

So, to make use of Boot to VHD, we’ll assume we already have a VHD available and ready to boot from. These can be created manually, acquired from your internal infrastructure team, or obtained from other third parties.

When creating them manually I ALWAYS create “Dynamically Expanding” virtual hard disks. This way, you can actually store more VHD environments on your laptop than you would otherwise.

Although dynamically expanding disks allow you to store more environments, you will still need to ensure you have enough disk space for the disk to expand into, as this is required at boot time. So, if your VHD is set up as a 100GB dynamically expanding disk, it might only be a 20GB file, but when it’s booted it will expand to the full 100GB, so you will need that much free space on your hard disk or the boot will fail.
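
If you do need to create one manually, the diskpart steps go something like this sketch (run from an elevated command prompt; the path, size in MB and drive letter are just examples):

diskpart
create vdisk file="C:\VHD\DemoBI.vhd" maximum=102400 type=expandable
select vdisk file="C:\VHD\DemoBI.vhd"
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=V
detach vdisk
exit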

1) Copy the VHD to your laptop to a suitable location e.g. C:\VHD

2) Create a new Boot entry
Run the following at a command prompt as an Administrator:

bcdedit /copy {current} /d "My New VHD Option"

Be sure to update the label to something to help you identify the VHD – this label will appear on the boot menu when you reboot.

image

Note the new GUID that has been created.

3) Using the GUID created for you in the previous step and the location of the VHD, run the following three commands, one after the other

bcdedit /set {23dd42c1-f397-11e1-9602-923139648459} device vhd=[C:]\VHD\AdatisBI.vhd
bcdedit /set {23dd42c1-f397-11e1-9602-923139648459} osdevice vhd=[C:]\VHD\AdatisBI.vhd
bcdedit /set {23dd42c1-f397-11e1-9602-923139648459} detecthal on

Note the square brackets around the drive letter - these are required. If you have spaces in your path or filename, you’ll need to wrap the path, excluding the drive letter, in quotes, e.g.

..vhd=[C:]"\VHD Path\AdatisBI.vhd"

[image]

That’s all there is to it. Reboot and you should be presented with a new Boot option and away you go.

Troubleshooting

When it doesn’t work you generally get a BSOD on boot up. To date I’ve identified two reasons for this:

1) You don’t have enough disk space for the VHD to expand (The BSOD actually does inform you of this)

2) You may need to change the SATA Mode configuration in the BIOS. Depending on how and where the VHD was created you may need to change the setting to either ATA or AHCI. If that works, you’ll have to reverse the change to reboot into your physical installation.

I’ve yet to create a portable (i.e. sharable amongst colleagues) VHD for Windows 8. I have successfully created a Windows 8 VHD, but it currently only works on the laptop it was created on, which is unlike any other VHD I have created in the past. If I work out a fix, I will update this post.

Additional Information

There are a couple of extra benefits that are worth pointing out.

1) Once you’ve booted to VHD, your original, physical OS installation drive is reallocated, normally to drive D (Your VHD will assume C drive). This allows you to share files between environments, or as I do, place my SkyDrive folder on an accessible location on the original, physical drive. This allows me to have SkyDrive installed on VHDs but only have a single copy of the contents on my HDD.

2) The reverse is true too. You can attach a VHD (from the physical install, or from within another VHD) using the Disk Management tool to access, move or copy files between environments. The disk is expanded at this point so you will need enough disk space to accommodate it.

3) If disk space is a premium, you can shrink the VHD using a tool such as VHD Resizer. It doesn’t resize the physical VHD file, but allows you to reduce the size of the virtual hard disk. It also allows you to convert from Fixed to Dynamic disks and vice-versa.

4) You can remove boot entries with the following (or you can use the System Configuration tool):

bcdedit /delete {GUID}

5) I have found this approach so reliable my day-to-day Windows 7 installation is a VHD. I have not noticed any impact to performance. The only thing that I have noticed is that you cannot determine a “Windows Experience Index” when running a VHD – but I can live with that :)

SQL Server 2012 : Columnstore Index in action

One of the new SQL Server 2012 data warehouse features is the Columnstore index. It stores data by columns instead of by rows, similar to a column-oriented DBMS like the Vertica Analytic Database, and Microsoft claims it can increase query performance by hundreds to thousands of times.

The issue with indexes in a data warehouse environment is the number and broad range of questions the warehouse may have to answer. This means you either introduce a large number of large indexes (in many cases resulting in more index than actual data), plump for a costly, spindle-rich hardware infrastructure, or opt for a balanced hardware and software solution - such as a Microsoft SQL Server 2008 R2 Fast Track Data Warehouse or an HP Business Data Warehouse Appliance - where the approach is ‘index-light’ and you rely on a combination of high throughput and processing power to reduce the dependency on the traditional index.

The Columnstore index is different: when applied correctly, a broad range of questions can benefit from a single Columnstore index. The index is compressed (using the same VertiPaq technology that PowerPivot and Tabular-based Analysis Services share), reducing the work required of the expensive, slow disk subsystem and shifting it onto the fast, lower-cost memory/processor combination.

In order to test the claims of the Columnstore index I’ve performed some testing on a Hyper-V instance of SQL Server 2012 “Denali” CTP3 using a blown up version of the AdventureWorksDWDenali sample database. I’ve increased the FactResellerSales table from approximately 61,000 records to approximately 15.5 million records and removed all existing indexes to give me a simple, but reasonably large ‘heap’.

Heap
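
By way of illustration, clearing the cache before each run can be done with something like this (a sketch - only on a test instance):

-- Flush dirty pages, then drop clean buffers so the next query reads from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;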

With a clear cache, run the following simple aggregation:

SELECT
    SalesTerritoryKey
    ,SUM(SalesAmount) AS SalesAmount
FROM
    [AdventureWorksDWDenali].[dbo].[FactResellerSales]
GROUP BY
    SalesTerritoryKey
ORDER BY
    SalesTerritoryKey

[image]

Table 'FactResellerSales'. Scan count 5, logical reads 457665, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 7641 ms, elapsed time = 43718 ms

image

Non-Clustered Index

Before jumping straight in with a columnstore index, let’s review performance using a traditional index. I tried a variety of combinations; the fastest I could get this query to go was by simply adding the following:

CREATE NONCLUSTERED INDEX [IX_SalesTerritoryKey] ON [dbo].[FactResellerSales]
(
    [SalesTerritoryKey] ASC
)
INCLUDE ([SalesAmount]) WITH
(
    PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
    DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
    ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100, DATA_COMPRESSION = PAGE
) ON [PRIMARY]
GO

Notice I have compressed the index using page compression, which significantly reduced the number of pages the data consumed. The IO stats when I re-ran the same query (on a clear cache) looked like this:

Table 'FactResellerSales'. Scan count 5, logical reads 26928, physical reads 0, read-ahead reads 26816, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 6170 ms, elapsed time = 5201 ms.

image

Much better! Approximately 6% of the original logical reads were required, resulting in a query response time of just over 5 seconds. Remember though, this new index will really only answer this specific question. If we change the query, performance is likely to fall off the cliff and revert back to the table scan.

Incidentally, adopting an index-light (no index) approach and simply compressing (and reloading, to remove fragmentation) the underlying table itself, performance was only nominally slower than with the indexed table, with the added advantage of helping a large number of different queries by effectively speeding up the table scan. (Partitioning the table can help with this approach too.)
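
For illustration, that table-level page compression is a one-liner along these lines (a sketch; the rebuild also removes fragmentation from the heap):

-- Page-compress (and rebuild) the underlying table instead of adding indexes
ALTER TABLE [dbo].[FactResellerSales] REBUILD WITH (DATA_COMPRESSION = PAGE);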

Columnstore Index

Okay, time to bring out the columnstore. The recommendation is to add all columns into the columnstore index (‘include’ columns are not supported); in practice there may be a few cases where you exclude some columns. Metadata or system columns that are unlikely to be used in true analysis are good candidates to leave out of the columnstore. However, in this instance, I am including all columns:

CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_Columnstore] ON [dbo].[FactResellerSales]
(
    [ProductKey],
    [OrderDateKey],
    [DueDateKey],
    [ShipDateKey],
    [ResellerKey],
    [EmployeeKey],
    [PromotionKey],
    [CurrencyKey],
    [SalesTerritoryKey],
    [SalesOrderNumber],
    [SalesOrderLineNumber],
    [RevisionNumber],
    [OrderQuantity],
    [UnitPrice],
    [ExtendedAmount],
    [UnitPriceDiscountPct],
    [DiscountAmount],
    [ProductStandardCost],
    [TotalProductCost],
    [SalesAmount],
    [TaxAmt],
    [Freight],
    [CarrierTrackingNumber],
    [CustomerPONumber],
    [OrderDate],
    [DueDate],
    [ShipDate]
)WITH (DROP_EXISTING = OFF) ON [PRIMARY]

Now when I run the query on a clear cache:

Table 'FactResellerSales_V2'. Scan count 4, logical reads 2207, physical reads 18, read-ahead reads 3988, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 235 ms, elapsed time = 327 ms.

image

I think the figures speak for themselves ! Sub-second response and because all columns are part of the index, a broad range of questions can be satisfied by this single index.

Storage

The traditional (compressed) non-clustered index takes up around 208 MB whereas the Columnstore index comes in a little less at 194 MB - so we gain both speed and storage efficiency, a benefit further compounded when you take into account the additional indexes the warehouse might otherwise require.
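
If you want to check the sizes yourself, a query along these lines against the DMVs does the job (a sketch, and not necessarily how the figures above were captured):

-- Approximate size of each index on the fact table, in MB
SELECT i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.[object_id] = ps.[object_id] AND i.index_id = ps.index_id
WHERE ps.[object_id] = OBJECT_ID('dbo.FactResellerSales')
GROUP BY i.name;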

So, the downsides? Columnstore indexes render the table read-only. In order to update the table you either need to drop and re-create the index or employ a partition switching approach. The other notable disadvantage, consistently witnessed during my tests, is that the columnstore index takes longer to build. The traditional non-clustered index took approximately 21 seconds to build whereas the columnstore took approximately 1 minute 49 seconds. Remember though, you only need one columnstore index to satisfy many queries, so that’s potentially not a fair comparison.
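
The drop-and-recreate route can also be expressed as disabling and rebuilding the index around the load; a rough sketch:

-- Disable the columnstore index, load the data, then rebuild it
ALTER INDEX [IX_Columnstore] ON [dbo].[FactResellerSales] DISABLE;
-- ... perform the data load here ...
ALTER INDEX [IX_Columnstore] ON [dbo].[FactResellerSales] REBUILD;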

Troubleshooting

If you don’t notice a huge difference between a table scan and a Columnstore Index Scan, check the Actual Execution Mode of the Columnstore Index Scan. This should be set to Batch, not Row.

image

image

If the Actual Execution Mode is reporting Row then your query cannot run in parallel:

- Ensure, if running via Hyper-V, you have assigned more than one processor to the image.
- Ensure the Server Property ‘Max Degree of Parallelism’ is not set to 1 (see the snippet below).
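
If MAXDOP turns out to be the culprit, it can be checked and reset with something like this sketch (0 lets SQL Server decide):

-- Inspect and, if necessary, reset max degree of parallelism
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;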

Summary

In summary, for warehousing workloads, a columnstore index is a great addition to the database engine with significant performance improvements even on reasonably small data sets. It will re-define the ‘index-light’ approach that the SQL Server Fast Track Data Warehouse methodology champions and help simplify warehouse based performance tuning activities. Will it work in every scenario? I very much doubt it, but it’s a good place to start until we get to experience it live in the field.

SQL Server 2012 Licensing

Today saw the announcement of how SQL Server 2012 will be carved up and licensed, and it's changed quite a bit. There are three key changes:

1) There's a new Business Intelligence Edition that sits between Standard and Enterprise
2) No more processor licensing. There's a move to core-based licensing instead (with a minimum of 4 core licences per processor)
3) Enterprise is only available on the Core licensing model (Unless upgrading through Software Assurance *)

Enterprise, as you would expect, has all the functionality SQL Server 2012 has to offer.

The Business Intelligence edition strips away:
- Advanced Security (advanced auditing, transparent data encryption)
- Data Warehousing (ColumnStore, compression, partitioning)
and provides a cut-down, basic (as opposed to advanced) level of High Availability (AlwaysOn).

In addition, the Standard Edition removes:
- Enterprise data management (Data Quality Services, Master Data Services)
- Self-Service Business Intelligence (Power View, PowerPivot for SPS)
- Corporate Business Intelligence (Semantic model, advanced analytics)

If you are utilising 4 core processors, licence costs for Standard ($1,793 per core, or $898 per Server + $209 per CAL) and Enterprise ($6,874 per core) remain similar (ish).  However, you will be stung if you have more cores. The Business Intelligence edition is only available via a Server + CAL licence model, and it's apparent that Microsoft is placing a big bet on MDS/DQS, Power View, PowerPivot for SharePoint and BISM: the licence for the Business Intelligence edition is $8,592 per server plus $209 per CAL - nearly 10x more per server than Standard Edition!

For the complete low-down check out these links:

Editions Overview:
http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx

Licensing Overview:
http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

Licence Detail (including costs):
http://download.microsoft.com/download/D/A/D/DADBE8BD-D5C7-4417-9527-5E9A717D8E84/SQLServer2012_Licensing_Datasheet_Nov2011.docx

* If you are currently running Enterprise as a Server + CAL and you upgrade to SQL 2012 through Software Assurance, you can keep the Server + CAL model, providing you don’t exceed 20 cores.

Microsoft Tech-Ed 2010 BI Conference, New Orleans Day 2 (Tuesday 8th June 2010)

Day 2 and the BI Keynote.

Announcements? Only two, and actually both old news:

- They announced the availability of the MS BI Indexing Connector, originally announced back in May.

- They got their story straight(er) with regard to the release of what will be called Pivot Viewer Extensions for Reporting Services. It will be available in 30 days.

The session took more of a “look where we’ve come since the Seattle BI Conference” and, as Ted Kummert described, it’s Microsoft’s BI [School] Report Card.

Interesting change in semantics for their BI strap line; no longer do they spout “BI for the Masses”, now it’s “BI for Everyone”. Although they admitted that they, along with the rest of the industry, are falling well short, with a current average ‘reach’ of only 20%.

With the recent delivery of SQL Server 2008 R2, SharePoint 2010 and Office 2010, the BI integration story is significantly more complete.

A large focus on PowerPivot and how it has helped customers quickly deliver fast, available reporting ‘applications’. Although I know a few people that would object to simply describing DAX purely as a familiar extension to the Excel formula engine.

Following the look back, a brief look forward:

- Cloud Computing will play a part; Reporting and Analytics will be coming, and when combined with Windows AppFabric, described yesterday, this is a closer reality.

- Consumerisation enhancements: with better search and improved social media integration, BI will move towards becoming a utility.

- Compliance: several plans, including improved Data Quality, Data Cleansing and Machine Learning, plus strong metadata strategy support to deliver lineage and provide change impact analysis.

- Data Volumes. SQL Server Parallel Data Warehouse Edition has completed CTP2; this will open up high performance data warehousing to data volumes that exceed 100TB. Dallas, the data marketplace, will be better integrated with development and reporting tools.

Then they tempted us with some previews of what *could* make a future version of SQL Server. Essentially, the theme for the future is to join the dots between Self Service BI and the Enterprise BI Platform, focussing on plans around PowerPivot:

- KPI creation

Essentially they are exposing (yet another) way to create (SSAS based) KPI’s through a neat, slider based GUI directly from within the PowerPivot Client.

- Wide Table Support

To help with cumbersome wide PowerPivot tables, they have introduced a ‘Record View’ to help see all the fields on one screen, all appropriately grouped with edit/add/delete support for new fields, calculations etc.

- Multi Developer Support

They plan to integrate the PowerPivot client into BIDS. This will facilitate integration with Visual SourceSafe for controlled multi-developer support; they also plan to provide a lineage visualisation to help with audit and change impact analysis.

- Data Volumes

Following on from the BIDS integration, there are plans surrounding deployment to server-based versions of SSAS to allow increased performance for higher data volumes. They replayed the demo of the 2m row data set from Seattle where we first saw almost instant sort and filtering, but this time applied it (with equally impressive performance) to a data set of more than 2bn records!  It was described by Amir Netz as “The engine of the devil!” ;)

Microsoft Tech-Ed 2010 BI Conference, New Orleans Day 1 (Monday 7th June 2010)

The Tech-Ed 2010 Conference kicked off today with the Keynote session.  The BI Keynote session is tomorrow but today's keynote did incorporate a small BI Element.  No huge announcements, but some announcements all the same.

- Unsurprisingly, Cloud computing dominated the keynote, highlighting integration of Cloud apps & data with on-premise data (e.g. Active Directory and business operational systems data) to demonstrate "real-world" cloud computing solutions.

- July will see a release of Service Pack 1 for Windows 7 and Windows Server 2008 R2

- Windows Server AppFabric, application role extensions to, for example, facilitate Cloud to on-premise integration capability, is now RTM

- Windows Intune, Cloud based PC management environment

- No date set, but Internet Explorer 9 will focus on performance (graphics acceleration) and new web standards, and is probably a response to Google's speedy Chrome claims

- The Microsoft Live Labs "Pivot" research project is to hit the mainstream.  They were a little cagey around dates, but possibly this month.

Maybe some more BI specific announcements tomorrow...


Creating a Custom Gemini/PowerPivot Data Feed – Method 1 – ADO.NET Data Services

There are already a few good Gemini/PowerPivot blogs that provide an introduction to what it is and does, so there is no need for repetition.  What I haven’t seen are examples of how existing investments can be harnessed for Gemini/PowerPivot based self-service analytics.

This series of posts focuses on various ways of creating Custom Data Feeds that can be used by Gemini/PowerPivot natively – Providing a direct feed from otherwise closed systems opens up new channels of analytics to the end user.

Gemini/PowerPivot supports reading data from Atom-based data feeds, so this post looks at a quick way of creating an Atom-based feed that can be consumed by Gemini/PowerPivot.  By far the simplest way to develop an Atom-based data feed is to employ ADO.NET Data Services in conjunction with the ADO.NET Entity Framework.  With very few (in fact one and a bit!) lines of code, a data source can be exposed as a feed that Gemini/PowerPivot can read natively. 

I am going to use the AdventureWorksDW sample hosted by a SQL Server 2008 R2 instance for this – obviously Gemini/PowerPivot natively reads SQL Server databases, so creating a custom feed over the top may seem a little pointless.  However, this technique may be useful for quick wins in several scenarios, including:

- Preventing the need for users to connect directly to the underlying data source.
- Restricting access to various elements of the data source (tables/columns etc)
- Applying simple business logic to raw data.

ADO.NET Data Services are a form of Windows Communication Foundation (WCF) services, and therefore can be hosted in various environments.  Here, I will simply host the ADO.NET Data Service inside an ASP.NET site.

To create a Native Gemini/PowerPivot feed, you take seven steps:

1 - Create ASP.NET Web Application
2 - Create Entity Data Model
3 - Create the Schema
4 - Create the Data Service
5 - Load From Data Feed
6 - Create Relationships
7 - Test

Step 1) Create ASP.NET Web Application

I’m using Visual Studio 2008 here to create an ASP.NET Web Application.

image

Step 2) Create Entity Data Model

Add an ADO.NET Entity Data Model item to the project, these files have a .edmx extension and allow us to create a schema that maps to the underlying database objects.

image

Step 3) Create the Schema

We simply require a 1:1 mapping so will ‘Generate from Database’.  Incidentally, the ‘Empty Model’ option allows you to build a conceptual model of the database resulting in custom classes that can be optionally mapped to the database objects later.

image

Create a Microsoft SQL Server connection to AdventureWorksDW2008.

image

Select the appropriate database objects, I’ve selected the following tables:

- DimCurrency
- DimCustomer
- DimDate
- DimProduct
- DimPromotion
- DimSalesTerritory
- FactInternetSales

image

Once the wizard has completed, a new .edmx file and associated .cs file are created, respectively containing an Entity Relationship Diagram and a set of auto-generated classes that represent the database objects.

Due to the way the Entity Framework handles Foreign Key Constraints we have to apply a workaround to ensure the Foreign Keys on the FactInternetSales table are exposed and brought into Gemini/PowerPivot.  A previous post Exposing Foreign Keys as Properties through ADO.NET Entity Framework walks through the workaround.

 image

image 

Step 4) Create the Data Service

Add an ADO.NET Data Service item to the project.

image

The service class inherits from a generic version of the System.Data.Services.DataService object, so we need to inform the compiler what class to base the generic object on.  We essentially want to base our Data Service on the class representing our newly created Entity Data Model.  The class name is derived from the database name, unless changed when the Entity Data Model was created, so in our case the class name is AdventureWorksDW2008Entities.

The auto generated service class contains a ‘TODO’ comment that asks you to ‘put your data source class name here’.  The comment needs replacing with AdventureWorksDW2008Entities.

The final step is to expose the resources in the Entity Data Model.  For security reasons, a data service does not expose any resources by default.  Resources need to be explicitly enabled.

To allow read-only access to the resources in the Entity Data Model, the InitializeService method needs updating with a single line of code.  The code snippet below details the final class implementation; notice the AdventureWorksDW2008Entities reference at line 1 and the explicit resource enablement at line 6.

Code Snippet
  1. public class GeminiDataService : DataService<AdventureWorksDW2008Entities>
  2.     {
  3.         // This method is called only once to initialize service-wide policies.
  4.         public static void InitializeService(IDataServiceConfiguration config)
  5.         {
  6.             config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
  7.         }
  8.     }

That’s all that’s needed. By default, ADO.NET Data Services conform to the Atom standard, so in theory the service is ready to be consumed by Gemini/PowerPivot.

Before we try, it’s worth giving the service a quick test. Building and running the solution (F5) launches Internet Explorer, navigating to the service hosted by the ASP.NET Development Server.

image

You are first presented with an XML document containing elements that represent database objects; you can drill further into the objects by amending the URL.  For example, if you want to see the contents of the DimPromotion table then append DimPromotion to the end of the URL: http://localhost:56867/GeminiDataService.svc/DimPromotion (Case sensitive)

Note:  You may need to turn off Feed Reader View in Internet Explorer to see the raw XML (Tools->Internet Options–>Content->Settings–>Turn On Feed Reader View – make sure this is unchecked)

image

As a slight aside, the URL can be further enhanced to filter, return the top n rows, extract certain properties, and so on. Here are a couple of examples:

- http://localhost:56867/GeminiDataService.svc/DimCustomer?$top=5 – Returns the top 5 Customers
- http://localhost:56867/GeminiDataService.svc/DimCustomer(11002) – Returns Customer with id 11002
- http://localhost:56867/GeminiDataService.svc/DimCustomer(11002)/FirstName – Returns the First Name of Customer 11002
- http://localhost:56867/GeminiDataService.svc/DimProduct(310)?$expand=FactInternetSales – Returns Product with id 310 and all related Internet Sales records

Confident that the feed is working, we can now deploy the service, and start using the feed in Gemini/PowerPivot. 

Step 5) Load From Data Feed

Open up Excel 2010, launch the Gemini/PowerPivot Client (by selecting ‘Load & Prepare Data’)

image

Select ‘From Data Feed’ from the ‘Get External Data’ section of the Gemini/PowerPivot Home Ribbon to launch the Table Import Wizard.

image

Specify the Url from the ADO.NET Data Services feed created earlier, in my case: http://localhost:56867/GeminiDataService.svc as the 'Data Feed Url’ and click Next.

Incidentally, you can use most of the enhanced URLs to, for example, select only the DimProduct table should you so wish; however, by specifying the root URL for the service you have access to all objects exposed by the service.

image

From the Table Import Wizard, select the required tables; in my case I’ll select them all.  (You can optionally rename and filter the feed objects here too.)

Following the summary screen, the Gemini/PowerPivot Client then gets to work importing the data from the ADO.NET Data Service:

image

Once completed, Gemini/PowerPivot displays all the data from all of the feed objects as if it came directly from the underlying database.

image

Step 6) Create Relationships

There is one final step before we can test our model using an Excel Pivot Table: we need to create the relationships between the tables we have imported.  The Gemini/PowerPivot Client provides a simple, if a little onerous, way of creating relationships; the ‘Create Relationship’ action on the Relationships section of the Home Ribbon launches the Create Relationship wizard:

image

Each table needs relating back to the primary Fact table which results in the following relationships:

image

Step 7) Test

We are now ready to start our analysis. Selecting PivotTable from the View section of the Gemini/PowerPivot Client Home ribbon creates a pivot table in the underlying Excel workbook, attached to your custom-fed Gemini/PowerPivot data model.

image


image

So, to allow fast access to, for example, potentially sensitive data, through Gemini/PowerPivot you can quickly build a custom data feed that can be consumed natively by the Gemini/PowerPivot Client data feed functionality.