Martyn Bullerwell's Blog

SQL Server Delete does not release file space allocation

This one’s a short blog post, and by no means new, but it may help you in the future and save some head scratching. In Data Warehousing projects you will commonly be processing data through tables whilst staging and cleaning source data. These tables are often heaps (no clustered index) that hold no persistent data; rather, they are a transient set of tables that get deleted from each day when your Warehouse loads.

What you may not know is that when a heap is deleted from (as opposed to truncated), the space that was used by that data is not automatically de-allocated from the table, and when new data is added the table's space allocation grows further, even though the table may be “empty”.

So what is actually going on? When deleting from a heap, the database engine locks either the page or the row, as opposed to the whole table when deleting against a clustered index. During the delete on the heap the pages are emptied, but the engine does not de-allocate them. This behaviour only occurs on heaps, and it can leave you with very large tables containing no data. On my recent project it caused the stage and clean processes to slow down gradually over time, because the database engine was scanning huge tables with no data in them.
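To see this for yourself, here is a minimal repro sketch; the dbo.StageHeap table is made up for illustration, and exact figures will vary (note that a single very large DELETE can escalate to a table lock, which releases the pages anyway):

-- Build a dummy heap (no clustered index) and fill it with wide rows.
CREATE TABLE dbo.StageHeap (Id INT NOT NULL, Payload CHAR(4000) NOT NULL);

INSERT INTO dbo.StageHeap (Id, Payload)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

EXEC sp_spaceused 'dbo.StageHeap';   -- note the reserved and data figures

DELETE FROM dbo.StageHeap WHERE Id > 0;   -- row/page locks only, no TABLOCK

EXEC sp_spaceused 'dbo.StageHeap';   -- 0 rows, but pages typically remain allocated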

So, what can we do to stop this behaviour? There are three options, as documented by Microsoft here (see Locking Behaviour):

1. When deleting from the table, obtain an exclusive lock on the whole table by using a TABLOCK hint, as follows:

     DELETE FROM [TABLE] WITH (TABLOCK) WHERE [COLUMN] = 'CRITERIA'

2. If possible, truncate the whole table; this will de-allocate all pages.

3. Finally, my least favourite, add a clustered index to the table.
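Carrying on the hypothetical sketch from above, you can verify that options 1 and 2 release the space (re-run the INSERT first to re-populate the heap):

-- Option 1: the TABLOCK hint takes a table-level lock, so empty pages are de-allocated.
DELETE FROM dbo.StageHeap WITH (TABLOCK) WHERE Id > 0;
EXEC sp_spaceused 'dbo.StageHeap';

-- Option 2: TRUNCATE always de-allocates the pages.
TRUNCATE TABLE dbo.StageHeap;
EXEC sp_spaceused 'dbo.StageHeap';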

 

RANKX–Analysis Services 2012 Tabular

RANKX was a new function made available in DAX at the launch of SQL Server 2012; ranking is a widely used function in most BI projects, so it was very much anticipated. Having worked extensively with Tabular on a recent project, we had the need to rank over large sets of data, with models based on billions of rows. Whilst we were extremely impressed with the performance of Tabular, we found something not quite right with RANKX. Suppose we had data in the following very simple format and wanted to perform a simple rank over it.

CountryKey | Date       | Units | SalesValue
-----------|------------|-------|------------
53         | 24/01/2012 | 62    | 81.290619
156        | 03/01/2012 | 3575  | 5186.565208
48         | 07/05/2012 | 321   | 362.836524
157        | 16/04/2012 | 7     | 8.005637
134        | 19/08/2012 | 20    | 44.12
232        | 29/07/2012 | 40    | 49.313156
178        | 05/03/2012 | 47    | 61.25
63         | 26/03/2012 | 51    | 73.369509
153        | 17/06/2012 | 3061  | 6026.4
31         | 08/07/2012 | 3622  | 5005.038257
140        | 13/02/2012 | 9870  | 7713.968564
14         | 27/05/2012 | 29220 | 46728.26302
98         | 20/08/2012 | 297   | 377.653459
131        | 30/07/2012 | 172   | 182.591931
48         | 27/03/2012 | 348   | 411.801792
157        | 06/03/2012 | 3     | 8.01422

In order to rank the above data by country over all time, we would probably write a simple DAX query as follows:

DEFINE
  MEASURE FactSales[UnitCount]= SUM(FactSales[Units])
  MEASURE FactSales[Rank]= RANKX(ALL(FactSales[CountryKey]), FactSales[UnitCount])
EVALUATE
  SUMMARIZE(
    FactSales
   ,FactSales[CountryKey]
   ,"ProductSalesUnits",FactSales[UnitCount]
   ,"Rank", FactSales[Rank]
  )
ORDER BY
  FactSales[UnitCount] DESC

You can see from the query above that, to avoid any doubt, we rank over a single table only and have no joins to other tables at all. This works fine, resulting in the data below.

FactSales[CountryKey] | [ProductSalesUnits] | [Rank]
----------------------|---------------------|-------
227                   | 91818113            | 1
226                   | 21487836            | 2
39                    | 9033389             | 3
14                    | 7597054             | 4
82                    | 4656585             | 5
109                   | 3678302             | 6
75                    | 3371681             | 7
140                   | 2511239             | 8
107                   | 1850520             | 9
208                   | 1443241             | 10
31                    | 1092047             | 11
153                   | 997860              | 12
201                   | 914827              | 13
59                    | 870366              | 14
163                   | 840927              | 15

Now assume we rank over a lot more data and wish to apply a simple CALCULATETABLE in order to filter some data out. We might write a DAX query as follows:

DEFINE
  MEASURE FactSales[UnitCount]= SUM(FactSales[Units])
  MEASURE FactSales[Rank]= RANKX(ALL(FactSales[CountryKey]), FactSales[UnitCount])
EVALUATE
  CALCULATETABLE (
    SUMMARIZE(
      FactSales
     ,FactSales[CountryKey]
     ,"ProductSalesUnits",FactSales[UnitCount]
     ,"Rank", FactSales[Rank]
    )
  ,DATESBETWEEN('Date'[CalDate], DATE(2012,1,1), DATE(2012,8,31))
  )
ORDER BY
  FactSales[UnitCount] DESC

In this instance we are joining to a Date dimension, but that is it. The above query yields the result below.

FactSales[CountryKey] | [ProductSalesUnits] | [Rank]
----------------------|---------------------|-------
227                   | 83147875            | 1
226                   | 19446567            | 2
39                    | 8137335             | 3
14                    | 6769390             | 4
82                    | 4100168             | 5
109                   | 3321496             | 6
75                    | 3010890             | 7
140                   | 2165554             | 8
107                   | 1655472             | 9
208                   | 1274740             | 10
31                    | 987122              | 11
153                   | 908938              | 12
201                   | 853322              | 13
59                    | 771677              | 14

That works; we get the results we would expect within seconds. Now, instead of ranking over an integer field, let's apply the RANKX function to a real number. In this example we get a worrying result set, using the DAX as follows:

DEFINE
  MEASURE FactSales[SalesValue]= SUM(FactSales[Sales Value])
  MEASURE FactSales[Rank]= RANKX(ALL(FactSales[CountryKey]), FactSales[SalesValue])
EVALUATE
  CALCULATETABLE (
    SUMMARIZE(
      FactSales
     ,FactSales[CountryKey]
     ,"ProductSalesValue",FactSales[SalesValue]
     ,"Rank", FactSales[Rank]
    )
  ,DATESBETWEEN('Date'[CalDate], DATE(2012,1,1), DATE(2012,8,31))
  )
ORDER BY
  FactSales[SalesValue] DESC

Which outputs:

FactSales[CountryKey] | [ProductSalesValue] | [Rank]
----------------------|---------------------|-------
227                   | 84074007.25         | 1
226                   | 29928143.25         | 3
14                    | 10859628.74         | 4
39                    | 8451588.111         | 4
109                   | 7964922.769         | 6
82                    | 6254219.85          | 6
75                    | 4730390.37          | 7
107                   | 2466064.97          | 9
208                   | 1904009.18          | 10
140                   | 1862708.961         | 11
153                   | 1311217.35          | 11
22                    | 1207366.72          | 13
59                    | 1182179.95          | 15

Now let's be clear: all we have done is change the measure from an integer to a float; the rest of the data is the same. You will notice that there are ties in the data that should not be there. Having scratched our heads for hours, rebuilt the model, rewritten the DAX, and had a number of colleagues check it over, we found that when no CALCULATETABLE is applied we get the correct answer again, as follows:

DEFINE
  MEASURE FactSales[SalesValue]= SUM(FactSales[Sales Value])
  MEASURE FactSales[Rank]= RANKX(ALL(FactSales[CountryKey]), FactSales[SalesValue])
EVALUATE
  SUMMARIZE(
    FactSales
   ,FactSales[CountryKey]
   ,"ProductSalesValue",FactSales[SalesValue]
   ,"Rank", FactSales[Rank]
  )
ORDER BY
  FactSales[SalesValue] DESC

FactSales[CountryKey] | [ProductSalesValue] | [Rank]
----------------------|---------------------|-------
227                   | 92885561.31         | 1
226                   | 33237033.3          | 2
14                    | 12253005.68         | 3
39                    | 9414266.358         | 4
109                   | 8928147.606         | 5
82                    | 7120811.54          | 6
75                    | 5296490.13          | 7
107                   | 2756500.54          | 8
208                   | 2146627.18          | 9
140                   | 2115750.609         | 10
153                   | 1479858.53          | 11
22                    | 1339255.82          | 12
59                    | 1324799.98          | 13
105                   | 1320651.83          | 14
31                    | 1277065.779         | 15

So what we have learnt here is that RANKX seems to give the wrong answer, but only when ranking over real numbers, and only when we filter the data set in some way using CALCULATETABLE. Not being able to find a clear reason for this behaviour, we eventually gave in and raised it with Microsoft. Having spent a week or so working it through with the great support team at Microsoft, it seems that this is a current “feature” of RANKX. It is believed to be a floating-point arithmetic issue, driven by how floating-point numbers are stored. It is documented here:

http://support.microsoft.com/kb/78113/EN-US

There is also some further information documented here:

http://technet.microsoft.com/en-us/library/gg492146.aspx

http://www.microsoft.com/en-us/download/details.aspx?id=4106
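You can see the underlying storage issue for yourself with a classic check. This little sketch has nothing to do with RANKX itself; it just illustrates that IEEE floats cannot represent simple decimals exactly:

-- 0.1, 0.2 and 0.3 have no exact binary floating-point representation,
-- so this seemingly obvious comparison returns 'not equal'.
SELECT CASE WHEN 0.1E0 + 0.2E0 = 0.3E0
            THEN 'equal'
            ELSE 'not equal'
       END AS FloatComparison;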

I have been told by Microsoft that this is the same as in Excel and Analysis Services; however, that still does not explain why the CALCULATETABLE makes a difference. I am still working with Microsoft to see if we can get to the bottom of it. However, if precision beyond 15 significant figures within the rank is not important to you (that covers values up to 99 billion at 2 decimal places), this issue is very easily worked around. Take the following DAX query:

DEFINE
  MEASURE FactSales[SalesValue]= SUM(FactSales[Sales Value])
  MEASURE FactSales[Rank]= RANKX(ALL(FactSales[CountryKey]), ROUND(FactSales[SalesValue],2))
EVALUATE
  CALCULATETABLE (
    SUMMARIZE(
      FactSales
     ,FactSales[CountryKey]
     ,"ProductSalesValue",FactSales[SalesValue]
     ,"Rank", FactSales[Rank]
    )
  ,DATESBETWEEN('Date'[CalDate], DATE(2012,1,1), DATE(2012,8,31))
  )
ORDER BY
  FactSales[SalesValue] DESC

Which now correctly outputs:

FactSales[CountryKey] | [ProductSalesValue] | [Rank]
----------------------|---------------------|-------
227                   | 84074007.25         | 1
226                   | 29928143.25         | 2
14                    | 10859628.74         | 3
39                    | 8451588.111         | 4
109                   | 7964922.769         | 5
82                    | 6254219.85          | 6
75                    | 4730390.37          | 7
107                   | 2466064.97          | 8
208                   | 1904009.18          | 9
140                   | 1862708.961         | 10
153                   | 1311217.35          | 11
22                    | 1207366.72          | 12
59                    | 1182179.95          | 14
105                   | 1196551.61          | 13
31                    | 1132926.109         | 15

You will notice that all I have done is ROUND the measure inside the RANKX function to 2 decimal places. This stops the calculation engine from ranking over an approximated value and forces it to rank on a decimal with fewer than 15 significant figures, which in turn fixes the ranking. Alternatively, rounding the values before they are loaded into the model will also work; just be sure to keep below 15 significant figures if possible.
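If you would rather do the rounding in the load itself, a sketch of the idea is below (the staging.FactSales source view is hypothetical):

-- Round at load time so the model never stores more precision than the rank needs.
SELECT CountryKey,
       [Date],
       Units,
       ROUND(SalesValue, 2) AS SalesValue
FROM   staging.FactSales;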

So, in summary, it seems that when using the RANKX function the calculation engine does not handle the float data type as we might expect. This can easily be rectified by losing some precision on the rank measure you define. We have been informed that using the CURRENCY function will also fix the problem. I hope this stops others going through the same pain. As and when I get any further information on this issue I will post it up.

 

SQL Server 2012 Running Totals

SQL Server has sometimes come under fire in the Oracle vs SQL Server debate because it lacks some of the more advanced (and less used) functionality outlined in the ANSI SQL standards. An example of this is window functions, which became part of the standard with the SQL:2003 revision. A window function is an aggregate function that can be applied to a subset of a full set of data. This can be achieved in earlier versions of SQL Server (2008 R2 and its predecessors), but not using window functions, and therefore with a performance implication. There are three or four approaches to achieving a running total in SQL Server prior to the 2012 version, none of them particularly elegant.

In the following example I will outline how to perform a running total using AdventureWorks2008 sample data. I will recap one of the methods (probably the most common) we used for running totals prior to SQL Server 2012, and then show how to do the same running total using a SQL Server 2012 window function. To set the scene, we will be looking for a running total of line items for a given order; this is the kind of query you might write to generate an invoice with a running total on it.

To begin with, let's look at the more traditional query:

SELECT
    A.SalesOrderID,
    A.SalesOrderDetailID,
    A.LineTotal,
    SUM(B.LineTotal) AS OrderRunningTotal
FROM
    Sales.SalesOrderDetail AS A
    CROSS JOIN Sales.SalesOrderDetail AS B
WHERE
    B.SalesOrderDetailID <= A.SalesOrderDetailID
    AND A.SalesOrderID = B.SalesOrderID
GROUP BY
    A.SalesOrderID,
    A.SalesOrderDetailID,
    A.LineTotal
ORDER BY
    A.SalesOrderID,
    A.SalesOrderDetailID,
    A.LineTotal

This simply works by self-joining each row to every preceding row within the same order and summing. It works fine for any data that can be ordered easily, as running-total data usually can, so this method usually suffices.

The following query uses the new window function support in SQL Server 2012:

SELECT
    SalesOrderID,
    SalesOrderDetailID,
    LineTotal,
    SUM(LineTotal)
        OVER (PARTITION BY SalesOrderID
              ORDER BY SalesOrderDetailID) AS OrderRunningTotal
FROM
    Sales.SalesOrderDetail
ORDER BY
    SalesOrderID,
    SalesOrderDetailID,
    LineTotal

You will instantly notice that the second query is far more elegant and simpler to understand. In short, we are telling SQL Server to sum the LineTotal over an ordered partition of the data. Both queries return the same result; however, it becomes interesting when we look at the execution plans relative to each other, shown below (apologies for the size of these):

[Image: the two execution plans compared, traditional query (top) and window function query (bottom)]

What is important to note is that the second (bottom) plan is smaller and simpler. The other very significant point is that the query cost (relative to the batch) is a whopping 97% for the old method of running totals, meaning that the new window functions are far more efficient.
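As an aside, SQL Server 2012 also lets you state the window frame explicitly. When the frame is omitted, as in the query above, the default is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW; a commonly suggested tweak is an explicit ROWS frame, which can be cheaper again, so the sketch below is worth benchmarking against the earlier version:

SELECT
    SalesOrderID,
    SalesOrderDetailID,
    LineTotal,
    SUM(LineTotal)
        OVER (PARTITION BY SalesOrderID
              ORDER BY SalesOrderDetailID
              ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS OrderRunningTotal
FROM
    Sales.SalesOrderDetail;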

In summary, I believe this is an example of SQL Server becoming a firm competitor to some of its perceived rivals. Running totals were one of the few valid points that could be raised in the SQL Server vs Oracle argument, but not any more!

Installing SharePoint 2007 on Windows Server 2008 R2

Obviously when we get a new development machine, we install the latest Microsoft software in order to test the latest applications. All in all, Windows Server 2008 R2 is basically the server version of Windows 7, so it shares many drivers, and many common issues, with its counterpart. It is therefore no surprise that SharePoint 2007 does not install from disk onto a Windows 7 or Windows Server 2008 R2 installation.

In order to install SharePoint, you need to create what is called a “slipstream” version, which in essence means rolling all the latest service packs up into the installation media; something I'm sure Microsoft will get round to releasing in due course. In the meantime you can follow these simple instructions.

I would advise burning the slipstreamed installation back to a disk, so you don't have to do it again in the future. You may still get a few compatibility issues, but it seems to install and run as expected.

PerformancePoint 2007 SP2 and SQL 2008 (Continued)

Following on from my previous post prior to Christmas, it looks like I have been beaten to it. For a pure installation of SQL 2008 and PPS, have a look here; it details the installation process to ensure a smooth installation. I have now tried this and installation works seamlessly, all set up on SQL 2008. Hopefully I will get a chance to continue testing.

PerformancePoint SP2 and SQL Server 2008

Well, given the break over Christmas, I have finally managed to get round to some testing. This has begun with getting PPS SP2 set up on our development environment, running solely on SQL Server 2008 and Windows Server 2008.

To test it fully I have removed everything from our test environment and started from scratch. Note that to install the latest version of SQL Server 2008 you must have Visual Studio SP1 and .NET 3.5 SP1 installed; if you do not have .NET 3.5 SP1 installed, the installation process will attempt to install it.

To install PerformancePoint you may need to use the SKIPREQCHECK setup option to ensure it will install. I installed all the planning components to ensure I could test all aspects fully. DO NOT run the configuration of any of these planning components until you have applied SP2.

Now install PPS SP2, ensuring you run all of the installers contained within the executable. Unfortunately the prerequisites check still does not seem to recognize later versions of the SQL components that are required. I had difficulty with ADOMD.NET: the check still requires version 9 SP2, and even after installing version 10 from the SQL Server 2008 feature pack it still didn't work. This is an issue, as ADOMD.NET is required for two of the main components of PerformancePoint Planning:

  • Planning Process Service
  • Planning Web Service

I found that the December 2008 release of the SQL Server 2005 Feature Pack did the trick (it can be found here), and allowed configuration of all four planning components.

Having installed PPS SP2, however, it still seems as though I cannot get PerformancePoint to connect to Analysis Services 2008. The connection information does not fill me with joy when it specifically mentions Analysis Services 2005.

It is Christmas, so I will do some more investigation and see if I can get past this issue before the new year. More to come soon.

Reporting Services Parameters and Oracle

Recently a colleague of mine needed to access Oracle to build a report, but did not have the rights to create a stored procedure, which would be the obvious method of choice. We therefore needed to pass parameters from Reporting Services to Oracle using a "Query Type" of text. Doing this with SQL Server is relatively intuitive.

Create a shared Data Source - I have used Adventure Works.

Then add a DataSet:

[Screenshots: creating the DataSet against the shared data source]

This query will return all records from the Adventure Works Person.Contact table, showing the following fields: FirstName, MiddleName, LastName and EmailAddress. If you wanted to restrict this list using a parameter, all you would need to do is add a parameter to the report.

[Screenshots: adding the report parameters]

In this instance I have added a parameter titled "EndsWith", so we can use a WHERE clause with any character or string to filter on the last characters of the FirstName. I have also added a parameter called "BeginsWith". Now you can edit the DataSet to add the WHERE clause as follows:

SELECT FirstName, MiddleName, LastName, EmailAddress
FROM   Person.Contact
WHERE  FirstName LIKE @BeginsWith + '%' + @EndsWith

Now this will work perfectly with SQL Server; however, should you wish to connect to Oracle, you will need to make a few simple, but not obvious, changes. These are as follows:

SELECT FirstName, MiddleName, LastName, EmailAddress
FROM   Person.Contact
WHERE  FirstName LIKE :BeginsWith || '%' || :EndsWith

So in short, all you have to do is change the @ sign to a : (colon), and switch the string concatenation from + to Oracle's || operator. Easy when you know how, a real pain when you don't!

This report will appear as follows whether connected to Oracle or SQL Server (as long as the equivalent objects exist in both databases).

 

[Screenshot: the finished report]

Adventure Works Samples Update and FILESTREAM

A bit late (they were released at the beginning of the month), but as a follow-up to a post I made earlier this year, there is a new release of the Adventure Works databases. These can now be found here. They have been simplified into just 4 downloads, which makes things a lot easier! The install is also a lot more straightforward, although for it to complete you must have Full-Text Search and FILESTREAM installed and enabled. It is good to see that FILESTREAM is being used already, and Adventure Works can be used to test and demonstrate this new functionality.

What is FILESTREAM?

Prior to SQL Server 2008, if you needed to store images, videos, PDFs, or any other binary data, you used a blob data type and the data was stored in-line with the relational data. SQL Server 2008 can now store blobs in its own private namespace on the file system, which simplifies the storage of blob data. The Adventure Works samples already use it to store instructions as Word documents (see the Production.Document table).

Enable FILESTREAM

Start -> All Programs -> Microsoft SQL Server 2008 -> Configuration Tools -> SQL Server Configuration Manager.

  • In the list of services, right-click SQL Server Services, and then click Open.

  • Locate your instance of SQL Server, right-click it, and click Properties.

  • Click the FILESTREAM tab.

  • Select the Enable FILESTREAM for Transact-SQL access check box.

  • If you want to read and write FILESTREAM data from Windows, click Enable FILESTREAM for file I/O streaming access. Enter the name of the Windows share in the Windows Share Name box.

  • If remote clients must access the FILESTREAM data that is stored on this share, select Allow remote clients to have streaming access to FILESTREAM data.

    [Screenshot: the FILESTREAM tab of the SQL Server instance properties dialog]

  • Click Apply.

  • In Query Editor, enter the following Transact-SQL code:

    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE

Using FILESTREAM Databases

  • Create a new database in SSMS:

    [Screenshot: the New Database dialog]

    • Under the "Filegroups" Tab add a new FILESTREAM:

    [Screenshot: the Filegroups tab with the new FILESTREAM filegroup]

  • Once set up, click OK. This should now have added a new folder under your SQL installation, as follows:

    [Screenshot: the new FILESTREAM data folder on disk]

From here we can see that we have our new FileStreamTestData directory, as well as the Documents folder that was added by the install of the SQL 2008 Adventure Works samples. So now we want to test the storage of some files in SQL; let's try to store some pictures, for example. To begin with we will need to set up a new table. It is worth noting here that we must add a ROWGUIDCOL column, a uniqueidentifier that is preferably the primary key.

CREATE TABLE [dbo].[Media] (
    [mediaId] [uniqueidentifier] NOT NULL ROWGUIDCOL PRIMARY KEY,  -- FILESTREAM requires a unique ROWGUIDCOL
    [dateCreated] [datetime] NOT NULL DEFAULT(GETDATE()),
    [fileName] [nvarchar](256) NOT NULL,
    [location] [geometry] NULL,
    [file] [varbinary](max) FILESTREAM);
GO

This will give us a new table that we can insert into. So let's insert a dummy row and keep an eye on the folders created for this new filestream. Execute the following statement:

INSERT INTO [FileStreamTest].[dbo].[Media]
    (
        [mediaId]
        ,[fileName]
        ,[file]
    )
VALUES
    (NEWID(),
    'NewFile.jpg',
    CONVERT(varbinary(max), 'Not a real Picture, but proves a point'))

You will now notice that a new file has been added to the file system.

[Screenshot: the new file in the FILESTREAM data directory]

The more rows we insert, the more files are added to this directory. You will also notice that the date/time stamps on the files reflect when the items were added to the database. If a row is updated with a new image, a new file is created; however, the old file remains. The same applies when a row is deleted. These directories are cleaned up periodically by the FILESTREAM garbage collector.
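As a quick sanity check, the dummy row (and its "file") can be read back with plain T-SQL; the PathName() method on a FILESTREAM column also exposes the logical path used for Win32 streaming access:

-- Read the blob back in-line, and show the FILESTREAM logical path.
SELECT mediaId,
       [fileName],
       CAST([file] AS varchar(max)) AS FileContents,  -- readable here because we inserted text
       [file].PathName() AS FilestreamPath
FROM   [FileStreamTest].[dbo].[Media];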

So, in short, it seems that this will become the way forward for storing all sorts of files in your database, centrally stored, managed and backed up. A simple application could easily be written to save, update and retrieve images through SQL Server 2008 using FILESTREAM data.


So where is Microsoft Master Data Management (MDM)?

Well, back in June 2007 Microsoft acquired Stratature, who had a product named +EDM. +EDM was a master data hub with a web UI, business rules, simple human workflow and notification, entity and hierarchy management, versioning, transaction logging, and an open subscription interface. Microsoft has not continued to release +EDM, but has continued support for existing customers. In November a technology preview was released to those who applied and were accepted onto the program. Since then we have seen a CTP on Connect, which Jamie explored here in a blog written back in March 2008. Microsoft has since acquired yet another company, Zoomix, which focuses on MDM data quality; information on this acquisition, and on what Zoomix is, can be found here. The hope is that all of this will be integrated into Microsoft MDM.

The official release plan was as follows:

  • Technology Preview: February 2008, to a select group of customers, with near-identical functionality to the Stratature release (actually released earlier than that, in November 2007).
  • TAP: program commenced in the first half of 2008.
  • CTP: first CTP release available Q3 2008.

The codename for Microsoft MDM is "Bulldog", and it seems that Microsoft MDM will ship with Microsoft SharePoint in the Office 14 release, which is currently due late 2009 or early 2010, around the same time as Windows 7. The codename for the subsequent release of the Microsoft MDM product is "Greenwich". Apparently major new features will be released in the "Greenwich" timeframe, including integration and analytical and transactional capabilities. But for the meantime, let's see how “Bulldog” does.

I will continue to keep an eye on MDM from here on in, although it might be a long wait until you all get your hands on it.

PerformancePoint Server SP2

A good few months ago I wrote about SP1 for PerformancePoint and getting it working on SQL Server 2008. It took a number of attempts and a great deal of hacking to get an environment set up, and even then it was not very stable and never fully tested. SP2 is due for release in December, and there are mentions of a beta available on the web, although it is unavailable through MSDN or Connect. It seems that the two promises of SP2 are Hyper-V compatibility and SQL Server 2008 integration.

A few good resources for this are as follows:

http://hmorgenstern.spaces.live.com/Blog/cns!28A6BE83102A0EB3!419.entry
http://performancepointing.blogspot.com/2008/11/performance-point-server-sp2-pre.html