Jonathon Eve-O'Connor's Blog

Continuous Integration With TeamCity – Automating SSIS Build and Deployment

In this blog article I will detail how I use TeamCity to automate the building and deployment of SSIS projects using the project deployment model. This is a little more complicated than a CI build for database projects or other project types, as MSBuild cannot build or deploy SSIS projects without a little extra work.


This blog assumes the reader has first followed or is familiar with the steps outlined in the previous blogs in this series. These are available from:

As per the last blog, Visual Studio should be installed on the build server, along with the Data Tools extensions for both Business Intelligence and Database projects. TeamCity will need to be configured and set up to connect to TFS.

Adding the .dll

There is a .dll released by a kind member of the community which we will use to enable MSBuild to build and deploy SSIS projects. The .dll is called Microsoft.SqlServer.IntegrationServices.Build; before you can enable CI for SSIS projects you will need to get the source and build it. The source is available as part of the ‘Microsoft SQL Server Community Samples: Integration Services’ project which is currently hosted at the following url:

Once you have downloaded the source from the link you need to open the project located in the relative path ‘\main\SSISMSBuild\Project\Microsoft.SqlServer.IntegrationServices.Build.csproj’

The project may build straight out of the box; however, depending on your environment you may need to amend the references. I found that because I am running SQL Server 2014 and Visual Studio 2013, a few of the references needed changes to get everything running smoothly. Once you have fixed the references you should see something like the below (no warnings):


I added a file for the strong name key (key.snk), set the configuration to Release and built the project by right clicking the solution and selecting ‘Build’. If all has gone well you should see something like the following:


Following this you need to take the new dll from the ‘\bin\Release’ folder. For the purposes of this blog I placed the file in ‘C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies’; however, you may wish to change this depending upon the environment you are working in and the Visual Studio version which is installed.

Configuring Project Settings

You need to add two xml files to your main project. This may not be required if you only have one SSIS project within your solution; however, for larger solutions I have found this provides a simple method of maintaining the solution and easily adding new projects for CI deployment.

SSISBuild.proj sample file


<?xml version="1.0" encoding="Windows-1252"?>
<Project xmlns="" DefaultTargets="SSISBuild">
  <UsingTask TaskName="DeploymentFileCompilerTask" AssemblyFile="C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\Microsoft.SqlServer.IntegrationServices.Build.dll" />
  <!--note that the path to the assembly file referenced above is the one we have just added; the path may need to be amended if running in a different environment.-->
  <ItemGroup>
    <SSISProjPath Include="./CI.SSAS/CI.SSIS.dtproj"/>
    <!--contains the path of the SSIS project to be built-->
  </ItemGroup>
  <!-- Add all additional SSIS projects in like this:
  <ItemGroup>
    <SSISProjPath Include="./CI.SSAS/CI.SSIS.dtproj"/>
  </ItemGroup>
  -->
  <Target Name="SSISBuild">
    <Message Text="**************Building SSIS project: @(SSISProjPath) for configuration: $(CONFIGURATION)**************" />
    <DeploymentFileCompilerTask
      InputProject="@(SSISProjPath)"
      Configuration="$(CONFIGURATION)"
      ProtectionLevel="DontSaveSensitive">
    </DeploymentFileCompilerTask>
  </Target>
</Project>
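Once checked in, this file can be exercised by invoking MSBuild against it directly, which is also a useful local test before wiring it into TeamCity. A sketch, assuming the VS2013/MSBuild 12.0 install paths used elsewhere in this post and an example configuration name:

```
"C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe" SSISBuild.proj /p:CONFIGURATION=Development
```

If the task and paths are correct you should see the Message output from the SSISBuild target followed by the creation of the .ispac file under the project’s bin folder.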


SSISDeploy.proj sample file


<?xml version="1.0" encoding="Windows-1252"?>
<Project xmlns="" DefaultTargets="SSISDeploy">
  <!--As before, the path below may need to be amended depending upon the environment-->
  <UsingTask TaskName="DeploymentFileCompilerTask" AssemblyFile="C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\Microsoft.SqlServer.IntegrationServices.Build.dll" />
  <ItemGroup>
    <SSISProjPath Include="DataWarehouse">
      <!--the top level name for the project in the SSIS catalog, amend as required-->
      <DeploymentFile>.\CI.SSIS\bin\Development\Project1.ispac</DeploymentFile>
      <!--The path to the file which is to be deployed; the file name must match the Name property configured in VS for the project.-->
    </SSISProjPath>
    <!-- Add new projects in here as required:
    <SSISProjPath Include="DataWarehouse">
      <DeploymentFile>.\ETL.MasterData\bin\Development\MasterData.ispac</DeploymentFile>
    </SSISProjPath>
    -->
  </ItemGroup>
  <UsingTask TaskName="DeployProjectToCatalogTask" AssemblyFile="C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\Microsoft.SqlServer.IntegrationServices.Build.dll" />
  <Target Name="SSISDeploy">
    <!-- The Catalog and Environment attributes below are recognised by newer versions of
         IntegrationServices.Build.dll; if working with an older version they may not be recognised -->
    <DeployProjectToCatalogTask
      DeploymentFile="%(SSISProjPath.DeploymentFile)"
      Instance="$(SSISServer)"
      Folder="%(SSISProjPath.Identity)"
      CreateFolder="true"
      Catalog="SSISDB"
      Environment="Development"/>
  </Target>
</Project>
The files should be added at the top of the solution which should give something like this:



Right click the relevant SSIS project and set the Name property to the value you would like to see when the Project is deployed to the SSIS catalog.


Team City Configuration

Add a new build configuration, e.g. ‘SSIS Build and Deploy’, to the project.


Attach the TFS root as per the previous blog in the Version Control Settings tab.

Add a new build step ‘Build SSIS Projects’. Depending on your environment and the Visual Studio version installed you may need to use a different version of MSBuild.


You will notice that the command line parameters here include the value ‘/p:SSISServer=Development’.

This setting should match a suitable configuration in your project configuration in Visual Studio. In this example I have used Development.


Add another step ‘Deploy SSIS projects’.


Ensure that the command line parameter is of the format ‘/p:SSISServer=YourDeploymentServerName’.
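Put together, the deploy step is equivalent to running something like the following from the checkout directory (the server and configuration names are examples and should be amended for your environment):

```
MSBuild.exe SSISDeploy.proj /p:SSISServer=YourDeploymentServerName /p:CONFIGURATION=Development
```

The SSISServer property flows through to the Instance attribute of the DeployProjectToCatalogTask in the SSISDeploy.proj file shown earlier.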

Upon completion the build steps should look like this:


When you view the project in the main Team City ‘Projects’ section you should see the following if all is well:



Checking the SSIS catalog, we can see that upon check-in the project has automatically been deployed to the target server.


See the next blog in this series for details of how to set up automated deployment of SSAS projects.

Stop Automatic checkout of SSIS Packages in Visual Studio

Every once in a while I work on a project affected by this issue: upon loading a solution, the project(s) are immediately checked out even though no actual changes have been made to them. The issue is very irritating and can hinder collaborative work. It is hard to produce a definitive list of all the potential causes, however I have recently found two sets of circumstances which cause it; when I happen upon another I will post it here along with any solutions I have found.

Database File Not In Source Control

I found the changes were occurring in one of the xml files for the SSIS project – each SSIS project in a solution has a .dtproj file. According to the Microsoft website, the .dtproj file “contains information about project configurations and items such as packages”. On investigation I can see that one of the main things this file does is keep a record of all the files required for the project. I found that in projects where these issues were occurring, the part of the file that references the .database file was changing:

<Database>
  <Name>ETL CheckOutIssues 11.database</Name>
  <FullPath>ETL CheckOutIssues 11.database</FullPath>
</Database>

Projects which did not have the issue had xml that looked like this (no number):

<Database>
  <Name>ETL CheckOutIssues.database</Name>
  <FullPath>ETL CheckOutIssues.database</FullPath>
</Database>

Looking a bit further, I found that the .database file was not included in source control for the solution. If VS cannot find the file it will create a new one and change the reference in the .dtproj file, appending a number which increments each time it fails to find the file.

Furthermore, new .database files are not included in source control upon creation by default, so if a project is renamed and a developer gets the project source for the first time, VS will create a new .database file with a new naming format <newprojectname>.database. With many subsequent check-ins from different users, each one changing the name of the database, we can end up with a situation where the number increments further and causes a check out each time a developer opens the solution.
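As a quick way of spotting affected projects, a throwaway PowerShell sketch like the below can scan a solution folder for .dtproj files whose Database element has picked up a number suffix. The element path is inferred from the xml fragments above; treat this as illustrative rather than a supported tool:

```powershell
# Scan for .dtproj files whose <Database><Name> has gained a numeric suffix,
# e.g. 'ETL CheckOutIssues 11.database' - a sign the .database file is missing
# from source control.
Get-ChildItem -Path . -Recurse -Filter *.dtproj | ForEach-Object {
    [xml]$proj = Get-Content -Path $_.FullName
    $dbName = $proj.Project.Database.Name
    if ($dbName -match ' \d+\.database$') {
        Write-Output "$($_.FullName) references '$dbName'"
    }
}
```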

The solution

Fortunately the solution to the issue is quite straightforward. Right click the project in Visual Studio and unload it.


1. Open the project folder, find the .database file and rename it to the original name, e.g. ‘ETL CheckOutIssues.database’ (you may need to first delete a file to do this).

2. Open the .dtproj xml file and edit the references so that they correctly reference the renamed file, e.g.:

<Database>
  <Name>ETL CheckOutIssues.database</Name>
  <FullPath>ETL CheckOutIssues.database</FullPath>
</Database>

3. Right click the file and add it to source control


4. Navigate to the folder above in Windows Explorer and check in the changes


5. Reload the solution – you should see the SSIS project is not checked out automatically


6. Ensure all developers working on the project do a get latest on the SSIS project and overwrite the existing files



To change the behaviour of TFS so that .database files will be included in source control as standard you need to make the following changes:

1. In Visual Studio, navigate to Team > Team Project Collection Settings > Source Control


2. Under File types add a new type with extension *.database


Different Versions Of Data Tools

I recently found another cause of this issue: several people working on a project had slightly different versions of the ‘Business Intelligence Templates for Visual Studio’ (Data Tools). I found that the copy on my machine (whilst there were no available updates when checking through VS) had an older version number, which is included in the .dtproj file. Whilst this doesn't impact the ability to develop packages on machines running different versions, it does serve to confuse TFS and force a checkout, as the below comparison shows.


To resolve this you need to ensure everyone has the same version of Data Tools on their machine. The latest version for VS2013 is freely available for download. Once the file is downloaded you need to select the oddly titled option to ‘Perform a new installation of SQL Server’.


Following this you must make sure that the box for the BI Templates is ticked (it may not be if you already have a slightly earlier version installed).


Once the installation completed, the version numbers matched, meaning the .dtproj file was no longer checked out each time different users opened an IS project.

Connecting To Google BigQuery From SSRS

Recently I have been investigating Google BigQuery as a potential tool for use in a client data warehousing project.

The first thing I wanted to check was that it was suitable as a source for standard reports. Excel support is well documented; I wanted to try it from SSRS to determine whether I would be able to retrieve the data using tools other than Excel. To do this we need to use an ODBC source. Unfortunately Google do not provide a native ODBC driver, however I found a third party ODBC driver for this purpose. You can sign up for a free trial to test the product.

The first step is to install the driver; there are 32 and 64-bit versions included in the download. If you are running on a 64-bit machine you should install both versions.

Upon signing up for the trial you should receive a license file by email. Once the installation is completed, this file should be placed in both the C:\Program Files (x86)\Simba BigQuery ODBC Driver\lib and C:\Program Files\Simba BigQuery ODBC Driver\lib folders.


Following this you need to set up the connection. To do this, open the ODBC admin console, select ‘Add’, then select the ‘Simba BigQuery ODBC driver’.


Click Sign in; a web page will open which will ask you to enter the details for your BigQuery account. Following this it will provide you with a confirmation code which you can enter in the confirmation code box. Upon doing this the ‘Refresh Token’ field will automatically populate and you can select the Catalog and Project.


If all has gone well you will see the following:


The next step is to set up the connection in Reporting Services. To do this you need to set the connection type to ODBC, click Edit, select the ODBC connection name which was created in the previous step and test the connection.


You can now use the tool to run queries against your BigQuery data. As a word of caution, I have found the query syntax to be a little unforgiving in a way which it is not when querying via the standard BigQuery web interface – keywords need to be in upper case, field names are case-sensitive, and when referring to tables the ‘:’ character should be replaced with a ‘.’, e.g. ‘publicdata:samples.natality’ becomes ‘publicdata.samples.natality’.
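For example, a simple query against Google’s public natality sample would need to look like this when run through the ODBC driver (upper-case keywords and a ‘.’ instead of ‘:’ in the table reference):

```sql
SELECT state, COUNT(*) AS births
FROM publicdata.samples.natality
GROUP BY state
```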



Getting Started With Windows Azure Files Service

One of the exciting new features I have been using recently in Azure is the new ‘Azure Files’ service. Currently this is only available in preview, however I have already found it to be very useful.

This allows you to set up a storage account in the cloud, similar to BLOB storage (though not currently accessible in the same way), with the added benefit that you can mount the storage as a virtual drive which can be used by several machines to share a set of files!

To set up a File Service storage account you must first sign up for the preview feature.


As soon as the subscription is activated you are ready to start using the service.

To set up a storage account (it needs to be a new storage account, created after you signed up for the preview, to use the feature) navigate to the portal, select the storage option, then create a new storage account.


Once the storage account creation has completed you can confirm the new storage account is set up for the preview by selecting it and navigating to the dashboard. You will notice in the example below that the Files service is marked as active, and it is highlighted that the service is in preview.


Clicking manage access keys at the bottom of the dashboard will give you the storage account name and access key. Note the storage account name and the primary access key; you will need these later (you could also use the secondary access key if preferred).


You now need to create a file share within the new storage account. To do this we use Azure PowerShell; you must first have downloaded PowerShell and set it up to work with your subscription. Once you have connected to your subscription you need to issue the following commands:

#Retrieve the storage account context and store it in a variable named $StorageContext
$StorageContext = New-AzureStorageContext <storage account name> <storage access key>

#Create a file share in the storage context specified in the previous step
New-AzureStorageShare <File Share Name> -Context $StorageContext
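To confirm the share was created, the matching Get cmdlet can be used (this assumes the same $StorageContext variable from the commands above):

```powershell
#List the file shares in the storage account to confirm creation
Get-AzureStorageShare -Context $StorageContext
```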

If all goes well you should see something like the following in your PowerShell window:


The next step is to log onto the virtual machine in which you wish to create the shared drive and set up persisted credentials. This is required so that the drive does not disappear each time you restart the machine. From PowerShell or storage tools, issue the following command:

cmdkey /add:<storage account name>.file.core.windows.net /user:<storage account name> /pass:<storage account key>

Once the credential is added, issue the following command to map the network drive:

net use <DriveLetter>: \\<storage account name>.file.core.windows.net\<File share name>

If all goes well you should see something like the below:


That's it! Once you open Windows Explorer you should now see a permanent drive mapped as per the below. If you repeat the mapping steps you can add this drive to many machines within Azure!


The only limitation I have found so far is that you cannot mount the file share on a local machine. I found that the easiest way to transfer files was to use AzCopy (provided with the Azure SDK). You can upload a file with a simple command in this format:

"C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy" /Source:d:\SourceFolder\ /Dest:https://<Share><sharename>/ /DestKey:<Access Key> /s

Further information on using AzCopy can be found here.

Setting Up Credentials For Azure Automation

Recently I have been working with Simon Whiteley on a project to produce a scalable BI solution in the cloud. To achieve this we have been using the Azure Automation feature to schedule Azure PowerShell runbooks, which we use to determine the status of VMs in the environment and change their configuration as required.

As soon as you need the PowerShell scripts to do more than very basic functions, such as printing ‘Hello World’, they require permission to administer the environment. To grant this you need to create what is essentially a service account in Azure with the required permissions and then save its credentials; the credentials are then used when running the PowerShell scripts. I'm writing this up because, whilst it's a fairly straightforward operation once you have discovered what you need to do, it took me a while to figure out what was required.

To set this up, first navigate to the Active Directory (AD) section of the Azure Portal for the required subscription. If you do not have a local user set up for automation tasks you will need to create one. Below is a screenshot of such a user set up in the portal.


1. To create a new user select ‘Add user’ from the options at the bottom of the screen. Give the account a suitable name and setup as a ‘New user in your organisation’.


2. Setup a standard user. It is important you do not enable Multi-Factor authentication for this account as this will not work for an unattended service account.


3. Following this the user will be assigned a temporary password; this must be changed to a fixed one. Once you click create you will be provided with the login details for the account. Write these down or email them as they will be required later.


4. Ensure the user is setup as an administrator of the subscription. To do this navigate to ‘Subscriptions’ and select ‘Manage administrators’.


5. Add the user to the subscription as a co-administrator.


6. In order to use the new account you will need to sign out of the portal and sign in as the new user using the details from step 3. This will allow you to set a new password for the account. Save the password for the new account safely as you will need this.


7. Log back in under the original account. The next step is to create a credential that will be used in the PowerShell script for automation. Select the automation icon and it will prompt you to create a new account (you can skip this step if you already have an account created).


8. Click the arrow to view the details of the newly created automation account, then select Assets and ‘Add Setting’. You should see an option like the below, from which you need to select ‘Add Credential’.


9. Choose to add a PowerShell credential.


10. Provide the details for the account created in step 2 with the password from step 6.


You should now be able to create and execute a runbook with automation services. As a test the below runbook can be used to list the virtual machines currently running in the environment:


Effective Handling Of Failed Lookups In SSIS

Recently, while working on a project, I noticed that many of the packages were using the below pattern to deal with failed lookups. I have seen this in many organisations; whilst this isn't anything new, I wanted to write about it as it seems many people are unaware there is a better performing alternative.

What I have found is that when handling failed lookups many people use a pattern which looks like the below:


On initial inspection this may seem like a sensible way to deal with unknown members: it performs the required lookup and enables the calculation of an Unknown key where the lookup fails. However, we must remember the Union All transformation is a semi-blocking asynchronous transformation (it slows the flow of data as it passes through – see the bottom of the post for further information). As such, using this transformation will decrease the performance of SSIS packages and we should avoid its use wherever a better alternative exists.

As it happens, there is a very simple alternative to this pattern: set the lookup to ignore failure, so that rows where the lookup fails will have a null value.


All rows now flow down the Lookup Match Output, with a null value where the lookup failed. The final step is to add a derived column transformation prior to the insert into the target table to replace the null values using a formula as per the below:
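A minimal example of such a derived column expression, assuming an illustrative lookup output column named CustomerKey and -1 as the unknown member key:

```
ISNULL(CustomerKey) ? -1 : CustomerKey
```

Configure the derived column to replace the existing CustomerKey column so that downstream components see the substituted key.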


The package now looks like this:


When I run both versions of the package with my test data (10,000 rows in which 50% of the lookups fail), it is 30% faster when the Union All transform is avoided. It also creates a simpler flow which is easier to follow. For a full list of the SSIS components categorised into non-blocking, semi-blocking and fully-blocking, there is a great blog post on this here.

Setup Unit Testing with NUnit &amp; NBi

I recently attended the PASS Summit in Seattle. For me one of the most interesting talks was by Davide Mauri, who presented a very interesting session on unit testing BI projects. At Adatis we have our own testing framework which we have been using for some time now; however, the products looked good in the demonstration and I have been excited to try them out for myself and see how they perform. The results are very promising.

This is a quick blog article to detail the initial setup of the two products and running of a very basic test, if you are interested in Agile BI (it is interesting) then these two products provide a good way of implementing unit tests in a BI project. The software supports testing of SSIS package execution, T-SQL Queries, MDX, DAX and SSRS reports. Production of tests can also be automated from a table in SQL Server (keep posted for a future blog on this).

1. Download the NUnit setup files.

2. Run the setup utility with the default options (this will install NUnit to C:\Program Files (x86)\NUnit 2.6.3).


3. Download the files for NBi; these come as a zipped folder.

4. You then need to copy the files from the Framework folder contained in the NBi zip file into the NUnit folder (default C:\Program Files (x86)\NUnit 2.6.3\bin). The folder must contain the following additional files upon completion:


5. You now need to add a file which points to the tests you wish to execute. I created a subdirectory in the folder called ‘Tests’ and placed my test file in there (sample test file in the following step). The file must be called NBi.NUnit.Runtime.dll.config

The file should be in this format; we can see that for this example we are pointing to a file with the name SampleTest.nbits contained in the folder Tests.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="nbi" type="NBi.NUnit.Runtime.NBiSection, NBi.NUnit.Runtime"/>
  </configSections>
  <nbi testSuite="Tests\SampleTest.nbits"/>
</configuration>

6. Now you need to create a test. Tests are supplied in XML format (there is a tool called Genbi, supplied with NBi, to automate the generation of the XML). For the purposes of this blog I will provide a sample test file containing three tests: one which connects to SQL Server, runs a T-SQL query and compares the result to an expected value; one which connects to a multidimensional model and checks that an MDX query runs correctly; and one which checks the results of an MDX query. The file name must match the name of the file referenced in step 5 in order for the tests to work. You will also need to amend the connection strings, queries and database names depending upon your environment!

<?xml version="1.0" encoding="utf-8"?>
<testSuite name="My first test suite" xmlns="http://NBi/TestSuite">
  <test name="Test MDX Syntax">
    <system-under-test>
      <execution>
        <query name="MDX" connectionString="Provider=MSOLAP.5;Data Source=.\md;Integrated Security=SSPI;Initial Catalog=Adventure Works DW 2008R2;">
          SELECT [Measures].[Reseller Order Count] ON 0 FROM [Adventure Works]
        </query>
      </execution>
    </system-under-test>
    <assert>
      <syntacticallyCorrect/>
    </assert>
  </test>
  <test name="Test MDX Results">
    <system-under-test>
      <execution>
        <query connectionString="Provider=MSOLAP.5;Data Source=.\md;Integrated Security=SSPI;Initial Catalog=Adventure Works DW 2008R2;">
          <![CDATA[
          SELECT [Measures].[Reseller Order Count] ON 0 FROM [Adventure Works]
          ]]>
        </query>
      </execution>
    </system-under-test>
    <assert>
      <equalTo>
        <resultSet>
          <row>
            <cell>3796</cell>
          </row>
        </resultSet>
      </equalTo>
    </assert>
  </test>
  <test name="Test SQL Results">
    <system-under-test>
      <execution>
        <query connectionString="Data Source=.\;Initial Catalog=AdventureWorksDW2014;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;">
          <![CDATA[
          SELECT SUM([SalesAmount]) AS Amount FROM [dbo].[FactInternetSales]
          ]]>
        </query>
      </execution>
    </system-under-test>
    <assert>
      <equalTo>
        <resultSet>
          <row>
            <cell>29358677.2207</cell>
          </row>
        </resultSet>
      </equalTo>
    </assert>
  </test>
</testSuite>

7. In order to connect to Analysis Services, if you don't have it installed already, you will need to install the ADOMD client.

8. If you run the Nunit.exe file in the NUnit folder you should see the following GUI interface:


9. If you click the run button and the tests are running successfully then you should see the following:


10. If you are intending to use NUnit & NBi for automated testing you can call them straight from the command line in the NUnit directory by issuing the following command: Nunit-Console nbi.nunit.runtime.dll. The results will be output to an XML file which can be used to record details of the tests, or processed by SSIS to store the test details in a database for easy future reference.

Southampton User Group - Continuous Integration for the Microsoft SQL Server Stack

Many thanks to all of those that attended my session on Continuous Integration for the Microsoft SQL Server Stack at the Southampton user group.

Unfortunately some technical issues meant I couldn’t deliver the presentation as intended, I will be returning next year to show the demos and perhaps give a quick demonstration of some of the BI Unit testing tools I have been trialling recently.

The slides and sample project source from the demo have been uploaded to OneDrive; if anyone has any questions let me know. There is also a tutorial section on the blog which details the setup of the attached project end to end and should be worked through in order to configure TeamCity to work with the project.

SQL PASS Summit - Day 3

Today was the final day of the PASS Summit. The schedule was a little different in the morning, with no keynote speech. Instead I opted to see Jeremiah Peschka's session ‘Dynamic SQL: Build Fast, Flexible Queries’. This was an interesting talk which covered the security aspects of dynamic SQL, how to avoid problems such as SQL injection, and how to ensure that queries generated from dynamic SQL run quickly, for example by ensuring plan reuse on each execution. Whilst I very rarely find myself in a situation in which dynamic SQL is essential, I found this an interesting talk and I am sure it will come in very handy when the need does arise.

Following this I attended a Microsoft lab on Azure Machine Learning (AML), ‘Authoring a hand-written digit recognizer in Microsoft Azure Machine Learning Studio’. This was a hands-on session where computers, a set of exercises and sample data were supplied. Essentially the lesson consisted of providing a model with vector images of numbers, training the model to recognise the numbers and then passing a new set of number vector images to the model to see it identify them (>85%) correctly. The session was interesting and gave me a great overview of using AML Studio. My main issue was that the data was already provided in vector format (csv files which were imported); whilst I can see time constraints meant that converting from image to vector during the class would have been difficult, it would have been very interesting to have seen a few examples of the images, so that we could see exactly what was being identified and how different the numbers were, as this would illustrate how clever the algorithms are.

I finished the conference with Brian Larson's talk entitled ‘Effective Reporting through SSRS Advanced Authoring Features’. As you may know, SSRS has not been updated for some time now, and with the current focus on Power View I wasn't sure how much I would benefit from this or what the plans are for the tool in the future. I was quite surprised, therefore, when the room filled up and overflowed, so that quite a crowd of people had to stand in order to watch. The talk showed some interesting uses of SSRS – images embedded in the background of reports, adding strip lines and markers, and an example of creating an interactive report which allowed users to set their preferences for running other reports. The examples given were very good, as was the delivery; my only concern is that without new features added to SSRS (for example, the ability to do an asynchronous refresh of some report elements) I am not sure what the uptake of SSRS is going to be.

All in all I have had a fantastic and informative time here. I leave you with some pictures, taken earlier in the week :)


SQL PASS Summit – Day 2

This morning started with a talk from Rimma Nehme, a key Microsoft architect. The talk was informative, and clearly Azure and the cloud are something which Microsoft are very keen on. It was well thought through, and there was a fantastic analogy between pizza and the different flavours of Azure, which is not what I had expected.

Adam Machanic gave a presentation covering different methods which can be used to force the query optimiser into choosing more efficient query plans. This was not for the faint-hearted and included some ingenious methods he had developed to ‘fool’ the optimiser into selecting different query plans. The methods pushed towards parallel queries and ensured that the right workload was assigned to each of the threads so they finished at roughly the same time. The only drawback was that some of the examples produced code which was quite obfuscated, and not something I would use unless an exceptional issue necessitated it.

Davide Mauri's session on unit testing and agile development was very interesting; he showed demonstrations of NUnit, BI.Quality and NBi. These are tools which allow users to create unit tests when following an agile approach. Whilst we use similar methods at Adatis to provide unit testing functionality, the demonstrations were very effective and I will be doing some further research into these products to realise their full potential.

Connor Cunningham also presented a fantastic talk titled ‘Managing 1M + DBs – How big data is used to run SQL Azure’. This was a very insightful talk which detailed the issues that had arisen when running the SQL Azure service and the methods used to identify problems, quickly resolve them and find their root cause. Central to this was the collection of telemetry data from many sources, which could be used to perform diagnostics. Here we saw a very interesting usage of Azure Machine Learning, which had been set up to trigger alerts on the occurrence of unusual behaviour. Connor is a fantastic speaker and clearly one of the most knowledgeable people on the SQL engine; I really enjoyed this talk.

I’m off to the evening party now at the EMP museum, looking forward to having a few beers and meeting some new people :)