Jonathon Eve-O'Connor's Blog

SQL Saturday Plovdiv

Last weekend, Adatis made the first of (hopefully) many appearances at a Bulgarian SQL Server event - SQL Saturday Plovdiv. We’ve been busy recently setting up our new office in Sofia but had yet to engage with the local SQL community, so we were very excited to get involved and meet its members.


(Adatis new office space in Sofia)

Adatis were gold sponsors at SQL Plovdiv and there were roughly 200 attendees - Terry, Simon and I presented sessions on T-SQL, Azure SQLDW and SSIS.


(Plovdiv stand in action, complete with our PowerBI gallery)

Overall the event went well and the sessions were popular. Attendees were very friendly, interested and keen to engage with us. We were excited to receive applications from talented candidates with good potential to join our fledgling Bulgarian team (one insisting on attending our sessions to make sure we weren’t full of hot air!).


(team Plovdiv icecream break)

The next event in Sofia should be larger, and we are hoping to repeat this success and learn from the Plovdiv conference to make the Sofia one better still. The aim is to contribute to and increase awareness of Adatis within the Bulgarian community, present great sessions and also help us find new recruits to join the Bulgarian team :) If you happen to be near Sofia and are interested in finding out more about what we do, or would like to work with us, please do feel free to contact us using the details on the main website.

Continuous Integration with SQL Server 2016 for SSIS – Updated Template for SSIS Deployment

Recently we had a couple of issues on a project setting up Continuous Integration when working with SQL Server 2016. I thought it was worth blogging this in case anyone else had come across similar issues.

The issue was occurring because we had taken the latest source code for the Integration Services build and deploy dlls, which have had a couple of new parameters added. These must be supplied, otherwise a failure will occur when you try to deploy the project using the templates provided in the earlier blog. Obviously the addition of parameters is no bad thing, as we now have support for deployment of environments (for more information on this see this post from Ritchie Lee). However, it does require a couple of tweaks to the script, otherwise it will fail :) Full credit to Christo Olivier for picking up on this and resolving the issue.

The error

When running a build using the template, SSIS Build was returning an error and the build log looked like this:


The error “MSB4044: the “DeployProjectToCatalogTask” task was not given a value for the required parameter “Environment”” is actually quite clear - a required parameter “Environment” is not being supplied. However, having used the scripts without issue previously, it took a little time to determine the underlying cause.

The amendment is therefore to modify the file you are using for your MSBuild script (referred to in my previous tutorials as SSISDeploy.proj).

Whereas the last line of the MSBuild script previously looked like this:


We now need to add the following code to the template:
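The screenshot of the updated script has not survived here, so as a rough sketch only: the task invocation needs the new environment-related attribute supplied. Only “Environment” is confirmed by the error message; the other attribute names and values below are illustrative assumptions, so check the actual signature of DeployProjectToCatalogTask in your build of the dlls.

```xml
<!-- Hypothetical sketch: "Environment" comes from the error message;
     the remaining attribute names/values are placeholders. -->
<DeployProjectToCatalogTask
    DeploymentFile="$(SSISProjPath)"
    Instance="localhost"
    Folder="MySsisFolder"
    CreateFolder="true"
    Environment="MyEnvironment" />
```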


I have updated the previous blog to include this information. However, if you are receiving this error and have recently updated the dlls and upgraded to SQL 2016, it is worth checking that this is not the cause.

Attunity Connector Tips

In keeping with the theme of my previous blog, I thought I would put together a quick post detailing some of the more useful features of the Attunity connectors.

1. Support for expressions. Initially, when using the components, I thought that the Attunity connectors did not support expressions; however, I was mistaken. The components do support expressions, but you set these on the data flow properties rather than against the component. To do this, right-click on the empty background of the data flow and edit the properties. You will see that there is a set of properties which can be set using expressions.



2. If the component is running slowly, make sure you are running the drivers in 64-bit mode (when running from Visual Studio it will always be 32-bit). To do this you need to make sure that you have installed the 64-bit version of the connector (this installs the 32-bit version automatically). If packages fail when deployed to the SSIS catalog and run there, but work when run from Visual Studio, this may point to an issue with the installation of the connectors.


3. If you are running in 64-bit mode and you need to further increase the speed of your ETL, it is well worth checking the Batch Size option. Whilst the optimum setting depends upon your environment, I generally find the default setting of 100 very low. I normally start by increasing this to 1000 and work from there; in some cases I have seen the optimum value be greater than 10,000, so results vary. However, if loading speed is becoming an issue and this is still set to the default value of 100, it is certainly worth investigating.


Stop Automatic checkout of SSIS Packages in Visual Studio

Every once in a while I work on a project affected by this issue: upon loading a solution, the project(s) are immediately checked out even though no actual changes have been made to them. The issue is very irritating and can hinder collaborative work. It is hard to produce a definitive list of all the potential causes, but I have recently found two sets of circumstances which trigger it; when I happen upon another I will post it here along with any solutions I have found.

Database File Not In Source Control

I found the changes occurring were in one of the XML files for the SSIS project - each SSIS project in a solution has a .dtproj file. According to the Microsoft website the .dtproj file “contains information about project configurations and items such as packages”. On investigation I can see that one of the main things this file does is keep a record of all the files required for the project. I found that in projects where these issues were occurring, the part of the file that references the .database file was changing:

<Database>
  <Name>ETL CheckOutIssues 11.database</Name>
  <FullPath>ETL CheckOutIssues 11.database</FullPath>
</Database>

Projects which did not have the issue had xml that looked like this (no number):

<Database>
  <Name>ETL CheckOutIssues.database</Name>
  <FullPath>ETL CheckOutIssues.database</FullPath>
</Database>

I looked a bit further and found that the .database file was not included in source control for the solution. If VS cannot find the file it will create a new one and change the reference in the .dtproj file, appending a number which increments each time it fails to find the file.

Furthermore, new .database files are not included in source control upon creation by default, so if a project is renamed and a developer gets the project source for the first time, VS will create a new .database file with a new naming format, <newprojectname>.database. With many subsequent check-ins from different users, each one changing the name of the database file, we can end up in a situation where the number increments further and causes a check out each time a developer opens the solution. To resolve this we need to standardise the names and make sure that the .dtproj file and the .database file names match.


The solution

Fortunately the solution to the issue is quite straightforward. Right click the project in Visual Studio and unload it.


1. Open the project folder, find the .database file and rename it to the original name, e.g. ‘ETL CheckOutIssues.database’ (you may need to first delete a file to do this).

2. Open the .dtproj XML file and edit the references so that they correctly reference the renamed file and the names of the .database and .dtproj files match, e.g.:

<Database>
  <Name>ETL CheckOutIssues.database</Name>
  <FullPath>ETL CheckOutIssues.database</FullPath>
</Database>


3. Right click the file and add it to source control


4. Navigate to the folder above in Windows Explorer and check in the changes


5. Reload the solution – you should see the SSIS project is not checked out automatically


6. Ensure all developers working on the project do a get latest on the SSIS project and overwrite the existing files



To change the behaviour of TFS so that .database files will be included in source control as standard you need to make the following changes:

1. In Visual Studio navigate to Team > Team Project Collection Settings > Source Control


2. Under File types add a new type with extension *.database


Different Versions Of Data Tools

I recently found another cause of this issue: several people working on a project had slightly different versions of ‘Business Intelligence Templates for Visual Studio’ (Data Tools). I found that the copy on my machine (whilst there were no available updates when checking through VS) had an older version number, which is included in the .dtproj file. Whilst this doesn't impact the ability to develop packages on machines running different versions, it does serve to confuse TFS and force a checkout, as the below comparison shows.


To resolve this, you need to ensure everyone has the same version of Data Tools on their machine. The latest version for VS2013 is freely available for download from here. Once the file is downloaded you need to select the oddly titled option to ‘Perform a new installation of SQL Server’.


Following this you must make sure that the box for the BI Templates is ticked (it may not be if you already have a slightly earlier version installed).


Once the installation completed, the version number stayed the same, meaning the .dtproj file was no longer checked out each time different users opened an IS project.

Setup Of Oracle Attunity Connectors

Fairly often I come across a project where there is a requirement to extract data from an Oracle source. Whilst this can be done in SSIS using a standard ODBC connection, there are some drawbacks - the main one for me being that when working with larger data volumes the speed of data transfer can be quite slow.

As such, I have found the best solution is generally to set up the Attunity connectors in order to facilitate quick extraction from the Oracle system. For a great comparison of the performance of straight ODBC against Attunity, please see this post from Len Wyatt.

Unfortunately the setup can be a little bit fiddly so I thought I would document the process here (as much for me as anyone else).

1. The first step is to determine the version of the Attunity connectors required – this depends upon your SQL server version:

SQL Server Version | Attunity Connector Version | Supported Oracle Versions
SQL Server 2016 | v4.0 | Oracle 10.x; Oracle 11.x; Oracle 12c
SQL Server 2014 | v3.0 | Oracle 10.x; Oracle 11.x; Oracle 12c
SQL Server 2012 | v2.0 | Oracle and higher; Oracle 10.x; Oracle 11.x
SQL Server 2008 / 2008 R2 | v1.2 | Oracle and higher; Oracle 10.x; Oracle 11.x

2. Following this you need to install the Oracle client. Generally you will be working in a 64-bit environment, which will require both the 64-bit and 32-bit versions of the Oracle client to be installed (Data Tools uses a 32-bit connection, but deployed packages should run in 64-bit mode for optimum performance). The Oracle client can be downloaded from the Oracle site; you will need to sign up and create an account if you don’t have one already.

NOTE: It is not enough to only download the connectors for SSIS and run the installation packages; in order to retrieve data you must also install the Oracle client.

Once you have determined the correct version of the Attunity connectors, you will need to select the relevant version of the client. Because Visual Studio runs SSIS packages in 32-bit mode, you will need the 32-bit version of the client installed on development machines. However, packages deployed to and run from the SSIS catalog run in 64-bit mode, so you need to install the 64-bit version to get the best performance in production environments.

There are several versions of the Oracle client available for download on the site; most recently I used the following version of the Oracle components. I find that these install relatively simply and provide the required features without requiring too much disk space or consuming too much in the way of resources.


Depending on the version of the client you have selected, the installation instructions for Oracle will vary. However, I would suggest the following:

1. Run the installation packages as an administrator by right clicking and selecting ‘Run as Administrator’ – previously when I have not done this it has led to unusual results.

2. You will need to install the clients on all machines which will be running the SSIS packages - this may be several servers, including developer machines used for creating SSIS packages and servers used for scheduled ETL runs.

3. The installation should be done in the following order:

- Install Oracle 32-bit Client (if required)
- Install Oracle 64-bit Client (if required)
- Restart machine

Once the client is installed, a simple way to test that it is working is to use SQL Developer to open a connection to the Oracle database. Once this is working correctly, you then need to install the Attunity connectors.

Again, depending on the system you are working on, you will need to install either the 64-bit or 32-bit Attunity components for SSIS. However, here the rules are slightly different.

- If you need to run in 64-bit or 64-bit and 32-bit mode, you should install only the 64-bit version of the components.
- If you need to run in 32-bit mode only, then you should install only the 32-bit version of the components.

The latest version of the connectors (for SQL 2016) are available from:

The installation of the components is pretty straightforward - once you have downloaded them, you run the executable and then restart the machine.

Once completed the Attunity components will be available for selection within the SSIS toolbox in the Data Flow as per the below. The connectors are a very versatile component and support many of the features you would expect from an SSIS component. Be aware that if you are looking for expressions, then these are to be found on the Data Flow Properties rather than the properties of the actual component.


Automation Variables In Azure

If you need to share values between Azure runbooks or parameterise them, Automation Variables may be just the thing you need.

Essentially these are exactly what you might expect - variables which you define and can use within Azure. The variables can be updated via PowerShell scripts, manually via the Azure portal and of course by runbooks.

The first thing to do is to define the variable. To do this go to the Azure portal and go into the Automation section. Following this navigate to the assets section, click new and select ‘Add Setting’


Select ‘Add Variable’



Fill in the relevant details for the variable. Note the variable type can be changed programmatically later if this is required.


Set a default value for the variable if it’s required and if the variable contains sensitive information you can also select for it to be encrypted.


To access the variable from a PowerShell script, the syntax is as follows:

Get-AutomationVariable -Name 'VariableName'

So for example, in order to access the value of a variable called “ETLProcessStatus” and assign this to a variable named $CurrentRunStatus you would do the following:

$CurrentRunStatus = Get-AutomationVariable -Name "ETLProcessStatus"
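Updating a value works in much the same way via the Set-AutomationVariable cmdlet. As a minimal sketch (the variable name “ETLProcessStatus” and the status value are just examples, and these cmdlets only resolve inside an Azure Automation runbook):

```powershell
# Inside a runbook: read the current shared value...
$CurrentRunStatus = Get-AutomationVariable -Name "ETLProcessStatus"

# ...and write a new one so other runbooks can pick up the change
Set-AutomationVariable -Name "ETLProcessStatus" -Value "Running"
```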

In case you forget the syntax for retrieving or updating the value of the variables there is a nice feature where it will write the basic syntax out for you.

Click insert at the bottom of the ribbon and then ‘Setting’


Following this you can select from the following options and template commands will be inserted into the script.


In a blog which will be coming shortly, I will detail how I use this variable and some other techniques in order to assist in running a scalable cloud ETL solution.


Reflections On PASS Summit

Well it’s been quite a week and this has certainly been a summit with many announcements for the wider BI community. Whilst I really enjoyed my previous visit to PASS and the sessions were certainly very interesting, aside from a few announcements around Power BI there wasn’t much new in the BI landscape.

This summit has been quite a different experience - it really feels that BI features are again high on the agenda with SQL Server 2016. We have seen the resurrection of SSRS and significant enhancements across the MS BI stack.

As usual, as well as the new technology announcements, there were some excellent sessions from community members. I particularly enjoyed Rob Farley’s session on Query Plan Operators and Jason Strate’s talk on the Plan Cache, both of which contained techniques I will certainly be using in future.

I thought I would write up a summary of the changes which have been announced and my thoughts on them split by feature:


Probably the most unexpected announcements were the changes to SSRS. There has been so little discussion of SSRS since the release of the 2008 R2 edition of SQL Server that I had started to suspect SSRS was going to be left to go the same way as PerformancePoint. This would have been a great shame, as SSRS has been a great tool for on-premise reporting and fits nicely into a gap which other parts of the BI stack don’t cover.

The following new features were announced:

- The SSRS portal has been rewritten in HTML 5, making it cross-browser compatible and able to work across devices (tablets, phones etc.). It also means the portal looks much newer and more in line with other offerings from MS such as the Power BI portal.

- It will now be possible to set up the positioning and grouping of parameters in SSRS reports. This will make quite a significant change to the usability of reports and was something which previously could not be done without building a custom wrapper to display the reports.

- The portal will support hosting of DataZen reports. The functionality of DataZen will remain, and reports hosted on the site will be separate from traditional SSRS reports; however, this additional report type will be rolled into SSRS and the current DataZen product will not exist in its current form.

- SSRS Report parts can now be pinned to Power BI dashboards.

- Reports can now be exported directly to PowerPoint without requiring the additional use of third party tools.

- Changes have been made to the default charts and report objects, which should mean that off-the-shelf reports immediately look nicer than previously. This seems like an immediate win: as all those who have worked with SSRS will know, it can take some time to produce nice-looking reports, and much of this work is down to changing the current default settings.

- The Power BI Mobile App will now support connections to RS, allowing on the move users to view on premise SSRS reports.

- Tree Map and Sunburst chart types have been added providing additional visualisation types.

- Printing is no longer dependent upon ActiveX.

- Support for .Net 4

There were also a couple of things I noticed which were not quite so positive in the list of announcements:

- When asked about licensing, MS could not confirm how SSRS would fit into the licensing model in SQL 2016. This could be because it has yet to be decided; however, it did make me wonder whether, with all these enhancements, this is something which could potentially incur an additional fee.

- Reports deployed to SharePoint will not benefit from all of the new features; for example, they will not benefit from any of the parameter changes, and I am unsure whether other enhancements such as HTML 5 and the new chart types will be supported in reports deployed to SharePoint. Basically, don’t deploy your reports to SP unless you have to.

Overall though, the picture is fairly positive, and certainly far more was announced than I had expected. The HTML 5 compliance and changes to parameters will certainly appeal to clients. My only criticism is that, functionally, the navigation and user experience of the reports will be quite similar for a desktop user. Certainly the HTML 5 will improve the look, and the parameter changes will also make some improvements, but the reports will still essentially be static parameterised reports - there was certainly no announcement of any new features such as AJAX-style interaction with different report parts in the way that is currently supported by Power BI.


Whilst perhaps not quite as major as the changes announced to SSRS, SSIS has some pretty nifty improvements.

- Reusable (parameterised) control flow templates have been added to enable re-use of SSIS logic. Whilst I will have to try the implementation out for myself, there are often many similar actions performed in an ETL process which could certainly benefit from this.

- Column names now appear in the data viewer in the event of an error! This is a great change; whilst it has always amazed me that this has not been present in previous releases (one might think this is pretty fundamental!), it will be a great enhancement for many people working with SSIS on a daily basis.

- The latest version of VS 2015 (available now in preview!) will support all versions of SSIS, allowing all SSIS development to take place in the same version of Visual Studio. On a side note, ‘Data Tools’ has been rolled into one product allowing for development of DB projects, SSIS packages, SSRS and SSAS. Whilst this may not sound groundbreaking, these are long-overdue features which I think will make many people very happy.

-  New connectors allow direct connectivity to Azure storage and Hadoop.

- Packages can now be deployed incrementally to the SSIS catalog, rather than requiring entire projects to be deployed at a time.

- A new role ‘SSIS_logreader’ will be added which will allow access to read logs without requiring Admin permissions.

- Foreach loop will now be able to iterate over blobs in Azure Storage.

- A new property on SSIS packages, AutoAdjustBufferSize, will, when set to true, try to calculate the optimum buffer size for the package.

- Support for Excel 2013 as a datasource.

- The OData source and connection managers now support the v3 & v4 protocols, enabling support for JSON and ATOM.

These features will certainly make some improvements to what is already an excellent product. Fixing some of the major bugbears, such as VS support for different versions and showing column names in the event of an error rather than leaving the developer to guess, will further improve the product, and the additional support for connections to Azure will be very useful for those hosting data in the cloud. Unfortunately, when I asked whether the issues around source control of SSIS packages will be addressed (it is difficult to work collaboratively and add packages when several people are working on the same project), this is not something we can expect in this release.


Last but not least there were a number of changes to SSAS in both the new Tabular models and the traditional multidimensional cubes.

- Tabular models will have proper support for many-to-many relationships and bi-directional cross filtering. Whilst for many this was an expected feature, already being available in the Power BI Desktop app, this is a major improvement. I think that currently one of the major reasons people choose to implement the traditional multidimensional model over Tabular is the complexity of these measure calculations in Tabular.

- An improved designer for Tabular models - this was shown in the demonstration and included some major improvements such as colour highlighting of DAX code, better IntelliSense and the ability to add comments to measure definitions. Whilst from an end-user perspective there will be no change, from a productivity and developer-happiness side these features will be hugely appreciated and help make the Tabular model a more mature product. Again, I think this is another feature which is likely to make people who had previously rejected the Tabular model reconsider its viability.

- Tabular models will support parallel partition processing, and translations, which for Tabular have only been available via BIDS Helper as an unsupported feature, will become fully supported.

- Direct Query has a number of enhancements. Firstly, the queries generated by it are far more efficient than previously (they used to be very inefficient, so I will wait tentatively to see how much of an improvement this is). It also supports calculated columns (which must be used with caution!), and perhaps the largest change is that it has better MDX support, meaning that Excel can now be used to connect to the model and query it directly.

- The DAX language has some improvements - it now supports variables and can return multiple result sets from a single query.

- A new Tabular JSON document representing the model will become available and can be used when the compatibility level is set to the new 1200 value. From the demo it looked like this format should be far more readable to the eye, as well as making changes to the model itself quicker due to an improved structure of the metadata.

- Support has been added to track extended events in AS graphically.

- Power BI will support connections to multidimensional models.

- Issues around inefficiencies with MDX queries generated by Excel have been resolved leading to significantly faster performance in the 2016 release of Excel.
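Of the DAX improvements listed above, variables are easy to illustrate. As a hedged sketch of the new VAR/RETURN syntax (the table and column names Sales[SalesAmount] and Sales[TotalProductCost] are invented for the example):

```dax
-- Variables are evaluated once and can be reused in the RETURN expression
Margin % :=
VAR TotalSales = SUM ( Sales[SalesAmount] )
VAR TotalCost  = SUM ( Sales[TotalProductCost] )
RETURN
    DIVIDE ( TotalSales - TotalCost, TotalSales )
```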

Overall this is a really nice set of improvements. We see the Tabular model becoming a more mature product and also some enhancements which will give great benefit to users of the traditional MD models, such as integration with Power BI and faster performance in Excel. There are also some significant improvements both to Power BI and the Database engine which I am looking forward to.

I should also note that Microsoft have announced their BI Roadmap which gives an overview of their future direction here:

Anyway, that’s it for now. I’ll leave you with this picture from earlier in the week, when my colleague Simon and I took a trip to Mt St Helens :)


Enhanced Analysis Services Multidimensional Support in Datazen

Datazen is a great new addition to the MS BI stack, which has been greeted with much enthusiasm since Microsoft announced they had acquired it and that it would be freely available to customers with SQL Enterprise licenses.

However, there have been a few complications around how the interface with Analysis Services worked - the main one being that only measures on the first axis were included in the query.

For example executing the following basic MDX query:

SELECT
NON EMPTY {[Measures].[Internet Sales Amount]} ON 0,
NON EMPTY {[Product].[Product].[Product]} ON 1
FROM [Adventure Works]

Would result in a dataset which did not include the members from the product dimension (not great!).

Until recently, the workaround for this involved creating a calculated member which returns the Member_Caption and placing this on the same axis as the measures, as in the example below.

WITH MEMBER [Measures].[ProductName] AS
[Product].[Product].CurrentMember.Properties("Member_Caption")
SELECT
{[Measures].[ProductName], [Measures].[Internet Sales Amount]} ON 0,
NONEMPTY({[Product].[Product].[Product]}, [Measures].[Internet Sales Amount]) ON 1
FROM [Adventure Works]

However, with the latest version of Datazen this is no longer required! This means that you can write the query above in the original format and it will work as expected. For people that use Management Studio or Report Builder to create their MDX queries rather than hand-coding them, it is now possible to use these statements in your Datazen MDX queries.

Firstly you need to connect Datazen server to your SSAS instance – there is a great article on the Microsoft site here which explains the process.

1. When you connect to Analysis Services to create a new dataset you are now presented with the following options. Note the MDX is from the first example which would not have worked previously.


2. On the second screen you need to select the data types and display names for each of the measures. It is a good idea to change the names to something as clear as possible for report creation (by default the display name is the same as the column name).


3. There’s not much to do on this screen provided that the preview looks correct – you just click Next :)


Now the data set is ready to be used in reports.

Dynamic Bandings in Power BI Desktop

Recently I’ve been looking at applying dynamic bandings for a client. What was required was to split the data into different bandings based upon the values in the source data. For example, if it was product price, the bottom third would show as low value, the middle third would show as being in the middle, and the top third would show as high value.

The client was using the latest version of Power BI Desktop which supports several new functions. In order to achieve this I used one of the new percentile functions (PERCENTILE.EXC) which made the process really simple, so I thought I would share the technique.

Firstly I imported the dataset - in this example I have used a basic dataset where TestValue is a number which increments from 1 to 300 over 300 rows.


Secondly I added a calculated column ‘Banding’ for which I used the following DAX formula:

Banding =
IF (
    VALUE ( PERCENTILE.EXC ( SampleData[TestValue], 1 / 3 ) ) > SampleData[TestValue],
    "Low",
    IF (
        VALUE ( PERCENTILE.EXC ( SampleData[TestValue], 2 / 3 ) ) > SampleData[TestValue],
        "Medium",
        "High"
    )
)

Uploading the workbook to Power BI, you can now see that we have 100 rows belonging to each of the categories, based upon the dynamic banding we have applied using the percentile calculation.


We can now use this to determine how products of different colours perform - for example, we can see that compared to blue products there are more red ones in the High category. We can also break down sales of product colour by banding.


The only issue is that we can’t sort in the order Low > Medium > High, as Power BI arranges the values alphabetically. To fix this we need to add a column to use for sorting, which can be achieved with the following DAX formula:

BandingSortOrder =
IF (
    VALUE ( PERCENTILE.EXC ( SampleData[TestValue], 1 / 3 ) ) > SampleData[TestValue],
    "1",
    IF (
        VALUE ( PERCENTILE.EXC ( SampleData[TestValue], 2 / 3 ) ) > SampleData[TestValue],
        "2",
        "3"
    )
)

In the model designer select the Banding calculated column and on the ‘Modeling’ option tab click ‘Sort By Column’ and select BandingSortOrder.


Hide the BandingSortOrder column from the Report View so it’s not visible in the reporting layer (right click in fields to do this).


Now when we publish to Power BI we can see it’s ordered from Low to High:


Setting Up The Power BI Analysis Services Connector

The Power BI Analysis Services Connector is used in order to expose a Tabular model to Power BI allowing end users to consume data from the model directly for building of reports and ad-hoc analysis.

The setup of the connector is very straightforward; however, you should bear the following in mind:

- Only Tabular models are supported with the connectors – you will not be able to use this to enable reporting from a multidimensional database.
- The Analysis Services Connector performs best if it is hosted on the same server that hosts the Tabular model.
- The speed of the internet connection between the server running the Analysis Server Connector and the Power BI service is crucial to performance.
- You can’t run the Analysis Services Connector on a server also running either the Data Management Gateway or the Power BI Personal Gateway.

Installation Steps

1. Download the connector from

2. Run the Analysis Services Connector Setup Program on the machine hosting the Tabular model.


3. Once the installation has completed you will be given the option to launch the connector




4. Enter the login details required to connect to Power BI


5. If the details are correct you will see the below screen:


6. Enter the account details required to connect to the Tabular Instance – clearly this needs to be an account with access to read from the Tabular model.


7. Give the connection a suitable name and a friendly error message to be displayed to users in the case that the connection fails


This completes the wizard. The next step is to log onto the Power BI site - if all has gone well you should see the model as per the below.