James Russell's Blog

Migrating Team Foundation Server backlog

What is the issue

I recently needed to move a Microsoft Team Foundation Server (TFS) backlog I had created in an on-premises instance of TFS to a newly created Microsoft Visual Studio Online (VSO) instance. My existing on-premises TFS backlog had multiple Features, Backlog Items and Tasks, as well as detailed information for each item; my new VSO backlog was completely empty. This blog post describes the steps I took to transfer my backlog from on-premises TFS to VSO, but the same approach can be used, or slightly adapted, to transfer a backlog between any versions of TFS.

Steps for migration

1. Export your current on-premises TFS backlog to Excel with the required level of information. To do this, open the web portal for your on-premises TFS, go to the Work tab, open the ‘Queries’ page and create a new query. Be sure to change the ‘Type of query’ to ‘Tree of work items’ and customise the query to include all of the information in your on-premises backlog which you wish to transfer. In my ‘Column Options’ I included: ID, Work Item Type, Title, Assigned To, Effort, State and Iteration Path.

[Screenshot: new query in the TFS web portal with ‘Tree of work items’ selected and the chosen columns]

2. Open Excel (with the Team Foundation Server add-in installed), go to the ‘Team’ tab and select ‘New List’, then connect to the on-premises TFS and the relevant project, finally choosing your query from the Query List. You should then have a table representing the items in your backlog.

[Screenshot: Excel ‘New List’ connected to the on-premises TFS query]

3. Create a dummy Feature, Product Backlog Item and Task in your new VSO backlog (one dummy item for each level you have in your backlog hierarchy). This gives you the correct number of indented title columns for your hierarchy.

[Screenshot: dummy Feature, Product Backlog Item and Task in the new VSO backlog]

4. Create a query in your VSO TFS as described in step 1 which directly matches the one created in your on-premises TFS, with the same columns, filters, etc.

5. Open another instance of Excel and follow the instructions in step 2, but this time connect to the VSO TFS project and open the query created in step 4. Copy all the rows from the table created from your on-premises TFS query in step 2 and paste them underneath the dummy items in your online query. Do not copy the IDs of the items you are copying; leave them blank, as IDs will be assigned once the items are published to your VSO TFS.

*You may get validation errors for the newly added items. I had validation errors due to unsupported field values for ‘State’: I had ‘In Progress’ values in my on-premises TFS which were not in the list of supported values for my VSO TFS. This is due to a TFS constraint whereby you cannot create a new item (which is effectively what we are doing) unless it is in its initial state. To rectify this I filtered the ‘State’ column to the values that were not ‘New’ (the initial state for the column) and manually changed them all to ‘New’. After the backlog has been published you will need to navigate to the backlog in the VSO web portal and manually change the state of those items on the board back to what they were in the on-premises backlog.

6. Once the new items have been added, select ‘Publish’ and the changes will be uploaded to TFS. Navigate to the new VSO backlog webpage and check that everything has been added successfully.

Introduction to Azure Key Vault for SQL Server

What is it

Azure Key Vault is a feature available in Microsoft Azure which can be used to manage and store the keys that are used to encrypt your data. A big benefit of using Azure Key Vault is that the process of managing and maintaining your keys is completely streamlined: keys can be created very quickly for development and testing purposes and can then be seamlessly migrated to production keys, where permissions can be granted or revoked as necessary.

A variety of keys and secrets can be held in Azure Key Vault, including authentication keys, storage account keys, data encryption keys, .PFX files and passwords. Stored keys are protected by Hardware Security Modules (HSMs); keys can be imported into or generated by these HSMs, which are validated to FIPS 140-2 Level 2.

Uses and limitations for SQL Server

Key Vaults can be created and used by anyone with an Azure subscription. They are useful to Azure developers and security admins, but also to the administrators who manage other Azure services for an organisation: those administrators can take responsibility for the management and maintenance of keys and secrets, and can provide users with URIs which can be applied directly to their applications.

Azure Key Vault can be integrated with SQL Server as an Extensible Key Management (EKM) provider to protect SQL Server encryption keys. This is particularly useful with Always Encrypted, which is available in SQL Server 2016: with Always Encrypted, SQL Server does not hold the keys used to decrypt the data stored in encrypted columns, which makes Azure Key Vault a perfect fit as a centralised key store for this functionality.
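
As a minimal sketch of that integration (the vault URL, key name and key version below are placeholder assumptions), the column master key for Always Encrypted can be defined in SQL Server as a pointer to a Key Vault key; the column encryption keys themselves are normally provisioned from a client tool such as SQL Server Management Studio or PowerShell.

    -- Register a column master key that lives in Azure Key Vault (key path is a placeholder).
    CREATE COLUMN MASTER KEY AKV_ColumnMasterKey
    WITH (
        KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
        KEY_PATH = N'https://contosovault.vault.azure.net/keys/AlwaysEncryptedCMK/<key version>'
    );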

SQL Server has a variety of encryption methods, including Transparent Data Encryption (TDE), Column Level Encryption (CLE) and Backup Encryption. These methods implement a traditional key hierarchy whereby the data is encrypted using a symmetric data encryption key (DEK), which is in turn protected by a hierarchy of keys stored in SQL Server. By instead using Azure Key Vault as the EKM provider, SQL Server can protect the data encryption keys with an asymmetric key stored externally to SQL Server, which adds an additional security layer and separates the management of keys from the management of data. This functionality can be adopted both by cloud-based SQL Server instances on Azure virtual machines and by on-premises SQL Server instances.

To implement Azure Key Vault protection for your SQL Server encryption keys you use the SQL Server Connector, which acts as a bridge between SQL Server and Azure Key Vault. The SQL Server Connector is registered with the relevant SQL Server instance, which allows Azure Key Vault to be used as a cryptographic provider; next, the configuration and permissions are set up from within Azure and the appropriate credentials are created from within SQL Server. Finally, an asymmetric key held in Azure Key Vault is opened in SQL Server, where it can be used to protect database encryption keys on the instance.
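
The following T-SQL is an illustrative sketch of those registration steps rather than a full walkthrough; the provider and key names, file path, vault name and credential secret format are assumptions that should be checked against the SQL Server Connector documentation for your installation.

    -- Allow EKM providers to be used on this instance.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'EKM provider enabled', 1;
    RECONFIGURE;

    -- Register the SQL Server Connector DLL as a cryptographic (EKM) provider.
    CREATE CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM
    FROM FILE = 'C:\Program Files\SQL Server Connector for Microsoft Azure Key Vault\Microsoft.AzureKeyVaultService.EKM.dll';

    -- Credential used to authenticate to the vault; IDENTITY is the vault name and SECRET
    -- combines the Azure AD client ID (without hyphens) with the client secret.
    CREATE CREDENTIAL AzureKeyVault_Credential
        WITH IDENTITY = 'ContosoKeyVault',
        SECRET = '<client id without hyphens><client secret>'
    FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM;

    ALTER LOGIN [DOMAIN\SQLAdmin] ADD CREDENTIAL AzureKeyVault_Credential;

    -- Open the asymmetric key that already exists in the vault so SQL Server can use it
    -- to protect database encryption keys.
    CREATE ASYMMETRIC KEY ContosoAKVKey
    FROM PROVIDER AzureKeyVault_EKM
    WITH PROVIDER_KEY_NAME = 'ContosoSQLKey',
    CREATION_DISPOSITION = OPEN_EXISTING;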

The SQL Server Connector is available as a download from Microsoft here and requires Windows Server 2012 or Windows Server 2012 R2 as your operating system. It currently supports the Enterprise 64-bit versions of SQL Server 2016, 2014, 2012 SP2, 2012 SP1 CU6 and 2008 R2 CU8; for earlier versions of SQL Server 2008 and 2012 there is a patch available, which is linked from the aforementioned Microsoft download page. For more information on the SQL Server Connector and its other software requirements, see the Details and System Requirements sections on the same download page.

Potential limitations and issues to keep in mind include the following:

· Azure Key Vault, like most cloud services, is a paid-for service; although there are no upfront costs or termination fees, there is a price for both key storage and operations. There are two available service tiers, but only Premium offers the HSM-protected keys which are used with SQL Server implementations. The prices as of November 2016 are:

     - £0.6109 per version of a key per month and £0.0183/10,000 operations for HSM protected keys.

     - £0.0183/10,000 operations for secrets and software protected keys.

     - £1.8327 per renewal request and £0.0183/10,000 operations for Certificate operations.

· Authentication to Azure Key Vault requires Azure Active Directory; this is, however, included with Azure subscriptions.

· Key Vault permissions give a principal access to all secrets and keys within a vault, which is something to keep in mind when assigning management and maintenance of the key vault. Best practice is to have a separate Azure Key Vault for each application instance storing secrets or keys.

· Where applications using Azure Key Vault are split across separate regions, it is best practice to create a separate key vault in each of the regions used by those applications, for both performance and security reasons.

· Transaction limits apply, which allow the following maximum number of transactions in 10 seconds, per vault, per region:

     - 5 for ‘HSM- CREATE KEY’ transaction types.

     - 1000 for ‘HSM- other transaction’ types.

     - 10 for ‘Soft-key CREATE KEY’ transaction types.

     - 1500 for ‘Soft-key other transaction’ types.

     - 2000 for ‘All secrets, vault related transaction’ types.

Further Reading

I will be posting a future blog showing an in-depth real-life application of Azure Key Vault with a SQL Server instance using the SQL Server Connector, and the steps taken to create it, including initial set-up, testing and deployment strategies. I will also discuss the permissions and configurations that can be used and the roles that can be assigned for the management and maintenance of the implementation.

For more detailed information see the following links:

· Microsoft Azure – Key Vault

· Microsoft Documents – What is Azure Key Vault

· Microsoft – SQL Server Connector for Microsoft Key Vault

SSAS Tabular Tables Losing Structure of Measures

What is the issue

While recently working on a SQL Server 2012 Tabular cube in Visual Studio I came across an extremely frustrating issue where the visual layout of my measures was re-ordered. You are probably aware that the structure or ordering of the measures within the workspace of a Tabular table in Visual Studio has no relevance to how Visual Studio sees or uses the measures; however, if you are like me, you will naturally organise your measures into some kind of pattern that makes them easier to use, maintain and locate.

In this instance, I had all of my visible ‘switching’ measures listed in the first column and their hidden, dependent measures, which were different date analysis calculations, listed in adjacent columns. For example, all of my ‘Year to Date’ calculations were in the second column, my ‘Month to Date’ calculations were in the third column, and so on. Eventually I had over 20 visible switching measures in the first column, each with 8 separate hidden date analysis calculations, giving me over 160 measures, all neatly organised in the table’s workspace. An example of what this may look like is shown in the following diagram:

[Diagram: measures organised by column, with switching measures in the first column and each set of date analysis calculations in its own adjacent column]

At a later point I opened the table to add a new measure and, to my surprise, all of the existing measures had moved position and been restructured, as shown in the following diagram:

[Diagram: the same table after Visual Studio re-ordered the measures]

We were using TFS as our source control for the solution, and by going back through the version history of the SSAS Tabular project I found that a change to a completely different table, checked in weeks earlier, had the changes to this table’s layout checked in with it. Unfortunately, as there had been several other changes and check-ins since then, rolling back wasn’t an option. That left me with two options: painstakingly cut and paste each measure back into its original location, or leave the table in its new format and move on. Although I have a slight OCD side when it comes to things like this, for the sake of time on the project I left it as it was; as I mentioned earlier, the structure of the measures on a table has no real relevance in SQL Server 2012 or 2014, it purely helps with the management and maintenance of groups of measures. A colleague has since run into this issue too; their table was restructured differently, but Visual Studio seems to either add an additional column or take an existing one away and restructure the measures accordingly.

Potential solutions

I am unaware of the exact reason this happens and therefore can’t suggest a way to structure your tables to prevent it. I found only one way to correct the issue once it has occurred, other than manually cutting and pasting your measures back to the way you want them. Luckily, one member of our team had not worked on the project since before the check-in that contained the restructured table, and so had not taken the latest version of the code. I copied their Model.bim.layout file, which is located in the project folder in File Explorer, and replaced my version with it. After restarting Visual Studio and re-opening my project I found that my table structure was back to its original layout. The issue here is that we have multiple developers working on the same Tabular project but on different child tables, so going through every table before every check-in to confirm it hasn’t been restructured would be inefficient, and a layout change could easily be missed.

The solution that our development team adopted was to attach the Model.bim.layout file separately to the solution in a Solution Items folder and have it stored in source control, as shown in the diagram below:

[Screenshot: Model.bim.layout held in a Solution Items folder within the solution]

Whenever a developer starts working on the Tabular project they should replace the Model.bim.layout file in their local project with this copy, and when they have finished they should copy their version back into Solution Items. Checking this file in at the same time as their changes to the Tabular project gives the file an independent version history, and keeping it separate from the Tabular project prompts the developer to manually check the layout of the tables before checking in the Model.bim.layout file, hopefully adding an extra check to prevent this issue from happening.

Although this extra step adds time to the check-in process whenever changes are made to the Tabular project, in my opinion it is worth it to keep the structure of the tables in a Tabular project consistent and easily maintainable, especially when they start to get large and complex.

I have only witnessed or heard of this happening in Visual Studio 2013 with Tabular projects in both SQL Server 2012 and SQL Server 2014. I am hoping that, with the many improvements and fixes Microsoft has released for SQL Server 2016 Tabular projects, this will no longer happen, but if you have witnessed it in SQL Server 2016 then please comment below and let me know. Also, if you know of a better way to prevent this issue from occurring, or have more information on why it happens, again please comment below.

Introduction to Dynamic Data Masking

What is it

SQL Server Dynamic Data Masking is a new feature released with SQL Server 2016. It allows you to define a masking rule which can be applied to the data in a column, limiting the exposure of the actual data.

For example, if you have the following password column in a table which contains user passwords and has a datatype of nvarchar:

Password

Kingston123

Sh@r0na

you can apply a data masking rule to the column which will make it appear as follows to any unauthorised user:

Password

XXXX

XXXX
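
As a minimal sketch of how such a rule is applied, assuming the column lives in a hypothetical dbo.Users table, the mask is added with an ALTER TABLE statement using the Default masking option described in the next section:

    -- Add the default mask to the existing Password column (dbo.Users is a hypothetical table).
    ALTER TABLE dbo.Users
    ALTER COLUMN Password ADD MASKED WITH (FUNCTION = 'default()');

Users who have not been granted the UNMASK permission will then see only the masked value when they query the column.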

Masking Options

Dynamic Data Masking currently has four possible masking options which can be applied to a column:

Default

This will mask the full value depending on the column’s data type.

For string data types (char, nchar, varchar, nvarchar, text and ntext) the value will be replaced with XXXX. If the length of the field is less than 4 characters, then that number of X’s will be used.

For numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float and real) the value will be replaced with 0.

For date and time data types (date, datetime2, datetime, datetimeoffset, smalldatetime and time) the value will be replaced with 01.01.2000 00:00:00.0000000.

For binary data types (binary, varbinary and image) the value will be replaced with 0.

Email

This will mask the full value, exposing the first letter of the string and masking the rest with XXX@XXXX.com. For example, ‘james.russell@myemail.co.uk’ will be masked as ‘jXXX@XXXX.com’.

Custom String

This will mask part of a value, exposing a number of characters at the start or end of the string based on prefix and suffix padding values. For example, ‘abcdefghij’, when given a prefix padding of 2 and a suffix padding of 4, will be masked as ‘abXXXXghij’.

Random

This can be used with any numeric data type and will mask the original value with a random value from the supplied range. For example, ‘123456789’, when given a range of 1-5, will be masked as either 1, 2, 3, 4 or 5.
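
The following is a minimal sketch showing all four options applied when a table is created; the table and column names are assumptions chosen purely for illustration.

    CREATE TABLE dbo.MaskingDemo
    (
        Id          int IDENTITY(1,1) PRIMARY KEY,
        Password    nvarchar(128) MASKED WITH (FUNCTION = 'default()'),           -- Default
        Email       nvarchar(256) MASKED WITH (FUNCTION = 'email()'),             -- Email
        AccountCode varchar(20)   MASKED WITH (FUNCTION = 'partial(2,"XXXX",4)'), -- Custom String
        Discount    int           MASKED WITH (FUNCTION = 'random(1,5)')          -- Random
    );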

Uses and Limitations

Dynamic data masking is designed to restrict the exposure of sensitive data to non-privileged users with minimal impact on the application layer. Because it only affects the result set of a query over designated database fields, while keeping the actual data in the database unchanged, it is perfect for reporting or Business Intelligence uses; it also means that it can be incorporated without modifying pre-existing queries.

This feature is implemented by running Transact-SQL commands in SQL 2016 or by using the Azure portal for Azure SQL Databases.

It is important to note that dynamic data masking is not designed to provide comprehensive database security and will not prevent database users who connect directly to the database from running intrusive queries that expose pieces of the sensitive data. The feature can, however, be used in conjunction with other SQL Server security features such as encryption, row level security and auditing.
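
To illustrate what a non-privileged user sees, here is a minimal sketch that reuses the assumed dbo.MaskingDemo table above; the user name is also an assumption. In SQL Server 2016 the UNMASK permission is granted at the database level.

    -- A read-only user without UNMASK sees only masked values.
    CREATE USER MaskedReader WITHOUT LOGIN;
    GRANT SELECT ON dbo.MaskingDemo TO MaskedReader;

    EXECUTE AS USER = 'MaskedReader';
    SELECT * FROM dbo.MaskingDemo;   -- returns masked values
    REVERT;

    -- Granting UNMASK exposes the real data to that user; revoking it re-applies the masks.
    GRANT UNMASK TO MaskedReader;
    EXECUTE AS USER = 'MaskedReader';
    SELECT * FROM dbo.MaskingDemo;   -- returns the actual values
    REVERT;
    REVOKE UNMASK FROM MaskedReader;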

There are a number of other restrictions and considerations that should be noted:

· Dynamic data masking will not work with: encrypted columns using Always Encrypted, FILESTREAM column types, COLUMN_SET or a sparse column that is part of a column set, computed columns (although if a computed column depends on a masked column then the result will be masked data), or keys for a FULLTEXT index.

· Updates can still be made to a masked column, even though a user querying the column will see masked data.

· Using SELECT INTO or INSERT INTO to copy data from a masked column into another table will result in masked data in the target table (see the sketch after this list).

· When using SQL Server Import and Export functionality on a database containing masked data, the resulting exported data file or imported table will contain masked data.
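
As a sketch of the SELECT INTO behaviour mentioned above, again reusing the assumed dbo.MaskingDemo table and MaskedReader user (who does not hold UNMASK):

    -- SELECT INTO needs these permissions; the new table is populated with the masked values.
    GRANT CREATE TABLE TO MaskedReader;
    GRANT ALTER ON SCHEMA::dbo TO MaskedReader;

    EXECUTE AS USER = 'MaskedReader';
    SELECT Email INTO dbo.MaskingDemo_Copy FROM dbo.MaskingDemo;
    REVERT;

    -- Even privileged users now see only the masked values in the copy.
    SELECT Email FROM dbo.MaskingDemo_Copy;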

Further Reading

I will be posting a future blog showing in-depth real-life applications of dynamic data masking with examples, applied permissions and further applications for both on-premises SQL 2016 and Azure SQL Databases.

For more detailed information see the following links:

· MSDN Dynamic Data Masking

· Get started with SQL Database Dynamic Data Masking (Azure Portal)