
Dan Evans' Blog

SQL Server backup to Azure Storage with SSIS

This is the final part of a 3-blog series detailing some of the basics of backing up SQL Server databases to Azure Blob storage. This blog shows how backups to Azure Storage (and their management) can be part of your ETL process using SSIS.

The SSIS Back Up Database task has gained a dialog that allows you to back up a database to a URL. This simply leverages the same backup-to-URL functionality present in SSMS. This way you can implement your backups as part of your ETL process, perhaps running at the end of a load.

The guide below is for a full backup of a single database. It also contains a PowerShell script that cleans up old backup files to stop the container getting too full, using the last modified date property of the blobs and a provided threshold value. The database being backed up in this example is a very small database used for testing out programming concepts such as test-driven development.


Create a SQL Server Credential

You need to add a credential into SQL Server for access to Azure Storage. Use SQL Server Management Studio to do this with the following code, substituting your details.

CREATE CREDENTIAL [https://adatissqlbackupstorage.blob.core.windows.net]
WITH IDENTITY = '<storage account name>',
SECRET = '<Access Key>'

The backup task will need this credential in order to create the backup. You could set up tighter access to a single container using a SAS (Shared Access Signature), but here I’m keeping it simple and just using the storage account and its access key.

When you open the Back Up Database task, select ‘URL’ in the ‘Back up to’ dropdown. Also choose which database(s) you want to back up.


In the destination window select the credential that you created and then enter the name of the storage container you are backing up into. The URL prefix should be in the form of:

https://<storage account name>.blob.core.windows.net/<container name>
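Before wiring this into SSIS, it can be worth proving the credential and URL prefix outside the package. A rough sketch using the SQLPS Backup-SqlDatabase cmdlet (not the SSIS task itself) is shown below; the instance, database and container names are placeholders, and the credential name assumes you created it with the URL-style name shown earlier:

# Sanity-check the credential and container from PowerShell before using them in SSIS
Import-Module SQLPS -DisableNameChecking
Backup-SqlDatabase -ServerInstance "<server\instance>" -Database "<database name>" `
    -BackupFile "https://<storage account name>.blob.core.windows.net/<container name>/<database name>.bak" `
    -SqlCredential "https://<storage account name>.blob.core.windows.net"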


PowerShell maintenance script

Because there is no native way in SSIS to maintain backups stored in the cloud, we have to create a script, run with PowerShell, that cleans up backups once they have reached a certain age.

The script is run from an Execute Process Task. In the task editor you want to set the following properties:


Executable – the path to the executable, i.e. Azure PowerShell.

Arguments – the path to your script, plus any parameter values you want to pass to it.

Standard Input Variable – where you can supply the actual variable values.
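For illustration, these properties might end up with values along the following lines (the script path and container name here are made up, so substitute your own; the storage account name is the one used earlier in this series):

Executable: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments:  -ExecutionPolicy Bypass -File "C:\Scripts\Remove-OldBackupBlobs.ps1" -AzureStorageAccountName "adatissqlbackupstorage" -AzureContainerName "backups"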

In my example I have tried to “variablize” these properties as much as possible, driving them from SSIS variables and expressions.


The script below accepts two parameters:

  • AzureStorageAccountName – Name of the storage account

  • AzureContainerName – Name of the Container that the backup blobs reside in

In order to simplify the process of signing into an Azure subscription, you can create something called an Azure Publish Settings File using the following command in PowerShell:

Get-AzurePublishSettingsFile

You may find that your scripts try to use different subscriptions if you have imported more than one of these files. The easy solution is to delete all the files in the folder C:\Users\<USER>\AppData\Roaming\Windows Azure Powershell. Once this is done you can re-run the above statement to get the file for the Azure subscription you want.
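For reference, the classic Azure PowerShell cmdlets involved look something like this (the file path and subscription name are placeholders):

# Download the publish settings file for your subscription (opens a browser sign-in)
Get-AzurePublishSettingsFile
# Import it so the cmdlets know which subscription to use
Import-AzurePublishSettingsFile -PublishSettingsFile "C:\Temp\MySubscription.publishsettings"
# If several subscriptions have been imported, pick the one to work with
Select-AzureSubscription -SubscriptionName "My Azure Subscription"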

The example below uses a threshold of only 24 hours, which I used mainly for testing. You will probably want to set this to something longer – perhaps a week, a month, or however long you want to retain your backups for.
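A minimal sketch of what such a cleanup script could look like, using the classic (ASM) Azure PowerShell cmdlets of the time, with the retention threshold hard-coded for brevity:

param(
    [Parameter(Mandatory = $true)][string]$AzureStorageAccountName,
    [Parameter(Mandatory = $true)][string]$AzureContainerName
)

# Retention threshold – 24 hours here for testing; widen this for a real retention policy
$retentionHours = 24
$cutoff = (Get-Date).ToUniversalTime().AddHours(-$retentionHours)

# Assumes the subscription has already been imported via a publish settings file
$storageKey = (Get-AzureStorageKey -StorageAccountName $AzureStorageAccountName).Primary
$context = New-AzureStorageContext -StorageAccountName $AzureStorageAccountName -StorageAccountKey $storageKey

# Delete every backup blob whose last modified date is older than the threshold
Get-AzureStorageBlob -Container $AzureContainerName -Context $context |
    Where-Object { $_.LastModified.UtcDateTime -lt $cutoff } |
    ForEach-Object { Remove-AzureStorageBlob -Blob $_.Name -Container $AzureContainerName -Context $context }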


Save your script with the ‘.ps1’ extension so that it is recognised as a PowerShell script.

There you have it – a very basic script that will stop you from hoarding too many backup blobs.

SQL Server Managed Backup to Windows Azure

This blog is part 2 of a 3-blog series concerning backing up your SQL Server databases to Azure Blob storage. This part details the basic setup of the SQL Server Managed Backup feature.

When you automate backups yourself, you need to schedule jobs as part of a strategy and write custom scripts to maintain those backups – there is currently no built-in way to delete old ones. The SQL Server Managed Backup service, on the other hand, allows you to specify a retention period of between 1 and 30 days. Other than the storage location there is nothing else to configure, at least to get started, although there are some advanced options available, including encryption and a custom backup schedule (2016).

The frequency and type of the backups are, by default, determined by the workload. A full backup takes place when managed backup is enabled for the first time, when log growth is equal to or larger than 1 GB, when a week has passed since the last full backup, or when the log chain is broken. Transaction log backups happen when the log space used is 5 MB or larger, when 2 hours have passed since the last log backup, or any time the transaction log backup falls behind the full database backup.

With the custom backup schedule you can specify your own frequency. Full backups can be set to daily or weekly, and log backups can be set in minutes or hours. Weekly backups can be taken on the day of your choice, and the start time for any backup is also configurable. Finally, you can set a maximum amount of time for a backup to complete in.

With SQL Server 2014 the maximum size allowed for a backup is 1 TB, because the storage used (page blobs) has this limitation. 2016 backs up to block blobs, which have a maximum size of 200 GB but, through striping, allow backups of up to 12 TB. Block blobs also cost less to store data in than page blobs. If you are using 2014, databases must be in the full or bulk-logged recovery model, as simple is not supported; the simple recovery model is supported in 2016. System databases are not supported in SQL Server 2014 but they are in SQL Server 2016.

Create an Azure Storage Account

The first prerequisite is to create an Azure Storage Account. You need this regardless of whether you choose to use managed backups or whether you use your own strategy using SSIS or scripting.
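If you prefer scripting to the portal, the classic Azure PowerShell cmdlet can create the account too. A rough sketch; the name, location and account type below are placeholders:

# Create a classic (ASM) storage account for the backups
New-AzureStorageAccount -StorageAccountName "<storage account name>" -Location "West Europe" -Type "Standard_LRS"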


Create a Blob Container

After your storage account has been created you should then create a blob container. A container provides a grouping for sets of blobs, and blobs have to live inside a container. There is no limit to the number of blobs you can have inside a container and no limit to the number of containers in a storage account, so you can go to town organising your blobs and containers.


In the Azure Portal if you select the storage account you just created you can create a container by navigating via Blobs > Containers. Here is where you can add the container. You must give it a name and also an access type. The choices here are Private, Public Blob & Public Container.

· Private

o No anonymous access and a shared access policy is required to get in.

· Public Blob

o Container access is restricted: blobs within it won’t be listed if anonymous access is attempted, but individual blobs can still be accessed anonymously via a direct URL.

· Public Container

o Container and all the blobs in it are open to anonymous access. All blobs within the container can be listed and viewed by anyone.

The access policy can be changed after the container is created. I created this one with the Private setting.
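The same container can also be created with the classic Azure storage cmdlets if you prefer; the -Permission values map onto the access types above (Off = Private, Blob = Public Blob, Container = Public Container). The access key below is a placeholder:

$context = New-AzureStorageContext -StorageAccountName "adatissqlbackupstorage" -StorageAccountKey "<Access Key>"
# Off = Private, Blob = Public Blob, Container = Public Container
New-AzureStorageContainer -Name "managed-backups-container" -Permission Off -Context $context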

Create a Shared Access Signature

Next, if you wish to allow access to your private blob container, you need a Shared Access Signature (SAS). You can’t create one via the portal, so you have two options: programmatically through PowerShell, or by using Azure Storage Explorer. Below you can see that I am using Azure Storage Explorer: add your storage account and navigate to your containers, then highlight your container and click Security. Here you can generate the SAS. Give it full permissions so that the SQL credential you will create afterwards is able to carry out the backups.


You can also specify an expiry on the SAS. The SAS is provided together with a URL, so separate out the container URL from the SAS token.
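If you would rather script the SAS than use Storage Explorer, a rough equivalent with the classic storage cmdlets might look like this (the access key and expiry are placeholders):

$context = New-AzureStorageContext -StorageAccountName "adatissqlbackupstorage" -StorageAccountKey "<Access Key>"
# Full read/write/delete/list permissions, returned as a full URI (container URL + SAS token)
New-AzureStorageContainerSASToken -Name "managed-backups-container" -Permission rwdl `
    -ExpiryTime (Get-Date).AddYears(1) -FullUri -Context $context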

Create a SQL Server Credential

Now you need to add a credential into SQL Server for access to the blob container. Use SQL Server Management Studio to do this with the following code, substituting your details.

2016

CREATE CREDENTIAL [https://adatissqlbackupstorage.blob.core.windows.net/managed-backups-container]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS TOKEN STRING HERE>'

2014

CREATE CREDENTIAL [https://adatissqlbackupstorage.blob.core.windows.net]
WITH IDENTITY = 'adatissqlbackupstorage',
SECRET = '<ACCESS KEY STRING HERE>'

With 2014, managed backup creates the container that stores the backups itself, using the naming format ‘<machine name>-<instance name>’.

Create the Managed Backup

Note: Ensure that the SQL Server Agent is running

Note: The following code is for SQL Server 2014. The code is different for 2016.

Use the following code, substituting your own values, to set up the managed backup. If you wish to apply this to all databases (effectively the whole instance) then use * or NULL for the database name parameter.

USE msdb;

EXEC smart_admin.sp_set_db_backup
     @database_name = 'DOJO'
    ,@retention_days = 5
    ,@credential_name = 'https://adatissqlbackupstorage.blob.core.windows.net'
    ,@encryption_algorithm = NO_ENCRYPTION
    ,@enable_backup = 1;

This code specifies not to use any encryption, just to make this simple.
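To check that the configuration has been applied, you can read it back – here via Invoke-Sqlcmd, although the same query works in SSMS. The instance name is a placeholder, and smart_admin is the SQL Server 2014 schema:

# Read back the managed backup configuration for the database
Invoke-Sqlcmd -ServerInstance "<server\instance>" -Database "msdb" `
    -Query "SELECT * FROM smart_admin.fn_backup_db_config('DOJO');"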

Once created, you can monitor the health status of the job and also configure email notifications for errors and warnings. Check out the following link for more information:

https://msdn.microsoft.com/en-us/library/dn449498%28v=sql.120%29.aspx

SQL Server Backup and Restore with Microsoft Azure Blob Storage Service

This Blog is part 1 of a 3 blog series concerning backing up your SQL Server databases to Azure Cloud Storage. This first blog will provide a summary of the options currently available to you.

The reasons to leverage Azure for your SQL Server backups are obvious. It is a far better alternative to tape backups: it eliminates the need for offsite transportation and makes high availability of the backups easier to obtain. In addition, there is no hardware management overhead. Other things that should be factored in are the cost (you only pay for what you use) and the increased accessibility of the backups themselves.

When 2016 is released, there will be 3 supported methods of storing SQL Server backups in Azure Blob storage. The first two below can be used from on-premises SQL Server instances, but the third requires the instance to be running in an Azure VM:

Backup to URL

This works in the same way as backups have always been taken to disk (or tape). It became available in SQL Server 2012 SP1 CU2. From 2016 onwards you will be able to use block blobs instead of page blobs. This method can be leveraged using the SSIS backup task.

SQL Server Managed Backups

A way to let SQL Server manage backups, negating the need to form your own strategies. This feature also allows you to set retention periods and works at a database level or at instance level.

Snapshot Backups for Database Files

Coming with SQL Server 2016, this provides near-instantaneous backups and restores with point-in-time restore. The database files themselves need to be stored directly in Azure Storage, and the instance must be running in an Azure VM.

Here are a few positives and negatives about each option:

· Backup to URL

o + Can be done during an SSIS load

o + Can be executed whenever you wish

o - Currently no way to maintain the backups from within SSIS natively

· SQL Server Managed Backups

o + Fully managed by SQL Server

o + Can have custom schedules

o + Maintains backups (retention period)

o - Always invoked as part of a schedule or under certain conditions

· Snapshot Backups

o + Very fast

o - Requires database files to be situated in Azure Blobs

o - Requires instance to be on an Azure VM – not an on-prem solution

I will explore the first two options in parts 2 and 3. I will not go into more detail about database file snapshot backups, as I am sticking to on-premises solutions.

Costs

Costs in this section are correct at the time of writing (2015/10/28). For updated costs check the links.

Data Transfer

The following information has been lifted directly with minor alterations from: https://azure.microsoft.com/en-gb/pricing/details/data-transfers/

Inbound data transfers

There is no cost for transferring data into Azure.

Outbound data transfers

Outbound data transfers            Zone 1               Zone 2               Zone 3
First 5 GB/month                   Free                 Free                 Free
5 GB – 10 TB/month                 £0.0532 per GB       £0.0844 per GB       £0.1106 per GB
Next 40 TB (10–50 TB)/month        £0.0508 per GB       £0.0825 per GB       £0.107 per GB
Next 100 TB (50–150 TB)/month      £0.0428 per GB       £0.0795 per GB       £0.1039 per GB
Next 350 TB (150–500 TB)/month     £0.0306 per GB       £0.0734 per GB       £0.0978 per GB
Over 500 TB/month                  Contact Microsoft    Contact Microsoft    Contact Microsoft

A sub-region is the lowest level geo-location that you may select to deploy your applications and associated data. For data transfers (except CDN), the following regions correspond to Zone 1, Zone 2 and Zone 3.

Zone 1: US West, US East, US North Central, US South Central, US East 2, US Central, Europe West, Europe North

Zone 2: Asia Pacific East, Asia Pacific Southeast, Japan East, Japan West, Australia East, Australia Southeast

Zone 3: Brazil South

As an example, say you have a Blob Storage account in West Europe (Zone 1) containing a backup file for a database that is approximately 9 GB in size. You begin a restore of that data and pull it from Azure down to a local disk.

The first 5 GB of this file are free. The remaining 4 GB fall into the £0.0532 per GB tier, so the total cost of transferring the file out of Azure would be £0.2128, assuming you only did this once in the month.

Standard Storage

Block blobs

There is a charge of £0.0022 per 100,000 transactions for Block Blobs. Transactions include both read and write operations to storage.

Storage Capacity                        LRS                 ZRS                 GRS                 RA-GRS
First 1 TB/month                        £0.0147 per GB      £0.0184 per GB      £0.0294 per GB      £0.0373 per GB
Next 49 TB (1–50 TB)/month              £0.0145 per GB      £0.0181 per GB      £0.0289 per GB      £0.0366 per GB
Next 450 TB (50–500 TB)/month           £0.0142 per GB      £0.0178 per GB      £0.0284 per GB      £0.036 per GB
Next 500 TB (500–1,000 TB)/month        £0.014 per GB       £0.0175 per GB      £0.0279 per GB      £0.0354 per GB
Next 4,000 TB (1,000–5,000 TB)/month    £0.0137 per GB      £0.0172 per GB      £0.0274 per GB      £0.0348 per GB
Over 5,000 TB/month                     Contact Microsoft   Contact Microsoft   Contact Microsoft   Contact Microsoft


Page Blobs and Disks

Storage Capacity                        LRS                 GRS                 RA-GRS
First 1 TB/month                        £0.0306 per GB      £0.0581 per GB      £0.0734 per GB
Next 49 TB (1–50 TB)/month              £0.0306 per GB      £0.0489 per GB      £0.0611 per GB
Next 450 TB (50–500 TB)/month           £0.0306 per GB      £0.0428 per GB      £0.055 per GB
Next 500 TB (500–1,000 TB)/month        £0.0306 per GB      £0.0398 per GB      £0.0489 per GB
Next 4,000 TB (1,000–5,000 TB)/month    £0.0275 per GB      £0.0367 per GB      £0.0459 per GB
Over 5,000 TB/month                     Contact Microsoft   Contact Microsoft   Contact Microsoft

This concludes the first overview blog. Hopefully it has given you a rough idea of the options available to you for backing up to Azure Storage. The next two blogs, coming shortly, will focus on managed backups and SSIS backups to URL in more detail, showing how to configure and use them.

SQL PASS Summit 2014 – Day 3 & Summary

Here we are: the final day of the conference and of my time in Seattle. I am writing this just before boarding the plane home. The sun finally revealed itself in the morning! Great weather, shame we were inside. There was no keynote; instead there were two morning sessions before lunch. I chose to go to a talk on the key issues with Power BI. These were actually to do with licensing and the scheduled data refresh. Quite interesting, but it demonstrated that there is some ironing out to do with Power BI.

Second session covered the details of different types of SQL Joins – Nested Loops, Merge, and Hash. A talk that was paced nicely and a refresher that was at a level good for beginners too.

Final session of the conference for me was some advanced SSRS reporting. It was completely demo driven and there were a few little techniques I hadn’t employed before in there which made it worth seeing. Brian Larson delivered solid examples and his experience with the technology shows. SSRS is definitely showing its age now, however. Still no mention of any future updates…

After this we had to leave early in order to grab an evening flight back which was a shame as there was another Azure Machine Learning session that I would have really liked to attend.

Overall I’ve enjoyed my first time at PASS Summit immensely. It has been great to meet and mingle with new people.

On the downside, there was often quite a mismatch between the level the schedule gave for a session and the level I felt the session turned out to be once I attended.

I certainly hope to attend next year! It has been a great experience and I hope to see more of Seattle, perhaps adventuring south, next time (it was announced that Seattle would host PASS again next year, albeit a little earlier in October). We did get to see a lot of it, arriving a week early, even driving up to Vancouver and back down. The scenery in the North Cascades is fantastic! We also got to sample plenty of the food. I’m sure I’ve put on a ton of weight as a result.

Anyway, back to the UK and back to the job. I look forward to trying out Azure ML once I get back!

SQL PASS Summit 2014 – Day 2

A well received Keynote this morning given by Dr Rimma Nehme. There was a sentiment that this one was better especially because of the distinct lack of marketing waffle. Despite being tied to her laptop due to a broken clicker she delivered a well-rounded talk on Azure and received a standing ovation.

I finished the rest of the morning by attending Adam Machanic’s session on Query Tuning which focused exclusively on parallelism, with many ways of tricking the query optimizer into forcing parallel executions. I have no idea how he manages to come up with some of these - Excellent talk is all I can say.

In the afternoon, I went to Tim Mitchell’s Bullet-resistant SSIS Packages talk. This was enjoyable and featured some great tips regarding package restartability and event handlers. There were plenty of questions which meant that it felt cut short at the end. It was a nice refresher in a way but I did not learn anything new, unfortunately.

The third talk was a bit of a mistake on my part. NOTE: always read the description for the talk and don’t try to judge it by the title alone! Levels aren’t always a great indicator either. ‘Power Query: Business Users and ETL Developers Hug Day’ did not have hugs. Neither did I feel that it was a level 300 talk; it was 200 at a push, and was more of an introduction to Power Query with a few nice examples. While they were good, I didn’t gain anything from this, but that’s mostly my fault.

Lastly was an introduction to Azure Machine Learning. I found this very interesting as I’ve not encountered the technology before. Raymond Laghaeian provided an introduction to what machine learning is – an iterative process of feature engineering – following it up with a demo of supervised machine learning. I will definitely be having a play with this when I get home. If you are at all interested in machine learning you should really give it a try: it’s free and the only requirement to get started is a Microsoft account! Check it out here.

Anyway, onto the community Appreciation Party!

SQL PASS Summit 2014 – Kick Off

Day 1 has kicked off in Seattle, a remarkable city. Having arrived a week early, I’ve had plenty of time to check out the sights and the food and have enjoyed it immensely - a brilliant venue for PASS!

There were a few announcements at this morning’s Keynote, mostly regarding Azure. Azure SQL Databases are gaining larger index handling, parallel queries, extended events and in-memory columnstore for data marts. Joseph Sirosh gave a talk about Machine Learning & Information management showing a cool example of Azure Stream Analytics using Microsoft Kinect sensor information of customer interactions in a shop being uploaded straight into Power Map! I am looking forward to hearing more on Machine Learning.

There are also a handful of great improvements for Power BI. I am most looking forward to the new Live Ops Dashboards and drill-through actions! Combo Reports also look promising…

Moving on to the first session, I chose to attend ‘What’s new in Microsoft Power Query for Excel’. As it turns out there’s not a massive amount of new stuff – some new data sources, and a tick box when you do a merge to remove name prefixes. However, one of these new sources is the long-awaited Analysis Services data source. The new ODBC data source is a great addition too. There was a mention of the possibility of a decoupled M-query SSIS component! We probably won’t hear anything more until later in 2015, unfortunately. I would say this was not a level 300 session – more like 100/200.

The second session was ‘SQL Server 2014 AlwaysOn (High Availability and Disaster Recovery)’: a nice overview of what was brought in in 2012 and the newer 2014 features – these include an increased maximum number of secondary replicas, increased availability of readable secondaries and the Add Azure Replica Wizard. Despite not being a DBA, and it being a level 300 session, I found it easy to follow and absorb. I feel many of the DBAs in the room may not have taken away anything they did not already know, however.

Niko Neugebauer gave a fantastic, in depth session on ‘ETL Patterns with Clustered Columnstore Indexes’. It was a fast moving talk, despite the time spent waiting for some executions. It demanded your full attention! Definitely worthy of its 400 level. It left me a little tired for Marco Russo’s ‘DAX Patterns’ session which showed some examples of workarounds for common tabular weaknesses like distinct counts for Type 2 slowly changing dimensions and cumulative quantities.

Overall it was a strong day. I am looking forward to tomorrow. More to follow…

Dynamic Top N rows in Power Query

I had a situation recently where I was importing a CSV file into Power Query. The file had some summary data at the beginning that I wanted to skip. This is easy enough using the Remove Top Rows step. However, a problem I soon encountered was that if the file’s summary data varies in the number of rows, the Remove Top Rows step produces errors, especially if you then want to promote a row to be your header in a later step.

To get around this you can search for a particular string that is expected to appear, perhaps one that will become one of your headers or signify the start of a section of the file.


In this example I am using an Azure Usage file. I want to cut out the summary data at the top and start with the Daily Usage data.

Below is an abstract of a large query, starting at one of the file import steps:

1. Source = Csv.Document(File.Contents("NameRemoved.csv"),null,",",null,1252),
2. #"FilteredRows" = Table.SelectRows(Source, each Text.Contains([Column1], "Daily Usage")),
3. #"Position of Daily Usage" = Table.PositionOf(Source, FilteredRows{0}),
4. #"TopRemoved" = Table.Skip(Source, (#"Position of Daily Usage" + 1)),
5. #"First Row as Header" = Table.PromoteHeaders(#"TopRemoved"),

Pay attention to how steps 2, 3 and 4 all reference step 1. These steps can be added using the Advanced Editor.

Breaking it down by steps, starting with step 2 (#"FilteredRows"): this filters a particular column – in this case Column1 – for the string you are looking for, e.g. "Daily Usage". Table.SelectRows returns the matching rows as a new table.


Step 3 (#"Position of Daily Usage") then finds the position of the first row of the table from step 2 within the data imported in step 1. Table.PositionOf takes the table to search (here, Source from step 1) as its first input, and the {0} denotes the first row of the filtered table from step 2.


The row number is then used in step 4 (#"TopRemoved"), in the Table.Skip function. In this example I wanted to remove the row containing the Column1 string "Daily Usage" as well, so I added + 1.


Importing multiple files into Power Query

You can automatically append data from multiple files by choosing to place them in a folder and query the folder. You can choose a folder and it will also search subfolders. You can drop more files into the folder (or remove some) at a later stage, refresh the query and the new files will appear automatically.


Once you select From Folder you will be asked to specify the folder location with the option to browse to it.

Once selected, Power Query will find suitable files and list them in the query editor window.


There will be a column named Content. Like the name suggests this is the contents of the file. There are other columns with details of the file like the name, extension and file path. There is also an attributes column that can be expanded to expose more properties.

At the top of the Content column there is a button. If you click this it will import your files and combine them.


However, when you use this method you lose all the information about the files themselves and are left with just the contents. What if you want to use the information about the file? To get around that you can create a custom column over the Content column:


This example is using CSV files, so the custom column formula wraps each file’s Content in Csv.Document (something along the lines of Csv.Document([Content])). Creating this adds a table object into each row of the custom column. From there you can expand the contents alongside the file information.

The same can be done for files stored on SharePoint.


You will be prompted to provide the URL for the SharePoint site. This will be the root URL.

When you do this, a navigator window will open up but your files won’t appear. If you click on the root and click edit you can see why in the query editor.

You’ll notice that the Source query step is in the form of this:

= SharePoint.Tables("https://SharePoint/Site/")

Change this to SharePoint.Files, which will list all the files within the site! Now you can filter the list by file extension or by folder path (using the ‘Begins with’ text filter), for example.

From Nothing to Source Control

Source Control, often called Revision Control or Version Control, is the process of tracking and managing changes to a project. It is essential for any software project, collaborative or otherwise. Below is a non-exhaustive list of the benefits of Version Control:

  • Version Control and Tracking
    • Provides locking and serialized changes to any file
  • Structure
    • Everyone operates in the same way
  • Centralized storage
    • Multiple users can work on the same project simultaneously. When a person makes a change and checks it in, everyone else can get the latest version
  • Backups
    • If you accidentally delete a file you can just undelete it
  • Ability to branch
    • If you want to develop a large feature or make small fixes separately from the main trunk, so as not to disturb or break the main development effort, you can create a branch of the code
  • History
    • You can see who made changes to what and when
  • Roll back
    • If a build is broken you can roll back the changes made

These features are fantastic for large projects but also for small ones, even if you are working by yourself. As well as the above points, there is also capacity to accommodate integrated automated builds and project planning and tracking, for starters.

So how do you go about getting free source control?

Visual Studio Online includes free TFS for up to 5 users! (Eligible MSDN subscribers do not count towards this limit.)

Navigate to http://www.visualstudio.com/


Click on ‘Get started for free’ and you will be redirected to a login screen. This is to create a Visual Studio Online account.

You will then be asked to sign in with your Microsoft account or an organisational account before being asked to fill in some extra details. If you do decide to sign up with an organisational account, it means that users can only sign into the Visual Studio Online account if they are a member of the connected Azure AD or 365 Subscription. If you sign up with a Microsoft account then users are fully managed by the Visual Studio Online account owner (and Administrators) and only Microsoft accounts can sign into it. When you sign up you will need to provide a URL in the form of:

https://<yoururlhere>.visualstudio.com/. This has to be unique. If you choose a URL that is already taken you will be presented with an error message stating that that URL is reserved.


Once you’re all signed up you will be presented with your newly created Visual Studio Online page.

Here you can start creating Projects and adding users.

If you navigate to Users you will see that you have yourself as a user and that on the far right you should have the number of free and total users.


You can add new users by email address and select whether they are a basic or Eligible MSDN Subscriber. You can edit existing users to change their License type. Finally you can delete users.

If you navigate back to the homepage you can create your first project. For each project you can choose between two different version control systems: Team Foundation Version Control and Git.

With Team Foundation Version Control, team members typically have only one version of each file on their dev machines, and historical data is maintained only on the server. Git is a decentralised version control system, where each developer has a copy of the entire source repository on their dev machine. You can learn more about the differences in this extensive page: TFVC or Git

You also have the choice of a process template. You have the choice of 3:

  • Microsoft Visual Studio Scrum 2013.3
  • MSF for Agile Software Development 2013.3
  • MSF for CMMI Process Improvement 2013.3

These define the set of work item types (WITs), queries, and reports that you’ll use to plan and track your project. They are detailed here: Process Templates

Once you have created the project you can open it with Visual Studio. This requires you to have Visual Studio 2013. There is a link on the home page to obtain VS 2013. There are 3 free Express versions to choose from or you can trial Ultimate 2013.


You need to configure the workspace mappings before you can open the solution. This maps the files to a local directory on your machine. Once you have chosen a directory of your choice then you can click ‘Map & Get’ and then you are ready to go. The same applies if you are using Git where you have to specify the directory for the local repository.

In the Solutions section you should see the option ‘New…’ This brings up the familiar New Project dialogue. Once you add one, you can check it in via ‘Pending Changes’, and you’re done!

If you wanted to sign into your online source control starting from Visual Studio on your desktop instead, then open Visual Studio and on the menu choose ‘TEAM’ > ‘Connect to Team Foundation Server…’. The Team Explorer window will appear on the right. Then choose ‘Select Team Projects’ and add the URL you created earlier as a server. You will need to provide credentials.

If you wish to add more projects you can do so via the home page using ‘New’ under the recent projects & teams section. You can always get back to it by clicking the Visual Studio Online link that is in the blue bar at the top left of the page. This can also be done from within the Team Explorer on your desktop VS.