Cameron McLurcan's Blog

Deploying Scheduled Triggers In Data Factory v2 With PowerShell

In ADFv2, scheduled triggers have been overhauled to become their own deployable components, each able to kick off multiple parameterised pipelines together.

This new version of the trigger is far more modular and flexible than the functionality in v1. However, there are a couple of catches and easy-to-miss steps during implementation if you choose to use PowerShell rather than the brand new visual authoring tool, so I’ll break down the process below.

First, you need to build your trigger as a separate JSON object to the pipeline; no additional code is needed in the pipeline itself. There is a more complete example in the documentation Microsoft has provided, but on top of that here is a working example I’ve written:

{
  "name": "ST_OrchestratorTrigger",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2017-12-20T04:00:00-05:00",
        "endTime": "2099-11-02T22:00:00-08:00",
        "schedule": {
          "hours": [ 4 ],
          "minutes": [ 0 ]
        }
      }
    },
    "pipelines": [
      {
        "pipelineReference": {
          "type": "PipelineReference",
          "referenceName": "PL_MasterOrchestration"
        },
        "parameters": {}
      }
    ]
  }
}

As you might guess, this trigger runs the PL_MasterOrchestration pipeline (with no input parameters) at 4am each day.

A caveat on the recurrence: it seems that, after its first run, the trigger doesn’t strictly adhere to the “startTime”, so despite starting at 4am, subsequent runs could be minutes or even hours off on following days.

To ensure it always triggers at exactly 4am, you need to add the “schedule” recurrence attribute as shown above.
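As an illustration of how the “schedule” attribute works (this is a hypothetical variation, not part of the trigger above), a block like the following would fire at both 6am and 6pm each day:

```json
"schedule": {
  "hours": [ 6, 18 ],
  "minutes": [ 0 ]
}
```

Without an explicit “schedule”, only the “frequency” and “interval” govern when the trigger fires, which is where the drift described above comes in.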

Deployment of the trigger (as of writing this blog) can still only be done through PowerShell, using the cmdlet below:

Set-AzureRmDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
                -DataFactoryName $DataFactoryName `
                -DefinitionFile $TriggerFile `
                -Name $TriggerName
Each of these parameters is exactly what you would expect from the v1 cmdlets for setting other objects: -DefinitionFile expects the path to the .json file itself, -Force shouldn’t strictly be necessary but can be worth including depending on the exact circumstances, and so on.

So, at this point you’ve created the trigger and deployed it to your data factory. Scheduled triggers in v2 need to be started once they are deployed. You can do this with the following:

Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
                -DataFactoryName $DataFactoryName `
                -Name $TriggerName

Just another quick note here: you will need to stop the trigger again if you want to redeploy it. Trying to overwrite a trigger while it’s running will just result in an error message.
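A typical redeployment therefore ends up as a stop/set/start sequence. As a sketch (assuming the same variables as the snippets above and an authenticated AzureRM session):

```powershell
# Stop the running trigger before overwriting it
Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
                -DataFactoryName $DataFactoryName `
                -Name $TriggerName

# Redeploy the updated definition (-Force suppresses the confirmation prompt)
Set-AzureRmDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
                -DataFactoryName $DataFactoryName `
                -DefinitionFile $TriggerFile `
                -Name $TriggerName `
                -Force

# Start it again
Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
                -DataFactoryName $DataFactoryName `
                -Name $TriggerName
```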

As well as finding the necessary cmdlets about two-thirds of the way down the Microsoft document linked above, there is a list of all the relevant cmdlets found here.

Hope this helps!


Stretch Databases: The Basics

What Is It?

A stretch database is a database where the rarely used portions of the data are stored in the cloud. Introduced in SQL Server 2016 alongside Azure, it allows seamless migration of data between the cloud and your local server databases.

Consider data as being warm or cold: warm data is accessed often, cold data is not. For all intents and purposes, cold data is wasting space on the local server for most of the time it’s there. The feature was created with historical data and other similar types of data in mind: data that is not accessed regularly, but still needs to be available on occasion.


A stretch enabled database can have tables that query as if they were local, even though some of their rows are stored in Azure blob storage. The only noticeable difference to the user is that some queries take longer to process.
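As a sketch of what enabling this looks like (the database, table, server, and credential names below are all hypothetical), stretch is switched on at the instance, database, and then table level:

```sql
-- Instance level: allow stretch (run once per SQL Server instance)
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Database level: link the database to Azure
ALTER DATABASE MyDatabase
    SET REMOTE_DATA_ARCHIVE = ON (
        SERVER = 'mystretchserver.database.windows.net',
        CREDENTIAL = [MyAzureCredential]
    );

-- Table level: migrate all rows of this table to Azure
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```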

There is a fair amount of control over data migration. Functions can be created to stretch only specific rows, based on custom filter predicates. Migration can be monitored and troubleshot, or simply paused and resumed at will.
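For example, a filter function that stretches only rows from before 2016 might look like the following (function, table, and column names are hypothetical; the predicate must be an inline table-valued function with schemabinding, and rows for which it returns a result are eligible for migration):

```sql
CREATE FUNCTION dbo.fn_stretch_before_2016 (@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < CONVERT(datetime, '20160101', 112);
GO

-- Attach the predicate to the table and start migrating matching rows
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretch_before_2016(OrderDate),
        MIGRATION_STATE = OUTBOUND
    ));
```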

Once the data is in the cloud it is backed up automatically, with Azure holding storage snapshots from the last seven days for easy recovery by the user. Any backup of a stretch enabled database actually takes only the local data – migrated data can only be restored using the Azure storage snapshots.


Requirements And Prerequisites

The main requirement is having an Azure account with a billable subscription already set up. Blob storage is set up during the stretch database setup, so nothing further is needed on the Azure side.

The Stretch Database Advisor, included in the separately downloaded Data Migration Assistant, can indicate which tables are eligible for stretching to Azure (I’m planning on going through this in a future blog post). It does not, however, say what incompatibilities the remaining tables have.


Any table that is not shown as compatible must be manually examined, to see what incompatibilities there are. I have included a full list of incompatible features, column types, and data types below.


Limitations And Incompatibilities

As of writing this, there are still a large number of index, constraint, and data type limitations on tables that use stretch databases, according to the MSDN website. Data management operations are also limited: you cannot update or delete rows that are marked for migration, or that have already been migrated. These rows must be migrated back to the local database before they can be altered.
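Pausing migration, or bringing migrated rows back so they can be altered, uses the same ALTER TABLE syntax shown in the documentation (table name hypothetical):

```sql
-- Pause outbound migration
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = PAUSED));

-- Copy migrated rows back to the local database
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = INBOUND));
```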

Table Properties

· Memory optimised tables cannot become stretch enabled.

· Replicated tables cannot become stretch enabled.

· Tables using change tracking or change data capture cannot become stretch enabled.

Column And Data Types

· A table cannot be stretch enabled if it has more than 1,023 columns.

· FileTables and Filestream data are incompatible.

· Column set and computed columns are incompatible.

· Text, ntext, image, timestamp, sql_variant, XML and CLR data types are all incompatible.


Constraints

· Uniqueness is not enforced on migrated Unique and Primary Key constraints.

· Default and Check constraints are incompatible.

· Foreign Key constraints cannot reference a stretch enabled table.


Indexes

· Indexed views are incompatible.

· Full-text, XML, and spatial indexes are incompatible.

· A table cannot be stretch enabled if it has more than 998 indexes.

· Filtered indexes do not include migrated data on stretch enabled tables.


Further Reading

The MSDN website has the full description of stretch databases.

SQL Data Partners have also done an interview with Anthony van Gemert, one of the product managers for the stretch feature.