
Class Hierarchies, SOLID Code and Json.NET Serialization Part 2

Back in Part 1 of this series we saw how Json.NET allows us to handle creating derived classes when deserializing a document that contains multiple class type definitions, with the help of our own class derived from the Json.NET CustomCreationConverter class. In this post we complete our discussion of the implementation by covering the deserialization of the classes further, and how we can wire all this up with an Inversion of Control (IoC) container approach to instantiating our classes. If you are unfamiliar with the general ideas behind IoC containers you can read about them here and here. For C# and .NET specifics, the excellent book “Dependency Injection in .NET” by Mark Seemann and Steven van Deursen is available here and is well worth a read.

Picking up where we left off, we now have the ability to return our specified sub-class types for the various Hive data validation rule types we want to implement, with the aid of our JsonDataRuleBaseConverter class. So how do we actually make use of this class within our code, in line with our Dependency Injection and Inversion of Control approach? Json.NET gives us some examples using the IoC container library Autofac, which serves very well for the basic requirement of handing control over object creation specifics to the calling code. There are many other suitable IoC container frameworks out there, but this one serves us well, with the added bonus of having some examples to work from.

IoC Containers

Why Bother?

Fair question, I'm sure. They do add complexity at first and appear quite tricky to the uninitiated. In essence, though, they allow us to avoid the direct instantiation of objects deep within our code, which can be very problematic when it comes to making a change. If you use new(), your code will become brittle. Not good like peanut brittle; bad like crystal-decanter-falling-off-the-mantelpiece brittle.

Pee-ew, new()…

“Why, what's the big deal about new()?” I hear you ask; after all, we've all been happily new()ing away for all this time. Well, using new() is considered a ‘code smell’ if done beyond our Composition Root. Whenever there is a new() call, we are saying “create a concrete class tied to a specific implementation”. At this point we have introduced a dependency within our code on the actual implementation, which may itself be rather volatile. Each time it changes with a new version of the code we will need to recompile, as our dependency is no longer valid. Any changes to the implementation details anywhere in this (potentially deep and involved) dependency graph will ripple up through your code, causing all manner of rework. If we can avoid these new() calls, we can avoid implementation dependencies being littered throughout our code. If we hold only interface references, and leave the actual instantiation of the specific implementation classes that subscribe to these interfaces to our IoC container, we can remove these dependencies between our assemblies from everywhere except where we are using our IoC container. This is a very big deal in medium to large code bases, and even in small projects it will quickly pay dividends when it comes to adaptability, such as maintenance and extension. Using new() in all but the top-level code will lead to your peers looking strangely at you, holding their noses and reaching for the Propeller Hat.
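To make the contrast concrete before we carry on, here is a minimal illustrative sketch. The IMessageSender, SmtpSender and Notifier names are invented for this example and are not part of the validation library.

// Illustrative only: names are made up for this sketch.
public interface IMessageSender { void Send(string message); }

public class SmtpSender : IMessageSender
{
    public void Send(string message) { /* send via SMTP */ }
}

// Brittle: a direct new() call ties this class to the SmtpSender implementation,
// so any change to SmtpSender ripples up into this assembly.
public class BrittleNotifier
{
    private readonly SmtpSender sender = new SmtpSender();
    public void Notify(string text) { sender.Send(text); }
}

// Loosely coupled: only the interface is referenced; the concrete implementation
// is chosen and injected by the IoC container at the Composition Root.
public class Notifier
{
    private readonly IMessageSender sender;
    public Notifier(IMessageSender sender) { this.sender = sender; }
    public void Notify(string text) { sender.Send(text); }
}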
Instead, we use Dependency Injection with interface parameters in our constructor methods to provide the required objects, determining the concrete instance required via our IoC container, thereby leaving the Propeller Hat to gather dust and sticking to our SOLID principle of coding against interfaces and not implementations. Right, convinced? Maybe, maybe not, but if you're still reading I'm guessing you're interested. Okay, a teensy-weensy bit more theory and then back to our code.

Basic IoC Container Framework Concepts

Composition Root

This is basically the entry point for our application, such as the Main() function in a console app or the Application_Start of an ASP.NET application. We want to place our IoC container code as close to this point as possible, so as to provide the required object resolution services to the application from a single, early point. It gives us “a central point at which we do all our connecting of collaborating classes” (to paraphrase the above book). This is going to make maintenance a whole lot easier should things need to change.

Registration and Resolution

These are the two main steps we need to consider when coding up our IoC container. First we map which actual concrete implementations are going to be created whenever we refer to an interface within our code. This is referred to as Registration and is done within a “Container”. Secondly we want to control the actual instantiation of the objects, including their lifetime scope, by “resolving” the contracts that the interfaces provide against the registered implementing classes. These key concepts of Registration and Resolution, together with the third R, Release, are often referred to as the 3 Calls Pattern. You ‘Register’ your mappings, ‘Resolve’ the interfaces referenced in the code into actual concrete classes (typically just those at the top of the dependency graph), and then ‘Release’ the objects when no longer required (or alternatively leave this to the garbage collector). Okay, with that idea served up, on to our first R.

Register

This is actually pretty straightforward in our case. Autofac offers a lot in the way of options here, but for our purposes all we are doing is mapping a single interface to the respective class for each of our IDataRule and IDataValidationRuleSet interfaces. We are going to place this into a Singleton class that we can use from within Json.NET when it comes to deserializing our docs, which I've named AutofacRegister. First we need some fields for our Autofac objects, being a Container and a ContractResolver, which I'll get onto shortly.

/// <summary>
/// Singleton Registration class for Autofac service contracts
/// </summary>
public class AutofacRegister
{
    private static AutofacRegister instance;
    private static AutofacContractResolver contractResolver;
    private static IContainer container;

Containers

All IoC container frameworks have the notion of a Container (bonkers, right?), which is used for holding the registration details and creating instances of the registered classes. Not all classes need be created with the IoC container, and we can still use new() if we so desire, but the important thing to remember is to avoid creating nasty chains of dependencies on actual implementations through new() calls in dependent assemblies beyond our top-level entry point at the Composition Root. Autofac uses the notion of a ContainerBuilder which, not surprisingly, is used for creating a container for us.
For our classes and interfaces, our relatively simple registration code therefore looks like the following.

/// <summary>
/// Registers service contracts for this instance.
/// </summary>
private void Register()
{
    ContainerBuilder builder = new ContainerBuilder();
    builder.RegisterType<DataRuleBase>().As<IDataRule>();
    builder.RegisterType<HiveDataValidationRuleSet>().As<IDataValidationRuleSet>();
    container = builder.Build();
    contractResolver = new AutofacContractResolver(container);
}

So we map our two interfaces to their concrete classes, build a new Container, and then pass this Container to the constructor of a custom class we have created called AutofacContractResolver. This contract resolver will assist with deserializing our JSON doc into the required classes. We expose this contractResolver as a property on the above class, making sure to have called Register() on our Singleton before actually using it, as below.

public AutofacContractResolver ContractResolver
{
    get
    {
        if (contractResolver == null)
            Register();
        return contractResolver;
    }
}

Resolve

Now let's take a quick peek at our custom contract resolver class.

AutofacContractResolver

As shown below, this derives from the Newtonsoft.Json.Serialization.DefaultContractResolver class, and will override some of the standard methods given therein.

/// <summary>
/// Resolves contracts using the Autofac Dependency Injection framework
/// </summary>
/// <seealso cref="Newtonsoft.Json.Serialization.DefaultContractResolver" />
public class AutofacContractResolver : DefaultContractResolver
{
    private readonly IContainer container;

    public AutofacContractResolver(IContainer container)
    {
        this.container = container;
    }

So we pass an IContainer object into our constructor, which we are going to use for our IoC-related shenanigans. Okay, digging in further, we have a private method which is concerned with determining the actual object contract that will apply for our deserialized JSON. Note that this is defined as ‘new’ and as such ‘hides’ the base class method of the same name.

/// <summary>
/// Resolves the contract.
/// </summary>
/// <param name="objectType">Type of the object.</param>
private new JsonObjectContract ResolveContract(Type objectType)
{
    // attempt to create the contract from the resolved type
    IComponentRegistration registration;
    if (container.ComponentRegistry.TryGetRegistration(new TypedService(objectType), out registration))
    {
        Type viewType = (registration.Activator as ReflectionActivator)?.LimitType;
        if (viewType != null)
        {
            return base.CreateObjectContract(viewType);
        }
    }

    // fall back to using the registered type
    return base.CreateObjectContract(objectType);
}

So this checks to see whether we have registered the objectType, and then gets the most specific type that the component type can be cast to, using the .LimitType property. It then gets the respective JsonObjectContract via the base class call to CreateObjectContract(), with either the more specific viewType variable or, in the case of the objectType not being registered in the IoC container, the less specific objectType variable. This JsonObjectContract will be used to determine the deserialization behaviour into the underlying type. So with a feel for what this is doing, let's see where it is being used. The overridden CreateObjectContract method in our class below calls ResolveContract() so as to get a reference to a contract. This is then used to specify a class converter that will be used to deserialize into the various subclasses.
/// <summary>
/// Creates a <see cref="T:Newtonsoft.Json.Serialization.JsonObjectContract" /> for the given type.
/// </summary>
/// <param name="objectType">Type of the object.</param>
/// <returns>A <see cref="T:Newtonsoft.Json.Serialization.JsonObjectContract" /> for the given type.</returns>
protected override JsonObjectContract CreateObjectContract(Type objectType)
{
    // use Autofac to create types that have been registered with it
    if (container.IsRegistered(objectType))
    {
        JsonObjectContract contract = ResolveContract(objectType);

        // set the required class converter to allow deserialising the various subclasses
        switch (objectType.FullName.ToUpper())
        {
            case "DATAVALIDATION.IDATARULE":
                contract.Converter = new JsonDataRuleBaseConverter();
                break;
            default:
                break;
        }

        contract.DefaultCreator = () => container.Resolve(objectType);
        return contract;
    }

    return base.CreateObjectContract(objectType);
}

In the case of our objectType being the DataValidation.IDataRule interface, we use the JsonDataRuleBaseConverter class we saw in Part 1, which contains the required logic to handle deserializing into one of the Hive data validation rule derived classes. For all other classes we don't need to handle these tricky multiple subclass types (we have only one derived class for IDataValidationRuleSet, for example, being the HiveDataValidationRuleSet) and so we don't need a Converter attached to the contract. For any objects that are registered in our IoC container, we then call the container.Resolve() method to attach a DefaultCreator function reference to the contract that will be used as the method for creating our object. For non-registered object types we simply return a contract created by calling the base DefaultContractResolver CreateObjectContract() method (no frilly class handling logic required). So quite an undertaking to be able to resolve an IDataRule to one of the Hive data validation rule derived classes.

It should be noted that we should really be using what Autofac terms “Lifetime Scopes” rather than resolving directly from the Container itself. As their name suggests, these govern the lifetime of objects and are defined over the scope for which the object lifetime is required. This results in more control over the disposal of the object. You can read about using Lifetime Scopes when resolving your objects in the Autofac documentation here.

Release

Once we're done with our objects, they should be released so as to reclaim memory effectively. With Autofac we use the aforementioned Lifetime Scope to define the code over which the object exists, with a “using” block. I won't go into details here as it is pretty straightforward, but you can find out more in the Autofac documentation here and here. In our case we are only using the container for a single short-lived method call in our calling code, and as such this is not a concern.
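For reference, resolving through a lifetime scope rather than the container would look something like the sketch below. This is not part of the original implementation, just the standard Autofac pattern applied to our container field.

// Sketch only: resolving via a lifetime scope (requires "using Autofac;").
using (ILifetimeScope scope = container.BeginLifetimeScope())
{
    IDataRule rule = scope.Resolve<IDataRule>();
    // ...use the rule; it is tracked by the scope and disposed when the using block ends
}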
Now that's all very fine and dandy, but how do we actually make any use of all this?

Deserializing with JsonSerializerSettings

You may remember from Part 1 that we used our JsonDataRuleBaseConverter directly in the call to JsonConvert.DeserializeObject(), as below:

IDataRule dataRule = JsonConvert.DeserializeObject<IDataRule>(JsonIn, new JsonDataRuleBaseConverter());

Well, there is another overload of this method that allows us to pass in a JsonSerializerSettings object, which can contain a ContractResolver. This allows us to use our Autofac IoC container and our JsonDataRuleBaseConverter to handle the interface-to-concrete-class mappings as well as the derived class handling for each of our Hive data validation rule types. This is achieved with the following.

HiveDataTypeDataRule actual = (HiveDataTypeDataRule)JsonConvert.DeserializeObject<IDataRule>(
    JsonIn,
    new JsonSerializerSettings { ContractResolver = AutofacRegister.Instance.ContractResolver });

This effectively wires in our container.Resolve(objectType) call above whenever we need an object created during deserialization, thereby providing the deserialization process with the specifics for our class hierarchy.

So there you have it: all the benefits of IoC containers, interfaces and inheritance for our class libraries, deserialized from JSON documents. Pretty powerful stuff, courtesy of Newtonsoft and Autofac.

Class Hierarchies, SOLID Code and Json.NET Serialization Part 1

This is the first in a two-part series on using JSON with class hierarchies and interfaces whilst adhering to the OO principles of SOLID coding. It turns out that deserializing JSON documents into these more involved class models is not as straightforward as we would like, and requires some knowledge of how the JSON library will behave when presented with these structures.

As part of a Data Quality framework implementation, we needed to persist data validation rules in JSON format, to be deserialized into a set of classes that implement the required checks and alerting for the data. Following the Dependency Injection principle, we code against an interface rather than an implementation. So there are a number of interfaces for the various classes involved, and each of our implementing classes will subscribe to one or more of these interfaces. This allows easily extensible and maintainable code, with a reduced possibility of breaking client classes that use our validation library.

Right, so that's the general idea. In accordance with this, the interface hierarchy below was developed, allowing an inheritance as well as composition (decorators etc.) approach to structuring the code reuse. Each data validation rule type inherits from a base interface which contains some common functionality. Each validation rule has a name, properties for validation rule execution row counts, a rule type, and a threshold defined via the IDataRuleThreshold interface, which consists of a number of threshold types (Pass, Warn, Fail etc.) and accompanying threshold limit values to determine whether the returned row counts constitute a failure, warning and so on. All of these sit within the base interface IDataRule and are inherited by all derived interfaces. The various child interfaces for the rule types diverge somewhat, however, containing specifics required for exercising the different data validations. For example, the IDistinctDataRule rule type has a TargetField property but does not require a DataType property, whereas the DataType rule type does, as the latter needs the desired data type to check against as part of the rule conditions. Hence the need for the different interface definitions.

Each of the data validation rules is contained within an IDataValidationRuleSet, consisting of one or more rules. All rules target a specific object specified by the SourceObjectName property.

This provides a relatively straightforward class hierarchy with which we can define and action our data validation rules. The actual implementation of the rules is intended to be target-platform specific, allowing rules to be implemented on different database platforms via specific assemblies that inherit from the base rule classes. For the project in question, we were targeting Hive for the data validation, and as such the following implementation hierarchy was developed.
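The original post presented these hierarchies as class diagrams, which aren't reproduced here. As a rough textual stand-in, the shape is something like the sketch below; member names that aren't mentioned in the text (such as the threshold limit list and the enum values) are assumptions for illustration only.

using System.Collections.Generic;

// Rule and data types referred to in the text (enum values illustrative).
public enum DataRuleType { DataType, Distinct, Format, NullValue, Range }
public enum DataType { String, Int, Decimal, Date }

public interface IThresholdLimit { /* threshold type (Pass, Warn, Fail...) and limit value */ }

public interface IDataRuleThreshold
{
    IList<IThresholdLimit> ThresholdLimits { get; }
}

// Base interface: common to every rule type.
public interface IDataRule
{
    string Name { get; }
    DataRuleType RuleType { get; }
    IDataRuleThreshold Threshold { get; }
}

// Child interfaces add rule-specific members.
public interface IDataTypeDataRule : IDataRule
{
    string TargetField { get; }
    DataType DataType { get; }   // the data type to check against
}

public interface IDistinctDataRule : IDataRule
{
    string TargetField { get; }  // no DataType needed for a distinct check
}

// A rule set targets a single source object and holds any mix of rules.
public interface IDataValidationRuleSet
{
    string Name { get; }
    string SourceObjectName { get; }
    List<IDataRule> DataRules { get; }
}

// Implementation side: abstract bases (DataRuleBase, DataValidationRuleSetBase)
// implement the interfaces, and the Hive-specific classes (HiveDataTypeDataRule,
// HiveDistinctDataRule, HiveFormatDataRule, HiveNullValueDataRule,
// HiveRangeDataRule, HiveDataValidationRuleSet) derive from those bases.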
Loosely Coupling with Dependency Injection

Okay, so that's the class hierarchy. Remember that we are following good OO principles of coding our objects so that they refer to interfaces, as seen with the DataRuleBase class, which contains a Threshold property of type IDataRuleThreshold and a ThresholdLimitsExceeded property that is a list of IThresholdLimit types. The IDataRuleThreshold interface object is passed in via a constructor argument, thereby allowing us to decide at construction time the actual implementation of this interface that we want.

Within our implementation for Hive, the HiveDataTypeDataRule class, for example, also uses interfaces for the above properties and constructor arguments. As with DataRuleBase, we can keep things loosely coupled with respect to the Threshold property until construction time. The constructor parameters include the interface-typed threshold, and the constructor calls the base constructor (as well as setting a couple of properties specific to the derived class).

[JsonConstructor]
public HiveDataTypeDataRule(string name, DataRuleType ruleType, IDataRuleThreshold threshold, string targetField, DataType dataType)
    : base(name, ruleType, threshold)
{
    TargetField = targetField;
    // set via the property so as to use the respective JsonConverter
    DataType = dataType;
}

We can then derive classes from our abstract DataRuleBase, which allows us to use any methods defined within the base class (such as the Equals method, which will come in handy when writing tests). The DataRuleBase class is the starting point for deriving classes that could provide data validation for a variety of platforms, such as MS SQL, Oracle or, in our case, Hive.

The DataValidationRuleSetBase contains a list of IDataRule (not DataRuleBase, or anything more specific such as HiveDataTypeDataRule, which would be too tightly coupled to be of any use), thereby allowing us to use any class that implements the specified interface. We could develop any class that implements IDataRule and our DataValidationRuleSetBase (or any classes deriving from it or implementing IDataValidationRuleSet) will be able to work with it. Our HiveDataValidationRuleSet derives from DataValidationRuleSetBase, and its DataRules property can therefore contain any of our Hive data rule types, as they all derive from DataRuleBase, which implements IDataRule. Again, following standard Dependency Injection practice, we can pass in the referenced objects that implement the IDataRule interface via a constructor, thereby removing any explicit reference to implementations within our HiveDataValidationRuleSet class, as below.

[JsonConstructor]
public HiveDataValidationRuleSet(string name, string sourceObjectName, List<IDataRule> dataRules)
    : base(name, sourceObjectName, dataRules)
{
}

The constructor simply calls the base class constructor with the same constructor arguments, so there is nothing tightly coupled here to worry about.

Well, that's all rather wonderful, albeit pretty standard OO goodies, but how do we go about deserializing a JSON document into the required classes for actually exercising the data validation rules?

Newtonsoft Json.NET Library

As with pretty much anything JSON-related in .NET, the library to use is Newtonsoft Json.NET. This is a fantastic library for all things JSON. It has very good examples and API reference material, and appears to have covered pretty much every eventuality for coding against JSON in .NET. From custom constructors to Dependency Injection and IoC container considerations, this really is an amazing piece of work. You can find out more here.

Deserializing Into Virtual Objects?

As stated loud and clear in the Newtonsoft Json.NET documentation, it is simply not possible to deserialize a JSON document into a non-concrete (i.e. abstract or interface) target, as these cannot in themselves be instantiated.
If you try, you'll get an error similar to the following:

“Could not create an instance of type JsonSerialization.IDataTypeDataRule. Type is an interface or abstract class and cannot be instantiated.”

So, if we want to deserialize a JSON document that contains a HiveDataValidationRuleSet, by default it will try to parse the JSON into objects of type IDataRule. Fur balls all over the place. Not going to work. So what now? We must therefore use an approach to deserializing our classes that allows us to specify the resultant target object type. Don't worry, Newtonsoft have got this one covered (as with everything else).

CustomCreationConverter

As the name suggests, this class allows creation of a class using some predefined conditional logic. We need to deserialize our JSON into a DataValidationRuleSet that contains a variety of different data rule types that will be executed against the source object. We can do this using the following class deriving from the CustomCreationConverter. Notice how it uses a generic type parameter for the base class:

public class JsonDataRuleBaseConverter : CustomCreationConverter<DataRuleBase>
{

So this will allow us to deal with the conversion from the DataRuleBase abstract class to a concrete class such as a DataTypeDataRule. How so? Well, we've made things a little easier for ourselves with that RuleType property we mentioned earlier. This allows us to choose a class type to deserialize to, based on its value. DataRuleType is an enum of the various class types we will be allowing in our JSON. The CustomCreationConverter class we're deriving our JsonDataRuleBaseConverter from has a Create method that we'll override in order to give us the required data rule class.
/// <summary>
/// Creates the specified object subtype from the RuleType property.
/// </summary>
/// <param name="objectType">Type of the object.</param>
/// <param name="jObject">The jObject.</param>
public DataRuleBase Create(Type objectType, JObject jObject)
{
    DataRuleType ruleType = jObject["RuleType"].ToObject<DataRuleType>();
    string name = (string)jObject.Property("Name");
    DataRuleThreshold threshold = jObject["Threshold"].ToObject<DataRuleThreshold>();
    string targetField;

    switch (ruleType)
    {
        case DataRuleType.DataType:
            targetField = (string)jObject["TargetField"];
            DataType hiveDataType = jObject["DataType"].ToObject<DataType>();
            HiveDataTypeDataRule dtdr = new HiveDataTypeDataRule(name, ruleType, threshold, targetField, hiveDataType);
            return dtdr;
        case DataRuleType.Distinct:
            targetField = (string)jObject["TargetField"];
            HiveDistinctDataRule ddr = new HiveDistinctDataRule(name, ruleType, threshold, targetField);
            return ddr;
        case DataRuleType.Format:
            targetField = (string)jObject["TargetField"];
            string formatPattern = (string)jObject["FormatPattern"];
            HiveFormatDataRule fdr = new HiveFormatDataRule(name, ruleType, threshold, targetField, formatPattern);
            return fdr;
        case DataRuleType.NullValue:
            targetField = (string)jObject["TargetField"];
            HiveNullValueDataRule nvdr = new HiveNullValueDataRule(name, ruleType, threshold, targetField);
            return nvdr;
        case DataRuleType.Range:
            targetField = (string)jObject["TargetField"];
            string rangeStart = (string)jObject["RangeStart"];
            string rangeEnd = (string)jObject["RangeEnd"];
            HiveRangeDataRule rdr = new HiveRangeDataRule(name, ruleType, threshold, targetField, rangeStart, rangeEnd);
            return rdr;
        default:
            return null;
    }
}

The ReadJson method provided is then overridden with the following code, which simply passes our JSON object to the Create method for the actual instantiation of the required derived Hive data validation rule class, as below:

/// <summary>
/// Reads the JSON representation of the object.
/// </summary>
/// <param name="reader">The <see cref="T:Newtonsoft.Json.JsonReader" /> to read from.</param>
/// <param name="objectType">Type of the object.</param>
/// <param name="existingValue">The existing value of object being read.</param>
/// <param name="serializer">The calling serializer.</param>
/// <returns>The object value.</returns>
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
    if (reader.TokenType == JsonToken.StartObject)
    {
        // Load JObject from stream
        JObject jObject = JObject.Load(reader);

        // Create target object based on JObject
        var target = Create(objectType, jObject);
        return target;
    }
    else
        return null;
}

There are other ways of indicating the type within the JSON being deserialized, such as with the “$type” JSON property and the TypeNameHandling setting described here, with an example here. These can be used to specify the class type to use directly, with a value that is the fully qualified class name, but this can start to make the JSON rather cluttered and difficult to change should you decide to do things differently from within your class library.

We can then create a specific type of data rule by passing in our JSON document and using our JsonDataRuleBaseConverter, as below.

IDataRule dataRule = JsonConvert.DeserializeObject<IDataRule>(JsonIn, new JsonDataRuleBaseConverter());

Our underlying type will be that of the actual derived class, created based on the RuleType property specified in the JSON document, even though we have only coded against the IDataRule interface here and have not needed to specify a concrete implementation class.
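The posts don't include a sample document, but for illustration a JsonIn document for a single rule might be shaped like the following. The property names follow the Create() method above; the values are invented, and the Threshold body (which depends on the DataRuleThreshold class) is omitted.

{
  "Name": "Customer Id data type check",
  "RuleType": "DataType",
  "TargetField": "CustomerId",
  "DataType": "Int",
  "Threshold": { }
}

Running a document like this through the converter would hand back a HiveDataTypeDataRule behind the IDataRule reference, because the RuleType value is "DataType".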
Up Next…

In the next post in the series I'll explain how we wired all this up so that we can go about deserializing our documents using the JsonDataRuleBaseConverter that is derived from the Json.NET CustomCreationConverter. We'll also see how we can use Dependency Injection and Inversion of Control (IoC) containers to create our objects. Okay, that's it for now. Tune in next time for another thrilling instalment of Class Hierarchies, SOLID Code and Json.NET Serialization.

Using Lookup, Execute Pipeline and For Each Activity in Azure Data Factory V2

In my previous blog I looked at how we can utilise pipeline parameters to variablise certain aspects of a dataset and avoid duplication of work. In this blog I will take that one step further and use parameters to feed into a For Each activity that can iterate over a data object and perform an action for each item. This blog assumes some prior knowledge of Azure Data Factory, and it won't hurt to read my previous blog.

Previously I showed how to use a parameter file to copy a single table from Azure SQL DB down into a blob. Now let's use the For Each activity to fetch every table in the database from a single pipeline run. The benefit of doing this is that we don't have to create any more linked services or datasets; we are only going to create one more pipeline that will contain the loop.

The first activity to note is the Lookup activity. This can go off and fetch a value from either SQL or JSON based sources and then incorporate that value into activities further down the chain. Here we are using SQL, supplying a SQL query that fetches the schema and table names from our database. One “gotcha” is that even though you supply a SQL query, you still need to provide a dummy table name in the SQL dataset: the query is used at run time, but deployment won't pass without a table name. Also note that at this point we do nothing with the returned value.

Next, we have the Execute Pipeline activity, which can accept input parameters and pass those down into the executed pipelines (or child pipelines). Within the type properties we can specify the parameters we want to pass in. The names here need to match whatever parameters we specify in the child pipeline, but for the “value” we can make use of the new expression language to get hold of the output of the previous Lookup activity. We then reference the pipeline we want to execute, and specify that we need to wait for it to complete before continuing with our parent pipeline. Finally, we use the “dependsOn” attribute to ensure that our Execute Pipeline activity occurs AFTER our Lookup has completed successfully. At this point we have told the child pipeline which tables to copy and then told it to start.
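The original post showed the pipeline JSON as screenshots, and the full definitions are available from the link at the end of the post. As a rough, abbreviated sketch of the child pipeline (names invented, only the key properties shown, and the lookup's column names assumed here to be TABLE_SCHEMA and TABLE_NAME), it would look something like this:

{
    "name": "ChildCopyPipeline",
    "properties": {
        "parameters": {
            "tableList": { "type": "Array" }
        },
        "activities": [
            {
                "name": "CopyEachTable",
                "type": "ForEach",
                "typeProperties": {
                    "isSequential": false,
                    "items": {
                        "value": "@pipeline().parameters.tableList",
                        "type": "Expression"
                    },
                    "activities": [
                        {
                            "name": "CopyTableToBlob",
                            "type": "Copy",
                            "inputs": [
                                { "referenceName": "SqlSourceDataset", "type": "DatasetReference" }
                            ],
                            "outputs": [
                                {
                                    "referenceName": "BlobOutputDataset",
                                    "type": "DatasetReference",
                                    "parameters": {
                                        "tableName": "@{item().TABLE_SCHEMA}_@{item().TABLE_NAME}"
                                    }
                                }
                            ],
                            "typeProperties": {
                                "source": {
                                    "type": "SqlSource",
                                    "sqlReaderQuery": "SELECT * FROM [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]"
                                },
                                "sink": { "type": "BlobSink" }
                            }
                        }
                    ]
                }
            }
        ]
    }
}

In the parent pipeline, the Execute Pipeline activity's “parameters” block would feed this with an expression along the lines of “@activity('LookupTableNames').output.value” (assuming the Lookup is configured to return the full row set rather than just the first row).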
Our child pipeline now just needs to iterate over that list and produce our output files. To do this it needs only one activity, the For Each. The For Each really has two components: the outer configurables (such as the items to iterate over) and the inner activity to perform on each item. In the outer section we configure the “isSequential” property which, when set to “false”, allows Data Factory to parallelise the inner activity; otherwise it will run each iteration one after another. The other property is “items”, which is what the activity will iterate through. Because we fed the table list into the “tableList” parameter from the Execute Pipeline activity, we can specify that parameter as our list of items.

Now for the inner activity. Whilst this is a fairly chunky bit of JSON, those familiar with the copy activity in ADF V1 will probably feel pretty comfortable with it. The key difference is that we are again making use of expressions and parameters to make our template generic. In the “outputs” attribute we dynamically specify the output blob name using the schema and table name properties gleaned from our input dataset. Also, in the source attribute we dynamically build our SQL query to select all the data from the table that we are currently on, using the @item() property. This method of combining text and parameters is called string interpolation, and it allows us to easily mix static and dynamic content without the need for additional functions or syntax.

That's it! By making use of only a few extra activities we can really easily do tasks that would have taken much longer in previous versions of ADF. You can find the full collection of JSON objects using this link: http://bit.ly/2zwZFLB. Watch this space for the next blog, which will look at custom logging of Data Factory activities!

Pipeline Parameters in Azure Data Factory V2

The second release of Azure Data Factory (ADF) includes several new features that vastly improve the quality of the service. One of these is the ability to pass parameters down the pipeline into datasets. In this blog I will show how we can use parameters to manipulate a generic pipeline structure in order to copy a SQL table into a blob. Whilst this is a new feature for ADF, this blog assumes some prior knowledge of ADF (V1 or V2).

Creating the Data Factory V2

One other major difference with ADF V2 is that we no longer need to chain datasets and pipelines to create a working solution. Whilst the concept of dependency still exists, it is no longer needed to run a pipeline; we can now just run them ad hoc. This is important for this demo because we want to run the job once, check it succeeds and then move on to the next table, instead of managing any data slices. You can create a new V2 data factory either from the portal or using this command in PowerShell:

$df = Set-AzureRmDataFactoryV2 -ResourceGroupName <resource group> -Location <location> -Name <data factory name>

If you are familiar with the PowerShell cmdlets for ADF V1 then you can make use of nearly all of them in V2 by appending “V2” to the end of the cmdlet name.

ADF V2 Object Templates

Now we have a Data Factory to work with, we can start deploying objects. In order to make use of parameters we first need to specify how we will receive the value of each parameter. Currently there are two ways:

1. Via a parameter file specified when you invoke the pipeline
2. Using a lookup activity to obtain a value and pass that into a parameter

This blog will focus on the simpler method using a parameter file; later blogs will demonstrate the use of the lookup activity. When we invoke our pipeline I will show how to reference that file, but for now we know we are working with a single parameter called “tableName”.

Now we can move on to our pipeline definition, which is where the parameter values are initially received. To do this we need to add an attribute called “parameters” to the definition file that will contain all the parameters used within the pipeline. The same concept needs to be carried through to the dataset that we want to feed the parameters into: within the dataset definition we need the parameter attribute specified in the same way as in the pipeline.

Now that we have declared the parameters on the necessary objects, I can show you how to pass data into a parameter. As mentioned before, the pipeline parameter will be populated by the parameter file; the dataset parameter, however, needs to be populated from within the pipeline. Instead of simply referring to a dataset by name as in ADF V1, we now need the ability to supply more data, and so the “inputs” and “outputs” section of our pipeline takes the shape sketched below. Firstly, we declare the reference type, hence “DatasetReference”. We then give the reference name (this could itself be parameterised if needed). Finally, for each parameter in our dataset (in this case there is only one, called “tableName”) we supply the corresponding value from the pipeline's parameter set. We can get the value of the pipeline parameter using the “@pipeline().parameters.<parameter name>” syntax. At this point we have received the values from a file and passed them through the pipeline into a dataset, and now it is time to use that value to manipulate the behaviour of the dataset (both definitions are sketched below).
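The original post showed these definitions as screenshots, and the full JSON is available from the link at the end of the post. A rough, abbreviated sketch (names invented; only the parameter-related parts shown) of the pipeline definition would be:

{
    "name": "CopyTablePipeline",
    "properties": {
        "parameters": {
            "tableName": { "type": "String" }
        },
        "activities": [
            {
                "name": "CopyTableToBlob",
                "type": "Copy",
                "inputs": [
                    {
                        "referenceName": "SqlSourceDataset",
                        "type": "DatasetReference",
                        "parameters": { "tableName": "@pipeline().parameters.tableName" }
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "BlobOutputDataset",
                        "type": "DatasetReference",
                        "parameters": { "tableName": "@pipeline().parameters.tableName" }
                    }
                ],
                "typeProperties": {
                    "source": { "type": "SqlSource" },
                    "sink": { "type": "BlobSink" }
                }
            }
        ]
    }
}

and the blob dataset that receives the parameter would look something like:

{
    "name": "BlobOutputDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": { "referenceName": "BlobStorageLinkedService", "type": "LinkedServiceReference" },
        "parameters": {
            "tableName": { "type": "String" }
        },
        "typeProperties": {
            "folderPath": "output",
            "fileName": "@{dataset().tableName}.txt",
            "format": { "type": "TextFormat" }
        }
    }
}

Note that the fileName attribute reads the dataset's own parameter via the @dataset() expression, which is what the next paragraph relies on.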
Because we are using the parameter to define our file name, we can use its value as part of the “fileName” attribute, as in the dataset sketch above. We now have the ability to input a table name and our pipeline will fetch that table from the database and copy it into a blob. The end result is a complete working pipeline that is totally generic, meaning we can change the parameters we feed in but should never have to change the JSON definition files. A pipeline such as this could have many uses, but in the next blog I will show how we can use a ForEach loop (another ADF V2 feature) to copy every table from a database, still using only a single pipeline and some parameters.

P.S. Use this link to see the entire JSON script used to create all the objects required for this blog: http://bit.ly/2zwZFLB