Azure Data Factory Tools

This page is dedicated to one topic only: publishing your ADF from code to the service in Azure. Easily!

It is an alternative approach to the ARM templates generated in the ‘adf_publish’ branch.
Many companies already leverage this approach, and it works great.

PowerShell module (open-source and free!)

Back in June 2020, I decided to write a brand new PowerShell module to help all of us publish the entire Azure Data Factory code from a (master) branch or directly from a local machine. The module resolves all the pain points that existed so far in other solutions, including (a minimal usage sketch follows this list):

  • replacing any property in a JSON file (ADF object),
  • deploying objects in the appropriate order,
  • deploying only a selected subset of objects,
  • deleting objects that no longer exist in the source,
  • stopping/starting triggers, and more.
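For illustration, here is a minimal sketch of publishing an entire factory from a local machine. It assumes the module’s Publish-AdfV2FromJson cmdlet; the folder path, resource group, factory name and region are placeholders for your own environment:

```powershell
# One-off: install the module from the PowerShell Gallery.
Install-Module -Name azure.datafactory.tools -Scope CurrentUser

# Sign in first if the session has no Azure context yet:
# Connect-AzAccount

# Publish the whole ADF code from a local folder (the structure
# created by ADF Git integration) to the target factory in Azure.
# All names below are placeholders.
Publish-AdfV2FromJson `
    -RootFolder 'C:\src\MyAdf' `
    -ResourceGroupName 'rg-dataplatform-dev' `
    -DataFactoryName 'adf-myproject-dev' `
    -Location 'northeurope'
```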

The module is publicly available in the PowerShell Gallery: azure.datafactory.tools
Source code and full documentation are on GitHub here.

Azure DevOps extension (open-source and free!)

On top of that, I built an extension for Azure DevOps which uses the above PS module to deploy an entire ADF seamlessly.
This approach minimizes the effort and reduces the need for PowerShell skills to zero. It is a very easy and comfortable way to deploy ADF from code with an Azure DevOps pipeline.

The task (extension) is publicly available in the Visual Studio Marketplace: Deploy Azure Data Factory by SQLPlayer
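If you would rather script the same deployment yourself, here is a hedged YAML sketch using the standard AzurePowerShell@5 task instead of the extension; the service connection name, folder and resource names are placeholders, and the Marketplace task above replaces this inline script with configurable task inputs:

```yaml
# Sketch of a deployment step in an Azure DevOps YAML pipeline.
# 'my-service-connection' and all resource names are placeholders.
steps:
- task: AzurePowerShell@5
  displayName: 'Publish ADF from code'
  inputs:
    azureSubscription: 'my-service-connection'
    azurePowerShellVersion: 'LatestVersion'
    ScriptType: 'InlineScript'
    Inline: |
      Install-Module -Name azure.datafactory.tools -Force -Scope CurrentUser
      Publish-AdfV2FromJson `
        -RootFolder "$(Build.SourcesDirectory)/adf" `
        -ResourceGroupName 'rg-dataplatform-uat' `
        -DataFactoryName 'adf-myproject-uat' `
        -Location 'northeurope' `
        -Stage 'uat'
```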

Knowledge Hub

I thought it might be a good idea to gather all the questions you have asked on my YouTube channel or in blog comments and build a kind of knowledge hub. Here you are.

Video

🎬 Publish from code with one task in DevOps Pipeline (1/2)

🎬 Publish from code with one task in DevOps Pipeline (2/2)

Please don’t forget to subscribe and/or give the video a thumbs-up. Thanks in advance!

Q&A

How should one handle triggers in this case? Let’s say we move everything but the triggers to UAT/PRD, where those environments have their own triggers which should not be disturbed when moving objects from DEV.
In such a case, you must use selective deployment, which means you determine which objects should be published. The DevOps extension and PS module I created a few months ago allow you to do that: you can define a different set of objects to be deployed per environment. That’s not a problem.
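For example, here is a hedged sketch of a selective deployment that leaves the target environment’s triggers untouched. It assumes the New-AdfPublishOption helper and its Excludes and DeleteNotInSource options as described in the module’s documentation; all names and patterns are illustrative:

```powershell
# Build a publish options object (patterns are illustrative).
$opt = New-AdfPublishOption
$opt.Excludes.Add('trigger.*', '')   # skip all triggers - UAT/PRD keep their own
$opt.DeleteNotInSource = $false      # keep objects that exist only in the target

Publish-AdfV2FromJson `
    -RootFolder 'C:\src\MyAdf' `
    -ResourceGroupName 'rg-dataplatform-uat' `
    -DataFactoryName 'adf-myproject-uat' `
    -Location 'northeurope' `
    -Option $opt
```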

Just want to confirm: do we have to keep separate CSV files (e.g. config-UAT.csv) for each stage in the deployment folder? Let’s say I have two stages (pre-prod and prod) in my DevOps project. How can I manage both stages with a single CSV file in the CI/CD environment?
No, you don’t have to. You can create one general config file for all your environments, using tokens/placeholders for the values and defining them in DevOps variables. See the example in the documentation: using-tokens-as-dynamic-values
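A sketch of what such a shared, tokenised config file could look like, assuming the CSV layout (type, name, path, value) and the $(…) token syntax from the linked documentation; all object, server and variable names below are hypothetical:

```csv
type,name,path,value
linkedService,LS_SqlDb,typeProperties.connectionString,"Server=$(SqlServerName);Database=$(SqlDbName);"
linkedService,LS_KeyVault,typeProperties.baseUrl,"https://$(KeyVaultName).vault.azure.net/"
```

With one file like this, each DevOps stage only needs to define the SqlServerName, SqlDbName and KeyVaultName pipeline variables with its own values.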

Does it support parameterized linked services? (source)
Absolutely. You can replace any property value inside ADF artefacts (JSON files). In the case of a Linked Service to SQL Database, you will need to replace the “connectionString” property. A similar example is in the documentation: Step: Replacing all properties environment-related. This video shows how to set it up within an Azure DevOps pipeline.
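To make the property path concrete, here is a sketch of the fragment of a SQL Database linked service JSON that such a replacement targets; the linked service name and connection string are placeholders:

```json
{
    "name": "LS_SqlDb",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=dev-server;Database=dev-db;"
        }
    }
}
```

A config entry whose path is typeProperties.connectionString points at the “connectionString” property above and swaps in the environment-specific value during deployment.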

I have a question. How can I ask it?
The best and fastest way is to raise a new issue on GitHub: New Issue/Question.
I will answer as soon as I can.

Thanks!
Kamil Nowinski (@NowinskiK)