Azure Data Factory Notes 1682135573
Course Contents
• Introduction to Azure
• Introduction to Azure Data Factory
• Data Factory components
• Differences between v1 and v2
• Triggers
• Control Flow
• SSIS in ADFv2
• Demo
Introduction to Azure
• Azure is Microsoft's cloud computing platform. It provides cloud services that give you the freedom to build, manage, and
deploy applications on a massive global network using your favorite tools and frameworks.
A quick explanation on how Azure works
• Cloud computing is the delivery of computing services over the Internet using a pay-as-you-go pricing model. In
other words, it's a way to rent compute power and storage from someone else's data center.
• Microsoft categorizes Azure cloud services into the following product types:
• Compute
• Storage
• Networking
• Web
• Databases
• Analytics and IoT
• Artificial Intelligence
• DevOps
Introduction to Azure Data Factory
• Azure Data Factory is a cloud-based data integration service used to compose data storage, movement, and
processing services into automated data pipelines.
• It composes data processing, storage, and movement services to create and manage analytics pipelines,
and also provides orchestration, data movement, and monitoring services.
• In the world of big data, raw, unorganized data is often stored in relational, non-relational, and other storage
systems. Big data requires a service that can orchestrate and operationalize processes to refine these
enormous stores of raw data into actionable business insights.
• Azure Data Factory is a managed cloud service that's built for these complex hybrid extract-transform-load
(ETL), extract-load-transform (ELT), and data integration projects.
• Azure Data Factory is a data ingestion and transformation service that allows you to load raw data from over
70 different on-premises or cloud sources. The ingested data can be cleaned, transformed, restructured, and
loaded back into a data warehouse.
• Currently, there are two versions of the service: version 1 (V1) and version 2 (V2).
Introduction to Azure Data Factory
• The pipelines (data-driven workflows) in Azure Data Factory typically perform the following four steps:
• Connect and collect: The first step in building an information production system is to connect to all the
required sources of data and processing, such as software-as-a-service (SaaS) services, databases, file shares,
and FTP web services. The next step is to move the data as needed to a centralized location for subsequent
processing.
• Transform and enrich: After data is present in a centralized data store in the cloud, process or transform the
collected data by using compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine
Learning.
• Publish: After the raw data has been refined into a business-ready consumable form, load the data into Azure
SQL Data Warehouse, Azure SQL Database, Azure Cosmos DB, or whichever analytics engine your business users
can point to from their business intelligence tools.
• Monitor: After you have successfully built and deployed your data integration pipeline, providing business
value from refined data, monitor the scheduled activities and pipelines for success and failure rates.
Data Factory Components
• Azure Data Factory is composed of four key components. These components work together to provide the
platform on which you can compose data-driven workflows with steps to move and transform data. (An illustrative JSON sketch of how the components fit together follows this list.)
• Pipeline: A data factory might have one or more pipelines. A pipeline is a logical grouping of activities that
performs a unit of work. For example, a pipeline can contain a group of activities that ingests data from an
Azure blob, and then runs a Hive query on an HDInsight cluster to partition the data.
• Activity: Activities represent a processing step in a pipeline. For example, you might use a copy activity to
copy data from one data store to another data store. Data Factory supports three types of activities: data
movement activities, data transformation activities, and control activities.
• Datasets: Datasets represent data structures within the data stores, which simply point to or reference the
data you want to use in your activities as inputs or outputs.
• Linked services: Linked services are much like connection strings, which define the connection information
that's needed for Data Factory to connect to external resources. For example, an Azure Storage-linked
service specifies a connection string to connect to the Azure Storage account.
• Linked services are used for two purposes in Data Factory:
• To represent a data store that includes, but isn't limited to, an on-premises SQL Server database, Oracle database, file
share, or Azure blob storage account.
• To represent a compute resource that can host the execution of an activity. For example, the HDInsight Hive activity
runs on an HDInsight Hadoop cluster.
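• To make the relationship between these four components concrete, below is a minimal, illustrative ADF v2 pipeline sketch (the names BlobToSqlPipeline, MyBlobDataset, and MySqlDataset are hypothetical): the pipeline contains a Copy activity, the activity references datasets as inputs and outputs, and each dataset in turn names a linked service (such as the Azure Storage linked service shown later in these notes).
{
    "name": "BlobToSqlPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyBlobToSql",
                "type": "Copy",
                "inputs": [ { "referenceName": "MyBlobDataset", "type": "DatasetReference" } ],
                "outputs": [ { "referenceName": "MySqlDataset", "type": "DatasetReference" } ],
                "typeProperties": {
                    "source": { "type": "BlobSource" },
                    "sink": { "type": "SqlSink" }
                }
            }
        ]
    }
}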
Data Factory Components
• Overview of Data Factory flow
[Diagram: event, wall-clock, and on-demand triggers pass parameters into pipelines (My Pipeline1, My Pipeline2); each pipeline chains activities (Activity 1 → Activity 2 → Activity 3 / Activity 4), including a For Each container and an "OnError" path that invokes a further activity.]
Data Factory Components
• Other components of Data Factory.
• Triggers: Triggers represent the unit of processing that determines when a pipeline execution needs to be
kicked off. There are different types of triggers for different types of events.
• Pipeline runs: A pipeline run is an instance of the pipeline execution. Pipeline runs are typically instantiated
by passing the arguments to the parameters that are defined in pipelines. The arguments can be passed
manually or within the trigger definition.
• Parameters: Parameters are key-value pairs of read-only configuration. Parameters are defined in the
pipeline. Activities within the pipeline consume the parameter values.
• Control flow: Control flow is an orchestration of pipeline activities that includes chaining activities in a
sequence, branching, defining parameters at the pipeline level, and passing arguments while invoking the
pipeline on demand or from a trigger. It also includes custom-state passing and looping containers, that is,
For-each iterators. (An illustrative JSON sketch of a trigger passing arguments to pipeline parameters follows.)
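• As an illustrative sketch (trigger, pipeline, and parameter names are hypothetical), the schedule trigger below starts a run of MyPipeline every hour and passes an argument for the pipeline parameter inputFolder; inside the pipeline, activities would consume it as @pipeline().parameters.inputFolder.
{
    "name": "MyScheduleTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",
                "interval": 1,
                "startTime": "2017-04-01T08:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": { "referenceName": "MyPipeline", "type": "PipelineReference" },
                "parameters": { "inputFolder": "inputdata" }
            }
        ]
    }
}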
Differences between v1 and v2
Feature: Datasets
• Version 1: A named view of data that references the data and can be used in activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents.
• Version 2: Datasets are the same in the current version. However, you do not need to define availability schedules for datasets.
3. .NET:
client.Pipelines.CreateRunWithHttpMessagesAsync(+ parameters)
4. Azure portal:
(Data factory -> <Author & Monitor> -> Pipeline runs)
Triggers
Tumbling Window
Tumbling window triggers are a type of trigger that fires at a
periodic time interval from a specified start time, while
retaining state. Tumbling windows are a series of fixed-sized,
non-overlapping, and contiguous time intervals.
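A minimal sketch of a tumbling window trigger definition is shown below (trigger and pipeline names are hypothetical); each hourly window starts one pipeline run and passes the window boundaries as arguments.
{
    "name": "MyTumblingWindowTrigger",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2017-04-01T00:00:00Z",
            "maxConcurrency": 2,
            "retryPolicy": { "count": 2, "intervalInSeconds": 30 }
        },
        "pipeline": {
            "pipelineReference": { "referenceName": "MyPipeline", "type": "PipelineReference" },
            "parameters": {
                "windowStart": "@trigger().outputs.windowStartTime",
                "windowEnd": "@trigger().outputs.windowEndTime"
            }
        }
    }
}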
Control Flow
Control flow activities in v2:
• Web Activity: call a custom REST endpoint and pass datasets and linked services.
• Lookup Activity: look up a record, table name, or value from any external source to be referenced by succeeding activities. Can be used for incremental loads.
• Get Metadata Activity: retrieve metadata of any data in Azure Data Factory, e.g., whether another pipeline has finished.
• Until Activity: similar to the Do-Until looping structure in programming languages.
• If Condition Activity: do something based on a condition that evaluates to true or false.
Control Flow
New control flow activities in v2:
• Append Variable Activity: add a value to an existing array variable defined in a Data Factory pipeline.
• Filter Activity: apply a filter expression to an input array.
• Set Variable Activity: set the value of an existing variable of type String, Bool, or Array defined in a Data Factory pipeline.
• Validation Activity: ensure the pipeline only continues execution once it has validated that the attached dataset reference exists.
• Wait Activity: the pipeline waits for the specified period of time before continuing with execution of subsequent activities.
• Webhook Activity: control the execution of pipelines through your custom code.
• Data Flow Activity: run your ADF data flow in pipeline debug (sandbox) runs and in triggered pipeline runs. (This activity is in public preview.)
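The sketch below (all names hypothetical) combines a few of these control flow activities: an If Condition activity evaluates a pipeline parameter, waits 10 seconds when it matches, and otherwise records a value with Set Variable.
{
    "name": "ControlFlowDemoPipeline",
    "properties": {
        "parameters": { "mode": { "type": "String", "defaultValue": "wait" } },
        "variables": { "status": { "type": "String" } },
        "activities": [
            {
                "name": "CheckMode",
                "type": "IfCondition",
                "typeProperties": {
                    "expression": { "value": "@equals(pipeline().parameters.mode, 'wait')", "type": "Expression" },
                    "ifTrueActivities": [
                        { "name": "WaitTenSeconds", "type": "Wait", "typeProperties": { "waitTimeInSeconds": 10 } }
                    ],
                    "ifFalseActivities": [
                        { "name": "MarkSkipped", "type": "SetVariable", "typeProperties": { "variableName": "status", "value": "skipped" } }
                    ]
                }
            }
        ]
    }
}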
SSIS in ADFv2
The Azure-SSIS Integration Runtime hosts SSIS projects in a managed cloud environment:
• Pick the number of nodes and the node size; the cluster is resizable.
• SQL Standard Edition is supported; Enterprise Edition is coming soon.
• Compatible: the same SSIS runtime across Windows, Linux, and Azure cloud.
• Get started with hourly pricing (no SQL Server license required).
SSIS in ADFv2
Integration runtime - different capabilities
1. Data Movement
Move data between data stores, with built-in connectors, format conversion, column mapping, and performant and scalable data transfer.
2. Activity Dispatch
Dispatch and monitor transformation activities (e.g., a stored procedure on SQL Server, Hive on HDInsight).
3. SSIS Package Execution
Natively execute SQL Server Integration Services packages in a managed Azure compute environment.
https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime
SSIS in ADFv2
Integration runtimes
1. Azure Integration Runtime
2. Self-hosted Integration Runtime
3. Azure-SSIS Integration Runtime
https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime
SSIS in ADFv2
Sample ADF and IR locations
SSIS in ADFv2
Scalable Integration Services
How to scale up/out using three settings on the Azure-SSIS IR: node size (scale up), number of nodes (scale out), and maximum parallel executions per node.
3. SSIS packages can be executed via custom code/PowerShell using the SSIS MOM .NET SDK/API
› Microsoft.SqlServer.Management.IntegrationServices.dll is installed in the .NET GAC with a SQL Server/SSMS installation
4. SSIS packages can be executed via T-SQL scripts executing SSISDB sprocs
› Execute the SSISDB sprocs [catalog].[create_execution] + [catalog].[set_execution_parameter_value] + [catalog].[start_execution]
SSIS in ADFv2
Scheduling Methods
1. SSIS package executions can be directly/explicitly scheduled via the ADFv2 app (work in progress)
› For now, SSIS package executions can be indirectly/implicitly scheduled via an ADFv1/v2 Stored Procedure (sproc) activity; an illustrative sketch of this approach follows.
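A minimal sketch of the Stored Procedure activity approach, assuming a hypothetical wrapper stored procedure (dbo.sp_RunMyPackage) that internally calls [catalog].[create_execution], [catalog].[set_execution_parameter_value], and [catalog].[start_execution]; the activity and linked service names are illustrative.
{
    "name": "RunSsisPackageViaSproc",
    "type": "SqlServerStoredProcedure",
    "linkedServiceName": {
        "referenceName": "SsisDbLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "storedProcedureName": "dbo.sp_RunMyPackage",
        "storedProcedureParameters": {
            "PackageName": { "value": "Package.dtsx", "type": "String" }
        }
    }
}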
Comments
Hope you have a good foundational understanding of Azure cloud services. If not, you should go
through the Azure Databases and Azure Storage courses before going any further.
In this course, you will be introduced to the fundamental concepts of Data Factory: creating a data factory,
building pipelines, copying data, and transforming data.
Happy Learning!
Prelude
In the world of big data, raw and unorganized data is stored in relational, non-relational, and
other storage systems. The raw data doesn't have the proper context or meaning to provide
meaningful insights to data scientists, analysts, or business decision makers. It requires a service
to orchestrate and operationalize processes that refine this raw data into actionable business insights.
Azure Data Factory is a managed cloud integration service that is built for these complex hybrid
extract-load-transform (ELT), extract-transform-load (ETL), and data integration projects.
Integration Runtime
The image shows how the different integration runtimes can be used in combination to
offer rich data integration capabilities and network support.
Azure Data Factory (ADF) allows you to create and schedule pipelines that ingest
data from different data stores.
Pipeline:
A pipeline is a logical grouping of activities that performs a task; each activity can
operate individually or run independently in parallel. A data factory can contain
one or more pipelines. The major benefit of pipelines is that they let you manage
a set of operations as a unit instead of managing each operation individually.
Activity:
An activity represents a processing step or task in a pipeline; for example, you
can use a copy activity to copy data between data stores. Data Factory supports
three kinds of activities: data movement, data transformation, and
control activities.
Key Components in ADF
Datasets:
Datasets represent data structures within a data store that point to the data
you want to use in your activities as inputs and outputs.
Linked Services:
Linked services represent connection objects for sources, destinations, and
compute resources; they contain the connection strings (the connection
information Data Factory needs to connect to external resources).
These four components work together to compose pipelines with steps to move
and transform data.
Note: An Azure subscription can have more than one data factory instance.
Pipeline Runs:
A pipeline run is an instance of the pipeline execution, instantiated by passing
arguments to the parameters that are defined in the pipeline. The arguments
can be passed manually or within the trigger definition.
ADF Pricing
The pricing is broken down into four categories of charges for this service.
1. Activity runs:
There are different prices for Azure-hosted and self-hosted activity runs. An Azure activity run is,
for example, a copy activity moving data from an Azure Blob to an Azure SQL database, or a Hive
activity running a Hive script on an Azure HDInsight cluster.
A self-hosted activity run is, for example, a copy activity moving data from an on-premises
SQL Server to Azure Blob Storage, or a stored procedure activity running a stored procedure on an
on-premises SQL Server.
2. Volume of data moved:
Data movement is measured in data movement units (DMUs) and billed on an hourly basis. Be aware
that the setting defaults to Auto, which uses as many DMUs as the copy can handle, but you can also
specify the number of DMUs. Either way the price works out the same: say you specify 2 DMUs and the
move takes an hour; alternatively, 8 DMUs finish it in 15 minutes. You use four times the DMUs, but
for a quarter of the time.
ADF Pricing
3. SSIS integration runtimes:
Here you're paying for A-series and D-series compute levels. The cost depends on the compute
required to run the process: how much CPU, how much RAM, and how much temp storage you need.
4. Inactive pipelines:
You also pay a small amount for pipelines (about 40 cents currently). A pipeline is
considered inactive if it's not associated with a trigger and hasn't been run for over a
week. Yes, it's a minimal charge, but these charges do add up. When you start to wonder
where some of those charges come from, it's good to keep this in mind.
Supported Regions
The regions currently supported for provisioning a data factory are West Europe, East US, and
East US 2.
However, a data factory can access data stores and compute resources in other regions to
move data between data stores or to process data using compute services; the service that powers
data movement in Data Factory is available globally in many regions.
Note: You can register your gateway resource in any region, but it is recommended to use
the same region as your data factory.
On-Premises Data Gateway
The picture shows how the data gateway works between on-premises data sources and
Azure services.
ADF Management Tools
Azure Data Factory can be managed (creating data factories, creating pipelines, monitoring
pipelines) through various tools:
1. Azure Portal
2. Azure PowerShell
3. Azure Resource Manager Templates
4. Using REST API
5. Using .NET SDK
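For option 3, a minimal Azure Resource Manager template sketch that provisions an empty data factory might look like the following (parameter names are illustrative):
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "dataFactoryName": { "type": "string" },
        "location": { "type": "string", "defaultValue": "[resourceGroup().location]" }
    },
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories",
            "apiVersion": "2018-06-01",
            "name": "[parameters('dataFactoryName')]",
            "location": "[parameters('location')]",
            "identity": { "type": "SystemAssigned" },
            "properties": {}
        }
    ]
}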
Datasets
Datasets identify data such as files, folders, tables, documents within different data stores. For
example, an Azure SQL dataset specifies the schema and table in the SQL database from which the
activity should read the data.
Before creating a dataset, you have to create a linked service to link your data store to the data
factory.
Both linked service and datasets are defined in JSON format in ADF.
{
"name": "AzureStorageLinkedService",
"properties": {
"type": "AzureStorage",
"typeProperties": {
"connectionString": {
"type": "SecureString",
"value":
"DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset Structure
Dataset structure is defined in JSON format for an AzureBlob dataset as shown
below.
{
"name": "AzureBlobInput",
"properties": {
"type": "AzureBlob",
"linkedServiceName": {
"referenceName": "MyAzureStorageLinkedService",
"type": "LinkedServiceReference",
},
"typeProperties": {
"fileName": "input.log",
"folderPath": "adfgetstarted/inputdata",
"format": {
"type": "TextFormat",
"columnDelimiter": ","
}
}
}
}
Pipeline Overview
A typical pipeline in an Azure data factory performs the four steps (connect and collect,
transform and enrich, publish, and monitor) represented in the picture above.
Workflow of Pipelines
Connect and Collect:
The first step in building a pipeline is connecting to all the required sources of data
and moving the data as needed to a centralized location for subsequent processing.
Without Data Factory, you would have to build custom data movement components or write
custom services to integrate these data sources.
Transform and Enrich:
The collected data in the centralized data store is transformed or processed using
compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine
Learning, and the output is fed to production environments.
Workflow of Pipelines
Publish:
After the raw data has been refined, it is loaded into Azure SQL Data Warehouse, Azure SQL
Database, Azure Cosmos DB, or whichever analytics engine your business users can
point to from their business intelligence tools.
Monitor:
After the successful build and deployment of your data integration pipeline, you can
monitor the scheduled activities and pipelines for success and failure rates. ADF has
built-in support for pipeline monitoring via Azure Monitor, APIs, PowerShell, Log
Analytics, and health panels on the Azure portal.
Creating a Pipeline
The video shows creating a pipeline from scratch in the Azure portal.
Monitoring Pipelines
The video shows monitoring an up and running pipeline from ADF resource explorer in
Azure Portal.
Scheduling Pipelines
A pipeline is active only between its start time and end time; it does not execute
before the start time or after the end time. If the pipeline is paused, it does not get
executed at all, irrespective of its start and end times. For a pipeline to run, it should
not be paused.
You can define the pipeline start and end times in the pipeline definition in JSON
format as below.
"start": "2017-04-01T08:00:00Z",
"end": "2017-04-01T11:00:00Z"
"isPaused": false
Specifying Schedule for an Activity
To run the pipeline, you must specify a scheduler for the activities that execute in it (ADF v1),
as in the first snippet below; the second snippet shows a dataset whose availability uses the
same frequency and interval.
"scheduler": {
"frequency": "Hour",
"interval": 1
},
{
"name": "AzureSqlInput",
"properties": {
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "AzureSqlLinkedService",
"typeProperties": {
"tableName": "MyTable"
},
"availability": {
"frequency": "Hour",
"interval": 1
},
"external": true,
"policy": {}
}
}
Visual Authoring
The Azure Data Factory user interface experience (UX) lets you visually author and
deploy resources for your data factory without having to write any code.
You can drag activities onto a pipeline canvas, perform test runs, debug iteratively, and
deploy and monitor your pipeline runs. There are two approaches for using the UX to
perform visual authoring: authoring directly with the Data Factory service, and authoring
with Visual Studio Team Services (VSTS) Git integration.
Visual authoring with the Data Factory service differs from visual authoring with
VSTS in two ways:
• The Data Factory service doesn't include a repository for storing the JSON
entities for your changes.
• The Data Factory service isn't optimized for collaboration or version control.
The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory to provide
data integration capabilities across different network environments such as data movement, activity
dispatch, and SSIS package execution.
While moving data from data stores in public and private networks, it provides support for built-in
connectors, format conversion, column mapping, and scalable data transfer.
The IR (integration runtime) provides the bridge between the activity and linked services.
There are three types of integration runtime: Azure, Self-hosted, and Azure-SSIS.
The Azure IR supports connecting to data stores over a public network with publicly accessible
endpoints. It dispatches the following data transform activities in a public network: HDInsight Hadoop,
Machine Learning Batch Execution and Update Resource activities, the Data Lake Analytics U-SQL
activity, and other custom activities.
A self-hosted IR performs data integration securely in a private network. You can install a self-hosted
IR in an on-premises environment behind your corporate firewall, or inside a virtual private network.
It is capable of:
• Running copy activities and data movement activities between data stores.
• Dispatching the following transform activities against compute resources on-premises or in an
Azure virtual network: HDInsight Hadoop, Machine Learning Batch Execution and Update Resource
activities, the Data Lake Analytics U-SQL activity, and other custom activities.
It is easy to move your SQL Server Integration Services (SSIS) workloads, projects, and packages to
the Azure cloud: the Azure-SSIS IR deploys, runs, and manages SSIS projects and packages in the SSIS
Catalog (SSISDB) on Azure SQL Database, using familiar tools such as SQL Server Management Studio (SSMS).
Moving your on-premises SSIS workload to Azure reduces your operational costs and provides
greater scalability.
To lift and shift an existing SSIS workload, you create an Azure-SSIS IR, which is dedicated to
executing SSIS packages. It can be provisioned in either a public or a private network.
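Once an Azure-SSIS IR is provisioned, a package can be run from a pipeline with the Execute SSIS Package activity; the sketch below is illustrative (the IR name and package path are hypothetical):
{
    "name": "RunMySsisPackage",
    "type": "ExecuteSSISPackage",
    "typeProperties": {
        "connectVia": {
            "referenceName": "MyAzureSsisIR",
            "type": "IntegrationRuntimeReference"
        },
        "runtime": "x64",
        "loggingLevel": "Basic",
        "packageLocation": {
            "packagePath": "MyFolder/MyProject/MyPackage.dtsx"
        }
    }
}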
The IR Location defines the location of its back-end compute, and essentially the location where the
data movement, activity dispatching, and SSIS package execution are performed. The IR location can
be different from the location of the data factory it belongs to.
There are three types of copy scenarios, performed across different environments and data stores:
On-premises to Azure
One Azure cloud data store instance to another Azure cloud data store instance
SaaS application to Azure
From the ADF Author & Monitor tile, you can choose the authoring option to manually create the key
components (datasets, linked services) and perform the copy activity.
Let's assume a scenario of copying data from an on-premises SQL Server to Azure Data Lake Store using
the Copy Data tool in ADF. Follow the steps below to perform this kind of activity.
The tool has six steps that are evaluated in sequence: Properties, Source, Destination, Settings,
Summary, and Deployment.
Properties:
Destination:
Choose the Data Lake Store data store and provide its details, such as the connection name, network
environment, Azure subscription, Data Lake Store account name, authentication type, and tenant.
Summary:
Shows you the properties and source settings; click Next for deployment.
Deployment:
Shows the deployment status and other options that allow you to edit the pipeline and monitor it.
(An illustrative sketch of the kind of copy pipeline this wizard produces follows.)
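The pipeline such a wizard run produces is roughly equivalent to the hand-written sketch below (dataset, activity, and table names are hypothetical); the on-premises SQL Server dataset's linked service would use a self-hosted integration runtime, while the sink writes to Azure Data Lake Store.
{
    "name": "CopySqlToAdlsPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyEmployeeTable",
                "type": "Copy",
                "inputs": [ { "referenceName": "OnPremSqlServerDataset", "type": "DatasetReference" } ],
                "outputs": [ { "referenceName": "AdlsOutputDataset", "type": "DatasetReference" } ],
                "typeProperties": {
                    "source": { "type": "SqlSource", "sqlReaderQuery": "SELECT * FROM dbo.Employee" },
                    "sink": { "type": "AzureDataLakeStoreSink" }
                }
            }
        ]
    }
}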
Watch this video on an open network (not using the TCS LAN) from the 9th minute to get a better
understanding of how to perform a copy activity from on-premises to Azure.
Copying data from one Azure data store to another data store using the Copy Data tool is explained
in the first topic (Introduction to Azure Data Factory); please refer to card no: 11.
Note: Check this video on an open network; it explains performing the copy activity manually by creating
linked services and datasets for the data stores using the Azure portal. The process is similar for any type
of pipeline-creation scenario.
U-SQL Transformations
The Data Lake Analytics U-SQL activity runs a U-SQL script on an Azure Data Lake
Analytics compute linked service.
1. Create an Azure Data Lake Analytics account before creating a pipeline with a
Data Lake Analytics U-SQL activity.
2. The Azure Data Lake Analytics linked service requires service principal
authentication to connect to the Azure Data Lake Analytics service.
3. To use service principal authentication, register an application entity in Azure
Active Directory (Azure AD) and grant it access to both the Data Lake
Analytics account and the Data Lake Store it uses.
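A minimal sketch of a U-SQL activity definition is shown below (linked service names, script path, and parameters are hypothetical):
{
    "name": "RunUsqlScript",
    "type": "DataLakeAnalyticsU-SQL",
    "linkedServiceName": {
        "referenceName": "AzureDataLakeAnalyticsLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scriptPath": "scripts/ProcessSearchLog.usql",
        "scriptLinkedService": {
            "referenceName": "AzureStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "degreeOfParallelism": 3,
        "parameters": {
            "in": "/datalake/input/SearchLog.tsv",
            "out": "/datalake/output/Result.tsv"
        }
    }
}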
Custom Activity
If you need to transform data in a way that is not supported by Data Factory, you can
create a custom activity with your own data processing logic and use the activity in the
pipeline. You can configure the custom .NET activity to run using either an Azure Batch
service or an Azure HDInsight cluster.
You can create a Custom activity with your data movement or transformation logic
and use the activity in a pipeline.
The properties required to define a custom activity in JSON format are name,
activities (at the pipeline level), and the activity's typeProperties.
Refer to this link for Azure Batch basics, as the custom activity runs your customized
code logic on an Azure Batch pool of virtual machines.
{
"name": "MyCustomActivityPipeline",
"properties": {
"description": "Custom activity sample",
"activities": [{
"type": "Custom",
"name": "MyCustomActivity",
"linkedServiceName": {
"referenceName": "AzureBatchLinkedService",
"type": "LinkedServiceReference"
},
"typeProperties": {
"command": "helloworld.exe",
"folderPath": "customactv2/helloworld",
"resourceLinkedService": {
"referenceName": "StorageLinkedService",
"type": "LinkedServiceReference"
}
}
}]
}
}
Hands-on scenario
Your company has a requirement to maintain employee details in a SQL database. You are given a text
file containing employee details that has to be migrated. As a cloud computing professional, you plan
to simplify the task by using Azure Data Factory.
i) Create a storage account: Location: (US) East US 2, Performance: Standard, Account kind: StorageV2
(general-purpose v2), Replication: Locally-redundant storage (LRS).
ii) Create a Blob storage container and upload the file containing employee data (refer to the sample
employee data provided in the following table).
iii) Create a SQL database: Server: Create new, Location: (US) East US 2, Compute + storage: Basic,
Network connectivity: Public endpoint, Allow Azure services and resources to access the server: Yes,
Add current client IP address: Yes.
iv) Set a firewall rule for the SQL database to allow IP addresses from 0.0.0.0 to 255.255.255.255, and
write a SQL query to create a table in the database (refer to the SQL query provided in the following table).
v) Create a Data Factory: Region: East US 2, Git configuration: Configure Git later.
vi) Use the Copy Data tool in the data factory to move the data from Blob storage to the SQL database.
Employeedata.txt
FirstName|LastName
John|Brito
Alwin|Chacko
SQL Query
Note:
Use the credentials given in the hands-on to log in to the Azure portal. Create a new
resource group and use the same resource group for all resources. The
username/password/service names can be as per your choice. After completing the
hands-on, delete all the resources created.
Course Summary
Monitoring ADF
Hope you now have a better understanding of the Data Factory concepts and have had hands-on
practice creating pipelines and activities.
Azure Updates
Like many other cloud infrastructure platforms today,
Azure is continuously developing updates to its services and components.
Every effort has been taken to update course content where there are significant
changes to product capability. However, there will be occasions where the course
content does not exactly match the latest version of the product.
Hence, we encourage you to check Azure updates as a starting point for the latest
information about updates in Azure.