Data Architect JD

This job description outlines the experience and skills required of a candidate: over 12 years in data management, big data, data warehousing, and analytics; expertise in Azure technologies such as Databricks, SQL, and Spark; data modeling and ETL; and the ability to communicate insights to non-technical audiences. The role involves architecting cloud data platforms, developing ETL pipelines, querying data, and managing projects using agile methodologies.


 Overall 12+ years of experience in Data Management, Big Data, Data Warehousing, and Analytics.


 At least 5 to 10 years of experience architecting and implementing cloud data platforms, including Azure Databricks, Azure SQL, and Apache Spark (Scala/Python).
 Expert in Azure Data Factory, Azure Data Lake Analytics, Python, Spark SQL, and Azure Databricks
 Strong SQL skills with experience in Azure SQL DW
 Develop Apache Spark SQL using Scala/Python to examine and query datasets (see the sketch after this list)
 Develop DataFrames for ETL transformations and loads
 Experience handling structured and unstructured datasets
 Experience in data modeling and advanced SQL techniques
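As a concrete illustration, here is a minimal PySpark sketch of the Spark SQL and DataFrame work listed above, as it might appear in a Databricks notebook (where `spark` is predefined). The dataset path, table name, and columns (sales, region, amount) are hypothetical.

```python
# Minimal sketch: examine and query a dataset with Spark SQL, then express the
# same aggregation with the DataFrame API. Paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("jd-sketch").getOrCreate()  # predefined in Databricks

# Register a file-backed dataset as a temporary view so it is queryable with SQL.
sales = spark.read.parquet("/mnt/raw/sales")  # hypothetical location
sales.createOrReplaceTempView("sales")

# Examine and query the dataset using Spark SQL.
top_regions = spark.sql("""
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region
    ORDER BY total_amount DESC
""")

# The equivalent DataFrame transformation, as used in ETL loads.
top_regions_df = (
    sales.groupBy("region")
         .agg(F.sum("amount").alias("total_amount"))
         .orderBy(F.desc("total_amount"))
)

top_regions.show(5)
# In a Databricks notebook, display(top_regions) would render the result with
# the built-in visualization features.
```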

 Experience implementing Azure Data Factory pipelines using the latest technologies and techniques
 Use the interactive Databricks notebook environment with Apache Spark SQL to examine external datasets and query existing datasets
 Visualize query results and data using the built-in Databricks visualization features, and perform exploratory data analysis using Spark SQL
 ETL processing and data extraction using Azure Databricks: write a basic ETL pipeline using the Spark design pattern, ingest data using DBFS mounts on Azure Blob Storage, and ingest data using serial and parallel JDBC reads (see the ingest sketch after this list)
 ETL transformations and loads using Azure Databricks: apply built-in functions to manipulate data, write UDFs that take a single DataFrame column as input, and apply UDFs that take multiple DataFrame columns as input and return complex types (see the UDF sketch after this list)
 Manage Delta Lake using Databricks: use the interactive Databricks notebook environment to create, append, and upsert data into a data lake (see the Delta Lake sketch after this list)
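A hedged sketch of the ingest patterns named above: reading data landed in Azure Blob Storage through a DBFS mount, plus serial and parallel JDBC reads. The mount point, JDBC URL, credentials, table, and partition bounds are all placeholders.

```python
# Ingest sketch for a Databricks notebook (spark is predefined there).
# All connection details, paths, and bounds below are placeholders.

# 1. Read data landed in Azure Blob Storage through a DBFS mount.
orders_blob = spark.read.csv("/mnt/landing/orders", header=True, inferSchema=True)

jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"
props = {
    "user": "<user>",
    "password": "<password>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# 2. Serial JDBC read: a single connection scans the whole table.
orders_serial = spark.read.jdbc(url=jdbc_url, table="dbo.orders", properties=props)

# 3. Parallel JDBC read: numPartitions connections each scan a slice of the
#    (assumed numeric) partition column between lowerBound and upperBound.
orders_parallel = spark.read.jdbc(
    url=jdbc_url,
    table="dbo.orders",
    column="order_id",        # assumed numeric key
    lowerBound=1,
    upperBound=1_000_000,     # rough min/max of order_id (assumed)
    numPartitions=8,
    properties=props,
)
```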
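Likewise, a small sketch of the transformation patterns above: a built-in function, a UDF over a single DataFrame column, and a UDF over multiple columns that returns a complex (struct) type. Column names and logic are invented for illustration.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, DoubleType, StringType

# Toy input; in practice this would come from an ingest step.
df = spark.createDataFrame(
    [("ann@example.com", 110.0, 0.10)],
    ["email", "gross", "tax_rate"],
)

# Built-in functions are preferred where they exist.
df = df.withColumn("email_lower", F.lower(F.col("email")))

# UDF with a single DataFrame column input.
@F.udf(StringType())
def mask_email(email):
    name, _, domain = (email or "").partition("@")
    return f"{name[:1]}***@{domain}" if domain else None

# UDF with multiple DataFrame column inputs that returns a complex (struct) type.
price_schema = StructType([
    StructField("net", DoubleType()),
    StructField("tax", DoubleType()),
])

@F.udf(price_schema)
def split_price(gross, tax_rate):
    net = gross / (1.0 + tax_rate)
    return (net, gross - net)

(df.withColumn("masked", mask_email("email"))
   .withColumn("price", split_price("gross", "tax_rate"))
   .select("masked", "price.net", "price.tax")
   .show())
```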
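Finally, a sketch of the Delta Lake tasks: create a Delta table, append rows, and upsert with MERGE. The storage path and key column are assumptions, and this presumes a Databricks runtime (or delta-spark) where the Delta APIs are available.

```python
from delta.tables import DeltaTable

path = "/mnt/curated/customers"  # hypothetical Delta location

# Create: write an initial snapshot as a Delta table.
spark.createDataFrame([(1, "Ann"), (2, "Ben")], ["id", "name"]) \
     .write.format("delta").mode("overwrite").save(path)

# Append: add new rows without touching existing ones.
spark.createDataFrame([(3, "Cal")], ["id", "name"]) \
     .write.format("delta").mode("append").save(path)

# Upsert: MERGE updates rows with matching keys and inserts the rest.
updates = spark.createDataFrame([(2, "Bea"), (4, "Dee")], ["id", "name"])
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```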

 Must be knowledgeable in software development lifecycles/methodologies, e.g., Agile


 Data storytelling: Communicate actionable insights using data, often for a non-technical
audience.
 Business intuition: Connect with stakeholders to gain a full understanding of the problems they
are looking to solve.
 Analytical thinking: Find analytical solutions to abstract business issues.
 Critical thinking: Apply objective analysis of the facts before drawing conclusions.
 Interpersonal skills: Communicate with a diverse audience across all levels of an organization.
 Strong presentation and collaboration skills; able to communicate all aspects of the job requirements, including the creation of formal documentation
 Strong problem-solving, time-management, and organizational skills
