
RESUME

Venkaiah P [email protected]
Azure Data Engineer Mobile: +91-8121497936
________________________________________________________________________

Experience Summary:
• Overall 3+ years of professional experience in IT, spanning analysis, design, development, documentation, deployment and integration using SQL and Big Data technologies.
• Experience in implementing Big Data analytics, cloud data engineering, data warehouse / data mart, data visualization, reporting, data quality and data virtualization solutions.
• Experience with Azure transformation projects and Azure architecture decision making; architected and implemented ETL and data movement solutions using Azure Data Factory (ADF) and SSIS.
• Experience in developing Spark applications using Spark SQL in Databricks to extract, transform and aggregate data from multiple file formats, uncovering insights into customer usage patterns.
• Implemented large Lambda architectures using Azure Data platform capabilities such as Azure Data Lake, Azure Data Factory, HDInsight, Azure SQL Server, Azure ML and Power BI.
• Hands-on experience with the Azure Cortana Analytics platform: Azure Data Factory, Storage, Azure Logic Apps, Azure Databricks, Azure Data Lake and Azure Stream Analytics.
• Hands-on experience in creating database objects such as tables and views, and in writing stored procedures, external sources and file formats using PolyBase.
• Good experience in Data Factory: designing ELT, optimizing ELT pipelines, auditing, and setting up a PolyBase account in SQL DWH.
• Experience in designing and implementing Data Lake Analytics.
• Experience in requirements gathering, analysis, design, development, testing and implementation in Azure Data Factory.
• Used the Web activity in ADF to configure Logic Apps on pipeline success and failure.
• Experience with streaming analytics services such as Azure Event Hubs, Azure IoT Hub and Azure Stream Analytics.
• Experience in mounting Azure Data Lake Gen1 and Gen2 and Azure Storage in Azure Databricks.
• Experience in loading data incrementally and in full from Azure Storage to Azure SQL Data Warehouse in Databricks (see the PySpark sketch after this list).
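
A minimal PySpark sketch of the Databricks pattern described in the bullets above (mounting ADLS Gen2, aggregating with Spark SQL, and loading into Azure SQL Data Warehouse). This is illustrative only: the storage account, secret scope, paths and table names are hypothetical placeholders, and dbutils and spark are the objects Databricks provides inside a notebook.

    # Databricks notebook sketch; 'dbutils' and 'spark' are provided by the runtime.
    # Mount an ADLS Gen2 container (hypothetical account, scope and key names).
    dbutils.fs.mount(
        source="abfss://raw@examplestorageacct.dfs.core.windows.net/",
        mount_point="/mnt/raw",
        extra_configs={
            "fs.azure.account.key.examplestorageacct.dfs.core.windows.net":
                dbutils.secrets.get(scope="example-scope", key="storage-key"),
        },
    )

    # Extract from a mounted path and aggregate with Spark SQL.
    spark.read.parquet("/mnt/raw/orders/").createOrReplaceTempView("orders")
    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM orders
        GROUP BY order_date
    """)

    # Load the aggregate into Azure SQL Data Warehouse via the sqldw connector;
    # mode("append") supports incremental loads, mode("overwrite") a full load.
    (daily.write
        .format("com.databricks.spark.sqldw")
        .option("url", "jdbc:sqlserver://example-srv.database.windows.net:1433;database=exampledw")
        .option("tempDir", "abfss://tmp@examplestorageacct.dfs.core.windows.net/stage")
        .option("forwardSparkAzureStorageCredentials", "true")
        .option("dbTable", "dbo.daily_orders")
        .mode("append")
        .save())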
Professional Experience:
 Worked as a Data Engineer at Capgemini, Hyderabad, from March 2022 to August 2022
 Worked as a Software Engineer at Gaman Software Solutions Pvt. Ltd., Hyderabad, from Dec 2018 to March 2022

Education Qualification:

• M. Tech from JNTU Kakinada

Project 1: Mainframe to Azure Data Migration (March 2022 to August 2022)

Client: Procter & Gamble
Team Size: 3
Technologies: Azure Data Factory, Azure Databricks, Synapse Analytics, Python, PySpark, Microsoft SQL Server, ADLS Gen2, Blob Storage
Role: Associate Consultant

Responsibilities:
• Created pipelines in ADF using linked services, datasets and activities to extract, transform and load data between sources such as Azure SQL Database, Blob Storage and Azure SQL Data Warehouse, including write-back.

• Extracted, transformed and loaded data from source systems to Azure storage services using a combination of Azure Data Factory, T-SQL, Spark SQL and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.

• Analysed large data sets to determine the optimal way to aggregate and report on them.
• Designed, implemented and maintained database schemas, entity-relationship diagrams, data models, tables, stored procedures, functions, triggers, constraints, clustered and non-clustered indexes, table partitioning, views, rules, defaults and complex SQL statements to meet business requirements and enhance performance.

• Created Airflow scheduling scripts in Python (see the sketch after this list).

• Developed a framework for converting existing PowerCenter mappings to PySpark (Python and Spark) jobs.
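
A minimal sketch of the kind of Airflow scheduling script mentioned above. The DAG id, schedule and task body are hypothetical; the real ingestion logic (for example, triggering an ADF pipeline) is not reproduced here.

    # Hypothetical Airflow DAG that schedules a daily ingestion task.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_ingestion():
        # Placeholder for the real ingestion step (e.g. kicking off an ADF pipeline).
        print("ingestion step executed")

    with DAG(
        dag_id="daily_ingestion",      # hypothetical DAG id
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="run_ingestion", python_callable=run_ingestion)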
Project 2: Feb 2020 to March 2022

Project:

As the academic medical center and university hospital for the Albert Einstein College of Medicine, Montefiore Medical Center is nationally recognized for clinical excellence, breaking new ground in research, training the next generation of healthcare leaders, and delivering science-driven, patient-centered care.

Responsibilities:

• Developed stored procedures in MS SQL to fetch data files from different servers over FTP and processed these files to update the tables.

• Developed complex database objects such as stored procedures, functions, packages and triggers using SQL.

• Designed and developed ETL jobs to extract data from a Salesforce replica and load it into a data mart in Redshift.
• Involved in performance tuning of stored procedures, views, triggers, cursors, PIVOT/UNPIVOT functions and CTEs.
• Developed and delivered dynamic reporting solutions using SSRS.
• Extensively used Erwin for data modelling; created staging and target models for the enterprise data warehouse.

• Responsible for estimating cluster size and for monitoring and troubleshooting the Spark Databricks cluster.

• Resolved data type inconsistencies between the source systems and the target system using the mapping documents and by analysing the database with SQL queries.
• Worked on ETL testing and used the SSIS Tester automated tool for unit and integration testing.
• Designed and created an SSIS/ETL framework from the ground up.
• Created new tables, sequences, views, procedures, cursors and triggers for database development.
• Created an ETL pipeline using Spark and Hive to ingest data from multiple sources (see the sketch after this list).
• Created reports using SQL Server Reporting Services (SSRS) for customized and ad hoc queries.
• Coordinated with clients directly to get data from different databases.
• Worked on MS SQL Server, including SSRS and SSIS.
• Designed and developed schema data models.
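
A minimal PySpark sketch of the Spark-and-Hive ingestion pattern mentioned above; the source paths and the Hive table name are hypothetical placeholders.

    # Spark session with Hive support; paths and table names are placeholders.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("ingest_to_hive")
             .enableHiveSupport()
             .getOrCreate())

    # Ingest from two hypothetical sources and land the combined data in Hive.
    csv_df = spark.read.option("header", "true").csv("/data/source_a/")
    json_df = spark.read.json("/data/source_b/")
    combined = csv_df.unionByName(json_df, allowMissingColumns=True)
    combined.write.mode("overwrite").saveAsTable("staging.combined_events")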
Project 3: Microsoft SQL Server 2016 PolyBase (Blob Storage / External Tables)

Dec 2018 to Jan 2020

Client: Royal Mail Group

Team Size: 3
Technologies: MS SQL, Microsoft Azure
Tools: Power BI, Azure Blob Storage, Azure SQL DW
Role: Associate Software Engineer

Project:

SQL Server 2016 supports both internal and external tables. External tables point to files in Blob Storage and do not themselves contain data; when queried, they load data from the external files. From these SQL Server 2016 external tables, data was loaded (incrementally or in full) into an Azure SQL Data Warehouse using Azure Data Factory (ETL process).
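
A minimal sketch of the PolyBase setup this project describes, issued from Python via pyodbc. The connection string, storage account and table definition are hypothetical, and the database scoped credential required for private storage is omitted for brevity.

    import pyodbc

    # Hypothetical connection to the SQL Server 2016 / Azure SQL DW endpoint.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=example-srv.database.windows.net;DATABASE=exampledw;"
        "UID=example_user;PWD=example_password"
    )
    cur = conn.cursor()

    # External data source over Blob Storage, a file format, and an external
    # table pointing at the files; the table itself stores no data.
    cur.execute("""
        CREATE EXTERNAL DATA SOURCE BlobSource
        WITH (TYPE = HADOOP,
              LOCATION = 'wasbs://data@examplestorageacct.blob.core.windows.net');
    """)
    cur.execute("""
        CREATE EXTERNAL FILE FORMAT CsvFormat
        WITH (FORMAT_TYPE = DELIMITEDTEXT,
              FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
    """)
    cur.execute("""
        CREATE EXTERNAL TABLE dbo.ext_orders (order_id INT, amount DECIMAL(18, 2))
        WITH (LOCATION = '/orders/',
              DATA_SOURCE = BlobSource,
              FILE_FORMAT = CsvFormat);
    """)
    conn.commit()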
Responsibilities:
• Creating external data sources and external tables.
• Configuring data sources in Power BI.
• Transforming data using Power Query.
• Creating reports and publishing them to the Power BI service.
• Solved issues during the ELS (early life support) period.
• Supported services in production.
Declaration:

I hereby declare that the above information is true to the best of my knowledge.

Date:

Place:

Venkaiah P
