Nivas

SR. Database Developer


Phone: (980)-307-5939
Email: [email protected]
 I have 8 years of experience in SQL Database/Data Analysis, Database Engineering, Data Modeling, Database Design,
Database Programming, Implementation, Development, and Testing of Database Systems & Client/Server Applications
using SQL Server and MSBI tools.
 Extensive experience in planning and executing database migration projects to cloud platforms such as AWS and Azure,
ensuring seamless transition and minimal downtime.
 Proficient in Relational Database Management Systems (RDBMS) with strong knowledge of programming using Stored
Procedures, Built-in Functions, Triggers, Views, etc.
 Expertise in working on all activities related to the development, implementation, administration, and support of ETL
processes for large-scale Data Warehouses.
 Experienced in working with diverse programming, scripting, and query languages including Python, SQL, Java, and
JavaScript.
 Extensive experience in implementing database security measures to safeguard sensitive data, including encryption, access
controls, and user authentication.
 Extensive experience in implementation of Microsoft Business Intelligence (BI) platforms including SQL Server Integration
Services (SSIS) and SQL Server Reporting Services (SSRS) in SQL Server 2008 R2/2012/2014/2016.
 Utilized IBM Data Studio for DB2 UDB for database development, administration, and performance monitoring tasks.
 Created pipelines in AWS Glue using connections, Data Catalog tables, and jobs to extract, transform, and load data
between sources such as Amazon RDS, Amazon S3, and Amazon Redshift, including write-back to source systems (see the
sketch following this summary).
 Hands-on experience in AWS Cloud, AWS Glue, Amazon S3 (for data lake storage), Amazon Redshift (for data
warehousing), Amazon Athena (for analytical services), Amazon DynamoDB (NoSQL database), Amazon EMR (for big data
processing with Apache Spark), Hadoop, and Apache Spark.
 Proficient in leveraging Databricks for advanced data processing tasks, including real-time stream processing, batch
analytics, and machine learning model training, to derive actionable insights from large and complex datasets.
 Experienced in developing and managing data pipelines using AWS Glue, Amazon Redshift, and Amazon S3 for data
extraction and transformation.
 Extensive experience in designing, implementing, and managing NoSQL databases, including MongoDB, Cassandra, Redis,
and DynamoDB, to handle large-scale, unstructured, and semi-structured data.
 Implemented containerization solutions using Docker to streamline development and deployment processes.
 Developed and maintained DB2 UDB pureScale clusters for scalable and highly available database deployments.
 Proficient in utilizing a wide range of Azure services such as Azure Virtual Machines, Azure Storage, Azure SQL Database,
Azure Functions, Azure Data Factory, and Azure DevOps to meet diverse business requirements.
 Hands-on experience developing AWS Step Functions workflows for event-based data movement, file operations on S3,
SFTP/FTP Servers, and getting/manipulating data in Amazon RDS.
 Utilized Aurora's Multi-AZ deployment for automatic failover and high availability, minimizing downtime and ensuring
continuous operation.
 Skilled in optimizing Spark workloads on Databricks clusters for improved performance and resource utilization, utilizing
techniques such as partitioning, caching, and cluster configuration tuning to achieve optimal processing efficiency.
 Skilled in building robust data platforms on Azure, including data ingestion, transformation, storage, and analytics
components, using services like Azure Databricks, Azure Synapse Analytics, and Azure Cosmos DB.
 Excellent knowledge of integrating AWS Glue with a variety of data sources and processing the data using ETL jobs, job
parameters, and manual or scheduled job orchestration.
 Integrated Aurora databases with AWS services such as AWS Lambda and Amazon S3 for seamless data processing and
storage.
 Implemented DB2 UDB database compression to reduce storage requirements and improve overall system performance.
 Implemented AWS Identity and Access Management (IAM) for authentication of AWS Glue jobs and other services involved
in the data pipeline.
 Utilized Postgres foreign data wrappers (FDWs) to access and query data from external data sources such as other relational
databases or web services.
 Proficient in implementing DevOps practices and automation techniques on Azure using tools like Azure DevOps, Azure
Automation, and Azure Resource Manager (ARM) templates to streamline deployment and management processes.
 Proficiency in utilizing cloud-based services such as Amazon S3, EC2, RDS, Lambda, Redshift, AWS Glue, SQS, API Gateway,
EventBridge, CloudWatch, and CloudFormation.
 Demonstrated ability to lead end-to-end data pipeline development, optimize data infrastructure, and collaborate
effectively with cross-functional teams.
 Highly proficient in Agile methodologies, with a proven track record of achieving cost reduction and performance
improvement through pipeline optimizations while ensuring data security and compliance.
 Executed best practices in structuring SQL queries, debugging unexpected SQL results, etc. Worked on all kinds of reports,
such as yearly, quarterly, monthly, and daily.
 Expertise in SQL Server Integration Services (SSIS) and SQL Server Reporting Services (SSRS) with good knowledge of SQL
Server Analysis Services (SSAS).
 Highly proficient in the use of T-SQL for developing complex Stored Procedures, Triggers, Tables, Views, Relational
Database models, Data integrity, and SQL joins.
 Worked with T-SQL, DCL, DDL, and DML Scripts and established relationships between tables using Primary Keys &
Foreign Keys.
 Extensive experience in developing complex PL/SQL programs, including stored procedures, functions, packages, and
triggers, to automate database operations and enhance application functionality.
 Experienced with creating IT solutions for a large inventory of SQL Servers using technologies such as high availability
clusters, Always-On, Mirroring, Log-Shipping, and Replication.
 Experience in the concepts of Data Warehousing, Data Lakes, Data Marts, ER Modeling, Dimensional Modeling, Fact and
Dimension Tables, Star and Snowflake schemas, and Normalization/Denormalization.
 Strong understanding of Data Analysis project life cycle and experience in developing a variety of dashboard solutions for
business requirements and data visualizations using Tableau, Power BI, and SQL.
 Professionally adept in designing and developing Tableau dashboards, skillfully utilizing stack bars, bar graphs, and scatter
plots to deliver insightful and visually compelling data representations.
 Well-versed in data visualization and reporting tools like Power BI, Tableau, and Matplotlib, with expertise in statistical
analysis and forecasting methodologies.
 Expertise in building complex SSIS and DTS packages using Business Intelligence Development Studio.
 Excellent in High-level Design of SSIS packages for Integration and Migration of data from heterogeneous sources like Flat
files, Excel, CSV, Text Format Data, and Oracle to MS SQL Server.
 Hands-on experience working with different file formats like JSON, CSV, Avro, and Parquet using Databricks and Data
Factory.
 Extensive experience reading continuous JSON data from different source systems via Event Hubs into various
downstream systems using Stream Analytics and Apache Spark Structured Streaming (Databricks).
 Designed and developed many large-scale, batch and real-time big data applications that use Python, Spark, and other
Hadoop ecosystem components.
 Expert in report writing using SQL Server Reporting Services (SSRS) and creating various types of reports like Ad-Hoc, Drill
down, Drill Through, Parameterized, Matrix, Chart, Cascading, Table, and sub-reports based on Relational and OLAP
databases.
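
The Glue pipeline work above can be illustrated with a minimal sketch of a Glue ETL job in Python. The Data Catalog database, table, and connection names (sales_db, raw_orders, redshift_conn) and the S3 staging path are hypothetical placeholders, not actual project artifacts:

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the source table registered in the Glue Data Catalog (e.g., crawled from S3 or RDS)
    source = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders")

    # Rename and cast columns before loading
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("order_id", "string", "order_id", "string"),
                  ("amount", "string", "amount", "double")])

    # Load into Redshift through a catalog connection; Glue stages the data in S3
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=mapped,
        catalog_connection="redshift_conn",
        connection_options={"dbtable": "public.orders", "database": "dw"},
        redshift_tmp_dir="s3://example-bucket/tmp/")

    job.commit()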

Technical Summary:

Languages PySpark, Python, SQL, Java, Shell Scripting, PL/SQL
Databases MySQL, SQL Server, Oracle, Snowflake, MongoDB
Big Data tools HDFS, Hive, Pig, Spark, Kafka, Sqoop, Flume, Yarn, MapReduce
AWS Services Amazon S3, Amazon Redshift, Amazon EMR, Amazon RDS, AWS Glue, Amazon Athena
Azure Services Azure Storage, Azure SQL Database, Azure Functions, Data Factory, Databricks
IDEs AWS Cloud9, PyCharm, Visual Studio
Other Tools Apache Airflow, Snowflake, Kubernetes, Docker, Jenkins
ETL Tools MS SSIS, PL/SQL, T-SQL, SQL Server bulk insert and BCP utilities, Informatica 7.x/8.x
Business Intelligence Power BI, Tableau, SSRS, Crystal Reports
Cloud Amazon Web Services (AWS), Azure

Work Experience
First Horizon Bank, Memphis, TN January 2023 – Present
Sr. Database Developer
Responsibilities:
 Worked independently in the development, testing, implementation, and maintenance of systems of moderate-to-large size
and complexity.
 Led end-to-end data center migration projects, ensuring minimal downtime and seamless transition of infrastructure,
applications, and databases.
 Used Agile methodology across all phases of the software development life cycle (SDLC).
 Worked extensively with AWS services, with a broad and in-depth understanding of each.
 Created and developed data load and scheduling processes for ETL jobs using the Matillion ETL package.
 Extensive experience in designing, developing, and managing Oracle RDBMS solutions for high-performance, reliable, and
scalable database environments.
 Proficient in database development and administration using PostgreSQL, including schema design, query optimization, and
performance tuning.
 Developed and implemented performance optimization strategies, such as database compression, partitioning, and load
balancing, for high-traffic enterprise systems.
 Handled business logic through backend Python programming to achieve optimal results, using Tableau as a front-end BI
tool and MS SQL Server as a back-end database to design and develop dashboards, workbooks, and complex aggregate
calculations.
 Responsible for ETL (Extract, Transform, and Load) processes to bring data from multiple sources into a single warehouse
environment.
 Configured role-based access control (RBAC) and fine-grained access control (FGAC) to restrict data access based on user
roles and responsibilities.
 Designed and implemented PL/SQL scripts for data processing tasks, such as data validation, transformation, and migration,
ensuring high performance and data integrity.
 Developed data models optimized for performance, scalability, and flexibility, leveraging NoSQL structures such as
document stores, key-value pairs, and column-family databases.
 Configured and managed NoSQL clusters to ensure high availability, fault tolerance, and scalability across distributed
environments.
 Proficient in writing dynamic SQL and PL/SQL blocks to handle complex business logic and automate tasks within Oracle
databases.
 Proficient in working with IBM's Universal Database (UDB) platform, including DB2 UDB, for managing relational databases
in enterprise environments.
 Designed and enforced database security policies, including password management, user account auditing, and secure
authentication protocols (e.g., Kerberos, LDAP, SAML).
 Designed and deployed VPCs in AWS to create isolated network environments for secure application hosting.
 Provided technical support for debugging, code fixes, platform issues, missing data points, unreliable data source
connections, and big data transit issues.
 Performed comprehensive performance tuning in Cloud SQL by optimizing query execution plans, indexing strategies, and
resource utilization to enhance database efficiency.
 Developed and maintained PostgreSQL databases, ensuring data consistency, integrity, and security in compliance with
organizational requirements.
 Designed and executed backup and disaster recovery strategies for Cloud SQL instances, ensuring data integrity and high
availability across environments.
 Developed and optimized PL/SQL stored procedures, functions, triggers, and packages to enhance database performance
and maintainability.
 Developed and executed migration strategies for servers, databases, applications, and storage from one data center to
another.
 Developed error-handling and exception-management frameworks in PL/SQL to ensure robust and reliable execution of
database operations.
 Conducted pre-migration assessments and impact analysis to identify dependencies, risks, and potential challenges.
 Extensive experience in using Git for version control, including branching, merging, and managing code repositories for
efficient collaboration and project management.
 Investigated data sources to identify new data elements needed for data integration.
 Experience in database administration tasks such as installation, configuration, backup, and recovery of UDB instances to
ensure data availability and integrity.
 Created complex SQL queries and scripts to extract, aggregate and validate data from MS SQL, Oracle, and flat files using
Informatica and loaded them into a single data warehouse repository.
 Carried out various mathematical operations for calculation purposes using Python libraries.
 Extensive experience in managing Aurora database clusters, including configuration, monitoring, and tuning to ensure
optimal performance and reliability.
 Designed and documented Use Cases, Activity Diagrams, Sequence Diagrams, OOD (Object Oriented Design) using Visio.
 Created complex program units using PL/SQL records and collection types.
 Developed Data Migration and Cleansing rules for the Integration Architecture (OLTP, ODS, DW).
 Created several Databricks Spark jobs with PySpark to perform table-to-table operations (see the sketch following this list).
 Prepared Dashboards using calculations, parameters, calculated fields, groups, sets and hierarchies in Tableau.
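
A minimal sketch of one such PySpark table-to-table job on Databricks; the table and column names (bronze.orders, silver.daily_order_summary, order_ts, etc.) are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    # Databricks supplies a SparkSession; getOrCreate() reuses it
    spark = SparkSession.builder.appName("daily_order_summary").getOrCreate()

    orders = spark.table("bronze.orders")

    # Aggregate completed orders into a daily summary table
    daily = (orders
             .filter(F.col("status") == "COMPLETE")
             .groupBy(F.to_date("order_ts").alias("order_date"))
             .agg(F.sum("amount").alias("total_amount"),
                  F.countDistinct("customer_id").alias("distinct_customers")))

    daily.write.mode("overwrite").saveAsTable("silver.daily_order_summary")
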
Environment: Hadoop 3.0, Agile, Amazon Web Services, Elastic MapReduce cluster, Scala, PySpark, EC2, CloudFormation,
Amazon S3, Amazon Redshift, Python, MS Visio, JIRA, MySQL, HDFS, Kafka 1.1, Git, Spark, OLTP, ODS, MongoDB,
Tableau

Discovery Insurance Company, Kinston, North Carolina August 2021 – December 2022
Sr. Database Engineer
Responsibilities:
 Worked on Kafka producers and consumers to stream data from external REST APIs to Kafka topics.
 Utilized Spark Streaming to consume data from Kafka topics and write the processed streams to different databases and
HDFS using PySpark (see the sketch following this list).
 Used Spark's in-memory capabilities for handling large datasets efficiently. Implemented effective joins, transformations, and
other operations using Spark.
 Implemented backup and recovery strategies for Aurora databases, ensuring data integrity and disaster recovery
preparedness.
 Implemented ProxySQL as a high-performance MySQL/MariaDB proxy to manage database connections, enhancing
scalability and reducing connection overhead.
 Implemented data sharding, replication, and partitioning strategies in NoSQL systems to enhance data distribution,
performance, and redundancy.
 Extensive experience in developing scalable and efficient applications using Python, focusing on data processing,
automation, and web development.
 Optimized PL/SQL code for performance, including tuning SQL queries, indexing, and using bulk operations to improve data
processing speeds.
 Conducted regular security audits and vulnerability assessments on databases to identify and mitigate potential security
risks.
 Developed data pipelines using Python libraries such as Pandas, NumPy, and PySpark to handle data extraction,
transformation, and loading (ETL) processes.
 Designed and implemented PL/SQL-based ETL processes, extracting and transforming data from multiple sources into
enterprise data warehouses.
 Developed Power BI data models utilizing advanced DAX calculations for complex business logic and analysis.
 Developed Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data.
 Built AWS CI/CD data pipelines and an AWS data lake using EC2, AWS Glue, and AWS Lambda.
 Implemented row-level security (RLS) and dynamic security models in Power BI to restrict data access based on user roles
and permissions.
 Implemented Git branching strategies (e.g., Git Flow, feature branches) to facilitate parallel development, code review, and
seamless integration.
 Automated data migration workflows using scripts and tools like Ansible, Terraform, and PowerShell, reducing manual
efforts and errors.
 Administered and optimized Oracle databases, including tuning SQL queries, managing database schemas, and automating
backup and recovery processes.
 Designed and implemented database migration strategies using DMS, including pre-migration assessments, data replication
setups, and real-time monitoring.
 Configured database firewalls and network security settings to prevent unauthorized access and ensure compliance with
regulatory requirements (e.g., GDPR, HIPAA, PCI-DSS).
 Performed thorough pre-migration assessments, including dependency analysis and database performance benchmarking,
to ensure a seamless migration from Aurora to CloudSQL.
 Implemented Network Access Control Lists (NACLs) and Security Groups to enforce granular security policies for controlling
inbound and outbound traffic.
 Designed and optimized schemas for NoSQL databases, considering access patterns, query requirements, and data storage
efficiency.
 Implemented database triggers in PL/SQL to enforce business rules, maintain data integrity, and automate real-time data
synchronization.
 Utilized monitoring tools like Stackdriver and Cloud Monitoring to track database performance metrics in Cloud SQL,
proactively addressing latency or throughput issues.
 Implemented advanced features of PostgreSQL such as partitioning, replication, and indexing to optimize database
performance and scalability.
 Integrated Git repositories with CI/CD tools to automate code builds, run automated tests, and deploy applications across
multiple environments.
 Optimized post-migration database configurations on CloudSQL to achieve performance parity or improvements, addressing
specific workloads and queries.
 Designed and implemented advanced Oracle database solutions, including RAC (Real Application Clusters) for high
availability and scalability, and Data Guard for disaster recovery.
 Configured ProxySQL for load balancing, query caching, and failover handling in high-availability database clusters,
significantly improving uptime and performance.
 Implemented automation scripts in Python to streamline repetitive tasks, including data integration, file processing, and
system monitoring.
 Configured and managed container orchestration platforms such as Kubernetes to automate deployment, scaling, and
management of containerized applications.
 Sound knowledge of developing ETL processes in AWS Glue to migrate data from external sources into AWS Redshift.
 Hands-on experience in Converting existing AWS infrastructure to serverless architecture with AWS Lambda and Kinesis,
deploying with Terraform and AWS CloudFormation.
 Designed and implemented data refresh schedules and incremental data loading strategies in Power BI Service to ensure
data accuracy and timeliness.
 Managed cross-functional teams and coordinated with cloud architects, DBAs, and developers to ensure a successful Aurora
to CloudSQL migration.
 Conducted performance tuning and optimization of Postgres databases, analyzing query execution plans and database
statistics to improve overall system performance.
 Designed and implemented data migration strategies to migrate on-premises databases to Amazon Aurora, minimizing
downtime and ensuring a seamless transition.
 Established VPC peering connections to enable secure communication between different VPCs within the same or different
AWS accounts.
 Proficient in creating advanced dashboards and Visual Insights using a Power BI environment.
 Presented key performance indicators (KPIs), trends, and patterns through visualization.
 Integrated Power BI with other Microsoft services such as SharePoint Online and Teams for seamless collaboration and
reporting distribution.
 Utilized database activity monitoring (DAM) tools to track user actions, detect suspicious behavior, and respond to security
incidents.
 Developed and maintained database objects such as tables, views, and stored procedures in DB2 UDB to support
application functionality and business processes.
 Extensive experience in writing complex SQL queries, stored procedures, and triggers in PostgreSQL to support application
functionality and business requirements.
 Developed and optimized stored procedures and functions using T-SQL.
 Designed and implemented complex SSIS packages for data migration and integration.
 Assisted in Microsoft SQL Server management and database maintenance.
 Used AWS Database Migration Service (DMS) and the Schema Conversion Tool along with the Matillion ETL tool.
 Knowledge of Modifying and maintaining SQL Server stored procedures, views, and SSIS packages.
 Created documentation concerning common manual EDI errors & the appropriate resolution process.
 Fine-tuned Spark applications/jobs to improve the pipelines' efficiency and overall processing time.
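
A minimal sketch of the Kafka-to-HDFS streaming pattern described in this list, using PySpark Structured Streaming (requires the spark-sql-kafka connector); the broker address, topic name, schema, and HDFS paths are assumptions for illustration:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StringType, DoubleType

    spark = SparkSession.builder.appName("claims_stream").getOrCreate()

    # Expected shape of each JSON event (illustrative)
    schema = (StructType()
              .add("claim_id", StringType())
              .add("amount", DoubleType()))

    # Consume JSON events from a Kafka topic
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "broker1:9092")
           .option("subscribe", "claims")
           .load())

    parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(F.from_json("json", schema).alias("c"))
              .select("c.*"))

    # Write the processed stream to HDFS as Parquet with checkpointing
    (parsed.writeStream.format("parquet")
     .option("path", "hdfs:///data/claims")
     .option("checkpointLocation", "hdfs:///checkpoints/claims")
     .start()
     .awaitTermination())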

Environment: Amazon Web Services, Elastic MapReduce cluster, Scala, PySpark, EC2, CloudFormation, Amazon S3,
Amazon Redshift, DynamoDB, Matillion, CloudWatch, .NET, C#, MS SQL Server 2012/2008 R2, SQL Server Integration
Services (SSIS), MS SQL Server Reporting Services (SSRS), Power BI, MS SQL Server Analysis Services (SSAS), MS Access
2007, MS Excel.

McKesson, Irving, TX June 2020 – July 2021


Database Developer
Responsibilities:
 Participated in daily scrum discussions and collaborated with the team to define stories, discuss status, and address blockers.
 Assessed current architecture and created a detailed migration plan using a work breakdown structure aligned with the
cloud migration strategy.
 Led Oracle database migrations, including on-premise to cloud transitions, ensuring seamless migration with minimal
downtime.
 Assisted in data management deliverables such as business need analysis and high-level data modeling techniques within
the Azure cloud platform.
 Worked with technical and business peers on development efforts, including designing, analyzing, assessing, and reporting,
focusing on Azure cloud services.
 Developed scripts with stored procedures, views, functions, streams, tasks, workflows, and complex queries within the
Azure environment.
 Experienced in using PL/SQL to design and implement ETL processes, facilitating data integration between different systems
and databases.
 Experienced in developing RESTful APIs and microservices using Python frameworks like Flask and Django to enable
seamless integration between systems (see the sketch following this list).
 Configured CI/CD pipelines to streamline code deployment processes, reducing manual effort and minimizing deployment
times.
 Implemented data masking and obfuscation techniques for non-production environments to maintain data privacy during
development and testing.
 Proficient in writing and optimizing complex queries using NoSQL-specific query languages like MongoDB Query Language
(MQL) and Cassandra Query Language (CQL).
 Performed database tuning in Oracle environments, optimizing performance through the use of AWR (Automatic Workload
Repository) reports, SQL tuning advisors, and indexing strategies.
 Performed continuous data replication through DMS, keeping source and target databases in sync during complex migration
projects with minimal data loss and downtime.
 Optimized query routing with ProxySQL to direct read/write traffic across master-slave databases, ensuring optimal resource
utilization and performance.
 Created custom visuals and templates in Power BI to meet specific reporting requirements and enhance data visualization.
 Designed and deployed scalable data processing pipelines on Databricks to ingest, process, and analyze large volumes of
data efficiently, enabling real-time insights and decision-making.
 Utilized Helm charts to deploy and manage Kubernetes applications, ensuring version control and easy updates.
 Wrote analysis documents on existing ETL/ELT load processes to aid in effort estimations for the migration process to Azure.
 Provided level of estimates, wrote detailed Design/Solution documents, and developed ETL loading strategies along with
offshore developers, emphasizing Azure cloud technologies.
 Crafted complex, high-performance SQL queries in RDBMS (SQL Server, DB2, Oracle) and NoSQL systems within the Azure
ecosystem.
 Utilized Oracle PL/SQL packages to modularize code, making it reusable and easier to maintain, thereby reducing
development time.
 Developed build automation scripts using tools like Maven, Gradle, and npm to ensure consistent and reliable software
builds within CI/CD pipelines.
 Integrated NoSQL databases with big data frameworks like Apache Spark and Kafka to enable real-time data ingestion,
processing, and analytics.
 Conducted thorough analysis of database workloads, identifying bottlenecks and improving system performance through
indexing, query optimization, and resource allocation.
 Leveraged Databricks as a unified analytics platform to streamline data engineering processes and perform advanced
analytics tasks within the Azure ecosystem.
 Utilized Python for data analysis and visualization, employing libraries such as Matplotlib, Seaborn, and Plotly to generate
insights and reports.
 Validated data loads from source to stage and then ODS and Data Mart layers within Azure cloud databases.
 Built automated processes to perform database and ETL object dependency analysis within the Azure environment.
 Utilized troubleshooting skills and collaborated closely with operations, support, engineering, and other functions to ensure
successful migrations to Azure.
 Enhanced security by integrating ProxySQL with TLS/SSL encryption for secure communication between database clients and
servers.
 Optimized DMS tasks and configurations to handle large-scale data migrations efficiently, reducing migration time and
ensuring high data integrity.
 Familiarity with PostgreSQL extensions and contributed modules, leveraging additional functionality and capabilities to
enhance database operations and performance.
 Conducted performance tuning and optimization of containerized applications to ensure efficient resource utilization and
fast startup times.
 Optimized database performance through query tuning, index management, and partitioning strategies, reducing query
execution times by over 50%.
 Supported end-to-end data flow in the new Azure cloud database/data platform.
 Accountable for enhancements and testing of new changes to the Azure cloud platform.
 Experienced in integrating Postgres databases with application frameworks and development platforms to support seamless
data access and manipulation.
 Tuned and optimized Cloud Spanner instances by adjusting key configurations such as splits, nodes, and query plans,
improving latency and throughput.
 Performed database-level dependency analysis on existing and new database objects, identifying objects for
decommissioning or migration to another existing database within the Azure ecosystem.
 Familiarity with advanced features of DB2 UDB such as database partitioning, data compression, and workload management
for optimizing database performance and resource utilization.
 Identified risks, actions, and issues through proactive communication and collaboration with stakeholders and various
teams, ensuring alignment within the Azure cloud platform.
 Designed and implemented cross-region replication strategies in Spanner, ensuring fault tolerance and minimizing data loss
in case of system failures.
 Worked on converting existing SAS reports to Power BI within the Azure environment.
 Created reports based on statistical analysis of data from various time frames and divisions using PowerPivot, Power Query,
Power View, and Power BI within the Azure cloud platform.
 Developed and deployed Power BI reports to provide comprehensive insights into banking data, including customer
transactions, financial trends, and risk analysis.
 Implemented row-level security and dynamic parameterized filtering in Power BI to ensure data privacy and enable
personalized data views for different stakeholders within the banking organization.
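
A minimal sketch of the kind of Python/Flask REST microservice noted in this list; the /orders resource and its in-memory store are purely illustrative stand-ins for a database-backed service:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # In-memory stand-in for a database table (illustrative only)
    ORDERS = {"1001": {"customer": "acme", "amount": 250.0}}

    @app.route("/orders/<order_id>", methods=["GET"])
    def get_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(order)

    @app.route("/orders", methods=["POST"])
    def create_order():
        payload = request.get_json(force=True)
        order_id = str(1000 + len(ORDERS) + 1)
        ORDERS[order_id] = payload
        return jsonify({"id": order_id}), 201

    if __name__ == "__main__":
        app.run(port=5000)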

Environment: Microsoft Azure, RDBMS (SQL Server, DB2, Oracle), NoSQL systems, Power BI, Azure services (such as
Azure VMs, Azure Blob Storage, Azure Data Factory, Azure Functions, Databricks), SQL, Python, SAS.

Spirit Airlines, Dania Beach, FL February 2019 to May 2020


SQL BI Developer/ETL Developer
Responsibilities:
 Created ETL SSIS packages to move the data from source to SQL Server staging Database and from database tables to Flat
files.
 Experienced in setting up automated security patching and updates for databases to address known vulnerabilities and
ensure compliance.
 Worked extensively with SSIS packages for ETL processes, leveraging various data transformations to extract, transform,
and load data from heterogeneous sources to SQL Server databases.
 Hands-on experience with SSIS Import/Export packages for transferring data from heterogeneous database sources to SQL
Server; utilized SSIS for efficient data integration and ETL processes.
 Implemented advanced SQL optimization techniques in DB2 UDB to improve query performance and reduce execution time.
 Created machine learning models using Python libraries like Scikit-Learn, TensorFlow, and Keras for predictive analytics and
data classification tasks.
 Experienced in using NoSQL databases for use cases such as caching, session management, user profiling, and content
management, ensuring fast and reliable data access.
 Configured and managed Oracle Real Application Clusters (RAC) to provide high availability, load balancing, and fault
tolerance for mission-critical applications.
 Developed scripts and automation tools to simplify the setup and monitoring of DMS tasks, improving the overall efficiency
of the database migration process.
 Integrated PL/SQL programs with front-end applications, enabling seamless interaction between user interfaces and
backend data operations.
 Implemented automated scaling policies for Aurora clusters based on workload demands, ensuring optimal resource
utilization and cost efficiency.
 Experienced in implementing Infrastructure as Code (IaC) practices with Terraform and CloudFormation, integrated with
CI/CD workflows for automated infrastructure provisioning.
 Involved in creating SQL stored procedures, Functions, Views, and Indexes in Microsoft SQL Server using appropriate
business rules and standards for better performance. Imported data from SQL Server Database to Power BI to generate
reports.
 Created packages by using multiple transformations in SSIS to meet the business requirements by extracting data from
heterogeneous source systems, transforming, and finally loading into the Staging Database.
 Implemented advanced data partitioning strategies in PostgreSQL to manage large datasets efficiently and improve query
performance.
 Utilized performance tuning tools like SQL Profiler, Query Store, and Explain Plans to diagnose and resolve slow queries,
ensuring efficient data retrieval and processing.
 Automated the provisioning and configuration of container environments using Infrastructure as Code (IaC) tools like
Terraform and Ansible.
 Configured DB2 UDB HADR (High Availability Disaster Recovery) for synchronous replication and automatic failover in
distributed environments.
 Utilized Oracle Data Pump for efficient data import/export operations, supporting seamless data migration across
environments.
 Gained experience with AWS cloud services like EC2, RDS, MSK, and Lambda for hosting and managing data pipelines and
ETL processes in the cloud environment.
 Configured and optimized Aurora read replicas to offload read-heavy workloads and improve overall database performance.
 Utilized Git hooks and CI/CD jobs to enforce code quality checks, including linting, unit testing, and static code analysis,
improving codebase quality.
 Created DAX queries to generate computed columns and computed tables in Power BI.
 Involved in creating new stored procedures and optimizing existing queries and stored procedures.
 Used Power BI and Power Pivot to develop data analysis prototypes, and used Power View and Power Map to visualize
reports.
 Managed Oracle RDBMS in both on-premises and cloud environments, including configuration, monitoring, and
troubleshooting to ensure smooth operations.
 Monitored and logged containerized applications using tools such as Prometheus, Grafana, and ELK stack to ensure
operational visibility and troubleshooting.
 Conducted thorough performance testing and troubleshooting post-migration to ensure optimal performance of the target
database after using DMS.
 Implemented web scraping solutions with Python using libraries such as BeautifulSoup and Scrapy to extract and process
data from various web sources (see the sketch following this list).
 Published Power BI reports to the required organizations and made Power BI dashboards available to web clients and
mobile apps. Explored data in a variety of ways and across multiple visualizations using Power BI.
 Designed and created complex PL/SQL packages to encapsulate reusable code, improving modularity and reducing
maintenance efforts.
 Utilized PostgreSQL's JSONB data type and built-in JSON functions to store and query JSON data within relational databases.
 Used Power BI Gateways to keep the dashboards and reports up to date. Published reports and dashboards using Power BI.
 Coordinated with business users in gathering business requirements and translating them into technical specifications.
 Collaborated with the QA team to ensure adequate testing of software both before and after completion, maintained
quality procedures, and ensured that appropriate documentation was in place. Prepared a questionnaire based on the system
analysis performed for all entities.
 Documented and maintained database system specifications, diagrams, and connectivity charts. Documented the
application functional specification document.
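
A minimal sketch of the web-scraping approach mentioned in this list, using requests and BeautifulSoup; the URL and CSS selectors are placeholders rather than a real data source:

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/fares"  # placeholder source

    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")

    # Pull each table row into a plain dict; the selectors are illustrative
    fares = []
    for row in soup.select("table.fares tr")[1:]:
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 2:
            fares.append({"route": cells[0], "price": cells[1]})

    print(fares)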

Environment: SQL Server Database, Power BI, Power Pivot, Power View, Power Map, SSIS (SQL Server Integration
Services), AWS cloud services (EC2, RDS, MSK, Lambda), Microsoft SQL Server, Heterogeneous data sources, Web clients,
Mobile apps.
Optimal Solutions Pvt Ltd, India October 2015 to August 2018
Database Engineer
Responsibilities:
 Loaded datasets from multiple sources (Excel, SQL Server Databases, Files, Views) into Tableau for analytics, visualization,
and reporting.
 Supported and enhanced enterprise data platforms, building and maintaining optimal data pipelines from data sources.
 Engaged in requirement-gathering discussions with business process architects/data architects and translated business
requirements into analytics solutions.
 Implemented advanced Tableau features such as parameters, sets, groups, and hierarchies to create dynamic and
interactive dashboards.
 Utilized Oracle PL/SQL for developing complex queries, stored procedures, and triggers to meet business reporting and data
processing needs.
 Implemented security best practices in NoSQL environments, including authentication, authorization, data encryption, and
auditing.
 Enabled blue-green deployments and canary releases for containerized applications to minimize downtime and mitigate
risks during updates.
 Developed test scripts and automated testing frameworks in Python using tools like PyTest and Unittest to ensure
application quality and reliability.
 Improved application and database interaction by optimizing stored procedures, functions, and triggers, resulting in faster
response times and reduced resource consumption.
 Used ProxySQL for query filtering and rewriting, improving overall query performance and enforcing query restrictions
without altering application code.
 Implemented and managed scalable, globally-distributed databases using Google Cloud Spanner, ensuring high availability
and consistency across regions.
 Developed advanced PL/SQL scripts to implement business logic, data processing, and automation tasks, enhancing system
functionality and efficiency.
 Configured continuous deployment (CD) pipelines to enable zero-downtime deployments, rolling updates, and canary
releases, ensuring smooth production rollouts.
 Using Azure Data Factory, designed and deployed ETL pipelines that land data in an Azure Blob Storage data lake.
 Mastered the art of creating joins, relationships, data blending, calculated fields, Level-of-Detail (LOD) expressions, and
much more in Tableau to ensure seamless dashboard experiences.
 Migrated enterprise-level Oracle databases to Google Cloud Spanner, optimizing schemas and queries to leverage
Spanner’s distributed architecture.
 Developed analytics solutions ranging from data storage, ETL (extract/transform/load), and data modeling (conceptual,
logical, and physical) for business consumption (reporting and visualization) primarily using Tableau.
 Worked with Azure SQL Data Warehouse and Azure Blob Storage for integrating data from multiple source systems, which
include loading nested JSON formatted data into Azure SQL tables.
 Ensured solution compliance with solution design, best practices, technical architecture, design standards, technology
roadmaps, and business requirements.
 Monitored and tuned ProxySQL configurations for high-throughput environments, achieving lower query latency and
increased connection management efficiency.
 Leveraged Azure Databricks for scalable data engineering and analytics processing.
 Designed and developed complex calculations and calculated fields in Tableau to derive key performance indicators (KPIs)
and metrics for business analysis.
 Utilized Azure Synapse Analytics for high-performance analytics and data warehousing needs.
 Proficient in creating visual summaries to understand the structural properties of order XMLs using Python packages
(Matplotlib and Seaborn).
 Utilized Tableau Prep Builder for data preparation tasks, including data cleaning, shaping, and blending, to ensure high-
quality data for analysis.
 Deployed and managed Azure Data Lake Storage for storing large volumes of structured and unstructured data.
 Used Agile strategies to provide quick and feasible solutions to the organization.
 Connected Tableau with Python using TabPy to provide fast visual analytics solutions with data mapping.
 Created Reports in Tableau using various features such as visuals (Charts, Graphs, KPIs, Maps, etc.) for insightful analytics.
 Worked on Row-level Security, Dynamic parameterized filtering, and generated trends/insights through the forecasting
feature in the Analytics tab in Tableau.
 Integrated Azure Machine Learning for advanced analytics, predictive modeling, and building machine learning pipelines.
 Innovated and deployed Tableau BI Dashboard reporting solutions tailored for diverse groups.
 Utilized Python for data manipulation and preprocessing tasks in conjunction with Tableau for comprehensive analytics
solutions.
 Leveraged Azure Cosmos DB for globally distributed database applications requiring low-latency and high availability.
 Proficient with Python packages (NumPy, SciPy, and Pandas) for performing exploratory data analysis (EDA) to gain an
overall picture of the data (see the sketch following this list).
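
A minimal sketch of the NumPy/Pandas exploratory data analysis workflow mentioned above; the input file and column names are hypothetical:

    import pandas as pd

    df = pd.read_csv("orders.csv")  # hypothetical extract

    # Structural overview: shape, types, and missing values
    print(df.shape)
    print(df.dtypes)
    print(df.isna().sum())

    # Summary statistics for numeric columns
    print(df.describe())

    # Distribution of a categorical field
    print(df["status"].value_counts(normalize=True))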

Environment: Tableau, ETL, Microsoft Azure, Azure Data Factory, Azure SQL Data Warehouse, Azure Blob Storage, Azure
Databricks, Azure Synapse Analytics, Azure Functions, Azure Data Lake Storage, Python, TabPy, PySpark.

Commvault Systems, India June 2013 – September 2015


ETL- SQL Developer
Responsibilities:
 Engineered and deployed a scalable ETL framework employing Sqoop, Pig, and Hive to facilitate efficient extraction,
transformation, and loading of data from diverse sources, ensuring uninterrupted data availability.
 Leveraged Hive to establish external tables and crafted reusable scripts within the Hadoop Distributed File System (HDFS),
streamlining table ingestion and repair processes across the project.
 Executed robust ETL tasks utilizing Spark and Scala to seamlessly migrate data from Oracle to MySQL tables,
 maintaining data integrity throughout the process.
 Utilized Spark's versatile features, including RDDs, Data Frames, and Spark SQL, alongside Spark-Cassandra
 Connector APIs, to address varied data requirements such as migration and report generation.
 Engineered a high-performance Spark Streaming application to enable real-time sales analytics, facilitating prompt
decision-making.
 Implemented PL/SQL cursors (explicit and implicit) and ref cursors for efficient data fetching and processing, enabling
smooth data handling in backend systems.
 Migrated legacy applications to containerized environments, improving scalability, portability, and manageability.
 Conducted thorough data analysis of source data, managed data type modifications efficiently, and utilized various file
formats including Excel sheets and CSV files to generate dynamic reports.
 Devised optimized solutions using PySpark based on detailed analysis of SQL scripts, ensuring efficient data processing and
transformation (see the sketch following this list).
 Utilized PL/SQL collections, such as associative arrays, nested tables, and VARRAYs, to process multiple records in a single
operation, improving performance.
 Employed Sqoop for streamlined data extraction from multiple sources into HDFS, promoting seamless data integration.
 Orchestrated data imports from diverse sources, executed transformations employing Hive and MapReduce, and
seamlessly loaded processed data into HDFS.
 Facilitated smooth data transfer and integration by extracting data from MySQL databases into HDFS using Sqoop.
 Implemented efficient automation for deployments through YAML scripts, accelerating build and release processes.
 Leveraged a suite of Apache tools including Hive, Pig, HBase, Spark, Zookeeper, Flume, Kafka, and Sqoop to optimize data
processing and management.
 Developed data classification algorithms employing MapReduce design patterns, enhancing processing efficiency and
accuracy.
 Implemented advanced techniques such as combiners, partitioning, and distributed cache to optimize the performance of
MapReduce jobs.
 Ensured comprehensive source code management and version control through Git and GitHub repositories,
promoting efficient collaboration and traceability of code changes.
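
A minimal sketch of rewriting a SQL aggregation as a PySpark job, as described in this list; the table, columns, and HDFS paths are assumptions:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sql_rewrite").getOrCreate()

    # Hypothetical source data in HDFS
    sales = spark.read.parquet("hdfs:///warehouse/sales")

    # Equivalent of:
    #   SELECT region, SUM(amount) AS total FROM sales
    #   WHERE year = 2015 GROUP BY region HAVING SUM(amount) > 10000
    result = (sales.filter(F.col("year") == 2015)
              .groupBy("region")
              .agg(F.sum("amount").alias("total"))
              .filter(F.col("total") > 10000))

    result.write.mode("overwrite").parquet("hdfs:///warehouse/sales_summary")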

Environment: Sqoop, Pig, HDFS, Apache Cassandra, ZooKeeper, Flume, Kafka, Apache Spark, Scala, Hive, Hadoop,
Cloudera, HBase, MySQL, YAML, JIRA, Git, GitHub
