Nivas Sr. Database Developer
Work Experience
First Horizon Bank, Memphis, TN January 2023 – Present
Sr. Database Developer
Responsibilities:
Worked independently in the development, testing, implementation, and maintenance of systems of moderate-to-large size
and complexity.
Led end-to-end data center migration projects, ensuring minimal downtime and seamless transition of infrastructure,
applications, and databases.
Followed the Agile methodology through all phases of the software development life cycle (SDLC).
Worked extensively with AWS services, building a broad and in-depth understanding of each of them.
Created and developed data-load and scheduling processes for ETL jobs using the Matillion ETL package.
Extensive experience in designing, developing, and managing Oracle RDBMS solutions for high-performance, reliable, and
scalable database environments.
Proficient in database development and administration using PostgreSQL, including schema design, query optimization, and
performance tuning.
Developed and implemented performance optimization strategies, such as database compression, partitioning, and load
balancing, for high-traffic enterprise systems.
Implemented business logic in backend Python to achieve optimal results; used Tableau as a front-end BI tool and MS SQL Server as a back-end database to design and develop dashboards, workbooks, and complex aggregate calculations.
Responsible for ETL (Extract, Transform, and Load) processes to bring data from multiple sources into a single warehouse
environment.
Configured role-based access control (RBAC) and fine-grained access control (FGAC) to restrict data access based on user
roles and responsibilities.
Designed and implemented PL/SQL scripts for data processing tasks, such as data validation, transformation, and migration,
ensuring high performance and data integrity.
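As an illustration, a minimal sketch of this kind of PL/SQL-driven validation, run from Python with the python-oracledb driver; the connection details and the staging/error table names are hypothetical placeholders, not actual project objects:

    import oracledb  # python-oracledb driver for Oracle Database

    # Hypothetical validation block: quarantine rows with missing keys.
    PLSQL_VALIDATE = """
    BEGIN
      INSERT INTO stg_orders_err
        SELECT * FROM stg_orders WHERE order_id IS NULL;
      DELETE FROM stg_orders WHERE order_id IS NULL;
      COMMIT;
    END;
    """

    with oracledb.connect(user="etl_user", password="***", dsn="dbhost/orclpdb") as conn:
        with conn.cursor() as cur:
            cur.execute(PLSQL_VALIDATE)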
Developed data models optimized for performance, scalability, and flexibility, leveraging NoSQL structures such as
document stores, key-value pairs, and column-family databases.
Configured and managed NoSQL clusters to ensure high availability, fault tolerance, and scalability across distributed
environments.
Proficient in writing dynamic SQL and PL/SQL blocks to handle complex business logic and automate tasks within Oracle
databases.
Proficient in working with IBM's Universal Database (UDB) platform, including DB2 UDB, for managing relational databases
in enterprise environments.
Designed and enforced database security policies, including password management, user account auditing, and secure
authentication protocols (e.g., Kerberos, LDAP, SAML).
Designed and deployed VPCs in AWS to create isolated network environments for secure application hosting.
Provided technical support for debugging, code fixes, platform issues, missing data points, unreliable data-source connections, and big data transit issues.
Performed comprehensive performance tuning in Cloud SQL by optimizing query execution plans, indexing strategies, and
resource utilization to enhance database efficiency.
Developed and maintained PostgreSQL databases, ensuring data consistency, integrity, and security in compliance with
organizational requirements.
Designed and executed backup and disaster recovery strategies for Cloud SQL instances, ensuring data integrity and high
availability across environments.
Developed and optimized PL/SQL stored procedures, functions, triggers, and packages to enhance database performance
and maintainability.
Developed and executed migration strategies for servers, databases, applications, and storage from one data center to
another.
Developed error-handling and exception-management frameworks in PL/SQL to ensure robust and reliable execution of
database operations.
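A minimal sketch of the exception-management pattern described here, expressed as the PL/SQL source a Python deployment script might execute (via a driver such as python-oracledb); the procedure, feed tables, and error_log table are hypothetical examples:

    # Hypothetical PL/SQL procedure: log failures to an error table, then re-raise.
    CREATE_LOGGING_PROC = """
    CREATE OR REPLACE PROCEDURE load_daily_feed AS
    BEGIN
      INSERT INTO fact_sales SELECT * FROM stg_sales;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        INSERT INTO error_log (proc_name, err_code, err_msg, logged_at)
        VALUES ('LOAD_DAILY_FEED', SQLCODE, SUBSTR(SQLERRM, 1, 4000), SYSTIMESTAMP);
        COMMIT;
        RAISE;  -- surface the failure to the caller after logging it
    END;
    """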
Conducted pre-migration assessments and impact analysis to identify dependencies, risks, and potential challenges.
Extensive experience in using Git for version control, including branching, merging, and managing code repositories for
efficient collaboration and project management.
Investigated data sources to identify new data elements needed for data integration.
Experience in database administration tasks such as installation, configuration, backup, and recovery of UDB instances to
ensure data availability and integrity.
Created complex SQL queries and scripts to extract, aggregate, and validate data from MS SQL, Oracle, and flat files using Informatica, and loaded it into a single data warehouse repository.
Performed various mathematical computations using Python libraries.
Extensive experience in managing Aurora database clusters, including configuration, monitoring, and tuning to ensure
optimal performance and reliability.
Designed and documented Use Cases, Activity Diagrams, Sequence Diagrams, OOD (Object Oriented Design) using Visio.
Created complex program units using PL/SQL records and collection types.
Developed Data Migration and Cleansing rules for the Integration Architecture (OLTP, ODS, DW).
Created several Databricks Spark jobs with PySpark to perform table-to-table operations.
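As an illustrative example, a minimal PySpark sketch of a table-to-table job of this kind; the source and target table names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_to_daily_summary").getOrCreate()

    # Read a source table, aggregate, and write the result to a target table.
    orders = spark.table("raw.orders")  # hypothetical source table
    daily = (orders
             .where(F.col("status") == "COMPLETE")
             .groupBy("order_date")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("order_count")))
    daily.write.mode("overwrite").saveAsTable("analytics.daily_order_summary")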
Prepared Dashboards using calculations, parameters, calculated fields, groups, sets and hierarchies in Tableau.
Environment: Hadoop 3.0, Agile, Amazon Web Services, Elastic MapReduce cluster, Scala, PySpark, EC2, CloudFormation, Amazon S3, Amazon Redshift, Python, MS Visio, JIRA, MySQL, HDFS, Kafka 1.1, Git, Spark, OLTP, ODS, MongoDB, Tableau
Discovery Insurance Company, Kinston, North Carolina August 2021 – December 2022
Sr. Database Engineer
Responsibilities:
Worked on Kafka producers and consumers to stream data from external REST APIs to Kafka topics.
Utilized Spark Streaming to consume data from Kafka topics and write the processed streams to different databases and HDFS using PySpark.
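A minimal Spark Structured Streaming sketch of the Kafka-to-HDFS flow described above; the broker address, topic name, and paths are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

    # Consume a Kafka topic and persist the decoded payloads to HDFS as Parquet.
    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")
              .option("subscribe", "events")
              .load()
              .select(col("value").cast("string").alias("payload")))

    query = (stream.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/events")
             .option("checkpointLocation", "hdfs:///checkpoints/events")
             .start())
    query.awaitTermination()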
Used Spark's in-memory capabilities for handling large datasets efficiently. Implemented effective joins, transformations, and
other operations using Spark.
Implemented backup and recovery strategies for Aurora databases, ensuring data integrity and disaster recovery
preparedness.
Implemented ProxySQL as a high-performance MySQL/MariaDB proxy to manage database connections, enhancing
scalability and reducing connection overhead.
Implemented data sharding, replication, and partitioning strategies in NoSQL systems to enhance data distribution,
performance, and redundancy.
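As one concrete possibility, a minimal sketch of enabling hashed sharding, assuming MongoDB (listed in the environment above) as the NoSQL system; the router host, database, and shard key are hypothetical:

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-router:27017")

    # Shard the collection on a hashed key so writes spread evenly across shards.
    client.admin.command("enableSharding", "telemetry")
    client.admin.command("shardCollection", "telemetry.events",
                         key={"device_id": "hashed"})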
Extensive experience in developing scalable and efficient applications using Python, focusing on data processing,
automation, and web development.
Optimized PL/SQL code for performance, including tuning SQL queries, indexing, and using bulk operations to improve data
processing speeds.
Conducted regular security audits and vulnerability assessments on databases to identify and mitigate potential security
risks.
Developed data pipelines using Python libraries such as Pandas, NumPy, and PySpark to handle data extraction,
transformation, and loading (ETL) processes.
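A minimal Pandas-based ETL sketch in the style described above; the file path, column names, and warehouse connection string are hypothetical:

    import pandas as pd
    from sqlalchemy import create_engine

    # Extract: read the raw daily file.
    raw = pd.read_csv("/data/in/transactions.csv", parse_dates=["txn_date"])

    # Transform: drop incomplete rows and derive a reporting column.
    clean = raw.dropna(subset=["account_id", "amount"]).copy()
    clean["txn_month"] = clean["txn_date"].dt.to_period("M").astype(str)

    # Load: append into a warehouse staging table.
    engine = create_engine("postgresql+psycopg2://etl@warehouse/analytics")
    clean.to_sql("stg_transactions", engine, if_exists="append", index=False)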
Designed and implemented PL/SQL-based ETL processes, extracting and transforming data from multiple sources into enterprise data warehouses.
Developed Power BI data models utilizing advanced DAX calculations for complex business logic and analysis.
Developed Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data.
Built an AWS CI/CD data pipeline and an AWS data lake using EC2, AWS Glue, and AWS Lambda.
Implemented row-level security (RLS) and dynamic security models in Power BI to restrict data access based on user roles
and permissions.
Implemented Git branching strategies (e.g., Git Flow, feature branches) to facilitate parallel development, code review, and
seamless integration.
Automated data migration workflows using scripts and tools like Ansible, Terraform, and PowerShell, reducing manual
efforts and errors.
Administered and optimized Oracle databases, including tuning SQL queries, managing database schemas, and automating
backup and recovery processes.
Designed and implemented database migration strategies using DMS, including pre-migration assessments, data replication
setups, and real-time monitoring.
Configured database firewalls and network security settings to prevent unauthorized access and ensure compliance with
regulatory requirements (e.g., GDPR, HIPAA, PCI-DSS).
Performed thorough pre-migration assessments, including dependency analysis and database performance benchmarking, to ensure a seamless migration from Aurora to Cloud SQL.
Implemented Network Access Control Lists (NACLs) and Security Groups to enforce granular security policies for controlling
inbound and outbound traffic.
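For illustration, a minimal boto3 sketch of the kind of security-group rule this involves; the group ID, port, and CIDR range are hypothetical:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow PostgreSQL traffic only from the application subnet.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "app tier"}],
        }],
    )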
Designed and optimized schemas for NoSQL databases, considering access patterns, query requirements, and data storage
efficiency.
Implemented database triggers in PL/SQL to enforce business rules, maintain data integrity, and automate real-time data
synchronization.
Utilized monitoring tools like Stackdriver and Cloud Monitoring to track database performance metrics in Cloud SQL,
proactively addressing latency or throughput issues.
Implemented advanced features of PostgreSQL such as partitioning, replication, and indexing to optimize database
performance and scalability.
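A minimal sketch of PostgreSQL declarative range partitioning, applied from Python with psycopg2; the table, partition, and connection details are hypothetical:

    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS events (
        event_id   bigint,
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE IF NOT EXISTS events_2022
        PARTITION OF events
        FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');

    CREATE INDEX IF NOT EXISTS idx_events_2022_created
        ON events_2022 (created_at);
    """

    # psycopg2's connection context manager commits the transaction on success.
    with psycopg2.connect("dbname=analytics user=dba") as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)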
Integrated Git repositories with CI/CD tools to automate code builds, run automated tests, and deploy applications across
multiple environments.
Optimized post-migration database configurations on Cloud SQL to achieve performance parity or improvements, addressing specific workloads and queries.
Designed and implemented advanced Oracle database solutions, including RAC (Real Application Clusters) for high
availability and scalability, and Data Guard for disaster recovery.
Configured ProxySQL for load balancing, query caching, and failover handling in high-availability database clusters,
significantly improving uptime and performance.
Implemented automation scripts in Python to streamline repetitive tasks, including data integration, file processing, and
system monitoring.
Configured and managed container orchestration platforms such as Kubernetes to automate deployment, scaling, and
management of containerized applications.
Sound knowledge of developing ETL processes in AWS Glue to migrate data from external sources into Amazon Redshift.
Hands-on experience in converting existing AWS infrastructure to serverless architecture with AWS Lambda and Kinesis, deploying with Terraform and AWS CloudFormation.
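As a sketch of the serverless pattern referenced above, a minimal AWS Lambda handler for Kinesis events; the downstream processing is a hypothetical placeholder:

    import base64
    import json

    def handler(event, context):
        # Kinesis delivers record payloads base64-encoded in the event envelope.
        for record in event["Records"]:
            payload = base64.b64decode(record["kinesis"]["data"])
            message = json.loads(payload)
            print("partition key:", record["kinesis"]["partitionKey"], message)
        return {"records_processed": len(event["Records"])}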
Designed and implemented data refresh schedules and incremental data loading strategies in Power BI Service to ensure
data accuracy and timeliness.
Managed cross-functional teams and coordinated with cloud architects, DBAs, and developers to ensure a successful Aurora to Cloud SQL migration.
Conducted performance tuning and optimization of Postgres databases, analyzing query execution plans and database
statistics to improve overall system performance.
Designed and implemented data migration strategies to migrate on-premises databases to Amazon Aurora, minimizing
downtime and ensuring a seamless transition.
Established VPC peering connections to enable secure communication between different VPCs within the same or different
AWS accounts.
Proficient in creating advanced dashboards and visual insights in the Power BI environment.
Presented key performance indicators (KPIs), trends, and patterns through visualization.
Integrated Power BI with other Microsoft services such as SharePoint Online and Teams for seamless collaboration and
reporting distribution.
Utilized database activity monitoring (DAM) tools to track user actions, detect suspicious behavior, and respond to security
incidents.
Developed and maintained database objects such as tables, views, and stored procedures in DB2 UDB to support
application functionality and business processes.
Extensive experience in writing complex SQL queries, stored procedures, and triggers in PostgreSQL to support application
functionality and business requirements.
Developed and optimized stored procedures and functions using T-SQL.
Designed and implemented complex SSIS packages for data migration and integration.
Assisted in Microsoft SQL Server management and database maintenance.
Used AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool along with the Matillion ETL tool.
Knowledge of modifying and maintaining SQL Server stored procedures, views, and SSIS packages.
Created documentation concerning common manual EDI errors & the appropriate resolution process.
Fine-tuned Spark applications and jobs to improve pipeline efficiency and overall processing time.
Environment: Amazon Web Services, Elastic MapReduce cluster, Scala, PySpark, EC2, CloudFormation, Amazon S3, Amazon Redshift, DynamoDB, Matillion, CloudWatch, .NET, C#, MS SQL Server 2012/2008 R2, SQL Server Integration Services (SSIS), MS SQL Server Reporting Services (SSRS), Power BI, MS SQL Server Analysis Services (SSAS), MS Access 2007, MS Excel.
Optimal Solutions Pvt Ltd, India October 2015 – August 2018
Database Engineer
Responsibilities:
Loaded datasets from multiple sources (Excel, SQL Server Databases, Files, Views) into Tableau for analytics, visualization,
and reporting.
Supported and enhanced enterprise data platforms, building and maintaining optimal data pipelines from data sources.
Engaged in requirement-gathering discussions with business process architects/data architects and translated business
requirements into analytics solutions.
Implemented advanced Tableau features such as parameters, sets, groups, and hierarchies to create dynamic and
interactive dashboards.
Utilized Oracle PL/SQL for developing complex queries, stored procedures, and triggers to meet business reporting and data
processing needs.
Implemented security best practices in NoSQL environments, including authentication, authorization, data encryption, and
auditing.
Enabled blue-green deployments and canary releases for containerized applications to minimize downtime and mitigate
risks during updates.
Developed test scripts and automated testing frameworks in Python using tools like PyTest and Unittest to ensure
application quality and reliability.
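An illustrative PyTest sketch of this kind of automated test; the function under test (normalize_amount) is a hypothetical example, not an actual project function:

    import pytest

    def normalize_amount(raw: str) -> float:
        """Strip currency symbols and separators, returning a float."""
        return float(raw.replace("$", "").replace(",", ""))

    @pytest.mark.parametrize("raw,expected", [
        ("$1,234.50", 1234.50),
        ("99", 99.0),
    ])
    def test_normalize_amount(raw, expected):
        assert normalize_amount(raw) == expected

    def test_normalize_amount_rejects_garbage():
        with pytest.raises(ValueError):
            normalize_amount("not-a-number")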
Improved application and database interaction by optimizing stored procedures, functions, and triggers, resulting in faster
response times and reduced resource consumption.
Used ProxySQL for query filtering and rewriting, improving overall query performance and enforcing query restrictions
without altering application code.
Implemented and managed scalable, globally distributed databases using Google Cloud Spanner, ensuring high availability and consistency across regions.
Developed advanced PL/SQL scripts to implement business logic, data processing, and automation tasks, enhancing system
functionality and efficiency.
Configured continuous deployment (CD) pipelines to enable zero-downtime deployments, rolling updates, and canary
releases, ensuring smooth production rollouts.
Designed and deployed ETL pipelines with Azure Data Factory, landing the output in an Azure Blob Storage data lake.
Created joins, relationships, data blending, calculated fields, Level-of-Detail (LOD) expressions, and more in Tableau to ensure seamless dashboard experiences.
Migrated enterprise-level Oracle databases to Google Cloud Spanner, optimizing schemas and queries to leverage
Spanner’s distributed architecture.
Developed analytics solutions ranging from data storage, ETL (extract/transform/load), and data modeling (conceptual,
logical, and physical) for business consumption (reporting and visualization) primarily using Tableau.
Worked with Azure SQL Data Warehouse and Azure Blob Storage for integrating data from multiple source systems, which
include loading nested JSON formatted data into Azure SQL tables.
Ensured solution compliance with solution design, best practices, technical architecture, design standards, technology
roadmaps, and business requirements.
Monitored and tuned ProxySQL configurations for high-throughput environments, achieving lower query latency and
increased connection management efficiency.
Leveraged Azure Databricks for scalable data engineering and analytics processing.
Designed and developed complex calculations and calculated fields in Tableau to derive key performance indicators (KPIs)
and metrics for business analysis.
Utilized Azure Synapse Analytics for high-performance analytics and data warehousing needs.
Proficient in creating visual summaries of the structural properties of order XMLs using Python packages (Matplotlib and Seaborn).
Utilized Tableau Prep Builder for data preparation tasks, including data cleaning, shaping, and blending, to ensure high-
quality data for analysis.
Deployed and managed Azure Data Lake Storage for storing large volumes of structured and unstructured data.
Applied Agile methods and strategies to deliver quick, feasible solutions to the organization.
Connected Tableau with Python using TabPy to provide fast visual analytics solutions with data mapping.
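A minimal sketch of deploying a Python function to a TabPy server so a Tableau calculated field can call it; the endpoint URL and the function itself are hypothetical examples:

    from tabpy.tabpy_tools.client import Client

    def zscore(values):
        # Imported inside the function so TabPy can serialize and re-run it.
        import statistics
        mean = statistics.mean(values)
        sd = statistics.pstdev(values) or 1.0
        return [(v - mean) / sd for v in values]

    client = Client("http://tabpy-host:9004/")
    client.deploy("zscore", zscore, "Per-column z-scores for Tableau", override=True)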
Created Reports in Tableau using various features such as visuals (Charts, Graphs, KPIs, Maps, etc.) for insightful analytics.
Worked on Row-level Security, Dynamic parameterized filtering, and generated trends/insights through the forecasting
feature in the Analytics tab in Tableau.
Integrated Azure Machine Learning for advanced analytics, predictive modeling, and building machine learning pipelines.
Innovated and deployed Tableau BI Dashboard reporting solutions tailored for diverse groups.
Utilized Python for data manipulation and preprocessing tasks in conjunction with Tableau for comprehensive analytics
solutions.
Leveraged Azure Cosmos DB for globally distributed database applications requiring low-latency and high availability.
Proficient with Python packages (NumPy, SciPy, and Pandas), using them to perform exploratory data analysis (EDA) and gain an overall picture of the data.
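A minimal EDA sketch with Pandas, NumPy, and SciPy along these lines; the input file and column names are hypothetical:

    import numpy as np
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("/data/orders.csv")

    print(df.describe(include="all"))      # summary statistics per column
    print(df.isna().mean().sort_values())  # fraction of missing values per column

    pair = df[["quantity", "amount"]].dropna()
    print(np.corrcoef(pair["quantity"], pair["amount"])[0, 1])  # pairwise correlation
    print(stats.skew(pair["amount"]))      # distribution shape check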
Environment: Tableau, ETL, Microsoft Azure, Azure Data Factory, Azure SQL Data Warehouse, Azure Blob Storage, Azure Databricks, Azure Synapse Analytics, Azure Functions, Azure Data Lake Storage, Python, TabPy, PySpark.