
SAI.

Technical Proficiency
● DevOps tools: Puppet, HPOO, Docker, Git, Jenkins, Ansible, Bitbucket, Ansible Tower, Kubernetes, Chef, Terraform, Terragrunt, Packer
● Databases: Oracle DBA 11g, cloning, RAC
● Tools: Commvault (RMAN backups), Netcool, Splunk, BSM, BladeLogic Server Automation, GoldenGate
● Ticketing tools: CA Service Desk, JIRA
● Other software expertise: WebSphere, JBoss, shell scripting, Python scripting, YAML, JSON
● AWS: Glue, Glue Catalog database, Crawler, Lambda, Step Functions, SNS, SQS, Elastic Beanstalk, S3, EC2 instances, VPC, ECS, Fargate, CloudWatch, CloudFormation, Terraform, boto3, AWS CLI, data lake, Redshift, Attunity Replicate, Talend, CodeCommit, CodeBuild, CodeDeploy, EMR, EKS, IAM
● AWS big data components: Glue, Glue Catalog database, RDS, EMR, data pipelines
● GCP: VM instances, firewalls, GKE (Kubernetes), Dataflow
● Azure: VMs and networking
● Big data: batch processing, migration of a data lake from on premises to AWS, Alteryx

Summary

Cloud activities:
● Creation of EC2 instances and security groups, and preparation of application servers with CloudFormation templates and Ansible
● Creation of a data lake, including provisioning of S3, the Glue Catalog database, Glue jobs, DynamoDB, and crawlers, and creation of Step Functions and Lambdas for each layer of the ETL (a minimal sketch follows this list)
● End-to-end provisioning and deployment of the ETL flow using S3 and Glue with Terraform and Terragrunt
● Designing data flows on AWS based on requirements
● Creation of CI/CD pipelines using CodeBuild
● Creating buildspec files for CodeBuild actions
● Creating API Gateways and integrating them with applications and Route 53
● Accessing Lambdas, RDBMS, SNS, SQS, Secrets Manager, and SFTP services using Terraform
● Provisioning a Redshift cluster and performing admin tasks
● Creating a GKE (Kubernetes) cluster on GCP using Terraform
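A minimal sketch of the data lake orchestration described above, using boto3 to start a Glue job and kick off the Step Functions flow that chains the ETL layers; the job name, bucket, and state machine ARN are hypothetical placeholders, not the actual project resources:

```python
import json
import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

# Start the Glue job for one ETL layer (job name and bucket are placeholders).
run = glue.start_job_run(
    JobName="trusted-layer-etl",
    Arguments={"--source_bucket": "my-datalake-raw"},
)
print("Glue job run id:", run["JobRunId"])

# Kick off the Step Functions state machine that sequences the ETL layers
# (the ARN below is an illustrative placeholder).
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-flow",
    input=json.dumps({"run_date": "2021-01-01"}),
)
print("Execution ARN:", execution["executionArn"])
```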

DevOps and Deployment Tools:

● Deployments using BladeLogic and Puppet on database and application servers
● Creating Ansible playbooks and configuring servers with Ansible
● Developing server builds using Ansible and running them through Jenkins
● Building flows using orchestration tools like HPOO
● Creating audit jobs on instances and servers
● Supporting bridge calls during outages and raising the required tickets in ticketing tools like CA and JIRA
● Creating dashboards in Splunk and monitoring Splunk logs
● Creating HPOO flows that deploy code and display the results directly
● Troubleshooting servers based on alerts in Nagios and SiteScope
● Installing product patches in all environments
● Applying patches on JBoss servers using Jenkins and BladeLogic
● Creating tickets in CA Service Desk, prioritized by business impact, to deploy changes
● Troubleshooting client environment issues
● Shell scripting, managed through BladeLogic
● Creating containers in Docker (a minimal sketch follows this list)
● Creating JSON configuration for AWS builds and deploying with Elastic Beanstalk
● Creating infrastructure using CloudFormation and Ansible
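As one illustration of the container work above, a minimal sketch using the Docker SDK for Python to run an application container; the image tag, container name, and port mapping are hypothetical, not the actual project images:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run a JBoss/WildFly application container in the background
# (image and port are illustrative placeholders).
container = client.containers.run(
    "jboss/wildfly:latest",
    name="app-server",
    detach=True,
    ports={"8080/tcp": 8080},
)
print(container.short_id, container.status)
```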
Oracle DBA:
● Overall knowledge of Oracle database architecture
● Making database changes through change management after approval from the client
● Cloning databases using RMAN
● Refreshes using the expdp and impdp utilities
● Resolving generic errors by raising service requests with Oracle Support
● Worked on Linux, UNIX, and Solaris
● Installing Oracle database software on Linux in lower environments (IAT and UAT only)
● Database point-in-time recovery using RMAN
● Managing database space by adding data files and temp files as required (a monitoring sketch follows this list)
● Storage management by closely monitoring critical filesystems
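A minimal sketch of the space monitoring described above, querying allocated space per tablespace with the cx_Oracle driver; the connection details are placeholders:

```python
import cx_Oracle

# Connection details are hypothetical placeholders.
conn = cx_Oracle.connect("dba_user", "secret", "db-host:1521/ORCLPDB1")

# Report allocated space per tablespace so data files / temp files
# can be added before a tablespace fills up.
sql = """
    SELECT tablespace_name,
           ROUND(SUM(bytes) / 1024 / 1024) AS allocated_mb
    FROM dba_data_files
    GROUP BY tablespace_name
    ORDER BY tablespace_name
"""
cur = conn.cursor()
for name, mb in cur.execute(sql):
    print(f"{name}: {mb} MB allocated")
cur.close()
conn.close()
```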

Total Work Experience

● Total experience: 7 years
● Current designation: Systems Engineer at Zapiot
JOB Profile

Project #1 : 24th June 2019 – present

Employer : EPAM Systems

Client : ALCON (Foresight)
Designation : Big Data DevOps Engineer
Platforms : AWS, Linux, DevOps, Terraform, Terragrunt, big data, batch
Environments : DEV, TEST, QA, PROD, Shared

Key Responsibilities:

Migration of the data lake:
● Successfully migrated the batch-processing data lake from on premises to AWS
● Maintenance of the data lake and provisioning of infrastructure for new data products
● Creating CI/CD pipelines for all data products
● Creating S3 buckets, Glue databases, and DynamoDB tables for all data products as required
● Scheduling runs and maintaining the SNS and SQS setup, using Lambdas to manage dependencies between data products
● Deploying all services from a single base repository using Terraform
● Creating views in Athena
● Worked on Redshift and on Lambdas that load data into Redshift (a minimal sketch follows this list)
● Handled Redshift admin tasks, creating users and granting the required privileges so that the business can view data in Redshift
● Created API Gateways and integrated them with the application and data layers
● Undertook full DevOps activities across roughly 12-15 accounts for the respective products
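A minimal sketch of a Lambda that loads data into Redshift via the Redshift Data API, as described above; the cluster, database, IAM role, and table names are hypothetical placeholders:

```python
import boto3

client = boto3.client("redshift-data")

def handler(event, context):
    # COPY freshly landed S3 data into Redshift (all identifiers below
    # are illustrative placeholders, not the real project resources).
    resp = client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="datalake",
        DbUser="etl_user",
        Sql=(
            "COPY business.sales FROM 's3://my-datalake-business/sales/' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "FORMAT AS PARQUET;"
        ),
    )
    return {"statement_id": resp["Id"]}
```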

Maintenance and Deployments:

● Deploying new data products from lower to higher environments (a promotion sketch follows this list)
● Creating Terraform modules based on requirements
● Terraform updates on all layers as required
● Implementing Terragrunt HCL and driving Terraform through it
● Taking care of the repositories of all data products
● Creating EC2 instances and installing software on them as required
● Acting as the point of contact for support issues across the entire PROD environment
● Coordinating with developers on their requirements across the ETL trusted, enriched, and business layers
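A minimal sketch of promoting a data product between environments by driving Terragrunt from Python; the directory layout, product, and environment names are assumptions, not the actual repository structure:

```python
import subprocess
from pathlib import Path

# Assumed layout: one Terragrunt directory per environment per product,
# e.g. live/<env>/<product>/terragrunt.hcl (hypothetical).
REPO = Path("live")

def promote(product: str, env: str) -> None:
    """Plan and apply a data product's stack in the given environment."""
    workdir = REPO / env / product
    for args in (["plan"], ["apply", "-auto-approve"]):
        subprocess.run(["terragrunt", *args], cwd=workdir, check=True)

# Example: roll the stack already proven in dev out to test.
promote("sales-datamart", "test")
```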

JOB Profile

Project from Aug 2018 – June 2019

Employer : Capgemini Pvt. Ltd.

Designation : Associate Consultant
Platforms : Linux, UNIX, RDS, Attunity Replicate, Ansible

Key Responsibilities:
Migration of databases:
● Main role: migrating data from one database to another using Attunity Replicate
● Converted the migration from a fully manual process to an automated one using Ansible playbooks built on the boto3 and winrm modules (a minimal sketch follows this list)
● With these scripts, we could migrate exactly the data required through Ansible
● Migration of BladeLogic jobs to Ansible using Ansible Tower
● Creating infrastructure on AWS using CloudFormation and Ansible
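One small piece of the automation described above, sketched in Python with boto3: waiting for the target RDS instance to become available before replication tasks start; the region and instance identifier are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Block until the target RDS instance can accept connections
# ("target-db" is an illustrative identifier, not the real instance).
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="target-db")

# Fetch the endpoint to hand off to the replication tooling.
desc = rds.describe_db_instances(DBInstanceIdentifier="target-db")
endpoint = desc["DBInstances"][0]["Endpoint"]
print(f"Target ready at {endpoint['Address']}:{endpoint['Port']}")
```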

JOB Profile

Project from Nov 2018 – June 2019

Employer : Capgemini Pvt. Ltd.

Designation : Associate Consultant
Client : Hartford
Platforms : Linux, UNIX, Ansible Tower, CloudFormation, BladeLogic

Key Responsibilities:

● Migration of BladeLogic jobs to Ansible and creating templates in Ansible Tower
● Creating AWS infrastructure using Ansible and CloudFormation
● Working on different application servers such as IIS, JBoss, etc.
● Establishing the Talend environment in AWS using CloudFormation, per Talend standards (a minimal sketch follows this list)
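A minimal sketch of standing up such an environment by launching a CloudFormation stack from Python with boto3; the stack name, template file, and parameter values are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation")

# Template and parameter values are illustrative placeholders.
with open("talend-env.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="talend-env-dev",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.large"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Wait until the stack (and the Talend servers in it) is fully created.
cfn.get_waiter("stack_create_complete").wait(StackName="talend-env-dev")
```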

JOB Profile

Project from Apr 2015 – 27th July 2018

Employer : ADP Pvt. Ltd.

Client : Multiple clients
Designation : Member Technical
Platforms : Linux, UNIX, Solaris, Oracle 11g/10g, DevOps
Environments : DIT, FIT, IAT, UAT, PROD

Key Responsibilities:

Problem Management:
● Providing support in technical application management as a problem manager
● Taking database backups, restoring databases, and handling transaction logs
● Attending bridge and outage calls, and driving them efficiently to deliver solutions within SLA
● Finding root causes for the bridges and outages that occurred
● Following up with the respective teams and technologies to produce solutions for those root causes
Deployments and Designing:
● Deploying EARs and scripts on servers and instances for nearly 20 products hosted on WebSphere, Tomcat, etc., across the IAT, UAT, and PROD environments
● Worked on BladeLogic, Puppet, Docker, and Splunk
● Developing YAML playbooks for Ansible
● Implementing changes on WebSphere and Oracle instances
● Implementing deployments through HPOO from a central repository
● Deploying containers in IAT, UAT, and PROD using Docker and Jenkins
● Checking alerts in Netcool for databases and SiteScope for applications
● Providing 24x7 troubleshooting support for applications and databases using DevOps tools
● Releasing emergency hotfixes using DevOps tools
● Creating HPOO flows to automate implementation tasks
● Creating builds in Jenkins, storing them in GitHub, and rolling them out through BladeLogic and Docker
● Creating Splunk queries
● Customizing the Jenkinsfile based on requirements
● Creating Python scripts for API calls (a minimal sketch follows this list)
● Creating AWS stacks using CloudFormation and Ansible in POCs
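A minimal sketch of the kind of API-call script mentioned above, using the requests library; the endpoint, token, and payload shape are hypothetical placeholders:

```python
import requests

# Endpoint and token are illustrative placeholders.
BASE_URL = "https://inventory.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def get_server_status(hostname: str) -> dict:
    """Fetch a server's status record from an internal inventory API."""
    resp = requests.get(
        f"{BASE_URL}/servers/{hostname}", headers=HEADERS, timeout=10
    )
    resp.raise_for_status()  # surface HTTP errors instead of bad data
    return resp.json()

print(get_server_status("app-iat-01")["status"])
```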

Design and Automations:

● Reduced process-server build time from 14 days to 20 minutes using API calls and Ansible playbook implementations
● Integrated Splunk with Ansible Tower to automatically reduce alerts by executing playbooks directly on the affected servers (a minimal sketch follows this list)
● Automated installation of Splunk agents via Jenkins jobs, bringing monitoring for all the respective products under a single Splunk index
● Creating Ansible playbooks that create EC2 instances and build the application layer on top of them
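A minimal sketch of how a Splunk alert action could launch an Ansible Tower job template through Tower's REST API, in the spirit of the integration above; the Tower URL, token, and template ID are hypothetical placeholders:

```python
import requests

# Tower URL, token, and job template ID are illustrative placeholders.
TOWER = "https://tower.example.com"
HEADERS = {"Authorization": "Bearer <tower-token>"}

def launch_remediation(template_id: int, hostname: str) -> int:
    """Launch a Tower job template against the server named in an alert."""
    resp = requests.post(
        f"{TOWER}/api/v2/job_templates/{template_id}/launch/",
        headers=HEADERS,
        json={"limit": hostname},  # restrict the playbook run to one host
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["job"]  # Tower returns the launched job's id

job_id = launch_remediation(42, "app-prod-07")
print("Launched Tower job", job_id)
```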
Extra-Curricular Activities and Achievements:

● Secured first prize in a DevOps hackathon
● Received an Impact Award for the Alcon project at EPAM

Educational Qualification

Degree / Standard                                    Year of Passing   Percentage   University/Board
Bachelor of Technology in Electrical                 June 2014         78.5         Jawaharlal Nehru Technological
and Electronics Engineering                                                        University, Kakinada
Intermediate                                         April 2010        92.1         Board of Intermediate Education
SSC                                                  April 2008        86.5         SSC, Andhra Pradesh
