DevOps Notes
o Agile Model divides tasks into time boxes to provide specific functionality
for the release. Each build is incremental in terms of functionality, with
the final build containing all the attributes. The division of the entire
project into small parts helps minimize the project risk and the overall
project delivery time.
Here are the important stages involved in the Agile Model process in the SDLC
life cycle:
1. Requirements gathering: The team defines the requirements and scope of the
iteration.
2. Design the requirements: The defined requirements are turned into a design
for the build.
3. Develop/Iteration: The real work begins at this stage after the software
development team defines and designs the requirements. Product,
design, and development teams start working, and the product will
undergo different stages of improvement using simple and minimal
functionality.
4. Test: This phase of the Agile Model involves the testing team. For
example, the Quality Assurance team checks the system’s performance
and reports bugs during this phase.
5. Deployment: The team releases the working product into the user’s
environment.
6. Feedback: After releasing the product, the last step of the Agile Model is
feedback. In this phase, the team receives feedback about the product
and works on correcting bugs based on the received feedback.
Compared to Waterfall, Agile cycles are short. There may be many such cycles in
a project. The phases are repeated until the product is delivered.
Individuals and interactions are given priority over processes and tools.
Adaptive, empowered, self-organizing team.
Focuses on working software rather than comprehensive documentation.
Agile Model in software engineering aims to deliver complete customer
satisfaction by rapidly delivering valuable software.
Welcome changes in requirements, even late in the development phase.
Daily co-operation between businesspeople and developers.
Priority is customer collaboration over contract negotiation.
It enables you to satisfy customers through early and frequent delivery.
A strong emphasis is placed on face-to-face communication.
Developing working software is the primary indicator of progress.
Promote sustainable development pace.
A continuous focus is placed on technical excellence and sound design.
An improvement review is conducted regularly by the team.
1. Scrum
o Scrum Master: The Scrum Master sets up the team, arranges sprint
meetings, and removes obstacles to progress.
o Scrum Team: The team manages its own work and organizes it to
complete each sprint or cycle.
eXtreme Programming (XP)
Crystal:
Crystal is one of the most straightforward and flexible methodologies to
use. Team members stay closely connected, and teams have been given
the right to make their own decisions.
Dynamic Systems Development Method (DSDM) relies on three techniques:
1. Time Boxing
2. MoSCoW Rules
3. Prototyping
FDD (Feature Driven Development) describes the small steps of the work that
should be obtained per feature.
1. Frequent delivery.
2. Face-to-face communication with clients.
3. Efficient design that fulfils the business requirements.
4. Changes are acceptable at any time.
5. It reduces total development time.
1. Scrum can easily be considered the most popular agile framework. The
term ‘scrum’ is often treated as synonymous with ‘agile’ by most practitioners,
but that is a misconception. Scrum is just one of the frameworks by which you
can implement agile.
2. The goal of Scrum is to improve communication, teamwork, and speed of
development.
3. The Scrum methodology is one of many agile methodologies, all of which
turn the historically predictive or waterfall project and product development
triangle on its head.
Thus, Scrum is one methodology for practicing agile.
Does Scrum comply with the Agile Manifesto? Yes. Scrum complies with the
Agile Manifesto principles, as explained below:
o The Agile Manifesto is core to the Scrum methodology and can be
considered the underlying mindset behind all agile methodologies. The Scrum
methodology overlays the Agile Manifesto to add the how-tos to the whats.
o In other words, Scrum is one of the many how-to guides for developing
outcomes in an agile mindset. Here’s a breakdown of the Agile Manifesto
coupled with a few core Scrum principles:
1. Individuals and interactions over processes and tools.
2. Working software over comprehensive documentation
3. Customer collaboration over contract negotiation
4. Welcome changing requirements, even late in development. Agile processes
harness change for the customer’s competitive advantage.
5. Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace indefinitely.
Waterfall Model
The waterfall model can be defined as a sequential process in the
development of a system or software that follows a top-down approach.
This model is a straightforward, linear model. The waterfall model
has various phases such as Requirement Definition, Software Design,
Implementation, Testing, Deployment, and Maintenance.
This model is only suitable for projects with stable requirements.
By stable, we mean that the requirements will not change over time. But in
today’s world this is very unlikely, because requirements keep
changing from time to time. These were a few drawbacks of the waterfall
model.
Benefits of DevOps
Speed
Move at high velocity so you can innovate for customers faster, adapt to changing markets
better, and grow more efficient at driving business results. The DevOps model enables your
developers and operations teams to achieve these results. For
example, microservices and continuous delivery let teams take ownership of services and then
release updates to them quicker.
Rapid Delivery
Increase the frequency and pace of releases so you can innovate and improve your product
faster. The quicker you can release new features and fix bugs, the faster you can respond to
your customers’ needs and build competitive advantage. Continuous integration and continuous
delivery are practices that automate the software release process, from build to deploy.
Reliability
Ensure the quality of application updates and infrastructure changes so you can reliably deliver
at a more rapid pace while maintaining a positive experience for end users. Use practices
like continuous integration and continuous delivery to test that each change is functional and
safe. Monitoring and logging practices help you stay informed of performance in real-time.
Scale
Operate and manage your infrastructure and development processes at scale. Automation and
consistency help you manage complex or changing systems efficiently and with reduced risk. For
example, infrastructure as code helps you manage your development, testing, and production
environments in a repeatable and more efficient manner.
Improved Collaboration
Build more effective teams under a DevOps cultural model, which emphasizes values such as
ownership and accountability. Developers and operations teams collaborate closely, share many
responsibilities, and combine their workflows. This reduces inefficiencies and saves time (e.g.
reduced handover periods between developers and operations, writing code that takes into
account the environment in which it is run).
Security
Move quickly while retaining control and preserving compliance. You can adopt a DevOps model
without sacrificing security by using automated compliance policies, fine-grained controls, and
configuration management techniques. For example, using infrastructure as code and policy as
code, you can define and then track compliance at scale.
1) Continuous Development
This phase involves the planning and coding of the software. The vision of
the project is decided during the planning phase, and the developers
begin developing the code for the application. There are no DevOps tools
required for planning, but there are several tools for maintaining
the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software
development practice in which developers are required to commit changes
to the source code more frequently, on a daily or even weekly basis.
Every commit is then built, which allows early detection of problems if
they are present. Building the code involves not only compilation but
also unit testing, integration testing, code review, and packaging.
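As a rough illustration, the build step a CI server runs on each commit might look like this in shell (the script names are hypothetical):
# run for every commit pushed to the shared repository
git pull origin master            # fetch the latest integrated code
./build.sh                        # compile and package the application (hypothetical script)
./run_unit_tests.sh               # fail fast if a unit test breaks (hypothetical script)
echo "Commit built and tested; safe to integrate"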
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs.
For constant testing, automation testing tools such as TestNG, JUnit,
and Selenium are used. These tools allow QAs to test multiple
code-bases thoroughly in parallel to ensure that there are no flaws in the
functionality. In this phase, Docker containers can be used to simulate
the test environment.
Automated testing saves a lot of time and effort compared with executing
the tests manually. Apart from that, report generation is a big
plus. The task of evaluating the test cases that failed in a test suite gets
simpler. We can also schedule the execution of the test cases at
predefined times. After testing, the code is continuously integrated with
the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire
DevOps process, where important information about the use of the
software is recorded and carefully processed to find trends and
identify problem areas. Usually, monitoring is integrated within the
operational capabilities of the software application.
5) Continuous Feedback
Application development is consistently improved by analyzing the
results from the operation of the software. This is carried out by placing
the critical phase of constant feedback between the operations and the
development of the next version of the current software application.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also
essential to ensure that the code is correctly deployed on all the servers.
7) Continuous Operations
All DevOps operations are based on continuity, with complete
automation of the release process, allowing the organization to
continually accelerate its overall time to market.
It is clear from the discussion that continuity is the critical factor in
DevOps: it removes the steps that often distract development, make it
take longer to detect issues, and produce a better version of the product
only after several months. With DevOps, we can make any software
product more efficient and increase the overall number of customers
interested in the product.
ITIL basics
o ITIL (Information Technology Infrastructure Library) is a set of
principles for IT service management. It is a highly structured model
built to boost productivity and offer statistics for IT teams. ITIL is a
subset of ITSM; it primarily focuses on procedures for delivering,
managing, and improving IT services to businesses and
customers. It improves predictability and IT service delivery
efficiency.
1) Service strategy
All managers follow instructions to develop a service
strategy that ensures the company can manage all
associated costs and risks. There are multiple roles
involved in service strategy, and they can be defined as
follows:
A) Business relationship manager
B) Finance manager
C) IT steering group (ISG)
D) Demand manager
E) Service Strategy manager
F) Service Portfolio manager
2) Service design
This is where the service’s architecture is developed
and the business needs are translated into technical
requirements.
3) Service transition
In this stage, all assets are controlled to deliver a
complete service for testing and integration.
4) Service operation
Service operation includes the technical support teams
and application management that respond when an issue
has an impact on the business.
5) Continual service improvement
It is a reflective approach that involves four stages to
check that the services are always in line with the demands
of the business.
ITIL also comes to the table with many benefits that can improve
the DevOps mindset. With its well-understood processes and
well-defined principles, ITIL can provide a different set of
approaches as DevOps continues to evolve across the enterprise.
At the end of the day, ITIL and DevOps can happily co-exist. By
utilizing the best practices of each, organizations can find the
balance of traditional procedures with forward-thinking agility and
speed.
Benefits of blue/green:
o Traditional deployments with in-place upgrades make it difficult to
validate your new application version in a production deployment while
also continuing to run the earlier version of the application.
o Blue/green deployments provide a level of isolation between your blue
and green application environments. This helps ensure spinning up a
parallel green environment does not affect resources underpinning your
blue environment. This isolation reduces your deployment risk.
o After you deploy the green environment, you have the opportunity to
validate it. You might do that with test traffic before sending production
traffic to the green environment, or by using a very small fraction of
production traffic, to better reflect real user traffic. This is called canary
analysis or canary testing. If you discover the green environment is not
operating as expected, there is no impact on the blue environment. You
can route traffic back to it, minimizing impaired operation or downtime
and limiting the blast radius of impact.
o Blue/green deployments also work well with continuous integration and
continuous deployment (CI/CD) workflows, in many cases limiting their
complexity. During the deployment, you can scale out the green
environment as more traffic gets sent to it and scale the blue
environment back in as it receives less traffic. Once the deployment
succeeds, you decommission the blue environment and stop paying for
the resources it was using.
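A minimal sketch of the cutover in shell, assuming a containerized application (the image tags, ports, container names, and health endpoint are all illustrative):
# blue (v1) is live; start green (v2) alongside it on another port
docker run -d --name app-green -p 8081:8080 myapp:2.0
# validate green with test traffic before sending production traffic
curl -fsS http://localhost:8081/health || { echo "green failed validation"; exit 1; }
# repoint the router or load balancer at green (mechanism depends on your LB);
# keep blue running until green proves stable, then decommission it
docker stop app-blue && docker rm app-blue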
Scrum:
Scrum artifacts
Scrum artifacts are important information used by the scrum team that
helps define the product and the work to be done to create it.
1. Product Backlog
2. Sprint Backlog
3. Increment (or Sprint Goal)
Scrum master: The person who leads the team, guiding them to
comply with the rules and processes of the methodology. The Scrum
Master is in charge of keeping Scrum up to date and providing coaching,
mentoring, and training to the teams when they need it.
Kanban
o Kanban is a popular framework used to
implement agile and DevOps software development.
o It requires real-time communication of capacity and full
transparency of work. Work items are represented visually on
a kanban board, allowing team members to see the state of every
piece of work at any time.
o Kanban is a visual system for managing work as it moves
through a process. Kanban visualizes both the process (the
workflow) and the actual work passing through that process.
o The goal of Kanban is to identify potential bottlenecks in your
process and fix them, so work can flow through it cost-
effectively at an optimal speed or throughput.
1. Planning flexibility
2. Shortened time cycles
3. Fewer bottlenecks
4. Visual metrics
5. Continuous delivery
Benefits of microservices
Agility – Promote agile ways of working with small teams that deploy
frequently.
Flexible scaling – If a microservice reaches its load capacity, new
instances of that service can rapidly be deployed to the accompanying
cluster to help relieve pressure.
Continuous deployment – We now have frequent and faster release
cycles. Before, we would push out updates once a week; now we can
do so about two to three times a day.
Highly maintainable and testable – Teams can experiment with new
features and roll back if something doesn’t work. This makes it easier to
update code and accelerates time-to-market for new features. Plus, it is
easy to isolate and fix faults and bugs in individual services.
Independently deployable – Since microservices are individual units
they allow for fast and easy independent deployment of individual
features.
Technology flexibility – Microservice architectures allow teams the
freedom to select the tools they desire.
High reliability – You can deploy changes for a specific service, without
the threat of bringing down the entire application.
Happier teams – teams who work with microservices are a lot happier,
since they are more autonomous and can build and deploy themselves
without waiting weeks for a pull request to be approved.
Benefits of CI/CD:
1. Smaller code changes
2. Fault isolations
3. Faster mean time to resolution (MTTR)
4. More test reliability
5. Faster release rate
6. Smaller backlog
7. Customer satisfaction
8. Increase team transparency and Accountability
9. Reduce costs
10. Easy maintenance and updates
1. Unit tests
o Unit tests are very low-level and close to the source of an application. They
consist of testing individual methods and functions of the classes,
components, or modules used by your software.
o Unit tests are generally quite cheap to automate and can run very quickly by a
continuous integration server.
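As a tiny shell illustration (the function under test is hypothetical), a unit test exercises one function in isolation and checks its output:
# function under test
add() { echo $(( $1 + $2 )); }
# unit test: one function, one expected value, no external dependencies
[ "$(add 2 3)" = "5" ] && echo "PASS: add" || echo "FAIL: add"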
2. Integration tests
o Integration tests verify that different modules or services used by your
application work well together.
o For example, it can be testing the interaction with the database or making
sure that microservices work together as expected.
o These types of tests are more expensive to run as they require multiple parts
of the application to be up and running.
3. Functional tests
o Functional tests focus on the business requirements of an application. They
only verify the output of an action and do not check the intermediate states of
the system when performing that action.
o There is sometimes confusion between integration tests and functional tests,
as they both require multiple components to interact with each other. The
difference is that an integration test may simply verify that you can query the
database, while a functional test would expect to get a specific value from the
database, as defined by the product requirements.
4. End-to-end tests
o End-to-end testing replicates user behaviour with the software in a complete
application environment. It verifies that various user flows work as expected;
these can be as simple as loading a web page or logging in, or much more
complex scenarios verifying email notifications, online payments, etc.
o End-to-end tests are very useful, but they're expensive to perform and can be
hard to maintain when they're automated. It is recommended to have a few
key end-to-end tests and rely more on lower-level types of testing (unit and
integration tests) to be able to quickly identify breaking changes.
5. Acceptance testing
o Acceptance tests are formal tests that verify if a system satisfies business
requirements. They require the entire application to be running while testing
and focus on replicating user behaviour.
o But they can also go further and measure the performance of the system and
reject changes if certain goals are not met.
6. Performance testing
o Performance tests evaluate how a system performs under a particular workload. These tests
help to measure the reliability, speed, scalability, and responsiveness of an application.
o For instance, a performance test can observe response times when executing a high number
of requests, or determine how a system behaves with a significant amount of data.
o It can determine if an application meets performance requirements, locate bottlenecks,
measure stability during peak traffic, and more.
7. Smoke testing
o Smoke tests are basic tests that check the basic functionality of an application. They are
meant to be quick to execute, and their goal is to give you the assurance that the major
features of your system are working as expected.
o Smoke tests can be useful right after a new build is made, to decide whether or not you can
run more expensive tests, or right after a deployment, to make sure that the application is
running properly in the newly deployed environment.
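A smoke test can be as small as a health-check call made right after a deployment (the URL is illustrative):
# fail the pipeline immediately if the application's entry point is down
curl -fsS http://localhost:8080/health || { echo "smoke test failed"; exit 1; }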
Mercurial and Bazaar are both version control systems used for software development and source
code management.
Ultimately, the choice between Mercurial and Bazaar will depend on your
specific needs and workflow.
1. Developers fork the project public repository to create personal public repositories of
their own.
2. The personal public repositories are cloned to their local computers for development.
3. After the development is complete, developers push changes to their personal public
repositories.
4. Developers file merge requests to the project maintainer for merge to the project public
repository.
5. The project maintainer pulls changes to the local computer and reviews the code. If the
code is approved, it is pushed to the project public repository.
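In Git terms, steps 2–5 map onto commands roughly like these (the URLs, remote names, and branch names are illustrative):
git clone https://example.com/alice/project.git   # step 2: clone the personal public repository
# ...develop and commit locally...
git push origin feature-x                         # step 3: push to the personal public repository
# step 4: file a merge request through the hosting service's interface
# step 5 (maintainer): pull the changes locally, review, and publish if approved
git pull https://example.com/alice/project.git feature-x
git push upstream master                          # upstream = the project public repository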
Advantages
Code collaboration is easier. Developers can share their code by pushing it to their
personal public repositories for others to pull, unlike some workflows where developers
cannot see others' work until it is merged into the project repository.
Disadvantages
It takes more steps and time before the code of developers gets merged into the project
repository.
Coupling
Coupling refers to the degree of dependency between two modules.
We always want low coupling between modules.
Again, we can see coupling as another aspect of the principle of the separation of concerns.
Systems with high cohesion and low coupling would automatically have separation of
concerns, and vice versa.
Database Migrations: Modifying Existing Databases
Database migrations, rather than simple builds, suddenly become
necessary when a production database system must be upgraded in-situ.
You can’t just build a new version of the database: you have to preserve
the data that is there, and you need to do the upgrade in a way that
doesn’t disrupt the service.
The two approaches are not mutually exclusive: Tools used to migrate a
database using the state-based approach occasionally need ‘help’
defining the correct route for migrations that affect existing data. Tools
that use the migrations approach often need a way to “back fill” the state
of each object, at each version, so that we can see easily how the object
has changed over time.
1. State-based database migrations
This approach is based on versioning object CREATE scripts and is commonly
referred to as the state-based approach.
When using the state-based technique, we store in the VCS the source
DDL scripts to CREATE each database object. Each time we modify a
database object, we commit to the VCS the latest creation script for that
object. In other words, we are versioning the current state of each object
in the database.
An alternative scheme uses the date and time at which the script was
generated to create a script sequence that still has ordering but does not
need sequential integers, for example 20151114100523_CreateDatabase.sql,
where the ordering is defined by a concatenation of year, month, day,
hour, minute, and second in 24-hour format.
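For example, generating and applying timestamp-ordered scripts could look like this in shell (the directory, file name, and psql target are illustrative):
# create a migration script named by its generation time (YYYYMMDDHHMMSS)
ts=$(date +%Y%m%d%H%M%S)
touch "migrations/${ts}_CreateDatabase.sql"
# lexical order of the names now matches chronological order,
# so the scripts can be applied with a plain sorted loop
for f in migrations/*.sql; do
    psql mydb -f "$f"    # assumes a PostgreSQL database named mydb
done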
Three-tier architecture
The chief benefit of three-tier architecture is that, because each tier runs
on its own infrastructure, each tier can be developed simultaneously by a
separate development team and can be updated or scaled as needed
without impacting the other tiers.
Presentation tier
The presentation tier is the user interface and communication layer of the
application, where the end user interacts with the application. Its main
purpose is to display information to and collect information from the user.
This top-level tier can run in a web browser, as a desktop application, or as a
graphical user interface (GUI), for example. Web presentation tiers are
usually developed using HTML, CSS, and JavaScript (or frameworks such as
React). Desktop applications can be written in a variety of languages
depending on the platform.
Application tier
The application tier, also known as the logic tier or middle tier, is the heart
of the application. In this tier, information collected in the presentation
tier is processed - sometimes against other information in the data tier -
using business logic, a specific set of business rules. The application tier
can also add, delete or modify data in the data tier.
The application tier is typically developed using Python, Java, Perl, PHP or
Ruby, and communicates with the data tier using API calls.
Data tier
The data tier, sometimes called database tier, data access tier or back-
end, is where the information processed by the application is stored and
managed. This can be a relational database management system such
as PostgreSQL, MySQL, MariaDB, Oracle, DB2, Informix or Microsoft SQL
Server, or in a NoSQL Database server such as
Cassandra, CouchDB or MongoDB.
1.Centralized VCS
Centralized version control systems (CVCS) use a central server to store
all files and enable team collaboration. They work on a single repository
that users access directly on a central server.
The repository acts as a central server that could be local or remote and
is directly connected to each of the programmers’ workstations.
Every programmer can extract or update their workstations with the data
present in the repository or can make changes to the data or commit in
the repository. Every operation is performed directly on the repository.
2.Distributed VCS
These systems do not necessarily rely on a central server to store all the
versions of a project file.
Every programmer maintains a local repository of their own, which is
actually a copy or clone of the central repository on their hard drive. They
can commit and update their local repository without any interference.
They can update their local repositories with new data from the central
server by an operation called “pull” and publish their changes to the main
repository by an operation called “push” from their local repository.
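In Git terms (the remote URL is illustrative), the pull/push cycle looks like this:
git clone https://example.com/project.git   # copy the central repository to a local one
git pull origin master                      # update the local repository with new data
# ...commit change-sets locally, without touching the server...
git push origin master                      # publish the local change-sets to the main repository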
The act of cloning an entire repository into your workstation to get a local
repository gives you the following advantages:
All operations (except push & pull) are very fast because the tool only
needs to access the hard drive, not a remote server. Hence, you do not
always need an internet connection.
Committing new change-sets can be done locally without manipulating the
data on the main repository. Once you have a group of change-sets ready,
you can push them all at once.
Since every contributor has a full copy of the project repository, they can
share changes with one another to get feedback before pushing the
changes to the main repository.
If the central server crashes at any point in time, the lost data can be
easily recovered from any one of the contributors’ local repositories.
What Is Git?
Git is a free, open-source distributed version control system(DVCS) tool
designed to handle everything from small to very large projects with
speed and efficiency. It was created by Linus Torvalds in 2005 for development
of the Linux kernel. Git has the functionality, performance, security and
flexibility that most teams and individual developers need. It also serves
as an important distributed version-control DevOps tool.
Git provides all the distributed VCS facilities to the user. Git
repositories are very easy to find and access. You will know how flexible
and compatible Git is with your system when you go through the features
mentioned below:
3.Scalable:
Git is very scalable, so if, in the future, the number of collaborators
increases, Git can easily handle the change. Though Git represents an entire
repository, the data stored on the client’s side is very small, as Git
compresses all the huge data through a lossless compression technique.
4.Reliable:
Since every contributor has their own local repository, in the event of a
system crash the lost data can be recovered from any of the local
repositories. You will always have a backup of all your files.
5.Secure:
Git uses SHA-1 (a secure hash function) to name and identify objects
within its repository. Every file and commit is check-summed and
retrieved by its checksum at the time of checkout. The Git history is
stored in such a way that the ID of a particular version (a commit in Git
terms) depends upon the complete development history leading up to
that commit. Once it is published, it is not possible to change the old
versions without it being noticed.
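You can see this content addressing directly in any Git repository:
git rev-parse HEAD       # the SHA-1 ID of the current commit
git cat-file -p HEAD     # the object stored under that hash: tree, parents, author, message
git log --oneline -3     # each version is identified by (a prefix of) its checksum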
6.Economical:
In case of CVCS, the central server needs to be powerful enough to serve
requests of the entire team. For smaller teams, it is not an issue, but as
the team size grows, the hardware limitations of the server can be a
performance bottleneck.
In case of DVCS, developers don’t interact with the server unless they
need to push or pull changes. All the heavy lifting happens on the client
side, so the server hardware can be very simple indeed.
8.Easy Branching:
Branch management with Git is very simple. It takes only a few seconds to
create, delete, and merge branches. Feature branches provide an isolated
environment for every change to your codebase. When a developer wants
to start working on something, no matter how big or small, they create a
new branch. This ensures that the master branch always contains
production-quality code.
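A typical feature-branch round trip looks like this (the branch name is illustrative):
git checkout -b feature/login    # create and switch to an isolated feature branch
# ...edit files, then commit the work...
git commit -am "Add login form"
git checkout master              # return to the production-quality branch
git merge feature/login          # merge the finished feature
git branch -d feature/login      # delete the merged branch in seconds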
9.Distributed development:
Git gives each developer a local copy of the entire development history,
and changes are copied from one such repository to another. These
changes are imported as additional development branches, and can be
merged in the same way as a locally developed branch.
The DevOps life cycle runs from planning the project through to its
deployment and monitoring. Git plays a vital role when it comes to
managing the code that the collaborators contribute to the shared
repository. This code is then extracted to perform continuous integration,
creating a build, testing it on the test server, and eventually deploying it
to production.
Tools like Git enable communication between the development and the
operations teams. Commit messages in Git play a very important role in
communication within the team. The bits and pieces that we all deploy
lie in a version control system like Git. To succeed in DevOps, you
need to have all of the communication in version control. Hence, Git plays
a vital role in succeeding at DevOps.
Some companies that use Git for version control are: Facebook, Yahoo,
Zynga, Quora, Twitter, eBay, Salesforce, Microsoft and many more.
Docker Compose
Docker Compose is a tool for defining and running complex, multi-container
Docker applications.
With Docker Compose, we can run more than one container together, with
containers depending on one another. For example, you can define containers
for a database, a web server, and a cache, with the web server depending on
the DB container. In this example, we will use Redis.
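A minimal sketch of such a Compose file, written and started from the shell (the image names and port are illustrative):
# define a web container that depends on a Redis container
cat > docker-compose.yml <<'EOF'
services:
  redis:
    image: redis:7
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - redis
EOF
docker compose up -d    # start both containers together, redis first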
Docker Compose can be used for many different requirements and in many different
ways, for example:
1. Development environments
Plan
Identifying the work to be done is the first step in the DevOps toolchain. This allows tasks
to be prioritized and tracked.
Build
Enabling developers to easily create feature branches, review code, merge branches, and
fix bugs allows for a smooth development cycle.
Monitor
Monitoring your application and production server performance, as well as managing
incidents, is critical to the smooth operation of your software.
Operate
Ensuring the released system can scale automatically as needed is one of the ways to
guarantee smooth system operations.
Continuous feedback
Distilling and sharing information empowers organizations to develop accurate insights
into how well the software is received and used.
Chef
Chef is a tool used for Configuration Management and closely
competes with Puppet. Chef is used for:
o Infrastructure configuration
o Application deployment
o Managing configurations across your network
Configuration Management
There are broadly two ways to manage your configurations, namely Push
and Pull configurations.
SaltStack
o SaltStack, also known as Salt, is an open-source configuration
management and orchestration tool. It uses a central repository to
provision new servers and other IT infrastructure, to make changes to
existing ones, and to install software in IT environments, including
physical and virtual servers as well as the cloud.
o SaltStack automates repeated system administrative and code
deployment tasks, eliminating manual processes in a way that can
reduce errors that occur when IT organizations configure systems.
o Salt is used in DevOps organizations because it pulls developer
code and configuration information from a central code repository,
such as GitHub or Subversion, and pushes that content remotely out
to servers. Salt users can write their own scripts and programs.
o Salt is a very powerful automation framework. Salt architecture is
based on the idea of executing commands remotely.
o Salt is designed to allow users to explicitly target and issue
commands to multiple machines directly. Salt is based around the
idea of a Master, which controls one or more Minions.
Communications between a master and minions occur over
the ZeroMQ message bus.
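For example, commands can be issued from the master to targeted minions (the targets are illustrative):
salt '*' test.ping            # verify that every minion answers over the message bus
salt 'web*' cmd.run 'uptime'  # run a remote command on all matching minions
salt 'web*' state.apply       # apply the configured states to those minions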
Benefits of SaltStack
Being simple as well as feature-rich, Salt provides many benefits, which
can be summarized as below −
Robust − Salt is a powerful and robust configuration
management framework that works across tens of thousands
of systems.
Authentication − Salt manages simple SSH key pairs for
authentication.
Secure − Salt manages secure data using an encrypted
protocol.
Fast − Salt uses a very fast, lightweight communication bus to
provide the foundation for its remote execution engine.
Virtual Machine Automation − The Salt Virt cloud controller
capability is used for automation.
Infrastructure as data, not code − Salt provides a simple
deployment, model-driven configuration management, and
command execution framework.
o Fault tolerance
o Flexible
o Scalable Configuration Management
o Parallel Execution model
o Python API
o Easy to Setup.
What Is Ansible?
o Ansible is a popular IT automation engine that automates tasks that are either
repetitive or complex like configuration management, cloud provisioning,
software deployment, and intra-service orchestration.
o Ansible is used for multi-tier deployments; it models all of your IT
infrastructure in one deployment instead of handling each part separately.
There are no agents, and no custom security architecture is required in
the Ansible architecture.
o Deployments are described in a simple, plain-English-like language used in
Ansible called YAML, which stands for “YAML Ain’t Markup Language.”
o Working with Ansible is very easy; it pushes out small programs called
“Ansible Modules” to your nodes to connect. It deploys and connects using
SSH to execute the modules and then removes them when finished.
Ansible has over 750 modules built in.
o Ansible can use the inventory and variable information from other sources
such as Rackspace, EC2, and Openstack, etc.
o If you need to write your own code, you can also extend Ansible in languages
such as Python, Ruby, and Bash, as long as they return JSON. You can write
your own modules, APIs, and plugins.
o Playbooks are the simple and powerful automation language used to
orchestrate multiple infrastructures in one go. This can be done in Ansible.
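A minimal playbook sketch, written and run from the shell (the inventory group, package, and file names are illustrative):
cat > site.yml <<'EOF'
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
EOF
ansible-playbook -i inventory site.yml   # Ansible pushes the module to the nodes over SSH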
Docker run command options
Name: We use Docker to create containers, and users can give a new and
unique name to each container. Docker also gives a default name if none
is supplied.
it: Stands for interactive terminal. The terminal is connected to a virtual
TTY so that the running processes can interact with the output terminal.
busybox: The base image used to create the container. It is like a zip file
that contains the necessary files to deploy and develop the application.
echo: A command that is executed inside the busybox container.
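Put together, those pieces form a single run command (the container name and message are illustrative):
docker run -it --name demo busybox echo "Hello from busybox"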
sudo docker ps
7) Command to Kill Running Containers
The following command can be used to stop a container (the container ID
placeholder is illustrative):
docker stop <container-id>
A running container is stopped through this command, but the container is kept
in the cache even after it stops. The running containers can be listed again with
the following command:
docker ps
All running containers are listed by the above command, while both running and
non-running containers can be displayed with the following command:
docker ps -a