Lec 2

INT 331

FUNDAMENTALS OF DEVOPS
Microservices
• Microservices are an architectural approach to
develop software applications as a collection
of small, independent services that
communicate with each other over a network.
• Instead of building a monolithic application,
where all the functionality is tightly integrated
into a single codebase, microservices break the
application down into smaller, loosely coupled
services.
How do Microservices work
• Microservices work by breaking down a
complex application into smaller,
independent pieces that communicate
and work together, providing
flexibility, scalability, and easier
maintenance, much like constructing a
city from modular, interconnected
components.
• Modular Structure:
– Microservices architecture breaks down
large, monolithic applications into smaller,
independent services.
– Each service is a self-contained module
with a specific business capability or
function.
– This modular structure promotes flexibility,
ease of development, and simplified
maintenance.
• Independent Functions:
– Each microservice is designed to handle a
specific business function or feature.
– For example, one service may manage user
authentication, while another handles
product catalog functions.
– This independence allows for specialized
development and maintenance of each
service.
• Communication:
– Microservices communicate with each other
through well-defined Application
Programming Interfaces (APIs).
– APIs serve as the interfaces through which
services exchange information and
requests.
– This standardized communication enables
interoperability and flexibility in integrating
services.
• Flexibility:
– Microservices architecture supports the use
of diverse technologies for each service.
– This means that different programming
languages, frameworks, and databases can
be chosen based on the specific
requirements of each microservice.
– Teams have the flexibility to use the best
tools for their respective functions.
• Independence and Updates:
– Microservices operate independently,
allowing for updates or modifications to one
service without affecting the entire system.
– This decoupling of services reduces the risk
of system-wide disruptions during updates,
making it easier to implement changes and
improvements.
– Also Microservices contribute to system
resilience by ensuring that if one service
encounters issues or failures, it does not
bring down the entire system.
• Scalability:
– Microservices offer scalability by allowing
the addition of instances of specific
services.
– If a particular function requires more
resources, additional instances of that
microservice can be deployed to handle
increased demand.
– This scalability is crucial for adapting to
varying workloads.
• Continuous Improvement:
– The modular nature of microservices
facilitates continuous improvement.
– Development teams can independently
work on and release updates for their
respective services.
– This agility enables the system to evolve
rapidly and respond to changing
requirements or user needs.
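The communication pattern described above can be sketched with Python's standard library alone: a toy "catalog" service exposes a small JSON API over HTTP, and another service calls it through that API. All names, data, and ports below are illustrative assumptions, not from any real system.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A toy "catalog" microservice with its own small, well-defined JSON API.
class CatalogHandler(BaseHTTPRequestHandler):
    PRODUCTS = {"1": {"name": "Keyboard", "price": 49.99}}

    def do_GET(self):
        # Paths look like /products/<id>; anything unknown is a 404.
        product = self.PRODUCTS.get(self.path.rstrip("/").split("/")[-1])
        body = json.dumps(product or {"error": "not found"}).encode()
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

def start_catalog_service(port=0):
    """Start the catalog service on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), CatalogHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_product(base_url, product_id):
    """What another service (e.g. a cart service) would do: call the API."""
    with urlopen(f"{base_url}/products/{product_id}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_catalog_service()
    base = f"http://127.0.0.1:{server.server_address[1]}"
    print(fetch_product(base, "1"))
    server.shutdown()
```

The two sides share nothing but the HTTP interface, which is the point: either one could be rewritten in another language without the other noticing.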
Components of Microservices Architecture

• Microservices: These are the individual, self-
contained services that encapsulate specific business
capabilities. Each microservice focuses on a distinct
function or feature.
• API Gateway: The API Gateway is a central entry
point for external clients to interact with the
microservices. It manages requests, handles
authentication, and routes requests to the
appropriate microservices.
• Service Registry and Discovery: This component
keeps track of the locations and network addresses of
all microservices in the system. Service discovery
ensures that services can locate and communicate
with each other dynamically.
• Load Balancer: Load balancers distribute incoming
network traffic across multiple instances of microservices.
This ensures that the workload is evenly distributed,
optimizing resource utilization and preventing any single
service from becoming a bottleneck.
• Containerization: Containers, built with tools such as
Docker, encapsulate microservices and their dependencies.
Orchestration tools, like Kubernetes, manage the
deployment, scaling, and operation of containers, ensuring
efficient resource utilization.
• Event Bus/Message Broker: An event bus or message
broker facilitates communication and coordination
between microservices. It allows services to publish and
subscribe to events, enabling asynchronous
communication and decoupling.
• Centralized Logging and Monitoring: Centralized logging
and monitoring tools help track the performance and health of
microservices. They provide insights into system behavior,
detect issues, and aid in troubleshooting.
• Database per Microservice: Each microservice typically has
its own database, ensuring data autonomy. This allows services
to independently manage and scale their data storage according
to their specific requirements.
• Caching: Caching mechanisms can be implemented to improve
performance by storing frequently accessed data closer to the
microservices. This reduces the need to repeatedly fetch the
same data from databases.
• Fault Tolerance and Resilience Components: Implementing
components for fault tolerance, such as circuit breakers and
retry mechanisms, ensures that the system can gracefully
handle failures in microservices and recover without impacting
overall functionality.
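As a sketch of the fault-tolerance components just mentioned, here is a minimal circuit breaker in Python. The class name, thresholds, and state handling are illustrative assumptions, not a production library: after enough consecutive failures the breaker "opens" and fails fast instead of hammering a struggling service.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: opens after repeated failures, fails fast while open."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

While the breaker is open, callers get an immediate error they can handle (e.g. with a cached fallback) rather than waiting on timeouts, which is what keeps one failing service from dragging down the rest.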
Microservices Architecture vs Monolithic Architecture

Aspect | Microservices Architecture | Monolithic Architecture
Architecture Style | Decomposed into small, independent services. | Single, tightly integrated codebase.
Development Team Structure | Small, cross-functional teams for each microservice. | Larger, centralized development team.
Scalability | Independent scaling of individual services. | Scaling involves replicating the entire application.
Deployment | Independent deployment of services. | Whole application is deployed as a single unit.
Resource Utilization | Efficient use of resources as services can scale independently. | Resources allocated based on the overall application's needs.
Development Speed | Faster development and deployment cycles. | Slower development and deployment due to the entire codebase.
Flexibility | Easier to adopt new technologies for specific services. | Limited flexibility due to a common technology stack.
Maintenance | Easier maintenance of smaller, focused codebases. | Maintenance can be complex for a large, monolithic codebase.
Example
• Amazon’s online store is like a giant
puzzle made of many small, specialized
pieces called microservices. Each
microservice does a specific job to make
sure everything runs smoothly.
Together, these microservices work
behind the scenes to give you a great
shopping experience.
• User Service: Manages user accounts, authentication, and
preferences. It handles user registration, login, and profile
management, ensuring a personalized experience for users.
• Search Service: Powers the search functionality on the
platform, enabling users to find products quickly. It indexes
product information and provides relevant search results based
on user queries.
• Catalog Service: Manages the product catalog, including
product details, categories, and relationships. It ensures that
product information is accurate, up-to-date, and easily
accessible to users.
• Cart Service: Manages the user’s shopping cart, allowing them
to add, remove, and modify items before checkout. It ensures a
seamless shopping experience by keeping track of selected
items.
• Wishlist Service: Manages user wishlists, allowing them to
save products for future purchase. It provides a convenient way
for users to track and manage their desired items.
• Order Taking Service: Accepts and processes orders placed
by customers. It validates orders, checks for product availability,
and initiates the order fulfillment process.
• Order Processing Service: Manages the processing and
fulfillment of orders. It coordinates with inventory, shipping, and
payment services to ensure timely and accurate order delivery.
• Payment Service: Handles payment processing for orders. It
securely processes payment transactions, integrates with
payment gateways, and manages payment-related data.
• Logistics Service: Coordinates the logistics of order delivery. It
calculates shipping costs, assigns carriers, tracks shipments,
and manages delivery routes.
• Warehouse Service: Manages inventory across warehouses. It
tracks inventory levels, updates stock availability, and
coordinates stock replenishment.
• Notification Service: Sends notifications to users regarding
their orders, promotions, and other relevant information. It
keeps users informed about the status of their interactions with
the platform.
• Recommendation Service: Provides personalized product
recommendations to users. It analyzes user behavior and
preferences to suggest relevant products, improving the user
experience and driving sales.
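Services like Order Taking and Notification above can cooperate without calling each other directly by using the event-bus pattern described earlier. Here is a toy in-process version; a real deployment would use a message broker such as Kafka or RabbitMQ, and all names below are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe bus: topics map to lists of handler callables."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

if __name__ == "__main__":
    bus = EventBus()
    # The notification service reacts to order events without the order
    # service knowing it exists.
    bus.subscribe("order.placed", lambda e: print("notify user about", e))
    bus.publish("order.placed", {"order_id": 7})
```

The decoupling is the point: new subscribers (analytics, inventory) can be added without touching the publisher.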
Role of Microservices in
DevOps

• Continuous Integration/Continuous
Deployment (CI/CD):
– In a microservices architecture, each service can be
independently developed, tested, and deployed.
CI/CD pipelines are crucial for efficiently managing
the constant updates and releases associated with
microservices.
– DevOps practices emphasize CI/CD pipelines, which
involve automating the building, testing, and
deployment of software.
• Continuous Monitoring and Logging
– Microservices architecture requires robust
monitoring to track the health and
interactions between various services,
aiding in early issue detection and
resolution. DevOps emphasizes continuous
monitoring and logging for real-time
insights into application performance.
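The CI/CD idea above — each automated stage must pass before the next runs — can be sketched as a tiny pipeline runner. The stage names and structure are illustrative, not from any particular CI system.

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None).
    """
    completed = []
    for name, stage in stages:
        if not stage():  # a stage returning False halts the pipeline
            return completed, name
        completed.append(name)
    return completed, None

if __name__ == "__main__":
    stages = [
        ("build", lambda: True),
        ("test", lambda: True),
        ("deploy", lambda: True),
    ]
    print(run_pipeline(stages))
```

In a microservices setup, each service would own its own pipeline like this, so a failing test in one service never blocks deployments of the others.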
Benefits of using Microservices Architecture

• Modularity and Decoupling:
– Independent Development: Microservices are
developed and deployed independently,
allowing different teams to work on
different services simultaneously.
– Isolation of Failures: Failures in one
microservice do not necessarily affect
others, providing increased fault isolation.
• Scalability:
– Granular Scaling: Each microservice can
be scaled independently based on its
specific resource needs, allowing for
efficient resource utilization.
– Elasticity: Microservices architectures can
easily adapt to varying workloads by
dynamically scaling individual services.
• Technology Diversity:
– Freedom of Technology: Each microservice can
be implemented using the most appropriate
technology stack for its specific requirements,
fostering technological diversity.
• Autonomous Teams:
– Team Empowerment: Microservices often enable
small, cross-functional teams to work
independently on specific services, promoting
autonomy and faster decision-making.
– Reduced Coordination Overhead: Teams can
release and update their services without requiring
extensive coordination with other teams.
• Rapid Deployment and Continuous
Delivery:
– Faster Release Cycles: Microservices can
be developed, tested, and deployed
independently, facilitating faster release
cycles.
– Continuous Integration and
Deployment (CI/CD): Automation tools
support continuous integration and
deployment practices, enhancing
development speed and reliability.
• Easy Maintenance:
– Isolated Codebases: Smaller, focused
codebases are easier to understand,
maintain, and troubleshoot.
– Rolling Updates: Individual microservices
can be updated or rolled back without
affecting the entire application.
Challenges of using Microservices
Architecture
• Complexity of Distributed Systems: Microservices introduce
the complexity of distributed systems. Managing communication
between services, handling network latency, and ensuring data
consistency across services can be challenging.
• Increased Development and Operational Overhead: The
decomposition of an application into microservices requires
additional effort in terms of development, testing, deployment,
and monitoring. Teams need to manage a larger number of
services, each with its own codebase, dependencies, and
deployment process.
• Inter-Service Communication Overhead: Microservices need
to communicate with each other over the network. This can
result in increased latency and additional complexity in
managing communication protocols, error handling, and data
transfer.
• Data Consistency and Transaction
Management: Maintaining data consistency across
microservices can be challenging. Implementing distributed
transactions and ensuring data integrity becomes complex, and
traditional ACID transactions may not be easily achievable.
• Deployment Challenges: Coordinating the deployment of
multiple microservices, especially when there are dependencies
between them, can be complex. Ensuring consistency and
avoiding service downtime during updates require careful
planning.
• Monitoring and Debugging Complexity: Monitoring and
debugging become more complex in a microservices
environment. Identifying the root cause of issues may involve
tracing requests across multiple services, and centralized
logging becomes crucial for effective debugging.
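One common answer to the inter-service communication and error-handling overhead above is retrying transient failures with exponential backoff. A minimal sketch follows; the function name, defaults, and exception choice are illustrative assumptions rather than a specific library's API.

```python
import time

def retry(func, attempts=3, base_delay=0.1, retry_on=(ConnectionError,)):
    """Call func(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

Note that blind retries can amplify load on an already-struggling service, which is why retries are usually paired with the circuit breakers mentioned earlier.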
Real-World Examples of Companies using Microservices
Architecture

• Amazon: Initially, Amazon was a monolithic application,
but it was among the first platforms to break its
application into small components, thereby adopting
microservices. The ability to change individual features
and resources improved the site's functionality to a
massive extent.

• Netflix: Netflix is one such company that uses microservices
with APIs. In 2007, as Netflix began its move towards a
movie-streaming service, it suffered huge service outages and
challenges; the microservice architecture it then adopted
proved a blessing for the platform.

• Uber: When Uber switched from a monolithic architecture to
microservices, the transition went smoothly. Using microservice
architecture, webpage views and searches increased to a
great extent.
Technologies that enables microservices
architecture
• Docker is a containerization platform
that allows developers to package
applications and their dependencies into
lightweight, portable containers. These
containers encapsulate everything
needed to run the application, including
code, runtime, libraries, and system
tools, ensuring consistency across
different environments.
• Kubernetes:
– Kubernetes is an open-source container orchestration platform
originally developed by Google. It automates the deployment,
scaling, and management of containerized applications, providing
features for container scheduling, service discovery, load
balancing, and more.

• Service Mesh:
– Service mesh technologies like Istio and Linkerd
provide a dedicated infrastructure layer for
handling service-to-service communication, traffic
management, and observability in microservices
architectures. They offer features like load
balancing, service discovery, circuit breaking, and
metrics collection.
• API Gateways:
– API gateways such as Kong and Tyk serve as entry points for
external clients to access microservices-based applications.
They provide functionalities like routing, authentication, rate
limiting, and request/response transformations.
• Serverless Computing:
– While not exclusive to microservices,
serverless platforms like AWS Lambda,
Azure Functions, and Google Cloud
Functions can be used to deploy individual
microservices without managing the
underlying infrastructure, further
decoupling and scaling services.
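The routing half of what gateways like Kong or Tyk do can be sketched as simple prefix matching. This is a toy illustration only — the route table, service names, and ports are made up, and real gateways also handle authentication, rate limiting, and transformations.

```python
# Hypothetical route table: path prefix -> backend service base URL.
ROUTES = {
    "/users": "http://user-service:8001",
    "/products": "http://catalog-service:8002",
}

def route(path):
    """Return the backend URL a request path should be forwarded to."""
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend + path
    raise LookupError(f"no route for {path}")

if __name__ == "__main__":
    print(route("/users/42"))
```

External clients see a single entry point, while the gateway fans requests out to whichever internal service owns each path.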
What Is Container(ization)
• Containerization is a process of packaging your
application together with its dependencies into
one package (a container). Such a package can
then be run pretty much anywhere, no matter
if it’s an on-premises server, a virtual machine
in the cloud, or a developer’s laptop. By
abstracting the infrastructure, containerization
allows you to make your application truly
portable and flexible.
• Before we dive into explaining containers in
the context of DevOps, let’s first talk about
containers themselves.
• Containers are lightweight pieces of software
that contain all the code, libraries, and
dependencies that an application needs to
run. Containers do not have their own
operating system; they get resources from the
host operating system, which is what makes
them lightweight. They are also easily portable,
as they carry all the libraries and
dependencies needed to run the application.
Problems of Traditional Applications
• Traditionally, to install and run any software on any server, you need to
meet several requirements. The software needs to support your
operating system, and you probably need to have a few libraries and
tools installed to run the software. But that's not all.
• All these requirements probably need to be in a specific version and
accessible at a specific path. Also, sometimes you may need
proper permissions for certain directories and files. Overall, there are
quite a few checkboxes you need to tick before you can successfully
run the software.
• These requirements create certain problems. First, what if you already
have some of the tools or libraries installed, but in an unsupported
version? You'd have to upgrade or downgrade them, hoping it won't
break any existing software. The problems don't end once requirements
are met and the application is running correctly, though.
• What if you want to run the application in another cloud environment?
You'll have to start the process all over again. Containers are meant to
solve these problems.
What Is Docker
• Docker is a containerization platform
that allows you to package code and
dependencies into a Docker image that
can be run on any machine.
• Docker allows your application to be
separated from your infrastructure. The
image that you created is portable, so
you can run it on any machine that has
Docker installed.
• Docker is the containerization platform
used to package your application
and all its dependencies together in the
form of containers, to make sure that
your application works seamlessly in
any environment, whether development,
testing, or production. Docker is a tool
designed to make it easier to create,
deploy, and run applications by using
containers.
• Docker is the world’s leading software container
platform. It was launched in 2013 by a company
called dotCloud, Inc., which was later renamed Docker,
Inc. It is written in the Go language. Within only a few
years of its launch, communities had already shifted to
it from VMs. Docker is designed to benefit both
developers and system administrators, making it a part
of many DevOps toolchains. Developers can write code
without worrying about the testing and production
environments. Sysadmins need not worry about
infrastructure, as Docker can easily scale the number
of systems up and down. Docker comes into play at the
deployment stage of the software development cycle.
Docker Architecture
• Docker architecture consists of Docker client, Docker Daemon running on Docker
Host, and Docker Hub repository. Docker has client-server architecture in which
the client communicates with the Docker Daemon running on the Docker Host
using a combination of REST APIs, Socket IO, and TCP. If we have to build the
Docker image, then we use the client to execute the build command to Docker
Daemon then Docker Daemon builds an image based on given inputs and saves
it into the Docker registry. If you don’t want to create an image then just execute
the pull command from the client and then Docker Daemon will pull the image
from the Docker Hub finally if we want to run the image then execute the run
command from the client which will create the container.
DOCKER ARCHITECTURE

There is one more command, docker push, which pushes the
image into the registry.
Components of Docker
• Docker Client and Server – Docker
has a client-server architecture. The
Docker daemon/server manages all
containers. The daemon receives
requests from the Docker client
through the CLI or REST APIs and
processes them accordingly. The
Docker client and daemon can be
present on the same host or on
different hosts.
• Docker Images – Docker images are read-only templates used
to build Docker containers. The foundation of every image is a
base image, e.g. Ubuntu 14.04 LTS or Fedora 20. Base images
can also be created from scratch, and required applications can
then be added to the base image by modifying it; this process of
creating a new image is called "committing the change".

• Dockerfile – A Dockerfile is a text file that contains a series of
instructions on how to build your Docker image. The resulting
image contains all the project code and its dependencies.
Instructions you can use in your Dockerfile include FROM,
CMD, ENTRYPOINT, VOLUME, ENV, and many more.
• Docker Registries– Docker Registry is a storage
component for Docker images. We can store the
images in either public/private repositories so that
multiple users can collaborate in building the
application.
• Docker Containers – Docker containers are runtime
instances of Docker images. Containers contain the
whole kit required for an application, so the
application can be run in an isolated way. For
example, suppose there is an image of Ubuntu with an
NGINX server; when this image is run with the docker
run command, a container is created and the NGINX
server runs on Ubuntu.
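To illustrate the Dockerfile component described above, here is a minimal, hypothetical Dockerfile for a Python application. The base image, file names, and command are illustrative assumptions, not from any particular project.

```dockerfile
# Start from a base image (here, an official slim Python image).
FROM python:3.12-slim
# Set the working directory inside the image.
WORKDIR /app
# Copy dependency list and install dependencies (file names are illustrative).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the project code into the image.
COPY . .
# Environment variable available to the running container.
ENV APP_ENV=production
# Default command executed when a container starts from this image.
CMD ["python", "app.py"]
```

Building this file with docker build produces an image that bundles code and dependencies together, which is exactly the "committing the change" idea described earlier, automated and repeatable.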
What Is Kubernetes?
• Kubernetes is an open-source container orchestration
framework that was originally developed by Google.
Container orchestration is automation: it lets you deploy
the same application across different environments, such
as physical machines, virtual machines, cloud
environments, or hybrid deployment environments, and
makes the management, scaling, and networking of
containers easier.
• The original name for Kubernetes within
Google was Project Seven; the name
Kubernetes itself originates from Greek.
In 2014, Kubernetes was released for the
first time and made open source, after
Google had been running containerized
production workloads at scale for about
a decade. Pure open-source Kubernetes
is free and can be downloaded from its
repository on GitHub.
• Kubernetes is a container orchestration platform with
which you can automate the deployment and scaling of
an application depending on its traffic. Containers are
lightweight and can be moved very easily from one
server to another, which makes them ideal for running
containerized applications in production. Key features
include:
• Load Balancing
• Service Discovery
• Self-healing
• Horizontal scaling
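Features like self-healing and horizontal scaling are usually expressed declaratively in a manifest. A minimal, hypothetical Deployment manifest (the service and image names are made up) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service        # hypothetical service name
spec:
  replicas: 3                  # Kubernetes keeps 3 pods running (self-healing)
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example.com/catalog:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If a pod crashes, the controller notices that fewer than 3 replicas exist and starts a replacement; raising `replicas` is how horizontal scaling is requested.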
Architecture of Kubernetes
• Consider how big applications like Netflix, Amazon, and
Hotstar work: when we watch Hotstar, millions of people
are watching at the same time, smoothly and without
lag. The approach that makes this possible is
Kubernetes; all these applications run on it.
• Kubernetes is an open-source container orchestration
tool, used to manage containers and their deployment:
auto scaling, creation, deletion, and automation.
• Suppose my application is running on containers, e.g. 4
containers, and the application becomes popular, with a
lot of people visiting my website, so I need more
containers on the same server. Eventually a point comes
when the capacity of that server is exhausted.
• At that point, Kubernetes takes over these
responsibilities across multiple servers.
• The control plane is the Kubernetes master
node.
• Understanding the Master Node
• kube-apiserver: the frontend of the cluster, which allows
you to interact with the Kubernetes API and connects
to the etcd database.
• kube-scheduler: schedules pods onto specific nodes
based on labels, taints, and tolerations set for pods.
• etcd: a database that stores all cluster data, including
job scheduling info, pod details, state information, etc.
• kube-controller-manager: manages the current
state of the cluster. If at any point some pods are
destroyed for any reason, it creates that many pods
again to restore the desired state.
• cloud-controller-manager: interacts with the outside
cloud provider.
• Understanding the Worker Node
• We wouldn't get anywhere without worker nodes, though.
These are the nodes where your applications
operate. The worker nodes communicate back with the master
node; communication to a worker node is handled by the
kubelet process.
• kubelet: passes requests to the container engine to ensure that
pods are available.
• kube-proxy: runs on every node and uses iptables to provide
an interface to connect to Kubernetes components.
• container runtime: takes care of actually running containers.
• network agent: implements a software-defined networking
solution.
• Containers of an application are tightly
coupled together in a Pod. By definition,
a Pod is the smallest unit that can be
scheduled for deployment in Kubernetes.
Once Pods have been deployed and are
running, the kubelet process
communicates with the Pods to check on
their state and health, and the
kube-proxy routes any packets to the
Pods from other resources that might
want to communicate with them.
Aspect | Docker Containers | Kubernetes
Definition | Container management tool for building, running, and managing containers | Container orchestration platform for automating deployment, scaling, and management of containerized applications
Purpose | To package applications with their dependencies into containers | To orchestrate and manage containerized applications across a cluster of machines
Deployment | Uses Docker CLI or Docker Compose for managing individual containers or multi-container applications | Uses kubectl and manifests (YAML files) for managing containers at scale within clusters
Scaling | Manual scaling using Docker CLI commands or Docker Compose | Automated scaling using Kubernetes controllers like Deployments and StatefulSets
Networking | Docker networking for linking containers within the same host or network | Advanced networking with built-in service discovery and load balancing
• Advantages Of Kubernetes Containers
• The following are the advantages of kubernetes containers:
• Scalability: Kubernetes allows for easy scaling of applications by
increasing or decreasing the number of replicas of a particular service.
• High availability: Kubernetes provides features such as self-healing
and automatic failover, which help ensure that applications remain
available even in the event of a node failure.
• Portability: Kubernetes is designed to be platform-agnostic, which
means that applications can be deployed on any infrastructure, whether
it be on-premises, in the cloud, or at the edge.
• Automation: Kubernetes automates many of the tasks associated with
deploying and managing applications, such as rolling updates, service
discovery, and load balancing.
• Flexibility: Kubernetes allows for the use of multiple orchestration
patterns, such as blue-green deployment, canary releases, and A/B
testing, which gives developers more flexibility in how they deploy their
applications.
• Disadvantages Of Kubernetes Containers
• The following are the disadvantages of kubernetes containers:
• Complexity: Kubernetes can be complex to set up and manage,
especially for organizations that are new to container
orchestration.
• Steep learning curve: There is a steep learning curve for
understanding how to use Kubernetes effectively, and for
troubleshooting issues that may arise.
• Limited native support for certain technologies:
Kubernetes has historically had limited native support for certain
technologies, such as Windows containers, which can create
challenges for organizations that use them.
• Networking complexity: Kubernetes networking can be
complex, especially when working with multiple clusters or when
trying to integrate with existing network infrastructure.
• Higher resource requirements: running a kubernetes
cluster can consume more resources than running a traditional
application, which can make it more expensive to operate.
