Full Stack
Deployment Process:
The CI/CD build process flow diagram for an online application typically
involves multiple stages and components to automate the building, testing,
and deployment of the application. Below is a simplified representation of
the CI/CD process flow.
• Source Code (Version Control): This is the central repository where
developers store and manage the application's source code. Popular
version control systems include Git, SVN, etc. Developers push code
changes to this repository.
• CI Server (Continuous Integration): The CI server monitors the version
control system for code changes. Whenever a new commit is pushed or a
pull request is submitted, the CI server is triggered. Its primary purpose is
to automate the integration of code changes into a shared repository and
perform various automated tasks.
• Automated Build (Build Server): Upon triggering, the CI server initiates
an automated build process. It compiles the source code, gathers
dependencies, and generates a build artifact (e.g., executable, binary, or
container image). This artifact represents the built application.
• Automated Unit Tests and Code Analysis: After the build, the CI server
runs automated unit tests to check the functionality and correctness of
the application. Additionally, it may perform static code analysis to
identify potential issues, bugs, or code style violations.
• Automated Testing Environment: This is an isolated environment where the
application is deployed for automated testing. It simulates the production
environment but may have fewer resources. Automated integration tests,
regression tests, and other tests are conducted here.
• Deployment to Staging Environment: If all the previous stages (build and
automated tests) are successful, the application is deployed to a staging
environment. The staging environment closely mirrors production and is used
for final verification before release.
• Continuous Deployment: Once a build has passed every automated stage, it
can be promoted to production automatically.
Static code analysis tools (e.g., ESLint, SonarQube) check code for
coding standards, security vulnerabilities, and code quality.
Integrate static code analysis into your CI/CD pipeline to catch issues early.
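To make the stages concrete, here is a minimal sketch of the pipeline expressed
as a shell script, assuming a Node.js project; the image name myapp, the
registry URL, the staging host, and the GIT_COMMIT variable are all hypothetical,
and a real pipeline would encode the same stages in the CI server's own
configuration format:

  # ci.sh — simplified pipeline: build, test, analyze, deploy to staging
  set -e                                   # abort on the first failing stage

  npm install                              # gather dependencies
  npm run lint                             # static code analysis (e.g., ESLint)
  npm test                                 # automated unit tests
  docker build -t myapp:"$GIT_COMMIT" .    # produce the build artifact (container image)
  docker tag myapp:"$GIT_COMMIT" registry.example.com/myapp:"$GIT_COMMIT"
  docker push registry.example.com/myapp:"$GIT_COMMIT"    # publish the artifact
  # deploy the artifact to the staging environment
  ssh deploy@staging.example.com \
    "docker pull registry.example.com/myapp:$GIT_COMMIT && \
     docker run -d registry.example.com/myapp:$GIT_COMMIT"

Each command maps to one stage in the flow above; because of set -e, a failure
in any stage stops the pipeline before the artifact reaches staging.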
Why Containers?
Containers package an application together with its dependencies so that it
runs consistently across environments, and they are far more lightweight
than full virtual machines.
What is Docker?
Docker is an open-source platform for building, shipping, and running
applications inside containers.
Docker Components
These are the Docker components:
1. Docker Client: The Docker client enables users to interact with Docker.
Docker runs in a client-server architecture, which means the Docker client
can connect to the Docker host locally or remotely.
The Docker client and host (daemon) can run on the same machine or on
different machines and communicate through sockets or a RESTful API.
The Docker client is the primary way that many Docker users interact with
Docker. When we use commands such as docker run, the client sends these
commands to the Docker daemon, which carries them out. The docker command
uses the Docker API, and the Docker client can communicate with more than
one daemon. We communicate with the Docker client through the Docker CLI,
using commands such as docker build, docker run, and docker push; the
client then passes those commands to the Docker daemon.
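For example, a typical client-side session looks like this (the image and
repository names are placeholders):

  docker build -t myapp:1.0 .          # client asks the daemon to build an image from the local Dockerfile
  docker run -d -p 8080:80 myapp:1.0   # daemon starts a container, mapping host port 8080 to container port 80
  docker tag myapp:1.0 myuser/myapp:1.0
  docker push myuser/myapp:1.0         # daemon pushes the retagged image to a registry

In each case the docker CLI only sends the request; the daemon does the
actual work.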
Steps to install Docker Desktop on Windows:
1. Go to https://docs.docker.com/desktop/install/windows-install/
2. Click on Docker Desktop for Windows and download the installer.
3. Double-click on Docker Desktop Installer, install it, and restart your PC.
4. System requirements:
1. Windows 10 64-bit: Home or Pro 21H2 (build 19045) or higher. (To
check your Windows version, press Win+R and run winver.)
2. Type "Windows features" in the search bar and enable Virtual Machine Platform.
3. Restart your PC.
4. Install WSL version 1.1.3.0 or later.
Steps to install WSL:
1. Check for Windows 10 version 1607 or later.
2. Virtualization capabilities must be enabled in BIOS/UEFI settings.
How to enable it:
Shift + Shutdown -> UEFI settings -> BIOS settings -> enable
Virtualization Technology.
3. Open cmd as administrator and run this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
4. Run the command wsl --update, then verify with wsl --version.
5. Open Docker Desktop; it is ready to use.
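Once installation finishes, a quick sanity check (using Docker's standard
hello-world image) confirms that the client can reach the daemon:

  docker --version        # confirm the client is installed
  docker run hello-world  # pulls a tiny test image and runs it; success means the daemon works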
Orchestration
Orchestration, in the context of containerization and container management,
refers to the automated and coordinated management of containers and their
associated resources. It involves tasks such as container deployment,
scaling, load balancing, health monitoring, and more. Orchestration ensures
that containers are deployed and managed efficiently and reliably in a
distributed environment.
KUBERNETES
Features of K8s
1. Orchestration (clustering any number of containers running on different
networks)
2. Autoscaling (vertical+horizontal)
3. Auto healing
4. Load Balancing
5. Platform Independent
6. Fault tolerance
7. Rollback (going back to previous versions)
8. Health Monitoring of Containers
9. Logging and Monitoring (an inbuilt tool is present for basic monitoring;
third-party tools like Splunk are used for advanced logging and monitoring)
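Several of these features can be exercised directly from the command line;
for instance, assuming a hypothetical deployment named web:

  kubectl scale deployment web --replicas=5                          # horizontal scaling, done manually
  kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80 # horizontal autoscaling
  kubectl rollout undo deployment/web                                # rollback to the previous version
  kubectl rollout status deployment/web                              # monitor rollout health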
Working with Kubernetes (Kubernetes Working Model)
Master node components:
a) kube-apiserver
● Front end of the control plane; all requests to create and manage
cluster objects go through the API server
b) etcd Cluster
● Stores metadata and status of the cluster
● etcd is a consistent and highly available key-value store
● Source of truth for cluster state (i.e., information about the state of the cluster)
c) Kube-Scheduler
● When users make requests for the creation and management of
pods, the kube-scheduler acts on these requests
● Handles pod creation and management
● The kube-scheduler matches/assigns a node on which to create and run each pod
● The scheduler watches for newly created pods that have no node assigned
● For every pod that the scheduler discovers, the scheduler becomes
responsible for finding the best node for that pod to run on
● The scheduler gets hardware configuration information from
configuration files and schedules the pods on nodes accordingly
d) Controller Manager
● Makes sure the actual state of the cluster matches the desired state
● Two possible choices for the controller manager:
1. If K8s runs on a cloud, it will be the cloud-controller-manager
2. If K8s runs on non-cloud infrastructure, it will be the kube-controller-manager
Worker node components:
a) kube-proxy
● Involved in making sure each pod gets its own IP address; pod IP
addresses are assigned dynamically
● kube-proxy runs on each node (not on each pod) and maintains the
network rules that make every pod reachable at its unique IP address
b) kubelet
● Agent running on each node
● Listens to the Kubernetes master (e.g., for pod creation requests)
● Uses port 10255 as default
● Sends success or failure reports to the master
c) Container Engine (e.g., Docker)
● Works with the kubelet
● Pulls images
● Starts / stops containers
● Exposes containers on the ports specified in the manifest
d) Pod
● Smallest deployable unit in Kubernetes
● A pod is a group of one or more containers that are deployed
together on the same host
● A cluster is a group of nodes
● A cluster has at least one master node and one worker node
● In Kubernetes the unit of control is the pod, not the container
● Consists of one or more tightly coupled containers
● A pod runs on a node, which is controlled by the master
● K8s only knows about pods (it does not know about individual containers)
● You cannot start a container without a pod
● One pod usually contains one container
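As a small illustration, the following creates a single-container pod; the
pod name demo-pod and the nginx image tag are arbitrary choices for the example:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-pod
  spec:
    containers:
    - name: web              # one tightly coupled container inside the pod
      image: nginx:1.25
  EOF
  kubectl get pod demo-pod -o wide   # shows the node the scheduler assigned the pod to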
Deployment strategies
Deployment strategies are approaches used in software development and
release processes to ensure smooth and controlled transitions from one
version of an application to another. Two common deployment strategies are
Blue-Green Deployment and Canary Deployment.
Blue-Green Deployment:
Blue-Green Deployment is a deployment strategy that involves maintaining two
separate environments: the "Blue" environment (the currently running version) and
the "Green" environment (the new version). The process typically unfolds as follows:
a. The new version is deployed to the Green environment while the Blue
environment continues to serve all user traffic.
b. The new version is tested in the Green environment.
c. After deploying and thoroughly testing the new version in the Green
environment, you switch the traffic from the Blue environment to the Green
environment. This means that users start interacting with the new version.
d. If any issues or problems arise after the switch, you can quickly revert
back to the Blue environment, which still contains the previous version.
Canary Deployment:
In a Canary Deployment, the new version is first released to a small subset
of users or servers, and the rollout is gradually widened as confidence in
the release grows. Canary Deployment allows you to catch issues early and
reduces the impact of any potential problems by limiting the exposure of the
new version initially. It's especially useful when you want to test a new
release in a real-world environment without affecting all users at once.
Both Blue-Green Deployment and Canary Deployment are effective strategies for
minimizing deployment risks and ensuring a smooth transition to new versions of
your application. The choice between them depends on your specific requirements,
infrastructure, and the level of control you need over the deployment process.
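On Kubernetes, for example, the blue-to-green traffic switch in step c can be
a one-line change to a Service selector; the service name myapp and the
version labels here are hypothetical:

  kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'  # step c: route traffic to green
  kubectl patch service myapp -p '{"spec":{"selector":{"version":"blue"}}}'   # step d: instant rollback to blue

Because both environments stay running, the rollback in step d is just the
reverse of the switch.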
Disaster recovery
Disaster recovery (DR) is a set of strategies, policies, and procedures
designed to ensure an organization can recover its IT systems and data after
a disaster or disruptive event. Disasters can take various forms, including
natural disasters (e.g., hurricanes, earthquakes), human-made disasters
(e.g., cyberattacks, data breaches), or even hardware and software failures.
An effective disaster recovery plan is crucial for business continuity and
minimizing downtime.
Here are the key elements of a disaster recovery plan:
Business Impact Analysis (BIA):
BIA involves assessing the criticality of various systems, applications, and
data to the organization. It helps prioritize recovery efforts and allocate
resources effectively.
Risk Assessment:
Identify potential risks and threats that could disrupt your IT systems and
data. Analyze the impact of these risks on your organization.
Recovery Objectives:
Define recovery time objectives (RTO) and recovery point objectives (RPO)
for each system and application.
RTO is the maximum acceptable downtime, and RPO is the maximum acceptable
data loss, usually expressed as a window of time.
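As a concrete illustration of RPO (the database name and backup path below
are hypothetical): if the only copy of a database is a nightly dump, the RPO
is effectively 24 hours, because a disaster just before the next backup loses
up to a day of data.

  # run nightly (e.g., from cron) — a disaster then loses at most ~24h of data
  pg_dump mydb | gzip > /backups/mydb-$(date +%F).sql.gz

Tightening the RPO means backing up or replicating more frequently;
tightening the RTO means investing in faster restore paths such as standby
systems.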
Disaster Recovery Team:
Appoint and train a disaster recovery team responsible for executing the
recovery plan. Assign roles and responsibilities within the team.
Communication Plan:
Establish how the team, employees, and other stakeholders will be notified
and kept informed during and after a disaster.
Load balancing
Traffic Distribution:
The load balancer evenly distributes incoming requests among the available
servers in the server pool. This prevents any single server from becoming a
bottleneck and ensures that resources are utilized efficiently.
Scalability:
Load balancing facilitates horizontal scalability, allowing organizations to add
or remove servers from the server pool based on demand. This helps
accommodate varying levels of traffic and ensures optimal performance
during peak periods.
Health Monitoring:
Load balancers continuously monitor the health and status of servers in the
pool. If a server becomes unavailable or experiences issues, the load
balancer automatically redirects traffic to healthy servers, avoiding
disruptions.
Session Persistence:
In some cases, it's essential to maintain session persistence, ensuring that a
user's requests are consistently directed to the same server. Load balancers
can manage session persistence by using techniques such as cookie-based
affinity or IP address affinity.
SSL Termination:
Load balancers can offload the SSL/TLS encryption and decryption process
from the backend servers, known as SSL termination. This helps improve
server efficiency and performance.
Content-based Routing:
Load balancers can make routing decisions based on the content of the
incoming requests, directing specific types of traffic to designated
servers. This is useful for applications with different service
requirements.
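To ground a few of these concepts, here is a minimal sketch of an nginx
load-balancer configuration, written as a shell heredoc; the backend hosts
app1 and app2 and the certificate paths are hypothetical:

  cat > nginx.conf <<'EOF'
  events {}
  http {
    upstream backend {
      server app1:8080;      # server pool: requests are distributed
      server app2:8080;      # round-robin by default
      # ip_hash;             # uncomment for IP-address affinity (session persistence)
    }
    server {
      listen 443 ssl;                          # SSL termination happens here,
      ssl_certificate     /etc/nginx/cert.pem; # not on the backend servers
      ssl_certificate_key /etc/nginx/key.pem;
      location /api/ { proxy_pass http://backend; }  # content-based routing by path
      location /     { proxy_pass http://backend; }
    }
  }
  EOF

This single config touches traffic distribution, session persistence, SSL
termination, and content-based routing; a production setup would add health
checks and separate backend pools per route.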
Application Monitoring:
Application monitoring is essential for several reasons:
1. Performance Optimization:
Identify and address performance bottlenecks to ensure optimal user
experience.
2. Fault Detection:
Detect and diagnose issues promptly to minimize downtime and disruptions.
3. Capacity Planning:
Analyze resource usage trends to plan for scalability and resource allocation.
4. User Experience:
Ensure that end-users have a seamless and satisfactory experience
with the application.
5. Security:
Monitor for security threats and vulnerabilities to protect sensitive data.
Types of Application Monitoring:
3. Infrastructure Monitoring:
Monitors the underlying infrastructure, including servers, databases, and
network components, to ensure they are operating efficiently.
4. Transaction Tracing:
Traces the flow of transactions across the application, helping identify
bottlenecks and performance issues.
5. Log Analysis:
Analyzes logs for error detection, troubleshooting, and gaining
insights into application behavior.
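A trivial shell sketch of the monitoring idea (the health endpoint is
hypothetical; dedicated tools such as those below do this continuously and
at far greater depth):

  # probe the application's health endpoint and alert on a non-200 response
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' https://myapp.example.com/health)
  if [ "$STATUS" != "200" ]; then
    echo "ALERT: health check failed with HTTP $STATUS"
  fi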
Popular Application Monitoring Tools:
2. New Relic:
Offers end-to-end monitoring, transaction tracing, and real-time analytics.
Provides insights into application performance, user
experience, and infrastructure.
3. Dynatrace:
Utilizes AI-driven monitoring for automatic problem detection and root
cause analysis.
Offers full-stack monitoring and supports various technologies.
4. AppDynamics:
Focuses on application and business performance monitoring.
Provides real-time visibility into application performance and user experience.
5. Datadog:
Offers cloud-based monitoring for infrastructure, applications, and
logs. Supports integrations with a wide range of technologies.
6. Splunk:
Known for log analysis and monitoring of machine data.
Provides customizable dashboards and supports a wide array of data sources.