presentation-notes

The document outlines a cloud-native architecture that leverages containers, DevSecOps, and microservices to enhance application deployment and management. Key components include automated CI/CD processes, real-time monitoring with Grafana and Prometheus, and a robust technical architecture that ensures scalability and security. The system employs token-based authentication and integrates with various authentication providers for secure access.


Slide 7: Cloud Native Architecture (5 Minutes)

Overview:
Cloud-native is a modern software approach for building, deploying, and managing applications
in cloud environments. Our system is designed using this architecture to take full advantage of
the cloud's capabilities.

Key Components:

1. Containers: Lightweight, portable environments for running applications consistently
across different environments.
2. DevOps / DevSecOps: Integrates development, operations, and security to enable faster,
more secure delivery.
3. CI/CD: Automated processes for Continuous Integration and Continuous Delivery,
ensuring rapid development and deployment cycles.
4. Microservices: Breaking down applications into smaller, independent services that can
be developed, deployed, and scaled independently.

Benefits:

 Scalability: Applications can dynamically scale based on demand.
 Resilience: High availability and fault tolerance through distributed architecture.
 Maintainability: Simplified updates and easier debugging, enabling faster innovation.

All these processes are fully automated—from development and testing to deployment and
hardware provisioning—ensuring a seamless and efficient workflow.

Slide 8: Containers (5 Minutes)

Overview:
Containers offer a lightweight and portable solution for packaging and deploying applications
and services. They ensure that applications behave the same way across different environments,
such as development, staging, and production. This consistency is achieved by bundling the
application along with its dependencies, such as runtime environments and system libraries, into
a single unit.

The Challenge in the Past:


In earlier days, software deployment was more complex. Teams were often separated into
Development and Operations. The development team would write and test the software in their
environment, but when the operations team attempted to deploy it, issues often arose due to
differences in environments. These environmental discrepancies could include:

 Operating Systems (OS): Different team environments might use different versions or
configurations of the OS.
 Runtime Environments (e.g., JVM): Differences in versions or configurations of Java
Virtual Machine (JVM) or other runtime tools could lead to inconsistent behavior.

How Containers Solve This Problem:


Containers solve this problem by encapsulating everything the application needs—OS, runtime,
libraries, and dependencies—into a single, immutable package. This means that:

 Consistency: The application behaves exactly the same in development, testing, staging,
and production environments.
 Portability: Containers can run on any machine that supports containerization, whether
it's a developer's laptop, a test server, or a cloud platform.
 Isolation: Each container is isolated from others, ensuring that dependencies and
configurations do not conflict.
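As an illustration, the "single, immutable package" idea can be expressed in a minimal Dockerfile; the image name and jar path below are placeholders, not our actual build:

```dockerfile
# Hypothetical example: bundle a Java service with its exact runtime,
# so it behaves identically on a laptop, a test server, or in the cloud.
FROM eclipse-temurin:17-jre          # pinned JVM version ships inside the image
WORKDIR /app
COPY build/libs/service.jar .        # the application plus its bundled libraries
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "service.jar"]
```

Because the JVM version is baked into the image, the runtime-mismatch problem described above cannot occur between environments.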

Benefits:

 Simplified Deployment: No more surprises when moving from one environment to
another.
 Efficiency: Developers and operations teams can work in parallel with minimal friction.
 Scalability: Containers can easily be scaled to meet demand, and orchestrators like
Kubernetes can automate much of this scaling process.
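In Kubernetes, this automated scaling can be declared rather than scripted. A sketch of a HorizontalPodAutoscaler (all names and thresholds here are illustrative):

```yaml
# Illustrative HorizontalPodAutoscaler: Kubernetes adds or removes
# replicas of "web" to keep average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```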

By using containers, we can ensure a smooth, reliable, and consistent deployment pipeline from
development to production.

Slide 9: DevSecOps (5 Minutes)

DevSecOps (Development, Security, and Operations) is a framework that integrates security into all
phases of the software development lifecycle (SDLC). Unlike traditional development models where
security is addressed at the end, DevSecOps embeds security from the start, ensuring that applications
are built, tested, and deployed with security best practices.

Development Phase

This phase focuses on code development, version control, and security integration.

🔹 Developer → Git Repository

 A developer writes code and pushes it to a Git repository (GitLab in this case).
 Git acts as the version control system to manage code changes.

🔹 GitLab CI/CD Pipeline


 GitLab automates the software delivery process using Continuous Integration (CI).
 Security is integrated through automated static code analysis, build verification, and
automated testing.

🔹 Security Checks in CI/CD

 Static Code Analysis: Scans the source code for vulnerabilities.
 Build: Compiles and packages the code.
 Automated Tests: Runs security and functionality tests.
 Build Image: Creates a Docker container image for deployment.

🔹 Image Scanning & Docker Registry

 The Docker container image is pushed to the Docker Registry.
 Image Scanning checks for security vulnerabilities in dependencies, configurations, and
libraries.
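The stages above could be sketched in a .gitlab-ci.yml along these lines; job names, build commands, and the scanner choice are illustrative, not our actual configuration:

```yaml
# Illustrative GitLab CI pipeline mirroring the stages described above.
stages: [analyze, build, test, package, scan]

static-analysis:
  stage: analyze
  script: ./run-static-analysis.sh   # hypothetical wrapper around a SAST tool

build:
  stage: build
  script: ./gradlew assemble         # assumes a Gradle-based project

automated-tests:
  stage: test
  script: ./gradlew check

build-image:
  stage: package
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

image-scan:
  stage: scan
  script: trivy image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA  # example scanner
```

$CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab's predefined variables, so each commit produces a uniquely tagged, scannable image.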

Continuous Deployment Phase (Client Premises)

This phase focuses on secure deployment and monitoring.

🔹 Image Scanning Before Deployment

 Before deployment, the image is pulled from the Docker Registry and re-scanned for
vulnerabilities.

🔹 Kubernetes Deployment

 The application is deployed in a Kubernetes cluster, ensuring scalability and
automation.
 It progresses through three environments:
1. Test – Ensures the application functions correctly.
2. Stage – A pre-production environment for final validation.
3. Production – The live environment where users access the application.
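A minimal sketch of the kind of Kubernetes Deployment manifest involved; all names, namespaces, and the image tag are placeholders:

```yaml
# Illustrative Deployment: three replicas of the application container,
# pulled from the registry image that passed the security scans.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: stage            # promoted across test / stage / production
spec:
  replicas: 3
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.2.3   # placeholder image reference
          ports:
            - containerPort: 8080
```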

Slides 10-11: Monitoring (3 Minutes)

The operations team utilizes Grafana and Prometheus for real-time system monitoring and
performance analysis. Prometheus collects key metrics such as CPU usage, memory
consumption, and API response times, while Grafana visualizes these metrics through
interactive dashboards.
This monitoring setup enhances operational efficiency by enabling administrators to detect
bottlenecks, track resource utilization, and gain insights into system performance trends.
Additionally, automated alerts are triggered when a service reaches predefined threshold levels,
allowing for proactive issue resolution and minimizing downtime. From the Grafana
dashboard, administrators can quickly identify performance anomalies, analyze historical trends,
and optimize system resources to ensure seamless operations.
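As a concrete illustration, such a threshold alert might be defined as a Prometheus alerting rule; the metric name and the 500 ms threshold are examples only:

```yaml
# Illustrative Prometheus alerting rule: fire when a service's 95th-percentile
# API response time stays above 500 ms for five minutes.
groups:
  - name: service-latency
    rules:
      - alert: HighApiLatency
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service)) > 0.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "95th-percentile latency above 500 ms for {{ $labels.service }}"
```

Alerts like this feed the proactive-resolution workflow described above, paging operators before users notice degradation.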

Slide 12: Microservices Architecture (2 Minutes)

The system follows a Microservices Architecture, where different functionalities are broken
down into independent, loosely coupled services. Each service is responsible for a specific task
and communicates with others via APIs.

One of the key advantages of this architecture is that each service can be scaled independently
based on demand, ensuring efficient resource utilization. This allows the system to handle
varying loads without affecting overall performance.

Technologies like Docker and Kubernetes are used for containerization and orchestration,
enabling seamless scaling, fault isolation, and efficient deployment. This enhances agility,
maintainability, and resilience, making the system robust and adaptable.

Slide 14: Technical Architecture (5 Minutes)

(1) Introduction - The Big Picture:

 "This diagram illustrates the high-level technical architecture of our system. It's designed
to give a clear understanding of how the different components interact and support our
business functions."
 "We've organized it into three main sections: The Consumer, the Outer Architecture, and
the Inner Architecture, along with some key supporting elements."

(2) Consumer - The Starting Point:

"At the top, we have the 'Consumer'. This represents any entity that interacts with our system.
This could be our customers using mobile apps or web browsers, or even other systems that need
to communicate with us."

(3) Outer Architecture - The Gatekeepers:

 "Next, we move to the 'Outer Architecture'. Think of this as the entry point and security
layer for our system."
 API Gateway: "All external requests first go through the API Gateway. This acts as a
central hub, managing and routing traffic, ensuring security, and potentially handling
transformations."
 Auth (IAM): "Before any request is processed, it needs to be authenticated. The 'Auth'
component handles this, verifying the identity of the consumer."
 Load Balancer: "Once authenticated, the Load Balancer distributes the traffic across
multiple instances of our services. This ensures high availability and prevents overload."

(4) Inner Architecture - The Core Logic:

 "The 'Inner Architecture' is where the core business logic resides. This is where the actual
processing happens."
 Microservices (Deposit, CIS, Financing, Cash, etc.): "We use a microservices
architecture. Each box represents a separate service responsible for a specific function:
Deposit, CIS (Customer Information System), Financing, and Cash. This allows for
independent development and scaling."
 Events Published/Subscribed: "Services communicate with each other asynchronously
using events. When something happens in one service (e.g., a deposit is made), it
publishes an event. Other services that are interested in that event subscribe to it and take
action accordingly. This promotes loose coupling and scalability."

(5) Supporting Elements

 Database: "At the heart of it all is the Database, where our data is stored persistently. All
the services rely on the database for reading and writing information."
 Messaging Channels: Our system employs asynchronous messaging via Kafka to enable
communication between services. This is a key aspect of our CQRS (Command Query
Responsibility Segregation) architecture. CQRS separates write/update operations
(Commands) from read operations (Queries), allowing us to optimize each independently.
Kafka serves as the central message broker. Both Commands and Queries are published
asynchronously to Kafka topics. This asynchronous approach provides several benefits:
loose coupling between services, independent scalability of command and query
processing, and increased resilience.
 Command Processing: When a Command is published to Kafka, the responsible service
consumes it and performs the following steps:

1. Synchronous Database Update: The service immediately and synchronously executes
the command, writing or updating the necessary data in the database. This ensures data
consistency and immediate reflection of the change.
2. Asynchronous Event Publication: After successfully updating the database, the service
asynchronously publishes an event to Kafka. This event signals the completion of the
command and includes relevant information about the change.
3. Downstream Processing: Other services, such as those handling notifications, auditing,
tracing, or other business logic, subscribe to these events. They consume the events
asynchronously and independently, performing their respective tasks without impacting
the initial command processing. This decoupled approach allows for efficient scaling and
maintenance of these secondary functions.
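The three steps above can be sketched in Python. This is a deliberately minimal in-memory stand-in (a dict for the database, a list for the Kafka topic), not our actual implementation:

```python
# Minimal sketch of the command flow: synchronous DB update, then an
# asynchronous event that downstream subscribers consume independently.
database = {}        # stand-in for the real database
event_topic = []     # stand-in for a Kafka topic
subscribers = []     # downstream services (notifications, auditing, ...)

def handle_command(command):
    # 1. Synchronous database update: apply the change immediately.
    database[command["account"]] = database.get(command["account"], 0) + command["amount"]
    # 2. Asynchronous event publication: signal that the command completed.
    event = {"type": "DepositMade", "account": command["account"], "amount": command["amount"]}
    event_topic.append(event)

def dispatch_events():
    # 3. Downstream processing: subscribers consume events independently,
    # without affecting the already-completed command.
    while event_topic:
        event = event_topic.pop(0)
        for subscriber in subscribers:
            subscriber(event)

audit_log = []
subscribers.append(lambda e: audit_log.append(e))  # e.g. an auditing service

handle_command({"account": "A-1", "amount": 100})
dispatch_events()
print(database["A-1"], len(audit_log))  # 100 1
```

Note how the auditing subscriber never touches `handle_command`: adding or removing downstream consumers requires no change to command processing, which is the loose coupling the slide describes.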

User Activity Logging:


Critically, Kafka facilitates comprehensive user activity logging. All Commands and Queries
passing through Kafka create an auditable trail, providing valuable data for analysis, monitoring,
and security purposes. This centralized logging ensures a complete record of all user interactions
with the system.

 Operational Capabilities: "Finally, we have 'Operational Capabilities' which are crucial for
running and maintaining the system. This includes automation for deployments, monitoring to
track performance and identify issues, and other essential tools."

Slide 14: Deployment Architecture (3 Minutes)

This architecture leverages the power of Kubernetes to deploy and manage applications across a grid of
clusters. The external L7 load balancer acts as the entry point, intelligently routing traffic to the
appropriate services within the clusters. This approach promotes scalability, fault tolerance, and
efficient resource utilization, which is essential for modern, cloud-native applications.

Slide 15: Deployment Architecture (3 Minutes)

This diagram represents a physical deployment architecture, illustrating the physical components and
their interconnections within the infrastructure. It shows how the system is physically laid out across
different environments and how the various tiers interact.

 MN (Management Node/Master Node): In a Kubernetes context, MN-1, MN-2, and MN-3
would represent the control plane or master nodes of the Kubernetes cluster. These nodes are
responsible for managing the cluster, including scheduling containers, maintaining the desired
state, and handling API requests. Having multiple MN nodes provides high availability for the
Kubernetes control plane.

 WN (Worker Node): WN-1, WN-2, and WN-3 would be the worker nodes of the Kubernetes
cluster. These nodes are where the actual application containers run. They receive instructions
from the master nodes and execute the containerized workloads. Again, multiple WN nodes are
used for scaling and resilience.

 Database Clustering (RAC): The RAC (Real Application Clusters) database (DB-1, DB-2) indicates
a specific physical configuration for high availability, where multiple physical database servers
work together.
 Storage (Persistent Volume): The Persistent Volume suggests a connection to a physical
storage system (SAN) where data is stored persistently.

Slide 22: Authentication (1 Minute)

This slide details the system's authentication process. It employs token-based (JWT) authentication,
allowing secure access to protected resources. The system also supports Single Sign-On (SSO) for user
convenience and integrates with standard authentication providers such as LDAP and Active Directory, as
well as its own internal database.
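To make the token-based flow concrete, here is a minimal sketch of issuing and verifying an HMAC-signed JWT using only the Python standard library. In practice a vetted library such as PyJWT would be used; the secret and claims below are placeholders:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # placeholder; real keys come from secure configuration

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict) -> str:
    # JWT = base64url(header).base64url(payload).base64url(signature)
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_token(token):
    # Recompute the signature; reject the token if it does not match.
    header, payload, signature = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = issue_token({"sub": "ziaur06", "role": "admin"})
print(verify_token(token))                              # {'sub': 'ziaur06', 'role': 'admin'}
print(verify_token(token.rsplit(".", 1)[0] + ".AAAA"))  # None (tampered signature)
```

The key property is that the protected resource only needs the shared secret to validate a request; it never has to call back to LDAP, Active Directory, or the internal database once the token has been issued.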
