presentation-notes
Overview:
Cloud-native is a modern approach to building, deploying, and managing applications in
cloud environments. Our system is designed around this architecture to take full advantage of
the cloud's capabilities.
Key Components:
Benefits:
All these processes are fully automated—from development and testing to deployment and
hardware provisioning—ensuring a seamless and efficient workflow.
Overview:
Containers offer a lightweight and portable solution for packaging and deploying applications
and services. They ensure that applications behave the same way across different environments,
such as development, staging, and production. This consistency is achieved by bundling the
application along with its dependencies, such as runtime environments and system libraries, into
a single unit.
Without containers, inconsistent behavior can arise from differences such as:
Operating Systems (OS): Different team environments might use different versions or
configurations of the OS.
Runtime Environments (e.g., JVM): Different versions or configurations of the Java
Virtual Machine (JVM) or other runtime tools can lead to inconsistent behavior.
Benefits:
Consistency: The application behaves exactly the same in development, testing, staging,
and production environments.
Portability: Containers can run on any machine that supports containerization, whether
it's a developer's laptop, a test server, or a cloud platform.
Isolation: Each container is isolated from others, ensuring that dependencies and
configurations do not conflict.
By using containers, we can ensure a smooth, reliable, and consistent deployment pipeline from
development to production.
DevSecOps (Development, Security, and Operations) is a framework that integrates security into all
phases of the software development lifecycle (SDLC). Unlike traditional development models where
security is addressed at the end, DevSecOps embeds security from the start, ensuring that applications
are built, tested, and deployed with security best practices applied throughout.
Development Phase
This phase focuses on code development, version control, and security integration.
A developer writes code and pushes it to a Git repository (GitLab in this case).
Git acts as the version control system to manage code changes.
Before deployment, the image is pulled from the Docker Registry and re-scanned for
vulnerabilities.
🔹 Kubernetes Deployment
The operations team utilizes Grafana and Prometheus for real-time system monitoring and
performance analysis. Prometheus collects key metrics such as CPU usage, memory
consumption, and API response times, while Grafana visualizes these metrics through
interactive dashboards.
This monitoring setup enhances operational efficiency by enabling administrators to detect
bottlenecks, track resource utilization, and gain insights into system performance trends.
Additionally, automated alerts are triggered when a service reaches predefined threshold levels,
allowing for proactive issue resolution and minimizing downtime. From the Grafana
dashboard, administrators can quickly identify performance anomalies, analyze historical trends,
and optimize system resources to ensure seamless operations.
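As an illustration of how a service feeds this monitoring setup, the sketch below shows a Java service exposing request and latency metrics for Prometheus to scrape, using the Prometheus Java simpleclient. The metric names, label, and port are assumptions for the example, not our actual configuration.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.Histogram;
import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

public class MetricsExample {
    // Counter for API requests, labelled by endpoint (hypothetical metric name)
    static final Counter requests = Counter.build()
            .name("api_requests_total")
            .help("Total API requests.")
            .labelNames("endpoint")
            .register();

    // Histogram for API response times in seconds (hypothetical metric name)
    static final Histogram latency = Histogram.build()
            .name("api_response_seconds")
            .help("API response time in seconds.")
            .register();

    public static void main(String[] args) throws Exception {
        DefaultExports.initialize();               // standard JVM, CPU, and memory metrics
        HTTPServer server = new HTTPServer(9091);  // /metrics endpoint for Prometheus to scrape

        // Simulate handling one request and recording its duration
        Histogram.Timer timer = latency.startTimer();
        try {
            requests.labels("/deposit").inc();
            Thread.sleep(50); // placeholder for real work
        } finally {
            timer.observeDuration();
        }
    }
}
```

Grafana dashboards would then chart series such as api_requests_total and api_response_seconds from the Prometheus data source, and alert rules would fire when they cross the predefined thresholds.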
The system follows a Microservices Architecture, where different functionalities are broken
down into independent, loosely coupled services. Each service is responsible for a specific task
and communicates with others via APIs.
One of the key advantages of this architecture is that each service can be scaled independently
based on demand, ensuring efficient resource utilization. This allows the system to handle
varying loads without affecting overall performance.
Technologies like Docker and Kubernetes are used for containerization and orchestration,
enabling seamless scaling, fault isolation, and efficient deployment. This enhances agility,
maintainability, and resilience, making the system robust and adaptable.
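To make the idea concrete, here is a minimal sketch of what one independently deployable service could look like, using only the JDK's built-in HTTP server. The service name, port, and endpoint are hypothetical; a real service would carry its own business logic and run as a container replica behind the load balancer.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// One small, independently deployable service (e.g. a hypothetical "deposit" service).
// It exposes a narrow HTTP API that the API Gateway or other services can call.
public class DepositService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // A single endpoint; a real service would delegate to its business logic.
        server.createContext("/deposits/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        // Scaling is done by running more replicas of this container, not by growing this process.
        server.start();
    }
}
```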
"This diagram illustrates the high-level technical architecture of our system. It's designed
to give a clear understanding of how the different components interact and support our
business functions."
"We've organized it into three main sections: The Consumer, the Outer Architecture, and
the Inner Architecture, along with some key supporting elements."
"At the top, we have the 'Consumer'. This represents any entity that interacts with our system.
This could be our customers using mobile apps or web browsers, or even other systems that need
to communicate with us."
"Next, we move to the 'Outer Architecture'. Think of this as the entry point and security
layer for our system."
API Gateway: "All external requests first go through the API Gateway. This acts as a
central hub, managing and routing traffic, ensuring security, and potentially handling
transformations."
Auth (IAM): "Before any request is processed, it needs to be authenticated. The 'Auth'
component handles this, verifying the identity of the consumer."
Load Balancer: "Once authenticated, the Load Balancer distributes the traffic across
multiple instances of our services. This ensures high availability and prevents overload."
"The 'Inner Architecture' is where the core business logic resides. This is where the actual
processing happens."
Microservices (Deposit, CIS, Financing, Cash etc.): "We use a microservices
architecture. Each box represents a separate service responsible for a specific function:
Deposit, CIS (Customer Information System), Financing, and Cash. This allows for
independent development and scaling."
Events Published/Subscribed: "Services communicate with each other asynchronously
using events. When something happens in one service (e.g., a deposit is made), it
publishes an event. Other services that are interested in that event subscribe to it and take
action accordingly. This promotes loose coupling and scalability."
Database: "At the heart of it all is the Database, where our data is stored persistently. All
the services rely on the database for reading and writing information."
Messaging Channels: Our system employs asynchronous messaging via Kafka to enable
communication between services. This is a key aspect of our CQRS (Command Query
Responsibility Segregation) architecture. CQRS separates write/update operations
(Commands) from read operations (Queries), allowing us to optimize each independently.
Kafka serves as the central message broker. Both Commands and Queries are published
asynchronously to Kafka topics. This asynchronous approach provides several benefits:
loose coupling between services, independent scalability of command and query
processing, and increased resilience.
Command Processing: When a Command is published to Kafka, the responsible service
consumes it, applies the change to its own data, and publishes an event so that other
interested services, including the query side, are updated accordingly.
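A minimal sketch of this command flow with the standard Apache Kafka Java client is shown below. The topic name, consumer group, and broker address are assumptions for illustration only; the query side follows the same pattern by subscribing to the events the command side publishes.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CommandMessaging {

    // Command side: publish a command asynchronously to a Kafka topic
    // ("deposit-commands" is a hypothetical topic name).
    static void publishCommand(String accountId, String payload) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("deposit-commands", accountId, payload));
        }
    }

    // Owning service: consume commands, apply them to the write model,
    // then publish a resulting event for query-side consumers.
    static void consumeCommands() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "deposit-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("deposit-commands"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // Apply the command to the service's data here, then publish an event.
                System.out.printf("command for %s: %s%n", record.key(), record.value());
            }
        }
    }
}
```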
Operational Capabilities: "Finally, we have 'Operational Capabilities', which are crucial for
running and maintaining the system. This includes automation for deployments, monitoring to
track performance and identify issues, and other essential tools."
WN (Worker Node): WN-1, WN-2, and WN-3 would be the worker nodes of the Kubernetes
cluster. These nodes are where the actual application containers run. They receive instructions
from the master nodes and execute the containerized workloads. Again, multiple WN nodes are
used for scaling and resilience.
Database Clustering (RAC): The RAC database (DB-1, DB-2) indicates a specific physical
configuration for high availability, where multiple physical database servers work together.
Storage (Persistent Volume): The Persistent Volume indicates a connection to a physical
storage system (SAN) where data is stored persistently.
Slide 22: Authentication (1 minute)
This slide details the system's authentication process. It employs token-based (JWT) authentication,
allowing secure access to protected resources. The system also supports Single Sign-On (SSO) for user
convenience and integrates with standard authentication providers like LDAP and Active Directory, as
well as its own internal database.
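For illustration, the sketch below verifies an incoming JWT in Java using the jjwt library; the library choice, the shared HMAC secret, and the claim read are assumptions for the example. In the SSO, LDAP, and Active Directory setup described above, verification would more typically use the identity provider's public signing key rather than a shared secret.

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

import java.nio.charset.StandardCharsets;

public class TokenVerifier {

    // Placeholder HMAC secret for the sketch; a real deployment would load key
    // material from configuration or from the identity provider.
    private static final byte[] SECRET =
            "change-me-this-is-only-a-demo-secret!!".getBytes(StandardCharsets.UTF_8);

    // Returns the subject (user id) if the token is valid, otherwise throws JwtException.
    static String verify(String token) {
        Jws<Claims> jws = Jwts.parserBuilder()
                .setSigningKey(Keys.hmacShaKeyFor(SECRET))
                .build()
                .parseClaimsJws(token);   // checks signature and expiry
        return jws.getBody().getSubject();
    }

    public static void main(String[] args) {
        try {
            System.out.println("authenticated user: " + verify(args[0]));
        } catch (JwtException e) {
            System.out.println("rejected token: " + e.getMessage());
        }
    }
}
```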