TL;DR
Week 8 focused entirely on Jenkins and CI/CD automation, building on Week 7's Docker containerization to create production-ready pipelines that automate everything from code commit to production deployment. Key achievements include multi-branch pipelines, Docker integration, AWS deployments, and enterprise-grade automation.
The CI/CD Revolution
After mastering Docker containerization in Week 7, Week 8 was about answering the crucial question: "How do we automate the entire journey from code to production?" The answer lies in Continuous Integration and Continuous Deployment with Jenkins.
Why CI/CD Matters
Before automation:

```bash
# Manual deployment process (Week 7)
git pull origin main
docker build -t myapp:v1.0 .
docker tag myapp:v1.0 registry.com/myapp:v1.0
docker push registry.com/myapp:v1.0
ssh production-server
docker pull registry.com/myapp:v1.0
docker stop myapp-container
docker run -d --name myapp-container registry.com/myapp:v1.0
```
After Jenkins automation:

```bash
# Just push code - Jenkins handles everything else!
git push origin main
# Automatic: build → test → deploy → monitor
```
Jenkins Fundamentals
Architecture Deep Dive
Jenkins operates on a master-agent architecture:
```text
+-----------------+     +-----------------+     +-----------------+
|   Jenkins UI    |     | Jenkins Master  |     |  Build Agents   |
|  (Web Portal)   |---->|  (Controller)   |---->|   (Executors)   |
+-----------------+     +-----------------+     +-----------------+
                                |
                                v
                        +-----------------+
                        |  Plugin System  |
                        +-----------------+
```
Installation Options Explored
- Traditional Server Installation:
```bash
# Ubuntu/Debian installation
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt update && sudo apt install jenkins
```
- Docker Deployment (Perfect Week 7 Integration):
```yaml
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins-master
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
volumes:
  jenkins_home:
```
Pipeline Development Journey
From Freestyle to Pipeline as Code
- Freestyle Jobs: GUI-based, limited flexibility
- Pipeline Jobs: code-based, version-controlled, and far more flexible
Basic Pipeline Structure
```groovy
pipeline {
    agent any
    tools {
        nodejs "nodejs-16"
        dockerTool "docker-latest"
    }
    environment {
        DOCKER_REGISTRY = credentials('docker-registry-url')
        APP_NAME = 'my-nodejs-app'
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
                sh 'git log --oneline -5'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh 'npm ci'
                sh 'npm audit --audit-level high'
            }
        }
        stage('Run Tests') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'npm run test:unit'
                        junit 'test-results.xml'
                    }
                }
                stage('Lint Code') {
                    steps {
                        sh 'npm run lint'
                        publishHTML([
                            allowMissing: false,
                            alwaysLinkToLastBuild: true,
                            keepAll: true,
                            reportDir: 'lint-results',
                            reportFiles: 'index.html',
                            reportName: 'ESLint Report'
                        ])
                    }
                }
            }
        }
        stage('Build Application') {
            steps {
                sh 'npm run build'
                archiveArtifacts artifacts: 'dist/**/*', fingerprint: true
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        success {
            echo 'Pipeline completed successfully!'
        }
        failure {
            emailext(
                subject: "Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: """Build failed.
Project: ${env.JOB_NAME}
Build Number: ${env.BUILD_NUMBER}
Build URL: ${env.BUILD_URL}
Please check the console output for more details.""",
                to: "${env.CHANGE_AUTHOR_EMAIL}"
            )
        }
    }
}
```
Docker Integration Mastery
Building on Week 7's container expertise, Jenkins + Docker integration was seamless:
Docker-in-Docker Pipeline
```groovy
pipeline {
    agent {
        docker {
            image 'docker:dind'
            args '--privileged -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    environment {
        DOCKER_BUILDKIT = "1"
        COMPOSE_DOCKER_CLI_BUILD = "1"
    }
    stages {
        stage('Build Docker Image') {
            steps {
                script {
                    def image = docker.build(
                        "${env.DOCKER_REGISTRY}/${env.APP_NAME}:${env.BUILD_NUMBER}",
                        "--build-arg BUILD_NUMBER=${env.BUILD_NUMBER} ."
                    )
                    // Run security scan
                    sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${image.id}"
                }
            }
        }
        stage('Push to Registry') {
            steps {
                script {
                    docker.withRegistry('https://your-registry.com', 'registry-credentials') {
                        def image = docker.image("${env.DOCKER_REGISTRY}/${env.APP_NAME}:${env.BUILD_NUMBER}")
                        image.push()
                        image.push("latest")
                        // Tag for environment
                        if (env.BRANCH_NAME == 'main') {
                            image.push("production")
                        } else {
                            image.push("staging")
                        }
                    }
                }
            }
        }
    }
}
```
Multi-Stage Docker Builds in Pipeline
Leveraging Week 7's optimized Dockerfiles:
```dockerfile
# Multi-stage build optimized for Jenkins
FROM node:16-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm run test

FROM node:16-alpine AS production
RUN addgroup -g 1001 -S nodejs && adduser -S nextjs -u 1001
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
USER nextjs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["npm", "start"]
```
Advanced Pipeline Features
Multi-Branch Pipeline Strategy
Perfect for modern Git workflows:
```groovy
pipeline {
    agent any
    stages {
        stage('Feature Branch Build') {
            when {
                not { anyOf { branch 'main'; branch 'develop' } }
            }
            steps {
                sh 'echo "Building feature branch: ${BRANCH_NAME}"'
                sh 'npm run build'
                sh 'docker build -t ${APP_NAME}:${BRANCH_NAME}-${BUILD_NUMBER} .'
            }
        }
        stage('Deploy to Staging') {
            when { branch 'develop' }
            steps {
                deployToEnvironment('staging')
            }
        }
        stage('Deploy to Production') {
            when { branch 'main' }
            steps {
                input message: 'Deploy to production?', ok: 'Deploy',
                      submitterParameter: 'DEPLOYER'
                deployToEnvironment('production')
            }
        }
    }
}

def deployToEnvironment(environment) {
    sh """
        docker-compose -f docker-compose.${environment}.yml pull
        docker-compose -f docker-compose.${environment}.yml up -d
        sleep 30
        curl -f http://${environment}.myapp.com/health
    """
}
```
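The `sleep 30` followed by a single `curl` in `deployToEnvironment` is fragile: a slow container start fails the build even though the deploy succeeded. A retry loop is more robust. Here is a minimal sketch; the `wait_for_health` helper and its defaults are my own, not part of the original pipeline:

```shell
#!/bin/sh
# Poll a health endpoint until it responds or the attempt budget runs out.
# Usage: wait_for_health <url> [attempts] [delay_seconds]
wait_for_health() {
    url=$1
    attempts=${2:-10}
    delay=${3:-5}
    i=1
    while [ "$i" -le "$attempts" ]; do
        if curl -fsS "$url" > /dev/null 2>&1; then
            echo "healthy after $i attempt(s)"
            return 0
        fi
        sleep "$delay"
        i=$((i + 1))
    done
    echo "unhealthy after $attempts attempts" >&2
    return 1
}
```

Inside the `sh """..."""` block above, `sleep 30` and the bare `curl` could then be replaced with a single `wait_for_health "http://${environment}.myapp.com/health" 10 5` call.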
Shared Libraries Implementation
Created reusable pipeline components:
vars/buildAndPushImage.groovy:
```groovy
def call(Map config) {
    def image = docker.build("${config.registry}/${config.appName}:${config.tag}")
    // Run tests in container
    image.inside {
        sh config.testCommand ?: 'npm test'
    }
    // Security scan
    sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${image.id}"
    // Push to registry
    docker.withRegistry("https://${config.registry}", config.credentials) {
        image.push()
        if (config.pushLatest) {
            image.push("latest")
        }
    }
    return image
}
```
Usage in Jenkinsfile:
```groovy
stage('Build & Push') {
    steps {
        buildAndPushImage([
            registry: 'my-registry.com',
            appName: 'my-app',
            tag: env.BUILD_NUMBER,
            credentials: 'registry-creds',
            pushLatest: env.BRANCH_NAME == 'main',
            testCommand: 'npm run test:integration'
        ])
    }
}
```
Credential Management Patterns
Different Credential Types:
```groovy
pipeline {
    agent any
    stages {
        stage('Deploy with Multiple Secrets') {
            steps {
                withCredentials([
                    usernamePassword(credentialsId: 'dockerhub-creds',
                                     usernameVariable: 'DOCKER_USER',
                                     passwordVariable: 'DOCKER_PASS'),
                    string(credentialsId: 'api-key', variable: 'API_KEY'),
                    file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG'),
                    sshUserPrivateKey(credentialsId: 'ssh-key',
                                      keyFileVariable: 'SSH_KEY',
                                      usernameVariable: 'SSH_USER')
                ]) {
                    // --password-stdin keeps the secret out of process listings
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh 'curl -H "Authorization: Bearer $API_KEY" api-endpoint'
                    sh 'kubectl --kubeconfig=$KUBECONFIG get pods'
                    sh 'ssh -i $SSH_KEY $SSH_USER@server "deploy-script.sh"'
                }
            }
        }
    }
}
```
Real-World Project: Complete E-commerce CI/CD
Built an end-to-end pipeline for a microservices e-commerce application:
Project Architecture
```text
+--------------+   +--------------+   +--------------+
|   Frontend   |   |   Backend    |   |   Database   |
|   (React)    |   |  (Node.js)   |   |  (MongoDB)   |
+--------------+   +--------------+   +--------------+
        |                  |                  |
        +------------------+------------------+
                           |
                 +-------------------+
                 | Jenkins Pipeline  |
                 +-------------------+
                           |
        +------------------+------------------+
        |                  |                  |
+--------------+   +--------------+   +--------------+
|   Staging    |   |   Testing    |   |  Production  |
|  Environment |   |  Environment |   |  Environment |
+--------------+   +--------------+   +--------------+
```
Complete Pipeline Implementation
```groovy
@Library('shared-library') _

pipeline {
    agent any
    parameters {
        choice(
            name: 'DEPLOY_ENV',
            choices: ['staging', 'production'],
            description: 'Environment to deploy to'
        )
        booleanParam(
            name: 'RUN_PERFORMANCE_TESTS',
            defaultValue: false,
            description: 'Run performance tests after deployment'
        )
    }
    environment {
        APP_NAME = 'ecommerce-app'
        DOCKER_REGISTRY = '123456789012.dkr.ecr.us-east-1.amazonaws.com'
        AWS_REGION = 'us-east-1'
        SLACK_CHANNEL = '#deployments'
    }
    stages {
        stage('Preparation') {
            steps {
                cleanWs()
                checkout scm
                script {
                    env.GIT_COMMIT_SHORT = sh(
                        script: 'git rev-parse --short HEAD',
                        returnStdout: true
                    ).trim()
                    env.BUILD_TAG = "${env.BUILD_NUMBER}-${env.GIT_COMMIT_SHORT}"
                }
            }
        }
        stage('Quality Gates') {
            parallel {
                stage('Frontend Tests') {
                    steps {
                        dir('frontend') {
                            sh 'npm ci'
                            sh 'npm run test:coverage'
                            sh 'npm run lint'
                            publishHTML([
                                allowMissing: false,
                                alwaysLinkToLastBuild: true,
                                keepAll: true,
                                reportDir: 'coverage',
                                reportFiles: 'index.html',
                                reportName: 'Frontend Coverage Report'
                            ])
                        }
                    }
                }
                stage('Backend Tests') {
                    steps {
                        dir('backend') {
                            sh 'npm ci'
                            sh 'npm run test:unit'
                            sh 'npm run test:integration'
                            sh 'npm audit --audit-level moderate'
                            junit 'test-results.xml'
                        }
                    }
                }
                stage('Security Scans') {
                    steps {
                        sh 'docker run --rm -v $(pwd):/app securecodewarrior/semgrep --config=auto /app'
                        sh 'npm audit --audit-level high'
                    }
                }
            }
        }
        stage('Build Images') {
            parallel {
                stage('Frontend Image') {
                    steps {
                        script {
                            buildAndPushImage([
                                context: './frontend',
                                appName: "${APP_NAME}-frontend",
                                registry: env.DOCKER_REGISTRY,
                                tag: env.BUILD_TAG,
                                credentials: 'ecr:us-east-1:aws-credentials',
                                additionalTags: [env.DEPLOY_ENV]
                            ])
                        }
                    }
                }
                stage('Backend Image') {
                    steps {
                        script {
                            buildAndPushImage([
                                context: './backend',
                                appName: "${APP_NAME}-backend",
                                registry: env.DOCKER_REGISTRY,
                                tag: env.BUILD_TAG,
                                credentials: 'ecr:us-east-1:aws-credentials',
                                additionalTags: [env.DEPLOY_ENV]
                            ])
                        }
                    }
                }
            }
        }
        stage('Database Migration') {
            when {
                anyOf {
                    branch 'main'
                    expression { params.DEPLOY_ENV == 'production' }
                }
            }
            steps {
                script {
                    sh """
                        docker run --rm \
                            --network host \
                            -e DATABASE_URL=\${${params.DEPLOY_ENV.toUpperCase()}_DATABASE_URL} \
                            ${DOCKER_REGISTRY}/${APP_NAME}-backend:${BUILD_TAG} \
                            npm run migrate
                    """
                }
            }
        }
        stage('Deploy to Environment') {
            steps {
                script {
                    deployToECS([
                        cluster: "${params.DEPLOY_ENV}-cluster",
                        serviceName: "${APP_NAME}-service",
                        taskDefinition: "${APP_NAME}-${params.DEPLOY_ENV}",
                        imageTag: env.BUILD_TAG,
                        region: env.AWS_REGION
                    ])
                }
            }
        }
        stage('Health Checks') {
            steps {
                script {
                    retry(5) {
                        sleep 30
                        sh """
                            curl -f https://${params.DEPLOY_ENV}.${APP_NAME}.com/health
                            curl -f https://${params.DEPLOY_ENV}.${APP_NAME}.com/api/health
                        """
                    }
                }
            }
        }
        stage('Performance Tests') {
            when {
                expression { params.RUN_PERFORMANCE_TESTS }
            }
            steps {
                sh """
                    docker run --rm \
                        -e TARGET_URL=https://${params.DEPLOY_ENV}.${APP_NAME}.com \
                        loadimpact/k6 run - < performance-tests/load-test.js
                """
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        success {
            slackSend(
                channel: env.SLACK_CHANNEL,
                color: 'good',
                message: """✅ Deployment Successful!
Project: ${env.JOB_NAME}
Environment: ${params.DEPLOY_ENV}
Version: ${env.BUILD_TAG}
Build: ${env.BUILD_URL}"""
            )
        }
        failure {
            slackSend(
                channel: env.SLACK_CHANNEL,
                color: 'danger',
                message: """❌ Deployment Failed!
Project: ${env.JOB_NAME}
Environment: ${params.DEPLOY_ENV}
Version: ${env.BUILD_TAG}
Build: ${env.BUILD_URL}
Please check the build logs for details."""
            )
            // Automatic rollback for production failures
            script {
                if (params.DEPLOY_ENV == 'production') {
                    rollbackDeployment([
                        cluster: 'production-cluster',
                        serviceName: "${APP_NAME}-service"
                    ])
                }
            }
        }
    }
}
```
Performance and Optimization
Build Time Optimization Results
| Optimization Technique | Before | After  | Improvement |
|------------------------|--------|--------|-------------|
| Docker Layer Caching   | 15 min | 5 min  | 67%         |
| Parallel Stages        | 20 min | 8 min  | 60%         |
| Build Agents           | 25 min | 10 min | 60%         |
| Optimized Dependencies | 12 min | 4 min  | 67%         |
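Most of the Docker layer caching win in the table comes from ordering Dockerfile instructions so the heavy dependency layer is rebuilt only when the manifests change. A minimal illustration of the pattern (not the project's actual Dockerfile):

```dockerfile
FROM node:16-alpine
WORKDIR /app
# Copy only the manifests first: this layer and the npm ci layer below
# stay cached as long as package*.json is unchanged
COPY package*.json ./
RUN npm ci
# Source edits invalidate only the layers from here down
COPY . .
RUN npm run build
```

With this ordering, a typical commit that touches only application code skips the `npm ci` step entirely on a warm builder.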
Resource Management
```groovy
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: builder
    image: node:16-alpine
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1000m"
    command:
    - cat
    tty: true
"""
        }
    }
    // ... rest of pipeline
}
```
Integration Ecosystem
Week 8 perfectly integrated all previous modules:
Git Integration (Week 3)
- Webhooks: automatic pipeline triggers
- Branch Strategies: environment-based deployments
- Version Tagging: automated release management
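Webhook-driven builds can also be declared in the Jenkinsfile itself. A minimal sketch of a `triggers` block; `pollSCM` here is a fallback for when the Git host's webhook cannot reach Jenkins, and the stage contents are illustrative:

```groovy
pipeline {
    agent any
    triggers {
        // Fallback polling every ~15 minutes in case the webhook is unreachable;
        // with a working webhook this rarely fires
        pollSCM('H/15 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm ci && npm run build'
            }
        }
    }
}
```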
Build Tools (Week 4)
- Maven/Gradle: automated artifact creation
- Dependency Management: vulnerability scanning
- Test Execution: automated quality gates

Cloud Infrastructure (Week 5)
- Server Provisioning: dynamic environment creation
- Deployment Automation: cloud-native deployments
- Monitoring Integration: health checks and alerts

Nexus Repository (Week 6)
- Artifact Publishing: automated releases
- Docker Registry: container image management
- Version Control: artifact lifecycle management

Docker Containers (Week 7)
- Image Building: automated containerization
- Registry Operations: push/pull automation
- Container Deployment: orchestrated releases
Monitoring and Observability
Pipeline Monitoring
```groovy
stage('Deploy with Monitoring') {
    steps {
        script {
            // Deploy
            deployApplication()
            // Record the deployment in Prometheus via the Pushgateway
            // (--data-binary preserves the newline the Pushgateway requires)
            sh """
                echo 'deployment_status{env="${params.DEPLOY_ENV}",version="${env.BUILD_TAG}"} 1' | \
                curl --data-binary @- prometheus-pushgateway:9091/metrics/job/deployment
            """
            // Health check with metrics
            retry(10) {
                sleep 15
                def response = sh(
                    script: "curl -s -o /dev/null -w '%{http_code}' https://${params.DEPLOY_ENV}.app.com/health",
                    returnStdout: true
                ).trim()
                if (response != '200') {
                    error("Health check failed with status: ${response}")
                }
            }
        }
    }
}
```
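The Pushgateway expects metrics in the Prometheus text exposition format: one sample per line, labels in braces, and a newline-terminated body. A small Python sketch of the payload the `curl` call above sends (the `deployment_metric` helper name is mine, not part of the pipeline):

```python
def deployment_metric(env: str, version: str, status: int = 1) -> str:
    """Build one Prometheus exposition-format sample.

    The trailing newline matters: the Pushgateway rejects bodies
    whose last line is not newline-terminated.
    """
    return f'deployment_status{{env="{env}",version="{version}"}} {status}\n'

print(deployment_metric("staging", "42-a1b2c3d"), end="")
# → deployment_status{env="staging",version="42-a1b2c3d"} 1
```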
Security Best Practices
Security-First Pipeline Design
```groovy
pipeline {
    agent any
    stages {
        stage('Security Scans') {
            parallel {
                stage('SAST Scan') {
                    steps {
                        sh 'docker run --rm -v $(pwd):/app sonarqube-scanner'
                    }
                }
                stage('Dependency Check') {
                    steps {
                        sh 'npm audit --audit-level moderate'
                        sh 'docker run --rm -v $(pwd):/app owasp/dependency-check --scan /app'
                    }
                }
                stage('Container Scan') {
                    steps {
                        script {
                            def image = docker.build("temp:${env.BUILD_NUMBER}")
                            sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${image.id}"
                        }
                    }
                }
            }
        }
        stage('Secrets Detection') {
            steps {
                sh 'docker run --rm -v $(pwd):/app zricethezav/gitleaks detect --source=/app'
            }
        }
    }
}
```
Challenges and Solutions
Challenge 1: Build Performance
Problem: long build times affecting developer productivity
Solution:
- Implemented parallel execution
- Optimized Docker builds with multi-stage patterns
- Used Jenkins build agents for distribution
- Implemented intelligent caching strategies

Challenge 2: Environment Consistency
Problem: "works on my machine" issues across environments
Solution:
- Containerized all applications (Week 7 foundation)
- Environment-specific configurations
- Infrastructure as Code practices
- Automated environment provisioning

Challenge 3: Security Integration
Problem: security checks treated as afterthoughts
Solution:
- Shift-left security testing
- Automated vulnerability scanning
- Policy-as-code implementation
- Compliance automation
Key Metrics and Results
Deployment Success Metrics
Before Jenkins CI/CD:
- Average deployment time: 3-4 hours
- Success rate: 70%
- Rollback time: 2-3 hours
- Manual interventions: 15+ per deployment

After Jenkins CI/CD:
- Average deployment time: 15 minutes
- Success rate: 95%
- Rollback time: 5 minutes
- Manual interventions: 1-2 per deployment
Developer Productivity Impact
| Metric             | Before   | After          | Improvement |
|--------------------|----------|----------------|-------------|
| Time to Production | 2-3 days | 30 minutes     | 95%         |
| Bug Detection Time | Days     | Minutes        | 99%         |
| Feedback Loop      | Hours    | Minutes        | 95%         |
| Deploy Frequency   | Weekly   | Multiple daily | 500%        |
Tools and Technologies Mastered
Core Jenkins:
- Jenkins Pipeline (Declarative and Scripted)
- Multi-branch pipelines
- Shared libraries
- Blue Ocean UI
- Jenkins Configuration as Code (JCasC)
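JCasC replaces click-through setup with a versioned YAML file that Jenkins reads on startup. A minimal sketch of a `jenkins.yaml` (all values are illustrative; the file path is supplied via the `CASC_JENKINS_CONFIG` environment variable, and `${...}` placeholders are resolved from secrets, not stored in Git):

```yaml
jenkins:
  systemMessage: "Configured by JCasC - do not edit in the UI"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: "${ADMIN_PASSWORD}"
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: dockerhub-creds
              username: ci-bot
              password: "${DOCKERHUB_TOKEN}"
```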
Integration Tools:
- Docker integration
- AWS CLI and SDKs
- Kubernetes CLI
- Git webhooks
- Slack notifications

Quality & Security:
- SonarQube integration
- OWASP Dependency-Check
- Trivy container scanning
- ESLint and testing frameworks
What's Next: Week 9 Preview
Week 9 focuses on AWS Services - taking our automated pipelines to cloud scale! The Jenkins foundation will integrate perfectly with:
AWS Native Services:
- AWS CodePipeline and CodeBuild
- ECS and EKS deployments
- CloudFormation integration
- Auto Scaling and Load Balancing

Advanced Topics:
- Multi-region deployments
- Blue-green deployment strategies
- Canary releases with traffic splitting
- Disaster recovery automation
Key Takeaways
1. CI/CD is a cultural transformation: it's not just about tools; it's about changing how teams work
2. Automation reduces risk: consistent, repeatable processes eliminate human error
3. Integration amplifies value: Jenkins works best when integrated with other DevOps tools
4. Security must be built in: security should be part of every pipeline stage
5. Monitoring enables continuous improvement: you can't improve what you don't measure
Connect with My Journey
Following my DevOps transformation? Let's connect and share experiences!
LinkedIn: https://www.linkedin.com/in/iamdevdave/
Hashnode: https://jenkinsindevops.hashnode.dev/week-8-jenkins-cicd-mastery-automating-the-containerized-world
Week 8 has been transformative - from manual deployments to fully automated CI/CD pipelines. The combination of Week 7's containerization with Week 8's automation creates a powerful foundation for modern software delivery.
Next week, we're scaling this automation in the AWS cloud. The journey from local Docker containers to enterprise-grade cloud deployments continues!