This project sets up an Amazon EKS (Elastic Kubernetes Service) cluster using Terraform and Terragrunt. The infrastructure is designed to be scalable and cost-efficient, leveraging Karpenter for node auto-scaling, along with a well-structured VPC and IAM policies to ensure security and high availability.
- Terragrunt and Terraform:
  - Ensure that `terragrunt` and `terraform` are installed on your system.
  - Modify `/infra/account.hcl` to match the correct AWS account ID before deploying.
- AWS Credentials:
  - Export valid AWS credentials in your environment:

    ```sh
    export AWS_ACCESS_KEY_ID="your-access-key-id"
    export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
    ```

- kubectl and Helm:
  - Install `kubectl` and `helm` to interact with the Kubernetes cluster.
  - Ensure the AWS CLI is configured to fetch EKS authentication tokens.
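As a sketch, `/infra/account.hcl` typically pins the account-level values that Terragrunt reads; the variable names below are assumptions for illustration, not taken from this repository:

```hcl
# /infra/account.hcl -- hypothetical contents; adjust names to match the repo
locals {
  aws_account_id = "123456789012" # replace with your AWS account ID
  aws_profile    = "production"   # optional: AWS CLI profile to use
}
```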
The project follows a modular approach with the following structure:
```
eks-infra-live/
├── infra/
│   └── production/
│       ├── env.hcl
│       └── us-east-1/
│           ├── region.hcl
│           └── services/
│               ├── service.hcl
│               └── eks/
│                   └── terragrunt.hcl
└── modules/
    └── aws/
        ├── eks/
        ├── vpc/
        └── karpenter/
```
- `infra/production/us-east-1/services/eks/terragrunt.hcl`: Main entry point to deploy the EKS infrastructure.
- `modules/aws/eks`: Contains the Terraform configuration to deploy an Amazon EKS cluster.
- `modules/aws/vpc`: Defines the network infrastructure, including subnets, NAT gateway, and route tables.
- `modules/aws/karpenter`: Deploys Karpenter for node auto-scaling.
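The Terragrunt entry point usually just wires a module into the environment hierarchy. A minimal sketch is shown below; the relative source path and input names are assumptions, not copied from this repository:

```hcl
# infra/production/us-east-1/services/eks/terragrunt.hcl -- illustrative sketch
include "root" {
  # Pull in shared settings (remote state, provider config) from parent folders
  path = find_in_parent_folders()
}

terraform {
  # Point at the local EKS module; adjust the path to match the actual layout
  source = "../../../../../modules/aws/eks"
}

inputs = {
  cluster_name = "production-eks" # hypothetical input
}
```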
Navigate to the EKS service folder and apply Terragrunt:

```sh
cd infra/production/us-east-1/services/eks/
terragrunt apply
```

This command initializes and applies the Terraform configuration defined for the EKS service.
- EKS Cluster: Deployed with a managed control plane.
- VPC and Subnets: Includes private, public, and intra subnets.
- IAM Policies and Roles: Provides required permissions to EKS, Karpenter, and worker nodes.
- EKS Add-ons:
- CoreDNS
- Kube-Proxy
- VPC CNI
- EKS Pod Identity Agent
- Karpenter is used instead of traditional Managed Node Groups for dynamic scaling.
- Instance types are optimized based on workload demand.
- Uses Spot and On-Demand instances to optimize cost.
- Deployed using Helm with the following configuration:

  ```yaml
  nodeSelector:
    karpenter.sh/controller: 'true'
  dnsPolicy: Default
  settings:
    clusterName: ${module.eks.cluster_name}
    clusterEndpoint: ${module.eks.cluster_endpoint}
    interruptionQueue: ${module.karpenter.queue_name}
  webhook:
    enabled: false
  ```
- IAM roles and policies grant Karpenter permissions to scale EC2 nodes dynamically.
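To illustrate the Spot/On-Demand mix, a Karpenter `NodePool` can constrain capacity types as sketched below; this manifest is an assumption for illustration, not copied from the module:

```yaml
# Hypothetical NodePool (Karpenter v1 API) -- adjust to match the module's config
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"] # allow Spot with On-Demand fallback
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
```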
- Amazon CloudWatch:
  - Logs are stored under `/aws/eks/<cluster-name>/cluster`.
  - Logs are retained for 90 days.
- Security Groups:
  - Configured to allow secure communication between cluster components.
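The 90-day retention above corresponds to a log-group setting along these lines (a sketch assuming the module manages the log group itself; `var.cluster_name` is an assumed variable name):

```hcl
# Hypothetical log-group definition; the EKS module may create this internally
resource "aws_cloudwatch_log_group" "eks_cluster" {
  name              = "/aws/eks/${var.cluster_name}/cluster"
  retention_in_days = 90
}
```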
- Verify Cluster Access:

  ```sh
  aws eks --region us-east-1 update-kubeconfig --name <cluster-name>
  kubectl get nodes
  ```

- Check Karpenter Logs:

  ```sh
  kubectl logs -n kube-system deployment/karpenter
  ```

- Deploy a Sample Workload:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
  ```

  ```sh
  kubectl apply -f deployment.yaml
  ```
- The project uses Terragrunt to simplify multi-environment deployments.
- Karpenter replaces traditional AWS Auto Scaling Groups, making scaling more efficient.
- Ensure IAM permissions are correctly configured for Karpenter to manage EC2 instances.
- Change `/infra/account.hcl` to match the correct AWS account before applying.
To destroy the infrastructure, run:

```sh
terragrunt destroy
```

This will remove the EKS cluster, VPC, and all associated resources.