How to Set Up and Deploy the Application Load Balancer Controller
Last Updated: 01 Apr, 2025
If you're running applications on Kubernetes using Amazon EKS (Elastic Kubernetes Service), you need a way to efficiently distribute traffic across your services. The AWS Application Load Balancer Controller makes this process much easier by automatically managing the distribution of traffic to your applications, helping you scale seamlessly and ensure high availability.
This guide will walk you through setting up and deploying the AWS Load Balancer Controller (formerly known as the ALB Ingress Controller) on your Kubernetes cluster. Whether you're new to Kubernetes or looking to optimize your cloud setup, this article will help you get started quickly and effectively.
What is the AWS Application Load Balancer Controller?
The AWS ALB Controller is a tool that helps Kubernetes clusters automatically manage AWS Application Load Balancers (ALBs). It acts as an Ingress Controller, meaning it helps route external traffic into your Kubernetes cluster based on defined rules.
Why is this important? When you have multiple services running in Kubernetes, you need a way to efficiently route traffic between them. The AWS ALB Controller automates that by configuring the ALB for you, saving time and ensuring your traffic flows smoothly.
Step-by-Step Guide to Setting Up and Deploying the Application Load Balancer Controller in Kubernetes
Setting up an Application Load Balancer (ALB) in Kubernetes is key to distributing traffic evenly across your services, improving performance, and maintaining high availability. If you're using Amazon EKS (Elastic Kubernetes Service), the AWS ALB Ingress Controller makes this process much simpler.
Here’s a straightforward guide to setting up and deploying the AWS ALB Ingress Controller in your Kubernetes cluster.
Prerequisites
Before starting, ensure that you have the following:
- A working Amazon EKS cluster.
- kubectl installed and configured for your cluster.
- The necessary IAM permissions to manage resources in EKS.
- Helm 3 installed for managing Kubernetes applications.
- The AWS CLI set up and configured on your local machine.
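You can sanity-check each prerequisite from your terminal before proceeding. A quick sketch (each command only confirms that the corresponding tool is installed and reachable):

```shell
# Sanity-check the tooling before you begin
kubectl version --client          # kubectl is installed
helm version --short              # Helm 3.x is installed
aws --version                     # AWS CLI is installed
aws sts get-caller-identity       # AWS credentials are configured
kubectl get nodes                 # kubectl can reach the EKS cluster
```

If any command fails, resolve it before continuing; the steps below depend on all five.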
Step 1: Install the AWS Load Balancer Controller using Helm
The AWS Load Balancer Controller is installed via Helm, a tool for managing Kubernetes applications. Here’s how you can install it:
1.1. Add the AWS Helm Repository
First, add the official AWS Helm repository to your setup:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
1.2. Install the Controller
Once the repository is added, you can install the AWS Load Balancer Controller with this command:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=<your-cluster-name> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--namespace kube-system
Make sure to replace <your-cluster-name> with your actual EKS cluster name. This command installs the controller into the kube-system namespace and points it at an existing service account, which we will create in the next step.
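Once the Helm release is installed, you can confirm the controller is actually running. A minimal check (the deployment name below is the chart's default):

```shell
# Confirm the controller deployment is up in kube-system
kubectl get deployment -n kube-system aws-load-balancer-controller

# If the deployment is not ready, the controller logs usually explain why
kubectl logs -n kube-system deployment/aws-load-balancer-controller
```

A common cause of a crash-looping controller at this point is the missing service account, which the next step creates.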
Step 2: Create the IAM Policy and Role
To allow the AWS Load Balancer Controller to interact with AWS resources like load balancers and security groups, you need to create an IAM policy and attach it to a service account.
2.1. Create the IAM Policy
Start by creating an IAM policy with the required permissions:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam-policy.json
Here's a minimal example iam-policy.json file with the core permissions (the full policy published by the controller project is broader and more fine-grained):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:*",
        "ec2:Describe*",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup"
      ],
      "Resource": "*"
    }
  ]
}
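Rather than hand-writing the policy, you can download the complete policy document that ships with the controller project. A sketch, assuming network access; the version tag below is only an example, so check the project's releases page for the current one:

```shell
# Download the full IAM policy published with a controller release
# (v2.7.2 is an example tag, not necessarily the latest)
curl -fsSL -o iam-policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json

# Create the policy from the downloaded document
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json
```

Using the published policy avoids permission errors later when the controller needs actions not covered by a trimmed example.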
2.2. Create the IAM Role and Attach the Policy
Once the policy is created, use eksctl to create an IAM role for the controller's Kubernetes service account and attach the policy to it:
eksctl create iamserviceaccount \
--region <region> \
--name aws-load-balancer-controller \
--namespace kube-system \
--cluster <your-cluster-name> \
--attach-policy-arn arn:aws:iam::<account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve \
--override-existing-serviceaccounts
Replace <region>, <your-cluster-name>, and <account-id> with your actual values.
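Note that `eksctl create iamserviceaccount` relies on an IAM OIDC provider being associated with the cluster. If you have not set one up, associate it first and then confirm the service account exists; a sketch with placeholder values:

```shell
REGION=us-east-1          # placeholder: your cluster's region
CLUSTER_NAME=my-cluster   # placeholder: your cluster's name

# Associate an IAM OIDC provider with the cluster (a one-time step)
eksctl utils associate-iam-oidc-provider \
  --region "$REGION" \
  --cluster "$CLUSTER_NAME" \
  --approve

# Confirm the annotated service account now exists
kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml
```

The service account's `eks.amazonaws.com/role-arn` annotation is what lets the controller pods assume the IAM role.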
Step 3: Create and Apply the Ingress Resource
Now that the controller is installed, the next step is to define the routing rules for your Application Load Balancer using an Ingress resource.
3.1. Create the Ingress Resource
Here’s an example YAML file for creating an Ingress resource. This defines how the ALB should route traffic to your service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
spec:
  ingressClassName: alb
  rules:
    - host: <your-domain>.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <your-service-name>
                port:
                  number: 80
In this example, replace <your-domain> with your domain name and <your-service-name> with the name of the service you want to expose.
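The Ingress only routes traffic to a Service that already exists. If you have not created one yet, a minimal ClusterIP Service might look like the sketch below; the selector label and container port are placeholders that must match your own Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <your-service-name>   # must match the name referenced by the Ingress
  namespace: default
spec:
  type: ClusterIP             # with target-type: ip, the ALB sends traffic straight to pod IPs
  selector:
    app: <your-app-label>     # placeholder: must match your Deployment's pod labels
  ports:
    - port: 80                # the port the Ingress backend references
      targetPort: 8080        # placeholder: your container's listening port
```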
3.2. Apply the Ingress Resource
Once you’ve created the YAML file, apply it to your Kubernetes cluster:
kubectl apply -f example-ingress.yaml
Step 4: Verify the ALB Deployment
After applying the Ingress resource, verify the status of the ALB by running:
kubectl get ingress
You should see the external DNS name of the ALB in the output. If everything is set up correctly, the ALB will be managing traffic to your backend service.
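Provisioning the ALB usually takes a minute or two, and the ADDRESS column stays empty until it is ready. A small sketch for watching the Ingress and then probing the ALB directly; the domain below is a placeholder, and the Host header is needed because the Ingress rule matches on it:

```shell
HOST=example.com   # placeholder: the host configured in your Ingress rule

# Watch until the ADDRESS column shows the ALB's DNS name (Ctrl+C to stop)
kubectl get ingress example-ingress -n default -w

# Read the ALB hostname from the Ingress status and probe it,
# sending the Host header the Ingress rule matches on
ALB_DNS=$(kubectl get ingress example-ingress -n default \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -i -H "Host: ${HOST}" "http://${ALB_DNS}/"
```

Probing with an explicit Host header lets you test before pointing DNS for your domain at the ALB.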
Step 5: Test the Setup
To test, navigate to your domain in a browser. The ALB should route the traffic to your backend service, and the load will be distributed across your Kubernetes pods. If you see the expected result, then the setup is working correctly.
Conclusion
Congratulations, you’ve successfully set up and deployed the AWS Application Load Balancer Controller in your Kubernetes cluster. With this setup, you can now manage and distribute external traffic efficiently across your services, ensuring better scalability, high availability, and a smooth user experience.