Setting Up ELK Stack in Kubernetes
Introduction
This guide explains how to deploy the ELK Stack (Elasticsearch, Logstash, Kibana) as separate pods in a Kubernetes cluster, with Filebeat collecting container logs, to provide centralized logging.
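The manifests in the following sections are applied with kubectl; saving each one to its own file keeps the steps easy to repeat (the file name below is just a placeholder, not something this guide prescribes):

# Apply a saved manifest and watch the pods come up
kubectl apply -f elasticsearch-statefulset.yaml
kubectl get pods -w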
1. Deploy Elasticsearch
Create a StatefulSet for Elasticsearch, as it requires persistent storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.6.0
          ports:
            - containerPort: 9200
          env:
            - name: discovery.type
              value: single-node
            # Security is on by default in Elasticsearch 8.x; disable it here so the
            # plain-HTTP elasticsearch:9200 URLs used by Logstash and Kibana below work.
            # Re-enable it (with TLS and credentials) for anything beyond a demo.
            - name: xpack.security.enabled
              value: "false"
          resources:
            limits:
              memory: "2Gi"
              cpu: "1"
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
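Note that the StatefulSet's serviceName and the http://elasticsearch:9200 URLs used by Logstash and Kibana below assume a Service named elasticsearch exists; the steps above don't define one, so here is a minimal headless Service sketch to back the StatefulSet:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None   # headless: gives the StatefulSet stable DNS while elasticsearch:9200 still resolves
  selector:
    app: elasticsearch
  ports:
    - name: http
      port: 9200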
2. Deploy Logstash
Create a Deployment for Logstash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.6.0
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: logstash-config
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: logstash-config
          configMap:
            name: logstash-config
Store the pipeline configuration in the ConfigMap referenced above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }
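Filebeat (section 4) will ship logs to logstash:5044, which assumes a Service in front of the Logstash pods; the original steps don't include one, so here is a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash
  ports:
    - name: beats
      port: 5044
      targetPort: 5044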
3. Deploy Kibana
Create a Deployment for Kibana.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:8.6.0
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch:9200"
4. Deploy Filebeat
Run Filebeat as a DaemonSet so a log collector runs on every node and picks up container logs from the host.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.6.0
          volumeMounts:
            - name: config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
            # The node's log directory must be mounted so Filebeat can read the
            # /var/log/containers/*.log paths referenced in filebeat.yml
            # (Docker-runtime nodes may also need /var/lib/docker/containers).
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: filebeat-config
        - name: varlog
          hostPath:
            path: /var/log
Store the Filebeat configuration in the ConfigMap referenced above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    output.logstash:
      hosts: ["logstash:5044"]
5. Expose Kibana
Expose Kibana as a NodePort Service so it can be reached from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  type: NodePort
  ports:
    - port: 5601
      nodePort: 32000
  selector:
    app: kibana
Then open Kibana in a browser at http://<Kubernetes_Node_IP>:32000.
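The node IP comes from kubectl; if NodePorts are not reachable in your environment, a port-forward is a simple alternative (both commands are standard kubectl, not specific to this setup):

# Find a node IP for the URL above
kubectl get nodes -o wide

# Alternative: tunnel Kibana to http://localhost:5601 without using the NodePort
kubectl port-forward svc/kibana 5601:5601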
Final Thoughts
✅ Elasticsearch: Stores logs.
✅ Logstash: Processes logs.
✅ Kibana: Visualizes logs.
✅ Filebeat: Collects logs from application pods.
This setup provides centralized logging in Kubernetes. If you need Helm charts or auto-scaling for ELK, let me know! 🚀