Setting Up ELK Stack in Kubernetes

This guide outlines the steps to deploy the ELK Stack (Elasticsearch, Logstash, Kibana) in a Kubernetes cluster, including the creation of StatefulSets and Deployments for each component. It also details the configuration of Filebeat for log collection and the exposure of Kibana as a Service for external access. The setup provides a centralized logging solution within Kubernetes, with options for further enhancements like Helm charts or auto-scaling.

Setting Up ELK Stack in Kubernetes

Introduction
This guide explains how to deploy the ELK Stack (Elasticsearch, Logstash, Kibana) in a
Kubernetes cluster as separate pods for centralized logging.

1. Deploy Elasticsearch
Create a StatefulSet for Elasticsearch, as it requires persistent storage.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.6.0
          ports:
            - containerPort: 9200
          env:
            - name: discovery.type
              value: single-node
            # Elasticsearch 8.x enables TLS and authentication by default;
            # disable security for this plain-HTTP demo setup only.
            - name: xpack.security.enabled
              value: "false"
          resources:
            limits:
              memory: "2Gi"
              cpu: "1"
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
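The StatefulSet's serviceName field, and the http://elasticsearch:9200 address that Logstash and Kibana use below, assume a Service named elasticsearch exists, which this guide does not otherwise create. A minimal headless Service sketch that satisfies both:

```yaml
# Headless Service: gives the StatefulSet pod stable DNS and lets
# other pods reach Elasticsearch at http://elasticsearch:9200.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None  # headless, as required for a StatefulSet's serviceName
  selector:
    app: elasticsearch
  ports:
    - port: 9200
```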

2. Deploy Logstash
Create a Deployment for Logstash.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.6.0
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: logstash-config
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: logstash-config
          configMap:
            name: logstash-config

Create ConfigMap for Logstash:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }
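Filebeat (section 4) forwards to logstash:5044, which also requires a Service that the guide does not show. A minimal sketch:

```yaml
# ClusterIP Service so Filebeat can resolve and reach logstash:5044.
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash
  ports:
    - port: 5044  # Beats input port from logstash.conf
```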

3. Deploy Kibana
Create a Deployment for Kibana.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:8.6.0
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch:9200"

4. Deploy Filebeat (Log Collector)
Use Filebeat, running as a DaemonSet so one instance runs on every node, to collect logs from your application pods and forward them to Logstash. The pod must mount the host's log directory, since that is where Filebeat reads container log files from.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.6.0
          securityContext:
            runAsUser: 0  # root is needed to read host log files
          volumeMounts:
            - name: config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
            # Mount the host's log directory; without this, Filebeat
            # cannot see /var/log/containers/*.log.
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: filebeat-config
        - name: varlog
          hostPath:
            path: /var/log

Create ConfigMap for Filebeat:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    output.logstash:
      hosts: ["logstash:5044"]

5. Expose Kibana
Expose Kibana as a NodePort Service so you can access it from outside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  type: NodePort
  ports:
    - port: 5601
      nodePort: 32000
  selector:
    app: kibana

Now, you can access Kibana at:

http://<Kubernetes_Node_IP>:32000
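If your cluster's node IPs are not directly reachable (common on managed cloud clusters), port-forwarding through kubectl is an alternative way to reach the same Service:

```
# Forward local port 5601 to the kibana Service, then browse http://localhost:5601
kubectl port-forward svc/kibana 5601:5601
```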

Final Thoughts
✅ Elasticsearch: Stores logs.
✅ Logstash: Processes logs.
✅ Kibana: Visualizes logs.
✅ Filebeat: Collects logs from application pods.

This setup provides centralized logging in Kubernetes. For production use, consider deploying the stack via the official Elastic Helm charts and adding auto-scaling.
