Setting Up the EFK Stack on Kubernetes with Elastic Cloud on Kubernetes (ECK)
The EFK Stack (Elasticsearch, Fluentd, and Kibana) is a powerful logging and monitoring solution commonly used in Kubernetes environments. Elasticsearch serves as the search and analytics engine, Fluentd (or Fluent Bit) acts as the log forwarder, and Kibana is the visualisation layer.
In this guide, we’ll walk through how to deploy the EFK stack on Kubernetes using Elastic Cloud on Kubernetes (ECK). ECK simplifies the deployment and management of Elasticsearch and Kibana, addressing operational challenges like upgrades, scaling, and configuration management by automating these tasks with Kubernetes operators.
Prerequisites & Assumptions
Before we begin, make sure you have the following tools set up:
- Helm: A package manager for Kubernetes, which simplifies application deployment and management.
- kubectl: The Kubernetes command-line tool, necessary for interacting with your Kubernetes cluster.
You’ll also need access to a Kubernetes cluster where you have sufficient privileges to deploy resources.
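A quick preflight to confirm both CLIs are on your PATH (a minimal sketch; it only checks presence, not versions or cluster access):

```shell
# Minimal preflight: confirm the required CLIs are installed.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```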
Step-by-Step Guide to Deploy the EFK Stack on Kubernetes with ECK
1. Install the ECK Operator & Fluent Bit
First, we need to install the Elastic Cloud on Kubernetes (ECK) Operator and the Fluent Bit agent for log forwarding.
Create Namespace for Observability
Create a new namespace where all observability resources will reside.
kubectl create ns observer
Add Helm Repositories
Add the Elastic and Fluent Bit Helm repositories to your Helm configuration:
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
Install the ECK Operator
The ECK Operator manages the deployment of Elasticsearch and Kibana. Install the operator with Helm.
helm install elastic-operator elastic/eck-operator -n observer --kubeconfig /path/to/your/kubeconfig # Replace with your kubeconfig path
2. Deploy Elasticsearch Cluster
Now, let’s deploy an Elasticsearch cluster that Fluent Bit will forward logs to. This deployment will use a simple Elasticsearch configuration with one node.
Create an Elasticsearch YAML File
Save the following YAML configuration as elasticsearch.yaml:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart-es
  namespace: observer
spec:
  version: 8.17.1 # Specify your desired Elasticsearch version
  nodeSets:
  - name: default
    count: 1 # Single-node cluster (scale as needed)
    config:
      node.store.allow_mmap: false # Avoids having to raise vm.max_map_count on the host
  http:
    service:
      spec:
        type: LoadBalancer # Use NodePort if LoadBalancer is not available
Apply the Elasticsearch Configuration
Deploy Elasticsearch by applying the configuration:
kubectl apply -f elasticsearch.yaml
3. Deploy Kibana Instance
Next, deploy Kibana, the UI for visualising logs from Elasticsearch.
Create a Kibana YAML File
Save the following YAML configuration as kibana.yaml:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart-kb
  namespace: observer
spec:
  version: 8.17.1 # Should match the Elasticsearch version
  count: 1 # Single instance
  elasticsearchRef:
    name: quickstart-es
  http:
    service:
      spec:
        type: NodePort # Use LoadBalancer if preferred
Apply the Kibana Configuration
Deploy Kibana by applying the configuration:
kubectl apply -f kibana.yaml
4. Configure Fluent Bit for Log Forwarding
Fluent Bit will act as the log forwarder, sending logs from your Kubernetes nodes to Elasticsearch.
Configure Fluent Bit Values File
You will need to create a Helm values file for Fluent Bit, specifying how it will connect to Elasticsearch. Create a values file (e.g., fluentbit-values.yaml) and populate the config.outputs section, which the fluent/fluent-bit chart renders into the agent's output configuration:

config:
  outputs: |
    # Replace <elasticsearch_service_ip> and <elasticsearch_password>
    # with the values from your deployment (see below).
    [OUTPUT]
        Name               es
        Match              *
        Host               <elasticsearch_service_ip>
        Port               9200
        Index              fluentbit-forwarder
        Logstash_Format    Off
        Suppress_Type_Name On
        HTTP_User          elastic
        HTTP_Passwd        <elasticsearch_password>
        tls                On
        # tls.verify Off is a security risk; enable verification in production
        tls.verify         Off
        tls.debug          3
        tls.ca_file        /fluent-bit/tls/tls.crt
        tls.crt_file       /fluent-bit/tls/tls.crt
Make sure to replace <elasticsearch_service_ip> and <elasticsearch_password> with the appropriate values from your deployment:

quickstart-es-es-elastic-user # Secret holding the elastic user's password; grab it and be sure to Base64-decode
quickstart-es-es-http # LoadBalancer service; grab the external IP
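Kubernetes stores Secret values base64-encoded, so the password must be decoded before use. A minimal sketch of the decode step (the encoded string below is fabricated for illustration, not a real credential):

```shell
# Secret values are base64-encoded; decoding looks like this.
# 'c2VjcmV0LXBhc3N3b3Jk' is a fabricated example value:
echo 'c2VjcmV0LXBhc3N3b3Jk' | base64 -d
# Against a live cluster, read the value straight from the ECK-managed secret:
#   kubectl get secret quickstart-es-es-elastic-user -n observer \
#     -o jsonpath='{.data.elastic}' | base64 -d
```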
Install Fluent Bit with Helm
Now, install Fluent Bit using Helm, referencing the values file you created:
helm install fluent-bit fluent/fluent-bit -f fluentbit-values.yaml -n observer
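If you would rather have date-suffixed indices (e.g., fluentbit-forwarder-2025.01.31) than a single fluentbit-forwarder index, the es output also supports Logstash-style rotation; a sketch of the two keys to change in the same [OUTPUT] block:

```ini
Logstash_Format On
Logstash_Prefix fluentbit-forwarder
```

With Logstash_Format On, the Index key is ignored and each record's date is appended to the prefix to name the index.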
5. Mount Certificates via Secret in Fluent Bit DaemonSet
To securely connect Fluent Bit to Elasticsearch over HTTPS, you need to mount the Elasticsearch TLS certificates into the Fluent Bit container.
Update Fluent Bit DaemonSet
Modify the Fluent Bit DaemonSet manifest to mount the necessary certificates stored in the quickstart-es-es-http-certs-public secret (ECK names it <cluster-name>-es-http-certs-public):

volumeMounts:
- mountPath: /fluent-bit/tls
  name: tls-certs
  readOnly: true # Ensure the certificates are read-only
volumes:
- name: tls-certs
  secret:
    secretName: quickstart-es-es-http-certs-public
This configuration mounts the certificates at /fluent-bit/tls within the Fluent Bit container. Ensure the tls.ca_file and tls.crt_file paths in your Fluent Bit configuration point to these mounted files (e.g., /fluent-bit/tls/tls.crt).
Restart Fluent Bit Pods
Once the changes are made, restart the Fluent Bit pods:
kubectl rollout restart daemonset fluent-bit -n observer
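Note that hand-edits to a Helm-managed DaemonSet will be reverted the next time the chart is upgraded. As an alternative, the same mount can be declared in the Fluent Bit values file; a sketch, assuming the fluent/fluent-bit chart's extraVolumes/extraVolumeMounts keys:

```yaml
# Addition to fluentbit-values.yaml; key names assume the fluent/fluent-bit chart.
extraVolumes:
  - name: tls-certs
    secret:
      secretName: quickstart-es-es-http-certs-public
extraVolumeMounts:
  - name: tls-certs
    mountPath: /fluent-bit/tls
    readOnly: true
```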
6. Access Kibana UI
Once everything is up and running, you can access Kibana to visualise and query your logs.
Port-Forward Kibana (If Necessary)
If Kibana is exposed via a ClusterIP service, use kubectl port-forward to access it locally:
kubectl port-forward svc/quickstart-kb-kb-http 5601:5601 -n observer --kubeconfig /path/to/your/kubeconfig # Replace with your kubeconfig path
Get External IP or NodePort of Kibana
To access Kibana from outside the cluster, retrieve the external IP or NodePort of the Kibana service:
kubectl get svc quickstart-kb-kb-http -n observer --kubeconfig /path/to/your/kubeconfig # Replace with your kubeconfig path
Then, open the Kibana UI in your browser at:
https://<kibana_ip>:<kibana_port>/login
ECK serves Kibana over HTTPS with a self-signed certificate by default, so your browser may show a warning.
7. Test Elasticsearch Query
To ensure that logs are being ingested into Elasticsearch, you can test a query using curl from inside an Elasticsearch pod (pod names follow <cluster-name>-es-<nodeSet>-<ordinal>):

kubectl exec -it quickstart-es-es-default-0 -n observer -- bash
curl -k "https://localhost:9200/fluentbit-forwarder/_search?pretty=true" -u "elastic:<elasticsearch_password>"
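A healthy query returns a JSON body whose hits.total.value is the number of matching log documents. A sketch of what to look for, run against a fabricated, trimmed response (real responses carry the full Fluent Bit records under _source):

```shell
# Fabricated, trimmed example of a _search response body:
cat > /tmp/sample_response.json <<'EOF'
{
  "hits": {
    "total": { "value": 42, "relation": "eq" },
    "hits": [
      { "_index": "fluentbit-forwarder",
        "_source": { "log": "hello from fluent-bit" } }
    ]
  }
}
EOF
# Any non-zero total means Fluent Bit documents reached the index:
grep -o '"value": [0-9]*' /tmp/sample_response.json
```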
Security Considerations (Caveats)
While the setup above is functional, there are some important security risks that need to be addressed in a production environment:
- TLS Configuration: In this example, tls.verify is turned off, which exposes the system to man-in-the-middle (MITM) attacks. This should be enabled with proper certificate validation in production.
- Credentials in ConfigMap: Storing sensitive information like passwords in plain text within ConfigMaps is a major security risk. Instead, use Kubernetes Secrets or a dedicated secrets management solution to securely manage credentials.
These issues will need to be addressed as part of securing the deployment in a real-world scenario.
Conclusion
Deploying the EFK Stack on Kubernetes using Elastic Cloud on Kubernetes (ECK) is a powerful way to centralise log management, enabling better observability and faster troubleshooting. While this guide focused on a basic setup, you can expand and secure the deployment based on your production needs. By leveraging the operator pattern, ECK streamlines Elasticsearch and Kibana management, reducing operational overhead and improving scalability.
If you enjoyed this read, let me know in the comments section or shoot me a message on LinkedIn: https://www.linkedin.com/in/saedf/