Installation with external etcd¶
This guide walks you through the steps required to set up Cilium on Kubernetes using an external etcd. Use of an external etcd provides better performance and is suitable for larger environments. If you are looking for a simple installation method to get started, refer to the section Installation with managed etcd.
When do I need to use a kvstore?¶
Unlike the section Quick Installation, this guide explains how to configure Cilium to use an external kvstore such as etcd. If you are unsure whether you need to use a kvstore at all, the following is a list of reasons when to use a kvstore:
- If you want to use the Multi-Cluster (Cluster Mesh) functionality.
- If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events.
- If you do not want Cilium to store state in Kubernetes custom resources (CRDs).
Make sure your Kubernetes environment meets the requirements:
- Kubernetes >= 1.12
- Linux kernel >= 4.9
- Kubernetes in CNI mode
- eBPF filesystem mounted on all worker nodes
- Recommended: Enable PodCIDR allocation (--allocate-node-cidrs) in the kube-controller-manager
Refer to the section Requirements for detailed instructions on how to prepare your Kubernetes environment.
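The eBPF filesystem requirement can be checked per node. A minimal sketch, where bpf_mounted is a hypothetical helper that scans a node's mount table for a bpf mount:

```shell
# Hypothetical helper: succeed if the mount table (read from stdin)
# contains an entry of filesystem type bpf.
bpf_mounted() { grep -q ' type bpf '; }

# On a worker node you would run:
#   mount | bpf_mounted || sudo mount bpffs /sys/fs/bpf -t bpf
```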
You will also need an external etcd version 3.1.0 or higher.
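It is worth confirming the version requirement before pointing Cilium at the cluster. A sketch, assuming etcd's standard /version HTTP endpoint (which returns JSON such as {"etcdserver":"3.4.13",...}) and a placeholder endpoint name:

```shell
# Succeed if dotted version $1 is >= dotted version $2 (uses GNU sort -V).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# With access to your etcd cluster you would run something like:
#   v=$(curl -s http://etcd-endpoint1:2379/version | sed -n 's/.*"etcdserver":"\([^"]*\)".*/\1/p')
#   ver_ge "$v" 3.1.0 && echo "etcd version OK: $v"
```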
When using an external kvstore, the address of the external kvstore needs to be
configured in the Cilium ConfigMap. With the Helm chart, this is done by setting the etcd options shown below.
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.9.9 \
  --namespace kube-system \
  --set etcd.enabled=true \
  --set "etcd.endpoints[0]=http://etcd-endpoint1:2379" \
  --set "etcd.endpoints[1]=http://etcd-endpoint2:2379" \
  --set "etcd.endpoints[2]=http://etcd-endpoint3:2379"
If you do not want Cilium to store state in Kubernetes custom resources (CRDs),
consider setting identityAllocationMode: --set identityAllocationMode=kvstore
Optional: Configure the SSL certificates¶
Create a Kubernetes secret with the root certificate authority, and client-side key and certificate of etcd:
kubectl create secret generic -n kube-system cilium-etcd-secrets \
  --from-file=etcd-client-ca.crt=ca.crt \
  --from-file=etcd-client.key=client.key \
  --from-file=etcd-client.crt=client.crt
Adjust the Helm values to enable SSL for etcd and use https instead of http in the etcd endpoint URLs:
helm install cilium cilium/cilium --version 1.9.9 \
  --namespace kube-system \
  --set etcd.enabled=true \
  --set etcd.ssl=true \
  --set "etcd.endpoints[0]=https://etcd-endpoint1:2379" \
  --set "etcd.endpoints[1]=https://etcd-endpoint2:2379" \
  --set "etcd.endpoints[2]=https://etcd-endpoint3:2379"
Validate the Installation¶
Verify that Cilium pods were started on each of your worker nodes:
kubectl --namespace kube-system get ds cilium
NAME     DESIRED   CURRENT   READY   NODE-SELECTOR   AGE
cilium   4         4         4       <none>          3m2s

kubectl -n kube-system get deployments cilium-operator
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
cilium-operator   2/2     2            2           2m6s
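Beyond the pod counts, the agent's own view (including kvstore connectivity) can be inspected with cilium status inside an agent pod. A sketch, where kvstore_ok is a hypothetical helper that scans the status output:

```shell
# Hypothetical helper: succeed if `cilium status` output reports the kvstore as Ok.
kvstore_ok() { grep -q 'KVStore:.*Ok'; }

# Against a live cluster you would run:
#   kubectl -n kube-system exec ds/cilium -- cilium status | kvstore_ok && echo "kvstore OK"
```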
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE
environment variable. Subsequent commands reference this environment variable.
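For the installation above that namespace is kube-system; adjust if you installed Cilium elsewhere:

```shell
# Namespace used by the Helm installation above (assumption: kube-system).
export CILIUM_NAMESPACE=kube-system
```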
Enable Hubble for Cluster-Wide Visibility¶
Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI on top of your existing installation.
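The command for this step is not included in this extract. With the 1.9 Helm chart, Relay and the UI are enabled through Hubble values along these lines (a sketch; verify the keys against the chart's values reference, and note that listenAddress makes each agent expose Hubble so Relay can connect):

```yaml
# Sketch of Helm values for the cilium 1.9.x chart.
hubble:
  listenAddress: ":4244"
  relay:
    enabled: true
  ui:
    enabled: true
```

Applied, for example, as helm upgrade cilium cilium/cilium --version 1.9.9 --namespace $CILIUM_NAMESPACE --reuse-values -f hubble-values.yaml, where hubble-values.yaml is a file with the contents above.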
Once the Hubble UI pod is started, use port forwarding for the hubble-ui
service. This allows opening the UI locally in a browser:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
And then open http://localhost:12000/ to access the UI.
Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions below:
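The installation instructions for the CLI are not reproduced in this extract. One common route (artifact and stable.txt naming assumed from the cilium/hubble GitHub repository; adjust OS/arch as needed) is:

```shell
# Fetch the latest stable Hubble CLI release for Linux amd64 and install it.
# stable.txt in the cilium/hubble repository tracks the current stable version.
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all \
  "https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz"
tar zxf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin
```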
As with the UI, use port forwarding for the hubble-relay service to
make it available locally:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
In a separate terminal window, run the
hubble status command specifying the
Hubble Relay address:
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4
If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:
hubble --server localhost:4245 observe
If you encounter any problem at this point, you may seek help on Slack.
Hubble CLI configuration can be persisted using a configuration file or
environment variables. This avoids having to specify options specific to a
particular environment every time a command is run. Run
hubble config for more information.
For more information about Hubble and its components, see the Observability section.