Installation with external etcd

This guide walks you through the steps required to set up Cilium on Kubernetes using an external etcd. Use of an external etcd provides better performance and is suitable for larger environments.

Should you encounter any issues during the installation, please refer to the Troubleshooting section and/or seek help on Cilium Slack.

When do I need to use a kvstore?

Unlike the section Cilium Quick Installation, this guide explains how to configure Cilium to use an external kvstore such as etcd. If you are unsure whether you need a kvstore at all, the following are the main reasons to use one:

  • If you are running in an environment where you observe a high overhead in state propagation caused by Kubernetes events.

  • If you do not want Cilium to store state in Kubernetes custom resources (CRDs).

  • If you run a cluster with more pods and nodes than those tested in the Scalability report.

Requirements

Make sure your Kubernetes environment meets the following requirements:

  • Kubernetes >= 1.16

  • Linux kernel >= 4.19.57 or equivalent

  • Kubernetes in CNI mode

  • eBPF filesystem mounted on all worker nodes (see the quick check below)

  • Recommended: Enable PodCIDR allocation (--allocate-node-cidrs) in the kube-controller-manager

Refer to the section Requirements for detailed instructions on how to prepare your Kubernetes environment.
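To quickly verify the eBPF filesystem on a node, check the mount table; if nothing is printed, mount it manually (most modern distributions and kubeadm-based setups mount it automatically):

mount | grep /sys/fs/bpf
# only needed if the filesystem is not mounted yet:
sudo mount bpffs /sys/fs/bpf -t bpf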

You will also need an external etcd version 3.4.0 or higher.
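To confirm the version of your etcd cluster, you can query the version endpoint of any etcd member (here, etcd-endpoint1 stands in for one of your actual endpoints):

curl -s http://etcd-endpoint1:2379/version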

Kvstore and Cilium dependency

When using an external kvstore, it is important to break the circular dependency between Cilium and the kvstore: if the kvstore pods run within the same cluster and use the pod network, the kvstore depends on Cilium for connectivity, while Cilium depends on the kvstore for its state. There are two recommended ways of breaking this dependency:

  • Deploy the kvstore outside of the cluster, or on a separately managed cluster.

  • Deploy the kvstore pods on the host network by specifying hostNetwork: true in the pod spec, as sketched below.
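The following fragment illustrates the second option. It is a minimal sketch of a pod spec, not a complete etcd deployment; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: etcd-0                 # hypothetical pod name
  namespace: kube-system
spec:
  hostNetwork: true            # the pod shares the host's network namespace
                               # and therefore does not depend on Cilium
  containers:
    - name: etcd
      image: quay.io/coreos/etcd:v3.5.0   # example image; use your etcd version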

Configure Cilium

When using an external kvstore, the addresses of its endpoints need to be configured in the Cilium ConfigMap. This is done by setting the corresponding Helm values at installation time:

Download the Cilium release tarball and change to the kubernetes install directory:

curl -LO https://github.com/cilium/cilium/archive/main.tar.gz
tar xzf main.tar.gz
cd cilium-main/install/kubernetes

Deploy Cilium release via Helm:

helm install cilium ./cilium \
  --namespace kube-system \
  --set etcd.enabled=true \
  --set "etcd.endpoints[0]=http://etcd-endpoint1:2379" \
  --set "etcd.endpoints[1]=http://etcd-endpoint2:2379" \
  --set "etcd.endpoints[2]=http://etcd-endpoint3:2379"

If you do not want Cilium to store state in Kubernetes custom resources (CRDs), consider setting identityAllocationMode:

--set identityAllocationMode=kvstore
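For reference, this is how the flag fits into the installation command shown earlier (same endpoints as above):

helm install cilium ./cilium \
  --namespace kube-system \
  --set etcd.enabled=true \
  --set identityAllocationMode=kvstore \
  --set "etcd.endpoints[0]=http://etcd-endpoint1:2379" \
  --set "etcd.endpoints[1]=http://etcd-endpoint2:2379" \
  --set "etcd.endpoints[2]=http://etcd-endpoint3:2379"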

Optional: Configure the SSL certificates

Create a Kubernetes secret with the root certificate authority, and client-side key and certificate of etcd:

kubectl create secret generic -n kube-system cilium-etcd-secrets \
    --from-file=etcd-client-ca.crt=ca.crt \
    --from-file=etcd-client.key=client.key \
    --from-file=etcd-client.crt=client.crt
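Before pointing Cilium at etcd over TLS, it can be worth verifying that the certificates actually work against your cluster. A minimal sketch using etcdctl, assuming the same ca.crt, client.crt, and client.key files used to create the secret:

ETCDCTL_API=3 etcdctl \
    --endpoints=https://etcd-endpoint1:2379 \
    --cacert=ca.crt \
    --cert=client.crt \
    --key=client.key \
    endpoint health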

Adjust the Helm values to enable SSL for etcd, and use https instead of http in the etcd endpoint URLs:

helm install cilium ./cilium \
  --namespace kube-system \
  --set etcd.enabled=true \
  --set etcd.ssl=true \
  --set "etcd.endpoints[0]=https://etcd-endpoint1:2379" \
  --set "etcd.endpoints[1]=https://etcd-endpoint2:2379" \
  --set "etcd.endpoints[2]=https://etcd-endpoint3:2379"

Validate the Installation

Warning

Make sure you install cilium-cli v0.15.0 or later. The rest of the instructions do not work with older versions of cilium-cli. To confirm the cilium-cli version installed on your system, run:

cilium version --client

See Cilium CLI upgrade notes for more details.

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Clone the Cilium GitHub repository so that the Cilium CLI can access the latest unreleased Helm chart from the main branch:

git clone git@github.com:cilium/cilium.git
cd cilium

To validate that Cilium has been properly installed, you can run:

$ cilium status --wait
   /¯¯\
/¯¯\__/¯¯\    Cilium:         OK
\__/¯¯\__/    Operator:       OK
/¯¯\__/¯¯\    Hubble:         disabled
\__/¯¯\__/    ClusterMesh:    disabled
   \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2

Run the following command to validate that your cluster has proper network connectivity:

$ cilium connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)

Note

The connectivity test may fail to deploy due to too many open files in one or more of the pods. If you notice this error, you can increase the inotify resource limits on your host machine (see Pod errors due to “too many open files”).
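One common fix is to raise the inotify limits with sysctl; the values below are illustrative, and you may need to persist them in /etc/sysctl.d/ to survive reboots:

sudo sysctl -w fs.inotify.max_user_instances=1024
sudo sysctl -w fs.inotify.max_user_watches=1048576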

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

Next Steps