Installation Using K3s

This guide walks you through the installation of Cilium on K3s, a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

Cilium is presently supported on amd64 and arm64 architectures.

Install a Master Node

The first step is to install a K3s master node, making sure to disable support for the default CNI plugin and the built-in network policy enforcer:

Note

If you are running Cilium in Kubernetes Without kube-proxy mode, add the --disable-kube-proxy option to the command below.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -
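
For example, a master node intended for a kube-proxy-free setup could be installed as follows. This is only a sketch of the K3s side; the matching Cilium configuration for kube-proxy replacement is described in the Kubernetes Without kube-proxy guide.

# Same install as above, additionally disabling K3s' built-in kube-proxy.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy --disable-kube-proxy' sh -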

Install Agent Nodes (Optional)

K3s can run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node token, which can be found on the master node at /var/lib/rancher/k3s/server/node-token.

Install K3s on the agent nodes and join them to the master node, making sure to replace the variables with values from your environment:

curl -sfL https://get.k3s.io | K3S_URL="https://${MASTER_IP}:6443" K3S_TOKEN="${NODE_TOKEN}" sh -
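
As an illustration, the join token can be read on the master node, and MASTER_IP should be an address the agents can reach (the hostname -I approach below is only a sketch; substitute the correct interface for your environment). Copy both values into the agent's shell before running the command above.

# On the master node:
NODE_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
MASTER_IP=$(hostname -I | awk '{print $1}')   # illustrative; use the master's reachable address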

Should you encounter any issues during the installation, please refer to the Troubleshooting section and/or seek help on Cilium Slack.

Please consult the Kubernetes Requirements for information on how you need to configure your Kubernetes cluster to operate with Cilium.

Configure Cluster Access

For the Cilium CLI to access the cluster in the following steps, point the KUBECONFIG environment variable at the kubeconfig file that K3s stores at /etc/rancher/k3s/k3s.yaml:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
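
As a quick sanity check, you can confirm that the kubeconfig is picked up before proceeding (assuming kubectl is installed; k3s kubectl get nodes works as well):

kubectl get nodes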

Install Cilium

Warning

Make sure you install cilium-cli v0.15.0 or later. The rest of the instructions do not work with older versions of cilium-cli. To confirm the cilium-cli version that is installed on your system, run:

cilium version --client

See Cilium CLI upgrade notes for more details.

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Clone the Cilium GitHub repository so that the Cilium CLI can access the latest unreleased Helm chart from the main branch:

git clone git@github.com:cilium/cilium.git
cd cilium
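
If you do not have SSH keys registered with GitHub, cloning over HTTPS works just as well:

git clone https://github.com/cilium/cilium.git
cd cilium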

Note

Install Cilium with --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16" to match the K3s default podCIDR of 10.42.0.0/16.
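
If you want to double-check the pod CIDR your cluster is actually using, you can inspect the per-node allocations; with the K3s defaults each node receives a /24 slice out of 10.42.0.0/16:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'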

Note

If you are using Rancher Desktop, you may need to override the CNI binary path by adding the flag --set 'cni.binPath=/usr/libexec/cni'.

Install Cilium by running:

cilium install --chart-directory ./install/kubernetes/cilium --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16"
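
For example, on Rancher Desktop (see the note above) the same installation with the CNI binary path override would look like this:

cilium install --chart-directory ./install/kubernetes/cilium \
    --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16" \
    --set 'cni.binPath=/usr/libexec/cni'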

Validate the Installation

To validate that Cilium has been properly installed, you can run:

$ cilium status --wait
   /¯¯\
/¯¯\__/¯¯\    Cilium:         OK
\__/¯¯\__/    Operator:       OK
/¯¯\__/¯¯\    Hubble:         disabled
\__/¯¯\__/    ClusterMesh:    disabled
   \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2
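
You can also check the underlying pods directly with kubectl; the Cilium agent pods carry the k8s-app=cilium label:

kubectl -n kube-system get pods -l k8s-app=cilium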

Run the following command to validate that your cluster has proper network connectivity:

$ cilium connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)

Note

The connectivity test may fail to deploy due to too many open files in one or more of the pods. If you notice this error, you can increase the inotify resource limits on your host machine (see Pod errors due to “too many open files”).
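
A minimal sketch of such an adjustment, assuming the inotify defaults on your host are too low (the values below are illustrative; see the linked troubleshooting entry for guidance):

# Illustrative values; adjust to your environment and persist via /etc/sysctl.d/ if needed.
sudo sysctl -w fs.inotify.max_user_instances=1024
sudo sysctl -w fs.inotify.max_user_watches=1048576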

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

Next Steps