Installation using Rancher Kubernetes Engine

This guide walks you through the installation of Cilium on standalone Rancher Kubernetes Engine (RKE) clusters, SUSE’s CNCF-certified Kubernetes distribution with built-in security and compliance capabilities. RKE addresses the common frustration of Kubernetes installation complexity by removing most host dependencies and providing a stable path for deployment, upgrades, and rollbacks.

If you’re using the Rancher Management Console/UI to install your RKE clusters, head over to the Installation using Rancher guide.

Install a Cluster Using RKE1

The first step is to install a cluster based on the RKE1 Kubernetes installation guide. When creating the cluster, make sure to change the default network plugin in the generated config.yaml file.

Change:

network:
  options:
    flannel_backend_type: "vxlan"
  plugin: "canal"

To:

network:
  plugin: none
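
After updating the config, provision the cluster as usual. As a sketch, assuming the rke binary is installed and the edited file sits in the current directory:

# Provision (or reconcile) the cluster from the edited config
rke up --config config.yaml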

Install a Cluster Using RKE2

The first step is to install a cluster based on the RKE2 Kubernetes installation guide. You can either use the RKE2-integrated Cilium version, or configure the RKE2 cluster with cni: none (see the RKE2 documentation) and install Cilium with Helm. Either method works, but the directly integrated one is recommended for most users.

Cilium power users might want to use the cni: none method, as Rancher uses a custom rke2-cilium Helm chart with independent release cycles for its integrated Cilium version. By using an out-of-band Cilium installation (based on the official Cilium Helm chart) instead, power users gain more flexibility from a Cilium perspective.
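
For the cni: none route, a minimal sketch of the server configuration, assuming the standard RKE2 configuration file location:

# /etc/rancher/rke2/config.yaml
cni: none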

Deploy Cilium

Install Cilium via helm install:

helm repo add cilium https://helm.cilium.io
helm repo update
helm install cilium cilium/cilium --version 1.16.3 \
   --namespace $CILIUM_NAMESPACE
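
The command above assumes the CILIUM_NAMESPACE environment variable is already set; kube-system is a common choice:

export CILIUM_NAMESPACE=kube-system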

Validate the Installation

Warning

Make sure you install cilium-cli v0.15.0 or later. The rest of the instructions do not work with older versions of cilium-cli. To confirm the cilium-cli version installed on your system, run:

cilium version --client

See Cilium CLI upgrade notes for more details.

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

To validate that Cilium has been properly installed, you can run:

$ cilium status --wait
   /¯¯\
/¯¯\__/¯¯\    Cilium:         OK
\__/¯¯\__/    Operator:       OK
/¯¯\__/¯¯\    Hubble:         disabled
\__/¯¯\__/    ClusterMesh:    disabled
   \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
Image versions    cilium             quay.io/cilium/cilium:v1.16.3: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.16.3: 2

Run the following command to validate that your cluster has proper network connectivity:

$ cilium connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)
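
The connectivity test deploys its workloads into a dedicated test namespace (assumed here to be named cilium-test; check the test output for the actual name). Once the tests pass, you can remove it:

kubectl delete namespace cilium-test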

Note

The connectivity test may fail to deploy due to too many open files in one or more of the pods. If you notice this error, you can increase the inotify resource limits on your host machine (see Pod errors due to “too many open files”).
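
As a sketch, you can raise the limits with sysctl; the values and the drop-in file name below are assumptions to adapt to your hosts:

# Raise inotify limits for the running system (assumed example values)
sudo sysctl -w fs.inotify.max_user_instances=1024
sudo sysctl -w fs.inotify.max_user_watches=1048576

# Persist the settings across reboots (assumed drop-in file name)
printf 'fs.inotify.max_user_instances=1024\nfs.inotify.max_user_watches=1048576\n' | sudo tee /etc/sysctl.d/99-inotify.conf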

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

Next Steps