Quick Installation

This guide will walk you through the quick default installation. It will automatically detect and use the best configuration possible for the Kubernetes distribution you are using. All state is stored using Kubernetes CRDs.

This is the best installation method for most use cases. For large environments (> 500 nodes) or if you want to run specific datapath modes, refer to the Advanced Installation guide.

Should you encounter any issues during the installation, please refer to the Troubleshooting section and/or seek help on the Slack channel.

Create the Cluster

If you don't have a Kubernetes cluster yet, you can use the instructions below to create one locally or with a managed Kubernetes service:

The following commands create a Kubernetes cluster using Google Kubernetes Engine. See Installing Google Cloud SDK for instructions on how to install gcloud and prepare your account.

export NAME="$(whoami)-$RANDOM"
# Create the node pool with the following taint to guarantee that
# pods are only scheduled on the nodes once Cilium is ready.
gcloud container clusters create "${NAME}" \
 --node-taints node.cilium.io/agent-not-ready=true:NoSchedule \
 --zone us-west2-a
gcloud container clusters get-credentials "${NAME}" --zone us-west2-a
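
Optionally, assuming kubectl is installed and now points at the new cluster, you can verify that the nodes carry the taint set above:

# Optional check: list each node together with its taints.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints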

The following commands create a Kubernetes cluster using Azure Kubernetes Service. See Azure Cloud CLI for instructions on how to install az and prepare your account.

For more details about why node pools must be set up in this way on AKS, see the note below the commands.

export NAME="$(whoami)-$RANDOM"
export AZURE_RESOURCE_GROUP="${NAME}-group"
az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2

# Create AKS cluster
az aks create \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --name "${NAME}" \
  --network-plugin azure \
  --node-count 1

# Get name of initial system node pool
nodepool_to_delete=$(az aks nodepool list \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --cluster-name "${NAME}" \
  --output tsv --query "[0].name")

# Create system node pool tainted with `CriticalAddonsOnly=true:NoSchedule`
az aks nodepool add \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --cluster-name "${NAME}" \
  --name systempool \
  --mode system \
  --node-count 1 \
  --node-taints "CriticalAddonsOnly=true:NoSchedule" \
  --no-wait

# Create user node pool tainted with `node.cilium.io/agent-not-ready=true:NoSchedule`
az aks nodepool add \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --cluster-name "${NAME}" \
  --name userpool \
  --mode user \
  --node-count 2 \
  --node-taints "node.cilium.io/agent-not-ready=true:NoSchedule" \
  --no-wait

# Delete the initial system node pool
az aks nodepool delete \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --cluster-name "${NAME}" \
  --name "${nodepool_to_delete}" \
  --no-wait

# Get the credentials to access the cluster with kubectl
az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

Attention

Do NOT specify the --network-policy flag when creating the cluster, as this will cause the Azure CNI plugin to install unwanted iptables rules.

Note

Node pools must be tainted with node.cilium.io/agent-not-ready=true:NoSchedule to ensure that application pods will only be scheduled once Cilium is ready to manage them. However, on AKS:

  • It is not possible to assign taints to the initial node pool at this time, cf. Azure/AKS#1402.
  • It is not possible to assign custom node taints such as node.cilium.io/agent-not-ready=true:NoSchedule to system node pools, cf. Azure/AKS#2578.

In order to have Cilium properly manage application pods on AKS with these limitations, the operations above:

  • Replace the initial node pool with a new system node pool tainted with CriticalAddonsOnly=true:NoSchedule, preventing application pods from being scheduled on it.
  • Create a secondary user node pool tainted with node.cilium.io/agent-not-ready=true:NoSchedule, preventing application pods from being scheduled on it until Cilium is ready to manage them.
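
Once the new node pools have finished provisioning (the commands above use --no-wait), you can optionally verify the taints with kubectl, assuming the credentials from az aks get-credentials are active:

# Optional check: list each node together with its taints.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints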

The following commands create a Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl Installation for instructions on how to install eksctl and prepare your account.

export NAME="$(whoami)-$RANDOM"
cat <<EOF >eks-config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${NAME}
  region: eu-west-1

managedNodeGroups:
- name: ng-1
  desiredCapacity: 2
  privateNetworking: true
  # taint nodes so that application pods are
  # not scheduled until Cilium is deployed.
  taints:
   - key: "node.cilium.io/agent-not-ready"
     value: "true"
     effect: "NoSchedule"
EOF
eksctl create cluster -f ./eks-config.yaml
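
Once the cluster is up, you can optionally confirm that the managed nodes carry the taint from the config above, assuming kubectl picked up the new cluster context:

# Optional check: print each node name followed by its taint keys.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'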

Install kind >= v0.7.0 as per the kind documentation: Installation and Usage

curl -LO https://raw.githubusercontent.com/cilium/cilium/v1.11/Documentation/gettingstarted/kind-config.yaml
kind create cluster --config=kind-config.yaml
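
The referenced kind-config.yaml is expected to disable kind's default CNI, so the nodes will typically report NotReady until Cilium is installed later in this guide. A quick check, assuming kubectl points at the kind cluster:

kubectl get nodes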

Install minikube >= v1.12 as per the minikube documentation: Install Minikube. The following command will bring up a single-node minikube cluster prepared for installing Cilium.

minikube start --network-plugin=cni --cni=false

Note

From minikube v1.12.1+, the Cilium networking plugin can be enabled directly with the --cni=cilium parameter of the minikube start command. However, this may not install the latest version of Cilium.
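
For example, to let minikube deploy its bundled Cilium version instead of installing it with the Cilium CLI below:

minikube start --cni=cilium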

Install the Cilium CLI

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

Linux:

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}

macOS:

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
rm cilium-darwin-amd64.tar.gz{,.sha256sum}

See the full page of releases.
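
Once installed, a simple way to confirm the CLI is available on your PATH is to print its version (the exact output differs between releases):

cilium version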

Install Cilium

You can install Cilium on any Kubernetes cluster. Pick one of the options below:

These are the generic instructions on how to install Cilium into any Kubernetes cluster. The installer will attempt to automatically pick the best configuration options for you. Please see the other tabs for distribution/platform-specific instructions, which also list the ideal default configuration for particular platforms.

Requirements:

Tip

See System Requirements for more details on the system requirements.

Install Cilium

Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:

cilium install
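
cilium install acts on whatever cluster your current kubectl context points to, so it can be worth double-checking the context before running it:

kubectl config current-context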

To install Cilium on Google Kubernetes Engine (GKE), perform the following steps:

Default Configuration:

Datapath          IPAM                  Datastore
Direct Routing    Kubernetes PodCIDR    Kubernetes CRD

Requirements:

  • The cluster must be created with the taint node.cilium.io/agent-not-ready=true:NoSchedule using --node-taints option.

Install Cilium:

Install Cilium into the GKE cluster:

cilium install

To install Cilium on Azure Kubernetes Service (AKS), perform the following steps:

Default Configuration:

Datapath          IPAM          Datastore
Direct Routing    Azure IPAM    Kubernetes CRD

Tip

If you want to chain Cilium on top of the Azure CNI, refer to the guide Azure CNI.

Requirements:

  • The AKS cluster must be created with --network-plugin azure for compatibility with Cilium. The Azure network plugin will be replaced with Cilium by the installer.
  • Node pools must be properly tainted to ensure application pods are properly managed by Cilium:
    • User node pools must be tainted with node.cilium.io/agent-not-ready=true:NoSchedule to ensure application pods will only be scheduled once Cilium is ready to manage them.
    • System node pools must be tainted with CriticalAddonsOnly=true:NoSchedule, preventing application pods from being scheduled on them. This is necessary because it is not possible to assign custom node taints such as node.cilium.io/agent-not-ready=true:NoSchedule to system node pools, cf. Azure/AKS#2578.
      • The initial node pool must be replaced with a new system node pool since it is not possible to assign taints to the initial node pool at this time, cf. Azure/AKS#1402.

Limitations:

  • All VMs and VM scale sets used in a cluster must belong to the same resource group.

Install Cilium:

Install Cilium into the AKS cluster:

cilium install --azure-resource-group "${AZURE_RESOURCE_GROUP}"

To install Cilium on Amazon Elastic Kubernetes Service (EKS), perform the following steps:

Default Configuration:

Datapath                IPAM       Datastore
Direct Routing (ENI)    AWS ENI    Kubernetes CRD

For more information on AWS ENI mode, see AWS ENI.

Tip

If you want to chain Cilium on top of the AWS CNI, refer to the guide AWS VPC CNI plugin.

Requirements:

  • The EKS Managed Nodegroups must be properly tainted to ensure application pods are properly managed by Cilium:

    • managedNodeGroups must be tainted with node.cilium.io/agent-not-ready=true:NoSchedule to ensure application pods will only be scheduled once Cilium is ready to manage them. For example, when using a ClusterConfig file to create the cluster:

      apiVersion: eksctl.io/v1alpha5
      kind: ClusterConfig
      ...
      managedNodeGroups:
      - name: ng-1
        ...
        # taint nodes so that application pods are
        # not scheduled until Cilium is deployed.
        taints:
         - key: "node.cilium.io/agent-not-ready"
           value: "true"
           effect: "NoSchedule"
      

Limitations:

  • The AWS ENI integration of Cilium is currently only enabled for IPv4. If you want to use IPv6, use a datapath/IPAM mode other than ENI.

Install Cilium:

Install Cilium into the EKS cluster:

cilium install
cilium status --wait

To install Cilium on OpenShift, perform the following steps:

Default Configuration:

Datapath         IPAM            Datastore
Encapsulation    Cluster Pool    Kubernetes CRD

Requirements:

  • OpenShift 4.x

Install Cilium:

Cilium is a Certified OpenShift CNI Plugin and is best installed when an OpenShift cluster is created using the OpenShift installer. Please refer to Installation on OpenShift OKD for more information.

To install Cilium on Rancher Kubernetes Engine (RKE), perform the following steps:

Note

If you are using RKE2, Cilium is directly integrated; see Using Cilium in the RKE2 documentation. You can use either that integration or the instructions below.

Default Configuration:

Datapath         IPAM            Datastore
Encapsulation    Cluster Pool    Kubernetes CRD

Requirements:

  • Follow the RKE Installation Guide with the below change:

    From:

    network:
      options:
        flannel_backend_type: "vxlan"
      plugin: "canal"
    

    To:

    network:
      plugin: none
    

Install Cilium:

Install Cilium into your newly created RKE cluster:

cilium install

To install Cilium on k3s, perform the following steps:

Default Configuration:

Datapath         IPAM            Datastore
Encapsulation    Cluster Pool    Kubernetes CRD

Requirements:

  • Install your k3s cluster as you normally would, but pass in --flannel-backend=none so you can install Cilium on top:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none' sh -

Install Cilium:

Install Cilium into your newly created Kubernetes cluster:

KUBECONFIG=/etc/rancher/k3s/k3s.yaml cilium install
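
If you prefer not to prefix every command, you can export the kubeconfig path (taken from the command above) for the rest of your shell session:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml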

If the installation fails for some reason, run cilium status to retrieve the overall status of the Cilium deployment and inspect the logs of any pods that fail to deploy.
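
For example, assuming the default kube-system namespace and the standard k8s-app=cilium label on the agent pods, the failing pods can be inspected as follows:

# List the Cilium agent pods (adjust namespace/label if your install differs).
kubectl -n kube-system get pods -l k8s-app=cilium
# Inspect a specific pod that is not Running (replace <pod-name> accordingly).
kubectl -n kube-system describe pod <pod-name>
kubectl -n kube-system logs <pod-name>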

Tip

You may see cilium install print something like this:

♻️  Restarted unmanaged pod kube-system/event-exporter-gke-564fb97f9-rv8hg
♻️  Restarted unmanaged pod kube-system/kube-dns-6465f78586-hlcrz
♻️  Restarted unmanaged pod kube-system/kube-dns-autoscaler-7f89fb6b79-fsmsg
♻️  Restarted unmanaged pod kube-system/l7-default-backend-7fd66b8b88-qqhh5
♻️  Restarted unmanaged pod kube-system/metrics-server-v0.3.6-7b5cdbcbb8-kjl65
♻️  Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt

This indicates that your cluster was already running some pods before Cilium was deployed and the installer has automatically restarted them to ensure all pods get networking provided by Cilium.

Validate the Installation

To validate that Cilium has been properly installed, you can run:

$ cilium status --wait
   /¯¯\
/¯¯\__/¯¯\    Cilium:         OK
\__/¯¯\__/    Operator:       OK
/¯¯\__/¯¯\    Hubble:         disabled
\__/¯¯\__/    ClusterMesh:    disabled
   \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2

Run the following command to validate that your cluster has proper network connectivity:

$ cilium connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
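
The connectivity test leaves its test workloads behind in a dedicated namespace (typically cilium-test; check the test output for the exact name). Assuming that name, you can remove them once you are done:

kubectl delete namespace cilium-test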