Installation on Azure AKS

This guide covers installing Cilium into an Azure AKS environment. It applies to AKS clusters set up in either Basic or Advanced networking mode.

This is achieved by running Cilium in CNI chaining mode: the Azure CNI plugin serves as the base CNI plugin, and Cilium chains on top of it to provide L3-L7 observability, network policy enforcement, and the Kubernetes services implementation, as well as advanced features like transparent encryption and clustermesh.

Prerequisites

Ensure that you have the Azure CLI installed.

To verify, confirm that the following command displays the set of available Kubernetes versions.

az aks get-versions -l westus -o table
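
If the command fails because the CLI is not yet authenticated against your Azure subscription, log in first:

az login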

Create an AKS Cluster

You can use any method to create and deploy an AKS cluster, with one exception: do not specify the Network Policy option. Doing so will still work, but will result in unwanted iptables rules being installed on all of your nodes.

If you want to use the CLI to create a dedicated set of Azure resources (resource groups, networks, etc.) specifically for this tutorial, the following commands (borrowed from the AKS documentation), run as a script or manually in the same terminal, are sufficient.

It can take 10 or more minutes for the final command to complete, indicating that the cluster is ready.

Note

Do NOT specify the '--network-policy' flag when creating the cluster, as this will cause the Azure CNI plugin to push down unwanted iptables rules.

export RESOURCE_GROUP_NAME=aks-test
export CLUSTER_NAME=aks-test
export LOCATION=westeurope

az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --location $LOCATION \
    --node-count 2 \
    --network-plugin azure

Note

When setting up AKS, it is important to use the flag --network-plugin azure to ensure that CNI mode is enabled.
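
If you want to confirm that provisioning completed before proceeding, you can query the cluster state. The provisioningState field is assumed here from the standard Azure resource schema:

az aks show \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --query provisioningState -o tsv

The command should print Succeeded once the cluster is ready.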

Configure kubectl to Point to Newly Created Cluster

Run the following commands to configure kubectl to connect to this AKS cluster:

az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME

To verify, you should see aks in the node names when you run:

$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-22314427-vmss000000   Ready    agent   91s   v1.17.11
aks-nodepool1-22314427-vmss000001   Ready    agent   91s   v1.17.11

Create an AKS + Cilium CNI configuration

Create a chaining.yaml file based on the following template to specify the desired CNI chaining configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: cilium
data:
  cni-config: |-
    {
      "cniVersion": "0.3.0",
      "name": "azure",
      "plugins": [
        {
          "type": "azure-vnet",
          "mode": "transparent",
          "bridge": "azure0",
          "ipam": {
             "type": "azure-vnet-ipam"
           }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true},
          "snat": true
        },
        {
           "name": "cilium",
           "type": "cilium-cni"
        }
      ]
    }

Create the cilium namespace:

kubectl create namespace cilium

Deploy the ConfigMap:

kubectl apply -f chaining.yaml
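
To verify that the ConfigMap was created in the expected namespace before deploying Cilium:

kubectl -n cilium get configmap cni-configuration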

Deploy Cilium

Note

First, make sure you have Helm 3 installed. Helm 2 is no longer supported.

Setup Helm repository:

helm repo add cilium https://helm.cilium.io/
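
If the repository was already added previously, refresh the local chart index so that the requested version is available:

helm repo update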

Deploy Cilium release via Helm:

helm install cilium cilium/cilium --version 1.9.0 \
  --namespace cilium \
  --set cni.chainingMode=generic-veth \
  --set cni.customConf=true \
  --set nodeinit.enabled=true \
  --set nodeinit.expectAzureVnet=true \
  --set cni.configMap=cni-configuration \
  --set tunnel=disabled \
  --set masquerade=false

This creates both the main cilium daemonset and the cilium-node-init daemonset, which handles tasks like mounting the eBPF filesystem and updating the existing Azure CNI plugin to run in 'transparent' mode.
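
As a quick sanity check, you can confirm that both daemonsets exist (names as created by the Helm chart above):

kubectl -n cilium get daemonsets cilium cilium-node-init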

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

$ kubectl -n cilium get pods --watch
cilium-2twr9                      0/1     Init:0/2            0          17s
cilium-fkhjv                      0/1     Init:0/2            0          17s
cilium-node-init-bhr5l            1/1     Running             0          17s
cilium-node-init-l77v9            1/1     Running             0          17s
cilium-operator-f8bd5cd96-qdspd   0/1     ContainerCreating   0          17s
cilium-operator-f8bd5cd96-tvdn6   0/1     ContainerCreating   0          17s

It may take a couple of minutes for all components to come up:

cilium-operator-f8bd5cd96-tvdn6   1/1     Running             0          25s
cilium-operator-f8bd5cd96-qdspd   1/1     Running             0          26s
cilium-fkhjv                      1/1     Running             0          60s
cilium-2twr9                      1/1     Running             0          61s
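
Optionally, you can ask one of the agents for its own health verdict. This executes the cilium CLI inside an agent pod; the --brief flag prints a one-line summary:

kubectl -n cilium exec ds/cilium -- cilium status --brief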

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

kubectl create ns cilium-test

Deploy the check with:

kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/1.9.0/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments which use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   0/1     Running   0          66s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
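
Once you are finished testing, the whole connectivity check can be removed by deleting its namespace:

kubectl delete ns cilium-test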

Specify Environment Variables

Specify the namespace in which Cilium is installed as CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.

export CILIUM_NAMESPACE=cilium

Enable Hubble for Cluster-Wide Visibility

Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI with the following Helm upgrade command on your existing installation (Cilium agent pods will be restarted in the process).

helm upgrade cilium cilium/cilium --version 1.9.0 \
   --namespace $CILIUM_NAMESPACE \
   --reuse-values \
   --set hubble.listenAddress=":4244" \
   --set hubble.relay.enabled=true \
   --set hubble.ui.enabled=true
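
To wait until the new components are rolled out (deployment names assumed from the chart defaults), you can run:

kubectl -n $CILIUM_NAMESPACE rollout status deploy/hubble-relay
kubectl -n $CILIUM_NAMESPACE rollout status deploy/hubble-ui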

Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows you to open the UI locally in a browser:

kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80

And then open http://localhost:12000/ to access the UI.

The Hubble UI is not the only way to access Hubble data. A command-line tool, the Hubble CLI, is also available. It can be installed by following the instructions below for your platform:

Download the latest hubble release (Linux):

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
tar zxf hubble-linux-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

sudo mv hubble /usr/local/bin

Download the latest hubble release (macOS):

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
tar zxf hubble-darwin-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

sudo mv hubble /usr/local/bin

Download the latest hubble release (Windows):

curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
set /p HUBBLE_VERSION=<stable.txt
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
certutil -hashfile hubble-windows-amd64.tar.gz SHA256
type hubble-windows-amd64.tar.gz.sha256sum
:: verify that the checksum from the two commands above match
tar zxf hubble-windows-amd64.tar.gz

and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
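
Regardless of platform, you can confirm that the hubble CLI is reachable from your shell by running:

hubble version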

As with the UI, use port forwarding for the hubble-relay service to make it available locally:

kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80

In a separate terminal window, run the hubble status command specifying the Hubble Relay address:

$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4

If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:

hubble --server localhost:4245 observe
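
The observe command accepts filters. For example, assuming the connectivity check from earlier is still deployed, the following follows only flows from its namespace:

hubble --server localhost:4245 observe --namespace cilium-test --follow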

If you encounter any problem at this point, you may seek help on Slack.

Tip

Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify options specific to a particular environment every time a command is run. Run hubble help config for more information.
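
For example, assuming the CLI maps its configuration keys to HUBBLE_-prefixed environment variables (verify the exact names with hubble help config), the server address used above could be persisted as:

export HUBBLE_SERVER=localhost:4245
hubble status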

For more information about Hubble and its components, see the Observability section.

Delete the AKS Cluster

Finally, delete the test cluster by running:

az aks delete \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME
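
If you created a dedicated resource group for this tutorial, deleting it removes all remaining associated resources:

az group delete --name $RESOURCE_GROUP_NAME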