Installation on Azure AKS

This guide covers installing Cilium in an Azure AKS environment. It works when setting up AKS in either Basic or Advanced networking mode.

This is achieved using Cilium in CNI chaining mode, with the Azure CNI plugin as the base CNI plugin and Cilium chained on top to provide L3-L7 observability, network policy enforcement, a Kubernetes services implementation, as well as other advanced features like transparent encryption and clustermesh.

Prerequisites

Ensure that you have the Azure CLI installed.

To verify, confirm that the following command displays the set of available Kubernetes versions.

az aks get-versions -l westus -o table
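
If that command fails because the CLI is missing or you are not logged in, the following standard Azure CLI commands can help you check your setup:

az --version      # confirm the CLI is installed and show its version
az login          # authenticate against Azure if needed
az account show   # confirm which subscription is currently active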

Create an AKS Cluster

You can use any method to create and deploy an AKS cluster, with one exception: do not specify the Network Policy option. The cluster will still work if you do, but unwanted iptables rules will be installed on all of your nodes.

If you want to use the CLI to create a dedicated set of Azure resources (resource groups, networks, etc.) specifically for this tutorial, the following commands (borrowed from the AKS documentation) are sufficient; run them as a script or manually in the same terminal.

It can take 10 or more minutes for the final command to complete and report that the cluster is ready.

Note

Do NOT specify the '--network-policy' flag when creating the cluster, as this will cause the Azure CNI plugin to push down unwanted iptables rules:

export RESOURCE_GROUP_NAME=group1
export CLUSTER_NAME=aks-test1
export LOCATION=westus

az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --node-count 2 \
    --generate-ssh-keys \
    --network-plugin azure
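
The az aks create command normally blocks until the cluster is ready. If you want to confirm the provisioning state yourself before continuing, a query along these lines should report Succeeded:

# Check the provisioning state of the new cluster (expects "Succeeded")
az aks show \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --query provisioningState -o tsv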

Configure kubectl to Point to Newly Created Cluster

Run the following commands to configure kubectl to connect to this AKS cluster:

az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME

To verify, you should see aks in the names of the nodes when you run:

kubectl get nodes
NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-12032939-0   Ready    agent   8m26s   v1.13.10

Create an AKS + Cilium CNI configuration

Create a chaining.yaml file based on the following template to specify the desired CNI chaining configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: cilium
data:
  cni-config: |-
    {
      "cniVersion": "0.3.0",
      "name": "azure",
      "plugins": [
        {
          "type": "azure-vnet",
          "mode": "transparent",
          "bridge": "azure0",
          "ipam": {
             "type": "azure-vnet-ipam"
           }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true},
          "snat": true
        },
        {
           "name": "cilium",
           "type": "cilium-cni"
        }
      ]
    }

Create the cilium namespace:

kubectl create namespace cilium

Deploy the ConfigMap:

kubectl apply -f chaining.yaml
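
If you want to double-check that the chaining configuration landed in the right place, you can read the ConfigMap back from the cilium namespace:

# Print the CNI chaining configuration that Cilium will consume
kubectl -n cilium get configmap cni-configuration -o yaml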

Deploy Cilium

Note

First, make sure you have Helm 3 installed.

If you have (or plan to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
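
If you are unsure which Helm version is on your path, a quick check will show it; the output should report a v3.x client for the commands below:

helm version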

Set up the Helm repository:

helm repo add cilium https://helm.cilium.io/
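
After adding the repository, it is usually worth refreshing the local chart cache so the requested chart version can be found:

helm repo update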

Deploy Cilium release via Helm:

helm install cilium cilium/cilium --version 1.7.6 \
  --namespace cilium \
  --set global.cni.chainingMode=generic-veth \
  --set global.cni.customConf=true \
  --set global.nodeinit.enabled=true \
  --set nodeinit.azure=true \
  --set global.cni.configMap=cni-configuration \
  --set global.tunnel=disabled \
  --set global.masquerade=false

This will create both the main cilium daemonset and the cilium-node-init daemonset, which handles tasks like mounting the BPF filesystem and updating the existing Azure CNI plugin to run in 'transparent' mode.
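
To confirm that both daemonsets were created, you can list them in the cilium namespace, for example:

# Both the cilium and cilium-node-init daemonsets should be listed
kubectl -n cilium get daemonsets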

Validate the Installation

You can monitor as Cilium and all required components are being installed. Because this guide installs Cilium into its own cilium namespace, watch that namespace for the Cilium pods and kube-system for CoreDNS:

kubectl -n cilium get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s

kubectl -n kube-system get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
coredns-86c58d9df4-4l6b2                1/1     Running   0          13m
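
Once the cilium pod is Running, you can optionally ask the agent for its own health summary with the cilium status command, substituting the name of your cilium pod from the output above:

# Run "cilium status" inside the agent pod; replace the pod name with your own
kubectl -n cilium exec cilium-s8w5m -- cilium status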

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.7/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments that use various connectivity paths to connect to each other. The connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-5995597649-f5d5g                                 1/1     Running   0          4m51s
echo-b-54c9bb5f5c-p6lxf                                 1/1     Running   0          4m50s
echo-b-host-67446447f7-chvsp                            1/1     Running   0          4m50s
host-to-b-multi-node-clusterip-78f9869d75-l8cf8         1/1     Running   0          4m50s
host-to-b-multi-node-headless-798949bd5f-vvfff          1/1     Running   0          4m50s
pod-to-a-59b5fcb7f6-gq4hd                               1/1     Running   0          4m50s
pod-to-a-allowed-cnp-55f885bf8b-5lxzz                   1/1     Running   0          4m50s
pod-to-a-external-1111-7ff666fd8-v5kqb                  1/1     Running   0          4m48s
pod-to-a-l3-denied-cnp-64c6c75c5d-xmqhw                 1/1     Running   0          4m50s
pod-to-b-intra-node-845f955cdc-5nfrt                    1/1     Running   0          4m49s
pod-to-b-multi-node-clusterip-666594b445-bsn4j          1/1     Running   0          4m49s
pod-to-b-multi-node-headless-746f84dff5-prk4w           1/1     Running   0          4m49s
pod-to-b-multi-node-nodeport-7cb9c6cb8b-ksm4h           1/1     Running   0          4m49s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-tg9dh   1/1     Running   0          4m48s
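
Once all pods report Ready, the connectivity test has passed. The test deployments can be removed again by deleting the same manifest:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.7/examples/kubernetes/connectivity-check/connectivity-check.yaml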

Install Hubble

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. Visit the Hubble GitHub page for more information.

Generate the deployment files using Helm and deploy them:

git clone https://github.com/cilium/hubble.git --branch v0.5
cd hubble/install/kubernetes

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    --set ui.enabled=true \
> hubble.yaml

Deploy Hubble:

kubectl apply -f hubble.yaml
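
To verify, the Hubble pods should show up in the kube-system namespace. Assuming the chart defaults used above, their names start with hubble, so a simple filter is enough to watch them come up:

# List the Hubble pods deployed into kube-system
kubectl -n kube-system get pods | grep hubble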