Azure CNI

This guide explains how to set up Cilium in combination with Azure CNI. In this hybrid mode, the Azure CNI plugin is responsible for setting up the virtual network devices as well as for address allocation (IPAM). After the initial networking has been set up, the Cilium CNI plugin is called to attach BPF programs to the network devices created by Azure CNI in order to enforce network policies, perform load-balancing, and provide encryption.

Note

If you are looking to install Cilium on Azure AKS, see the guide Installation on Azure AKS, which also covers cluster setup.

Limitations

  • Due to the architecture of the Azure CNI plugin, Layer 7 network policies and visibility, including FQDN policies, are currently not supported in this chaining mode. The upcoming native Azure IPAM mode will support all L7 network policies.

Create an AKS + Cilium CNI configuration

Create a chaining.yaml file based on the following template to specify the desired CNI chaining configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: cilium
data:
  cni-config: |-
    {
      "cniVersion": "0.3.0",
      "name": "azure",
      "plugins": [
        {
          "type": "azure-vnet",
          "bridge": "azure0",
          "ipam": {
            "type": "azure-vnet-ipam"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true},
          "snat": true
        },
        {
          "name": "cilium",
          "type": "cilium-cni"
        }
      ]
    }

Create the cilium namespace:

kubectl create namespace cilium

Deploy the ConfigMap:

kubectl apply -f chaining.yaml
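
You can verify that the ConfigMap has been created in the cilium namespace:

kubectl -n cilium get configmap cni-configuration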

Prepare & Deploy Cilium

Download the Cilium release tarball and change to the kubernetes install directory:

curl -LO https://github.com/cilium/cilium/archive/v1.6.tar.gz
tar xzvf v1.6.tar.gz
cd cilium-1.6/install/kubernetes

Install Helm so that you can generate the deployment artifacts from the Helm templates.
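
If the Helm client is not installed yet, one way to obtain it is to download a release binary directly (a sketch assuming a Linux amd64 host and Helm v2.14.3; pick whichever version fits your environment):

curl -LO https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar xzvf helm-v2.14.3-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm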

Generate the required YAML file and deploy it:

helm template cilium \
  --namespace cilium \
  --set global.cni.chainingMode=generic-veth \
  --set global.cni.customConf=true \
  --set global.nodeinit.enabled=true \
  --set global.cni.configMap=cni-configuration \
  --set global.tunnel=disabled \
  --set global.masquerade=false \
  > cilium.yaml
kubectl create -f cilium.yaml

This will create both the main cilium daemonset and the cilium-node-init daemonset, which handles tasks such as mounting the BPF filesystem and updating the existing Azure CNI plugin to run in ‘transparent’ mode.
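
You can confirm that both daemonsets were created (a quick sanity check; the DESIRED and READY counts will match your node count):

kubectl -n cilium get daemonsets cilium cilium-node-init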

Restart existing pods

The new CNI chaining configuration will not apply to any pod that is already running in the cluster. Existing pods will be reachable, and Cilium will load-balance to them, but policy enforcement will not apply to them and load-balancing is not performed for traffic originating from them. You must restart these pods in order to invoke the chaining configuration on them.
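
For pods managed by a controller, one way to trigger such a restart is a rolling restart (assuming kubectl 1.15 or newer; on older versions, delete the pods and let the controller recreate them):

# "my-app" is a placeholder; substitute your own workload
kubectl rollout restart deployment my-app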

If you are unsure whether a pod is managed by Cilium or not, run kubectl get cep in the respective namespace and check whether the pod is listed.
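
For example, to check the pods in the default namespace:

kubectl -n default get cep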

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

kubectl -n cilium get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
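
You can also inspect the agent's own view of its health by running cilium status inside one of the cilium pods, substituting the pod name from the output above:

kubectl -n cilium exec cilium-s8w5m -- cilium status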

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/connectivity-check/connectivity-check.yaml

It will deploy a simple probe and an echo server, each running with multiple replicas. The probe will only report readiness while it can successfully reach the echo server:

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
echo-585798dd9d-ck5xc    1/1     Running   0          75s
echo-585798dd9d-jkdjx    1/1     Running   0          75s
echo-585798dd9d-mk5q8    1/1     Running   0          75s
echo-585798dd9d-tn9t4    1/1     Running   0          75s
echo-585798dd9d-xmr4p    1/1     Running   0          75s
probe-866bb6f696-9lhfw   1/1     Running   0          75s
probe-866bb6f696-br4dr   1/1     Running   0          75s
probe-866bb6f696-gv5kf   1/1     Running   0          75s
probe-866bb6f696-qg2b7   1/1     Running   0          75s
probe-866bb6f696-tb926   1/1     Running   0          75s
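
Once connectivity has been verified, the test workloads can be removed again:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/connectivity-check/connectivity-check.yaml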