This guide explains how to set up Cilium in combination with aws-cni. In this hybrid mode, the aws-cni plugin is responsible for setting up the virtual network devices as well as for address allocation (IPAM) via ENI. After the initial networking is set up, the Cilium CNI plugin is called to attach BPF programs to the network devices configured by aws-cni in order to enforce network policies, perform load-balancing, and provide encryption.
Setup Cluster on AWS¶
Follow the instructions in the Installation on AWS EKS guide to set up an EKS cluster or use any other method of your preference to set up a Kubernetes cluster.
Ensure that the aws-vpc-cni-k8s plugin is installed. If you have set up an EKS cluster, this is automatically done.
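As a quick sanity check, you can look for the plugin's DaemonSet before proceeding. On EKS the aws-vpc-cni-k8s plugin runs as the "aws-node" DaemonSet; that name is an assumption for clusters set up by other means, so adjust it if yours differs:

```shell
# Confirm the aws-vpc-cni-k8s plugin is deployed. On EKS it runs as the
# "aws-node" DaemonSet in kube-system; the name may differ in other setups.
kubectl -n kube-system get daemonset aws-node
```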
Download the Cilium release tarball and change to the kubernetes install directory:
curl -LO https://github.com/cilium/cilium/archive/v1.6.tar.gz
tar xzvf v1.6.tar.gz
cd cilium-1.6/install/kubernetes
Install Helm, which is used to generate the deployment artifacts from the Helm templates.
Generate the required YAML files and deploy them:
helm template cilium \
  --namespace kube-system \
  --set global.cni.chainingMode=aws-cni \
  --set global.masquerade=false \
  --set global.tunnel=disabled \
  --set global.nodeinit.enabled=true \
  > cilium.yaml
kubectl apply -f cilium.yaml
This enables chaining with the aws-cni plugin. It also disables tunneling and masquerading: neither is required, as ENI IP addresses can be directly routed in your VPC.
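For illustration, chaining results in a CNI configuration list on each node in which aws-cni runs first and cilium-cni is appended as the last plugin. The exact file written on your nodes may differ, so treat the following as a sketch of the general shape rather than the literal generated configuration:

```json
{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni"
    },
    {
      "name": "cilium",
      "type": "cilium-cni"
    }
  ]
}
```

aws-cni sets up the veth pair and assigns the ENI address; cilium-cni then attaches its BPF programs to the device aws-cni created.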
Restart existing pods¶
The new CNI chaining configuration will not apply to any pod that is already running in the cluster. Existing pods will be reachable, and Cilium will load-balance traffic to them, but policy enforcement will not apply to them and load-balancing is not performed for traffic originating from them. You must restart these pods in order to invoke the chaining configuration on them.
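As a sketch, pods owned by a Deployment can be recycled with kubectl rollout restart (available in kubectl 1.15 and later); the namespace and Deployment name below are placeholders:

```shell
# Re-create the pods of one Deployment so they come up through the new
# chained CNI configuration. "default" and "my-deployment" are placeholders.
kubectl -n default rollout restart deployment my-deployment
```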
If you are unsure whether a pod is managed by Cilium, run
kubectl get cep
in the respective namespace and see if the pod is listed.
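To check a whole namespace at once, you can diff the pod list against the CiliumEndpoint list. The helper below is a sketch; it compares two files of names, which you would populate from kubectl as shown in the comments:

```shell
# pods_without_cep: print names present in the pod list ($1) but missing
# from the CiliumEndpoint list ($2), i.e. pods not yet managed by Cilium.
pods_without_cep() {
  sort "$1" > /tmp/.pods.sorted
  sort "$2" > /tmp/.ceps.sorted
  # comm -23 keeps lines unique to the first (sorted) file
  comm -23 /tmp/.pods.sorted /tmp/.ceps.sorted
}

# Usage against a live cluster (namespace "default" as an example):
#   kubectl -n default get pods -o name | sed 's|^pod/||' > pods.txt
#   kubectl -n default get cep  -o name | sed 's|.*/||'   > ceps.txt
#   pods_without_cep pods.txt ceps.txt
```

Any pod printed by the helper still needs a restart before Cilium manages it.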
Validate the Installation¶
You can monitor the installation progress as Cilium and all required components come up:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods.
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a simple probe and echo server running with multiple replicas. The probe will only report readiness while it can successfully reach the echo server:
kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
echo-585798dd9d-ck5xc   1/1     Running   0          75s
echo-585798dd9d-jkdjx   1/1     Running   0          75s
echo-585798dd9d-mk5q8   1/1     Running   0          75s
echo-585798dd9d-tn9t4   1/1     Running   0          75s
echo-585798dd9d-xmr4p   1/1     Running   0          75s
probe-866bb6f696-9lhfw  1/1     Running   0          75s
probe-866bb6f696-br4dr  1/1     Running   0          75s
probe-866bb6f696-gv5kf  1/1     Running   0          75s
probe-866bb6f696-qg2b7  1/1     Running   0          75s
probe-866bb6f696-tb926  1/1     Running   0          75s
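Once all probe pods report ready, the test workloads can be removed again using the same manifest:

```shell
# Remove the connectivity-check deployments after validation.
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/connectivity-check/connectivity-check.yaml
```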