This guide explains how to set up Cilium in combination with aws-cni. In this hybrid mode, the aws-cni plugin is responsible for setting up the virtual network devices as well as address allocation (IPAM) via ENI. After the initial networking is set up, the Cilium CNI plugin is called to attach BPF programs to the network devices set up by aws-cni in order to enforce network policies, perform load-balancing, and provide encryption.
Setup Cluster on AWS¶
Follow the instructions in the Installation on AWS EKS guide to set up an EKS cluster or use any other method of your preference to set up a Kubernetes cluster.
Ensure that the aws-vpc-cni-k8s plugin is installed. If you have set up an EKS cluster, this is done automatically.
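To verify, you can check for the plugin's daemonset (on EKS it is typically deployed as aws-node in kube-system; adjust the name if your setup differs):
kubectl -n kube-system get daemonset aws-node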
First, make sure you have Helm 3 installed.
If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.8.1 \
  --namespace kube-system \
  --set global.cni.chainingMode=aws-cni \
  --set global.masquerade=false \
  --set global.tunnel=disabled \
  --set global.nodeinit.enabled=true
This will enable chaining with the aws-cni plugin. It will also disable tunneling, which is not required since ENI IP addresses can be directly routed in your VPC. Masquerading is disabled for the same reason.
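If you want to double-check that the chaining mode was applied, one option (assuming the chart stores its settings in the cilium-config ConfigMap, as in stock installations) is to inspect it:
kubectl -n kube-system get configmap cilium-config -o yaml | grep -i chaining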
Restart existing pods¶
The new CNI chaining configuration will not apply to any pod that is already running in the cluster. Existing pods will be reachable, and Cilium will load-balance traffic to them, but policy enforcement will not apply to them, and load-balancing is not performed for traffic originating from them. You must restart these pods in order to invoke the chaining configuration on them.
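For pods managed by a deployment, for example, a rolling restart re-creates them through the chained CNI plugins (<namespace> and <name> are placeholders):
kubectl rollout restart -n <namespace> deployment/<name>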
If you are unsure if a pod is managed by Cilium or not, run
kubectl get cep
in the respective namespace and see if the pod is listed.
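To list the endpoints managed by Cilium across all namespaces at once:
kubectl get cep --all-namespaces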
Validate the Installation¶
You can monitor the progress as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
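As an additional sanity check, you can query the agent's own health report via the cilium CLI bundled in the agent pod (using the same ds/cilium exec pattern as in the Hubble validation further below):
kubectl -n kube-system exec ds/cilium -- cilium status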
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods.
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.8.1/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate the success or failure of the test:
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-5995597649-f5d5g                                 1/1     Running   0          4m51s
echo-b-54c9bb5f5c-p6lxf                                 1/1     Running   0          4m50s
echo-b-host-67446447f7-chvsp                            1/1     Running   0          4m50s
host-to-b-multi-node-clusterip-78f9869d75-l8cf8         1/1     Running   0          4m50s
host-to-b-multi-node-headless-798949bd5f-vvfff          1/1     Running   0          4m50s
pod-to-a-59b5fcb7f6-gq4hd                               1/1     Running   0          4m50s
pod-to-a-allowed-cnp-55f885bf8b-5lxzz                   1/1     Running   0          4m50s
pod-to-a-external-1111-7ff666fd8-v5kqb                  1/1     Running   0          4m48s
pod-to-a-l3-denied-cnp-64c6c75c5d-xmqhw                 1/1     Running   0          4m50s
pod-to-b-intra-node-845f955cdc-5nfrt                    1/1     Running   0          4m49s
pod-to-b-multi-node-clusterip-666594b445-bsn4j          1/1     Running   0          4m49s
pod-to-b-multi-node-headless-746f84dff5-prk4w           1/1     Running   0          4m49s
pod-to-b-multi-node-nodeport-7cb9c6cb8b-ksm4h           1/1     Running   0          4m49s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-tg9dh   1/1     Running   0          4m48s
If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
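Once you are done testing, the connectivity check can be removed using the same manifest:
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.8.1/examples/kubernetes/connectivity-check/connectivity-check.yaml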
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.
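For example, with the installation above, Cilium runs in kube-system:
export CILIUM_NAMESPACE=kube-system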
Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
Hubble can be configured in local mode or in distributed mode (beta).
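As a minimal sketch of enabling Hubble in local mode (assuming the global.hubble.* values of the 1.8 Helm chart; check the chart's values for your exact version), upgrade the existing release:
helm upgrade cilium cilium/cilium --version 1.8.1 \
  --namespace $CILIUM_NAMESPACE \
  --reuse-values \
  --set global.hubble.enabled=true \
  --set global.hubble.metrics.enabled="{dns,drop,tcp,http}"
For distributed mode, the additional values global.hubble.listenAddress=":4244", global.hubble.relay.enabled=true, and global.hubble.ui.enabled=true would also be set.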
Restart the Cilium daemonset to allow the Cilium agent to pick up the ConfigMap changes:
kubectl rollout restart -n $CILIUM_NAMESPACE ds/cilium
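You can wait for the restart to finish with:
kubectl rollout status -n $CILIUM_NAMESPACE ds/cilium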
To pick one Cilium instance and validate that Hubble is properly configured to listen on a UNIX domain socket:
kubectl exec -n $CILIUM_NAMESPACE -t ds/cilium -- hubble observe
(Distributed mode only) To validate that Hubble Relay is running, install the hubble CLI. Once the hubble CLI is installed, set up a port forwarding for the hubble-relay service and run the hubble observe command in a separate terminal (port-forward runs in the foreground):
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay 4245:80
hubble observe --server localhost:4245
(For Linux / macOS) For convenience, you may set and export the HUBBLE_DEFAULT_SOCKET_PATH environment variable:
export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
This will allow you to use hubble observe commands without having to specify the server address via the --server flag.
(Distributed mode only) To validate that Hubble UI is properly configured, set up a port forwarding for the hubble-ui service:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui 12000:80
and then open http://localhost:12000/.