This guide explains how to set up Cilium in combination with aws-cni. In this hybrid mode, the aws-cni plugin is responsible for setting up the virtual network devices as well as for address allocation (IPAM) via ENI. After the initial networking is set up, the Cilium CNI plugin is called to attach eBPF programs to the network devices configured by aws-cni in order to enforce network policies, perform load-balancing, and provide encryption.
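For reference, after the installation below completes you can inspect the CNI configuration directory on a node to see the chained plugin list. Exact file names vary by version (the AWS CNI installs 10-aws.conflist), but the plugins array should show cilium-cni chained after aws-cni:
# on a node (e.g. via SSH): list and print the CNI configuration
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist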
Note that some advanced Cilium features may be limited when chaining with other CNI plugins.
Due to a bug in certain versions of the AWS CNI, please ensure that you are running AWS CNI 1.7.9 or newer to guarantee compatibility with Cilium.
Setup Cluster on AWS
Follow the instructions in the Installation on AWS EKS guide to set up an EKS cluster or use any other method of your preference to set up a Kubernetes cluster.
Ensure that the aws-vpc-cni-k8s plugin is installed. If you have set up an EKS cluster, this is done automatically.
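To verify that the AWS CNI is present, check for its DaemonSet, which is named aws-node on EKS:
kubectl -n kube-system get daemonset aws-node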
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy the Cilium release via Helm:
helm install cilium cilium/cilium --version 1.9.10 \
  --namespace kube-system \
  --set cni.chainingMode=aws-cni \
  --set masquerade=false \
  --set tunnel=disabled \
  --set nodeinit.enabled=true
This enables chaining with the aws-cni plugin. It also disables tunneling: tunneling is not required, as ENI IP addresses can be routed directly in your VPC. Masquerading is disabled for the same reason.
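To confirm that these settings took effect, you can inspect the generated Cilium configuration. The ConfigMap key names below follow the Cilium 1.9.x Helm chart:
kubectl -n kube-system get configmap cilium-config -o yaml | grep -E 'cni-chaining-mode|tunnel|masquerade'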
Restart existing pods
The new CNI chaining configuration will not apply to pods that are already running in the cluster. Existing pods will be reachable, and Cilium will load-balance traffic to them, but policy enforcement will not apply to them and load-balancing is not performed for traffic originating from them. You must restart these pods in order to apply the chaining configuration to them.
If you are unsure whether a pod is managed by Cilium, run
kubectl get cep
in the respective namespace and check whether the pod is listed.
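One way to restart all pods owned by Deployments in a given namespace is a rolling restart, assuming your workloads tolerate being recreated (replace <namespace> with the namespace in question):
kubectl rollout restart -n <namespace> deployment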
Validate the Installation
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
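Once the cilium pod is Running, you can additionally query the agent's own health report using the cilium status command built into the agent container:
kubectl -n kube-system exec ds/cilium -- cilium status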
Deploy the connectivity test
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to reach each other, covering paths with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate the success or failure of the test:
$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s
If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected, since these pods need at least 2 nodes to be scheduled successfully.
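Once you are done testing, the connectivity check can be removed by deleting its namespace, which removes all of its deployments along with it:
kubectl delete ns cilium-test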
Specify Environment Variables
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.
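Since the instructions above installed Cilium into kube-system, this is typically:
export CILIUM_NAMESPACE=kube-system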
Enable Hubble for Cluster-Wide Visibility
Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the Hubble UI on top of your existing installation as follows:
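The upgrade below re-uses your existing Helm values and enables the Hubble components; the value names follow the Cilium 1.9.x Helm chart:
helm upgrade cilium cilium/cilium --version 1.9.10 \
  --namespace $CILIUM_NAMESPACE \
  --reuse-values \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true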
Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows opening the UI locally in a browser:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
And then open http://localhost:12000/ to access the UI.
The Hubble UI is not the only way to access Hubble data. A command line tool, the Hubble CLI, is also available and can be installed as follows:
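One way to install the CLI on a Linux amd64 machine is to download a release binary from the Hubble releases page on GitHub; the artifact names below follow the Hubble project's release layout and may change between versions:
# determine the latest stable release and download the matching binary
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin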
Similarly to the UI, use port forwarding for the hubble-relay service to make it available locally:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
In a separate terminal window, run the hubble status command, specifying the Hubble Relay address:
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4
If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:
hubble --server localhost:4245 observe
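The observe command also accepts filters, for example to follow only flows involving the connectivity-check namespace created earlier:
hubble --server localhost:4245 observe --namespace cilium-test --follow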
If you encounter any problem at this point, you may seek help on Slack.
Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify environment-specific options every time a command is run. Run hubble config for more information.
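For example, assuming the flag-to-environment-variable mapping with a HUBBLE_ prefix that the CLI's configuration help describes, the Relay address can be set once per shell session instead of being passed on every invocation:
# assumption: HUBBLE_SERVER is honored per the hubble config help
export HUBBLE_SERVER=localhost:4245
hubble status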
For more information about Hubble and its components, see the Observability section.