Setting up Cilium in AWS ENI mode¶
The AWS ENI integration is still subject to some limitations. See Limitations for details.
Create an AWS cluster¶
Set up a Kubernetes cluster on AWS. You can use any method you prefer, but for the
simplicity of this tutorial, we are going to use eksctl. For more details on how to set up an
EKS cluster using eksctl, see the section Installation on AWS EKS.
eksctl create cluster --name test-cluster --without-nodegroup
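At this point the cluster has a control plane but no worker nodes yet. If you want to confirm that eksctl wrote a working kubeconfig and that the API server is reachable, a quick check could be:
kubectl cluster-info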
Disable VPC CNI (aws-node DaemonSet) (EKS only)¶
If you are running an EKS cluster, you should delete the aws-node DaemonSet.
Cilium will manage ENIs instead of the VPC CNI, so the aws-node DaemonSet
has to be deleted to prevent conflicting behavior. Once the
aws-node DaemonSet is deleted, EKS will not try to restore it.
kubectl -n kube-system delete daemonset aws-node
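If you want to double-check that the DaemonSet is gone before installing Cilium, the following should report that the resource is not found:
kubectl -n kube-system get daemonset aws-node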
Deploy Cilium¶
First, make sure you have Helm 3 installed.
If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
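If the repository had already been added earlier, refreshing the local chart index makes sure the requested chart version is available:
helm repo update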
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.8.10 \
  --namespace kube-system \
  --set global.eni=true \
  --set config.ipam=eni \
  --set global.egressMasqueradeInterfaces=eth0 \
  --set global.tunnel=disabled \
  --set global.nodeinit.enabled=true
The above options assume that masquerading is desired and that the VM
is connected to the VPC using eth0. All traffic that does not stay
within the VPC will be routed via eth0 and masqueraded.
If you want to avoid masquerading, set global.masquerade=false. You must
ensure that the security groups associated with the ENIs (eth2, …) allow
egress traffic to outside of the VPC. By default, the security groups for
pod ENIs are derived from the primary ENI (eth0).
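If you want to verify that the ENI-related options actually made it into the agent configuration, one way is to inspect the ConfigMap rendered by the chart (assuming the usual cilium-config name in kube-system):
kubectl -n kube-system get configmap cilium-config -o yaml | grep -iE 'eni|ipam|masquerade'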
Create a node group¶
eksctl create nodegroup --cluster test-cluster --nodes 2
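Once the node group has been created, the new worker nodes should register with the cluster. A quick sanity check:
kubectl get nodes -o wide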
Validate the Installation¶
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
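You can also ask the agent itself for a health summary by running cilium status inside one of the agent pods, for example:
kubectl -n kube-system exec ds/cilium -- cilium status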
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test:
$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-6788c799fd-42qxx                                  1/1     Running   0          69s
echo-b-59757679d4-pjtdl                                  1/1     Running   0          69s
echo-b-host-f86bd784d-wnh4v                              1/1     Running   0          68s
host-to-b-multi-node-clusterip-585db65b4d-x74nz          1/1     Running   0          68s
host-to-b-multi-node-headless-77c64bc7d8-kgf8p           1/1     Running   0          67s
pod-to-a-allowed-cnp-87b5895c8-bfw4x                     1/1     Running   0          68s
pod-to-a-b76ddb6b4-2v4kb                                 1/1     Running   0          68s
pod-to-a-denied-cnp-677d9f567b-kkjp4                     1/1     Running   0          68s
pod-to-b-intra-node-nodeport-8484fb6d89-bwj8q            1/1     Running   0          68s
pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk            1/1     Running   0          68s
pod-to-b-multi-node-headless-5fd98b9648-5bjj8            1/1     Running   0          68s
pod-to-b-multi-node-nodeport-74bd8d7bd5-kmfmm            1/1     Running   0          68s
pod-to-external-1111-7489c7c46d-jhtkr                    1/1     Running   0          68s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75    1/1     Running   0          68s
If you deploy the connectivity check to a single node cluster, pods that check multi-node
functionalities will remain in the
Pending state. This is expected since these pods
need at least 2 nodes to be scheduled successfully.
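Once you are done, the test workloads can be removed by deleting the namespace created for them:
kubectl delete ns cilium-test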
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE
environment variable. Subsequent commands reference this environment variable.
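For example, if Cilium was installed into kube-system as in the Helm command above, you could set:
export CILIUM_NAMESPACE=kube-system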
Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
Hubble can be configured to be in local mode or distributed mode (beta).
Restart the Cilium daemonset to allow Cilium agent to pick up the ConfigMap changes:
kubectl rollout restart -n $CILIUM_NAMESPACE ds/cilium
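To wait until the restarted agent pods are back in a Ready state, you can watch the rollout:
kubectl rollout status -n $CILIUM_NAMESPACE ds/cilium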
Pick one Cilium instance and validate that Hubble is properly configured to listen on a UNIX domain socket:
kubectl exec -n $CILIUM_NAMESPACE -t ds/cilium -- hubble observe
(Distributed mode only) To validate that Hubble Relay is running, install the
hubble CLI. Once the hubble CLI is installed, set up a port forwarding for the
hubble-relay service and run the hubble observe command:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay 4245:80
hubble observe --server localhost:4245
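Note that kubectl port-forward blocks, so the two commands above are meant to run in separate terminals. Alternatively, the port forwarding can be sent to the background, for example:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay 4245:80 &
hubble observe --server localhost:4245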
(For Linux / MacOS) For convenience, you may set and export the
HUBBLE_DEFAULT_SOCKET_PATH environment variable:
$ export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
This will allow you to use hubble observe commands without having to specify the server address via the --server flag.
(Distributed mode only) To validate that Hubble UI is properly configured, set up a port forwarding for the hubble-ui service:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui 12000:80
and then open http://localhost:12000/.
Limitations¶
The AWS ENI integration of Cilium is currently only enabled for IPv4.
When applying L7 policies at egress, the source identity context is lost as it is currently not carried in the packet. This means that traffic will look like it is coming from outside of the cluster to the receiving pod.
HostPort type services additionally require either of the following configurations:
Make sure to disable DHCP on ENIs¶
Cilium will use both the primary and secondary IP addresses assigned to ENIs.
Use of the primary IP address is required for SNAT on the ENI, but this
can conflict with a DHCP agent running on the node and assigning the primary IP
of the ENI to the interface of the node. A common scenario where this happens is when
NetworkManager is running on the node and automatically performing
DHCP on all network interfaces of the VM. Be sure to disable DHCP on any ENIs
that get attached to the node or disable NetworkManager entirely.
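As a sketch of one way to do this, you could tell NetworkManager to leave the secondary ENIs unmanaged instead of disabling it entirely. The interface names and file path below are assumptions and will depend on your distribution:
# Hypothetical example: stop NetworkManager from running DHCP on secondary ENIs (eth1, eth2, ...)
cat <<'EOF' >/etc/NetworkManager/conf.d/99-cilium-enis.conf
[keyfile]
unmanaged-devices=interface-name:eth1;interface-name:eth2
EOF
# Reload NetworkManager so the new configuration takes effect
systemctl reload NetworkManager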