Installation on AWS EKS¶
Create an EKS Cluster¶
Ensure your AWS credentials are located in ~/.aws/credentials or are stored as environment variables.
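If you use environment variables, the standard AWS variables are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, for example (the values shown here are placeholders):
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_DEFAULT_REGION=us-west-2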
Next, install eksctl:
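For example, on Linux amd64 the eksctl release artifact can be installed as follows (a sketch; consult the eksctl documentation for your platform and for the current download location):
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin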
Ensure that aws-iam-authenticator is installed and in the executable path:
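A quick way to check is to look it up on your PATH:
which aws-iam-authenticator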
If not, install it based on the AWS IAM authenticator documentation.
Create the cluster¶
Create an EKS cluster with eksctl. See the eksctl documentation for details on how to set credentials, change the region, VPC, cluster size, etc.
eksctl create cluster --name test-cluster --without-nodegroup
You should see something like this:
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2b us-west-2a us-west-2c]
[...]
[✔]  EKS cluster "test-cluster" in "us-west-2" region is ready
Delete VPC CNI (aws-node DaemonSet)¶
Cilium will manage ENIs instead of the VPC CNI, so the aws-node DaemonSet has to be deleted to prevent conflicting behavior.
Once the aws-node DaemonSet is deleted, EKS will not try to restore it.
kubectl -n kube-system delete daemonset aws-node
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy the Cilium release via Helm:
helm install cilium cilium/cilium --version 1.9.0 \
  --namespace kube-system \
  --set eni=true \
  --set ipam.mode=eni \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=disabled \
  --set nodeinit.enabled=true
This Helm command sets eni=true and tunnel=disabled, meaning that Cilium will allocate a fully-routable AWS ENI IP address for each pod, similar to the behavior of the Amazon VPC CNI plugin.
Cilium can alternatively run in EKS using an overlay mode that gives pods non-VPC-routable IPs.
This allows running more pods per Kubernetes worker node than the ENI limit, but means
that pod connectivity to resources outside the cluster (e.g., VMs in the VPC or AWS managed
services) is masqueraded (i.e., SNAT) by Cilium to use the VPC IP address of the Kubernetes worker node.
Excluding the eni=true, ipam.mode=eni, and tunnel=disabled lines from the Helm command will configure Cilium to use overlay routing mode (which is the Helm default).
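For reference, a sketch of such an overlay-mode install, obtained from the command above simply by dropping those three settings (verify the remaining values against the Helm chart defaults for your version), would be:
helm install cilium cilium/cilium --version 1.9.0 \
  --namespace kube-system \
  --set egressMasqueradeInterfaces=eth0 \
  --set nodeinit.enabled=true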
Create a node group¶
eksctl create nodegroup --cluster test-cluster --nodes 2
Validate the Installation¶
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
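As an optional extra check, you can ask the agent for its own health summary; this assumes the DaemonSet is named cilium in the kube-system namespace, as in the installation above:
kubectl -n kube-system exec ds/cilium -- cilium status --brief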
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/1.9.0/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test:
$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   0/1     Running   0          66s
If you deploy the connectivity check to a single node cluster, pods that check multi-node
functionalities will remain in the
Pending state. This is expected since these pods
need at least 2 nodes to be scheduled successfully.
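To confirm which pods are affected, you can filter for unscheduled pods, for example:
kubectl get pods -n cilium-test --field-selector=status.phase=Pending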
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.
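For the installation above, Cilium was deployed into the kube-system namespace:
export CILIUM_NAMESPACE=kube-system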
Enable Hubble for Cluster-Wide Visibility¶
Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI with the following Helm upgrade command on your existing installation (Cilium agent pods will be restarted in the process).
helm upgrade cilium cilium/cilium --version 1.9.0 \
  --namespace $CILIUM_NAMESPACE \
  --reuse-values \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows opening the UI locally on a browser:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
And then open http://localhost:12000/ to access the UI.
Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions below:
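As a sketch for Linux amd64 (the stable.txt file and the release URL pattern are assumptions based on the cilium/hubble release artifacts; adjust the OS and architecture suffix as needed):
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin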
Similarly to the UI, use port forwarding for the
hubble-relay service to
make it available locally:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
In a separate terminal window, run the
hubble status command specifying the
Hubble Relay address:
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4
If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:
hubble --server localhost:4245 observe
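You can also narrow the output, for instance to the connectivity test namespace created earlier, using the standard --namespace and --follow filters of hubble observe:
hubble --server localhost:4245 observe --namespace cilium-test --follow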
If you encounter any problem at this point, you may seek help on Slack.
Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify options specific to a particular environment every time a command is run. Run hubble config for more information.
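For example, to persist the Relay address used above so that --server no longer has to be passed on every invocation (assuming server is a supported configuration key in your Hubble CLI version):
hubble config set server localhost:4245
hubble status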
For more information about Hubble and its components, see the Observability section.