Installation on AWS EKS¶
Create an EKS Cluster¶
Ensure your AWS credentials are located in ~/.aws/credentials or are stored as environment variables.
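If you use environment variables, the standard AWS variables look like the following (the values are placeholders):
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
# Only needed when using temporary credentials:
export AWS_SESSION_TOKEN="<session-token>"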
Next, install eksctl.
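One common way to install eksctl on Linux or macOS is to download the latest release binary (this sketch assumes an amd64 host; see the eksctl documentation for other platforms and package managers):
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin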
Ensure that aws-iam-authenticator is installed and in the executable path.
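You can check this, for example, with:
which aws-iam-authenticator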
If not, install it based on the AWS IAM authenticator documentation.
Create the cluster¶
Create an EKS cluster with eksctl. See the eksctl documentation for details on how to set credentials, change the region, VPC, cluster size, etc.
eksctl create cluster --name test-cluster --without-nodegroup
You should see something like this:
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2b us-west-2a us-west-2c]
[...]
[✔]  EKS cluster "test-cluster" in "us-west-2" region is ready
Delete VPC CNI (aws-node DaemonSet)¶
Cilium will manage ENIs instead of the VPC CNI, so the aws-node DaemonSet has to be deleted to prevent conflicting behavior.
Once the aws-node DaemonSet is deleted, EKS will not try to restore it.
kubectl -n kube-system delete daemonset aws-node
Deploy Cilium¶
First, make sure you have Helm 3 installed.
If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
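You can check which Helm version is on your path, for example:
helm version --short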
Setup Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.8.2 \
  --namespace kube-system \
  --set global.eni=true \
  --set config.ipam=eni \
  --set global.egressMasqueradeInterfaces=eth0 \
  --set global.tunnel=disabled \
  --set global.nodeinit.enabled=true
This helm command sets global.eni=true, meaning that Cilium will allocate a fully-routable AWS ENI IP address for each pod, similar to the behavior of the Amazon VPC CNI plugin.
Cilium can alternatively run in EKS using an overlay mode that gives pods non-VPC-routable IPs.
This allows running more pods per Kubernetes worker node than the ENI limit, but means
that pod connectivity to resources outside the cluster (e.g., VMs in the VPC or AWS managed
services) is masqueraded (i.e., SNAT) by Cilium to use the VPC IP address of the Kubernetes worker node.
Excluding the lines for global.eni=true, config.ipam=eni, and global.tunnel=disabled from the helm command will configure Cilium to use overlay routing mode (which is the helm default).
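For illustration, a minimal overlay-mode variant of the command above might look like this (a sketch that keeps the remaining values unchanged):
helm install cilium cilium/cilium --version 1.8.2 \
  --namespace kube-system \
  --set global.egressMasqueradeInterfaces=eth0 \
  --set global.nodeinit.enabled=true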
Create a node group¶
eksctl create nodegroup --cluster test-cluster --nodes 2
Validate the Installation¶
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME                              READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
cilium-s8w5m                      0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
cilium-s8w5m                      1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods.
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.8.2/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test:
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-5995597649-f5d5g                                  1/1     Running   0          4m51s
echo-b-54c9bb5f5c-p6lxf                                  1/1     Running   0          4m50s
echo-b-host-67446447f7-chvsp                             1/1     Running   0          4m50s
host-to-b-multi-node-clusterip-78f9869d75-l8cf8          1/1     Running   0          4m50s
host-to-b-multi-node-headless-798949bd5f-vvfff           1/1     Running   0          4m50s
pod-to-a-59b5fcb7f6-gq4hd                                1/1     Running   0          4m50s
pod-to-a-allowed-cnp-55f885bf8b-5lxzz                    1/1     Running   0          4m50s
pod-to-a-external-1111-7ff666fd8-v5kqb                   1/1     Running   0          4m48s
pod-to-a-l3-denied-cnp-64c6c75c5d-xmqhw                  1/1     Running   0          4m50s
pod-to-b-intra-node-845f955cdc-5nfrt                     1/1     Running   0          4m49s
pod-to-b-multi-node-clusterip-666594b445-bsn4j           1/1     Running   0          4m49s
pod-to-b-multi-node-headless-746f84dff5-prk4w            1/1     Running   0          4m49s
pod-to-b-multi-node-nodeport-7cb9c6cb8b-ksm4h            1/1     Running   0          4m49s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-tg9dh    1/1     Running   0          4m48s
If you deploy the connectivity check to a single node cluster, pods that check multi-node
functionalities will remain in the
Pending state. This is expected since these pods
need at least 2 nodes to be scheduled successfully.
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.
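For example, with Cilium installed in the kube-system namespace as in this guide:
export CILIUM_NAMESPACE=kube-system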
Install Hubble (optional)¶
Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
Hubble can be configured to be in local mode or distributed mode (beta).
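The following steps assume Hubble has been enabled on the Cilium release. A minimal sketch of enabling it with helm upgrade, assuming the 1.8 chart's global.hubble values (see the Hubble documentation for the full set of options and for distributed mode):
helm upgrade cilium cilium/cilium --version 1.8.2 \
  --namespace $CILIUM_NAMESPACE \
  --reuse-values \
  --set global.hubble.enabled=true \
  --set global.hubble.listenAddress=":4244"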
Restart the Cilium DaemonSet to allow the Cilium agent to pick up the ConfigMap changes:
kubectl rollout restart -n $CILIUM_NAMESPACE ds/cilium
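You can wait for the restart to complete before continuing, for example:
kubectl rollout status -n $CILIUM_NAMESPACE ds/cilium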
To pick one Cilium instance and validate that Hubble is properly configured to listen on a UNIX domain socket:
kubectl exec -n $CILIUM_NAMESPACE -t ds/cilium -- hubble observe
(Distributed mode only) To validate that Hubble Relay is running, install the hubble CLI.
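A sketch of installing the hubble CLI from a release tarball (the asset name below is an assumption; check the Hubble releases page for the current release and your platform):
export HUBBLE_VERSION=<release-version>   # placeholder; use a tag from the releases page
curl -LO "https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz"
tar xzf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin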
Once the hubble CLI is installed, set up a port forwarding for the hubble-relay service and run the hubble observe command:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay 4245:80
hubble observe --server localhost:4245
(For Linux / MacOS) For convenience, you may set and export the HUBBLE_DEFAULT_SOCKET_PATH environment variable:
$ export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
This will allow you to use hubble observe commands without having to specify the server address via the --server flag.
(Distributed mode only) To validate that Hubble UI is properly configured, set up a port forwarding for the hubble-ui service:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui 12000:80
and then open http://localhost:12000/.