Getting Started Using Kind

This guide uses kind to demonstrate deployment and operation of Cilium in a multi-node Kubernetes cluster.

Kind requires docker to be installed and running.

Install Dependencies

  1. Install docker stable as described in: Install Docker Engine
  2. Install kubectl version >= v1.14.0 as described in the Kubernetes Docs
  3. Install helm >= v3.0.3 per Helm documentation: Installing Helm
  4. Install kind >= v0.7.0 per kind documentation: Installation and Usage
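
If you are unsure whether the installed versions meet these requirements, the following commands print them (exact output varies by platform and version):

docker version --format '{{.Server.Version}}'
kubectl version --client
helm version --short
kind version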

Kind Configuration

Kind doesn’t use flags for configuration. Instead, it uses a YAML configuration file whose format is very similar to a Kubernetes resource.

Create a kind-config.yaml file based on the following template. The template creates a cluster with three worker nodes and one control-plane node, running the default Kubernetes version bundled with your kind release.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true

To change the version of Kubernetes being run, an image has to be defined for each node. See the Node Configuration documentation.
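
For example, pinning every node to a specific node image could look like the following. The tag below is only illustrative; pick the kindest/node tag (ideally with its sha256 digest) listed in the release notes of your kind version:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.17.0
- role: worker
  image: kindest/node:v1.17.0
- role: worker
  image: kindest/node:v1.17.0
- role: worker
  image: kindest/node:v1.17.0
networking:
  disableDefaultCNI: true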

Start Kind

Pass the kind-config.yaml you created with the --config flag of kind.

kind create cluster --config=kind-config.yaml

This will add a kind-kind context to the file referenced by KUBECONFIG or, if that is unset, to ${HOME}/.kube/config:

kubectl cluster-info --context kind-kind
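
You can also check that all four nodes registered. Because the default CNI is disabled in kind-config.yaml, the nodes will report NotReady until Cilium is installed:

kubectl get nodes --context kind-kind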

Install Cilium

Note

First, make sure you have Helm 3 installed.

If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions can coexist to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).

Set up the Cilium Helm repository:

helm repo add cilium https://helm.cilium.io/
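
If you added the repository previously, refresh the chart index so the requested version is available:

helm repo update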

(optional, but recommended) Pre-load Cilium images into the kind cluster so each worker doesn’t have to pull them.

docker pull cilium/cilium:v1.7.4
kind load docker-image cilium/cilium:v1.7.4
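
The same approach works for the other images the chart pulls, for example the operator image. The image name below is an assumption based on the 1.7.x releases; check the chart values for the images your installation actually uses:

docker pull cilium/operator:v1.7.4
kind load docker-image cilium/operator:v1.7.4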

Install Cilium release via Helm:

helm install cilium cilium/cilium --version 1.7.4 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.pullPolicy=IfNotPresent
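
You can confirm that Helm created the release before watching the pods:

helm list --namespace kube-system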

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

kubectl -n kube-system get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
coredns-86c58d9df4-4l6b2                1/1     Running   0          13m

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.7.4/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of Deployments that use various connectivity paths to connect to each other. The connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate the success or failure of the test:

kubectl get pods
NAME                                                     READY   STATUS             RESTARTS   AGE
echo-a-9b85dd869-292s2                                   1/1     Running            0          8m37s
echo-b-c7d9f4686-gdwcs                                   1/1     Running            0          8m37s
host-to-b-multi-node-clusterip-6d496f7cf9-956jb          1/1     Running            0          8m37s
host-to-b-multi-node-headless-bd589bbcf-jwbh2            1/1     Running            0          8m37s
pod-to-a-7cc4b6c5b8-9jfjb                                1/1     Running            0          8m36s
pod-to-a-allowed-cnp-6cc776bb4d-2cszk                    1/1     Running            0          8m36s
pod-to-a-external-1111-5c75bd66db-sxfck                  1/1     Running            0          8m35s
pod-to-a-l3-denied-cnp-7fdd9975dd-2pp96                  1/1     Running            0          8m36s
pod-to-b-intra-node-9d9d4d6f9-qccfs                      1/1     Running            0          8m35s
pod-to-b-multi-node-clusterip-5956c84b7c-hwzfg           1/1     Running            0          8m35s
pod-to-b-multi-node-headless-6698899447-xlhfw            1/1     Running            0          8m35s
pod-to-external-fqdn-allow-google-cnp-667649bbf6-v6rf8   1/1     Running            0          8m35s
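
When the test pods are all Running, you can remove the connectivity-check workloads again with the same manifest:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.7.4/examples/kubernetes/connectivity-check/connectivity-check.yaml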

Install Hubble

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. Visit the Hubble GitHub page for more information.

Generate the deployment files using Helm and deploy them:

git clone https://github.com/cilium/hubble.git --branch v0.5
cd hubble/install/kubernetes

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    --set ui.enabled=true \
> hubble.yaml

Deploy Hubble:

kubectl apply -f hubble.yaml
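
You can watch the Hubble pods come up in the same way as the Cilium pods and, if the UI was enabled, forward its Service locally. The Service name and ports below are assumptions; verify them with kubectl -n kube-system get svc before forwarding:

kubectl -n kube-system get pods --watch
kubectl -n kube-system port-forward svc/hubble-ui 12000:80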

Next steps

Now that you have a Kubernetes cluster with Cilium up and running, you can take a couple of next steps to explore its various capabilities.

Troubleshooting

Unable to contact k8s api-server

In the Cilium agent logs you will see:

level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s
level=error msg="Unable to contact k8s api-server" error="Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" ipAddr="https://10.96.0.1:443" subsys=k8s
level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" subsys=daemon

As kind runs nodes as containers in Docker, they share your host machine’s kernel. If host-reachable services wasn’t disabled, the eBPF programs attached by Cilium may be out of date and no longer route api-server requests to the current kind-control-plane container.

Recreating the kind cluster and re-running the helm command from Install Cilium will detach the stale eBPF programs.
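
For example, to recreate the default kind cluster:

kind delete cluster
kind create cluster --config=kind-config.yaml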

Cluster Mesh

With Kind we can simulate Cluster Mesh in a sandbox too.

Kind Configuration

This time we need to create two configuration files, one for each Kubernetes cluster. We will explicitly configure their pod-network-cidr and service-cidr so that they do not overlap.

Example kind-cluster1.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.1.0.0/16

Example kind-cluster2.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: 10.2.0.0/16
  serviceSubnet: 10.3.0.0/16

Create Kind Clusters

We can now create the respective clusters:

kind create cluster --name=cluster1 --config=kind-cluster1.yaml
kind create cluster --name=cluster2 --config=kind-cluster2.yaml
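
If you pre-loaded images earlier, note that kind load targets the default kind cluster unless --name is given; with named clusters, load the image into each of them:

kind load docker-image cilium/cilium:v1.7.4 --name cluster1
kind load docker-image cilium/cilium:v1.7.4 --name cluster2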

Deploy Cilium

This is the same helm command as in Install Cilium. However, we’re enabling managed etcd and setting both cluster-name and cluster-id for each cluster.

Make sure your kubectl context is set to the kind-cluster2 cluster:

kubectl config use-context kind-cluster2
helm install cilium cilium/cilium --version 1.7.4 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.etcd.enabled=true \
   --set global.etcd.managed=true \
   --set global.identityAllocationMode=kvstore \
   --set global.cluster.name=cluster2 \
   --set global.cluster.id=2

Change the kubectl context to kind-cluster1 cluster:

kubectl config use-context kind-cluster1
helm install cilium cilium/cilium --version 1.7.4 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.etcd.enabled=true \
   --set global.etcd.managed=true \
   --set global.identityAllocationMode=kvstore \
   --set global.cluster.name=cluster1 \
   --set global.cluster.id=1
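
Each cluster should now be running its own Cilium (and managed etcd) deployment, which you can check in both contexts:

kubectl --context kind-cluster1 -n kube-system get pods
kubectl --context kind-cluster2 -n kube-system get pods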

Setting up Cluster Mesh

We can complete the setup by following the Cluster Mesh guide, starting from Expose the Cilium etcd to other clusters. For kind, we’ll want to deploy the NodePort service into the kube-system namespace.
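
As a rough sketch of what that NodePort Service could look like: the manifest below is an assumption, not the one from the Cluster Mesh guide. It assumes the managed etcd pods carry the etcd-operator labels app=etcd and etcd_cluster=cilium-etcd and serve client traffic on 2379, so verify your pod labels (kubectl -n kube-system get pods --show-labels) and prefer the manifest referenced by the guide:

apiVersion: v1
kind: Service
metadata:
  name: cilium-etcd-external   # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: etcd                  # assumed etcd-operator pod labels
    etcd_cluster: cilium-etcd
  ports:
  - port: 2379
    targetPort: 2379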