Getting Started Using Kind

This guide uses kind to demonstrate deployment and operation of Cilium in a multi-node Kubernetes cluster running locally on Docker.

Install Dependencies

  1. Install Docker stable as described in Install Docker Engine
  2. Install kubectl version >= v1.14.0 as described in the Kubernetes Docs
  3. Install helm >= v3.0.3 per Helm documentation: Installing Helm
  4. Install kind >= v0.7.0 per kind documentation: Installation and Usage
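
As a quick sanity check, you can confirm that each dependency is installed at a compatible version (exact output varies by platform and tool version):

docker --version
kubectl version --client
helm version
kind version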

Configure kind

kind cluster creation is configured using a YAML file. This step is necessary to disable the default CNI so that it can be replaced with Cilium.

Create a kind-config.yaml file based on the following template. It will create a cluster with 3 worker nodes and 1 control-plane node.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true

By default, kind uses the latest Kubernetes version available at the time that kind release was created.

To run a different Kubernetes version, an image has to be defined for each node. See the Node Configuration documentation for more information.
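
For example, a minimal sketch pinning the node image (the tag below is illustrative; use a kindest/node tag and digest published for your kind release):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.18.8
- role: worker
  image: kindest/node:v1.18.8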

Tip

By default, kind uses the following pod and service subnets:

Networking.PodSubnet     = "10.244.0.0/16"
Networking.ServiceSubnet = "10.96.0.0/12"

If any of these subnets conflicts with your local network address range, update the networking section of the kind configuration file to specify different subnets that do not conflict; otherwise you risk connectivity issues when deploying Cilium. For example:

networking:
  disableDefaultCNI: true
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"

Create a cluster

To create a cluster with the configuration defined above, pass the kind-config.yaml you created with the --config flag of kind.

kind create cluster --config=kind-config.yaml

After a couple of seconds or minutes, a 4-node cluster should be created.

A new kubectl context (kind-kind) should be added to KUBECONFIG or, if unset, to ${HOME}/.kube/config:

kubectl cluster-info --context kind-kind

Note

The cluster nodes will remain in state NotReady until Cilium is deployed. This behavior is expected.
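
You can confirm this with kubectl get nodes; the STATUS column will report NotReady for all four nodes until Cilium is installed:

kubectl get nodes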

Install Cilium

Note

First, make sure you have Helm 3 installed.

If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).

Set up the Helm repository:

helm repo add cilium https://helm.cilium.io/

Preload the cilium image into each worker node in the kind cluster:

docker pull cilium/cilium:v1.8.2
kind load docker-image cilium/cilium:v1.8.2

Then, install Cilium release via Helm:

helm install cilium cilium/cilium --version 1.8.2 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.pullPolicy=IfNotPresent \
   --set config.ipam=kubernetes
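
To confirm the release was registered with Helm (a quick sanity check; the output columns depend on your Helm version):

helm list --namespace kube-system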

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

kubectl -n kube-system get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
coredns-86c58d9df4-4l6b2                1/1     Running   0          13m

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.8.2/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test, as shown in the pod listing below:

NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-5995597649-f5d5g                                 1/1     Running   0          4m51s
echo-b-54c9bb5f5c-p6lxf                                 1/1     Running   0          4m50s
echo-b-host-67446447f7-chvsp                            1/1     Running   0          4m50s
host-to-b-multi-node-clusterip-78f9869d75-l8cf8         1/1     Running   0          4m50s
host-to-b-multi-node-headless-798949bd5f-vvfff          1/1     Running   0          4m50s
pod-to-a-59b5fcb7f6-gq4hd                               1/1     Running   0          4m50s
pod-to-a-allowed-cnp-55f885bf8b-5lxzz                   1/1     Running   0          4m50s
pod-to-a-external-1111-7ff666fd8-v5kqb                  1/1     Running   0          4m48s
pod-to-a-l3-denied-cnp-64c6c75c5d-xmqhw                 1/1     Running   0          4m50s
pod-to-b-intra-node-845f955cdc-5nfrt                    1/1     Running   0          4m49s
pod-to-b-multi-node-clusterip-666594b445-bsn4j          1/1     Running   0          4m49s
pod-to-b-multi-node-headless-746f84dff5-prk4w           1/1     Running   0          4m49s
pod-to-b-multi-node-nodeport-7cb9c6cb8b-ksm4h           1/1     Running   0          4m49s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-tg9dh   1/1     Running   0          4m48s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.

Specify Environment Variables

Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.

export CILIUM_NAMESPACE=kube-system

Enable Hubble

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.

  • Hubble can be configured to be in local mode or distributed mode (beta).

    In local mode, Hubble listens on a UNIX domain socket. You can connect to a Hubble instance by running the hubble command from inside a Cilium pod. This provides networking visibility for traffic observed by the local Cilium agent.

    helm upgrade cilium cilium/cilium --version 1.8.2 \
       --namespace $CILIUM_NAMESPACE \
       --reuse-values \
       --set global.hubble.enabled=true \
       --set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"
    

    In distributed mode (beta), Hubble listens on a TCP port on the host network. This allows Hubble Relay to communicate with all the Hubble instances in the cluster. Hubble CLI and Hubble UI in turn connect to Hubble Relay to provide cluster-wide networking visibility.

    Warning

    In Distributed mode, Hubble runs a gRPC service over plain-text HTTP on the host network without any authentication/authorization. The main consequence is that anybody who can reach the Hubble gRPC service can obtain all the networking metadata from the host. It is therefore strongly discouraged to enable distributed mode in a production environment.

    helm upgrade cilium cilium/cilium --version 1.8.2 \
       --namespace $CILIUM_NAMESPACE \
       --reuse-values \
       --set global.hubble.enabled=true \
       --set global.hubble.listenAddress=":4244" \
       --set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
       --set global.hubble.relay.enabled=true \
       --set global.hubble.ui.enabled=true
    
  • Restart the Cilium daemonset to allow the Cilium agent to pick up the ConfigMap changes:

    kubectl rollout restart -n $CILIUM_NAMESPACE ds/cilium
    
  • To pick one Cilium instance and validate that Hubble is properly configured to listen on a UNIX domain socket:

    kubectl exec -n $CILIUM_NAMESPACE -t ds/cilium -- hubble observe
    
  • (Distributed mode only) To validate that Hubble Relay is running, install the hubble CLI:

    Download the latest hubble release (Linux):

    export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
    sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
    tar zxf hubble-linux-amd64.tar.gz
    

    and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

    sudo mv hubble /usr/local/bin
    

    Download the latest hubble release (macOS):

    export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
    shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
    tar zxf hubble-darwin-amd64.tar.gz
    

    and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

    sudo mv hubble /usr/local/bin
    

    Download the latest hubble release (Windows):

    curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
    set /p HUBBLE_VERSION=<stable.txt
    curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
    certutil -hashfile hubble-windows-amd64.tar.gz SHA256
    type hubble-windows-amd64.tar.gz.sha256sum
    :: verify that the checksums from the two commands above match
    tar zxf hubble-windows-amd64.tar.gz
    

    and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.

    Once the hubble CLI is installed, set up port forwarding for the hubble-relay service (run it in the background or in a separate terminal) and run the hubble observe command:

    kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay 4245:80
    hubble observe --server localhost:4245
    

    (For Linux / macOS) For convenience, you may set and export the HUBBLE_DEFAULT_SOCKET_PATH environment variable:

    export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
    

    This will allow you to use hubble status and hubble observe commands without having to specify the server address via the --server flag.
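
    With the variable exported, the same commands work without the --server flag, for example:

    hubble status
    hubble observe --follow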

  • (Distributed mode only) To validate that Hubble UI is properly configured, set up a port forwarding for hubble-ui service:

    kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui 12000:80
    

    and then open http://localhost:12000/.

Next steps

Now that you have a Kubernetes cluster with Cilium up and running, you can take a couple of next steps to explore various capabilities.

Troubleshooting

Unable to contact k8s api-server

In the Cilium agent logs you will see:

level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s
level=error msg="Unable to contact k8s api-server" error="Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" ipAddr="https://10.96.0.1:443" subsys=k8s
level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" subsys=daemon

As kind runs nodes as containers in Docker, they share your host machine's kernel. If host-reachable services wasn't disabled, the eBPF programs attached by Cilium may be out of date and no longer route api-server requests to the current kind-control-plane container.

Recreating the kind cluster and reinstalling Cilium with the helm command from Install Cilium will detach the stale eBPF programs.
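
A minimal recovery sequence, assuming the kind-config.yaml from earlier in this guide, looks like this; once the cluster is back up, re-run the helm install command from Install Cilium:

kind delete cluster
kind create cluster --config=kind-config.yaml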

Cluster Mesh

With kind we can simulate Cluster Mesh in a sandbox too.

Kind Configuration

This time we need to create two configuration files, one for each Kubernetes cluster. We will explicitly configure their pod and service subnets so that they do not overlap.

Example kind-cluster1.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"

Example kind-cluster2.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"

Create Kind Clusters

We can now create the respective clusters:

kind create cluster --name=cluster1 --config=kind-cluster1.yaml
kind create cluster --name=cluster2 --config=kind-cluster2.yaml
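
You can list the clusters kind knows about to confirm both were created:

kind get clusters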

Deploy Cilium

This is the same helm command as in Install Cilium. However, we're also enabling managed etcd and setting both a cluster name and a cluster ID for each cluster.

Make sure the kubectl context is set to the kind-cluster2 cluster.

kubectl config use-context kind-cluster2
helm install cilium cilium/cilium --version 1.8.2 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.etcd.enabled=true \
   --set global.etcd.managed=true \
   --set global.identityAllocationMode=kvstore \
   --set global.cluster.name=cluster2 \
   --set global.cluster.id=2

Change the kubectl context to kind-cluster1 cluster:

kubectl config use-context kind-cluster1
helm install cilium cilium/cilium --version 1.8.2 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.etcd.enabled=true \
   --set global.etcd.managed=true \
   --set global.identityAllocationMode=kvstore \
   --set global.cluster.name=cluster1 \
   --set global.cluster.id=1
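
Before moving on, you can check that Cilium comes up in both clusters (a quick sanity check; pod names will differ in your environment):

kubectl --context kind-cluster1 -n kube-system get pods
kubectl --context kind-cluster2 -n kube-system get pods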

Setting up Cluster Mesh

We can complete the setup by following the Cluster Mesh guide, starting with Expose the Cilium etcd to other clusters. For kind, we'll want to deploy the NodePort service into the kube-system namespace.