Getting Started Using Kind

This guide uses kind to demonstrate deployment and operation of Cilium in a multi-node Kubernetes cluster running locally on Docker.

Install Dependencies

  1. Install docker stable as described in Install Docker Engine
  2. Install kubectl version >= v1.14.0 as described in the Kubernetes Docs
  3. Install helm >= v3.0.3 per Helm documentation: Installing Helm
  4. Install kind >= v0.7.0 per kind documentation: Installation and Usage
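
If you want to confirm that the tools are available in compatible versions before continuing, each of them can report its own version (this assumes the binaries are already on your PATH):

docker version
kubectl version --client
helm version
kind version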

Configure kind

kind cluster creation is configured with a YAML file. This step is necessary to disable the default CNI so that Cilium can be installed in its place.

Create a kind-config.yaml file based on the following template. It will create a cluster with 3 worker nodes and 1 control-plane node.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true

By default, kind uses the latest Kubernetes version that was available when that kind release was created.

To run a different version of Kubernetes, an image has to be defined for each node. See the Node Configuration documentation for more information.
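
As a minimal sketch, the image is set per node entry in the configuration file; the tag below is only illustrative, and the correct kindest/node image (including its digest) for your kind release should be taken from the kind release notes:

nodes:
- role: control-plane
  image: kindest/node:v1.18.2
- role: worker
  image: kindest/node:v1.18.2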

Tip

By default, kind uses the following pod and service subnets:

Networking.PodSubnet     = "10.244.0.0/16"
Networking.ServiceSubnet = "10.96.0.0/12"

If any of these subnets conflicts with your local network address range, update the networking section of the kind configuration file to specify different subnets that do not conflict; otherwise you risk connectivity issues when deploying Cilium. For example:

networking:
  disableDefaultCNI: true
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"

Create a cluster

To create a cluster with the configuration defined above, pass the kind-config.yaml you created with the --config flag of kind.

kind create cluster --config=kind-config.yaml

After a couple of seconds or minutes, a 4-node cluster should be created.

A new kubectl context (kind-kind) should be added to KUBECONFIG or, if unset, to ${HOME}/.kube/config:

kubectl cluster-info --context kind-kind

Note

The cluster nodes will remain in state NotReady until Cilium is deployed. This behavior is expected.
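
You can observe this yourself once the cluster is up (node names depend on the cluster name):

kubectl get nodes --context kind-kind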

Install Cilium

Note

First, make sure you have Helm 3 installed.

If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).

Download the Cilium release tarball and change to the kubernetes install directory:

curl -LO https://github.com/cilium/cilium/archive/master.tar.gz
tar xzf master.tar.gz
cd cilium-master/install/kubernetes

Preload the cilium image into each worker node in the kind cluster:

docker pull cilium/cilium:latest
kind load docker-image cilium/cilium:latest
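
Note that kind load targets the default cluster (named kind). If you created the cluster with a custom --name, as done in the Cluster Mesh section below, pass the same name here, for example:

kind load docker-image cilium/cilium:latest --name cluster1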

Then, install Cilium release via Helm:

helm install cilium ./cilium \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set bpf.masquerade=false \
   --set image.pullPolicy=IfNotPresent \
   --set ipam.mode=kubernetes
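
To confirm that Helm registered the release before validating the pods, you can list the releases in the namespace:

helm list --namespace kube-system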

Validate the Installation

You can monitor as Cilium and all required components are being installed:

kubectl -n kube-system get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
coredns-86c58d9df4-4l6b2                1/1     Running   0          13m
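
Optionally, you can also inspect an agent directly; running cilium status inside any agent pod summarizes the health of its subsystems:

kubectl -n kube-system exec ds/cilium -- cilium status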

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

kubectl create ns cilium-test

Deploy the check with:

kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes/connectivity-check/connectivity-check.yaml

It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include those with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   0/1     Running   0          66s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
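
Once you are done with the connectivity test, the test deployments can be removed by deleting their namespace:

kubectl delete ns cilium-test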

Specify Environment Variables

Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable.

export CILIUM_NAMESPACE=kube-system

Enable Hubble

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.

  • Hubble can be configured to be in distributed mode or local mode.

    In distributed mode, Hubble listens on a TCP port on the host network. This allows Hubble Relay to communicate with all the Hubble instances in the cluster. Hubble CLI and Hubble UI in turn connect to Hubble Relay to provide cluster-wide networking visibility.

    Note

    In Distributed mode, Hubble runs a gRPC service over HTTP on the host network. It is secured using mutual TLS (mTLS) by default to only allow access to Hubble Relay. Refer to Use custom TLS certificates in distributed mode to manually provide TLS certificates.

    helm upgrade cilium ./cilium \
       --namespace $CILIUM_NAMESPACE \
       --reuse-values \
       --set hubble.enabled=true \
       --set hubble.listenAddress=":4244" \
       --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
       --set hubble.relay.enabled=true \
       --set hubble.ui.enabled=true
    

    In local mode, Hubble listens on a UNIX domain socket. You can connect to a Hubble instance by running hubble command from inside the Cilium pod. This provides networking visibility for traffic observed by the local Cilium agent.

    helm upgrade cilium ./cilium \
       --namespace $CILIUM_NAMESPACE \
       --reuse-values \
       --set hubble.enabled=true \
       --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"
    
  • Restart the Cilium daemonset to allow Cilium agent to pick up the ConfigMap changes:

    kubectl rollout restart -n $CILIUM_NAMESPACE ds/cilium
    
  • To pick one Cilium instance and validate that Hubble is properly configured to listen on a UNIX domain socket:

    kubectl exec -n $CILIUM_NAMESPACE -t ds/cilium -- hubble observe
    
  • (Distributed mode only) To validate that Hubble Relay is running, install the hubble CLI:

    Download the latest hubble release:

    export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
    sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
    tar zxf hubble-linux-amd64.tar.gz
    

    and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

    sudo mv hubble /usr/local/bin
    

    Download the latest hubble release:

    export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
    shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
    tar zxf hubble-darwin-amd64.tar.gz
    

    and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

    sudo mv hubble /usr/local/bin
    

    Download the latest hubble release:

    curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
    set /p HUBBLE_VERSION=<stable.txt
    curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
    curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
    certutil -hashfile hubble-windows-amd64.tar.gz SHA256
    type hubble-windows-amd64.tar.gz.sha256sum
    :: verify that the checksums from the two commands above match
    tar zxf hubble-windows-amd64.tar.gz
    

    and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
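
    Whichever platform you installed it on, you can verify that the CLI is reachable from your shell (this assumes the release you downloaded supports the version subcommand):

    hubble version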

    Once the hubble CLI is installed, set up a port forwarding for hubble-relay service and run hubble observe command:

    kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
    hubble observe --server localhost:4245
    

    (For Linux / macOS) For convenience, you may set and export the HUBBLE_DEFAULT_SOCKET_PATH environment variable:

    export HUBBLE_DEFAULT_SOCKET_PATH=localhost:4245
    

    This will allow you to use hubble status and hubble observe commands without having to specify the server address via the --server flag.
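
    With the variable exported, for example, the following commands talk to Hubble Relay through the port forward without any extra flags:

    hubble status
    hubble observe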

  • (Distributed mode only) To validate that Hubble UI is properly configured, set up a port forwarding for hubble-ui service:

    kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
    

    and then open http://localhost:12000/.

Next steps

Now that you have a Kubernetes cluster with Cilium up and running, you can take a couple of next steps to explore various capabilities.

Troubleshooting

Unable to contact k8s api-server

In the Cilium agent logs you will see:

level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s
level=error msg="Unable to contact k8s api-server" error="Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" ipAddr="https://10.96.0.1:443" subsys=k8s
level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" subsys=daemon

As kind runs nodes as containers in Docker, they share your host machine's kernel. If Host-Reachable Services wasn't disabled, the eBPF programs attached by Cilium may be out of date and no longer routing api-server requests to the current kind-control-plane container.

Recreating the kind cluster and re-running the helm command from Install Cilium will detach the outdated eBPF programs.

Cluster Mesh

With Kind we can simulate Cluster Mesh in a sandbox too.

Kind Configuration

This time we need to create two configuration files, one for each Kubernetes cluster. We will explicitly configure their pod-network-cidr and service-cidr so that they do not overlap.

Example kind-cluster1.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"

Example kind-cluster2.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"

Create Kind Clusters

We can now create the respective clusters:

kind create cluster --name=cluster1 --config=kind-cluster1.yaml
kind create cluster --name=cluster2 --config=kind-cluster2.yaml
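
You can verify that both clusters and their kubectl contexts were created, and preload the Cilium image into each of them; since these clusters do not use the default kind name, kind load needs the --name flag:

kind get clusters
kubectl config get-contexts
docker pull cilium/cilium:latest
kind load docker-image cilium/cilium:latest --name cluster1
kind load docker-image cilium/cilium:latest --name cluster2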

Deploy Cilium

This is the same helm command as in Install Cilium. However, we're enabling managed etcd and setting both cluster-name and cluster-id for each cluster.

Make sure the kubectl context is set to the kind-cluster2 cluster.

kubectl config use-context kind-cluster2
helm install cilium ./cilium \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set etcd.enabled=true \
   --set etcd.managed=true \
   --set identityAllocationMode=kvstore \
   --set cluster.name=cluster2 \
   --set cluster.id=2

Change the kubectl context to the kind-cluster1 cluster:

kubectl config use-context kind-cluster1
helm install cilium ./cilium \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set etcd.enabled=true \
   --set etcd.managed=true \
   --set identityAllocationMode=kvstore \
   --set cluster.name=cluster1 \
   --set cluster.id=1
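
As with the single cluster, you can check that the pods come up in each cluster without switching contexts back and forth:

kubectl --context kind-cluster1 -n kube-system get pods
kubectl --context kind-cluster2 -n kube-system get pods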

Setting up Cluster Mesh

We can complete setup by following the Cluster Mesh guide with Expose the Cilium etcd to other clusters. For Kind, we’ll want to deploy the NodePort service into the kube-system namespace.