Installation on Google GKE

GKE Requirements

  1. Install the Google Cloud SDK (gcloud); see Installing Google Cloud SDK.
  2. Create a project or use an existing one:
export GKE_PROJECT=gke-clusters
gcloud projects create $GKE_PROJECT
gcloud config set project $GKE_PROJECT
  3. Enable the GKE API for the project if it is not already enabled:
gcloud services enable container.googleapis.com
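
To confirm the API is enabled, you can list the enabled services and look for container.googleapis.com (a quick sanity check, not a required step):

gcloud services list --enabled | grep container.googleapis.com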

Create a GKE Cluster

You can use any method to create a GKE cluster. The example here uses the Google Cloud SDK.

export CLUSTER_NAME=cluster1
gcloud container clusters create $CLUSTER_NAME --image-type COS --num-nodes 2
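
If kubectl is not already pointed at the new cluster, you can fetch credentials with gcloud (cluster creation usually configures kubectl for you, so this step may be unnecessary):

gcloud container clusters get-credentials $CLUSTER_NAME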

When done, you should be able to access your cluster like this:

kubectl get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
gke-cluster1-default-pool-a63a765c-flr2   Ready    <none>   6m    v1.11.7-gke.4
gke-cluster1-default-pool-a63a765c-z73c   Ready    <none>   6m    v1.11.7-gke.4

Deploy Cilium

Extract the Cluster CIDR to enable native-routing:

NATIVE_CIDR=$(gcloud container clusters describe $CLUSTER_NAME | grep -i clusterIpv4Cidr | awk '{print $2}')
echo $NATIVE_CIDR
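
Alternatively, gcloud can print the field directly with its --format option, which avoids the grep/awk pipeline (equivalent output, assuming the same default zone and project):

NATIVE_CIDR=$(gcloud container clusters describe $CLUSTER_NAME --format 'value(clusterIpv4Cidr)')
echo $NATIVE_CIDR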

Note

First, make sure you have Helm 3 installed.

If you have (or plan to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).

Set up the Helm repository:

helm repo add cilium https://helm.cilium.io/
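
To verify that the repository was added and to see which chart versions it serves, you can refresh and search it (standard Helm 3 commands):

helm repo update
helm search repo cilium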

Deploy the Cilium release via Helm:

If you are ready to restart existing pods when the nodes are initialized, you can also pass the --set nodeinit.restartPods flag to the helm command below. This ensures that all pods are managed by Cilium.

kubectl create namespace cilium
helm install cilium cilium/cilium --version 1.7.4 \
  --namespace cilium \
  --set global.nodeinit.enabled=true \
  --set nodeinit.reconfigureKubelet=true \
  --set nodeinit.removeCbrBridge=true \
  --set global.cni.binPath=/home/kubernetes/bin \
  --set global.gke.enabled=true \
  --set global.native-routing-cidr=$NATIVE_CIDR
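
You can confirm that the release was created and that the agent is rolling out (a quick check, assuming the chart's default names: release cilium, DaemonSet cilium):

helm status cilium --namespace cilium
kubectl -n cilium rollout status daemonset/cilium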

The NodeInit DaemonSet is required to prepare GKE nodes as they are added to the cluster. It performs the following actions:

  • Reconfigure kubelet to run in CNI mode
  • Mount the BPF filesystem
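
Once a Cilium agent pod is running, you can spot-check the BPF filesystem mount from inside it (a sanity check only; the k8s-app=cilium label is assumed to be the selector used by the chart's DaemonSet):

CILIUM_POD=$(kubectl -n cilium get pods -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
kubectl -n cilium exec $CILIUM_POD -- mount | grep /sys/fs/bpf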

Restart unmanaged Pods

If you did not pass nodeinit.restartPods=true in the Helm options when deploying Cilium, unmanaged pods need to be restarted manually. Restart all pods which are already running and are not in host-networking mode so that Cilium starts managing them. This is required to ensure that all pods which were running before Cilium was deployed get network connectivity provided by Cilium and have NetworkPolicy applied to them:

kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1.6.0-beta.1-56d5d5d87f-qw8pv" deleted
pod "kube-dns-5f8689dbc9-2nzft" deleted
pod "kube-dns-5f8689dbc9-j7x5f" deleted
pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted
pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted
pod "l7-default-backend-6f8697844f-d2rq2" deleted
pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" deleted

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

kubectl -n cilium get pods --watch
NAME                                    READY   STATUS              RESTARTS   AGE
cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium-s8w5m                            0/1     PodInitializing     0          7s
coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium-s8w5m                            1/1     Running   0          4m12s
coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
coredns-86c58d9df4-4l6b2                1/1     Running   0          13m
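
For a check that goes beyond pod status, you can ask the agent for its own health summary (again reusing $CILIUM_POD from the earlier step):

kubectl -n cilium exec $CILIUM_POD -- cilium status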

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.7.4/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments which use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

kubectl get pods
NAME                                                     READY   STATUS             RESTARTS   AGE
echo-a-9b85dd869-292s2                                   1/1     Running            0          8m37s
echo-b-c7d9f4686-gdwcs                                   1/1     Running            0          8m37s
host-to-b-multi-node-clusterip-6d496f7cf9-956jb          1/1     Running            0          8m37s
host-to-b-multi-node-headless-bd589bbcf-jwbh2            1/1     Running            0          8m37s
pod-to-a-7cc4b6c5b8-9jfjb                                1/1     Running            0          8m36s
pod-to-a-allowed-cnp-6cc776bb4d-2cszk                    1/1     Running            0          8m36s
pod-to-a-external-1111-5c75bd66db-sxfck                  1/1     Running            0          8m35s
pod-to-a-l3-denied-cnp-7fdd9975dd-2pp96                  1/1     Running            0          8m36s
pod-to-b-intra-node-9d9d4d6f9-qccfs                      1/1     Running            0          8m35s
pod-to-b-multi-node-clusterip-5956c84b7c-hwzfg           1/1     Running            0          8m35s
pod-to-b-multi-node-headless-6698899447-xlhfw            1/1     Running            0          8m35s
pod-to-external-fqdn-allow-google-cnp-667649bbf6-v6rf8   1/1     Running            0          8m35s
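
When you are finished testing, the connectivity-check deployments can be removed with the same manifest:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.7.4/examples/kubernetes/connectivity-check/connectivity-check.yaml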

Install Hubble

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. Visit the Hubble GitHub page.

Generate the deployment files using Helm and deploy them:

git clone https://github.com/cilium/hubble.git --branch v0.5
cd hubble/install/kubernetes

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    --set ui.enabled=true \
> hubble.yaml

Deploy Hubble:

kubectl apply -f hubble.yaml
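
To open the Hubble UI locally, you can port-forward its service and browse to http://localhost:12000 (the hubble-ui service name and port 80 are assumptions based on this Hubble version's chart defaults; adjust them if your values differ):

kubectl -n kube-system port-forward svc/hubble-ui 12000:80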