Installation on Google GKE¶
GKE Requirements¶
- Install the Google Cloud SDK (gcloud); see Installing Google Cloud SDK.
- Create a project or use an existing one:
export GKE_PROJECT=gke-clusters
gcloud projects create $GKE_PROJECT
gcloud config set project $GKE_PROJECT
- Enable the GKE API for the project if not already done:
gcloud services enable container.googleapis.com
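You can confirm the API is enabled before continuing. A quick check (a sketch; it simply greps the enabled-services list rather than using gcloud's filter syntax):
gcloud services list --enabled | grep container.googleapis.com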
Create a GKE Cluster¶
You can use any method to create a GKE cluster. The example given here uses the Google Cloud SDK.
Note
Either the cluster zone or region must be specified in the gcloud commands below. The full list of locations is available on this page. This guide uses --zone to specify the zone, but you may replace this flag with --region instead.
export CLUSTER_NAME=cluster1
export CLUSTER_ZONE=us-west2-a
gcloud container clusters create $CLUSTER_NAME --image-type COS --num-nodes 2 --machine-type n1-standard-4 --zone $CLUSTER_ZONE
Retrieve the credentials to access the cluster:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
When done, you should be able to access your cluster like this:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster1-default-pool-a63a765c-flr2 Ready <none> 6m v1.14.10-gke.36
gke-cluster1-default-pool-a63a765c-z73c Ready <none> 6m v1.14.10-gke.36
Deploy Cilium¶
Extract the Cluster CIDR to enable native-routing:
NATIVE_CIDR="$(gcloud container clusters describe $CLUSTER_NAME --zone $CLUSTER_ZONE --format 'value(clusterIpv4Cidr)')"
echo $NATIVE_CIDR
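The printed value is the cluster's pod CIDR and will differ per cluster; an illustrative example:
10.8.0.0/14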
Note
First, make sure you have Helm 3 installed. Helm 2 is no longer supported.
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
If you are ready to restart existing pods when initializing the node, you can also pass the --set nodeinit.restartPods=true flag to the helm command below. This will ensure all pods are managed by Cilium.
helm install cilium cilium/cilium --version 1.9.4 \
  --namespace kube-system \
  --set nodeinit.enabled=true \
  --set nodeinit.reconfigureKubelet=true \
  --set nodeinit.removeCbrBridge=true \
  --set cni.binPath=/home/kubernetes/bin \
  --set gke.enabled=true \
  --set ipam.mode=kubernetes \
  --set nativeRoutingCIDR=$NATIVE_CIDR
The NodeInit DaemonSet is required to prepare the GKE nodes as they are added to the cluster. It will perform the following actions:
- Reconfigure kubelet to run in CNI mode
- Mount the eBPF filesystem
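As a sanity check for the second action, you can verify the eBPF filesystem mount from inside one of the Cilium agent pods once they are running; the output should show a bpf mount on /sys/fs/bpf (a sketch; assumes the agent DaemonSet is named cilium in kube-system):
kubectl -n kube-system exec ds/cilium -- mount -t bpf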
Restart unmanaged Pods¶
If you did not pass the nodeinit.restartPods=true flag in the Helm options when deploying Cilium, then unmanaged pods need to be restarted manually. Restart all already-running pods which are not in host-networking mode to ensure that Cilium starts managing them. This is required so that all pods which were running before Cilium was deployed get network connectivity provided by Cilium and have NetworkPolicy applied to them:
kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1.6.0-beta.1-56d5d5d87f-qw8pv" deleted
pod "kube-dns-5f8689dbc9-2nzft" deleted
pod "kube-dns-5f8689dbc9-j7x5f" deleted
pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted
pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted
pod "l7-default-backend-6f8697844f-d2rq2" deleted
pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" deleted
Note
This may error out on macOS due to -r being unsupported by xargs. In this case you can safely run this command without -r, with the symptom that it will hang if there are no pods to restart. You can stop it with ctrl-c.
Validate the Installation¶
You can monitor as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-bbpwg 0/1 PodInitializing 0 27s
cilium-node-init-jwtw6 1/1 Running 0 27s
cilium-node-init-t5cm9 1/1 Running 0 27s
cilium-operator-7967c75f94-ckd5g 0/1 Pending 0 27s
cilium-rnrxr 0/1 Running 0 27s
It may take a couple of minutes for all components to come up:
kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
cilium-bbpwg 1/1 Running 0 70s
cilium-node-init-jwtw6 1/1 Running 0 70s
cilium-node-init-t5cm9 1/1 Running 0 70s
cilium-operator-7967c75f94-ckd5g 1/1 Running 0 70s
cilium-rnrxr 1/1 Running 0 70s
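You can also ask an agent directly for a health summary; a healthy agent prints OK (a sketch; runs cilium status inside a pod of the cilium DaemonSet):
kubectl -n kube-system exec ds/cilium -- cilium status --brief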
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected, since these pods need at least 2 nodes to be scheduled successfully.
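Once you are done with the test, the namespace and all test deployments in it can be removed:
kubectl delete ns cilium-test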
Specify Environment Variables¶
Specify the namespace in which Cilium is installed as the CILIUM_NAMESPACE environment variable. Subsequent commands reference this environment variable. In this guide, Cilium was installed into the kube-system namespace:
export CILIUM_NAMESPACE=kube-system
Enable Hubble for Cluster-Wide Visibility¶
Hubble is the component for observability in Cilium. To obtain cluster-wide visibility into your network traffic, deploy Hubble Relay and the UI as follows on your existing installation:
If you installed Cilium via helm install, you may enable Hubble Relay and UI with the following command:
helm upgrade cilium cilium/cilium --version 1.9.4 \
  --namespace $CILIUM_NAMESPACE \
  --reuse-values \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
On Cilium 1.9.1 and older, the Cilium agent pods will be restarted in the process.
If you installed Cilium 1.9.2 or newer via the provided quick-install.yaml, you may deploy Hubble Relay and UI on top of your existing installation with the following command:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml
Installation via quick-hubble-install.yaml only works if the installed Cilium version is 1.9.2 or newer. Users of Cilium 1.9.0 or 1.9.1 are encouraged to upgrade to a newer version by applying the most recent Cilium quick-install.yaml first.
Alternatively, it is possible to manually generate a YAML manifest for the Cilium DaemonSet and Hubble Relay/UI as follows. The generated YAML can be applied on top of an existing installation:
# Set this to your installed Cilium version
export CILIUM_VERSION=1.9.1
# Please set any custom Helm values you may need for Cilium,
# such as for example `--set operator.replicas=1` on single-node clusters.
helm template cilium cilium/cilium --version $CILIUM_VERSION \
  --namespace $CILIUM_NAMESPACE \
  --set hubble.tls.auto.method="cronJob" \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true > cilium-with-hubble.yaml
# This will modify your existing Cilium DaemonSet and ConfigMap
kubectl apply -f cilium-with-hubble.yaml
The Cilium agent pods will be restarted in the process.
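If you want to wait for the restart to converge before continuing, you can watch the rollout (assuming the agent DaemonSet is named cilium):
kubectl -n $CILIUM_NAMESPACE rollout status daemonset/cilium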
Once the Hubble UI pod is started, use port forwarding for the hubble-ui service. This allows opening the UI locally on a browser:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
And then open http://localhost:12000/ to access the UI.
Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions below:
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
tar zxf hubble-linux-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
Download the latest hubble release:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
tar zxf hubble-darwin-amd64.tar.gz
and move the hubble CLI to a directory listed in the $PATH environment variable. For example:
sudo mv hubble /usr/local/bin
Download the latest hubble release:
curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
set /p HUBBLE_VERSION=<stable.txt
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
certutil -hashfile hubble-windows-amd64.tar.gz SHA256
type hubble-windows-amd64.tar.gz.sha256sum
:: verify that the checksum from the two commands above match
tar zxf hubble-windows-amd64.tar.gz
and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.
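Whichever platform you installed on, you can verify that the CLI is found on your path:
hubble version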
Similarly to the UI, use port forwarding for the hubble-relay service to make it available locally:
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
In a separate terminal window, run the hubble status command, specifying the Hubble Relay address:
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5455/16384 (33.29%)
Flows/s: 11.30
Connected Nodes: 4/4
If Hubble Relay reports that all nodes are connected, as in the example output above, you can now use the CLI to observe flows of the entire cluster:
hubble --server localhost:4245 observe
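hubble observe also accepts filter flags. For example, to continuously follow flows from the connectivity test namespace created earlier (assuming it is still deployed):
hubble --server localhost:4245 observe --namespace cilium-test --follow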
If you encounter any problem at this point, you may seek help on Slack.
Tip
Hubble CLI configuration can be persisted using a configuration file or environment variables. This avoids having to specify options specific to a particular environment every time a command is run. Run hubble help config for more information.
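For example, instead of passing --server on every invocation, the address can be exported once per shell session (assuming your Hubble CLI version maps the flag to the HUBBLE_SERVER environment variable):
export HUBBLE_SERVER=localhost:4245
hubble status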
For more information about Hubble and its components, see the Observability section.