Getting Started Using K3s
This guide walks you through the installation of Cilium on K3s, a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
This guide assumes installation on the amd64 architecture. Cilium is presently supported on amd64, with ARM support planned for a future release.
Install a Master Node
The first step is to install a K3s master node, making sure to disable the default CNI plugin (flannel) and the built-in network policy controller:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -
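K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. As a quick sanity check (not part of the original guide), you can point kubectl at that file and confirm the master node has registered:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes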
Install Agent Nodes (Optional)
K3s can run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node token, which can be found on the master node at /var/lib/rancher/k3s/server/node-token.
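For example (illustrative values only; the address below is an assumption, so substitute your own), you can print the token on the master node and export the variables used by the join command that follows:
# On the master node: print the join token
sudo cat /var/lib/rancher/k3s/server/node-token
# On each agent node: set the variables referenced below
export MASTER_IP=192.168.1.10
export NODE_TOKEN=<token printed on the master node>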
Install K3s on the agent nodes and join them to the master node, making sure to replace the variables with values from your environment:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--no-flannel' K3S_URL='https://${MASTER_IP}:6443' K3S_TOKEN=${NODE_TOKEN} sh -
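After the installation completes, you can verify from the master node that the agents have joined (a quick check, not part of the original guide):
sudo k3s kubectl get nodes -o wide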
Should you encounter any issues during the installation, please refer to the Troubleshooting section or seek help on the Slack channel.
Please consult the Kubernetes Requirements for information on how to configure your Kubernetes cluster to operate with Cilium.
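For example, the requirements include a minimum Linux kernel version (4.9.x or newer for this release series); a quick way to check each node is:
uname -r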
Mount the eBPF Filesystem
On each node, run the following to mount the eBPF filesystem:
sudo mount bpffs -t bpf /sys/fs/bpf
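Note that this mount does not persist across reboots. One option, assuming fstab-managed mounts fit your setup, is to add an entry to /etc/fstab:
echo 'bpffs /sys/fs/bpf bpf defaults 0 0' | sudo tee -a /etc/fstab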
Install Cilium
Install Cilium as a DaemonSet into your new Kubernetes cluster. The DaemonSet will automatically install itself as the Kubernetes CNI plugin.
Note
quick-install.yaml is a pre-rendered Cilium chart template. The template is generated using the helm template command with default configuration parameters and without any customization.
If you are installing Cilium with CRI-O, please see the CRI-O instructions.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
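You can optionally follow the DaemonSet rollout before moving on:
kubectl -n kube-system rollout status daemonset/cilium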
Warning
experimental-install.yaml is a pre-rendered Cilium chart template with experimental features enabled. These features may include unreleased or beta features that are not considered production-ready. While it provides a convenient way to try out experimental features, it should only be used in testing environments.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/experimental-install.yaml
Restart unmanaged Pods
If you did not set nodeinit.restartPods=true in the Helm options when deploying Cilium, then unmanaged pods need to be restarted manually. Restart all already-running pods which are not running in host-networking mode to ensure that Cilium starts managing them. This is required so that all pods which were running before Cilium was deployed have network connectivity provided by Cilium and NetworkPolicy applies to them:
kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1.6.0-beta.1-56d5d5d87f-qw8pv" deleted
pod "kube-dns-5f8689dbc9-2nzft" deleted
pod "kube-dns-5f8689dbc9-j7x5f" deleted
pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted
pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted
pod "l7-default-backend-6f8697844f-d2rq2" deleted
pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" deleted
Note
This may error out on macOS because -r is unsupported by that platform's xargs. In this case you can safely run the command without -r (shown below); the only symptom is that it will hang when there are no pods to restart. You can stop it with ctrl-c.
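For reference, here is the same pipeline with the -r flag removed:
kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 kubectl delete pod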
Validate the Installation
You can monitor Cilium and all required components as they are being installed:
kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
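Once the cilium pod reports Running, you can additionally inspect the agent's health report using the cilium status command shipped in the agent image (kubectl exec against the DaemonSet picks one of its pods):
kubectl -n kube-system exec ds/cilium -- cilium status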
Deploy the connectivity test
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this:
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments which use various connectivity paths to connect to each other. The connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected since these pods need at least two nodes to be scheduled successfully.
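When you are finished with the test, you can clean up by deleting the namespace, which removes all of the connectivity-check deployments:
kubectl delete ns cilium-test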
Now that you have a Kubernetes cluster with Cilium up and running, you can take a couple of next steps to explore its various capabilities.