Getting Started Using Minikube¶
This guide uses minikube to demonstrate deployment and operation of Cilium in a single-node Kubernetes cluster. The minikube VM requires approximately 5GB of RAM and supports hypervisors like VirtualBox that run on Linux, macOS, and Windows.
Install kubectl & minikube¶
- Install kubectl version >= v1.10.0 as described in the Kubernetes Docs.
- Install minikube >= v1.3.1 as per the minikube documentation: Install Minikube.
Note
It is important to validate that you have minikube >= v1.3.1 installed. Older versions of minikube ship a kernel configuration that is not compatible with the TPROXY requirements of Cilium >= 1.6.0.
minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
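The version requirement above can also be checked mechanically. A minimal sketch, where `version_ge` is a hypothetical helper (not part of minikube) and version ordering relies on `sort -V` from GNU coreutils:

```shell
# Sketch: verify the installed minikube meets the v1.3.1 minimum.
# version_ge is a hypothetical helper; it succeeds when $1 >= $2 in
# version order, using `sort -V` (GNU coreutils).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if command -v minikube >/dev/null 2>&1; then
  ver=$(minikube version | sed -n 's/^minikube version: v//p')
  if version_ge "$ver" "1.3.1"; then
    echo "minikube $ver is new enough"
  else
    echo "minikube $ver is too old for Cilium >= 1.6.0" >&2
  fi
fi
```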
- Create a minikube cluster:
minikube start --network-plugin=cni --memory=4096
# Only available for minikube >= v1.12.1
minikube start --cni=cilium --memory=4096
Note
From minikube v1.12.1+, the Cilium networking plugin can be enabled directly with the --cni=cilium parameter of the minikube start command, as shown above. With this flag enabled, minikube will not only mount the eBPF file system but also deploy quick-install.yaml automatically.
- Mount the eBPF filesystem
minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
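The mount step can be made idempotent, which is useful when you are unsure whether the BPF filesystem is already mounted (it is, for example, when minikube was started with --cni=cilium). A sketch, where `bpffs_mounted` is a hypothetical helper that parses /proc/mounts-style input:

```shell
# Hypothetical helper: succeeds when the input (in /proc/mounts format)
# contains a bpf filesystem mounted at /sys/fs/bpf.
bpffs_mounted() {
  grep -q ' /sys/fs/bpf bpf '
}

# Only attempt the mount when it is not already present (assumes a
# running minikube VM).
if command -v minikube >/dev/null 2>&1; then
  if minikube ssh -- cat /proc/mounts | bpffs_mounted; then
    echo "bpffs already mounted"
  else
    minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
  fi
fi
```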
Note
To install Cilium on a specific Kubernetes version, the --kubernetes-version vx.y.z parameter can be appended to the minikube start command when bootstrapping the local cluster. By default, minikube will install the most recent version of Kubernetes.
Install Cilium¶
Install Cilium as a DaemonSet into your new Kubernetes cluster. The DaemonSet will automatically install itself as the Kubernetes CNI plugin.
Note
quick-install.yaml is a pre-rendered Cilium chart template. The template is generated using the helm template command with default configuration parameters, without any customization.
If you are installing Cilium with CRI-O, please see the CRI-O instructions.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/quick-install.yaml
Warning
experimental-install.yaml is a pre-rendered Cilium chart template with experimental features enabled. These features may include unreleased or beta features that are not considered production-ready. While it provides a convenient way to try out experimental features, it should only be used in testing environments.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/experimental-install.yaml
Validate the Installation¶
You can monitor progress as Cilium and all required components are being installed:
kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
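Instead of eyeballing the --watch output, you can count fully-ready pods from the READY column. A sketch, where `ready_pods` is a hypothetical helper that operates on `kubectl get pods` output read from stdin:

```shell
# Hypothetical helper: counts pods whose READY column is n/n
# (all containers ready) in `kubectl get pods` output on stdin.
ready_pods() {
  awk 'NR > 1 && split($2, r, "/") == 2 && r[1] == r[2] { n++ } END { print n + 0 }'
}

# Usage against the live cluster (assumes kubectl targets the minikube
# cluster):
#   kubectl -n kube-system get pods | ready_pods
```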
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this:
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes/connectivity-check/connectivity-check.yaml
This will deploy a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-6788c799fd-42qxx 1/1 Running 0 69s
echo-b-59757679d4-pjtdl 1/1 Running 0 69s
echo-b-host-f86bd784d-wnh4v 1/1 Running 0 68s
host-to-b-multi-node-clusterip-585db65b4d-x74nz 1/1 Running 0 68s
host-to-b-multi-node-headless-77c64bc7d8-kgf8p 1/1 Running 0 67s
pod-to-a-allowed-cnp-87b5895c8-bfw4x 1/1 Running 0 68s
pod-to-a-b76ddb6b4-2v4kb 1/1 Running 0 68s
pod-to-a-denied-cnp-677d9f567b-kkjp4 1/1 Running 0 68s
pod-to-b-intra-node-nodeport-8484fb6d89-bwj8q 1/1 Running 0 68s
pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk 1/1 Running 0 68s
pod-to-b-multi-node-headless-5fd98b9648-5bjj8 1/1 Running 0 68s
pod-to-b-multi-node-nodeport-74bd8d7bd5-kmfmm 1/1 Running 0 68s
pod-to-external-1111-7489c7c46d-jhtkr 1/1 Running 0 68s
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75 1/1 Running 0 68s
Note
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
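To distinguish the expected Pending pods from real failures on a single-node cluster, you can filter on the STATUS column. A sketch, where `failed_pods` is a hypothetical helper that reads `kubectl get pods` output on stdin (note that transient states such as ContainerCreating will also be listed while pods start up):

```shell
# Hypothetical helper: prints pods whose STATUS is neither Running nor
# Pending (Pending is expected for multi-node checks on a single node).
failed_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Pending" { print $1 }'
}

# Usage (assumes kubectl targets the minikube cluster):
#   kubectl -n cilium-test get pods | failed_pods
```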
Next steps¶
Now that you have a Kubernetes cluster with Cilium up and running, you can take a couple of next steps to explore various capabilities: