Per-node configuration
The Cilium agent, which runs as a DaemonSet, supports setting configuration on a per-node basis. This allows overriding the cilium-config ConfigMap for a node or set of nodes. It is managed by CiliumNodeConfig objects.
This feature is useful for:
- Gradually rolling out changes.
- Selectively enabling features that require specific hardware.
CiliumNodeConfig objects
A CiliumNodeConfig object allows overriding the ConfigMap and agent arguments for a set of nodes.
It consists of a set of fields and a label selector. The label selector defines the nodes to which the configuration applies. As is standard in Kubernetes, an empty LabelSelector (e.g. {}) selects all nodes.
Note
Creating or modifying a CiliumNodeConfig will not cause changes to take effect until the affected Cilium agent pods are deleted and re-created (or their node is restarted).
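For example, to pick up configuration changes cluster-wide, you can trigger a rolling restart of the agent DaemonSet. This sketch assumes the default installation, where the DaemonSet is named cilium in the kube-system namespace:

kubectl -n kube-system rollout restart daemonset/cilium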
Example: selective XDP enablement
To enable LoadBalancer and NodePort XDP acceleration only on nodes with the necessary hardware, label the relevant nodes and override their configuration:
apiVersion: cilium.io/v2
kind: CiliumNodeConfig
metadata:
  namespace: kube-system
  name: enable-xdp
spec:
  nodeSelector:
    matchLabels:
      io.cilium.xdp-offload: "true"
  defaults:
    bpf-lb-acceleration: native
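The override applies only to nodes carrying the matching label. For instance, to opt a node in (the node name worker-1 is a placeholder):

kubectl label node worker-1 io.cilium.xdp-offload=true

Remember to restart the Cilium pod on that node afterwards so the setting takes effect.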
Example: gradual kube-proxy replacement rollout
To roll out kube-proxy replacement gradually, you can use the CiliumNodeConfig feature. The steps below label each migrated node with io.cilium.migration/kube-proxy-replacement: true and enable kube-proxy replacement only on those nodes.
Warning
You must have installed Cilium with the Helm values k8sServiceHost and k8sServicePort. Otherwise, Cilium will not be able to reach the Kubernetes API server after kube-proxy is uninstalled. You can apply these two values to a running cluster via helm upgrade.
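For example, a minimal helm upgrade might look like the following, where API_SERVER_IP and API_SERVER_PORT are placeholders for your API server's address, and the release name cilium and chart cilium/cilium assume a standard Helm install:

helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set k8sServiceHost=${API_SERVER_IP} \
  --set k8sServicePort=${API_SERVER_PORT}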
Patch the kube-proxy DaemonSet so that it runs only on unmigrated nodes:
kubectl -n kube-system patch daemonset kube-proxy --patch '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "io.cilium.migration/kube-proxy-replacement", "operator": "NotIn", "values": ["true"]}]}]}}}}}}}'
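You can then confirm that kube-proxy pods remain only on unmigrated nodes. The k8s-app=kube-proxy label assumes a standard kubeadm-style deployment:

kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide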
Configure Cilium to use kube-proxy replacement on migrated nodes:
cat <<EOF | kubectl apply --server-side -f -
apiVersion: cilium.io/v2
kind: CiliumNodeConfig
metadata:
  namespace: kube-system
  name: kube-proxy-replacement
spec:
  nodeSelector:
    matchLabels:
      io.cilium.migration/kube-proxy-replacement: "true"
  defaults:
    kube-proxy-replacement: "true"
    kube-proxy-replacement-healthz-bind-address: "0.0.0.0:10256"
EOF
Select a node to migrate. Optionally, cordon and drain that node:
export NODE=kind-worker
kubectl label node $NODE --overwrite 'io.cilium.migration/kube-proxy-replacement=true'
kubectl cordon $NODE
Delete the Cilium pod on the node so the agent restarts and picks up the new configuration:
kubectl -n kube-system delete pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE
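Optionally, wait for the replacement pod to become ready before proceeding (kubectl wait supports field selectors in recent Kubernetes versions):

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE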
Ensure Cilium has the correct configuration:
kubectl -n kube-system exec $(kubectl -n kube-system get pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE -o name) -c cilium-agent -- \
    cilium config get kube-proxy-replacement

The command should print true.
Uncordon the node:
kubectl uncordon $NODE
Cleanup: once all nodes are migrated, make kube-proxy replacement the cluster-wide default and remove the per-node override:
cilium config set --restart=false kube-proxy-replacement true
cilium config set --restart=false kube-proxy-replacement-healthz-bind-address "0.0.0.0:10256"
kubectl -n kube-system delete ciliumnodeconfig kube-proxy-replacement
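Because --restart=false defers the agent restart, the new cluster-wide defaults take effect the next time the agent pods are recreated. Since every migrated node already runs with the same settings, no immediate restart is needed; to force one anyway, a rolling restart works (assuming the DaemonSet is named cilium):

kubectl -n kube-system rollout restart daemonset/cilium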
Cleanup: delete the kube-proxy DaemonSet and remove the migration label from all nodes:
kubectl -n kube-system delete daemonset kube-proxy
kubectl label node --all --overwrite 'io.cilium.migration/kube-proxy-replacement-'
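To verify the cleanup, check that no kube-proxy pods remain (again assuming the standard k8s-app=kube-proxy label):

kubectl -n kube-system get pods -l k8s-app=kube-proxy

The command should report that no resources were found.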