IPVLAN based Networking (beta)
This guide explains how to configure Cilium to set up an ipvlan-based datapath instead of the default veth-based one.
This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.
The feature lacks support of the following, which will be resolved in upcoming Cilium releases:
- IPVLAN L2 mode
- L7 policy enforcement
- FQDN Policies
- IPVLAN with tunneling
- BPF-based masquerading
The ipvlan-based datapath in L3 mode requires a v4.12 or more recent Linux kernel, while L3S mode additionally requires a stable kernel with the fix mentioned in this document (see below).
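As a quick sanity check on a node, the running kernel can be compared against the v4.12 minimum with `sort -V`. This is a sketch only; the `kernel_at_least` helper name is made up, and it assumes `uname -r` starts with a plain dotted version:

```shell
# kernel_at_least CURRENT REQUIRED -> prints "ok" when CURRENT >= REQUIRED,
# otherwise "too old". Sketch only: assumes plain dotted version strings,
# as produced by `uname -r | cut -d- -f1`.
kernel_at_least() {
  if [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; then
    echo ok
  else
    echo "too old"
  fi
}

# On a node, check against the v4.12 minimum for ipvlan L3 mode:
kernel_at_least "$(uname -r | cut -d- -f1)" 4.12
```

The same check with `4.20.6` as the required version applies when running L3S mode (see the kernel fix note below).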
First, make sure you have Helm 3 installed.
If you have (or are planning to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue as both versions are mutually compatible in order to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).
Setup Helm repository:
helm repo add cilium https://helm.cilium.io/
Deploy Cilium release via Helm:
```shell
helm install cilium cilium/cilium --version 1.8.8 \
  --namespace kube-system \
  --set global.datapathMode=ipvlan \
  --set global.ipvlan.masterDevice=eth0 \
  --set global.tunnel=disabled
```
It is required to specify the master ipvlan device, which typically points to a
networking device that is facing the external network. This is done by setting
global.ipvlan.masterDevice to the name of the networking device, for example
"bond0". Be aware that this option will be used by all nodes, so this device
name must be consistent on all nodes where you are going to deploy Cilium.
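One common heuristic for finding a consistent master device is to take the interface that holds the default route on each node. The sketch below is not an official Cilium tool; the `default_iface` helper name is made up, and the sample route line uses made-up addresses:

```shell
# default_iface ROUTE_LINE -> extracts the interface name after "dev" from
# an `ip route` line. On a real node you would feed it live output, e.g.:
#   default_iface "$(ip route get 8.8.8.8 | head -n1)"
default_iface() {
  printf '%s\n' "$1" | sed -n 's/.* dev \([^ ]*\).*/\1/p'
}

# Example with a canned route line (hypothetical addresses):
default_iface "8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10"
```

Run the check on every node before deploying; if the interface names differ (e.g. `eth0` on some nodes, `bond0` on others), ipvlan mode cannot be configured with a single global.ipvlan.masterDevice value.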
The ipvlan datapath only supports direct routing mode right now, therefore
tunneling must be disabled by setting global.tunnel to "disabled".
To make ipvlan work between hosts, routes on each host have to be installed
either manually or automatically by Cilium. The latter can be enabled by
setting global.autoDirectNodeRoutes to "true".
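For the manual variant, the routes amount to one entry per remote node: the remote node's pod CIDR routed via that node's IP. A sketch with made-up CIDRs and node addresses (the `print_routes` helper is hypothetical, and it only prints the commands rather than running them):

```shell
# print_routes reads "CIDR VIA" pairs on stdin and prints the `ip route`
# command a node would need for each remote node's pod CIDR.
# All sample values below are hypothetical.
print_routes() {
  while read -r cidr via; do
    echo "ip route add $cidr via $via"
  done
}

print_routes <<'EOF'
10.217.1.0/24 192.168.1.11
10.217.2.0/24 192.168.1.12
EOF
```

Each node needs such routes for every other node's pod CIDR, which is why letting Cilium install them automatically is usually preferable on clusters of any size.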
The global.installIptablesRules parameter is optional; if set to
"false", then Cilium will not install any iptables rules, which are
mainly for interaction with kube-proxy, and additionally it will trigger
ipvlan setup in L3 mode. For the default case where the latter is "true",
ipvlan is operated in L3S mode such that netfilter in the host namespace
is not bypassed. Optionally, the agent can also be set up for masquerading
all traffic leaving the ipvlan master device if global.masquerade is set to
"true". Note that in order for L3S mode to work correctly, a kernel
with the following fix is required: d5256083f62e.
This fix is included in stable kernels
4.20.6 or higher. Without this kernel fix, ipvlan in L3S mode cannot
connect to the kube-apiserver.
Masquerading with iptables in L3-only mode is not possible since netfilter hooks are bypassed in the kernel in this mode; hence L3S ("symmetric") mode had to be introduced in the kernel, at the cost of some performance.
Example Helm deployment for ipvlan in pure L3 mode:
```shell
helm install cilium cilium/cilium --version 1.8.8 \
  --namespace kube-system \
  --set global.datapathMode=ipvlan \
  --set global.ipvlan.masterDevice=bond0 \
  --set global.tunnel=disabled \
  --set global.installIptablesRules=false \
  --set global.l7Proxy.enabled=false \
  --set global.autoDirectNodeRoutes=true
```
Example Helm deployment for ipvlan in L3S mode with iptables masquerading all traffic leaving the node:
```shell
helm install cilium cilium/cilium --version 1.8.8 \
  --namespace kube-system \
  --set global.datapathMode=ipvlan \
  --set global.ipvlan.masterDevice=bond0 \
  --set global.tunnel=disabled \
  --set global.masquerade=true \
  --set global.autoDirectNodeRoutes=true
```
Verify that it has come up correctly:
```shell
kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-crf7f   1/1     Running   0          10m
```
For further information on Cilium’s ipvlan datapath mode, see eBPF Datapath.