Using kube-router to run BGP

This guide explains how to configure Cilium and kube-router to cooperate: kube-router handles BGP peering and route propagation, while Cilium handles policy enforcement and load balancing.


This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

Deploy kube-router

Download the kube-router DaemonSet template:

curl -LO

Open the file generic-kuberouter-only-advertise-routes.yaml and edit the args: section. The following arguments are required and must be set to exactly these values:

- "--run-router=true"
- "--run-firewall=false"
- "--run-service-proxy=false"
- "--enable-cni=false"
- "--enable-pod-egress=false"

The following arguments are optional and may be set according to your needs. To keep this guide simple, the values below are used because they require the least preparation in your cluster. Please see the kube-router user guide for more information.

- "--enable-ibgp=true"
- "--enable-overlay=true"
- "--advertise-cluster-ip=true"
- "--advertise-external-ip=true"
- "--advertise-loadbalancer-ip=true"

The following arguments are optional and should be set if you want BGP peering with an external router. This is useful if you want externally routable Kubernetes Pod and Service IPs. Note the values used here should be changed to whatever IPs and ASNs are configured on your external router.

- "--cluster-asn=65001"
- "--peer-router-ips=,10.0.2"
- "--peer-router-asns=65000,65000"
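
Putting the three groups together, the args: section of the kube-router container might look like the following sketch. The peer router IPs and ASNs are placeholders and must be replaced with the values configured on your external router:

```yaml
args:
  # Required: delegate only routing to kube-router
  - "--run-router=true"
  - "--run-firewall=false"
  - "--run-service-proxy=false"
  - "--enable-cni=false"
  - "--enable-pod-egress=false"
  # Optional: advertise Kubernetes IPs via BGP
  - "--enable-ibgp=true"
  - "--enable-overlay=true"
  - "--advertise-cluster-ip=true"
  - "--advertise-external-ip=true"
  - "--advertise-loadbalancer-ip=true"
  # Optional: peer with an external router (placeholder values)
  - "--cluster-asn=65001"
  - "--peer-router-ips=<external-router-ip>"
  - "--peer-router-asns=65000"
```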

Apply the DaemonSet file to deploy kube-router and verify it has come up correctly:

$ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
$ kubectl -n kube-system get pods -l k8s-app=kube-router
NAME                READY     STATUS    RESTARTS   AGE
kube-router-n6fv8   1/1       Running   0          10m
kube-router-nj4vs   1/1       Running   0          10m
kube-router-xqqwc   1/1       Running   0          10m
kube-router-xsmd4   1/1       Running   0          10m
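
If you configured external BGP peering, you can also inspect the session state from inside one of the kube-router pods. Recent kube-router images bundle the gobgp CLI; assuming your image does, and using a pod name from the example above:

```shell
$ kubectl -n kube-system exec -ti kube-router-n6fv8 -- gobgp neighbor
```

The peer should show an Established state once the BGP session with your external router has come up.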

Deploy Cilium

In order for routing to be delegated to kube-router, tunneling/encapsulation must be disabled. This is done by setting tunnel: "disabled" in the ConfigMap cilium-config or by adjusting the DaemonSet to run the cilium-agent with the argument --tunnel=disabled:

# Encapsulation mode for communication between nodes
# Possible values:
#   - disabled
#   - vxlan (default)
#   - geneve
tunnel: "disabled"

You can then install Cilium according to the instructions in section Requirements.

Ensure that Cilium is up and running:

$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY     STATUS    RESTARTS   AGE
cilium-fhpk2   1/1       Running   0          45m
cilium-jh6kc   1/1       Running   0          44m
cilium-rlx6n   1/1       Running   0          44m
cilium-x5x9z   1/1       Running   0          45m

Verify Installation

Verify that kube-router has installed routes:

$ kubectl -n kube-system exec -ti cilium-fhpk2 -- ip route list scope global
default via <router-ip> dev eth0 proto dhcp src <node-ip> metric 1024
<pod-cidr> via <node-ip> dev cilium_host src <node-ip>
<pod-cidr> via <router-ip> dev eth0 proto 17
<pod-cidr> dev tun-172011760 proto 17 src <node-ip>
<pod-cidr> dev tun-1720186231 proto 17 src <node-ip>

In the above example, we see three categories of routes that have been installed:

  • Local PodCIDR: This route points to all pods running on the host and makes these pods reachable to the other nodes in the cluster. It is installed via the cilium_host device.
  • BGP route: This type of route is installed if kube-router determines that the remote PodCIDR can be reached via a router known to the local host. It instructs pod-to-pod traffic to be forwarded directly to that router without requiring any encapsulation.
  • IPIP tunnel route: If no direct routing path exists, kube-router falls back to using an overlay and establishes an IPIP tunnel between the nodes, visible as routes via tun-* devices.
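
Since kube-router tags every route it installs with proto 17, it is easy to tell direct BGP routes apart from IPIP fallback routes by filtering on the tun- device prefix. A small sketch, using sample route output with illustrative addresses rather than real cluster state:

```shell
# Sample output of 'ip route list scope global'; kube-router marks
# the routes it installs with 'proto 17'.
routes='10.2.1.0/24 via 10.0.2.1 dev eth0 proto 17
10.2.3.0/24 dev tun-172011760 proto 17 src 10.2.2.172
10.2.4.0/24 dev tun-1720186231 proto 17 src 10.2.2.172'

# Direct BGP routes: proto 17, not via an IPIP tunnel device
bgp=$(printf '%s\n' "$routes" | grep 'proto 17' | grep -vc 'dev tun-')
# IPIP fallback routes: proto 17 via a tun- device
ipip=$(printf '%s\n' "$routes" | grep 'proto 17' | grep -c 'dev tun-')
echo "bgp=$bgp ipip=$ipip"
```

On a real node you would replace the sample variable with the live output of ip route list scope global.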

You can test connectivity by deploying the following connectivity checker pods:

$ kubectl create -f
$ kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-dd67f6b4b-s62jl                                  1/1     Running   0          2m15s
echo-b-55d8dbd74f-t8jwk                                 1/1     Running   0          2m15s
host-to-b-multi-node-clusterip-686f99995d-tn6kq         1/1     Running   0          2m15s
host-to-b-multi-node-headless-bdbc856d-9zv4x            1/1     Running   0          2m15s
pod-to-a-766584ffff-wh2s8                               1/1     Running   0          2m15s
pod-to-a-allowed-cnp-5899c44899-f9tdv                   1/1     Running   0          2m15s
pod-to-a-external-1111-55c488465-7sd55                  1/1     Running   0          2m14s
pod-to-a-l3-denied-cnp-856998c977-j9dhs                 1/1     Running   0          2m15s
pod-to-b-intra-node-7b6cbc6c56-hqz7r                    1/1     Running   0          2m15s
pod-to-b-multi-node-clusterip-77c8446b6d-qc8ch          1/1     Running   0          2m15s
pod-to-b-multi-node-headless-854b65674d-9zlp8           1/1     Running   0          2m15s
pod-to-external-fqdn-allow-google-cnp-bb9597947-bc85q   1/1     Running   0          2m14s
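
All connectivity-check pods should reach the Running state. A quick, generic way to surface any pods that have not is to filter on the pod phase:

```shell
$ kubectl get pods --field-selector status.phase!=Running
```

If this returns no pods, the connectivity check has deployed successfully.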