Kubernetes Ingress Support
Cilium uses the standard Kubernetes Ingress resource definition, with an ingressClassName of cilium. This can be used for path-based routing and for TLS termination. For backwards compatibility, the kubernetes.io/ingress.class annotation with value of cilium is also supported.
Note
The ingress controller creates a Service of LoadBalancer type, so your environment will need to support this.
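For reference, a minimal Ingress using the cilium ingressClassName might look like the following sketch. The hostname, Service name, and TLS secret name are placeholders; the secret is assumed to already exist in the same namespace.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress               # hypothetical name
spec:
  ingressClassName: cilium            # selects the Cilium ingress controller
  tls:
    - hosts:
        - example.internal            # placeholder hostname
      secretName: example-tls         # assumed pre-existing TLS secret
  rules:
    - host: example.internal
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: example-service # placeholder backend Service
                port:
                  number: 80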
Cilium allows you to specify load balancer mode for the Ingress resource:

- dedicated: The Ingress controller will create a dedicated loadbalancer for the Ingress.
- shared: The Ingress controller will use a shared loadbalancer for all Ingress resources.
Each load balancer mode has its own benefits and drawbacks. The shared mode saves resources by sharing a single LoadBalancer config across all Ingress resources in the cluster, while the dedicated mode can help to avoid potential conflicts (e.g. path prefix) between resources.
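The mode can also be overridden per Ingress with the ingress.cilium.io/loadbalancer-mode annotation (described in the annotations table below). A minimal sketch, with a placeholder resource and backend name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-example                            # hypothetical name
  annotations:
    ingress.cilium.io/loadbalancer-mode: shared   # per-Ingress override: shared or dedicated
spec:
  ingressClassName: cilium
  defaultBackend:
    service:
      name: example-service                       # placeholder backend Service
      port:
        number: 80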
Note
It is possible to change the load balancer mode for an Ingress resource. When the mode is changed, active connections to backends of the Ingress may be terminated during the reconfiguration due to a new load balancer IP address being assigned to the Ingress resource.
This is a step-by-step guide on how to enable the Ingress Controller in an existing K8s cluster with Cilium installed.
Prerequisites

- Cilium must be configured with NodePort enabled, using nodePort.enabled=true or by enabling the kube-proxy replacement with kubeProxyReplacement=true. For more information, see kube-proxy replacement.
- Cilium must be configured with the L7 proxy enabled using l7Proxy=true (enabled by default).
- By default, the Ingress controller creates a Service of LoadBalancer type, so your environment will need to support this. Alternatively, you can change this to NodePort or, since Cilium 1.16+, directly expose the Cilium L7 proxy on the host network.
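As a rough sketch, these prerequisites map to Helm values along the following lines; adjust them to your own installation method.

# Either enable the kube-proxy replacement ...
kubeProxyReplacement: true
# ... or, alternatively, only enable NodePort support:
# nodePort:
#   enabled: true
l7Proxy: true   # L7 proxy, enabled by default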
Known Issues

- If your Cilium install's nodes have multiple network devices, Ingress traffic can have issues with traffic to Envoy arriving on the same node as a backend Pod unless you set endpointRoutes.enabled to true in Helm. Fixing this issue is tracked in #24318.
- Similarly, if you are using Native Routing (no tunneling) and your Cilium install sets the Helm bpf.masquerade value to true, you can also have issues with same-node backend routing. The workaround in this case is to set hostLegacyRouting to true. Fixing this issue is tracked in #31653.
Installation
The Cilium Ingress Controller can be enabled with the Helm flag ingressController.enabled set to true. Please refer to Installation using Helm for a fresh installation.
$ helm upgrade cilium ./cilium \
    --namespace kube-system \
    --reuse-values \
    --set ingressController.enabled=true \
    --set ingressController.loadbalancerMode=dedicated
$ kubectl -n kube-system rollout restart deployment/cilium-operator
$ kubectl -n kube-system rollout restart ds/cilium
Cilium can become the default ingress controller by setting the --set ingressController.default=true flag. This will create ingress entries even when the ingressClass is not set.
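In Helm values form, a sketch of that configuration could look like this:

ingressController:
  enabled: true
  default: true                 # make Cilium the default ingress controller
  loadbalancerMode: dedicated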
If you only want to use the Envoy traffic management feature without Ingress support, you should enable only the --enable-envoy-config flag.
$ helm upgrade cilium ./cilium \
    --namespace kube-system \
    --reuse-values \
    --set envoyConfig.enabled=true
$ kubectl -n kube-system rollout restart deployment/cilium-operator
$ kubectl -n kube-system rollout restart ds/cilium
Additionally, the proxy load-balancing feature can be configured with the loadBalancer.l7.backend=envoy flag.
$ helm upgrade cilium ./cilium \
    --namespace kube-system \
    --reuse-values \
    --set loadBalancer.l7.backend=envoy
$ kubectl -n kube-system rollout restart deployment/cilium-operator
$ kubectl -n kube-system rollout restart ds/cilium
Next you can check the status of the Cilium agent and operator:
$ cilium status
Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
See the full page of releases.
Clone the Cilium GitHub repository so that the Cilium CLI can access the latest unreleased Helm chart from the main branch:
git clone git@github.com:cilium/cilium.git
cd cilium
The Cilium Ingress Controller can be enabled with the below command:
$ cilium install --chart-directory ./install/kubernetes/cilium --set kubeProxyReplacement=true --set ingressController.enabled=true --set ingressController.loadbalancerMode=dedicated
Cilium can become the default ingress controller by setting the --set ingressController.default=true flag. This will create ingress entries even when the ingressClass is not set.
If you only want to use the Envoy traffic management feature without Ingress support, you should enable only the --enable-envoy-config flag.
$ cilium install --chart-directory ./install/kubernetes/cilium --set kubeProxyReplacement=true --set envoyConfig.enabled=true
Additionally, the proxy load-balancing feature can be configured with the loadBalancer.l7.backend=envoy flag.
$ cilium install --chart-directory ./install/kubernetes/cilium --set kubeProxyReplacement=true --set envoyConfig.enabled=true --set loadBalancer.l7.backend=envoy
Next you can check the status of the Cilium agent and operator:
$ cilium status
It is also recommended that you install the Hubble CLI, which will be used to observe the traffic in later steps.
Reference
How Cilium Ingress and Gateway API differ from other Ingress controllers
One of the biggest differences between Cilium’s Ingress and Gateway API support and other Ingress controllers is how closely tied the implementation is to the CNI. For Cilium, Ingress and Gateway API are part of the networking stack, and so behave in a different way to other Ingress or Gateway API controllers (even other Ingress or Gateway API controllers running in a Cilium cluster).
Other Ingress or Gateway API controllers are generally installed as a Deployment or Daemonset in the cluster, and exposed via a Loadbalancer Service or similar (which Cilium can, of course, enable).
Cilium’s Ingress and Gateway API config is exposed with a Loadbalancer or NodePort service, or optionally can be exposed on the Host network also. But in all of these cases, when traffic arrives at the Service’s port, eBPF code intercepts the traffic and transparently forwards it to Envoy (using the TPROXY kernel facility).
This affects things like client IP visibility, which works differently for Cilium’s Ingress and Gateway API support than for other Ingress controllers.
It also allows Cilium’s Network Policy engine to apply CiliumNetworkPolicy to traffic bound for and traffic coming from an Ingress.
Cilium’s ingress config and CiliumNetworkPolicy
Ingress and Gateway API traffic bound to backend services via Cilium passes through a per-node Envoy proxy.
The per-node Envoy proxy has special code that allows it to interact with the eBPF policy engine, and do policy lookups on traffic. This allows Envoy to be a Network Policy enforcement point, both for Ingress (and Gateway API) traffic, and also for east-west traffic via GAMMA or L7 Traffic Management.
However, for ingress config, there’s also an additional step. Traffic that arrives at Envoy for Ingress or Gateway API is assigned the special ingress identity in Cilium’s Policy engine.

Traffic coming from outside the cluster is usually assigned the world identity (unless there are IP CIDR policies in the cluster). This means that there are actually two logical Policy enforcement points in Cilium Ingress - before traffic arrives at the ingress identity, and after, when it is about to exit the per-node Envoy.
This means that, when applying Network Policy to a cluster, it’s important to ensure that both steps are allowed, and that traffic is allowed from world to ingress, and from ingress to identities in the cluster (for example, a productpage backend identity).
Please see the Ingress and Network Policy Example for more details for Ingress, although the same principles also apply for Gateway API.
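As a minimal sketch only (not the policy from the referenced example), the second step - allowing traffic from the per-node Envoy to a backend - could be expressed like this; the productpage label and policy name are hypothetical, and any policy covering the first step must still permit traffic from world to ingress.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-ingress-to-productpage   # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: productpage                 # hypothetical backend workload label
  ingress:
    - fromEntities:
        - ingress                      # traffic forwarded by Cilium's per-node Envoy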
Source IP Visibility
Note
By default, source IP visibility for Cilium ingress config, both Ingress and Gateway API, should just work on most installations. Read this section for more information on requirements and relevant settings.
Having a backend be able to deduce what IP address the actual request came from is important for most applications.
By default, Cilium’s Envoy instances are configured to append the visible source address of incoming HTTP connections to the X-Forwarded-For header, using the usual rules. That is, by default Cilium sets the number of trusted hops to 0, indicating that Envoy should use the address the connection is opened from, rather than a value inside the X-Forwarded-For list. Increasing this count will have Envoy use the nth value from the list, counting from the right.

Envoy will also set the X-Envoy-External-Address header to the trusted client address, whatever that turns out to be, based on X-Forwarded-For.
Note
Backends using Cilium ingress (whether via Ingress or Gateway API) should just see the X-Forwarded-For and X-Envoy-External-Address headers (which are handled transparently by many HTTP libraries).
externalTrafficPolicy for Loadbalancer or NodePort Services
Cilium’s ingress support (both for Ingress and Gateway API) often uses a Loadbalancer or NodePort Service to expose the Envoy Daemonset.
In these cases, the Service object has one field that is particularly relevant to Client IP visibility - the externalTrafficPolicy field.
It has two relevant settings:

- Local: Nodes will only route traffic to Pods running on the local node, without masquerading the source IP. Because of this, in clusters that use kube-proxy, this is the only way to ensure source IP visibility. Part of the contract for externalTrafficPolicy: Local is also that the node will open a port (the healthCheckNodePort, automatically set by Kubernetes when externalTrafficPolicy: Local is set), and requests to http://<nodeIP>:<healthCheckNodePort>/healthz will return 200 on nodes that have local pods running, and non-200 on nodes that don’t. Cilium implements this for general Loadbalancer Services, but it’s a bit different for Cilium ingress config (both Ingress and Gateway API).
- Cluster: Nodes will route to all endpoints across the cluster evenly. This has a couple of other effects: firstly, upstream loadbalancers will expect to be able to send traffic to any node and have it end up at a backend Pod, and the node may masquerade the source IP. This means that in many cases, externalTrafficPolicy: Cluster may mean that the backend pod does not see the source IP.
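For illustration, a plain Kubernetes LoadBalancer Service with externalTrafficPolicy set looks like the sketch below (names and ports are placeholders); how this interacts with Cilium's own Ingress and Gateway API Services is described next.

apiVersion: v1
kind: Service
metadata:
  name: example-lb                 # hypothetical Service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # or Cluster (the Kubernetes default)
  selector:
    app: example                   # placeholder Pod selector
  ports:
    - port: 80
      targetPort: 8080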
In Cilium’s case, all ingress traffic bound for a Service that exposes Envoy is always going to the local node, and is always forwarded to Envoy using the Linux Kernel TPROXY function, which transparently forwards packets to the backend.
This means that for Cilium ingress config, for both Ingress and Gateway API, things work a little differently in both externalTrafficPolicy cases.
Note
In both externalTrafficPolicy cases, traffic will arrive at any node in the cluster, and be forwarded to Envoy while keeping the source IP intact.

Also, for any Services that expose Cilium’s Envoy, Cilium will ensure that when externalTrafficPolicy: Local is set, every node in the cluster will pass the healthCheckNodePort check, so that external load balancers will forward correctly.

However, for Cilium’s ingress config, both Ingress and Gateway API, it is not necessary to configure externalTrafficPolicy: Local to keep the source IP visible to the backend pod (via the X-Forwarded-For and X-Envoy-External-Address headers).
TLS Passthrough and source IP visibility
Both Ingress and Gateway API support TLS Passthrough configuration (via annotation for Ingress, and the TLSRoute resource for Gateway API). This configuration allows multiple TLS Passthrough backends to share the same TLS port on a loadbalancer, with Envoy inspecting the Server Name Indicator (SNI) field of the TLS handshake, and using that to forward the TLS stream to a backend.
However, this poses problems for source IP visibility, because Envoy is doing a TCP Proxy of the TLS stream.
What happens is that the TLS traffic arrives at Envoy, terminating a TCP stream, Envoy inspects the client hello to find the SNI, picks a backend to forward to, then starts a new TCP stream and forwards the TLS traffic inside the downstream (outside) packets to the upstream (the backend).
Because it’s a new TCP stream, as far as the backends are concerned, the source IP is Envoy (which is often the Node IP, depending on your Cilium config).
Note
When doing TLS Passthrough, backends will see Cilium Envoy’s IP address as the source of the forwarded TLS streams.
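A sketch of a TLS Passthrough Ingress, using the ingress.cilium.io/tls-passthrough annotation from the annotations table below; the hostname and backend are placeholders, the host field is required, and only the / path is honored.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-passthrough-example                  # hypothetical name
  annotations:
    ingress.cilium.io/tls-passthrough: enabled   # forward TLS streams based on SNI
spec:
  ingressClassName: cilium
  rules:
    - host: tls.example.internal                 # required for SNI matching
      http:
        paths:
          - path: /                              # only "/" is honored in passthrough mode
            pathType: Prefix
            backend:
              service:
                name: example-tls-backend        # placeholder backend Service
                port:
                  number: 443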
Ingress Path Types and Precedence
The Ingress specification supports three types of paths:

- Exact - match the given path exactly.
- Prefix - match the URL path prefix split by /. The last path segment must match the whole segment - if you configure a Prefix path of /foo/bar, /foo/bar/baz will match, but /foo/barbaz will not.
- ImplementationSpecific - Interpretation of the Path is up to the IngressClass. In Cilium’s case, we define ImplementationSpecific to be “Regex”, so Cilium will interpret any given path as a regular expression and program Envoy accordingly. Notably, some other implementations have ImplementationSpecific mean “Prefix”, and in those cases, Cilium will treat the paths differently. (Since a path like /foo/bar contains no regex characters, when it is configured in Envoy as a regex, it will function as an Exact match instead.)
When multiple path types are configured on an Ingress object, Cilium will configure Envoy with the matches in the following order:
Exact
ImplementationSpecific (that is, regular expression)
Prefix
The / Prefix match has special handling and always goes last.
Within each of these path types, the paths are sorted in decreasing order of string length.
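For example, an Ingress mixing the three path types might look like the sketch below (the backend Service names are placeholders). Per the ordering above, the Exact match is checked first, then the ImplementationSpecific regex, then the / Prefix last.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-types-example              # hypothetical name
spec:
  ingressClassName: cilium
  rules:
    - http:
        paths:
          - path: /exact/path
            pathType: Exact
            backend:
              service:
                name: exact-svc         # placeholder
                port:
                  number: 80
          - path: /api/v[0-9]+/.*       # interpreted as a regex by Cilium
            pathType: ImplementationSpecific
            backend:
              service:
                name: regex-svc         # placeholder
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: default-svc       # placeholder
                port:
                  number: 80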
If you do use ImplementationSpecific regex support, be careful with using the * operator, since it will increase the length of the regex, but may match another, shorter option.

For example, if you have two ImplementationSpecific paths, /impl and /impl.*, the second will be sorted ahead of the first in the generated config. But because * is in use, the /impl match will never be hit, as any request to that path will match the /impl.* path first.
See the Ingress Path Types for more information.
Supported Ingress Annotations
Name | Description | Default Value
---|---|---
ingress.cilium.io/loadbalancer-mode | The loadbalancer mode for the ingress. Allows a per ingress override of the default set in the Helm value ingressController.loadbalancerMode. Applicable values are dedicated and shared. | dedicated (from Helm chart)
ingress.cilium.io/loadbalancer-class | The loadbalancer class for the ingress. Only applicable when loadbalancer-mode is set to dedicated. | unspecified
ingress.cilium.io/service-type | The Service type for dedicated Ingress. Applicable values are LoadBalancer and NodePort. | LoadBalancer
ingress.cilium.io/service-external-traffic-policy | The Service externalTrafficPolicy for dedicated Ingress. Applicable values are Cluster and Local. | Cluster
ingress.cilium.io/insecure-node-port | The NodePort to use for the HTTP Ingress. Applicable only if ingress.cilium.io/service-type is NodePort. If unspecified, a random NodePort will be allocated by kubernetes. | unspecified
ingress.cilium.io/secure-node-port | The NodePort to use for the HTTPS Ingress. Applicable only if ingress.cilium.io/service-type is NodePort. If unspecified, a random NodePort will be allocated by kubernetes. | unspecified
ingress.cilium.io/host-listener-port | The port to use for the Envoy listener on the host network. Applicable and mandatory only for dedicated Ingress and if host network mode is enabled. | 8080
ingress.cilium.io/tls-passthrough | Enable TLS Passthrough mode for this Ingress. Applicable values are enabled and disabled, although boolean-style values will also be accepted. Note that some conditions apply to TLS Passthrough Ingresses, due to how TLS Passthrough works: a host field must be set in the Ingress; default backends are ignored; rules with paths other than / are ignored. If all the rules in an Ingress are ignored for these reasons, no Envoy config will be generated and the Ingress will have no effect. Note that this annotation is analogous to the ssl-passthrough on other Ingress controllers. | disabled
ingress.cilium.io/force-https | Enable enforced HTTPS redirects for this Ingress. Applicable values are enabled and disabled, although boolean-style values will also be accepted. Note that if the annotation is not present, this behavior will be controlled by the enforce-ingress-https configuration file setting (or ingressController.enforceHttps in Helm). Any host with TLS config will have redirects to HTTPS configured for each match specified in the Ingress. | unspecified
Additionally, cloud-provider specific annotations for the LoadBalancer Service are supported.
By default, annotations with keys beginning with:
lbipam.cilium.io
nodeipam.cilium.io
service.beta.kubernetes.io
service.kubernetes.io
cloud.google.com
will be copied from an Ingress object to the generated LoadBalancer Service objects.
This setting is controlled by the Cilium Operator’s ingress-lb-annotation-prefixes config flag, and can be configured in Cilium’s Helm values.yaml using the ingressController.ingressLBAnnotationPrefixes setting.
Please refer to the Kubernetes documentation for more details.
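For instance, an annotation with one of those prefixes set on the Ingress is propagated to the generated LoadBalancer Service. The specific annotation shown here (an AWS load balancer scheme) is only an illustration and depends on your cloud provider.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: annotated-example                                                  # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing   # copied to the LoadBalancer Service
spec:
  ingressClassName: cilium
  defaultBackend:
    service:
      name: example-service                                                # placeholder backend Service
      port:
        number: 80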
Host network mode
Note
Supported since Cilium 1.16+
Host network mode allows you to expose the Cilium ingress controller (Envoy listener) directly on the host network. This is useful in cases where a LoadBalancer Service is unavailable, such as in development environments or environments with cluster-external loadbalancers.
Note
Enabling the Cilium ingress controller host network mode automatically disables the LoadBalancer/NodePort type Service mode. They are mutually exclusive.
The listener is exposed on all interfaces (0.0.0.0 for IPv4 and/or :: for IPv6).
Host network mode can be enabled via Helm:
ingressController:
  enabled: true
  hostNetwork:
    enabled: true
Once enabled, host network ports can be specified with the following methods:

- Shared Ingress: Globally via the Helm value ingressController.hostNetwork.sharedListenerPort: Host network port to expose the Cilium ingress controller Envoy listener. The default port is 8080. If you change it, you should choose a port number higher than 1023 (see Bind to privileged port).
- Dedicated Ingress: Per Ingress resource via the annotation ingress.cilium.io/host-listener-port: Host network port to expose the Cilium ingress controller Envoy listener. The default port is 8080, but it can only be used for a single Ingress resource, as it needs to be unique per Ingress resource. You should choose a port higher than 1023 (see Bind to privileged port). This annotation is mandatory if the global Cilium ingress controller mode is configured to dedicated (ingressController.loadbalancerMode) or the Ingress resource sets the ingress.cilium.io/loadbalancer-mode annotation to dedicated and multiple Ingress resources are deployed.
The default behavior regarding shared or dedicated ingress can be configured via ingressController.loadbalancerMode.
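As a sketch, a shared-mode configuration with an explicit listener port could look like this in Helm values (the port number is just an example):

ingressController:
  enabled: true
  loadbalancerMode: shared
  hostNetwork:
    enabled: true
    sharedListenerPort: 8080   # example port; use a free port above 1023 on all relevant nodes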
Warning
Be aware that misconfiguration might result in port clashes. Configure unique ports that are still available on all Cilium Nodes where Cilium ingress controller Envoy listeners are exposed.
Bind to privileged port
By default, the Cilium L7 Envoy process does not have any Linux capabilities out-of-the-box and is therefore not allowed to listen on privileged ports.
If you choose a port equal to or lower than 1023, ensure that the Helm value envoy.securityContext.capabilities.keepCapNetBindService=true is configured and add the capability NET_BIND_SERVICE to the respective Cilium Envoy container via Helm values:

- Standalone DaemonSet mode: envoy.securityContext.capabilities.envoy
- Embedded mode: securityContext.capabilities.ciliumAgent
Configure the following Helm values to allow privileged port bindings in host network mode.

Standalone DaemonSet mode:

ingressController:
  enabled: true
  hostNetwork:
    enabled: true
envoy:
  enabled: true
  securityContext:
    capabilities:
      keepCapNetBindService: true
      envoy:
      # Add NET_BIND_SERVICE to the list (keep the others!)
      - NET_BIND_SERVICE

Embedded mode:

ingressController:
  enabled: true
  hostNetwork:
    enabled: true
envoy:
  securityContext:
    capabilities:
      keepCapNetBindService: true
securityContext:
  capabilities:
    ciliumAgent:
    # Add NET_BIND_SERVICE to the list (keep the others!)
    - NET_BIND_SERVICE
Deploy Ingress Controller listeners on subset of nodes
The Cilium ingress controller Envoy listener can be exposed on a specific subset of nodes. This only works in combination with the host network mode and can be configured via a node label selector in the Helm values:
ingressController:
  enabled: true
  hostNetwork:
    enabled: true
    nodes:
      matchLabels:
        role: infra
        component: ingress
This will deploy the Ingress Controller Envoy listener only on the Cilium Nodes matching the configured labels. An empty selector selects all nodes and continues to expose the functionality on all Cilium nodes.
Examples
Please refer to one of the below examples on how to use and leverage Cilium’s Ingress features: