Using Kubernetes Constructs In Policy
This section covers Kubernetes-specific aspects of network policy.
Namespaces
Namespaces are used to create virtual clusters within a Kubernetes cluster. All Kubernetes objects, including NetworkPolicy and CiliumNetworkPolicy, belong to a particular namespace.
Known Pitfalls
This section covers known pitfalls when using Kubernetes constructs in policy.
Considerations Of Namespace Boundaries
Depending on how a policy is defined and created, Kubernetes namespaces are automatically taken into account.
Network policies imported directly via the API Reference apply to all namespaces unless a namespace selector is specified, as shown in the example below.
Example
This example demonstrates how to enforce Kubernetes namespace-based boundaries for the namespaces ns1 and ns2 by enabling default-deny on all pods of either namespace and then allowing communication from all pods within the same namespace.
Note
The example locks down ingress of the pods in ns1 and ns2. This means that the pods can still communicate egress to anywhere unless the destination is in ns1 or ns2, in which case both source and destination have to be in the same namespace. In order to enforce namespace boundaries at egress as well, use the same example but specify the rules at egress in addition to ingress, as sketched after the example below.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns2"
  namespace: ns2
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}
The same two policies expressed in JSON for direct import via the API; here the namespace has to be selected explicitly:
[
  {
    "endpointSelector" : {
      "matchLabels" : {
        "k8s:io.kubernetes.pod.namespace" : "ns1"
      }
    },
    "ingress" : [
      {
        "fromEndpoints" : [
          {
            "matchLabels" : {
              "k8s:io.kubernetes.pod.namespace" : "ns1"
            }
          }
        ]
      }
    ]
  },
  {
    "endpointSelector" : {
      "matchLabels" : {
        "k8s:io.kubernetes.pod.namespace" : "ns2"
      }
    },
    "ingress" : [
      {
        "fromEndpoints" : [
          {
            "matchLabels" : {
              "k8s:io.kubernetes.pod.namespace" : "ns2"
            }
          }
        ]
      }
    ]
  }
]
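To enforce the namespace boundary at egress as well, as mentioned in the note above, the same policies can be extended with a matching egress section. A minimal sketch for ns1 (the policy name is hypothetical; ns2 would need the analogous policy):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1-egress"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}   # selectors without a namespace label default to the policy's namespace (ns1)
  egress:
  - toEndpoints:
    - matchLabels:
        {}   # likewise, egress is only allowed to pods in ns1
Keep in mind that adding any egress rule enables default-deny for egress, so traffic to destinations outside the namespace (for example kube-dns) is dropped unless further egress rules allow it.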
Policies Only Apply Within The Namespace
Network policies created and imported as CiliumNetworkPolicy CRD and NetworkPolicy apply within the namespace. In other words, the policy only applies to pods within that namespace. It’s possible, however, to grant access to and from pods in other namespaces as described in the example below.
Example
The following example exposes all pods with the label name=leia in the namespace ns1 to all pods with the label name=luke in the namespace ns2.
Refer to the example YAML files for a fully functional example including pods deployed to different namespaces.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2
        name: luke
The same policy in JSON form:
[{
  "labels": [{"key": "name", "value": "k8s-expose-across-namespace"}],
  "endpointSelector": {
    "matchLabels": {"name": "leia", "k8s:io.kubernetes.pod.namespace": "ns1"}
  },
  "ingress": [{
    "fromEndpoints": [{
      "matchLabels": {"name": "luke", "k8s:io.kubernetes.pod.namespace": "ns2"}
    }]
  }]
}]
Specifying Namespace In EndpointSelector, FromEndpoints, ToEndpoints
Specifying the namespace by way of the label k8s:io.kubernetes.pod.namespace in the fromEndpoints and toEndpoints fields is supported as described in the example below. However, Kubernetes prohibits specifying the namespace in the endpointSelector, as it would violate the namespace isolation principle of Kubernetes. The endpointSelector always applies to pods in the namespace associated with the CiliumNetworkPolicy resource itself.
Example
The following example allows all pods in the public namespace in which the policy is created to communicate with kube-dns on port 53/UDP in the kube-system namespace.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-to-kubedns"
  namespace: public
spec:
  endpointSelector:
    {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: '53'
        protocol: UDP
The same policy in JSON form:
[
  {
    "endpointSelector" : {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "public"
      }
    },
    "egress" : [
      {
        "toEndpoints" : [
          {
            "matchLabels" : {
              "k8s:io.kubernetes.pod.namespace" : "kube-system",
              "k8s-app" : "kube-dns"
            }
          }
        ],
        "toPorts" : [
          {
            "ports" : [
              {
                "port" : "53",
                "protocol" : "UDP"
              }
            ]
          }
        ]
      }
    ]
  }
]
Namespace Specific Information
Using namespace-specific information like io.cilium.k8s.namespace.labels within a fromEndpoints or toEndpoints selector is supported only for a CiliumClusterwideNetworkPolicy and not for a CiliumNetworkPolicy. Hence, io.cilium.k8s.namespace.labels will be ignored in CiliumNetworkPolicy resources.
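For illustration, a minimal sketch of a CiliumClusterwideNetworkPolicy using a namespace label; the policy name, the restricted namespace, and the environment=production namespace label are hypothetical:
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-from-production-namespaces"
spec:
  endpointSelector:
    matchLabels:
      k8s:io.kubernetes.pod.namespace: restricted
  ingress:
  - fromEndpoints:
    - matchLabels:
        # matches pods in any namespace carrying the label environment=production
        io.cilium.k8s.namespace.labels.environment: production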
Match Expressions
When using matchExpressions
in a CiliumNetworkPolicy or a
CiliumClusterwideNetworkPolicy, the list values are
treated as a logical AND. If you want to match multiple keys
with a logical OR, you must use multiple matchExpressions
.
Example
This example demonstrates how to enforce a policy with multiple matchExpressions
that achieves a logical OR between the keys and its values.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "or-statement-policy"
spec:
  endpointSelector: {}
  ingress:
  - fromEndpoints:
    - matchExpressions:
      - key: "k8s:io.kubernetes.pod.namespace"
        operator: "In"
        values:
        - "production"
    - matchExpressions:
      - key: "k8s:cilium.example.com/policy"
        operator: "In"
        values:
        - "strict"
The same policy in JSON form:
[
  {
    "labels": [
      {
        "key": "name",
        "value": "or-statement-policy"
      }
    ],
    "endpointSelector": {},
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchExpressions": [
              {
                "key": "k8s:io.kubernetes.pod.namespace",
                "operator": "In",
                "values": [
                  "production"
                ]
              }
            ]
          },
          {
            "matchExpressions": [
              {
                "key": "k8s:cilium.example.com/policy",
                "operator": "In",
                "values": [
                  "strict"
                ]
              }
            ]
          }
        ]
      }
    ]
  }
]
The following example shows a logical AND using a single matchExpressions entry.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "and-statement-policy"
spec:
  endpointSelector: {}
  ingress:
  - fromEndpoints:
    - matchExpressions:
      - key: "k8s:io.kubernetes.pod.namespace"
        operator: "In"
        values:
        - "production"
      - key: "k8s:cilium.example.com/policy"
        operator: "In"
        values:
        - "strict"
The same policy in JSON form:
[
  {
    "labels": [
      {
        "key": "name",
        "value": "and-statement-policy"
      }
    ],
    "endpointSelector": {},
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchExpressions": [
              {
                "key": "k8s:io.kubernetes.pod.namespace",
                "operator": "In",
                "values": [
                  "production"
                ]
              },
              {
                "key": "k8s:cilium.example.com/policy",
                "operator": "In",
                "values": [
                  "strict"
                ]
              }
            ]
          }
        ]
      }
    ]
  }
]
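Besides In, the other Kubernetes label-selector operators (NotIn, Exists, DoesNotExist) can be used as well. As a minimal sketch, the following hypothetical policy allows ingress from any endpoint that carries the label key k8s:cilium.example.com/policy, whatever its value:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "exists-statement-policy"
spec:
  endpointSelector: {}
  ingress:
  - fromEndpoints:
    - matchExpressions:
      - key: "k8s:cilium.example.com/policy"
        operator: "Exists"   # no values list; matches any value of the key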
ServiceAccounts
Kubernetes Service Accounts are used to associate an identity with a pod or process managed by Kubernetes and to grant identities access to Kubernetes resources and secrets. Cilium supports the specification of network security policies based on the service account identity of a pod.
The service account of a pod is either defined via the service account admission controller or can be specified directly in the Pod, Deployment, or ReplicationController resource like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: leia
  ...
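For instance, a minimal sketch of a Deployment whose pods all run under the leia service account (deployment name, labels, and image are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: leia-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: leia
  template:
    metadata:
      labels:
        app: leia
    spec:
      serviceAccountName: leia   # every pod of this Deployment uses the leia service account
      containers:
      - name: app
        image: nginx             # placeholder image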
Example
The following example grants any pod running under the service account “luke” permission to issue an HTTP GET /public request on TCP port 80 to all pods running under the service account “leia”.
Refer to the example YAML files for a fully functional example including deployment and service account resources.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-svc-account"
spec:
  endpointSelector:
    matchLabels:
      io.cilium.k8s.policy.serviceaccount: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        io.cilium.k8s.policy.serviceaccount: luke
    toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/public$"
The same policy in JSON form:
[{
  "labels": [{"key": "name", "value": "k8s-svc-account"}],
  "endpointSelector": {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "leia"}},
  "ingress": [{
    "fromEndpoints": [
      {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "luke"}}
    ],
    "toPorts": [{
      "ports": [
        {"port": "80", "protocol": "TCP"}
      ],
      "rules": {
        "http": [
          {
            "method": "GET",
            "path": "/public$"
          }
        ]
      }
    }]
  }]
}]
Multi-Cluster
When operating multiple clusters with Cluster Mesh, the cluster name is exposed via the label io.cilium.k8s.policy.cluster and can be used to restrict policies to a particular cluster.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-cross-cluster"
spec:
  description: "Allow x-wing in cluster1 to contact rebel-base in cluster2"
  endpointSelector:
    matchLabels:
      name: x-wing
      io.cilium.k8s.policy.cluster: cluster1
  egress:
  - toEndpoints:
    - matchLabels:
        name: rebel-base
        io.kubernetes.pod.namespace: default
        io.cilium.k8s.policy.cluster: cluster2
Note the io.kubernetes.pod.namespace: default label in the policy rule. It makes sure the policy applies to rebel-base in the default namespace of cluster2, regardless of the namespace in cluster1 where x-wing is deployed. If the namespace label is omitted from a policy rule, it defaults to the namespace in which the policy itself is applied, which may not be what is wanted when deploying cross-cluster policies.
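Also note that policies are not automatically distributed between clusters; each cluster enforces only the policies created in it. Since ingress to rebel-base is enforced in cluster2, a corresponding ingress rule typically has to be created there as well. A minimal sketch (the policy name is hypothetical, and x-wing is assumed to run in the default namespace of cluster1):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-from-cluster1"
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      name: rebel-base
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: x-wing
        io.kubernetes.pod.namespace: default
        io.cilium.k8s.policy.cluster: cluster1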
Clusterwide Policies
A CiliumNetworkPolicy is always bound to a particular namespace. For situations where a policy should have a cluster-scoped effect, Cilium provides the CiliumClusterwideNetworkPolicy Kubernetes custom resource. Its specification is the same as that of a CiliumNetworkPolicy, except that it is not namespaced.
In the cluster, this policy allows ingress traffic from pods matching the label name=luke in any namespace to pods matching the label name=leia in any namespace.
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "clusterwide-policy-example"
spec:
  description: "Policy for selective ingress allow to a pod from only a pod with given label"
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: luke
Allow All Cilium Managed Endpoints To Communicate With Kube-dns
The following example allows all Cilium managed endpoints in the cluster to communicate with kube-dns on port 53/UDP in the kube-system namespace.
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "wildcard-from-endpoints"
spec:
  description: "Policy for ingress allow to kube-dns from all Cilium managed endpoints in the cluster"
  endpointSelector:
    matchLabels:
      k8s:io.kubernetes.pod.namespace: kube-system
      k8s-app: kube-dns
  ingress:
  - fromEndpoints:
    - {}
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
Example: Add Health Endpoint
The following example allows Cilium's health endpoints (the reserved:health identity) on all Cilium managed nodes to communicate with remote nodes in order to check cluster connectivity health.
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "cilium-health-checks"
spec:
  endpointSelector:
    matchLabels:
      'reserved:health': ''
  ingress:
  - fromEntities:
    - remote-node
  egress:
  - toEntities:
    - remote-node