Creating policies from verdicts¶
Policy Audit Mode configures Cilium to allow all traffic while logging any
connections that would otherwise be dropped by policy. Policy Audit Mode may be
configured for the entire daemon using --policy-audit-mode=true. While
Policy Audit Mode is enabled, no network policy is enforced, so this setting is
not recommended for production deployments. Policy Audit Mode supports
auditing network policies implemented at network layers 3 and 4. This guide
walks through the process of creating policies using Policy Audit Mode.
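On Kubernetes, the daemon flag is usually driven by the policy-audit-mode key of the cilium-config ConfigMap, which is exactly what this guide patches later on. As a quick orientation sketch (assuming Cilium is installed in the kube-system namespace), you can inspect the current value before changing anything:
$ kubectl -n kube-system get configmap cilium-config -o jsonpath='{.data.policy-audit-mode}'
An empty result simply means the key has not been set yet, so audit mode is off.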
If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.
The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.
Setup Cilium¶
If you have not set up Cilium yet, pick any installation method as described in section Installation to set up Cilium for your Kubernetes environment. If in doubt, pick Getting Started Using Minikube as the simplest way to set up a Kubernetes cluster with Cilium:
minikube start --network-plugin=cni --memory=4096
minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/quick-install.yaml
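Before continuing, it is worth waiting until the Cilium agent is ready. A simple check, reusing the rollout command that appears later in this guide:
$ kubectl -n kube-system rollout status ds/cilium
daemon set "cilium" successfully rolled out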
Deploy the Demo Application¶
Now that we have Cilium deployed and kube-dns operating correctly, we can deploy our demo application.
In our Star Wars-inspired example, there are three microservices: deathstar, tiefighter, and xwing. The deathstar runs an HTTP webservice on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire's spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship and xwing represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to deathstar landing services.
Figure: Application Topology for Cilium and Kubernetes
The file http-sw-app.yaml contains a Kubernetes Deployment for each of the three services.
Each deployment is identified using the Kubernetes labels (org=empire, class=deathstar), (org=empire, class=tiefighter),
and (org=alliance, class=xwing).
It also includes a deathstar-service, which load-balances traffic to all pods with the label (org=empire, class=deathstar).
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/http-sw-app.yaml
service/deathstar created
deployment.extensions/deathstar created
pod/tiefighter created
pod/xwing created
Kubernetes will deploy the pods and service in the background. Running
kubectl get pods,svc will inform you about the progress of the operation.
Each pod will go through several states until it reaches Running, at which
point the pod is ready.
$ kubectl get pods,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
pod/tiefighter                   1/1     Running   0          107s
pod/xwing                        1/1     Running   0          107s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3m53s
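If you want to see the org and class labels described above directly in the pod listing, one convenient option is kubectl's label-column flag. This is just a convenience and not required for the rest of the guide; the output below is illustrative:
$ kubectl get pods -L org,class
NAME                         READY   STATUS    RESTARTS   AGE    ORG        CLASS
deathstar-6fb5694d48-5hmds   1/1     Running   0          107s   empire     deathstar
deathstar-6fb5694d48-fhf65   1/1     Running   0          107s   empire     deathstar
tiefighter                   1/1     Running   0          107s   empire     tiefighter
xwing                        1/1     Running   0          107s   alliance   xwing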
Each pod will be represented in Cilium as an Endpoint. We can invoke the
cilium tool inside the Cilium pod to list them:
$ kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-5ngzd   1/1     Running   0          3m19s
$ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
232 Disabled Disabled 16530 k8s:class=deathstar 10.0.0.147 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
726 Disabled Disabled 1 reserved:host ready
883 Disabled Disabled 4 reserved:health 10.0.0.244 ready
1634 Disabled Disabled 51373 k8s:io.cilium.k8s.policy.cluster=default 10.0.0.118 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1673 Disabled Disabled 31028 k8s:class=tiefighter 10.0.0.112 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
2811 Disabled Disabled 51373 k8s:io.cilium.k8s.policy.cluster=default 10.0.0.47 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
2843 Disabled Disabled 16530 k8s:class=deathstar 10.0.0.89 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
3184 Disabled Disabled 22654 k8s:class=xwing 10.0.0.30 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
Both ingress and egress policy enforcement are still disabled on all of these pods because no network policy has been imported yet that selects any of the pods.
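Since several of the following steps run the cilium CLI inside the agent pod, it can be convenient to capture the pod name in a shell variable rather than copying it by hand. This is only a convenience sketch; the label selector is the same one used above:
$ CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n kube-system exec $CILIUM_POD -- cilium endpoint list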
Enable Policy Audit Mode¶
To observe policy audit messages, follow these steps:
Enable Policy Audit Mode in the daemon:
$ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"true"}}'
configmap/cilium-config patched
$ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
daemonset.apps/cilium restarted
$ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
daemon set "cilium" successfully rolled out
If you installed Cilium via helm install, then you can use helm upgrade to enable Policy Audit Mode:
helm upgrade cilium cilium/cilium --version 1.8.6 \
   --namespace $CILIUM_NAMESPACE \
   --reuse-values \
   --set config.policyAuditMode=true
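To double-check that the new setting was picked up after the restart, you can read the ConfigMap value back and, as a hedged option (the exact option name can vary between versions), ask the agent for its runtime configuration, reusing the CILIUM_POD variable from the earlier sketch:
$ kubectl -n $CILIUM_NAMESPACE get configmap cilium-config -o jsonpath='{.data.policy-audit-mode}'
true
$ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD -- cilium config | grep -i audit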
Apply a default-deny policy:
apiVersion: "cilium.io/v2" kind: CiliumNetworkPolicy description: "Default-deny ingress policy for the empire" metadata: name: "empire-default-deny" spec: endpointSelector: matchLabels: org: empire ingress: - {}
CiliumNetworkPolicies match on pod labels using an "endpointSelector" to identify the sources and destinations to which the policy applies. The above policy denies traffic sent to any pods with the label (org=empire). Because Policy Audit Mode was enabled above, the traffic will not actually be denied but will instead trigger policy verdict notifications.
To apply this policy, run:
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/sw_deny_policy.yaml
ciliumnetworkpolicy.cilium.io/empire-default-deny created
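You can confirm the policy object exists with the cnp short name that is also used in the clean-up step at the end of this guide; the output will look roughly like:
$ kubectl get cnp
NAME                  AGE
empire-default-deny   10s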
With the above policy, we enable a default-deny posture on ingress for pods with the label org=empire and enable policy verdict notifications for those pods. The same principle applies to egress as well.
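If you also wanted a default-deny posture on egress, the analogous rule would use an empty egress section instead. The following is only a sketch (the policy name is hypothetical and not part of this guide's example files); it can be applied inline with a heredoc:
$ kubectl apply -f - <<EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "empire-default-deny-egress"
spec:
  endpointSelector:
    matchLabels:
      org: empire
  egress:
  - {}
EOF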
Observe policy verdicts¶
In this example, we are tasked with applying a security policy for the deathstar.
First, from the Cilium pod we need to monitor the notifications for policy
verdicts using cilium monitor -t policy-verdict. We'll be monitoring for
inbound traffic towards the deathstar to identify that traffic and determine
whether to extend the network policy to allow it.
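Since cilium monitor runs inside the agent, the easiest way to follow it from your workstation is through kubectl exec. A sketch, reusing the CILIUM_POD variable captured earlier (any Cilium agent pod name works):
$ kubectl -n kube-system exec -it $CILIUM_POD -- cilium monitor -t policy-verdict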
From another terminal with kubectl access, send some traffic from the tiefighter to the deathstar:
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
Back in the Cilium pod, the policy verdict logs are printed in the monitor output:
# cilium monitor -t policy-verdict
...
Policy verdict log: flow 0x63113709 local EP ID 232, remote ID 31028, dst port 80, proto 6, ingress true, action audit, match none, 10.0.0.112:54134 -> 10.0.0.147:80 tcp SYN
In the above example, we can see that endpoint 232 has received traffic
(ingress true) which doesn't match the policy (action audit, match none).
The source of this traffic has the identity 31028. Let's gather a
bit more information about what these numbers mean:
# cilium endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
232 Disabled (Audit) Disabled 16530 k8s:class=deathstar 10.0.0.147 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
...
# cilium identity get 31028
ID LABELS
31028 k8s:class=tiefighter
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
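If you prefer to map the numeric identity back to a concrete pod rather than just its labels, a plain kubectl query over the same labels works too (a convenience sketch):
$ kubectl get pods -l org=empire,class=tiefighter -o wide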
Create the Network Policy¶
Given the above information, we now know the labels of the target pod, the labels of the peer that's attempting to connect, the direction of the traffic, and the port. In this case we can clearly see that it's an empire craft, so once we've determined that we expect this traffic to arrive at the deathstar, we can form a policy to match the traffic:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
description: "L3-L4 policy to restrict deathstar access to empire ships only"
metadata:
name: "rule1"
spec:
endpointSelector:
matchLabels:
org: empire
class: deathstar
ingress:
- fromEndpoints:
- matchLabels:
org: empire
toPorts:
- ports:
- port: "80"
protocol: TCP
To apply this L3/L4 policy, run:
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/sw_l3_l4_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
Now if we run the landing requests again, we can observe in the monitor output that the traffic which would previously have been dropped by the policy is now reported as allowed:
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
Executed from the Cilium pod:
# cilium monitor -t policy-verdict
Policy verdict log: flow 0xabf3bda6 local EP ID 232, remote ID 31028, dst port 80, proto 6, ingress true, action allow, match L3-L4, 10.0.0.112:59824 -> 10.0.0.147:80 tcp SYN
Now the policy verdict states that the traffic would be allowed: action allow. Success!
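Note that while Policy Audit Mode is still enabled, traffic that does not match rule1 is also only audited rather than dropped. You can see this by repeating the request from the xwing pod (which carries org=alliance) and watching the monitor output; the landing should still succeed, accompanied by an action audit verdict rather than a drop:
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed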
Disable Policy Audit Mode¶
These steps should be repeated for each connection in the cluster to ensure that the network policy allows all of the expected traffic. The final step after deploying the policy is to disable Policy Audit Mode again:
$ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"false"}}'
configmap/cilium-config patched
$ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
daemonset.apps/cilium restarted
$ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
daemon set "cilium" successfully rolled out
If you installed Cilium via helm install, use helm upgrade to disable Policy Audit Mode instead:
helm upgrade cilium cilium/cilium --version 1.8.6 \
   --namespace $CILIUM_NAMESPACE \
   --reuse-values \
   --set config.policyAuditMode=false
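As before, you can confirm the change was rolled out by reading the ConfigMap value back (a quick sketch):
$ kubectl -n $CILIUM_NAMESPACE get configmap cilium-config -o jsonpath='{.data.policy-audit-mode}'
false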
Now if we run the landing requests again, only the tiefighter pods with the
label org=empire will succeed. The xwing pods will be blocked!
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
This works as expected. Now the same request run from an xwing pod will fail:
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
This request will hang, so press Control-C to kill the curl request, or wait for it to time out.
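To avoid having to interrupt the hanging request manually, you can bound the wait with curl's --max-time flag (an optional convenience; the request still fails, which is the expected result here, and curl gives up after roughly 10 seconds with a timeout error):
$ kubectl exec xwing -- curl -s --max-time 10 -XPOST deathstar.default.svc.cluster.local/v1/request-landing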
We hope you enjoyed the tutorial. Feel free to play more with the setup, follow the Identity-Aware and HTTP-Aware Policy Enforcement guide, and reach out to us on the Cilium Slack channel with any questions!
Clean-up¶
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/http-sw-app.yaml
$ kubectl delete cnp empire-default-deny rule1