Getting Started Using Istio

This document serves as an introduction to using Cilium to enforce security policies in Kubernetes microservices managed with Istio. It is a detailed walk-through of getting a single-node Cilium + Istio environment running on your machine.

If you haven’t read the Introduction to Cilium yet, we’d encourage you to do that first.

The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

Step 0: Install kubectl & minikube

  1. Install kubectl version >= 1.7.0 as described in the Kubernetes Docs.
  2. Install one of the hypervisors supported by minikube.
  3. Install minikube >= 0.22.3 as described on minikube’s GitHub page.

Boot a minikube cluster with the Container Network Interface (CNI) network plugin and the localkube bootstrapper.

The localkube bootstrapper provides etcd >= 3.1.0, a Cilium dependency.

$ minikube start --network-plugin=cni --extra-config=kubelet.network-plugin=cni --memory=4096

After minikube has finished setting up your new Kubernetes cluster, you can check the status of the cluster by running kubectl get cs:

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Install a standalone etcd as a dependency of Cilium by running:

$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes/addons/etcd/standalone-etcd.yaml
service "etcd-cilium" created
statefulset.apps "etcd-cilium" created

To check that all pods are Running and 100% ready, including kube-dns and etcd-cilium-0, run:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
default       etcd-cilium-0                      1/1       Running   0          1m
kube-system   etcd-minikube                      1/1       Running   0          3m
kube-system   kube-addon-manager-minikube        1/1       Running   0          4m
kube-system   kube-apiserver-minikube            1/1       Running   0          3m
kube-system   kube-controller-manager-minikube   1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-lhzfv          3/3       Running   0          4m
kube-system   kube-proxy-tcd7h                   1/1       Running   0          4m
kube-system   kube-scheduler-minikube            1/1       Running   0          4m
kube-system   storage-provisioner                1/1       Running   0          4m

If you see output similar to this, you are ready to proceed to the next step.

Note

The output might differ between minikube versions; in any case, you should expect all pods to be in the READY / Running state before continuing.
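
Rather than polling, you can also watch the pod list until everything settles (press Ctrl-C to stop):

$ kubectl get pods --all-namespaces --watch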

Step 1: Install Cilium

The next step is to install Cilium into your Kubernetes cluster. The Cilium installation leverages the Kubernetes DaemonSet abstraction, which will deploy one Cilium pod per cluster node. This Cilium pod will run in the kube-system namespace along with all other system-relevant daemons and services. The Cilium pod will run both the Cilium agent and the Cilium CNI plugin.

To deploy Cilium, pick the manifest matching your Kubernetes server version (version-specific manifests for Kubernetes 1.7 through 1.11 are available under examples/kubernetes/<version>/cilium.yaml) and run, for example on Kubernetes 1.9:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes/1.9/cilium.yaml | \
  sed -e 's/sidecar-http-proxy: "false"/sidecar-http-proxy: "true"/' | \
  kubectl create -f -

configmap "cilium-config" created
secret "cilium-etcd-secrets" created
daemonset.extensions "cilium" created
clusterrolebinding.rbac.authorization.k8s.io "cilium" created
clusterrole.rbac.authorization.k8s.io "cilium" created
serviceaccount "cilium" created

Kubernetes is now deploying Cilium with its RBAC settings, ConfigMap and DaemonSet as a pod on minikube. This operation is performed in the background.

Note that this Cilium configuration requires deploying Istio with sidecar proxies in order to filter HTTP traffic at Layer-7 for any pod.
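
To confirm that the sed substitution above took effect, you can grep the deployed ConfigMap; the sidecar-http-proxy key should now be set to "true":

$ kubectl get configmap cilium-config -n kube-system -o yaml | grep sidecar-http-proxy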

Run the following command to check the progress of the deployment:

$ kubectl get daemonsets -n kube-system
NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
cilium    1         1         0         1            0           <none>          6s

Wait until the cilium DaemonSet shows a CURRENT count of 1 like above (a READY value of 0 is OK for this tutorial).
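
Optionally, once the Cilium pod is up, you can ask the agent for its own view of its health with the cilium status command. This sketch assumes the DaemonSet labels its pods with k8s-app=cilium; adjust the selector if your manifest differs:

$ export POD_CILIUM=`kubectl get pods -n kube-system -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}'`
$ kubectl exec ${POD_CILIUM} -n kube-system -- cilium status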

Step 2: Install Istio

Download Istio version 0.7.0:

$ export ISTIO_VERSION=0.7.0
$ curl -L https://git.io/getLatestIstio | sh -
$ export ISTIO_HOME=`pwd`/istio-${ISTIO_VERSION}
$ export PATH="$PATH:${ISTIO_HOME}/bin"
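
As a quick sanity check that the istioctl binary is on your PATH and matches the downloaded release, run:

$ istioctl version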

Deploy Istio on Kubernetes, with a Cilium-specific variant of Pilot which injects the Cilium network policy filters into each Istio sidecar proxy:

$ sed -e 's,docker\.io/istio/pilot:,docker.io/cilium/istio_pilot:,' \
      < ${ISTIO_HOME}/install/kubernetes/istio.yaml | \
      kubectl create -f -

Configure Istio’s sidecar injection to use Cilium’s Docker images for the sidecar proxies:

$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/istio-sidecar-injector-configmap-release.yaml

Check the progress of the deployment (every service should have an AVAILABLE count of 1):

$ kubectl get deployments -n istio-system
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
istio-ca        1         1         1            1           2m
istio-ingress   1         1         1            1           2m
istio-mixer     1         1         1            1           2m
istio-pilot     1         1         1            1           2m

Once all Istio pods are ready, we can install the demo application.
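
You can also confirm that the sidecar injection ConfigMap deployed above is in place; its name, istio-inject, is passed to istioctl kube-inject in the next step (this assumes the manifest created it in the istio-system namespace):

$ kubectl get configmap istio-inject -n istio-system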

Step 3: Deploy the Bookinfo Application V1

Now that we have Cilium and Istio deployed, we can deploy version v1 of the services of the Istio Bookinfo sample application.

The Bookinfo application is broken into four separate microservices:

  • productpage. The productpage microservice calls the details and reviews microservices to populate the page.
  • details. The details microservice contains book information.
  • reviews. The reviews microservice contains book reviews. It also calls the ratings microservice.
  • ratings. The ratings microservice contains book ranking information that accompanies a book review.

In this demo, each specific version of each microservice is deployed into Kubernetes using separate YAML files which define:

  • A Kubernetes Service.
  • A Kubernetes Deployment specifying the microservice’s pods, specific to each service version.
  • A Cilium Network Policy limiting the traffic to the microservice, specific to each service version.
../../_images/istio-bookinfo-v1.png

To be managed by Istio, each Deployment must be packaged with Istio’s Envoy sidecar proxy, which is done by running the istioctl kube-inject command on each YAML file. The resulting YAML files must then be adapted to mount Cilium’s API Unix domain sockets into the sidecar, so that Cilium’s Envoy filters can query the Cilium agent. This adaptation can be done with the cilium-kube-inject.sed script:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/cilium-kube-inject.sed > ./cilium-kube-inject.sed
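
If you are curious what the adaptation changes, you can preview the generated YAML for a single service without applying anything; the exact names of the injected volumes and mounts depend on the script’s contents:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/bookinfo-details-v1.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      grep -i -B1 -A3 cilium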

To package the Istio sidecar proxy and generate final YAML specifications, run:

$ for service in productpage-service productpage-v1 details-v1 reviews-v1; do \
      curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/bookinfo-${service}.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      kubectl create -f - ; done
service "productpage" created
ciliumnetworkpolicy "productpage-v1" created
deployment "productpage-v1" created
service "details" created
ciliumnetworkpolicy "details-v1" created
deployment "details-v1" created
service "reviews" created
ciliumnetworkpolicy "reviews-v1" created
deployment "reviews-v1" created

Check the progress of the deployment (every service should have an AVAILABLE count of 1):

$ kubectl get deployments -n default
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
details-v1       1         1         1            1           6m
productpage-v1   1         1         1            1           6m
reviews-v1       1         1         1            1           6m

To obtain the URL to the frontend productpage service, run:

$ export PRODUCTPAGE=`minikube service productpage -n default --url`
$ echo "Open URL: ${PRODUCTPAGE}/productpage"

Open that URL in your web browser and check that the application has been successfully deployed.
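
If you prefer the command line, the same check can be done with curl; it should print 200 once the application is up:

$ curl -s -o /dev/null -w "%{http_code}\n" "${PRODUCTPAGE}/productpage"
200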

Step 4: Canary and Deploy the Reviews Service V2

We will now deploy version v2 of the reviews service. In addition to providing reviews from readers, reviews v2 queries a new ratings service for book ratings, and displays each rating as 1 to 5 black stars.

As a precaution, we will use Istio’s service routing feature to canary the v2 deployment to prevent breaking the end-to-end application completely if it is faulty.

Before deploying v2, to prevent any traffic from being routed to it for now, we will create this Istio route rule to route 100% of the reviews traffic to v1:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
../../_images/istio-bookinfo-reviews-v2-route-to-v1.png

Apply this route rule:

$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/route-rule-reviews-v1.yaml
routerule "reviews-default" created
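
Istio 0.7 stores route rules as Kubernetes custom resources, so you can read back the rule that was just created and compare it with the YAML above:

$ kubectl get routerule reviews-default -o yaml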

Deploy the ratings v1 and reviews v2 services:

$ for service in ratings-v1 reviews-v2; do \
      curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/bookinfo-${service}.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      kubectl create -f - ; done
service "ratings" created
ciliumnetworkpolicy "ratings-v1" created
deployment "ratings-v1" created
ciliumnetworkpolicy "reviews-v2" created
deployment "reviews-v2" created

Check the progress of the deployment (every service should have an AVAILABLE count of 1):

$ kubectl get deployments -n default
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
details-v1       1         1         1            1           6m
productpage-v1   1         1         1            1           6m
ratings-v1       1         1         1            1           57s
reviews-v1       1         1         1            1           6m
reviews-v2       1         1         1            1           57s

Check in your web browser that no stars are appearing in the Book Reviews, even after refreshing the page several times. This indicates that all reviews are retrieved from reviews v1 and none from reviews v2.

../../_images/istio-bookinfo-reviews-v1.png

The ratings-v1 CiliumNetworkPolicy explicitly whitelists access to the ratings API only from productpage v1 and reviews v2:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ratings-v1
  namespace: default
specs:
  - endpointSelector:
      matchLabels:
        "k8s:app": ratings
        "k8s:version": v1
    ingress:
    - fromEndpoints:
      - matchLabels:
          "k8s:app": productpage
          "k8s:version": v1
      toPorts:
      - ports:
        - port: "9080"
          protocol: TCP
        rules:
          http:
          - method: GET
            path: "/ratings/[0-9]*"
    - fromEndpoints:
      - matchLabels:
          "k8s:app": reviews
          "k8s:version": v2
      toPorts:
      - ports:
        - port: "9080"
          protocol: TCP
        rules:
          http:
          - method: GET
            path: "/ratings/[0-9]*"

Check that reviews v1 cannot access the ratings service, even if it were compromised or suffered from a bug, by running curl from within the pod:

$ export POD_REVIEWS_V1=`kubectl get pods -n default -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}'`
$ kubectl exec ${POD_REVIEWS_V1} -c istio-proxy -- curl --connect-timeout 5 http://ratings:9080/ratings/0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Connection timed out after 5000 milliseconds
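
Conversely, the policy whitelists reviews v2, so the same request from a reviews v2 pod should succeed and return the ratings JSON (this mirrors the command above and assumes reviews v2 is already deployed, as done earlier in this step):

$ export POD_REVIEWS_V2=`kubectl get pods -n default -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}'`
$ kubectl exec ${POD_REVIEWS_V2} -c istio-proxy -- curl -s --connect-timeout 5 http://ratings:9080/ratings/0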

Update the Istio route rule to send 50% of reviews traffic to v1 and 50% to v2:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50
../../_images/istio-bookinfo-reviews-v2-route-to-v1-and-v2.png

Apply this route rule:

$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/route-rule-reviews-v1-v2.yaml
routerule "reviews-default" configured

Check in your web browser that stars appear in the Book Reviews roughly 50% of the time; you may need to refresh the page several times to observe this. Queries to reviews v2 result in reviews containing ratings displayed as black stars:

../../_images/istio-bookinfo-reviews-v2.png
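
To quantify the split from the command line, a rough sampling loop like the following can help; it assumes the rendered page marks ratings with the glyphicon-star CSS class, so roughly half of the samples should report a non-zero count:

$ for i in $(seq 1 10); do curl -s "${PRODUCTPAGE}/productpage" | grep -c "glyphicon-star"; done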

Finally, update the route rule to send 100% of reviews traffic to v2:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v2
../../_images/istio-bookinfo-reviews-v2-route-to-v2.png

Apply this route rule:

$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/route-rule-reviews-v2.yaml
routerule "reviews-default" configured

Refresh the product page in your web browser several times to verify that stars are now appearing in the Book Reviews on every page refresh. All the reviews are now retrieved from reviews v2 and none from reviews v1.

Step 5: Deploy the Product Page Service V2

We will now deploy version v2 of the productpage service, which brings two changes:

  • It is deployed with a more restrictive CiliumNetworkPolicy, which restricts access to a subset of the HTTP URLs, at Layer-7.
  • It writes a new authentication audit log into Kafka.
../../_images/istio-bookinfo-productpage-v2-kafka.png

Because productpage v2 sends messages into Kafka, we must first deploy a Kafka broker:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/kafka-v1.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      kubectl create -f -
service "kafka" created
ciliumnetworkpolicy "kafka-authaudit" created
statefulset "kafka-v1" created

Wait until the kafka-v1-0 pod is ready, i.e. until it has a READY count of 2/2:

$ kubectl get pods -n default -l app=kafka
NAME         READY     STATUS    RESTARTS   AGE
kafka-v1-0   2/2       Running   0          21m

Create the authaudit Kafka topic, which will be used by productpage v2:

$ kubectl exec kafka-v1-0 -c kafka -- bash -c '/opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --zookeeper localhost:2181/kafka --create --topic authaudit --partitions 1 --replication-factor 1'
Created topic "authaudit".
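
To double-check, list the topics known to the broker; authaudit should appear in the output:

$ kubectl exec kafka-v1-0 -c kafka -- bash -c '/opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --zookeeper localhost:2181/kafka --list'
authaudit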

We are now ready to deploy productpage v2.

The policy for v1 currently allows read access to the full HTTP REST API, under the /api/v1 HTTP URI path:

  • /api/v1/products: Returns the list of books and their details.
  • /api/v1/products/<id>: Returns details about a specific book.
  • /api/v1/products/<id>/reviews: Returns reviews for a specific book.
  • /api/v1/products/<id>/ratings: Returns ratings for a specific book.

Check that the full REST API is currently accessible in v1 and returns valid JSON data:

$ export PRODUCTPAGE=`minikube service productpage -n default --url`
$ for APIPATH in /api/v1/products /api/v1/products/0 /api/v1/products/0/reviews /api/v1/products/0/ratings; do echo ; curl -s -S "${PRODUCTPAGE}${APIPATH}" ; echo ; done

[{"descriptionHtml": "<a href=\"https://en.wikipedia.org/wiki/The_Comedy_of_Errors\">Wikipedia Summary</a>: The Comedy of Errors is one of <b>William Shakespeare's</b> early plays. It is his shortest and one of his most farcical comedies, with a major part of the humour coming from slapstick and mistaken identity, in addition to puns and word play.", "id": 0, "title": "The Comedy of Errors"}]

{"publisher": "PublisherA", "language": "English", "author": "William Shakespeare", "id": 0, "ISBN-10": "1234567890", "ISBN-13": "123-1234567890", "year": 1595, "type": "paperback", "pages": 200}

{"reviews": [{"reviewer": "Reviewer1", "rating": {"color": "black", "stars": 5}, "text": "An extremely entertaining play by Shakespeare. The slapstick humour is refreshing!"}, {"reviewer": "Reviewer2", "rating": {"color": "black", "stars": 4}, "text": "Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare."}], "id": "0"}

{"ratings": {"Reviewer2": 4, "Reviewer1": 5}, "id": 0}

We realized that the REST API to get the book reviews and ratings was meant only for consumption by other internal services. It will therefore be blocked from external clients by the updated Layer-7 CiliumNetworkPolicy in productpage v2, i.e. only the /api/v1/products and /api/v1/products/<id> HTTP URLs will be whitelisted:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: productpage-v2
  namespace: default
specs:
  - endpointSelector:
      matchLabels:
        "k8s:app": productpage
        "k8s:version": v2
    ingress:
    - toPorts:
      - ports:
        - port: "9080"
          protocol: TCP
        rules:
          http:
          - method: GET
            path: "/"
          - method: GET
            path: "/index.html"
          - method: POST
            path: "/login"
          - method: GET
            path: "/logout"
          - method: GET
            path: "/productpage"
          - method: GET
            path: "/api/v1/products"
          - method: GET
            path: "/api/v1/products/[0-9]*"
#          - method: GET
#            path: "/api/v1/products/[0-9]*/reviews"
#          - method: GET
#            path: "/api/v1/products/[0-9]*/ratings"

Create the productpage v2 service and its updated CiliumNetworkPolicy and delete productpage v1:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/bookinfo-productpage-v2.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      kubectl create -f -
ciliumnetworkpolicy "productpage-v2" created
deployment "productpage-v2" created

$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/bookinfo-productpage-v1.yaml

Check the progress of the deployment (every service should have an AVAILABLE count of 1):

$ kubectl get deployments -n default
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
details-v1       1         1         1            1           15m
productpage-v2   1         1         1            1           1m
ratings-v1       1         1         1            1           10m
reviews-v1       1         1         1            1           15m
reviews-v2       1         1         1            1           10m

Check that the product REST API is still accessible, and that Cilium now denies at Layer-7 any access to the reviews and ratings REST API:

$ export PRODUCTPAGE=`minikube service productpage -n default --url`
$ for APIPATH in /api/v1/products /api/v1/products/0 /api/v1/products/0/reviews /api/v1/products/0/ratings; do echo ; curl -s -S "${PRODUCTPAGE}${APIPATH}" ; echo ; done

[{"descriptionHtml": "<a href=\"https://en.wikipedia.org/wiki/The_Comedy_of_Errors\">Wikipedia Summary</a>: The Comedy of Errors is one of <b>William Shakespeare's</b> early plays. It is his shortest and one of his most farcical comedies, with a major part of the humour coming from slapstick and mistaken identity, in addition to puns and word play.", "id": 0, "title": "The Comedy of Errors"}]

{"publisher": "PublisherA", "language": "English", "author": "William Shakespeare", "id": 0, "ISBN-10": "1234567890", "ISBN-13": "123-1234567890", "year": 1595, "type": "paperback", "pages": 200}

Access denied


Access denied

This demonstrated that requests to the /api/v1/products/<id>/reviews and /api/v1/products/<id>/ratings URIs now result in Cilium returning HTTP 403 Forbidden responses.

productpage v2 also implements authentication audit logging. On every user login or logout, it produces into the Kafka topic authaudit a JSON-formatted message which contains the following information:

  • event: login or logout
  • username
  • client IP address
  • timestamp

To observe the Kafka messages sent by productpage, we will run an additional authaudit-logger service. This service fetches and prints out all messages from the authaudit Kafka topic. Start this service:

$ curl -s https://raw.githubusercontent.com/cilium/cilium/v1.0/examples/kubernetes-istio/authaudit-logger-v1.yaml | \
      istioctl kube-inject --injectConfigMapName istio-inject -f - | \
      sed -f ./cilium-kube-inject.sed | \
      kubectl apply -f -

Check the progress of the deployment (every service should have an AVAILABLE count of 1):

$ kubectl get deployments -n default
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
authaudit-logger-v1   1         1         1            1           23s
details-v1            1         1         1            1           16m
productpage-v2        1         1         1            1           2m
ratings-v1            1         1         1            1           11m
reviews-v1            1         1         1            1           16m
reviews-v2            1         1         1            1           11m

Every login and logout on the product page will result in a line in this service’s log:

$ export POD_LOGGER_V1=`kubectl get pods -n default -l app=authaudit-logger,version=v1 -o jsonpath='{.items[0].metadata.name}'`
$ kubectl logs ${POD_LOGGER_V1} -c authaudit-logger
...
{"timestamp": "2017-12-04T09:34:24.341668", "remote_addr": "10.15.28.238", "event": "login", "user": "richard"}
{"timestamp": "2017-12-04T09:34:40.943772", "remote_addr": "10.15.28.238", "event": "logout", "user": "richard"}
{"timestamp": "2017-12-04T09:35:03.096497", "remote_addr": "10.15.28.238", "event": "login", "user": "gilfoyle"}
{"timestamp": "2017-12-04T09:35:08.777389", "remote_addr": "10.15.28.238", "event": "logout", "user": "gilfoyle"}

As you can see, the user-identifiable information sent by productpage in every Kafka message is sensitive, so access to this Kafka topic must be protected using Cilium. The CiliumNetworkPolicy configured on the Kafka broker enforces that:

  • only productpage v2 is allowed to produce messages into the authaudit topic;
  • only authaudit-logger can fetch messages from this topic;
  • no service can access any other topic.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: kafka-authaudit
specs:
  - endpointSelector:
      matchLabels:
        "k8s:app": kafka
    ingress:
    - fromEndpoints:
      - matchLabels:
          "k8s:app": productpage
          "k8s:version": v2
      toPorts:
      - ports:
        - port: "9092"
          protocol: TCP
        rules:
          kafka:
          - apiKey: "produce"
            topic: "authaudit"
          - apiKey: "apiversions"
          - apiKey: "metadata"
          - apiKey: "heartbeat"
    - fromEndpoints:
      - matchLabels:
          app: kafka
    - fromEndpoints:
      - matchLabels:
          "k8s:app": authaudit-logger
      toPorts:
      - ports:
        - port: "9092"
          protocol: TCP
        rules:
          kafka:
          - apiKey: "fetch"
            topic: "authaudit"
          - apiKey: "apiversions"
          - apiKey: "metadata"
          - apiKey: "findcoordinator"
          - apiKey: "joingroup"
          - apiKey: "leavegroup"
          - apiKey: "syncgroup"
          - apiKey: "offsets"
          - apiKey: "offsetcommit"
          - apiKey: "offsetfetch"
          - apiKey: "heartbeat"

Check that Cilium prevents the authaudit-logger service from writing into the authaudit topic (enter a message followed by ENTER, e.g. test message):

$ export POD_LOGGER_V1=`kubectl get pods -n default -l app=authaudit-logger,version=v1 -o jsonpath='{.items[0].metadata.name}'`
$ kubectl exec ${POD_LOGGER_V1} -c authaudit-logger -ti -- /opt/kafka_2.11-0.10.1.0/bin/kafka-console-producer.sh --broker-list=kafka:9092 --topic=authaudit
test message
[2017-12-07 02:13:47,020] ERROR Error when sending message to topic authaudit with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [authaudit]

This demonstrated that Cilium sent a response with an authorization error for any Produce request from this service.

Create another topic named credit-card-payments, meant to transmit highly-sensitive credit card payment requests:

$ kubectl exec kafka-v1-0 -c kafka -- bash -c '/opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --zookeeper localhost:2181/kafka --create --topic credit-card-payments --partitions 1 --replication-factor 1'
Created topic "credit-card-payments".

Check that Cilium prevents the authaudit-logger service from fetching messages from this topic:

$ export POD_LOGGER_V1=`kubectl get pods -n default -l app=authaudit-logger,version=v1 -o jsonpath='{.items[0].metadata.name}'`
$ kubectl exec ${POD_LOGGER_V1} -c authaudit-logger -ti -- /opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh --bootstrap-server=kafka:9092 --topic=credit-card-payments
[2017-12-07 03:08:54,513] WARN Not authorized to read from topic credit-card-payments. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2017-12-07 03:08:54,517] ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [credit-card-payments]
Processed a total of 0 messages

This demonstrated that Cilium sent a response with an authorization error for any Fetch request from this service for any topic other than authaudit.

Step 6: Clean Up

You have now installed Cilium and Istio, deployed a demo app, and tested both Cilium’s L3-L7 network security policies and Istio’s service route rules. To clean up, run:

$ minikube delete

After this, you can re-run the tutorial from Step 0.