End-To-End Testing Framework (Legacy)
Introduction
This section provides an overview of the two modes available for running Cilium’s end-to-end tests locally: Vagrant and a GitHub Actions (GHA)-like environment. It offers instructions on setting up and running tests in both modes.
Before proceeding, it is recommended to familiarize yourself with Ginkgo by reading the Ginkgo Getting-Started Guide. You can also run the example tests to get a feel for the Ginkgo workflow.
The tests in the test directory are built on top of Ginkgo and utilize the
Ginkgo focus concept to determine which virtual machines (VMs) are necessary
to run specific tests in Vagrant mode. All test names must begin with one of
the following prefixes:

Runtime: Tests Cilium in a runtime environment running on a single node.

K8s: Sets up a small multi-node Kubernetes environment for testing features beyond a single host and Kubernetes-specific functionalities.
Running Tests with GitHub Actions (GHA)
GitHub Actions provide an alternative mode for running Cilium’s end-to-end tests. The configuration is set up to closely match the environment used in GHA. Refer to the relevant documentation for instructions on running tests using GHA.
Running Tests with Vagrant
To run tests locally using Vagrant, the test scripts invoke vagrant to create
virtual machine(s). These tests utilize the Ginkgo testing framework,
leveraging its rich capabilities as well as the benefits of Go’s compile-time
checks and strong typing.
Running End-To-End Tests
Running Ginkgo Tests Locally Based on the GitHub Workflow
Although it is not possible to run conformance-ginkgo.yaml or
conformance-runtime.yaml locally, it is possible to set up an environment
similar to the one used on GitHub.
The following example provides the steps to run one of the tests of the
focus f09-datapath-misc-2 on Kubernetes 1.27 with the net-next kernel
for the commit SHA 7b368923823e63c9824ea2b5ee4dc026bc4d5cd8.
Download the dependencies locally (helm, ginkgo).

For helm, the instructions can be found here:

$ HELM_VERSION=3.7.0
$ wget "https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz"
$ tar -xf "helm-v${HELM_VERSION}-linux-amd64.tar.gz"
$ mv linux-amd64/helm ./helm

Store these dependencies under a specific directory that will be used to run Qemu in the next steps.

For ginkgo, we will be using the same version used on GitHub Actions:

$ cd ~/
$ go install github.com/onsi/ginkgo/ginkgo@v1.16.5
$ ${GOPATH}/bin/ginkgo version
Ginkgo Version 1.16.5
Build the Ginkgo tests locally. This will create a binary named test.test
which we can use later on to run our tests.

$ cd github.com/cilium/cilium/test
$ ${GOPATH}/bin/ginkgo build
Provision VMs using Qemu:
Retrieve the image tag for the k8s and kernel versions that will be used for
testing by checking the file .github/actions/ginkgo/main-k8s-versions.yaml.
For example:

kernel: bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411
k8s: kindest/node:v1.27.1@sha256:b7d12ed662b873bd8510879c1846e87c7e676a79fefc93e17b2a52989d3ff42b
Store the compressed VM image under a directory (/tmp/_images):

$ mkdir -p /tmp/_images
$ kernel_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411"
$ docker run -v /tmp/_images:/mnt/images \
    "quay.io/lvh-images/kind:${kernel_tag}" \
    cp -r /data/images/. /mnt/images/
Uncompress the VM image into a directory.
$ zstd -d /tmp/_images/kind_*.qcow2.zst -o /tmp/_images/datapath-conformance.qcow2
Provision the VM. Qemu will use the current terminal to provision the VM and
will mount the current directory into the VM under /host.

$ qemu-system-x86_64 \
    -nodefaults \
    -no-reboot \
    -smp 4 \
    -m 12G \
    -enable-kvm \
    -cpu host \
    -hda /tmp/_images/datapath-conformance.qcow2 \
    -netdev user,id=user.0,hostfwd=tcp::2222-:22 \
    -device virtio-net-pci,netdev=user.0 \
    -fsdev local,id=host_id,path=./,security_model=none \
    -device virtio-9p-pci,fsdev=host_id,mount_tag=host_mount \
    -serial mon:stdio
Install the dependencies in the VM (helm):

$ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
# cd /host
# echo "nameserver 8.8.8.8" > /etc/resolv.conf
# git config --global --add safe.directory /host
# cp ./helm /usr/bin
The VM is ready to be used for tests. Similarly to the GitHub Action, Kind
will also be used to run the CI. The provisioning of Kind differs depending
on the kernel version that is used, i.e., the ginkgo tests are meant to run
differently on bpf-next.

$ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
# cd /host/
# kernel_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411"
# kubernetes_image="kindest/node:v1.27.1@sha256:b7d12ed662b873bd8510879c1846e87c7e676a79fefc93e17b2a52989d3ff42b"
# ip_family="dual" # replace with "ipv4" if k8s 1.19
#
# if [[ "${kernel_tag}" == bpf-next-* ]]; then
#   ./contrib/scripts/kind.sh "" 2 "" "${kubernetes_image}" "none" "${ip_family}"
#   kubectl label node kind-worker2 cilium.io/ci-node=kind-worker2
#   # Avoid re-labeling this node by setting "node-role.kubernetes.io/controlplane"
#   kubectl label node kind-worker2 node-role.kubernetes.io/controlplane=
# else
#   ./contrib/scripts/kind.sh "" 1 "" "${kubernetes_image}" "iptables" "${ip_family}"
# fi
#
# # Some tests using demo-customcalls.yaml are mounting this directory
# mkdir -p /home/vagrant/go/src/github.com/cilium
# ln -s /host /home/vagrant/go/src/github.com/cilium/cilium
# git config --add safe.directory /cilium
Verify that Kind is running inside the VM:

$ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-787d4945fb-hqzpb                     0/1     Pending   0          42s
kube-system          coredns-787d4945fb-tkq86                     0/1     Pending   0          42s
kube-system          etcd-kind-control-plane                      1/1     Running   0          57s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          57s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          56s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          56s
local-path-storage   local-path-provisioner-6bd6454576-648bk      0/1     Pending   0          42s
Now that Kind is provisioned, the tests can be executed inside the VM. First,
retrieve the focus regex, under cliFocus, of f09-datapath-misc-2 from
.github/actions/ginkgo/main-focus.yaml:

cliFocus="K8sDatapathConfig Check|K8sDatapathConfig IPv4Only|K8sDatapathConfig High-scale|K8sDatapathConfig Iptables|K8sDatapathConfig IPv4Only|K8sDatapathConfig IPv6|K8sDatapathConfig Transparent"
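This retrieval can also be scripted with standard tools. The YAML below is a simplified, hypothetical stand-in for the layout of .github/actions/ginkgo/main-focus.yaml, not its exact schema:

```shell
# Extract the cliFocus value for a given focus name from a YAML file.
# NOTE: the sample file below is a simplified assumption of the real layout.
cat > /tmp/main-focus-sample.yaml <<'EOF'
include:
  - focus: "f09-datapath-misc-2"
    cliFocus: "K8sDatapathConfig Check|K8sDatapathConfig IPv6"
EOF

focus_name="f09-datapath-misc-2"
cliFocus=$(awk -v f="$focus_name" '
  /- focus:/ { cur = $3; gsub(/"/, "", cur) }   # remember the current focus name
  /cliFocus:/ && cur == f {                     # matching entry found
    sub(/^[[:space:]]*cliFocus:[[:space:]]*"/, "")
    sub(/"$/, "")
    print
  }' /tmp/main-focus-sample.yaml)
echo "$cliFocus"
```

A YAML-aware tool would be more robust for the real file; the sketch only shows the shape of the lookup.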
Run the binary test.test that was compiled in the previous step. The
following code block is exactly the same as used on the GitHub workflow with
one exception: the flag -cilium.holdEnvironment=true. This flag holds the
testing environment in case the test fails, to allow for further diagnosis
of the current cluster.

$ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
# cd /host/test
# kernel_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411"
# k8s_version="1.27"
#
# export K8S_NODES=2
# export NETNEXT=0
# export K8S_VERSION="${k8s_version}"
# export CNI_INTEGRATION=kind
# export INTEGRATION_TESTS=true
#
# if [[ "${kernel_tag}" == bpf-next-* ]]; then
#   export KERNEL=net-next
#   export NETNEXT=1
#   export KUBEPROXY=0
#   export K8S_NODES=3
#   export NO_CILIUM_ON_NODES=kind-worker2
# elif [[ "${kernel_tag}" == 4.19-* ]]; then
#   export KERNEL=419
# elif [[ "${kernel_tag}" == 5.4-* ]]; then
#   export KERNEL=54
# fi
#
# # GitHub actions do not support IPv6 connectivity to outside
# # world. If the infrastructure environment supports it, then
# # this line can be removed
# export CILIUM_NO_IPV6_OUTSIDE=true
#
# commit_sha="7b368923823e63c9824ea2b5ee4dc026bc4d5cd8"
# cliFocus="K8sDatapathConfig Check|K8sDatapathConfig IPv4Only|K8sDatapathConfig High-scale|K8sDatapathConfig Iptables|K8sDatapathConfig IPv4Only|K8sDatapathConfig IPv6|K8sDatapathConfig Transparent"
# quay_org="cilium"
#
# ./test.test \
#   --ginkgo.focus="${cliFocus}" \
#   --ginkgo.skip="" \
#   --ginkgo.seed=1679952881 \
#   --ginkgo.v -- \
#   -cilium.provision=false \
#   -cilium.image=quay.io/${quay_org}/cilium-ci \
#   -cilium.tag=${commit_sha} \
#   -cilium.operator-image=quay.io/${quay_org}/operator \
#   -cilium.operator-tag=${commit_sha} \
#   -cilium.hubble-relay-image=quay.io/${quay_org}/hubble-relay-ci \
#   -cilium.hubble-relay-tag=${commit_sha} \
#   -cilium.kubeconfig=/root/.kube/config \
#   -cilium.provision-k8s=false \
#   -cilium.operator-suffix=-ci \
#   -cilium.holdEnvironment=true
Using CNI_INTEGRATION="kind"
Running Suite: Suite-k8s-1.27
=============================
Random Seed: 1679952881
Will run 7 of 132 specs
Wait until the test execution completes.
Ran 7 of 132 Specs in 721.007 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 125 Skipped
Clean up.
Once the tests have finished, terminate the qemu process:
$ pkill qemu-system-x86
The VM state is kept in /tmp/_images/datapath-conformance.qcow2 with the
dependencies installed. Thus, on subsequent runs, the steps before the Kind
installation step can be skipped and the VM state can be re-used from the
Kind installation step onwards.
Running All Ginkgo Tests
Running all of the Ginkgo tests may take an hour or longer. To run all the ginkgo tests, invoke the make command as follows from the root of the cilium repository:
$ sudo make -C test/ test
The first time that this is invoked, the testsuite will pull the testing VMs and provision Cilium into them. This may take several minutes, depending on your internet connection speed. Subsequent runs of the test will reuse the image.
Running Runtime Tests
To run all of the runtime tests, execute the following command from the test
directory:
INTEGRATION_TESTS=true ginkgo --focus="Runtime"
Ginkgo searches all subdirectories for tests whose names begin with the string “Runtime” followed by any characters. For instance, here is an example showing which tests will be run using Ginkgo’s dryRun option:
$ INTEGRATION_TESTS=true ginkgo --focus="Runtime" -dryRun
Running Suite: runtime
======================
Random Seed: 1516125117
Will run 42 of 164 specs
................
RuntimePolicyEnforcement Policy Enforcement Always
Always to Never with policy
/Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:258
•
------------------------------
RuntimePolicyEnforcement Policy Enforcement Always
Always to Never without policy
/Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:293
•
------------------------------
RuntimePolicyEnforcement Policy Enforcement Never
Container creation
/Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:332
•
------------------------------
RuntimePolicyEnforcement Policy Enforcement Never
Never to default with policy
/Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:349
.................
Ran 42 of 164 Specs in 0.002 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 122 Skipped PASS
Ginkgo ran 1 suite in 1.830262168s
Test Suite Passed
The output has been truncated. For more information about this functionality, consult the aforementioned Ginkgo documentation.
Running Kubernetes Tests
To run all of the Kubernetes tests, run the following command from the test
directory:
INTEGRATION_TESTS=true ginkgo --focus="K8s"
To run a specific test from the Kubernetes tests suite, run the following command
from the test
directory:
INTEGRATION_TESTS=true ginkgo --focus="K8s.*Check iptables masquerading with random-fully"
Similar to the Runtime test suite, Ginkgo searches for all tests in all subdirectories that are “named” beginning with the string “K8s” and contain any characters after it.
The Kubernetes tests support the following Kubernetes versions:
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
By default, the Vagrant VMs are provisioned with Kubernetes 1.23. To run with any other supported version of Kubernetes, run the test suite with the following format:
INTEGRATION_TESTS=true K8S_VERSION=<version> ginkgo --focus="K8s"
Note
When provisioning VMs with the net-next kernel (NETNEXT=1) on a
VirtualBox version that does not match the version of the VirtualBox
Guest Additions in the VM image, Vagrant will install a new version of
the Additions with mount.vboxsf. The latter is not compatible with the
vboxsf.ko shipped within the VM image, and thus syncing of shared
folders will not work.
To avoid this, one can prevent Vagrant from installing the Additions by
putting the following into $HOME/.vagrant.d/Vagrantfile:
Vagrant.configure('2') do |config|
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
end
config.vm.provider :virtualbox do |vbox|
vbox.check_guest_additions = false
end
end
Available CLI Options
For more advanced workflows, check the list of available custom options for the Cilium
framework in the test/
directory and interact with ginkgo directly:
$ cd test/
$ ginkgo . -- -cilium.help
-cilium.SSHConfig string
Specify a custom command to fetch SSH configuration (eg: 'vagrant ssh-config')
-cilium.help
Display this help message.
-cilium.holdEnvironment
On failure, hold the environment in its current state
-cilium.hubble-relay-image string
Specifies which image of hubble-relay to use during tests
-cilium.hubble-relay-tag string
Specifies which tag of hubble-relay to use during tests
-cilium.image string
Specifies which image of cilium to use during tests
-cilium.kubeconfig string
Kubeconfig to be used for k8s tests
-cilium.multinode
Enable tests across multiple nodes. If disabled, such tests may silently pass (default true)
-cilium.operator-image string
Specifies which image of cilium-operator to use during tests
-cilium.operator-tag string
Specifies which tag of cilium-operator to use during tests
-cilium.passCLIEnvironment
Pass the environment invoking ginkgo, including PATH, to subcommands
-cilium.provision
Provision Vagrant boxes and Cilium before running test (default true)
-cilium.provision-k8s
Specifies whether Kubernetes should be deployed and installed via kubeadm or not (default true)
-cilium.runQuarantined
Run tests that are under quarantine.
-cilium.showCommands
Output which commands are ran to stdout
-cilium.skipLogs
skip gathering logs if a test fails
-cilium.tag string
Specifies which tag of cilium to use during tests
-cilium.testScope string
Specifies scope of test to be ran (k8s, runtime)
-cilium.timeout duration
Specifies timeout for test run (default 24h0m0s)
Ginkgo ran 1 suite in 4.312100241s
Test Suite Failed
For more information about other built-in options to Ginkgo, consult the Ginkgo documentation.
Running Specific Tests Within a Test Suite
If you want to run one specified test, there are a few options:
By modifying code: add the prefix “FIt” on the test you want to run; this marks the test as focused. Ginkgo will skip other tests and will only run the “focused” test. For more information, consult the Focused Specs documentation from Ginkgo.
It("Example test", func() {
    Expect(true).Should(BeTrue())
})

FIt("Example focused test", func() {
    Expect(true).Should(BeTrue())
})
From the command line: specify a more granular focus if you want to focus on, say, Runtime L7 tests:
INTEGRATION_TESTS=true ginkgo --focus "Runtime.*L7"
This will focus on tests that contain “Runtime”, followed by any number of
any characters, followed by “L7”. --focus is a regular expression; quotes
are required if it contains spaces and to escape shell expansion of *.
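As a sketch of how such a focus pattern behaves, the regex can be tested against a few test names with plain bash. The names below are hypothetical examples, not actual suite names:

```shell
#!/usr/bin/env bash
# Demonstrate how a --focus pattern such as "Runtime.*L7" matches test names.
focus='Runtime.*L7'
for name in \
  "RuntimePolicies L7 proxy visibility" \
  "RuntimeConntrack basic flows" \
  "K8sPolicyTest L7 enforcement"; do
  if [[ "$name" =~ $focus ]]; then
    echo "match:    $name"
  else
    echo "no match: $name"
  fi
done
```

Only the first name matches: it contains “Runtime”, then any characters, then “L7”; the other two lack one of the required substrings.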
Compiling the tests without running them
To validate that the Go code you’ve written for testing is correct without needing to run the full test, you can build the test directory:
make -C test/ build
Updating Cilium images for Kubernetes tests
Sometimes when running the CI suite for a feature under development, it’s common
to re-run the CI suite on the CI VMs running on a local development machine after
applying some changes to Cilium. For this the new Cilium images have to be
built, and then used by the CI suite. To do so, one can run the following
commands on the k8s1
VM:
cd go/src/github.com/cilium/cilium
make LOCKDEBUG=1 docker-cilium-image
docker tag quay.io/cilium/cilium:latest \
k8s1:5000/cilium/cilium-dev:latest
docker push k8s1:5000/cilium/cilium-dev:latest
make -B LOCKDEBUG=1 docker-operator-generic-image
docker tag quay.io/cilium/operator-generic:latest \
k8s1:5000/cilium/operator-generic:latest
docker push k8s1:5000/cilium/operator-generic:latest
The commands were adapted from the test/provision/compile.sh
script.
Test Reports
The Cilium Ginkgo framework formulates JUnit reports for each test. The following files are currently generated, depending upon the test suite that is run:
runtime.xml
K8s.xml
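Since the reports are plain JUnit XML, a quick failure summary can be pulled out with standard tools. The report below is a hand-written, minimal stand-in for runtime.xml, not verbatim framework output:

```shell
# Count failed test cases in a JUnit report with grep.
# NOTE: the XML here is a hypothetical minimal example.
cat > /tmp/runtime-sample.xml <<'EOF'
<testsuite name="runtime" tests="3" failures="1">
  <testcase name="RuntimePolicies Always to Never" time="1.2"/>
  <testcase name="RuntimeConntrack flows" time="0.8">
    <failure message="connectivity lost"/>
  </testcase>
  <testcase name="RuntimeKafka ingress" time="2.1"/>
</testsuite>
EOF

failures=$(grep -c "<failure" /tmp/runtime-sample.xml)
echo "failed test cases: ${failures}"
```

For anything beyond counting, an XML-aware tool is preferable to grep.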
Best Practices for Writing Tests
Provide informative output to console during a test using the By construct. This helps with debugging and gives those who did not write the test a good idea of what is going on. The lower the barrier of entry is for understanding tests, the better our tests will be!
Leave the testing environment in the same state that it was in when the test started by deleting resources, resetting configuration, etc.
Gather logs in the case that a test fails. If a test fails while running on Jenkins, a postmortem needs to be done to analyze why, so dumping logs to a location where Jenkins can pick them up is essential. Use the following code in an AfterFailed method:
AfterFailed(func() {
vm.ReportFailed()
})
Ginkgo Extensions
In Cilium, some Ginkgo features are extended to cover some use cases that are useful for testing Cilium.
BeforeAll
This function will run before all BeforeEach blocks within a Describe or
Context. This method is equivalent to SetUp or initialize functions in
common unit test frameworks.
AfterAll
This method will run after all AfterEach functions defined in a Describe or
Context. It is used for tearing down objects created for and used by all
Its within the given Context or Describe. It runs after all Its have run;
this method is equivalent to tearDown or finalize methods in common unit
test frameworks.
A good use case for the AfterAll method is removing containers or pods that
are needed by multiple Its in the given Context or Describe.
JustAfterEach
This method will run just after each test and before AfterFailed and
AfterEach. The main purpose of this method is to perform assertions for a
group of tests. A good example of a global JustAfterEach function is
deadlock detection, which checks the Cilium logs for deadlocks that may
have occurred during the tests.
AfterFailed
This method will run before all AfterEach and after JustAfterEach. It is
only called when a test fails. This construct is used to gather logs, the
status of Cilium, etc., which provide data for analysis when tests fail.
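The resulting ordering around a single failing test can be simulated with plain shell functions standing in for the Ginkgo constructs. This is only an illustration of the ordering, not real Ginkgo code:

```shell
#!/usr/bin/env bash
# Simulate the hook order around one failing test: JustAfterEach runs first,
# then AfterFailed (only because the test failed), then AfterEach.
order=()
JustAfterEach() { order+=("JustAfterEach"); }
AfterFailed()   { order+=("AfterFailed"); }
AfterEach()     { order+=("AfterEach"); }

test_failed=true
JustAfterEach
if [ "$test_failed" = true ]; then
  AfterFailed   # skipped entirely when the test passes
fi
AfterEach

echo "${order[@]}"   # JustAfterEach AfterFailed AfterEach
```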
Example Test Layout
Here is an example layout of how a test may be written with the aforementioned constructs:
Test description diagram:
Describe
BeforeAll(A)
AfterAll(A)
AfterFailed(A)
AfterEach(A)
JustAfterEach(A)
TESTA1
TESTA2
TESTA3
Context
BeforeAll(B)
AfterAll(B)
AfterFailed(B)
AfterEach(B)
JustAfterEach(B)
TESTB1
TESTB2
TESTB3
Test execution flow:
Describe
BeforeAll
TESTA1; JustAfterEach(A), AfterFailed(A), AfterEach(A)
TESTA2; JustAfterEach(A), AfterFailed(A), AfterEach(A)
TESTA3; JustAfterEach(A), AfterFailed(A), AfterEach(A)
Context
BeforeAll(B)
TESTB1:
JustAfterEach(B); JustAfterEach(A)
AfterFailed(B); AfterFailed(A);
AfterEach(B) ; AfterEach(A);
TESTB2:
JustAfterEach(B); JustAfterEach(A)
AfterFailed(B); AfterFailed(A);
AfterEach(B) ; AfterEach(A);
TESTB3:
JustAfterEach(B); JustAfterEach(A)
AfterFailed(B); AfterFailed(A);
AfterEach(B) ; AfterEach(A);
AfterAll(B)
AfterAll(A)
Debugging:
You can retrieve all run commands and their output in the report directory
(./test/test_results). Each test creates a new folder, which contains a
file called log where all information is saved; in case of a failing test,
exhaustive data will be added.
$ head test/test_results/RuntimeKafkaKafkaPolicyIngress/logs
level=info msg=Starting testName=RuntimeKafka
level=info msg="Vagrant: running command \"vagrant ssh-config runtime\""
cmd: "sudo cilium status" exitCode: 0
KVStore: Ok Consul: 172.17.0.3:8300
ContainerRuntime: Ok
Kubernetes: Disabled
Kubernetes APIs: [""]
Cilium: Ok OK
NodeMonitor: Disabled
Allocated IPv4 addresses:
Running with delve
Delve is a debugging tool for Go applications. If you want to run your test
with delve, you should add a new breakpoint using runtime.BreakPoint() in
the code, and run ginkgo using dlv.
Example of how to run ginkgo using dlv:
dlv test . -- --ginkgo.focus="Runtime" -ginkgo.v=true --cilium.provision=false
Running End-To-End Tests In Other Environments via kubeconfig
The end-to-end tests can be run with an arbitrary kubeconfig file. Normally
the CI will use the Kubernetes cluster created via Vagrant, but this can be
overridden with --cilium.kubeconfig. When used, ginkgo will not start a VM
nor compile cilium. It will also skip some setup tasks like labeling nodes
for testing.
This mode expects:

- The current directory is cilium/test.
- A test focus with --focus. --focus="K8s" selects all Kubernetes tests. If not passing --focus=K8s, then you must pass -cilium.testScope=K8s.
- Cilium images as full URLs specified with the --cilium.image and --cilium.operator-image options.
- A working kubeconfig with the --cilium.kubeconfig option.
- A populated K8S_VERSION environment variable set to the version of the cluster.
- If appropriate, set the CNI_INTEGRATION environment variable to one of gke, eks, eks-chaining, microk8s or minikube. This selects matching configuration overrides for Cilium. Leaving this unset for non-matching integrations is also correct.
- For k8s environments that invoke an authentication agent, such as EKS and aws-iam-authenticator, set --cilium.passCLIEnvironment=true.
An example invocation is
INTEGRATION_TESTS=true CNI_INTEGRATION=eks K8S_VERSION=1.16 ginkgo --focus="K8s" -- -cilium.provision=false -cilium.kubeconfig=`echo ~/.kube/config` -cilium.image="quay.io/cilium/cilium-ci" -cilium.operator-image="quay.io/cilium/operator" -cilium.operator-suffix="-ci" -cilium.passCLIEnvironment=true
To run tests with Kind, try
K8S_VERSION=1.25 ginkgo --focus=K8s -- -cilium.provision=false --cilium.image=localhost:5000/cilium/cilium-dev -cilium.tag=local --cilium.operator-image=localhost:5000/cilium/operator -cilium.operator-tag=local -cilium.kubeconfig=`echo ~/.kube/config` -cilium.provision-k8s=false -cilium.testScope=K8s -cilium.operator-suffix=
Running in GKE
1- Set up a cluster as in Cilium Quick Installation or utilize an existing cluster.
Note
You do not need to deploy Cilium in this step, as the End-To-End Testing Framework handles the deployment of Cilium.
Note
The tests require machines larger than n1-standard-4. This can be set
with --machine-type n1-standard-4 on cluster creation.
2- Invoke the tests from cilium/test with options set as explained in
Running End-To-End Tests In Other Environments via kubeconfig.
Note
The tests require the NATIVE_CIDR environment variable to be set to the
value of the cluster IPv4 CIDR returned by the gcloud container clusters
describe command.
export CLUSTER_NAME=cluster1
export CLUSTER_ZONE=us-west2-a
export NATIVE_CIDR="$(gcloud container clusters describe $CLUSTER_NAME --zone $CLUSTER_ZONE --format 'value(clusterIpv4Cidr)')"
INTEGRATION_TESTS=true CNI_INTEGRATION=gke K8S_VERSION=1.17 ginkgo --focus="K8sDemo" -- -cilium.provision=false -cilium.kubeconfig=`echo ~/.kube/config` -cilium.image="quay.io/cilium/cilium-ci" -cilium.operator-image="quay.io/cilium/operator" -cilium.operator-suffix="-ci" -cilium.hubble-relay-image="quay.io/cilium/hubble-relay-ci" -cilium.passCLIEnvironment=true
Note
The Kubernetes version defaults to 1.23 but can be configured with versions
between 1.16 and 1.23. The version should match the server version reported
by kubectl version.
AKS (experimental)
Note
The tests require the NATIVE_CIDR environment variable to be set to the
value of the cluster IPv4 CIDR.
1. Set up a cluster as in Cilium Quick Installation or utilize an existing cluster. You do not need to deploy Cilium in this step, as the End-To-End Testing Framework handles the deployment of Cilium.
2. Invoke the tests from cilium/test with options set as explained in
Running End-To-End Tests In Other Environments via kubeconfig.
export NATIVE_CIDR="10.241.0.0/16"
INTEGRATION_TESTS=true CNI_INTEGRATION=aks K8S_VERSION=1.17 ginkgo --focus="K8s" -- -cilium.provision=false -cilium.kubeconfig=`echo ~/.kube/config` -cilium.passCLIEnvironment=true -cilium.image="mcr.microsoft.com/oss/cilium/cilium" -cilium.tag="1.12.1" -cilium.operator-image="mcr.microsoft.com/oss/cilium/operator" -cilium.operator-suffix="" -cilium.operator-tag="1.12.1"
AWS EKS (experimental)
Not all tests can succeed on EKS. Many do, however, and may be useful. GitHub issue 9678#issuecomment-749350425 contains a list of tests that are still failing.
1. Set up a cluster as in Cilium Quick Installation or utilize an existing cluster.
2. Source the testing integration script from cilium/contrib/testing/integrations.sh.
3. Invoke the gks function by passing which cilium docker image to run and the test focus. The command also accepts additional ginkgo arguments.
gks quay.io/cilium/cilium:latest K8sDemo
Adding new Managed Kubernetes providers
All Managed Kubernetes test support relies on using a pre-configured kubeconfig file. This isn’t always adequate, however, and adding defaults specific to each provider is possible. The commit adding GKE support is a good reference.
Add a map of helm settings to act as an override for this provider in test/helpers/kubectl.go. These should be the helm settings used when generating cilium specs for this provider.
Add a unique CI Integration constant. This value is passed in when invoking
ginkgo via the CNI_INTEGRATION environment variable. Update the helm
overrides mapping with the constant and the helm settings.
For cases where a test should be skipped, use SkipIfIntegration. To skip
whole contexts, use SkipContextIf. More complex logic can be expressed with
functions like IsIntegration. These functions are all part of the
test/helpers package.
Running End-To-End Tests In Other Environments via SSH
If you want to run tests in an arbitrary environment with SSH access, you
can use --cilium.SSHConfig to provide the SSH configuration of the endpoint
on which tests will be run. The tests presume the following on the remote
instance:

- Cilium source code is located in the directory /home/vagrant/go/src/github.com/cilium/cilium/.
- Cilium is installed and running.
The SSH connection needs to be defined in an ssh-config file, which needs
to have the following targets:
runtime: To run runtime tests
k8s{1..2}-${K8S_VERSION}: to run Kubernetes tests. These instances must have Kubernetes installed and running as a prerequisite for running tests.
An example ssh-config
can be the following:
Host runtime
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/eloy/.go/src/github.com/cilium/cilium/test/.vagrant/machines/runtime/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
To run this you can use the following command:
ginkgo -- --cilium.provision=false --cilium.SSHConfig="cat ssh-config"
VMs for Testing
The VMs used for testing are defined in test/Vagrantfile
. There are a variety of
configuration options that can be passed as environment variables:
ENV variable         | Default Value     | Options    | Description
---------------------|-------------------|------------|--------------------------------------------------------------
K8S_NODES            | 2                 | 0..100     | Number of Kubernetes nodes in the cluster
NO_CILIUM_ON_NODE[S] | none              | *          | Comma-separated list of K8s nodes that should not run Cilium
NFS                  | 0                 | 1          | If the Cilium folder needs to be shared using NFS
IPv6                 | 0                 | 0-1        | If 1, the Kubernetes cluster will use IPv6
CONTAINER_RUNTIME    | docker            | containerd | Sets the default container runtime in the Kubernetes cluster
K8S_VERSION          | 1.18              | 1.**       | Kubernetes version to install
KUBEPROXY            | 1                 | 0-1        | If 0, Kubernetes' kube-proxy won't be installed
SERVER_BOX           | cilium/ubuntu-dev | *          | Vagrantcloud base image
VM_CPUS              | 2                 | 0..100     | Number of CPUs for the VM
VM_MEMORY            | 4096              | \d+        | RAM size in Megabytes
VM images
The test suite relies on Vagrant to automatically download the required VM
image if it is not already available on the system. VM images weigh several
gigabytes, so this may take some time, but faster tools such as aria2 can
speed up the process by opening multiple connections. The script
contrib/scripts/add_vagrant_box.sh can be useful to manually download
selected images with aria2 prior to launching the test suite, or to
periodically update images in a cron job:
$ bash contrib/scripts/add_vagrant_box.sh -h
usage: add_vagrant_box.sh [options] [vagrant_box_defaults.rb path]
path to vagrant_box_defaults.rb defaults to ./vagrant_box_defaults.rb
options:
-a use aria2c instead of curl
-b <box> download selected box (defaults: ubuntu ubuntu-next)
-d <dir> download to dir instead of /tmp/
-l download latest versions instead of using vagrant_box_defaults
-h display this help
examples:
download boxes ubuntu and ubuntu-next from vagrant_box_defaults.rb:
$ add-vagrant-boxes.sh $HOME/go/src/github.com/cilium/cilium/vagrant_box_defaults.rb
download latest version for ubuntu-dev and ubuntu-next:
$ add-vagrant-boxes.sh -l -b ubuntu-dev -b ubuntu-next
same as above, downloading into /tmp/foo and using aria2c:
$ add-vagrant-boxes.sh -al -d /tmp/foo -b ubuntu-dev -b ubuntu-next
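For the cron-job use mentioned above, a crontab entry like the following could refresh the boxes weekly. The schedule and flags are illustrative examples, not a recommendation from the Cilium project:

```
# m h dom mon dow  command -- refresh Vagrant boxes every Monday at 03:00,
# downloading the latest versions (-l) with aria2c (-a)
0 3 * * 1  bash $HOME/go/src/github.com/cilium/cilium/contrib/scripts/add_vagrant_box.sh -a -l
```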
Known Issues and Workarounds
Further Assistance
Have a question about how the tests work, or want to chat more about
improving the testing infrastructure for Cilium? Hop on over to the
#testing channel on Cilium Slack.