Installation using Kubespray¶
This guide walks through using Kubespray to create an AWS Kubernetes cluster running Cilium as the CNI. The guide uses:
- Kubespray v2.6.0
- The latest released Cilium version (instructions for selecting it are below)
First, clone the Kubespray repository:
$ git clone --branch v2.6.0 https://github.com/kubernetes-sigs/kubespray
Install dependencies from requirements.txt:
$ cd kubespray
$ sudo pip install -r requirements.txt
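As an optional sanity check, you can confirm that Ansible was installed; the exact version reported depends on what requirements.txt pins for the Kubespray release you checked out:
$ ansible --version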
We will use Terraform for provisioning AWS infrastructure.
Configure AWS credentials¶
Export the variables for your AWS credentials:
export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"
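If you have the AWS CLI installed, you can optionally verify that the exported credentials are valid before provisioning anything (this check is not part of Kubespray itself):
$ aws sts get-caller-identity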
Configure Terraform Variables¶
We will start by specifying the infrastructure needed for the Kubernetes cluster.
$ cd contrib/terraform/aws
$ cp terraform.tfvars.example terraform.tfvars
Open the file and change any defaults, particularly the number of master, etcd, and worker nodes. You can reduce the master and etcd count to 1 for deployments that don't need high availability. By default, this tutorial will create:
- VPC with 2 public and private subnets
- Bastion Hosts and NAT Gateways in the Public Subnet
- Three of each (masters, etcd, and worker nodes) in the Private Subnet
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
- Terraform scripts using CoreOS as the base image
#Global Vars
aws_cluster_name = "kubespray"

#VPC Vars
aws_vpc_cidr_block = "XXX.XXX.192.0/18"
aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]

#Bastion Host
aws_bastion_size = "t2.medium"

#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 3
aws_kube_worker_size = "t2.medium"

#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
Apply the configuration¶
Run terraform init to initialize the Terraform modules:
$ terraform init
Once initialized, execute:
$ terraform plan -out=aws_kubespray_plan
This will generate a file, aws_kubespray_plan, depicting an execution plan of the infrastructure that will be created on AWS. To apply, execute:
$ terraform apply "aws_kubespray_plan"
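After the apply finishes, you can print whatever output values the Kubespray Terraform scripts define, such as the bastion and load balancer addresses (the exact output names depend on the scripts):
$ terraform output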
Terraform automatically creates an Ansible inventory file at ./inventory/hosts.
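Before running Ansible, you can inspect the generated inventory to confirm that the master, etcd, and worker hosts were picked up:
$ cat ./inventory/hosts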
Installing Kubernetes cluster with Cilium as CNI¶
Kubespray uses Ansible as its substrate for provisioning and orchestration. Once the infrastructure is created, you can run the Ansible playbook to install Kubernetes and all the required dependencies. Execute the command below from the cloned Kubespray repository, providing the correct path of the AWS EC2 SSH private key in ansible_ssh_private_key_file=<path to EC2 SSH private key file>.
We recommend using the latest released Cilium version by editing roles/download/defaults/main.yml. Open the file, search for cilium_version, and replace the version with the latest release. As an example, the updated version entry will look like:
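cilium_version: "v1.9.0"
(The version shown is illustrative; substitute whatever the latest Cilium release is when you run this.) Then run the playbook: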
$ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>
To check if the cluster was created successfully, SSH into the bastion host with the user core:
# Get information about the bastion host
$ cat ssh-bastion.conf
$ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host
Execute the commands below from the bastion host. If kubectl isn't installed on the bastion host, you can log in to the master node to run them. You may need to copy the private key to the bastion host to access the master node.
$ kubectl get nodes
$ kubectl get pods -n kube-system
You should see that the nodes are in Ready state and the Cilium pods are in Running state.
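To look at just the Cilium agent pods, you can filter on the k8s-app=cilium label that Cilium's DaemonSet pods carry by default:
$ kubectl get pods -n kube-system -l k8s-app=cilium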
Deploy the connectivity test¶
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected since these pods need at least two nodes to be scheduled successfully.
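Once the checks pass, you can remove the test resources by deleting the namespace:
kubectl delete ns cilium-test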
Delete the cluster¶
To tear down the AWS infrastructure created by Terraform, execute:
$ cd contrib/terraform/aws
$ terraform destroy