RETROSPECTIVE

October 10th, 2021

Writing Kubernetes Tests with Go

Kubernetes

Go

Jenkins

AWS EKS

These days, most of my application infrastructure runs in Docker containers, orchestrated by Kubernetes. My AWS account has a Kubernetes cluster hosted on EKS (Elastic Kubernetes Service). Since two of my production applications (jarombek.com and saintsxctf.com) run on this Kubernetes cluster, the health of their infrastructure is critical. To help ensure that the Kubernetes cluster is running properly, I created tests that check the state of my Kubernetes objects and ensure that they exist on the EKS cluster as expected.

This article explores my Kubernetes test suite, which is written in Go and leverages the Kubernetes Go Client. It also describes how the test suite is run on an automated schedule, alerting me when test failures occur.

My Kubernetes infrastructure and test code is spread across multiple repositories, most notably in jarombek-com-infrastructure, saints-xctf-infrastructure, and global-aws-infrastructure. In this article I focus on the infrastructure tests for jarombek.com, which happens to be the website you are currently viewing! jarombek.com has the following Kubernetes infrastructure:

My Kubernetes infrastructure is created in an automated fashion using Terraform. Specifically, the Kubernetes objects for jarombek.com are configured in two Terraform modules: jarombek-com-kubernetes and jarombek-com-kubernetes-ingress. Both these modules have test code associated with them.

The Kubernetes test code, written using the Kubernetes Go Client, resides in a test-k8s directory. The test code is a Go module, configured by go.mod. The Go module definition file specifies the module name, the Go version used, and the external Go module requirements.

module github.com/ajarombek/jarombek-com-infrastructure/test-k8s

go 1.14

require (
    github.com/ajarombek/cloud-modules/kubernetes-test-functions v0.2.10
    k8s.io/apimachinery v0.17.3-beta.0
    k8s.io/client-go v0.17.0
)

The Go version used in the module is 1.14 and the module name is github.com/ajarombek/jarombek-com-infrastructure/test-k8s: a concatenation of the GitHub repository URL and the directory path to the go.mod file. Code in the module depends on two Kubernetes modules: k8s.io/apimachinery and k8s.io/client-go. There is also one dependency, github.com/ajarombek/cloud-modules/kubernetes-test-functions, which is a Go module of my own. This module contains reusable functions for writing Kubernetes tests in Go and can be found in my cloud-modules repository. I will discuss this Go module in a separate article.

The entrypoint to the test suite is main_test.go. It configures the test suite and initializes the Kubernetes client.

package main

import (
    "k8s.io/client-go/kubernetes"
    "os"
    "testing"
)

var ClientSet *kubernetes.Clientset

var env = os.Getenv("TEST_ENV")
var namespace = GetNamespace()

// Setup code for the test suite.
func TestMain(m *testing.M) {
    kubeconfig, inCluster := ParseCommandLineArguments()
    ClientSet = GetClientSet(kubeconfig, inCluster)
    os.Exit(m.Run())
}

func GetNamespace() string {
    if env == "dev" {
        return "jarombek-com-dev"
    } else {
        return "jarombek-com"
    }
}

There are multiple pieces to unpack here. First, the entrypoint of the test suite is TestMain(), which is invoked when the CLI command go test is run. The three lines of code in TestMain() parse command line arguments, initialize the Kubernetes client, and run the test suite, respectively.

More specifically, the ParseCommandLineArguments() function, which exists in a separate client.go file, looks for --kubeconfig and --incluster command line flags, and assigns them to variables.

func ParseCommandLineArguments() (*string, *string) {
    var kubeconfig *string = flag.String("kubeconfig", "", "Absolute path to the kubeconfig file.")
    var inCluster *string = flag.String("incluster", "", "Whether or not the tests are running in a cluster.")
    flag.Parse()
    return kubeconfig, inCluster
}

There are two ways to initialize the Kubernetes client and authenticate it with the Kubernetes API: with a kubeconfig file from outside a Kubernetes cluster, or with a ServiceAccount from within a Kubernetes cluster. These two techniques are referred to as out-of-cluster configuration and in-cluster configuration, respectively [1,2]. In my ParseCommandLineArguments() function, the kubeconfig flag corresponds to out-of-cluster configuration and the incluster flag corresponds to in-cluster configuration. The value of the kubeconfig flag is a string holding the file path to a kubeconfig file, which is used for authentication. The value of the incluster flag is a string holding a boolean value ("true" or "false"). These flags can be used from the CLI like so:

# Both authentication approaches.

# Run tests outside a Kubernetes cluster, authenticating with a kubeconfig file.
go test --kubeconfig /path/to/kubeconfig

# Run tests inside a Kubernetes cluster, authenticating with a ServiceAccount.
go test --incluster true

On the second line of TestMain(), a GetClientSet() function is invoked, with kubeconfig and incluster passed as arguments. GetClientSet() is a custom function declared in client.go which configures and initializes the Kubernetes client either in cluster or out of cluster.

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

func GetClientSet(kubeconfig *string, inCluster *string) *kubernetes.Clientset {
    var config *rest.Config
    var err error

    if *inCluster == "true" {
        config, err = rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
    } else {
        config, err = clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
            panic(err.Error())
        }
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    return clientset
}

If the tests are run in the Kubernetes cluster, rest.InClusterConfig() is invoked. Otherwise, clientcmd.BuildConfigFromFlags() is invoked.

With the Kubernetes client configured, TestMain() ends with a call to os.Exit(m.Run()), which runs all the tests. However, before the tests run, there are two final important parts of the main_test.go file. First, the environment that the tests run in is read from the TEST_ENV environment variable in the line var env = os.Getenv("TEST_ENV"). I have infrastructure in both development and production environments, so this configuration lets me test the Kubernetes objects in each environment separately.

Kubernetes objects for my jarombek.com website exist in two different namespaces. For the production environment, objects exist in the jarombek-com namespace. For the development environment, objects exist in the jarombek-com-dev namespace. The following code in main_test.go initializes a global namespace variable with the proper namespace depending on the environment being tested. This variable is accessible from the test code.

var env = os.Getenv("TEST_ENV")
var namespace = GetNamespace()

func GetNamespace() string {
    if env == "dev" {
        return "jarombek-com-dev"
    } else {
        return "jarombek-com"
    }
}

At this point, the Kubernetes testing environment is all configured. The Kubernetes tests reside in two files: namespace_test.go and jarombek_com_test.go. namespace_test.go checks if the namespace for the infrastructure contains the expected number of Kubernetes objects. jarombek_com_test.go tests those objects in more detail.

First, let's look at the namespace tests.

// namespace_test.go

package main

import (
    k8sfuncs "github.com/ajarombek/cloud-modules/kubernetes-test-functions"
    "testing"
)

// TestJarombekComNamespaceDeploymentCount determines if the number of 'Deployment' objects in the 'jarombek-com'
// (or 'jarombek-com-dev') namespace is as expected.
func TestJarombekComNamespaceDeploymentCount(t *testing.T) {
    k8sfuncs.ExpectedDeploymentCount(t, ClientSet, namespace, 2)
}

// TestJarombekComNamespaceServiceCount determines if the expected number of Service objects exist in the 'jarombek-com'
// (or 'jarombek-com-dev') namespace.
func TestJarombekComNamespaceServiceCount(t *testing.T) {
    k8sfuncs.NamespaceServiceCount(t, ClientSet, namespace, 2)
}

// TestJarombekComNamespaceIngressCount determines if the number of 'Ingress' objects in the 'jarombek-com'
// (or 'jarombek-com-dev') namespace is as expected.
func TestJarombekComNamespaceIngressCount(t *testing.T) {
    k8sfuncs.NamespaceIngressCount(t, ClientSet, namespace, 1)
}

There are three tests specified in this code, each encapsulated in a function. The first test confirms that two Deployment objects exist in the namespace, the second test confirms that two Service objects exist in the namespace, and the third test confirms that one Ingress object exists in the namespace. The actual testing assertions are made in helper functions located in my kubernetes-test-functions Go module. These helper functions are accessed through the k8sfuncs variable.

The three reusable functions used in the namespace test code - ExpectedDeploymentCount(), NamespaceServiceCount(), and NamespaceIngressCount() - are located in the kubernetes-test-functions module here, here, and here. The ExpectedDeploymentCount() function is shown below.

// ExpectedDeploymentCount determines if the number of 'Deployment' objects in a namespace is as expected.
func ExpectedDeploymentCount(t *testing.T, clientset *kubernetes.Clientset, namespace string, expectedCount int) {
    deployments, err := clientset.AppsV1().Deployments(namespace).List(v1meta.ListOptions{})
    if err != nil {
        panic(err.Error())
    }

    var actualCount = len(deployments.Items)
    if actualCount == expectedCount {
        t.Logf(
            "The expected number of Deployments exist in the '%v' namespace. Expected %v, got %v.",
            namespace, expectedCount, actualCount,
        )
    } else {
        t.Errorf(
            "An unexpected number of Deployments exist in the '%v' namespace. Expected %v, got %v.",
            namespace, expectedCount, actualCount,
        )
    }
}

The first parameter of ExpectedDeploymentCount(), t, is of type *testing.T, which contains the state of the testing suite and also enables test logging. The second parameter, clientset, is the Kubernetes client. The last two parameters, namespace and expectedCount, determine the namespace to check for Deployment objects and specify the expected deployment count, respectively.

The first line of the function body uses the Kubernetes client to get all the Deployment objects in a namespace. Then, len(deployments.Items) determines the number of Deployment objects, followed by an if statement which checks if the actual number of objects matches the expected number of objects. t.Logf is used to log a successful test, and t.Errorf is used to log a failed test. NamespaceServiceCount() and NamespaceIngressCount() have similar implementations.

Now, let's look at the remaining tests.

import (
    "fmt"
    k8sfuncs "github.com/ajarombek/cloud-modules/kubernetes-test-functions"
    v1meta "k8s.io/apimachinery/pkg/apis/meta/v1"
    "testing"
)

// TestJarombekComDeploymentExists determines if a deployment exists in the 'jarombek-com' (or 'jarombek-com-dev')
// namespace with the name 'jarombek-com'.
func TestJarombekComDeploymentExists(t *testing.T) {
    k8sfuncs.DeploymentExists(t, ClientSet, "jarombek-com", namespace)
}

// TestJarombekComDeploymentErrorFree determines if the 'jarombek-com' deployment is running error free.
func TestJarombekComDeploymentErrorFree(t *testing.T) {
    k8sfuncs.DeploymentStatusCheck(t, ClientSet, "jarombek-com", namespace, true, true, 1, 1, 1, 0)
}

// TestJarombekComServiceExists determines if a NodePort Service with the name 'jarombek-com' exists in the
// 'jarombek-com' (or 'jarombek-com-dev') namespace.
func TestJarombekComServiceExists(t *testing.T) {
    k8sfuncs.ServiceExists(t, ClientSet, "jarombek-com", namespace, "NodePort")
}

// TestJarombekComDatabaseDeploymentExists determines if a deployment exists in the 'jarombek-com'
// (or 'jarombek-com-dev') namespace with the name 'jarombek-com-database'.
func TestJarombekComDatabaseDeploymentExists(t *testing.T) {
    k8sfuncs.DeploymentExists(t, ClientSet, "jarombek-com-database", namespace)
}

// TestJarombekComDatabaseDeploymentErrorFree determines if the 'jarombek-com-database' deployment is running
// error free.
func TestJarombekComDatabaseDeploymentErrorFree(t *testing.T) {
    k8sfuncs.DeploymentStatusCheck(t, ClientSet, "jarombek-com-database", namespace, true, true, 1, 1, 1, 0)
}

// TestJarombekComDatabaseServiceExists determines if a NodePort Service with the name 'jarombek-com-database' exists
// in the 'jarombek-com' (or 'jarombek-com-dev') namespace.
func TestJarombekComDatabaseServiceExists(t *testing.T) {
    k8sfuncs.ServiceExists(t, ClientSet, "jarombek-com-database", namespace, "NodePort")
}

// TestJarombekComIngressExists determines if an ingress object exists in the 'jarombek-com' (or 'jarombek-com-dev')
// namespace with the name 'jarombek-com-ingress'.
func TestJarombekComIngressExists(t *testing.T) {
    k8sfuncs.IngressExists(t, ClientSet, namespace, "jarombek-com-ingress")
}

// TestJarombekComIngressAnnotations determines if the 'jarombek-com-ingress' Ingress object contains the expected
// annotations.
func TestJarombekComIngressAnnotations(t *testing.T) {
    ingress, err := ClientSet.NetworkingV1beta1().Ingresses(namespace).Get("jarombek-com-ingress", v1meta.GetOptions{})
    if err != nil {
        panic(err.Error())
    }

    var hostname string
    var environment string
    if env == "dev" {
        hostname = "dev.jarombek.com,www.dev.jarombek.com"
        environment = "development"
    } else {
        hostname = "jarombek.com,www.jarombek.com"
        environment = "production"
    }

    annotations := ingress.Annotations

    // Kubernetes Ingress class and ExternalDNS annotations
    k8sfuncs.AnnotationsEqual(t, annotations, "kubernetes.io/ingress.class", "alb")
    k8sfuncs.AnnotationsEqual(t, annotations, "external-dns.alpha.kubernetes.io/hostname", hostname)

    // ALB Ingress annotations
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/actions.ssl-redirect", "{\"Type\": \"redirect\", \"RedirectConfig\": {\"Protocol\": \"HTTPS\", \"Port\": \"443\", \"StatusCode\": \"HTTP_301\"}}")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/backend-protocol", "HTTP")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/scheme", "internet-facing")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/listen-ports", "[{\"HTTP\":80}, {\"HTTPS\":443}]")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/healthcheck-path", "/")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/healthcheck-protocol", "HTTP")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/target-type", "instance")
    k8sfuncs.AnnotationsEqual(t, annotations, "alb.ingress.kubernetes.io/tags", "Name=jarombek-com-load-balancer,Application=jarombek-com,Environment=" + environment)

    // ALB Ingress annotations pattern matching
    uuidPattern := "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
    certificateArnPattern := fmt.Sprintf("arn:aws:acm:us-east-1:739088120071:certificate/%s", uuidPattern)
    certificatesPattern := fmt.Sprintf("^%s,%s$", certificateArnPattern, certificateArnPattern)
    k8sfuncs.AnnotationsMatchPattern(t, annotations, "alb.ingress.kubernetes.io/certificate-arn", certificatesPattern)

    sgPattern := "^sg-[0-9a-f]+$"
    k8sfuncs.AnnotationsMatchPattern(t, annotations, "alb.ingress.kubernetes.io/security-groups", sgPattern)

    subnetsPattern := "^subnet-[0-9a-f]+,subnet-[0-9a-f]+$"
    k8sfuncs.AnnotationsMatchPattern(t, annotations, "alb.ingress.kubernetes.io/subnets", subnetsPattern)

    expectedAnnotationsLength := 13
    annotationLength := len(annotations)
    if expectedAnnotationsLength == annotationLength {
        t.Logf(
            "JarombekCom Ingress has the expected number of annotations. Expected %v, got %v.",
            expectedAnnotationsLength, annotationLength,
        )
    } else {
        t.Errorf(
            "JarombekCom Ingress does not have the expected number of annotations. Expected %v, got %v.",
            expectedAnnotationsLength, annotationLength,
        )
    }
}

Once again, all these tests use reusable functions. However, the final test, TestJarombekComIngressAnnotations(), also performs some logic of its own. Since TestJarombekComIngressAnnotations() is different from the rest, let's analyze it in more detail. In Kubernetes, objects can have annotations, which are key-value pairs containing metadata about an object. For Ingress objects, these annotations can carry important information that directs ingress controllers to build certain networking infrastructure. In my case, the Ingress object is utilized by an AWS Ingress Controller (now known as the AWS Load Balancer Controller), which creates a load balancer in my AWS account that directs traffic to my website's Kubernetes objects. TestJarombekComIngressAnnotations() tests that the Ingress object contains the proper annotations, so that the load balancer is created as expected.
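At its core, an annotation equality check is just a lookup and comparison on a map of strings. The following standalone sketch shows the essence of what a helper like AnnotationsEqual() boils down to; the annotationsEqual function here is a simplified, hypothetical version (the real helper also reports results through *testing.T):

```go
package main

import "fmt"

// annotationsEqual reports whether the annotation map contains the given key
// with exactly the expected value. Simplified sketch; the real helper logs
// successes and failures via the testing framework instead of returning a bool.
func annotationsEqual(annotations map[string]string, key string, expected string) bool {
	actual, ok := annotations[key]
	return ok && actual == expected
}

func main() {
	// A subset of annotations mirroring those on the jarombek-com-ingress object.
	annotations := map[string]string{
		"kubernetes.io/ingress.class":      "alb",
		"alb.ingress.kubernetes.io/scheme": "internet-facing",
	}

	fmt.Println(annotationsEqual(annotations, "kubernetes.io/ingress.class", "alb"))           // true
	fmt.Println(annotationsEqual(annotations, "alb.ingress.kubernetes.io/scheme", "internal")) // false
}
```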

TestJarombekComIngressAnnotations() starts by using the Kubernetes client to get the Ingress object for my website, storing it in an ingress variable. If the Ingress object isn't found, an error is stored in an err variable, and the if err != nil {} code block stops the test before it proceeds. The remainder of the test checks that the annotations on the object have the expected values and that the number of annotations is as expected.

My Kubernetes tests run every morning in Jenkins jobs. There are two Jenkins jobs - one for the development environment and one for the production environment. My test code is run from within the Kubernetes cluster, since the Jenkins jobs are configured to run on Kubernetes pods. The Jenkins jobs also send me email alerts with statuses of the tests, allowing me to know whether everything is operational when I read emails in the morning. You can view the code for these Jenkins jobs on GitHub.

Automated tests of my Kubernetes infrastructure have proven to be an effective safety blanket. It's great to see successful test results in the morning and feel reassured that the infrastructure for my websites is working as expected. While you can write tests for Kubernetes infrastructure in multiple languages, the Go client library is very easy to work with; I highly recommend it. As previously mentioned, you can view all my Kubernetes test code in my jarombek-com-infrastructure, saints-xctf-infrastructure, and global-aws-infrastructure repositories.