This is part of a series of articles on SaintsXCTF Version 2.0. The first article in the series provides an overview of the application.
The infrastructure for the React/TypeScript frontend and Flask/Python backend of my website, saintsxctf.com, is hosted on Kubernetes. The Kubernetes cluster itself is managed by AWS EKS. This article outlines the Kubernetes infrastructure and walks through the Terraform code that configures and builds it.
- Architectural Overview
- AWS Infrastructure
- Kubernetes Infrastructure
- React Web Application Overview
- Web Application React and TypeScript
- Web Application Redux State Configuration
- Web Application Cypress E2E Tests
- Web Application JSS Modular Design
- Flask Python API
- Flask API Testing
- Function API Using API Gateway & Lambda
- Auth API Using API Gateway & Lambda
- Database Client on Kubernetes
SaintsXCTF application infrastructure can be grouped into two categories - AWS and Kubernetes. This article only discusses the Kubernetes infrastructure, which has a green background in the diagram below. The AWS infrastructure, which has a red background in the diagram, was discussed in a prior article.
Similar to the AWS infrastructure, the SaintsXCTF Kubernetes infrastructure is logically grouped into Terraform modules. More specifically, there are three Terraform modules for Kubernetes infrastructure. The first is for the web (frontend) application, the second is for the API (backend) application, and the third is for an Ingress object which directs traffic to the web application and API. All three are discussed in this article.
In the prior infrastructure for my SaintsXCTF website, the web application and API were hosted on an EC2 instance in AWS. While this worked okay, it presented a number of issues. First, there was no easy way to update the application without downtime. Second, since the application ran directly on a virtual machine rather than in containers, updates to the virtual machine often caused unexpected application behavior, sometimes requiring code changes.
While updating the application infrastructure for version 2.0, I wanted to take a more lightweight, container-based approach. I also wanted to leverage a container orchestrator with built-in deployment management that could update an application with zero downtime. Docker containers orchestrated with Kubernetes met these requirements. Since my application was already hosted on AWS, the clear choice was to use AWS EKS to host the Kubernetes infrastructure for the application.
Kubernetes infrastructure for the web application consists of a service and a deployment. The service networks traffic to the pods in the deployment. The service YAML configuration is shown below.
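A simplified sketch of what such a Service configuration looks like, using the `saints-xctf-web-service` name and port 80 described in this article. The namespace, labels, and selector values are assumptions for illustration; the full configuration is on GitHub.

```yaml
# Service routing traffic to the web application pods.
apiVersion: v1
kind: Service
metadata:
  name: saints-xctf-web-service
  namespace: saints-xctf        # assumed namespace
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    application: saints-xctf-web   # assumed pod label
```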
This YAML configuration is translated into HCL (Hashicorp Configuration Language) for use in Terraform. The Terraform configuration for the service is shown below.
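A rough HCL equivalent of the Service, written against the Terraform Kubernetes provider. Only the `saints-xctf-web-service` name and port 80 come from the article; the resource name, namespace, and selector label are assumptions.

```hcl
resource "kubernetes_service" "web" {
  metadata {
    name      = "saints-xctf-web-service"
    namespace = "saints-xctf"            # assumed namespace
  }

  spec {
    type = "NodePort"

    port {
      port        = 80
      target_port = 80
      protocol    = "TCP"
    }

    selector = {
      application = "saints-xctf-web"    # assumed pod label
    }
  }
}
```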
The `saints-xctf-web-service` service object routes traffic to port 80 of the SaintsXCTF web application, which is hosted on Kubernetes pods as part of a deployment object. The web application deployment, `saints-xctf-web-deployment`, is also translated from a YAML file into Terraform configuration.

The `saints-xctf-web-deployment` object creates two pods which host the SaintsXCTF web application (as configured by `replicas`). The deployment strategy is a `RollingUpdate`, which allows for zero downtime as pods update one by one. The pods are configured with node affinity (via `node_affinity`), which forces all pods onto Kubernetes cluster nodes with a certain label, in this case `production-applications`. Node affinity allows me to separate production applications from non-production and prototype applications on my Kubernetes cluster. The pods are also configured with readiness probes (`readiness_probe`) and liveness probes (`liveness_probe`). The readiness probe checks that the web application is accessible via HTTP requests, and the liveness probe checks that the API is accessible from the pod via HTTP requests. If either check fails, a new pod is started and the failing pod is terminated. This helps ensure that my application doesn't face any downtime.
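A condensed sketch of the deployment in HCL, showing the pieces discussed above: `replicas`, the `RollingUpdate` strategy, `node_affinity` targeting `production-applications` nodes, and the two probes. The namespace, labels, node label key, image, and probe details are assumptions for illustration.

```hcl
resource "kubernetes_deployment" "web" {
  metadata {
    name      = "saints-xctf-web-deployment"
    namespace = "saints-xctf"                  # assumed namespace
  }

  spec {
    replicas = 2

    strategy {
      type = "RollingUpdate"

      rolling_update {
        max_surge       = "1"
        max_unavailable = "0"   # keep all existing pods up while a new one starts
      }
    }

    selector {
      match_labels = {
        application = "saints-xctf-web"        # assumed pod label
      }
    }

    template {
      metadata {
        labels = {
          application = "saints-xctf-web"
        }
      }

      spec {
        affinity {
          node_affinity {
            required_during_scheduling_ignored_during_execution {
              node_selector_term {
                match_expressions {
                  key      = "workload"        # assumed node label key
                  operator = "In"
                  values   = ["production-applications"]
                }
              }
            }
          }
        }

        container {
          name  = "saints-xctf-web"
          image = "saints-xctf-web:latest"     # assumed image

          port {
            container_port = 80
          }

          # Readiness: is the web application serving HTTP?
          readiness_probe {
            http_get {
              path = "/"
              port = 80
            }
            period_seconds = 5
          }

          # Liveness: can the pod reach the API? (assumed check)
          liveness_probe {
            exec {
              command = ["curl", "-f", "https://api.saintsxctf.com"]
            }
            period_seconds = 10
          }
        }
      }
    }
  }
}
```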
Similar to the web application infrastructure, the API infrastructure consists of Kubernetes service and deployment objects. Unlike the web application, the API has two services and two deployments. One service-deployment pair is for an Nginx reverse proxy, and the other is for a uWSGI application server. The Nginx reverse proxy sits in front of the uWSGI server, routing traffic to it. The uWSGI application server holds the API code. I wrote an article about using Nginx reverse proxies, with the SaintsXCTF application as the case study.
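A minimal sketch of what the Nginx reverse proxy configuration might look like, forwarding requests to the uWSGI server through its Kubernetes service. The service DNS name and port here are assumptions, not the actual values from my configuration.

```nginx
server {
    listen 80;

    location / {
        include uwsgi_params;
        # Kubernetes service DNS name of the uWSGI deployment (assumed)
        uwsgi_pass saints-xctf-api-flask.saints-xctf.svc.cluster.local:5000;
    }
}
```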
The SaintsXCTF Kubernetes infrastructure has an Ingress object, which creates load balancing infrastructure needed to route traffic from the internet to the web application and API. The Ingress object utilizes an ALB Ingress Controller (now known as an AWS Load Balancer Controller) to create a load balancer on AWS for the SaintsXCTF application. It also uses ExternalDNS to create Route53 DNS records for saintsxctf.com, www.saintsxctf.com, api.saintsxctf.com, and www.api.saintsxctf.com. I discussed ALB Ingress Controllers and ExternalDNS in another article on AWS EKS.
Again, the infrastructure is originally configured in YAML and then translated into Terraform/HCL. The following code is the Ingress YAML configuration. The Terraform configuration is viewable on GitHub.
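A simplified sketch of the Ingress object is shown here; annotation values, the API service name, and the `www` host rules are assumptions or placeholders, and the complete version is on GitHub.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: saints-xctf-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    # ExternalDNS creates Route53 records for these domains.
    external-dns.alpha.kubernetes.io/hostname: saintsxctf.com,www.saintsxctf.com,api.saintsxctf.com,www.api.saintsxctf.com
    # ALB Ingress Controller settings (assumed values).
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
    # www-prefixed hosts omitted for brevity.
    - host: saintsxctf.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: saints-xctf-web-service
              servicePort: 80
    - host: api.saintsxctf.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: saints-xctf-api   # assumed API service name
              servicePort: 80
```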
Four hosts are specified in the configuration: saintsxctf.com, api.saintsxctf.com, and their www-prefixed equivalents. Traffic to these domains is appropriately routed to either the SaintsXCTF web application or the API via their Kubernetes services. The most interesting configuration fields are found in the `annotations` dictionary. The `alb.ingress.kubernetes.io` annotations configure an AWS load balancer to route traffic to the Kubernetes cluster. The `external-dns.alpha.kubernetes.io` annotation creates Route53 DNS records for my SaintsXCTF web application and API domains. With these DNS records created, HTTP/HTTPS requests to the four saintsxctf.com domains are routed to the AWS load balancer created by the Ingress object, which in turn routes traffic to my Kubernetes pods.
With the Ingress object and Service/Deployment objects created in Kubernetes, the website and API are fully functional and accessible from clients browsing the internet!
Maintaining my SaintsXCTF web application and API infrastructure on Kubernetes is a massive improvement over my previous AWS EC2 virtual machine setup. With Kubernetes, I can easily release new versions of my website and API with zero downtime. Terraform also improves my ability to quickly alter Kubernetes infrastructure, since all the infrastructure is configured as code and can be created, updated, or destroyed on demand. All the code for my Kubernetes infrastructure is available on GitHub. I also have test code for my Kubernetes infrastructure, which is discussed further in another article.