RETROSPECTIVE

September 27th, 2020

Jenkins Server Legacy Infrastructure on EC2 and EFS

Jenkins

AWS

Terraform

Amazon EC2

Amazon EFS

Packer

Ansible

DevOps

Back in 2018, I created a Jenkins server which automated tasks for my applications. Jenkins is a continuous integration and continuous delivery (CI/CD) tool which I've written about in the past. When I first created the Jenkins server, I had a few jobs which ran unit tests, but I never took full advantage of them. Over the past two years, I've gained a greater appreciation for CI/CD tools and their ability to save time when deploying code and to build confidence in codebases by automating tests. Nowadays all my applications have automated test and deployment jobs on Jenkins.

Since 2018 the Jenkins ecosystem has evolved along with my understanding of cloud concepts. My original Jenkins server was hosted on an AWS EC2 instance which utilized AWS EFS for persistent storage. In the spring of 2020, I decided to rewrite the Jenkins server infrastructure. With my added knowledge of containerization with Docker and container orchestration with Kubernetes, I hosted the Jenkins server on AWS EKS as part of a Kubernetes deployment. In this article, I discuss the original EC2 Jenkins server and its creation process with Terraform. In an upcoming article, I'll discuss the Kubernetes Jenkins server infrastructure.

While designing AWS architecture for the Jenkins server, I wanted the server configuration to persist between virtual machine restarts. This way I could schedule my EC2 instance to only run during the day (which optimizes energy consumption and cost) and not lose any data when offline at night. The solution to persisting data between EC2 instances is to store the Jenkins server configuration files on AWS EFS and mount it onto the instances. When the EC2 instance is shut down at night, the filesystem in EFS is not destroyed, allowing it to be remounted onto another instance in the morning.

AWS EFS

AWS Elastic File System (EFS) is a filesystem that is highly available and scalable [1]. It can be mounted on multiple EC2 instances at once [2]. EFS is accessed over the Network File System (NFS) protocol, so the filesystem is distributed (it lives on a different server than the one that communicates with it) [3]. From a user's perspective, EFS behaves like any non-distributed filesystem, making it easy to work with.

Besides EC2 and EFS, the infrastructure utilizes Route53 for DNS records, a custom AMI for the Jenkins virtual machine image, an autoscaling group for shutting down the EC2 instance at night, and an Elastic Load Balancer (ELB) for load balancing traffic to the EC2 instance(s) running Jenkins. All of this infrastructure is configured and created with Terraform (except for the custom AMI, which is built with Packer).

The Jenkins server's Terraform configuration is separated into three modules. The first module creates EC2 related resources, the second creates the EFS filesystem and mount target, and the third creates Route53 records. I will discuss some important pieces of the configuration, but the full code is available on GitHub in the jenkins, jenkins-efs, and jenkins-route53 folders, respectively.

In the jenkins module, the main resource is the launch configuration for the Jenkins server. A launch configuration determines how an autoscaling group creates EC2 instances.

resource "aws_launch_configuration" "jenkins-server-lc" {
  name                        = "global-jenkins-server-lc"
  image_id                    = data.aws_ami.jenkins-ami.id
  instance_type               = "t2.micro"
  key_name                    = "jenkins-key"
  security_groups             = [aws_security_group.jenkins-server-lc-security-group.id]
  associate_public_ip_address = true
  iam_instance_profile        = aws_iam_instance_profile.jenkins-instance-profile.name

  # Script to run during instance startup
  user_data = data.template_file.jenkins-startup.rendered

  lifecycle {
    create_before_destroy = true
  }
}

The two aspects of the launch configuration I want to focus on are image_id and user_data. The image_id specifies a custom Amazon Machine Image (AMI) for the Jenkins server. An AMI is a template for creating a virtual machine in AWS. I create (bake) the AMI with Packer, which I'll discuss later. It's important to note that in the Terraform configuration I use a data object to find the custom AMI that I create (as shown below).

data "aws_ami" "jenkins-ami" {
  # If more than one result matches the filters, use the most recent AMI
  most_recent = true

  filter {
    name   = "name"
    values = ["global-jenkins-server*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["<aws_account_id>"]
}

The user data specified on the launch configuration is a Bash script that runs when the virtual machine boots up. It's in this script that I mount the EFS filesystem onto the EC2 instance. This script is passed parameters from Terraform and is found in jenkins-setup.sh.
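
The full script isn't reproduced in this article, but at its core it installs an NFS client and mounts the EFS filesystem over the Jenkins home directory. The snippet below is a minimal sketch of that step, not the actual contents of jenkins-setup.sh; the EFS_DNS value and the /var/lib/jenkins mount point are assumptions for illustration.

#!/usr/bin/env bash
# Minimal sketch of the EFS mount performed in the user data script (hypothetical values).
# The EFS filesystem DNS name is assumed to be passed in from Terraform's template_file data source.
EFS_DNS="<efs_id>.efs.us-east-1.amazonaws.com"

# Install an NFS client and mount the EFS filesystem over the Jenkins home directory,
# so the server configuration persists when the EC2 instance is terminated.
sudo apt-get -y install nfs-common
sudo mkdir -p /var/lib/jenkins
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  "$EFS_DNS:/" /var/lib/jenkins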

Another important piece of the EC2 setup is the autoscaling group and autoscaling schedules. The Terraform configuration below creates the autoscaling group.

resource "aws_autoscaling_group" "jenkins-server-asg" {
  name                      = "global-jenkins-server-asg"
  launch_configuration      = aws_launch_configuration.jenkins-server-lc.id
  vpc_zone_identifier       = [data.aws_subnet.resources-vpc-public-subnet.id]
  max_size                  = var.max_size_on
  min_size                  = var.min_size_on
  desired_capacity          = var.desired_capacity_on
  load_balancers            = [aws_elb.jenkins-server-elb.id]
  health_check_type         = "ELB"
  health_check_grace_period = 600

  lifecycle {
    create_before_destroy = true
  }

  tag {
    key                 = "Name"
    propagate_at_launch = true
    value               = "global-jenkins-server-asg"
  }

  tag {
    key                 = "Application"
    propagate_at_launch = false
    value               = "jenkins-jarombek-io"
  }
}

The capacity arguments, which determine the number of EC2 instances in the autoscaling group, are configured with variables. The same holds true for the autoscaling schedules, which bring the Jenkins server online and take it offline at set times each day. The start and stop times are configured differently for weekdays and weekends.

# main.tf

resource "aws_autoscaling_schedule" "jenkins-server-asg-online-weekday" {
  autoscaling_group_name = aws_autoscaling_group.jenkins-server-asg.name
  scheduled_action_name  = "jenkins-server-online-weekday"
  max_size               = var.max_size_on
  min_size               = var.min_size_on
  desired_capacity       = var.desired_capacity_on
  recurrence             = var.online_cron_weekday
}

resource "aws_autoscaling_schedule" "jenkins-server-asg-offline-weekday" {
  autoscaling_group_name = aws_autoscaling_group.jenkins-server-asg.name
  scheduled_action_name  = "jenkins-server-offline-weekday"
  max_size               = var.max_size_off
  min_size               = var.min_size_off
  desired_capacity       = var.desired_capacity_off
  recurrence             = var.offline_cron_weekday
}

resource "aws_autoscaling_schedule" "jenkins-server-asg-online-weekend" {
  autoscaling_group_name = aws_autoscaling_group.jenkins-server-asg.name
  scheduled_action_name  = "jenkins-server-online-weekend"
  max_size               = var.max_size_on
  min_size               = var.min_size_on
  desired_capacity       = var.desired_capacity_on
  recurrence             = var.online_cron_weekend
}

resource "aws_autoscaling_schedule" "jenkins-server-asg-offline-weekend" {
  autoscaling_group_name = aws_autoscaling_group.jenkins-server-asg.name
  scheduled_action_name  = "jenkins-server-offline-weekend"
  max_size               = var.max_size_off
  min_size               = var.min_size_off
  desired_capacity       = var.desired_capacity_off
  recurrence             = var.offline_cron_weekend
}
# var.tf

variable "max_size_on" {
  description = "Max number of instances in the auto scaling group during an online period"
  default     = 1
}

variable "min_size_on" {
  description = "Min number of instances in the auto scaling group during an online period"
  default     = 1
}

variable "max_size_off" {
  description = "Max number of instances in the auto scaling group during an offline period"
  default     = 0
}

variable "min_size_off" {
  description = "Min number of instances in the auto scaling group during an offline period"
  default     = 0
}

variable "desired_capacity_on" {
  description = "The desired number of instances in the autoscaling group when I am working"
  default     = 1
}

variable "desired_capacity_off" {
  description = "The desired number of instances in the autoscaling group when I am NOT working"
  default     = 0
}

# Weekdays: 6:30PM - 8:00PM EST
variable "online_cron_weekday" {
  description = "The cron syntax for when the Jenkins server should go online on a weekday"
  default     = "0 23 * * 1-5"
}

variable "offline_cron_weekday" {
  description = "The cron syntax for when the Jenkins server should go offline on a weekday"
  default     = "0 1 * * 2-6"
}

# Weekends: 12:00PM - 8:00PM EST
variable "online_cron_weekend" {
  description = "The cron syntax for when the Jenkins server should go online on a weekend"
  default     = "0 17 * * 0,6"
}

variable "offline_cron_weekend" {
  description = "The cron syntax for when the Jenkins server should go offline on a weekend"
  default     = "0 1 * * 0,1"
}

The other pieces of the main Jenkins module, such as the load balancer and security groups, are available on GitHub in main.tf.

In the jenkins-efs module, the EFS filesystem is created along with a mount target, which is located in the same subnet as the EC2 instance.

resource "aws_efs_file_system" "jenkins-efs" {
  creation_token = "jenkins-fs"

  tags = {
    Name = "jenkins-efs"
  }
}

resource "aws_efs_mount_target" "jenkins-efs-mount" {
  file_system_id  = aws_efs_file_system.jenkins-efs.id
  subnet_id       = data.aws_subnet.resources-vpc-public-subnet.id
  security_groups = [aws_security_group.jenkins-efs-security.id]
}

Finally, in the jenkins-route53 module, a Route53 record is created and bound to the load balancer.

data "aws_route53_zone" "jarombek-io-zone" {
  name = "jarombek.io."
}

data "aws_elb" "jenkins-server-elb" {
  name = "global-jenkins-server-elb"
}

resource "aws_route53_record" "jenkins-jarombek-io-a" {
  name    = "jenkins.jarombek.io"
  type    = "A"
  zone_id = data.aws_route53_zone.jarombek-io-zone.zone_id

  alias {
    evaluate_target_health = true
    name                   = data.aws_elb.jenkins-server-elb.dns_name
    zone_id                = data.aws_elb.jenkins-server-elb.zone_id
  }
}

Packer is a tool which allows developers to configure and create custom machine images [4]. It can create images for many different platforms, but the image I'm creating is specifically for AWS (an Amazon Machine Image). To build an image with Packer, the first task is to create a JSON template which configures the image. I created a JSON template called jenkins-image.json, which has the following content:

{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
        "root-device-type": "ebs"
      },
      "owners": ["099720109477"],
      "most_recent": true
    },
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "global-jenkins-server {{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "./setup-jenkins-image.sh"
    },
    {
      "type": "ansible-local",
      "playbook_file": "./setup-playbook.yml"
    }
  ]
}

The template is split into three pieces - variables, builders, and provisioners. The variables section defines variables which can be used throughout the template. In my template, I set variables for my AWS SDK/CLI credentials. These variables are passed to the builder, allowing it to push the AMI to my AWS account.
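
As long as those environment variables are exported in the shell that runs Packer, the template can authenticate with AWS. For example (the values below are placeholders, not real credentials):

# Export AWS credentials so the template's {{env ...}} functions can read them.
export AWS_ACCESS_KEY_ID="<access_key>"
export AWS_SECRET_ACCESS_KEY="<secret_key>"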

The builders section defines Builders, which are components of Packer that can create machine images for a specific platform [5]. In my case, I use the amazon-ebs builder, which creates an Elastic Block Storage (EBS) backed Amazon EC2 image [6]. I configure the builder to use a base Ubuntu image via the source_ami_filter object. When the AMI is created, it will exist in my AWS account with the name specified in the ami_name field (note the {{timestamp}} function, which resolves to the current time).

The provisioners section configures software that runs on top of the base image. There are multiple provisioner types, such as shell scripts and Ansible playbooks, and multiple provisioners can be used for a single image. In my case, the first provisioner runs a shell script which installs Ansible on the image. This is a required step for using the ansible-local provisioner.

#!/usr/bin/env bash

# Make sure Ubuntu has enough time to initialize (https://www.packer.io/intro/getting-started/provision.html)
sleep 30

sudo apt-add-repository ppa:ansible/ansible -y
sudo apt-get update
sudo apt-get -y install ansible

The second provisioner runs an Ansible playbook on the image, installing dependencies such as Java, Python, and Jenkins in the process. At a basic level, an Ansible playbook is a YAML file which specifies the hosts to run on and the tasks to execute on them; playbooks can run locally or against remote machines. Below is a snippet of my playbook, which installs dependencies such as Java from the apt package manager (Jenkins is written in Java). The full playbook code can be found in setup-playbook.yml.

- hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Install Java8, Python3, Git, Wget & Unzip
      become: yes
      apt: pkg={{item}} state=installed
      with_items:
        - openjdk-8-jdk
        - git
        - wget
        - unzip
        - software-properties-common

To build an AMI with the Packer template, a packer build jenkins-image.json command is run from the terminal. Optionally, the template can be verified prior to the build with a packer validate jenkins-image.json command.
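
Putting those two commands together, a typical workflow from the directory containing the template looks like the following:

# Check the template for syntax and configuration errors.
packer validate jenkins-image.json

# Bake the AMI and register it in the AWS account configured in the builder.
packer build jenkins-image.json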

One of the nice aspects of this infrastructure design was that I had a functional Jenkins server to work with but didn't have to pay for a constantly running server due to the autoscaling schedules. It was also all configured as code, so destroying and rebuilding the infrastructure was as simple as running terraform destroy and terraform apply commands, along with packer build jenkins-image.json if the AMI changed.

However, with time I realized there were better approaches to building a Jenkins server in the cloud. One of the biggest reasons I needed EFS was to avoid manually reconfiguring the Jenkins server and reinstalling Jenkins plugins every time I made an infrastructure change. Luckily, Jenkins provides a plugin called Jenkins Configuration as Code (JCasC), which automates the configuration of a Jenkins server [7]. Jenkins also allows plugins to be pre-installed with a plugins.txt file. I'll discuss both approaches in my follow-up article, but in summary, the end result of using them is that I no longer need EFS.

The JCasC plugin was in its infancy when I created the original EC2/EFS Jenkins infrastructure in 2018, so I can't really blame myself for not knowing about it. One thing I should have known at the time, however, was the power of using containers instead of virtual machines for cloud infrastructure. After using Docker and Kubernetes a good amount over the past year and a half, I decided to move my Jenkins server to a Docker container, which is orchestrated by Kubernetes on EKS in production. The benefits of containers over virtual machines are well documented; in short, containers are more lightweight and energy efficient, and they require less maintenance (since only the operating system is virtualized, not an entire server). I also find Dockerfiles easier to work with and read than Packer templates, but that is more of a personal preference.

The EC2 and EFS infrastructure discussed in this article was a good foundation to improve upon with Docker and Kubernetes. While this infrastructure no longer exists in my cloud, the repository is tagged at the time of its existence and its code can be found on GitHub. In a follow-up article, I will discuss the Kubernetes Jenkins server infrastructure that I'm using in my cloud today.

[1] "Amazon Elastic File System", https://aws.amazon.com/efs/

[2] Michael Wittig & Andreas Wittig, Amazon Web Services In Action, 2nd ed (Shelter Island, NY: Manning, 2019), 275

[3] "Network File System", https://en.wikipedia.org/wiki/Network_File_System

[4] "What is Packer?", https://www.packer.io/intro#what-is-packer

[5] "Packer Terminology: Builders", https://www.packer.io/docs/terminology#builders

[6] "AMI Builder (EBS backed)", https://www.packer.io/docs/builders/amazon/ebs

[7] "Jenkins Configuration as Code (a.k.a. JCasC) Plugin", https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/README.md