April 28th, 2019

Docker Part III: Containerizing an Application

In my previous two Docker articles, I explored container environment basics and created a playground to run Docker on AWS. In this article, I'm creating a containerized application that is publicly accessible from the internet.

Containerized Application

A containerized application is a software application that runs inside a container. Configuring an application to run inside a container is known as "containerizing" an application.[1] This process usually includes creating a Dockerfile in the root directory of the application repository. Dockerfiles are blueprints for building Docker images, and commonly declare application dependencies and execution processes.[2]

The first thing needed before containerizing an application is the application itself. I created a basic Node.js application which responds with some JSON. It consists of a single main.js file.

// main.js
const express = require('express');

const app = express();
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({title: 'Dockerized Node.js App'});
});

module.exports = app.listen(port, () => {
  console.log(`Started Containerized App on Port ${port}`);
});
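One detail worth noting in main.js is the port fallback: process.env.PORT || 3000 uses the PORT environment variable when it is set and defaults to 3000 otherwise, which is why the Dockerfile that follows exposes port 3000. A minimal sketch of that fallback on its own (resolvePort is a hypothetical helper for illustration, not part of main.js):

```javascript
// Resolve the listening port the same way main.js does:
// use the PORT environment variable when present, otherwise 3000.
function resolvePort(env) {
  return parseInt(env.PORT, 10) || 3000;
}

console.log(resolvePort({}));               // → 3000 (no PORT set)
console.log(resolvePort({PORT: '8080'}));   // → 8080
```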

With the Node.js application in place, I created a Dockerfile which configures a container to run the main.js file.

# Dockerfile

# The 'FROM' instruction defines the base layer of the image.
# This image uses the Alpine Linux distro as the base image.
FROM alpine

# The 'LABEL' instruction is used to add metadata about the image.
LABEL maintainer=""

# The 'RUN' instruction adds a new layer to an image. It executes commands on the image.
RUN apk add --update nodejs nodejs-npm

# The 'COPY' instruction copies files in the build context onto the image.
COPY . /src

# The 'WORKDIR' instruction sets the directory to execute the remaining instructions from.
WORKDIR /src

# Install the application's npm dependencies inside the /src directory.
RUN npm install

# The 'EXPOSE' instruction documents which port the containerized application listens on.
EXPOSE 3000

# The 'ENTRYPOINT' instruction declares the main application that the container runs.
ENTRYPOINT ["node", "./main.js"]

As previously mentioned, a Dockerfile is a blueprint for an image, and a container is a running instance of an image. The Dockerfile for my application begins by building on top of the alpine image. alpine is an extremely small Docker image based on the Alpine Linux distribution. On top of the Alpine OS, I install Node.js and the application's npm dependencies. Finally, I declare the node process that the container runs in the ENTRYPOINT instruction.
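One refinement worth mentioning: since COPY . /src copies the entire build context onto the image, a locally installed node_modules directory (or the .git directory) would be copied along with it. A .dockerignore file in the repository root keeps them out of the build context. This file is a suggested addition on my part, not something in the original repository:

```
# .dockerignore (suggested addition)
node_modules
.git
```

With node_modules excluded, the RUN npm install instruction still installs dependencies fresh inside the image, so nothing is lost.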

Now it's time to containerize the application in the Docker playground I built in my previous post. The first steps are connecting to the playground EC2 instance and cloning the Git repository which contains the Node.js application.

# Connect to the Docker Playground EC2 instance
ssh -A

# Clone the repository and navigate to the application directory
git clone
cd devops-prototypes/docker/nodejs-docker-app/

With the Node.js application on the EC2 instance, it's time to containerize the application using Docker. The following command does the trick:

docker image build -t nodejs-docker-app:latest .

The final dot (.) in this command is important since it tells Docker to build an image based on the Dockerfile in the current directory. List all the Docker images to confirm that a new nodejs-docker-app image exists.

docker image ls
# REPOSITORY          TAG     IMAGE ID      CREATED        SIZE
# nodejs-docker-app   latest  1510b5182aaa  2 seconds ago  50.3MB
# alpine              latest  cdf98d1859c1  6 days ago     5.53MB

Notice there is also an alpine image. It was pulled from Docker Hub when the FROM alpine instruction of my Dockerfile was executed during the build.

The next step is to start a Docker container from the nodejs-docker-app image.

docker container run -d --name nodejs-app -p 80:3000 nodejs-docker-app:latest

Confirm the container is running by listing the containers with docker container ls. The application listens on port 3000 inside the container, and the -p 80:3000 flag maps port 80 on the EC2 instance to it. Using the public DNS name of the EC2 instance, the application is viewable from a web browser.

While this is a very basic application, the same containerization process can be used for complex pieces of software. In a production system, you won't run just a single container; replicas will be created through a container orchestrator such as Kubernetes or Docker Swarm. Kubernetes will be the topic of future articles since I've been using it extensively lately. All the code from this article is available on GitHub.

[1] Nigel Poulton, Docker Deep Dive (Nigel Poulton, 2018), 133

[2] Poulton, 136