
Building Docker Images
Docker images package applications together with their dependencies, ready to be launched at scale. Their lightweight architecture also makes them well suited to cloud servers and data centers. Docker images are created from the steps defined in a Dockerfile, where each instruction forms a layer on top of the previous one. This layered design is the prominent feature that makes Docker images lightweight and quick to start. The underlying technology is a union filesystem (UnionFS), which can be thought of as stackable layers of files and directories. Each layer is traceable back to its parent layer in a tree structure, so that different branches can share the same root. In other words, if two container images share the same base image of ubuntu:18.10, that base image is not stored twice; Docker Engine reuses the same base layers to run both containers. In the next sections, we will present Dockerfiles, how containers are defined, and how they are released to registries.
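The layer sharing described above can be observed directly from the command line. A brief sketch, assuming the ubuntu:18.10 image is available locally:

```shell
# List the layers of a local image; each row corresponds to one
# instruction from the Dockerfile that produced it.
docker history ubuntu:18.10

# Summarize disk usage of images and containers; layers shared
# between images are stored (and counted) only once.
docker system df
```

When a second image based on ubuntu:18.10 is pulled, the shared base layers are reported as "Already exists" instead of being downloaded again.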
Dockerfiles
A Dockerfile consists of the commands required to build a Docker image, listed in sequential order. Docker Engine reads this text file to create the Docker image; its steps are defined with instructions including, but not limited to, the following:
- FROM: The base image for the container as a starting phase
- ADD: To copy files from the build context (or remote URLs) into the container filesystem
- ENV: The environment variables for the container
- RUN: To execute commands in the container at build time, similar to running commands in a terminal
- WORKDIR: The working directory to run the container commands
- CMD: The executable command to run every time the container starts
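To illustrate these instructions together, a minimal Dockerfile might look as follows. This is a sketch for illustration only; the file names and environment values are hypothetical:

```dockerfile
# Start from an official base image
FROM ubuntu:18.10
# Set an environment variable available inside the container
ENV APP_ENV=production
# Copy a file from the build context into the container filesystem
ADD index.html /usr/apps/hello-world/
# Execute a command at build time, creating a new layer
RUN apt-get update
# Set the working directory for subsequent instructions and the container
WORKDIR /usr/apps/hello-world/
# Define the command executed each time the container starts
CMD ["cat", "index.html"]
```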
Note
A complete list of supported commands in Dockerfile is available in the official reference document at https://docs.docker.com/engine/reference/builder/.
An example Dockerfile for a web server is defined by the following steps:
- The base image of ubuntu:18.10.
- The RUN commands to update the apt-get repositories, install nodejs and npm, and install http-server.
- WORKDIR is defined as the /usr/apps/hello-world/ folder, which will be used for HTML files later.
- The executable command to run http-server on port 8080. Since WORKDIR was defined previously, the CMD command will run in the /usr/apps/hello-world/ folder:
FROM ubuntu:18.10
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
WORKDIR /usr/apps/hello-world/
CMD ["http-server", "-p", "8080"]
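Assuming this file is saved as Dockerfile in the current directory, the image can be built and tried locally with commands along these lines (hello-world-web is an arbitrary example tag, not one used later in the chapter):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t hello-world-web .

# Run the container in the background, mapping container port 8080 to the host
docker run -d -p 8080:8080 hello-world-web

# The web server should now answer on the mapped port
curl http://localhost:8080
```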
The Docker Registry
Docker registries are the solution for building and delivering containers in a cloud-native way. A Docker registry is a content delivery and storage solution for Docker images. Images are tagged with specific versions, and different tags of the same Docker image are kept in the same repository. Docker registries play a crucial role in continuous delivery and deployment: by storing images efficiently and delivering them in a scalable fashion, they make it possible to run hundreds of instances in a distributed cluster. Cloud registries also provide security features suitable for both startups and large enterprises. There are various cloud registry services, and some of the most popular ones are as follows:
- Docker Hub: https://hub.docker.com/
- Quay: https://quay.io/
- Amazon Elastic Container Registry (ECR): https://aws.amazon.com/ecr/
- Google Container Registry: https://cloud.google.com/container-registry/
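The target registry is encoded in the image name itself: an image tagged with a registry hostname prefix is pushed to that registry. A brief sketch with placeholder names:

```shell
# Docker Hub is the default registry, so only the username prefix is needed
docker tag webserver:latest <USERNAME>/webserver:latest
docker push <USERNAME>/webserver:latest

# Other registries are addressed by prefixing their hostname to the image name
docker tag webserver:latest quay.io/<USERNAME>/webserver:latest
docker push quay.io/<USERNAME>/webserver:latest
```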
In the following exercise, we will build a Docker image and push it to the Docker registry. This will demonstrate how to build and deliver images in a cloud-native way, which is a prerequisite for running containerized microservices in cloud systems such as Kubernetes.
Note
You will need a Docker Hub account to push the images into the registry in the following exercise. Docker Hub is a free service and you can sign up to it at https://hub.docker.com/signup.
Exercise 6: Building a Docker Image and Pushing it to Docker Hub
In this exercise, we aim to build a web server container image and push it to Docker Hub.
To complete the exercise, we need to ensure the following steps are executed:
- Create a text file named Dockerfile, and include the following content:
FROM ubuntu:18.10
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
WORKDIR /usr/apps/hello-world/
CMD ["http-server", "-p", "8080"]
Note
Dockerfile is already available at https://github.com/TrainingByPackt/Introduction-to-DevOps-with-Kubernetes/blob/master/Lesson02/Dockerfile.
- Build the Docker image with the tag including your Docker Hub username:
docker build -t <USERNAME>/webserver:latest .
Figure 2.11: The output of docker build (end of the run)
Building the image includes installing various libraries; therefore, a long output is to be expected. At the end of the build, you will see Successfully built 1e54f0e11db7 and Successfully tagged onuryilmaz/webserver:latest on the screen (with your own username in place of onuryilmaz), which indicate successful completion.
- Create a repository named webserver in Docker Hub:
Figure 2.12: The repository view in Docker Hub
Figure 2.13: Create a repository in Docker Hub
Fill the name field with webserver and ensure that you select Public under Visibility. Click on the Create button, and you will be redirected to the new repository page:
Figure 2.14: The new repository in Docker Hub
Note
If this is the first time that you are using Docker client with your Docker Hub account, you need to log in using the docker login command from the Terminal.
- Push the image to the Docker Hub registry, as follows:
docker push <USERNAME>/webserver

Figure 2.15: The output of docker push
Following successful completion, all layers inside the Docker image should be uploaded. The new image with the latest tag can be checked in the Tags section of the repository on Docker Hub:

Figure 2.16: Tags of the new repository in Docker Hub
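To verify the push end to end, the local copy of the image can be removed and then pulled back from Docker Hub. A sketch, with your own username substituted for the placeholder:

```shell
# Remove the local copy of the image
docker rmi <USERNAME>/webserver:latest

# Pull it back from Docker Hub; all layers should download successfully
docker pull <USERNAME>/webserver:latest
```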
In this exercise, we demonstrated how to build a Docker image and push the image to the Docker registry in Docker Hub. In the following section, we will explain how you can run a Docker container and share resources from the host system in order to demonstrate the fundamentals of managing containers in the cloud.