Hands-On Docker for Microservices with Python

Building a web service container

We have a specific objective: to create a container capable of running our microservice, ThoughtsBackend. To do so, we have a couple of requirements:

  • We need to copy our code to the container.
  • The code needs to be served through a web server.

So, in broad strokes, we need to create a container with a web server, add our code, configure it so it runs our code, and serve the result when starting the container.

We will store most of the configuration files inside subdirectories in the ./docker directory.

As a web server, we will use uWSGI (https://uwsgi-docs.readthedocs.io/en/latest/). uWSGI is a web server capable of serving our Flask application through the WSGI protocol. uWSGI is quite configurable, has a lot of options, and is capable of serving HTTP directly.
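uWSGI speaks to Flask through the WSGI protocol mentioned above. As a minimal illustration of what that protocol is (a generic sketch, not the ThoughtsBackend code), a WSGI application is just a callable that receives the request environment and a `start_response` function, and returns the body:

```python
def application(environ, start_response):
    """A bare WSGI callable: what uWSGI invokes for every request."""
    status = "200 OK"
    headers = [("Content-Type", "text/plain; charset=utf-8")]
    start_response(status, headers)
    # The body is an iterable of bytes
    return [b"Hello from WSGI\n"]


# Exercise the callable directly, the way any WSGI server would:
def _start_response(status, headers):
    print(status)

body = application({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, _start_response)
print(b"".join(body).decode(), end="")
```

Flask implements this same interface for us, which is why uWSGI can serve any Flask application without knowing anything about Flask itself.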

A very common configuration is to put NGINX in front of uWSGI to serve static files, as it's more efficient at that. In our specific use case, we don't serve many static files, since we're running a RESTful API, and, in our main architecture, as described in Chapter 1, Making the Move – Design, Plan, and Execute, there's already a load balancer on the frontend and a dedicated static-files server. For simplicity, then, we won't add this extra component. NGINX usually communicates with uWSGI using the uwsgi protocol, a protocol specific to the uWSGI server, but it can also do so over HTTP. Check the NGINX and uWSGI documentation for details.
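For reference, the NGINX-in-front setup we are deliberately skipping would look roughly like this (a hypothetical sketch, not part of this chapter's configuration):

```nginx
# Hypothetical NGINX config: static files served directly, everything
# else forwarded to uWSGI over the binary uwsgi protocol.
server {
    listen 80;

    location /static/ {
        # NGINX is more efficient than uWSGI at serving static files
        alias /opt/static/;
    }

    location / {
        # Forward dynamic requests to the uWSGI server
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8000;
    }
}
```

Replacing `uwsgi_pass` with `proxy_pass http://127.0.0.1:8000;` would use plain HTTP instead, which is what our uWSGI server will speak directly in this chapter.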

Let's take a look at the docker/app/Dockerfile file. It has two stages; the first one is to compile the dependencies:

########
# This image will compile the dependencies
# It will install compilers and other packages, that won't be carried
# over to the runtime image
########
FROM alpine:3.9 AS compile-image

# Add requirements for python and pip
RUN apk add --update python3

RUN mkdir -p /opt/code
WORKDIR /opt/code

# Install dependencies
RUN apk add python3-dev build-base gcc linux-headers postgresql-dev libffi-dev

# Create a virtual environment for all the Python dependencies
RUN python3 -m venv /opt/venv
# Make sure we use the virtualenv:
ENV PATH="/opt/venv/bin:$PATH"
RUN pip3 install --upgrade pip

# Install and compile uwsgi
RUN pip3 install uwsgi==2.0.18
# Install other dependencies
COPY ThoughtsBackend/requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

This stage does the following steps:

  1. Names the stage compile-image, inheriting from Alpine.
  2. Installs python3.
  3. Installs the build dependencies, including the gcc compiler and the Python headers (python3-dev).
  4. Creates a new virtual environment. We will install all the Python dependencies there.
  5. Activates the virtual environment by prepending its bin directory to PATH.
  6. Installs uWSGI. This step compiles it from source.
You can also install the uWSGI package included in the Alpine distribution, but I found the compiled package to be more complete and easier to configure, as the Alpine uwsgi package requires you to install other packages such as uwsgi-python3, uwsgi-http, and so on, and then enable the plugins in the uWSGI configuration. The size difference is minimal. This also allows you to use the latest uWSGI version instead of depending on the one in your Alpine distribution.
  7. Copies the requirements.txt file and installs all the dependencies. This compiles the dependencies and installs them into the virtual environment.
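Steps 4 and 5 can be reproduced locally. Note that the Dockerfile never runs the usual activate script; prepending the virtual environment's bin directory to PATH, which is exactly what the ENV line does, is equivalent:

```shell
# Sketch of steps 4-5 outside Docker: create a venv and "activate" it by
# putting its bin/ directory first in PATH, as ENV PATH=... does.
VENV="$(mktemp -d)/venv"
python3 -m venv "$VENV"
PATH="$VENV/bin:$PATH"
# python and pip now resolve inside the virtual environment:
command -v python
command -v pip
```

This trick is handy in Dockerfiles because each RUN line starts a fresh shell, so a sourced activation would not persist between steps, while an ENV variable does.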

The second stage prepares the running container. Let's take a look:

########
# This image is the runtime, will copy the dependencies from the other
########
FROM alpine:3.9 AS runtime-image

# Install python
RUN apk add --update python3 curl libffi postgresql-libs

# Copy uWSGI configuration
RUN mkdir -p /opt/uwsgi
ADD docker/app/uwsgi.ini /opt/uwsgi/
ADD docker/app/start_server.sh /opt/uwsgi/

# Create a user to run the service
RUN addgroup -S uwsgi
RUN adduser -H -D -S uwsgi
USER uwsgi

# Copy the venv with compile dependencies from the compile-image
COPY --chown=uwsgi:uwsgi --from=compile-image /opt/venv /opt/venv
# Be sure to activate the venv
ENV PATH="/opt/venv/bin:$PATH"

# Copy the code
COPY --chown=uwsgi:uwsgi ThoughtsBackend/ /opt/code/

# Run parameters
WORKDIR /opt/code
EXPOSE 8000
CMD ["/bin/sh", "/opt/uwsgi/start_server.sh"]

It carries out the following actions:

  1. Labels the image as runtime-image and inherits from Alpine, as previously.
  2. Installs Python and the other requirements for the runtime.
Note that any runtime library required by the compiled packages needs to be installed here. For example, we install libffi in the runtime image and libffi-dev in the compile image, as required by the cryptography package. A mismatch will raise a runtime error when trying to access the (non-present) libraries. The dev libraries normally contain the runtime libraries.
  3. Copies the uWSGI configuration and the script that starts the service. We'll take a look at those in a moment.
  4. Creates a user to run the service, and sets it as the default using the USER command.
This step is not strictly necessary as, by default, the root user will be used. As our containers are isolated, gaining root access in one is inherently less dangerous than on a real server. In any case, it's good practice to not run our public-facing services as root, and it removes some understandable warnings.
  5. Copies the virtual environment, with all the compiled Python packages, from the compile-image image. Note that it is copied owned by the user that runs the service, so that the service has access to it. The virtual environment is then activated.
  6. Copies the application code.
  7. Defines the run parameters. Note that port 8000 is exposed. This will be the port we serve the application on.
If running as root, port 80 could be used instead. Routing a port in Docker is trivial, though, and other than on the front-facing load balancer, there's not really any reason to use the default HTTP port. Use the same one in all your systems, though, as that removes uncertainty.
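The uWSGI configuration copied in step 3 is examined in the next section; as a rough idea of its shape, a minimal uwsgi.ini for this layout could look like the following (a hypothetical sketch, not the chapter's actual file):

```ini
; Hypothetical sketch only; the real docker/app/uwsgi.ini is shown later.
[uwsgi]
uid=uwsgi
chdir=/opt/code
wsgi-file=wsgi.py
master=True
; Serve HTTP directly on the exposed port, with no NGINX in front
http=:8000
```

The `http` option is what lets uWSGI serve HTTP on its own, matching the architecture decision above to skip NGINX.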

Note that the application code is copied at the end of the file. The application code is likely going to be the code that changes most often, so this structure takes advantage of the Docker cache and recreates only the very few last layers, instead of having to start from the beginning. Take this into account when designing your Dockerfiles.
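To make the most of the cache, it also helps to keep the build context small and stable. A .dockerignore file (a hypothetical example; the chapter doesn't show one) excludes files that shouldn't be sent to the Docker daemon or copied into the image:

```text
# Hypothetical .dockerignore: illustrative patterns, not from the
# chapter's repository.
.git
**/__pycache__
*.pyc
```

Excluding caches and VCS metadata also prevents the COPY layer from being invalidated by files that don't affect the application.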

Also, keep in mind that there's nothing stopping you from changing the order while developing. If you're trying to find a problem with a dependency, and so on, you can comment out irrelevant layers or add steps later once the code is stable.

Let's build our container now. Note that two images are created, though only one is named. The other is the compile image, which is much bigger, as it contains the compilers, and so on:

$ docker build -f docker/app/Dockerfile --tag thoughts-backend .
...
---> 027569681620
Step 12/26 : FROM alpine:3.9 AS runtime-image
...
Successfully built 50efd3830a90
Successfully tagged thoughts-backend:latest
$ docker images | head
REPOSITORY         TAG      IMAGE ID       CREATED          SIZE
thoughts-backend   latest   50efd3830a90   10 minutes ago   144MB
&lt;none&gt;             &lt;none&gt;   027569681620   12 minutes ago   409MB

Now we can run the container. To be able to access the internal port 8000, we need to route it with the -p option:

$ docker run -it -p 127.0.0.1:8000:8000/tcp thoughts-backend

Pointing a local browser at 127.0.0.1:8000 shows our application. You can see the access logs in the standard output.

You can access a running container from a different Terminal with docker exec and execute a new shell. Remember to add -it to keep the Terminal open. Inspect the currently running containers with docker ps to find the container ID:

$ docker ps
CONTAINER ID IMAGE COMMAND ... PORTS ...
ac2659958a68 thoughts-backend ... ... 127.0.0.1:8000->8000/tcp
$ docker exec -it ac2659958a68 /bin/sh
/opt/code $ ls
README.md __pycache__ db.sqlite3 init_db.py pytest.ini requirements.txt tests thoughts_backend wsgi.py
/opt/code $ exit
$

You can stop the container with Ctrl + C, or, more gracefully, stop it from another Terminal:

$ docker ps
CONTAINER ID IMAGE COMMAND ... PORTS ...
ac2659958a68 thoughts-backend ... ... 127.0.0.1:8000->8000/tcp
$ docker stop ac2659958a68
ac2659958a68

The logs will show the graceful stop:

...
spawned uWSGI master process (pid: 6)
spawned uWSGI worker 1 (pid: 7, cores: 1)
spawned uWSGI http 1 (pid: 8)
Caught SIGTERM signal! Sending graceful stop to uWSGI through the master-fifo
Fri May 31 10:29:47 2019 - graceful shutdown triggered...
$

Capturing SIGTERM properly and stopping our services gracefully is important to avoid abrupt terminations of services. We'll see how to configure this in uWSGI, as well as in the rest of the elements.
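uWSGI handles this for us, but to illustrate what catching SIGTERM means, here is a minimal Python sketch of the same idea (a generic illustration, not uWSGI's code):

```python
# Minimal sketch of graceful SIGTERM handling: set a flag so the main
# loop can finish in-flight work and exit, instead of dying mid-request.
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate "docker stop", which sends SIGTERM to PID 1 in the container:
os.kill(os.getpid(), signal.SIGTERM)
print("graceful shutdown triggered:", shutting_down)
```

This is also why the CMD line matters: the process started there receives the SIGTERM that docker stop sends, so it must be something, like uWSGI, that knows how to react to it.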