
Compiling the application before the build

Building the application first fits in neatly with existing build pipelines. Your build servers need to have all the application platforms and build tools installed, but your finished container image only has the minimum it needs to run the app. With this approach, the Dockerfile for my .NET Core app becomes even simpler:

FROM microsoft/dotnet:1.1-runtime-nanoserver

WORKDIR /dotnetapp
COPY ./src/bin/Debug/netcoreapp1.1/publish .

CMD ["dotnet", "HelloWorld.NetCore.dll"]

This Dockerfile uses a different FROM image, one that contains just the .NET Core 1.1 runtime and not the tooling (so it can run a compiled application, but it can't compile one from source). You can't build this image without building the application first, so you'll need to wrap the docker image build command in a build script that also runs the dotnet publish command to compile the binaries.

A simple build script that compiles the application and builds the Docker image looks like this:

dotnet restore src; dotnet publish src

docker image build --file Dockerfile.slim --tag dockeronwindows/ch02-dotnet-helloworld:slim .

If you put your Dockerfile instructions in a file called something other than Dockerfile, you can build it by specifying the filename with the --file option, as in this example: docker image build --file Dockerfile.slim.

I've moved the requirements for the platform tooling from the image to the build server, and that results in a smaller final image: 1.15 GB for this version compared to 1.68 GB for the previous one. You can see the size difference by listing images, and filtering on the image repository name:

> docker image ls --filter reference=dockeronwindows/ch02-dotnet-helloworld

REPOSITORY                               TAG     IMAGE ID       CREATED          SIZE
dockeronwindows/ch02-dotnet-helloworld   latest  ebdf7accda4b   6 minutes ago    1.68GB
dockeronwindows/ch02-dotnet-helloworld   slim    63aebf93b60e   13 minutes ago   1.15GB

This new version is also a more restricted image. The source code and the .NET Core SDK aren't packaged in the image, so you can't connect to a running container and inspect the application code, or make changes to the code and recompile the app.

For enterprise environments, or for commercial applications, you're likely to already have a well-equipped build server, and packaging the built app can be part of a more comprehensive workflow:

In this pipeline, the developer pushes their changes to the central source code repository (1). The build server compiles the application and runs unit tests; if they pass, the container image is built and deployed to a staging environment (2). Integration tests and end-to-end tests are run against the staging environment, and if they pass, then your versioned container image is a good release candidate for testers to verify (3).

You deploy a new release by running a container from the image in production, and you know that your whole application stack is the same set of binaries which passed all the tests.
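To make that workflow concrete, here is a minimal sketch of the kind of script the build server might run for the compile, test, and package stages. It is a hypothetical example rather than the book's sample code: the script name, the test project path, and the final push to a registry for the staging environment are all assumptions.

# build.ps1 - hypothetical build-server script; project paths and tags are illustrative
dotnet restore src
dotnet restore test/HelloWorld.NetCore.Tests
dotnet test test/HelloWorld.NetCore.Tests
if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }   # stop the pipeline if unit tests fail

dotnet publish src
docker image build --file Dockerfile.slim --tag dockeronwindows/ch02-dotnet-helloworld:slim .
docker image push dockeronwindows/ch02-dotnet-helloworld:slim   # make the image available to the staging environment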

The downside of this approach is that you need to have the application SDK installed on all your build agents, and the versions of the SDK and all its dependencies need to match what the developers are using. Often in Windows projects, you find CI servers with Visual Studio installed, to ensure the server has the same tools as the developer. That makes for heavy build servers that take a lot of effort to commission and maintain.

It also means that you can't build this Docker image yourself unless you have the .NET Core 1.1 SDK installed on your machine.

You can get the best of both options by using a multi-stage build, where your Dockerfile defines one step to compile your application, and another step to package it into the final image. Multi-stage Dockerfiles are portable, so anyone can build the image with no prerequisites, but the final image only contains the minimum needed for the app.
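As an illustration, a multi-stage Dockerfile for this app could look something like the sketch below. The SDK image tag in the first stage and the exact paths are assumptions for the example, not the chapter's final Dockerfile, and multi-stage builds need a version of Docker that supports them (17.05 or later):

# builder stage - assumes an SDK-based image is available with this tag
FROM microsoft/dotnet:1.1-sdk-nanoserver AS builder
WORKDIR /src
COPY src/ .
RUN dotnet restore
RUN dotnet publish

# final stage - only the runtime and the published output are packaged
FROM microsoft/dotnet:1.1-runtime-nanoserver
WORKDIR /dotnetapp
COPY --from=builder /src/bin/Debug/netcoreapp1.1/publish .
CMD ["dotnet", "HelloWorld.NetCore.dll"]

Anyone with Docker can build an image like this with a plain docker image build command; the compilation happens inside the builder stage, and only the published output is copied into the final image.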