Optimizing your Microservices build by utilizing Docker build cache
The build process of Docker-based microservices can be a time-consuming task for your build server, leading to a sluggish development pace. However, by leveraging the power of the Docker build cache, you can significantly reduce the time it takes to go from code push to build completion. In this article, we will explore how the Docker build cache can streamline your microservices build process and speed up your overall development.
What is Docker Build Cache?
When running the “docker build [contextPath]” command, layers are created and cached to your local machine for each executed step in your Dockerfile. Later, when you run the build command again, those cached layers can be used instead of needing to re-run the steps in the Dockerfile. This is the essence of the docker build cache.
Cached layers can only be utilized when a certain set of conditions is met, and when a layer's cache cannot be used, ALL subsequent steps in the Dockerfile need to be rerun! Having a good understanding of those conditions will give you the best chance of optimizing your build performance.
(Docker’s documentation on build cache is great: https://docs.docker.com/build/cache/)
How do I know when layer cache was used?
Note: for brevity, some of the command output is omitted
Starting with the following Dockerfile, let’s see the build cache in action:
FROM ubuntu:20.04
COPY MyContent.txt .
ENTRYPOINT ["echo", "Hello World!"]
Now when I run “docker build .” in the directory of this Dockerfile, I see the output of it building the image:
=> [1/2] FROM docker.io/library/ubuntu:20.04
=> [2/2] COPY MyContent.txt .
And if I make no changes and run “docker build .” again, I can now see that the cached layer is used (as seen with CACHED [2/2]):
=> [1/2] FROM docker.io/library/ubuntu:20.04
=> CACHED [2/2] COPY MyContent.txt .
Excellent, now we have a way to verify whether the cache was utilized at every step of the Dockerfile!
When can/cannot a cached layer be used?
Let’s run a few experiments to figure out some of these conditions.
Experiment 1:
So what happens when we change the base image to FROM ubuntu:18.04?
=> [1/2] FROM docker.io/library/ubuntu:18.04
=> [2/2] COPY MyContent.txt .
As you would probably guess, the cache was not used.
These other scenarios have the same effect (cache not used):
Change the contents within the file MyContent.txt
Change the COPY command to *.txt
Now that we copy *.txt, rename MyContent.txt to MyContent2.txt (contents within the file unchanged)
Experiment 2:
Now what happens if we add a command:
FROM ubuntu:20.04
COPY *.txt .
RUN apt-get update
ENTRYPOINT ["echo", "Hello World!"]
And then change the order of those commands:
FROM ubuntu:20.04
RUN apt-get update
COPY *.txt .
ENTRYPOINT ["echo", "Hello World!"]
We see output from the second (after running the first):
=> [1/3] FROM docker.io/library/ubuntu:20.04
=> [2/3] RUN apt-get update
=> [3/3] COPY *.txt .
Again, the cache is not used.
Experiment 3:
Or if we change anything about the RUN command (even though it will have no effect on the resulting files it would produce):
FROM ubuntu:20.04
RUN apt-get update > /dev/null
COPY *.txt .
ENTRYPOINT ["echo", "Hello World!"]
Again, the cache is not used.
Experiment 4:
Consider the following files duplicated across two directories, where each directory also contains some other files that differ.
Dockerfile:
FROM python:3.10
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
requirements.txt:
boto3
From the first directory, we get output:
C:\Temp\Test1>docker build .
=> [1/4] FROM docker.io/library/python:3.10
=> [2/4] COPY requirements.txt .
=> [3/4] RUN pip install -r requirements.txt
=> [4/4] COPY . .
And from the second directory, we get output:
C:\Temp\Test2>docker build .
=> [1/4] FROM docker.io/library/python:3.10
=> CACHED [2/4] COPY requirements.txt .
=> CACHED [3/4] RUN pip install -r requirements.txt
=> [4/4] COPY . .
Ohhh, now that’s an interesting result! This tells us that we can utilize the cache when a package manager’s config file (in this case, requirements.txt) is identical across directories and the Dockerfile steps up to that point are identical. This holds even when the directories were cloned from different git repositories.
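To make the setup concrete, the two directories might look something like this (the .py file names are hypothetical; what matters is that the Dockerfile and requirements.txt are byte-for-byte identical):

C:\Temp\Test1\
├── Dockerfile          (identical in both directories)
├── requirements.txt    (identical in both directories)
└── app.py              (differs between directories)

C:\Temp\Test2\
├── Dockerfile
├── requirements.txt
└── worker.py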
How can I apply this to my microservices builds?
As alluded to above, package management is the first suggestion. If your microservices use the same technology stack and they have largely the same referenced packages, then you can cache the pulling of your referenced packages. As we discovered earlier, this can work even when your microservices are in different repositories. That is, as long as your package management configuration file is identical.
So what if your package management configuration file is not identical across services? Then you can use one common package management configuration file, plus a second, service-specific file that lists the packages needed in addition to the common set. Here is how that looks for a few stacks (a Dockerfile sketch follows these examples):
Python
pip install -r requirements.common.txt
(this should be cached when run across multiple services)
pip install -r requirements.serviceA.txt
(service-specific additions/overrides to common)

Node.js
cd /path/to/common/ && npm install /path/to/serviceA/
(this should be cached when run across multiple services)
cd /path/to/serviceA/ && npm install /path/to/serviceA/
(service-specific additions/overrides to common)

.NET
Keep your services in the same solution; then using the command “dotnet restore [solutionPath]” in each Dockerfile will allow subsequent service builds to utilize the cache.
I’d also highly recommend using the new Central Package Management feature.
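Putting the Python variant into a Dockerfile, a minimal sketch might look like this (the requirements file names come from the list above; the base image and final COPY are illustrative assumptions):

FROM python:3.10

# Install the common packages first; this layer is identical across
# services, so every service after the first pulls it from the cache
COPY requirements.common.txt .
RUN pip install -r requirements.common.txt

# Install the service-specific additions/overrides; only this layer
# (and everything below it) differs from one service to another
COPY requirements.serviceA.txt .
RUN pip install -r requirements.serviceA.txt

# Copy the source last, so source-only changes don't invalidate
# the package installation layers above
COPY . .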
Knowing more details about the constraints of the cache can give you insight into other improvements too! Check out (a slightly simplified version of) our .NET Dockerfile:
# Define the base of the final image first (for Visual Studio's use)
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app

# Create an intermediate image for building the solution
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src

# NuGet package management configuration
COPY global.json global.json
COPY NuGet.config ./NuGet.config

# Dotnet tools package configuration and restore
COPY .config .config
RUN dotnet tool restore

# Dotnet project files
# (minimal files needed for library package dependencies)
# NOTE: this does not preserve the original directory structure
COPY Source/**/*.*proj .

# Move the copied files back to their original structure (by convention)
RUN for file in $(ls *.*proj); \
    do mkdir -p ./Source/${file%.*}/ && \
       mv $file ./Source/${file%.*}/; \
    done

COPY Source/*.sln Source
RUN dotnet restore Source

# Copy the source files and build
COPY Source Source
RUN dotnet build --no-restore Source -c Release

# Create the final image
FROM base AS final
WORKDIR /app
COPY --from=build /src/Source/ServiceA/bin .
ENTRYPOINT ["dotnet", "ServiceA.dll"]
The last two lines are the only lines that differ from one service to another, which means our solution builds once on the first service and the remaining services use the build cache from that first build all the way down to the layer that copies into the final image!
Also, because we wait until a very late stage in the Dockerfile to copy in the actual source files, a new build that only changes source files can utilize the build cache from a previous build all the way down to the copy-source-and-build layers (the tool and library restores stay cached).
How much did this improve our build times?
By optimizing our Dockerfile build process and leveraging the Docker build cache, we reduced our microservices build times from approximately 16 minutes to just 4 minutes. Although this may not seem significant for a single build, the time savings add up quickly when builds are verified continuously with every push to a pull request.
So, just remember: a build cache layer (and all subsequent cached layers) will NOT be used when:
The base image changes
A command in the Dockerfile changes
The order of the commands in the Dockerfile changes
The contents of local files change (on the COPY step that includes the changed files)
New files are added or existing files are deleted (on their COPY step)
And from that we came up with these recommendations:
Use identical package management configuration files across services
Make your Dockerfiles (per service) as identical as possible
The only differences in your Dockerfiles should be as far down in the file as possible
COPY only the minimal amount of files necessary to run the next command (see the sketch after this list)
Periodically clean up the Docker cache with the command “docker builder prune -a” (but not too aggressively - not after every build - weekly or monthly works well)
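To illustrate the COPY recommendation from the list above, here is a minimal before-and-after sketch of two Dockerfile fragments (Python-flavored, mirroring the earlier experiment):

# Cache-unfriendly: any source file change invalidates this COPY layer,
# forcing the expensive package install below to rerun
COPY . .
RUN pip install -r requirements.txt

# Cache-friendly: the install layer depends only on requirements.txt,
# so source-only changes leave it (and everything above it) cached
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .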
By implementing these recommendations, you can optimize your microservices build process and improve your overall development speed.