Container image construction with Kaniko

Yannig Perré
9 min read · Dec 27, 2019

Disclaimer: what follows is feedback on using Kaniko compared to my experience with Docker. If you see any errors or inconsistencies, please let me know.

My team recently started a new project. From the start, we chose to use GitLab workers deployed in Kubernetes to build our images.

Previously, these builds relied on the Docker daemons of our Kubernetes nodes. To do so, we exposed the daemon's control socket (/var/run/docker.sock) to the pod in charge of the build so that it could launch the construction.
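
For illustration, the same idea can be reproduced with plain Docker outside Kubernetes (the my-image tag is a placeholder; in our case the socket was exposed to the GitLab runner pod rather than via docker run):

$ docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD:/workspace -w /workspace \
docker:19.03 \
docker build -t my-image .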

However, this approach has several drawbacks:

  • Docker is used for builds even though its presence is not strictly required (another container runtime could do the job)
  • Kubernetes resource allocation is bypassed (not so good)
  • A security flaw is introduced by manipulating the underlying Docker daemon directly (not so good either)
  • GitLab workers are, by default, incompatible with this mechanism

To address these problems, we started experimenting with Kaniko. This tool builds container images from a Dockerfile while running entirely inside a container (in a Kubernetes cluster), and you can, in principle, keep your existing Dockerfile.

In what follows, we will try to quantify the difference and look at the workarounds and optimizations we used.

Test image

For the first tests, we will use the following Dockerfile:

FROM node:12 AS build
RUN echo "hello" > /result.txt
RUN cat /result.txt
FROM node:12
COPY --from=build /result.txt /

It is a multi-stage build which reproduces the following operations:

  • Building the artifacts in a build image
  • Making the build result available in a minimal image

This technique is used to compile programs in Docker without weighing the final image down with a compiler, and without leaving tools lying around that could raise security concerns.

Docker compilation test

First, go to the directory containing the Dockerfile and preload the node:12 image using the following command:

$ docker pull node:12

Now start building the image, prefixing the command with time to measure how long it takes:

$ time docker build .

This operation should take about 3 to 5 seconds:

Sending build context to Docker daemon 24.58kB
Step 1/5 : FROM node:12 AS build
 ---> 5ad9a7363da8
Step 2/5 : RUN echo "hello" > /result.txt
 ---> Running in 0cab570944f1
Removing intermediate container 0cab570944f1
 ---> 412ad7de7cc5
Step 3/5 : RUN cat /result.txt
 ---> Running in 5a4c71d110ee
hello
Removing intermediate container 5a4c71d110ee
 ---> 888bdd7fded5
Step 4/5 : FROM node:12
 ---> 5ad9a7363da8
Step 5/5 : COPY --from=build /result.txt /
 ---> 00291feb766e
Successfully built 00291feb766e
real 0m3,131s
user 0m0,044s
sys 0m0,020s

Start the compilation a second time:

$ time docker build .

This time, the operation should take between 200 and 500 ms:

Sending build context to Docker daemon 33.28kB
Step 1/5 : FROM node:12 AS build
 ---> 5ad9a7363da8
Step 2/5 : RUN echo "hello" > /result.txt
 ---> Using cache
 ---> 412ad7de7cc5
Step 3/5 : RUN cat /result.txt
 ---> Using cache
 ---> 888bdd7fded5
Step 4/5 : FROM node:12
 ---> 5ad9a7363da8
Step 5/5 : COPY --from=build /result.txt /
 ---> Using cache
 ---> 00291feb766e
Successfully built 00291feb766e
real 0m0,198s
user 0m0,011s
sys 0m0,041s

The difference in execution time is explained by the Docker build cache, which avoids re-running steps that have not changed (visible in the lines containing the message ---> Using cache).

Compilation with Kaniko

Now that we know how long Docker takes to build this image, let's do the same with Kaniko.

For that, we can launch Kaniko using Docker with the following image: gcr.io/kaniko-project/executor:v0.15.0.

Note: by default, there is no shell in this image. If you need one (as is the case with GitLab, for example), you can use the debug image (gcr.io/kaniko-project/executor:debug-v0.15.0).
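
If I recall correctly, the debug image ships a busybox shell at /busybox/sh, so something like this should give you an interactive prompt to poke around:

$ docker run -it --rm --entrypoint /busybox/sh \
gcr.io/kaniko-project/executor:debug-v0.15.0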

To launch it, we must use the following options with the docker command:

  • Launch a container with run
  • Allocate an interactive terminal with -it
  • Remove the container on exit with --rm
  • Finally, give the Kaniko image name (gcr.io/kaniko-project/executor:v0.15.0)

Here is the complete command to launch:

$ docker run -it --rm gcr.io/kaniko-project/executor:v0.15.0

This command returns the following usage message:

Usage:
  executor [flags]
  executor [command]

Available Commands:
  help        Help about any command
  version     Print the version number of kaniko
...

To start building the image, we will mount the current directory ($PWD) into a directory of the container (/workspace). For this purpose, use the -v option followed by the path to mount and the destination, separated by a colon (:).

Below is the same command with the current directory mounted in the container:

$ docker run -it --rm -v $PWD:/workspace \
gcr.io/kaniko-project/executor:v0.15.0

To build the image, add the following options:

  • The workspace location with the -c option (-c /workspace)
  • The --no-push option to indicate that we will not push the image anywhere

Replace the --no-push option with -d IMAGE_NAME:tag or --destination IMAGE_NAME:tag to push the result to an image registry.

The complete command to launch will therefore be as follows:

$ time docker run -it --rm -v $PWD:/workspace \
gcr.io/kaniko-project/executor:v0.15.0 \
-c /workspace --no-push

When we use the time command, it takes about 80 seconds to execute…

Here is an extract of the result:

INFO[0000] Resolved base name node:12 to node:12         
INFO[0000] Resolved base name node:12 to node:12
INFO[0000] Resolved base name node:12 to node:12
INFO[0000] Resolved base name node:12 to node:12
INFO[0000] Downloading base image node:12
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:11a46719b3aa4314c39eb41e7638308c9c1976fac55bfe2a6cbb0aa7ab56a95f: no such file or directory
INFO[0001] Downloading base image node:12
INFO[0002] Downloading base image node:12
INFO[0003] Error while retrieving image from cache: getting file info: stat /cache/sha256:11a46719b3aa4314c39eb41e7638308c9c1976fac55bfe2a6cbb0aa7ab56a95f: no such file or directory
INFO[0003] Downloading base image node:12
INFO[0004] Built cross stage deps: map[0:[/result.txt]]
INFO[0004] Downloading base image node:12
INFO[0004] Error while retrieving image from cache: getting file info: stat /cache/sha256:11a46719b3aa4314c39eb41e7638308c9c1976fac55bfe2a6cbb0aa7ab56a95f: no such file or directory
INFO[0004] Downloading base image node:12
INFO[0005] Unpacking rootfs as cmd RUN echo "hello" > /result.txt requires it.
INFO[0139] Taking snapshot of full filesystem...
INFO[0143] RUN echo "hello" > /result.txt
INFO[0143] cmd: /bin/sh
INFO[0143] args: [-c echo "hello" > /result.txt]
INFO[0143] Taking snapshot of full filesystem...
INFO[0144] RUN cat /result.txt
INFO[0144] cmd: /bin/sh
INFO[0144] args: [-c cat /result.txt]
hello
INFO[0144] Taking snapshot of full filesystem...
INFO[0146] No files were changed, appending empty layer to config. No layer added to image.
INFO[0146] Saving file /result.txt for later use.
INFO[0146] Deleting filesystem...
INFO[0147] Downloading base image node:12
INFO[0148] Error while retrieving image from cache: getting file info: stat /cache/sha256:11a46719b3aa4314c39eb41e7638308c9c1976fac55bfe2a6cbb0aa7ab56a95f: no such file or directory
INFO[0148] Downloading base image node:12
INFO[0149] Unpacking rootfs as cmd COPY --from=build /result.txt / requires it.
INFO[0284] Taking snapshot of full filesystem...
INFO[0286] COPY --from=build /result.txt /
INFO[0286] Taking snapshot of files...
INFO[0286] Skipping push to container registry due to --no-push flag

Why does it take so long?

The way Kaniko works is very different from Docker. Docker relies heavily on the images already present on the machine.

Kaniko has no access to this local layer store of the Docker daemon. It must therefore retrieve the content from the Docker registry, which inevitably adds time and network consumption.

What's more, the Docker image we use (node:12) takes up a lot of space: about 900 MB decompressed.

To make matters worse, in addition to downloading this image, Kaniko must also extract the image locally before it can start working.

All of these operations slow down the process and take time.

To reduce the build time, we will test several techniques.

Ways to reduce compilation times

In order to reduce compilation times, several techniques can be used:

  • Using a base image with a reduced footprint (an Alpine variant, for example)
  • Merging RUN calls
  • Deleting local caches after the build
  • Activating a cache (based on a Docker image registry)

Small footprint image (aka Alpine flavor)

First step: try to reduce the size of the images used. Launch the following command to check the size of the node:12 image:

$ docker image ls node:12

This command should return the following information:

REPOSITORY   TAG   IMAGE ID       CREATED       SIZE
node         12    7be6a8478f5f   2 weeks ago   908MB

This image takes about 900 MB (!?!).

Now, the same command for the node:12-alpine image:

$ docker image ls node:12-alpine

Below is the information returned:

REPOSITORY   TAG         IMAGE ID       CREATED       SIZE
node         12-alpine   3fb8a14691d9   2 weeks ago   80.2MB

The Alpine flavour takes 10 times less space than the “standard” image!

To use this image, replace node:12 with node:12-alpine in the Dockerfile:

FROM node:12-alpine AS build 
RUN echo "hello" > /result.txt
RUN cat /result.txt

FROM node:12-alpine
COPY --from=build /result.txt /

Restart Kaniko:

$ time docker run -it --rm -v $PWD:/workspace \
gcr.io/kaniko-project/executor:v0.15.0 \
-c /workspace --no-push

The same operation now takes about 14 seconds.

Merging RUN calls

To understand the problem, let's take an example where simple commands are launched one after another in an Alpine image:

FROM alpine
RUN echo command1
RUN echo command2
RUN echo command3

With Kaniko, this build takes about 10 seconds.

Let’s try an optimized version to see the possible gains:

FROM alpine
RUN echo command1 && \
echo command2 && \
echo command3

This time, the build takes about 7 seconds.

This gain is explained by the fact that Kaniko no longer needs to snapshot and inspect the filesystem between each command.

Delete local caches in the image

When building an image (for example when fetching Yarn, npm, Gradle or Maven dependencies), the process can quickly end up downloading several thousand files (notably the contents of the node_modules directory for JavaScript dependencies).

The build is therefore cluttered with files that can take up several hundred MB, which has two consequences:

  • it slows down the build process
  • it increases the size of the resulting images

To reduce this impact, cache directories should be deleted (rm -rf node_modules for JavaScript libraries) and package manager caches cleaned (apt-get clean, for example, when installing packages with apt).

Another important point: these cleanup commands must be chained with the build commands in the same RUN instruction (as seen above). If this is not done, Kaniko will store the intermediate layers and increase the size of the resulting images.

Below is an example of a RUN statement that meets these guidelines:

RUN yarn --frozen-lockfile && \
yarn build && \
yarn cache clean && \
rm -rf node_modules
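
The same principle applies to system packages. For a Debian-based image, a sketch (build-essential is only an example package) could look like this:

RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*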

Cache using Docker registry

Last way to save time: the use of a cache.

Unlike Docker, which stores the different build steps locally, Kaniko has no way of doing so and loses its context each time.

To keep the result of the operations performed, we can use a Docker registry as a cache. This is activated using the following options:

  • Cache activation with the --cache option
  • Optional specification of a registry location that will serve as the cache (--cache-repo option)

The second option is not mandatory but allows you to specify a different location to store the intermediate layers that will serve as an execution cache.

Note that the cache is not compatible with the --no-push option. You have to specify the name of the image with the -d IMAGE_NAME:tag option.
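
As a sketch, assuming you are already authenticated against the registry (here by mounting local Docker credentials, a trick detailed just below) and with the registry.example.com names being placeholders, the invocation would look something like this:

$ time docker run -it --rm -v $PWD:/workspace \
-v $HOME/.docker:/kaniko/.docker \
gcr.io/kaniko-project/executor:v0.15.0 \
-c /workspace --cache \
--cache-repo registry.example.com/cache \
-d registry.example.com/my-image:latest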

The only difficulty comes from the fact that you need the rights to push to this registry and therefore to be authenticated. Unfortunately, Kaniko has no option for performing this authentication. To work around the problem, you must populate the /kaniko/.docker/config.json file in the Kaniko container.

Here is an example of a GitLab job that does this:

my-build:
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.15.0
    entrypoint: [""]
  before_script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
  script:
    - /kaniko/executor --cache -c $CI_PROJECT_DIR -d $CI_REGISTRY_IMAGE/my-image:$CI_COMMIT_REF_NAME

This job starts by generating a Docker configuration file allowing Kaniko to connect to the Docker image registry.
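
Once the CI variables are expanded, the generated /kaniko/.docker/config.json ends up looking something like this (the registry name and credentials shown here are placeholders):

{
  "auths": {
    "registry.example.com": {
      "username": "gitlab-ci-token",
      "password": "********"
    }
  }
}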

If you are already logged in locally (with docker login), you can mount your ~/.docker directory into the Kaniko container. This is done with the -v $HOME/.docker:/kaniko/.docker option.

Here is an example showing the complete command to launch:

$ docker run -it --rm -v $PWD:/workspace \
-v $HOME/.docker:/kaniko/.docker \
gcr.io/kaniko-project/executor:v0.15.0 \
-c /workspace --no-push

Going further

Overall, Kaniko is still young and suffers from a few flaws:

  • Slow builds (compared to Docker)
  • Behaviour differences and bugs compared to Docker (especially with cache management in version 0.14)

The good news is that things are improving (the cache behaviour is fixed in 0.15) and the differences are shrinking.

Even if everything is not perfect, it can be a good opportunity to rethink how your images are built and to get rid of long-standing cruft.

Yannig Perré

A sysadmin in France for many years, I work at AIOS (http://aios.sh). I'm a big fan of Prometheus, Kubernetes and Ansible.