Things no one tells you about.

One of Docker’s killer features is environment parity, yet it feels like one
little detail was left untold: how to handle configuration files.

Unless you use the same configuration across development, QA, production and so
on, you will end up with different endpoints, API keys, secret tokens and
feature switches for each environment.

Available Options

There are several ways to handle configuration in Docker. Below you will find a
non-exhaustive list.

Bake into the image

The simplest way is to ignore all the complexity and bake the configuration
files into the image, by adding them explicitly in the Dockerfile:

COPY secrets.yml /var/www/config/secrets.yml

This approach could be compared to committing secrets to a public
repository.
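
To see how exposed this is: anyone who can pull the image can read the file
straight out of it. A quick check (assuming the image is tagged myapp, as in
the later examples, and ships a shell):

$ docker run --rm myapp cat /var/www/config/secrets.yml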

Environment Variables

Following the twelve-factor methodology (factor III, Config), configuration
should be set using environment variables:

$ docker run --env secret=foo --env othersecret=bar myapp

This approach lacks expressiveness, and the values can easily leak into logs.
If you commit the running container, you are back to scenario 1. Several
articles have been written on why environment variables are a bad idea for
storing secrets.
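
As a quick illustration of the leak: anything passed with --env is stored in
clear text in the container metadata, so anyone with access to the Docker API
can read it back (the container name here is illustrative):

$ docker inspect --format '{{ .Config.Env }}' myapp-container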

Bind mount

Mounting a host directory/file directly in the container:

$ docker run -v /config/config.yml:/var/www/config.yml:ro myapp

The configuration files are mounted from the host server, probably copied there
using a configuration management tool like Chef or Puppet.

This approach is the most similar to pre-Docker days: issues like rolling back
configuration and keeping it up to date still persist.

With this approach, the Docker deployment process (besides running the
containers) must also ensure that the right configuration is available and up
to date.

Kubernetes uses a similar
solution.
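
A hedged sketch of the Kubernetes flavour (names are illustrative): the file is
stored in a Secret or ConfigMap and then mounted into the pod as a read-only
volume, much like the bind mount above.

$ kubectl create secret generic app-config --from-file=config.yml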

One possible variation is to use tools like
ejson or
blackbox. With these tools, encrypted
configuration files are committed alongside the code, and during container boot
they are decrypted using a bind-mounted secret key.
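
A minimal entrypoint sketch of the ejson variant (paths and file names are
assumptions; it expects the private key to be bind mounted into ejson's default
key directory, /opt/ejson/keys):

#!/bin/sh
# Decrypt the committed secrets using the bind-mounted key,
# then hand control over to the application.
ejson decrypt /var/www/config/secrets.ejson > /var/www/config/secrets.json
exec "$@"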

Data Volumes

Instead of mounting a host file directly, another solution is to load the
configuration from a dedicated data container:

$ docker run --volumes-from app-conf myapp

This approach usually relies on an entry-point script that copies the
configuration into the right location before starting the application. This
solution is the simplest in a rollback situation, although it is basically the
same as scenario 1.
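
As a hedged sketch (image name and paths are assumptions), such a configuration
container can be built by baking the files into a tiny image that declares a
volume:

FROM alpine
COPY config.yml /var/www/config/config.yml
VOLUME /var/www/config

$ docker build -t app-conf-image .
$ docker create --name app-conf app-conf-image
$ docker run --volumes-from app-conf myapp

The application container then sees /var/www/config/config.yml through the
shared volume, which is why the secrets still end up baked into an image, just
a separate one from the application's.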

Key-Value Stores

Using a key-value store, it is possible to have a single source of truth for
configuration, and a couple of solutions are available.

The Vault-based solution is the most interesting one. The simplest way is to
use Vault with consul-template, while the
most advanced scenarios require rewriting the application to use native client
libraries (which introduces a
certain degree of coupling).
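
A rough sketch of the consul-template route (the template contents, secret path
and output location are assumptions; the Vault address and token come from
flags or the VAULT_ADDR/VAULT_TOKEN environment variables). A template such as

{{ with secret "secret/myapp" }}api_key: {{ .Data.api_key }}{{ end }}

is rendered into the application's configuration file before the process
starts:

$ consul-template \
    -template "secrets.yml.ctmpl:/var/www/config/secrets.yml" \
    -exec "/usr/bin/myapp"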

However, with this approach developers will also need to run a Vault container
locally, plus some bootstrap process to load all the needed configuration.

Conclusion

The approaches that are most interesting from an operational point of view are
relatively complex for developers' everyday use and end up sacrificing
environment parity.

How are you shipping configuration? Are you using a different approach?