Type Book
Date 2020-05
Pages 231
Tags nonfiction

Docker Deep Dive

Part 1: The big picture stuff

1: Containers from 30,000 feet

Containers are similar to extra-slim VMs. They run apps directly on the host OS kernel, using modern kernel features (namespaces, cgroups) to provide separation between containers (and the host). Importantly, this means that Linux apps need to run on a Linux host kernel--there is no virtualized Linux OS inside Docker.

2: Docker

The docker daemon, dockerd, communicates with containerd to run containers. Each individual instance of a container is started by an instance of runc.

Orchestrating multiple containers needs a higher-level tool. Docker has an option called Swarm. Google has Kubernetes, which performs a similar function.

3: Installing Docker

Docker Desktop on Windows needs Windows 10 Pro (for Hyper-V).

4: The big picture

  • List running containers: docker container ls
  • Run a command in a container: docker container run -it ubuntu:latest /bin/bash
  • Detach from a container, leaving it running: C-p C-q
  • Run a command on a running container: docker container exec -it boring_yonath bash
  • Stop a container: docker container stop boring_yonath
  • Delete a container: docker container rm boring_yonath
  • Build an image: docker image build -t test:latest .
  • Run an image, mapping ports: docker container run -d --name web1 --publish 8080:8080 test:latest
    • The -d option runs the container detached (in the background), rather than attaching to the terminal.
    • The port format is HOST:CONTAINER, mapping a port on the host to a port inside the container.

Part 2: The technical stuff

5: The Docker Engine

When a container is prepared and run by runc, its parent is set to a shim process and runc terminates when it has done its work. This allows for docker to be updated without disturbing running containers.

dockerd receives commands from the client over a REST API (typically on a local socket, though it can be exposed to the network; default port 2375, or 2376 with TLS).

There's an extended explanation of how to set up the client and server components to require TLS, including setting up a CA and generating and signing keys. Certainly valuable, but I'd have put this in an appendix.

6: Images

Docker images are made of multiple layers stacked on top of each other. So files on a higher layer can override files from lower layers. Different images can, and often do, share layers.

Images can be built for multiple platforms, and docker will automatically pull the correct version for the host.

  • specify images to pull by digest: docker image pull alpine@sha256:9a839e...
    • Images and layers are ultimately identified by content hash.
  • display image digests: docker image ls --digests
  • details on layers, etc: docker image inspect
  • remove images: docker image rm alpine:latest
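The "identified by content hash" point can be illustrated without Docker at all. A toy sketch using sha256sum (plain files standing in for layer tarballs; not Docker's actual tooling): identical content always yields the same digest, and any change yields a completely different one.

```shell
# Two "layers" with identical content get the same digest;
# flipping one byte changes the digest entirely.
printf 'layer content' > a.tar
printf 'layer content' > b.tar
printf 'layer Content' > c.tar
sha256sum a.tar b.tar c.tar
```

This is why pulling by digest is immutable in a way that pulling by tag is not: a tag can be repointed at new content, but a digest names exactly one blob of bytes.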

7: Containers

A container is the runtime instance of an image. One image can be instantiated into multiple containers.

A major benefit of containers vs. VMs is that isolated applications don't each require installing and running a full copy of an OS.

However, the level of isolation granted by containers is not as great as that granted by VMs, so the problem of security remains. There's a chapter on the subject later in this book.

Stopping a container with docker container stop sends the process inside a SIGTERM, then, 10 seconds later, a SIGKILL. Doing docker container rm -f sends a SIGKILL immediately.
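Since docker container stop leaves 10 seconds between SIGTERM and SIGKILL, the containerized process should actually handle SIGTERM (as PID 1 in a container it gets no default handlers). A minimal sketch of such an entrypoint, exercised outside Docker (filenames and messages invented for illustration):

```shell
# Write a toy entrypoint that traps SIGTERM for a clean shutdown,
# run it in the background, and stop it the way `docker stop` would.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
cleanup() { echo "caught SIGTERM, shutting down cleanly"; exit 0; }
trap cleanup TERM
echo "app running"
while :; do sleep 1; done
EOF
sh entrypoint.sh > app.log &
pid=$!
sleep 1
kill -TERM "$pid"   # what `docker container stop` sends first
wait "$pid"
cat app.log
```

Without the trap, the loop would ignore SIGTERM as PID 1 and the container would only die at the SIGKILL deadline.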

8: Containerizing an app

A sample Dockerfile:

FROM alpine
LABEL maintainer="example@example.com"
RUN apk add --update nodejs nodejs-npm
COPY . /src
RUN npm install
ENTRYPOINT ["node", "./app.js"]

This creates an image with four layers: the alpine base layer, plus a new layer for each RUN and COPY instruction.

  • build an image: docker image build -t web:latest .
  • add a tag: docker image tag web:latest foo/web:latest
  • push an image: docker image push foo/web:latest

Images can be built in multiple stages, copying only necessary components to the final stage, in order to avoid filling production images with temporary files, build tools, etc. Sample Dockerfile:

FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build

FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests

FROM java:8-jdk-alpine
RUN adduser -Dh /home/gordon gordon
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]

Docker maintains a cache of layers built during these intermediate steps, and if you attempt to build the image again, it will reuse the cached layers rather than re-executing the build instructions. It will compare checksums of files copied during COPY commands to ensure that the contents of the files have not changed since the cache was created. If anything has changed, the cache will not be used for the remainder of the build.
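This cache behavior is why instruction order matters. A common pattern (a sketch for a Node app along the lines of the ch.8 Dockerfile, assuming package.json lists the dependencies): copy the dependency manifest and install before copying the rest of the source, so app-code edits don't invalidate the cached npm install layer.

```dockerfile
FROM alpine
RUN apk add --update nodejs nodejs-npm
# Dependencies change rarely: give them their own cacheable layers.
WORKDIR /src
COPY package.json .
RUN npm install
# App code changes often: copying it last means an edit only
# invalidates the layers from here down.
COPY . .
ENTRYPOINT ["node", "./app.js"]
```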

9: Deploying Apps with Docker Compose

Docker Compose supports deploying multi-container applications on Docker nodes running in single-engine mode.

Docker Compose defines multi-service applications in a YAML file, docker-compose.yml. Sample:

version: "3.8"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      - counter-net
networks:
  counter-net:
volumes:
  counter-vol:


In the example above, the volume counter-vol is mounted inside the web-fe container at /code. The Dockerfile for web-fe copies the application into /code, so the application is stored in, and run from, the volume. As a result, the code can be edited on the volume, from outside the container, and the changes will be reflected inside the container.


This points to how one might develop an app running inside a container, but this is clearly not an ergonomic way to effect immediate changes during development. One assumes that there is an alternative available.

Answer: you can just point the volume at the current directory, rather than copying stuff manually. Don't know why the book goes through the trouble.
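A sketch of that bind-mount approach in the compose file (service name taken from the example above; the mount type changes from volume to bind and the source becomes the project directory):

```yaml
services:
  web-fe:
    build: .
    volumes:
      - type: bind
        source: .        # project directory on the host
        target: /code    # path the app runs from in the container
```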

10: Docker Swarm

A docker swarm is a group of nodes that run services. A node is simply a server with docker installed, and each node may be either a manager or a worker.

  • Create a swarm on a manager node: docker swarm init --advertise-addr IP:PORT --listen-addr IP:PORT
  • Get a token to join the swarm: docker swarm join-token worker or docker swarm join-token manager
  • Join using those tokens: docker swarm join --token SWMTKN-1-0uahebax... --advertise-addr IP:PORT --listen-addr IP:PORT
  • List nodes in the swarm: docker node ls

A swarm should have an odd number of managers, preferably 3 or 5, in order to better handle netsplits: with 2n+1 managers, quorum survives n failures, so an even count adds no extra fault tolerance.

For security, lock a swarm with docker swarm update --autolock=true. Restarted or rejoining managers must then present the unlock key.

  • Create a service: docker service create --name web-fe -p 8080:8080 --replicas 5 foo/web
    • This tells the swarm to create 5 replicas of the foo/web service, named web-fe and with port 8080 forwarded.
    • The swarm manager will aim to keep the swarm in this desired state. If a worker goes down, it will direct a new replica to be started.
  • List service replica information: docker service ps web-fe
  • Detailed information: docker service inspect --pretty web-fe
  • Alter the desired number of replicas: docker service scale web-fe=10
  • Update a service: docker service update --image foo/web:v2 --update-parallelism 2 --update-delay 20s foo-svc
    • This updates the service foo-svc with tag v2 of the image foo/web. It will push to 2 replicas at a time with 20s between waves.

The chapter describes how to back up and restore swarm data (stop the service and tar it up!).

11: Docker Networking

The Container Network Model (CNM) spec describes Docker networking, as implemented by libnetwork.

The three major components of Docker networking are sandboxes, endpoints, and networks. A sandbox is an isolated network stack, including the interfaces, routing tables, etc. The endpoints are the virtual interfaces that connect the sandbox to the network. And a network is a group of endpoints that can communicate.

libnetwork implements these networking concepts, as well as service discovery and load balancing.

  • docker network ls
  • docker network inspect NAME
  • docker network create -d bridge NAME
    • on Windows, use the nat driver rather than bridge
  • docker network create -d overlay NAME
  • docker container run -d --network NAME COMMAND
  • docker network prune
    • deletes all unused networks on the host
  • docker network rm

The default bridge network doesn't support automatic name resolution for attached containers, but user-created networks do.

12: Docker overlay networking

Given a swarm, create an overlay network on the manager: docker network create -d overlay NAME. The workers will be given access to the network when they are running services that are attached to it. If standalone containers will need access to the network, give it the --attachable flag on creation.

The chapter goes into some detail about how the networking is accomplished.

13: Volumes and persistent data

  • docker volume create NAME
    • by default, this uses the local driver, which makes the volume available only to containers on the same node
  • docker volume create -d overlay2 NAME
    • other third-party drivers are available to enable different storage solutions
  • docker volume prune
    • deletes all volumes that are not mounted on a container or service
  • docker volume rm
  • docker volume ls
  • docker volume inspect
  • docker plugin install
    • install plugins (auth, logging, network, or volume) from Docker Hub
  • docker plugin ls

Data corruption is a possibility with shared storage, as usual. The book instructs "you need to write your applications in a way to avoid things like this", so one assumes that there are no special mitigations in place in Docker.
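For the single-host case, one conventional application-side mitigation is advisory locking, so only one writer touches the shared data at a time. A toy sketch with flock(1) from util-linux (filenames invented; this only coordinates cooperating processes on one host, not containers on different nodes sharing network storage):

```shell
# Each writer takes an exclusive advisory lock on shared.lock
# before appending, so concurrent writers serialize cleanly.
: > shared.log
for i in 1 2 3; do
  flock shared.lock -c "echo writer-A-$i >> shared.log" &
  flock shared.lock -c "echo writer-B-$i >> shared.log" &
done
wait
wc -l < shared.log
```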

14: Deploying apps with Docker Stacks

Docker Stacks offers useful deployment tools. A section of a sample config:

  image: dockersamples/atsea_app
  networks:
    - front-tier
    - back-tier
    - payment
  deploy:
    replicas: 2
    update_config:
      parallelism: 2
      failure_action: rollback
    placement:
      constraints:
        - 'node.role == worker'
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s
  secrets:
    - postgres_password

  • docker secret create SECRET_NAME FILE_NAME
  • echo foo | docker secret create SECRET_NAME -
    • secrets available on a service will be mounted at /run/secrets/SECRET_NAME as a regular file
  • docker stack deploy -c STACK_FILE STACK_NAME
  • docker stack ls
  • docker stack ps STACK_NAME
  • docker stack rm


Can secrets be specified directly in the stack file (e.g. for fake dev-mode secrets)?
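Partial answer, hedged (from the v3.1+ Compose file format, as I recall it): secret values can't be inlined in the stack file, but a top-level secret can point at a local file, which works for throwaway dev secrets (path invented):

```yaml
secrets:
  postgres_password:
    file: ./devsecrets/postgres_password   # file contents become the secret
```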

15: Security in Docker

  • Docker Content Trust (DCT)
    • force the host to verify that images are signed


To look into:
  • how to set up DCT
  • how to configure the other security stuff, too

16: What next

Name Role
Nigel Poulton Author


Relation Sources
  • Docker (2013-03-20)