Containers are similar to extra-slim VMs. They run apps on the host OS kernel, using modern kernel features to provide separation between containers (and from the host). Importantly, this means that, e.g., Linux apps need to run on a Linux host; there is no virtualized Linux OS inside Docker.
The Docker daemon, dockerd, communicates with containerd to run containers. Each individual instance of a container is started by an instance of runc.
Orchestrating multiple containers needs a higher-level tool. Docker has an option called Swarm. Google has Kubernetes, which performs a similar function.
Docker Desktop on Windows needs Windows 10 Pro.
docker container ls
docker container run -it ubuntu:latest /bin/bash
docker container exec -it boring_yonath bash
docker container stop boring_yonath
docker container rm boring_yonath
docker image build -t test:latest .
docker container run -d --name web1 --publish 8080:8080 test:latest
The -d option runs the container detached (in the background), rather than attaching it to the terminal.
The --publish argument is HOST:CONTAINER, mapping a port on the host to a port inside the container.
When a container is prepared and run by runc, its parent is set to a shim process and runc terminates when it has done its work. This allows Docker to be updated without disturbing running containers.
dockerd receives commands from the client over a REST API (typically on a local socket, though it can be exposed to the network; default port 2375, TLS port 2376).
There's an extended explanation of how to set up the client and server components to require TLS, including setting up a CA and generating and signing keys. Certainly valuable, but I'd have put it in an appendix.
Docker images are made of multiple layers stacked on top of each other. So files on a higher layer can override files from lower layers. Different images can, and often do, share layers.
Images can be built for multiple platforms, and Docker will automatically pull the correct version for the host.
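A quick way to see the layer stack of an image (a sketch; requires a running Docker daemon):

```shell
# Pull an image and list the layers it is built from.
docker image pull alpine:latest
docker image inspect --format '{{json .RootFS.Layers}}' alpine:latest
# Each entry is the SHA256 digest of one layer; images that share a base
# will show identical digests for the shared layers.
docker image history alpine:latest   # the instruction that created each layer
```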
docker image pull alpine@sha256:9a839e...
docker image ls --digests
docker image inspect
docker image rm alpine:latest
A container is the runtime instance of an image. One image can be instantiated into multiple containers.
A major benefit of containers over VMs is that isolated applications no longer require installing, and paying the cost of running, multiple copies of an OS.
However, the level of isolation granted by containers is not as great as that granted by VMs, so the problem of security remains. There's a chapter on the subject later in this book.
Stopping a container with docker container stop sends the process inside a SIGTERM, then a SIGKILL after 10 seconds if it hasn't exited. docker container rm -f sends a SIGKILL immediately.
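The grace period before the SIGKILL can be tuned with the stop command's -t/--time flag; a sketch (container name carried over from the examples above):

```shell
# Send SIGTERM, then wait up to 30 seconds before escalating to SIGKILL:
docker container stop -t 30 boring_yonath
# Skip SIGTERM entirely; the container is killed and removed immediately:
docker container rm -f boring_yonath
```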
A sample Dockerfile:

FROM alpine
LABEL maintainer="email@example.com"
RUN apk add --update nodejs nodejs-npm
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 8080
ENTRYPOINT ["node", "./app.js"]
This creates an image with four layers: the base layer, alpine, and a new layer created for each RUN and COPY instruction.
docker image build -t web:latest .
docker image tag web:latest foo/web:latest
docker image push foo/web:latest
Images can be built in multiple stages, copying only necessary components to the final stage, in order to avoid filling production images with temporary files, build tools, etc. Sample Dockerfile:
FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build

FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests

FROM java:8-jdk-alpine
RUN adduser -Dh /home/gordon gordon
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]
Docker maintains a cache of layers built during these intermediate steps, and if you attempt to build the image again, it will reuse the cached layers rather than re-executing the build instructions. It will compare checksums of files copied during COPY commands to ensure that the contents of the files have not changed since the cache was created. If anything has changed, the cache will not be used for the remainder of the build.
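Two practical consequences of the cache, sketched below (the --no-cache flag is standard docker CLI; the Dockerfile ordering shown in comments is a common convention, not from the book's example):

```shell
# Force a full rebuild, ignoring all cached layers:
docker image build --no-cache -t web:latest .

# Because a changed COPY invalidates every later step, Dockerfiles often
# copy dependency manifests first so the expensive install layer stays cached:
#   COPY package.json .
#   RUN npm install
#   COPY . .
```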
Docker Compose supports deploying multi-container applications on Docker nodes running in single-engine mode.
Docker Compose defines multi-service applications in a YAML file, typically named docker-compose.yml.
version: "3.8"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      counter-net:
networks:
  counter-net:
volumes:
  counter-vol:
In the example above, the volume counter-vol is mounted inside the web-fe container at /code. The Dockerfile for web-fe copies the application into /code, so the application is stored in, and run from, the volume. As a result, the code can be edited on the volume, from outside the container, and the changes will be reflected inside the container.
This points to how one might develop an app running inside a container, but this is clearly not an ergonomic way to effect immediate changes during development. One assumes that there is an alternative available.
Answer: you can just bind-mount the current directory into the container, rather than copying stuff manually. Don't know why the book goes through the trouble.
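A sketch of that alternative (the image name web-fe-image is a placeholder, not from the book):

```shell
# Bind-mount the host's working directory at /code, so edits on the host
# are immediately visible inside the container:
docker container run -d --name web-fe \
  --mount type=bind,source="$(pwd)",target=/code \
  web-fe-image
```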
A docker swarm is a group of nodes that run services. A node is simply a server with docker installed, and each node may be either a manager or a worker.
docker swarm init --advertise-addr 10.0.0.1:2377 --listen-addr 10.0.0.1:2377
docker swarm join-token worker
or
docker swarm join-token manager
docker swarm join --token SWMTKN-1-0uahebax... 10.0.0.1:2377 --advertise-addr 10.0.0.2:2377 --listen-addr 10.0.0.2:2377
docker node ls
A swarm should have an odd number of managers, preferably 3 or 5, in order to better handle netsplits.
For security, lock a swarm with docker swarm update --autolock=true. Now managers joining the swarm must present the unlock key.
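The unlock workflow, sketched (both subcommands are standard docker CLI):

```shell
# Print the current unlock key (run on an unlocked manager); store it safely:
docker swarm unlock-key
# After a locked manager restarts, it must be unlocked before it rejoins;
# this prompts for the key:
docker swarm unlock
```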
docker service create --name web-fe -p 8080:8080 --replicas 5 foo/web
This creates a service named web-fe, with 5 replicas and with port 8080 forwarded.
docker service ps web-fe
docker service inspect --pretty web-fe
docker service scale web-fe=10
docker service update --image foo/web:v2 --update-parallelism 2 --update-delay 20s foo-svc
This updates the service to version v2 of the image foo/web. It will push to 2 replicas at a time with 20s between waves.
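If a rollout goes wrong, the service can be reverted; a sketch using the same service name:

```shell
# Revert foo-svc to its previous definition (image, flags, etc.):
docker service update --rollback foo-svc
# Watch per-replica progress of the update or rollback:
docker service ps foo-svc
```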
The chapter describes how to back up and restore swarm data (stop the service and tar it up!).
The Container Network Model (CNM) spec describes Docker networking, as implemented by libnetwork.
The three major components of Docker networking are sandboxes, endpoints, and networks. A sandbox is an isolated network stack, including the interfaces, routing tables, etc. The endpoints are the virtual interfaces that connect the sandbox to the network. And a network is a group of endpoints that can communicate.
libnetwork implements these networking concepts, as well as service discovery and load balancing.
docker network ls
docker network inspect NAME
docker network create -d bridge NAME
On Windows this uses the nat driver rather than bridge.
docker network create -d overlay NAME
docker container run -d --network NAME COMMAND
docker network prune
docker network rm
The default bridge network doesn't support automatic name resolution for attached containers, but user-created networks do.
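A minimal sketch of name resolution on a user-defined bridge (network and container names are illustrative):

```shell
# Create a bridge network and attach two containers to it:
docker network create -d bridge testnet
docker container run -d --name c1 --network testnet alpine sleep 1d
# Containers on testnet can reach c1 by name; this would fail on the
# default bridge network:
docker container run --rm --network testnet alpine ping -c 1 c1
```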
Given a swarm, create an overlay network on the manager:
docker network create -d overlay NAME. The workers will be given access to the network when they are running services that are attached to it. If standalone containers will need access to the network, give it the --attachable flag on creation.
The chapter goes into some detail about how the networking is accomplished.
docker volume create NAME
By default this uses the local driver, which makes the volume available only to containers on the same node.
docker volume create -d overlay2 NAME
docker volume prune
docker volume rm
docker volume ls
docker volume inspect
docker plugin install
docker plugin ls
Data corruption is a possibility with shared storage, as usual. The book instructs "you need to write your applications in a way to avoid things like this", so one assumes that there are no special mitigations in place in Docker.
Docker Stacks offers useful deployment tools. A section of a sample config:
appserver:
  image: dockersamples/atsea_app
  networks:
    - front-tier
    - back-tier
    - payment
  deploy:
    replicas: 2
    update_config:
      parallelism: 2
      failure_action: rollback
    placement:
      constraints:
        - 'node.role == worker'
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s
  secrets:
    - postgres_password
docker secret create SECRET_NAME FILE_NAME
echo foo | docker secret create SECRET_NAME -
Secrets are mounted inside the container at /run/secrets/SECRET_NAME as a regular file.
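A sketch of granting a service access to a secret outside of a stack file (service and secret names echo the stack example above; the postgres image is illustrative):

```shell
# Create the secret from stdin:
echo "s3cret" | docker secret create postgres_password -
# Services must be granted access explicitly; the secret then appears
# inside each task's container at /run/secrets/postgres_password:
docker service create --name db --secret postgres_password postgres
```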
docker stack deploy -c STACK_FILE STACK_NAME
docker stack ls
docker stack ps STACK_NAME
docker stack rm
Can secrets be specified directly in the stack file (e.g. for fake dev-mode secrets)?