Docker is a tool that helps developers build and ship high-quality applications faster, anywhere.
With Docker, developers can build any app in any language using any toolchain. Dockerized apps are completely portable and can run anywhere.
Developers can get going quickly by spinning up any of the containers listed on Docker Hub. Docker manages and tracks changes and dependencies, making it easier for sysadmins to understand how the apps that developers build work. And with Docker Hub, developers can automate their build pipeline and share artifacts with collaborators through public or private repositories.
If you are a complete Docker newbie, you should probably follow the series of tutorials now.
First command after installation
docker run hello-world
That’s it, you have a running Docker container.
Docker implements a high-level API to provide lightweight containers that run processes in isolation.
- `docker create` creates a container but does not start it.
- `docker rename` allows the container to be renamed.
- `docker run` creates and starts a container in one operation.
- `docker rm` deletes a container.
- `docker update` updates a container's resource limits.
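A quick sketch of the lifecycle commands above; the container name `demo` and the `alpine` image are just placeholders:

```shell
# create a container from the alpine image without starting it
docker create --name demo alpine echo "hello world"

# rename it
docker rename demo demo2

# start it (runs the echo command, then the container exits)
docker start demo2

# delete it
docker rm demo2
```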
Normally if you run a container without options, it will start and stop immediately. If you want to keep it running, you can use:
docker run -td container_id
This uses two options:
- `-t` allocates a pseudo-TTY session
- `-d` detaches the container (runs it in the background and prints the container ID)
If you want a transient container,
docker run --rm will remove the container after it stops.
Another useful option is `docker run --name customname docker_image`. When you specify `--name` in the run command, you can start and stop the container by calling it by the name you gave it when you created it.
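For example (using `myweb` and the `nginx` image as placeholders):

```shell
# run a detached container with a custom name
docker run -d --name myweb nginx

# later, stop and start it by name instead of by ID
docker stop myweb
docker start myweb
```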
Starting and Stopping
- `docker start` starts a container so it is running.
- `docker stop` stops a running container.
- `docker restart` stops and starts a container.
- `docker pause` pauses a running container, "freezing" it in place.
- `docker unpause` will unpause a running container.
- `docker wait` blocks until a running container stops.
- `docker kill` sends a SIGKILL to a running container.
- `docker attach` will connect to a running container.
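A short sketch of pausing and waiting on a container (again, `myweb` is a placeholder name):

```shell
# freeze all processes in the container, then resume them
docker pause myweb
docker unpause myweb

# ask the container to stop in the background, then block until it exits;
# docker wait prints the container's exit code
docker stop myweb &
docker wait myweb
```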
If you want to integrate a container with a host process manager, start the daemon with `-r=false` and then use `docker start -a`.
If you want to expose container ports through the host, see the exposing ports section.
Restart policies on crashed docker instances are covered here.
- `docker ps` shows running containers.
- `docker logs` gets logs from a container. (You can use a custom log driver, but logs are only available for the `json-file` and `journald` drivers.)
- `docker inspect` looks at all the info on a container (including IP address).
- `docker events` gets events from a container.
- `docker port` shows the public-facing port of a container.
- `docker top` shows running processes in a container.
- `docker stats` shows containers' resource usage statistics.
- `docker diff` shows changed files in a container's filesystem.
- `docker ps -a` shows running and stopped containers.
- `docker stats --all` shows a running list of all containers.
- `docker cp` copies files or folders between a container and the local filesystem.
- `docker export` turns a container filesystem into a tarball archive stream to STDOUT.
- `docker exec` executes a command in a container.
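For example, `docker cp` works in both directions; `mycontainer` and the paths here are placeholders:

```shell
# copy a file out of the container to the local filesystem
docker cp mycontainer:/etc/hostname ./hostname.txt

# copy a local file into the container
docker cp ./config.yml mycontainer:/app/config.yml
```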
To enter a running container, i.e. attach a new shell process to a running container called foo, use: `docker exec -it foo /bin/bash`.
Images are just templates for docker containers.
- `docker images` shows all images.
- `docker import` creates an image from a tarball.
- `docker build` creates an image from a Dockerfile.
- `docker commit` creates an image from a container, pausing it temporarily if it is running.
- `docker rmi` removes an image.
- `docker load` loads an image from a tar archive as STDIN, including images and tags (as of 0.7).
- `docker save` saves an image to a tar archive stream to STDOUT with all parent layers, tags & versions (as of 0.7).
While you can use the `docker rmi` command to remove specific images, there's a tool called docker-gc that will safely clean up images that are no longer used by any containers.
Load an image from file:
docker load < my_image.tar.gz
Save an existing image:
docker save my_image:my_tag | gzip > my_image.tar.gz
Import a container as an image from file:
cat my_container.tar.gz | docker import - my_image:my_tag
Export an existing container:
docker export my_container | gzip > my_container.tar.gz
Difference between loading a saved image and importing an exported container as an image:
Loading an image using the `load` command creates a new image including its history. Importing a container as an image using the `import` command creates a new image excluding the history, which results in a smaller image size compared to loading an image.
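You can see the difference with `docker history`; the image and container names here are placeholders:

```shell
# a saved-and-loaded image keeps its full layer history
docker history my_image:my_tag

# an image created via import shows only a single flattened layer
docker export my_container | docker import - my_flat_image:latest
docker history my_flat_image:latest
```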
You can specify a specific IP address for a container:
# create a new bridge network with your subnet and gateway for your ip block
docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic

# run a nginx container with a specific ip in that block
docker run --rm -it --net iptastic --ip 203.0.113.2 nginx

# curl the ip from any other place (assuming this is a public ip block)
curl 203.0.113.2
Registry & Repository
A repository is a hosted collection of tagged images that together create the file system for a container.
A registry is a host – a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories.
Docker.com hosts its own index to a central registry which contains a large number of repositories. Having said that, the central docker registry does not do a good job of verifying images and should be avoided if you’re worried about security.
- `docker login` to login to a registry.
- `docker logout` to logout from a registry.
- `docker search` searches registry for an image.
- `docker pull` pulls an image from registry to local machine.
- `docker push` pushes an image to the registry from local machine.
Run local registry
Also see the mailing list.
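A minimal sketch of running a local registry with the official `registry:2` image and pushing to it (image names are placeholders):

```shell
# start a local registry on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# tag a local image for the registry and push it
docker tag my_image localhost:5000/my_image
docker push localhost:5000/my_image
```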
The Dockerfile is the configuration file: it sets up a Docker container when you run `docker build` on it. Vastly preferable to `docker commit`.
Here are some common text editors and their syntax highlighting modules you could use to create Dockerfiles:
- If you use jEdit, I’ve put up a syntax highlighting module for Dockerfile you can use.
- Sublime Text 2
- Atom
- Vim
- Emacs
- TextMate
- VS Code
- Also see Docker meets the IDE
- FROM Sets the Base Image for subsequent instructions.
- MAINTAINER (deprecated - use LABEL instead) Set the Author field of the generated images.
- RUN executes any commands in a new layer on top of the current image and commits the results.
- CMD provides defaults for an executing container.
- EXPOSE informs Docker that the container listens on the specified network ports at runtime. NOTE: does not actually make ports accessible.
- ENV sets environment variables.
- ADD copies new files, directories or remote files to the container. Invalidates caches. Avoid `ADD` and use `COPY` instead.
- COPY copies new files or directories to container. Note that this only copies as root, so you have to chown manually regardless of your USER / WORKDIR setting. See https://github.com/moby/moby/issues/30110
- ENTRYPOINT configures a container that will run as an executable.
- VOLUME creates a mount point for externally mounted volumes or other containers.
- USER sets the user name for following RUN / CMD / ENTRYPOINT commands.
- WORKDIR sets the working directory.
- ARG defines a build-time variable.
- ONBUILD adds a trigger instruction when the image is used as the base for another build.
- STOPSIGNAL sets the system call signal that will be sent to the container to exit.
- LABEL applies key/value metadata to your images, containers, or daemons.
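To tie several of these instructions together, here is a minimal illustrative Dockerfile; the base image, paths, port, and start script are placeholder assumptions, not part of the original:

```dockerfile
# build on an official base image
FROM debian:stable

# placeholder metadata
LABEL maintainer="you@example.com"

# install a package in a new layer and commit the result
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# placeholder environment variable and working directory
ENV APP_HOME=/app
WORKDIR /app

# copy local files into the image (COPY preferred over ADD)
COPY . /app

# document the port the app listens on (does not publish it)
EXPOSE 8080

# default command for containers run from this image
CMD ["./run.sh"]
```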
- Best practices for writing Dockerfiles
- Michael Crosby has some more Dockerfiles best practices / take 2.
- Building Good Docker Images / Building Better Docker Images
- Managing Container Configuration with Metadata
The versioned filesystem in Docker is based on layers. They’re like git commits or changesets for filesystems.
Volumes are useful in situations where you can’t use links (which are TCP/IP only), for instance when you need two docker instances to communicate by leaving files on the filesystem.
You can mount them in several docker containers at once, using
docker run --volumes-from.
Because volumes are isolated filesystems, they are often used to store state from computations between transient containers. That is, you can have a stateless and transient container run from a recipe, blow it away, and then have a second instance of the transient container pick up from where the last one left off.
For example, to mount a host source directory into a container:
docker run -v /Users/wsargent/myapp/src:/src
You can use remote NFS volumes if you’re feeling brave.
You may also consider running data-only containers as described here to provide some data portability.
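A sketch of the data-only container pattern combined with `--volumes-from`; all names are placeholders:

```shell
# create a data-only container that just declares a volume and exits
docker create -v /data --name datastore alpine /bin/true

# transient containers share that volume and see each other's writes
docker run --rm --volumes-from datastore alpine sh -c 'echo hello > /data/greeting'
docker run --rm --volumes-from datastore alpine cat /data/greeting
```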
This has been deprecated to some extent by user-defined networks.
NOTE: If you want containers to ONLY communicate with each other through links, start the docker daemon with `-icc=false` to disable inter-container communication.
If you have a container with the name CONTAINER (specified by `docker run --name CONTAINER`) whose Dockerfile has an exposed port:
Then if we create another container called LINKED like so:
docker run -d --link CONTAINER:ALIAS --name LINKED user/wordpress
Then the exposed ports and aliases of CONTAINER will show up in LINKED with the following environment variables:
And you can connect to it that way.
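As an illustration (the port 8080 is a hypothetical example, not from the original): if CONTAINER’s Dockerfile had `EXPOSE 8080`, then inside LINKED the link would surface environment variables along these lines, with the alias name uppercased:

```shell
# hypothetical values; the address depends on your Docker network
echo "$ALIAS_PORT_8080_TCP"       # e.g. tcp://172.17.0.2:8080
echo "$ALIAS_PORT_8080_TCP_ADDR"  # e.g. 172.17.0.2
echo "$ALIAS_PORT_8080_TCP_PORT"  # e.g. 8080
```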
To delete links, use
docker rm --link.
Generally, linking between docker services is a subset of “service discovery”, a big problem if you’re planning to use Docker at scale in production. Please read The Docker Ecosystem: Service Discovery and Distributed Configuration Stores for more info.
Exposing incoming ports through the host container is fiddly but doable.
This is done by mapping the container port to the host port (only using localhost interface) using
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
You can tell Docker that the container listens on the specified network ports at runtime by using EXPOSE:
Note that EXPOSE does not expose the port itself – only
-p will do that. To expose the container’s port on your localhost’s port:
iptables -t nat -A DOCKER -p tcp --dport <LOCALHOSTPORT> -j DNAT --to-destination <CONTAINERIP>:<PORT>
If you’re running Docker in Virtualbox, you then need to forward the port there as well, using forwarded_port. Define a range of ports in your Vagrantfile like this so you can dynamically map them:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  (49000..49900).each do |port|
    config.vm.network :forwarded_port, :host => port, :guest => port
  end
  ...
end
If you forget what you mapped the port to on the host container, use
docker port to show it:
docker port CONTAINER $CONTAINERPORT
This is where general Docker best practices and war stories go:
- The Rabbit Hole of Using Docker in Automated Tests
- Bridget Kromhout has a useful blog post on running Docker in production at Dramafever.
- There’s also a best practices blog post from Lyst.
- A Docker Dev Environment in 24 Hours!
- Building a Development Environment With Docker
- Discourse in a Docker Container
This is where security tips about Docker go. The Docker security page goes into more detail.
First things first: Docker runs as root. If you are in the
docker group, you effectively have root access. If you expose the docker unix socket to a container, you are giving the container root access to the host.
Docker should not be your only defense. You should secure and harden it.
For an understanding of what containers leave exposed, you should read Understanding and Hardening Linux Containers by Aaron Grattafiori. This is a complete and comprehensive guide to the issues involved with containers, with a plethora of links and footnotes leading on to yet more useful content. The security tips following are useful if you’ve already hardened containers in the past, but are not a substitute for understanding.
For greatest security, you want to run Docker inside a virtual machine. This is straight from the Docker Security Team Lead – slides / notes. Then, run with AppArmor / seccomp / SELinux / grsec etc to limit the container permissions. See the Docker 1.10 security features for more details.
Docker image ids are sensitive information and should not be exposed to the outside world. Treat them like passwords.
Since Docker 1.11 you can easily limit the number of active processes running inside a container to prevent fork bombs. This requires a Linux kernel >= 4.3 with CGROUP_PIDS=y in the kernel configuration.
docker run --pids-limit=64
Also available since Docker 1.11 is the ability to prevent processes from gaining new privileges. This feature has been in the Linux kernel since version 3.5. You can read more about it in this blog post.
docker run --security-opt=no-new-privileges
Turn off interprocess communication with:
docker -d --icc=false --iptables
Set the container to be read-only:
docker run --read-only
Verify images with a hashsum:
docker pull debian@sha256:a25306f3850e1bd44541976aa7b5fd0a29be
Set volumes to be read only:
docker run -v $(pwd)/secrets:/secrets:ro debian