Learning Docker – Compose yourself

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn as anything else, so it doubles as a “what worked for me” series.

In the previous lesson, we learned how to use volumes to store data outside of a container, enabling us to upgrade an application without losing its data.

In this lesson, we cover how to use the docker-compose command and the docker-compose.yml file to manage container configuration. We will also cover how multiple containers “talk” to each other.

Docker allows us to attach volumes and host ports to containers. While these are all neat features, they also really complicate the command needed to start the container. For example, take a look at the Nginx page on Docker Hub at https://hub.docker.com/_/nginx and see how it talks about mounting two directories if you run the image in read-only mode. Let’s reconfigure it to also listen on port 443 and give it some SSL certs, map in a local copy of our website so we can develop against it, and add a location to save web server logs so that we can read them easily.

This is your command:

docker run -d -p 80:80 -p 443:443 --read-only -v $(pwd)/nginx-cache:/var/cache/nginx -v $(pwd)/nginx-pid:/var/run -v $(pwd)/log:/var/log/nginx -v $(pwd)/content:/usr/share/nginx/html:ro -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -v $(pwd)/certs:/etc/nginx/certs:ro nginx
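
Split across lines with backslash continuations, the same command is at least a little easier to read (it is identical to the one above, just reformatted):

docker run -d \
  -p 80:80 -p 443:443 \
  --read-only \
  -v $(pwd)/nginx-cache:/var/cache/nginx \
  -v $(pwd)/nginx-pid:/var/run \
  -v $(pwd)/log:/var/log/nginx \
  -v $(pwd)/content:/usr/share/nginx/html:ro \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v $(pwd)/certs:/etc/nginx/certs:ro \
  nginx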

That is five directories and a file, plus two ports, in a single command line. Yes, you can use your command history to rerun it later. But this is not well organized, and it does not make things easy for others to run your image. And if you need to start more than one container, did you actually start the right one, or did you run the same image twice? Are you sure everything will work together?

The docker-compose command and the docker-compose.yml file can help make this more manageable. The YAML (.yml) file is a written description of how to start one or more containers as a single unit, including defining how volumes and ports are attached to each container. This not only makes things more readable, but it is also more easily shared with others and more easily managed: the file (and not your shell history) holds the most current configuration of your containers. Also, you can use it to visually verify which services will be exposed to the open network.
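
As a preview of where we are headed, the Nginx command above could be described in a docker-compose.yml roughly like this (a sketch only; it assumes the same directories and files sit next to the compose file):

version: "3.8"
services:
  nginx:
    image: nginx
    read_only: true          # same effect as the --read-only flag
    ports:                   # -p 80:80 -p 443:443
     - "80:80"
     - "443:443"
    volumes:                 # the six -v mounts from the command above
     - "./nginx-cache:/var/cache/nginx"
     - "./nginx-pid:/var/run"
     - "./log:/var/log/nginx"
     - "./content:/usr/share/nginx/html:ro"
     - "./nginx.conf:/etc/nginx/nginx.conf:ro"
     - "./certs:/etc/nginx/certs:ro"

Every option from that long command line is still here, but now it lives in a file you can read, version, and share.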

Let’s get started with a docker-compose.yml file by creating a new project folder. The options for a docker-compose.yml file can be found at https://docs.docker.com/compose/compose-file/. We will start by defining a MySQL container using the directions at https://registry.hub.docker.com/_/mysql:

version: "3.8"
volumes:
  mysql_data: {}
services:
  mysql-container:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
     - "3306:3306"
    volumes:
     - "mysql_data:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: example

YAML (.yml) is a markup syntax, and it takes a little getting used to for non-programmers. I’m not going to discuss the syntax itself, but I will highlight what each line does:

  • The first line defines the version of the docker-compose syntax you want to use. The version should be at least 3.0; the current version as of this writing is 3.8.
  • The volumes line defines the named volumes that you want created.
  • The following line creates a volume named mysql_data. The curly braces indicate that we are passing no parameters to the volume create command. We might need parameters if we need to specify a driver or if the driver needs additional configuration information (see the sketch after this list).
  • The services line indicates that the following elements define services, and each service starts at least one container copy of an image. A service can specify that more than one copy should be run at the same time, allowing for redundancy.
  • The next line is the name of the service. If Docker is aware of this service, then it will control the existing container that provides this service.
  • The image line defines the name of the image to use. If the desired image does not already exist on the Docker host, it is fetched from known registries. If the image name in the file does not match the name of the current image, then the container is destroyed and rebuilt.
  • The command line replaces the content of the CMD instruction in the image. It can work in conjunction with the ENTRYPOINT instruction in a Dockerfile, which we have not covered yet. This specific line passes a parameter to the MySQL server.
  • The restart line tells Docker that this container should always be restarted, whether it stops unexpectedly or the Docker daemon itself is restarted. This is very useful and important if you are using Docker to run a program and do not want to rely on manual intervention to start it back up.
  • The ports line specifies that we are defining port mappings. Since we could need multiple ports to be accessible, we can define more than one mapping here. For example, Nginx can also listen on port 443 for HTTPS connections, and we might want to make both ports 80 and 443 available for external use (as in the sketch after this list). The line that follows maps host port 3306 to container port 3306, which is how we will access this container externally.
  • Last, the environment line indicates the environment variables that will be set in the container. Again, as shown in the last lesson, these values only work if the image is expecting to use them. This environment variable sets the initial password for the root account in the server.
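
To illustrate two of the points above (a volume whose driver needs extra configuration, and a service that maps more than one port), here is a rough sketch. The NFS server address and the web service are hypothetical and only for illustration:

volumes:
  mysql_data:
    driver: local                      # parameters for the volume create command
    driver_opts:
      type: nfs
      o: "addr=192.168.0.50,rw"        # hypothetical NFS server
      device: ":/exports/mysql"
services:
  web:
    image: nginx
    ports:                             # more than one port mapped to the host
     - "80:80"
     - "443:443"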

Now, from the same directory as the .yml file, run:

docker-compose up

You will see Docker spring to life, creating a MySQL server instance. You will also get to see the initialization of MySQL (which we didn’t see last lesson). When it is done (it should say “ready for connections”), use Control-C to terminate the Docker container. Go back to your docker-compose.yml file, and change the image line so that it just uses “mysql” instead of “mysql:8.0”. Run “docker-compose up” again. Note that this time there is a lot less text. This is because (like last lesson) our database data is on our mysql_data volume, which was already populated during the first run of docker-compose.
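
If you want to see exactly what Compose created, a couple of commands come in handy. Note that Compose prefixes the things it creates with a project name, which defaults to the name of the directory holding the .yml file:

docker-compose ps     # lists the containers managed by this compose file
docker volume ls      # the named volume shows up as <project>_mysql_data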

Now let’s add our phpMyAdmin image to our docker-compose.yml file. But instead of referring to an image we have already built, let’s build it from a Dockerfile as part of this project. In the directory with your docker-compose.yml file, create a new folder called my_phpmyadmin, and copy the Dockerfile from lesson six into that folder. Your file tree will look like this:

Project
 |
 +- docker-compose.yml
 |
 +- my_phpmyadmin
    |
    +- Dockerfile

Next, add the following service to the docker-compose.yml file (note the indent before the service name):

  phpmyadmin-container:
    build: my_phpmyadmin/.
    ports:
     - "8080:80"
    environment:
      IP_ADDRESS: mysql-container
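
For reference, the complete docker-compose.yml should now read something like this (it is just the two snippets combined, with the image line shortened to “mysql” as described earlier):

version: "3.8"
volumes:
  mysql_data: {}
services:
  mysql-container:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
     - "3306:3306"
    volumes:
     - "mysql_data:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: example
  phpmyadmin-container:
    build: my_phpmyadmin/.
    ports:
     - "8080:80"
    environment:
      IP_ADDRESS: mysql-container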

Now run the following two commands to get both containers running:

docker-compose build
docker-compose up

The first command builds all out-of-date images from their Dockerfiles, which is needed for the phpMyAdmin image. The second command then runs both containers at the same time. Now you can go to http://localhost:8080 and log in as root with the password “example”. To stop running your containers, use Control-C.
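
As a small shortcut, the two steps can be combined; the --build flag tells docker-compose up to rebuild any out-of-date images before starting the containers:

docker-compose up --build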

It is important to point out that we gave the name of the MySQL service, mysql-container, as the IP_ADDRESS used by our phpMyAdmin container. Containers can reference each other by service name. This enables the containers to communicate with each other, and it is essential because we don’t know in advance what IP address a container will get. Also, services can communicate with each other even if the port is not exposed to the host. Don’t believe me? Remove the “ports:” section from the mysql-container service in the docker-compose.yml file, rerun docker-compose up, and log into phpMyAdmin again. It should work fine.

Those who are security conscious will immediately flag this as a security concern. However, by default, containers can only communicate with each other in this manner if they are defined in the same docker-compose.yml file. I verified this by bringing up a new container manually and then trying to communicate with the other containers; it did not work, even by IP address. Additionally, Docker allows you to create virtual networks to further limit which containers in this file can talk to each other. If you want network isolation from other containers, define a network for each set of containers and assign each container to its intended network.
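
As a rough sketch of that last idea, networks are declared at the top level and then listed per service. Only the relevant keys are shown here, and the names frontend and backend are just placeholders:

networks:
  frontend: {}
  backend: {}
services:
  mysql-container:
    networks:
     - backend               # reachable only by services also on the backend network
  phpmyadmin-container:
    networks:
     - frontend
     - backend

With this in place, mysql-container can only be reached by services that are also attached to the backend network.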

That wasn’t too bad. With a roughly 20-line file, we created two containers and a volume, and defined how they all work together. To run them as a background process and then stop them, you can run these two commands from inside your project directory:

docker-compose up -d
docker-compose down
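
One detail worth knowing: docker-compose down removes the containers (and any networks Compose created) but leaves named volumes such as mysql_data alone, so your database survives. If you really do want a clean slate, the -v flag removes those volumes as well:

docker-compose down -v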

To summarize, we covered how to write a docker-compose.yml file to manage the desired configuration of multiple containers and how to use the docker-compose command to build, start, and stop those containers.

For the next lesson, we will examine how to leverage Docker for development, including making a full image that could be used for production.
