Learning Docker – Getting out

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn as to teach, so it doubles as a “what worked for me” series.

In the previous lesson, we covered how to use an existing custom image as a building block for an application. Easy-peasy.

This lesson, we are covering how to make a container accessible on the network. We are also covering how to use a generic application image from the Docker Hub to build an application.

Docker has a couple neat features that make it network friendly:

  1. Programs in a container are free to access the network outside the container, host and network firewall permissions permitting.
  2. Programs in a container that accept network traffic can only be reached if Docker is told that the program may be accessed from outside the container.

Some security experts will question the usefulness of the first point, but it makes your job as an application developer easier if your application needs to get data over the network from somewhere else. The second feature is cool because it helps protect your application’s container from unwanted access. However, this means that you are responsible for identifying how the application can be accessed.

We are going to grab an image of the Nginx HTTP server, stash a custom home page into it, and allow access to the server from the network. Note, however, that the Dockerfile does not control network access to the container; that is decided when the container is run.

Docker splits out the data of the application (your image) from the definition of how to run it. This provides flexibility in how to run an image: you can mount directories from the Docker host into the container, either for data persistence (the data does not go away when the container is deleted) or for rapid development (use the web server image to serve your application straight from your local computer’s filesystem before baking it into one application image). You can define persistent volumes, which let you carry container data forward from one image to another (like upgrading MySQL). And you can specify which host ports should be used for network communication with a program in your container.
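
A quick sketch of what those options look like, using the stock nginx image from the Docker Hub (the paths, names, and ports here are just examples, not part of this lesson’s build):

# Bind-mount a host directory so the container serves your local files:
docker run -d -p 8080:80 -v "$PWD/my_app":/usr/share/nginx/html:ro nginx
# A named volume persists even after the container that used it is removed:
docker volume create app_data
docker run -d -p 8081:80 -v app_data:/usr/share/nginx/html nginx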

The good (even great) news is that you do not have to use the same host port number as the program in your image expects. This means that you could run four copies of Nginx at the same time; even though all four expect to use port 80 inside their containers, you can map the container host’s ports 8000, 8080, 8888, and of course, 80, one to each container. The admin running your containers becomes responsible for communicating to users which host port connects to which container. If you are running the containers on your computer, then you are that admin 🙂
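
Concretely, the four copies could be started like this (the container names are made up, and any free host ports would do):

docker run -d --name web_a -p 8000:80 nginx
docker run -d --name web_b -p 8080:80 nginx
docker run -d --name web_c -p 8888:80 nginx
docker run -d --name web_d -p 80:80 nginx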

Let’s get started. First, create a project directory to work in; the directory name below is just an example, so use whatever you like:
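
mkdir my_nginx
cd my_nginx

Inside it, create an index.html file with the following content: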

<html>
<head><title>Hello world!</title></head>
<body><p>You tagged me!</p></body>
</html>

Next, create your Dockerfile:

FROM nginx
COPY index.html /usr/share/nginx/html

We omit the CMD statement, so the CMD statement from the base image is used instead. The makers of this image intended it to be used as a building block for other developers’ applications, so there are directions for using it at https://hub.docker.com/_/nginx. The highlights:

  1. The web files are stored at /usr/share/nginx/html, so you can copy your website there or mount a local directory that has the files to serve (see the example just after this list).
  2. You can replace the Nginx config file at /etc/nginx/nginx.conf.
  3. They have a template system for embedding environment variable values into the configuration of the webserver. This is useful if you want to further customize how the webserver runs without making a new image every time.
  4. There is an image that supports Perl scripts, allowing you to write a Perl web application hosted on Nginx without any additional software needed.
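
For instance, the first two points can be exercised without building an image at all. A minimal sketch, assuming your site or nginx.conf lives in the current directory (the container names here are made up):

docker run -d --name quick_site -p 8080:80 -v "$PWD":/usr/share/nginx/html:ro nginx
docker run -d --name custom_conf -p 8081:80 -v "$PWD/nginx.conf":/etc/nginx/nginx.conf:ro nginx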

Build and run your image:

docker build -t my_nginx .
docker run -d --name go_nginx -p 8080:80 my_nginx

This docker run command has a couple different options worth talking about. First, the --name parameter is used to name the container. It still has its usual auto-generated container ID (a string of hexadecimal characters), but you can also refer to it by name now. This will make it easier to stop the container when we are done. We gave the container a slightly different name than the image to make it clear which is the container name and which is the image name.

Second, there is -d instead of -it. This container is being run as a daemon (-d) instead of interactively with a terminal (-it). Without the -d, Docker would wait until the application stopped on its own (it won’t for this image) or until you used Control-C to terminate it. Instead, you get the full identifier of your new container and your prompt back immediately; Docker runs this container in the background.
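
Since the container is now in the background, a couple of standard Docker commands (not specific to this image) are handy for keeping an eye on it:

docker ps               # list running containers; go_nginx should appear here
docker logs go_nginx    # show what Nginx has printed to its output so far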

Last, there’s the -p, which maps a host port into the container. In this case, we are mapping port 8080 on the container host to port 80 inside the container, where Nginx is expecting network traffic. If you get an error about the port being in use by another program, you can always select a different host port, like 8888, 12345, or anything between 1025 and 49151 (don’t ask how I know those numbers). Do not change the :80; Nginx is set up to listen on that port inside the container, and you would need to reconfigure it to use a different port.
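
For example, if 8080 is already taken, only the host side of the mapping changes. A variant that uses 8888 instead (with a different container name so it does not collide with the container we already started):

docker run -d --name go_nginx_8888 -p 8888:80 my_nginx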

Let’s test that this is working. Visit http://localhost:8080 on the computer your container is running on. If you can connect to that computer over the local network, you can try http://<ip_address>:8080 from another computer on your network. You should see your webpage.
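
If you prefer the command line, curl works just as well and should print back the contents of index.html:

curl http://localhost:8080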

You can stop your container from running with the following:

docker container stop go_nginx

You can also restart it later:

docker container start go_nginx

Note that this command does not need the -d option; the container keeps the settings it was created with and starts in the background.

When you are finished, you can clean up your container with the following command:

docker container rm -f go_nginx

To summarize, we used an image from the Docker Hub that is designed to be the basis for an application to build a webserver with our (one page) website. We enabled computers on the network to access the webserver running in the container. We also learned how to run the container as a background (daemon) process.

For the next lesson, we cover how to add an application to an image without using a package manager and how to add options that can be set when an image is run.
