Learning Docker – Fun in MUD

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the last lesson, we created a private Docker Registry using the OpenStack Swift storage driver to store our registry in a storage bucket externally. We also enabled SSL encryption with help from Let’s Encrypt and setup password authentication.

For this lesson, we will create a web-accessible MUD as a well-architected Docker application. This is a HUGE project with two Docker stacks.

What’s a MUD? MUD is short for Multi-User Dungeon, and it is an online text game based on the telnet protocol that has been around for over 35 years. MUD software is usually designed to allow players to participate in the game, and builders (empowered players) to modify and improve the game. It involves job classes, monsters (mobs for short), items, and most importantly, other people.

So why do this? Well, I’ve always wanted to run my own MUD. And I needed a topic for this Docker lesson… So I’m going to create a docker-compose application that will run and update the MUD software. And while searching, I happened to come across a recent release (5 days old) from someone who converted DikuMUD to use websockets. The project is at https://github.com/Seifert69/DikuMUD3 and we’re going to use that for this tutorial. And a special thanks to Michael Seifert, who made one very important change to his source code that made running DikuMUD in Docker (and this tutorial) possible.

This project will require five containers split across two Docker stacks. The first container will be its own stack and will act as the manager for the application: it will check for updated source code, compile the code as needed, and update the application. The other stack will run the other four containers. One container will run the actual MUD. Two containers will provide telnet and websocket access to the MUD container. The last container will be an nginx webserver to host the website.

The manager container is going to do a lot. It will need a cron application, Git, a C++ compiler, some Docker management binaries, and access to the Docker host running the application. Let’s start with the cron application, as I feel this will be the most difficult. Why? Because cron normally runs as a background service, and background services are discouraged in Docker containers, as those processes may not properly terminate when the main container program finishes. One possibility I came across is how the GlusterFS team created their container: they run the init process, which is the native Linux program for managing background services. But they made this work by starting with a full CentOS image and removing services from it, and I don’t want to create a container that looks like a hacked OS installation. There’s also the Supervisord container, which acts as a supervisor service for background jobs. But a third and common option is to run the service in the foreground as the only task for the container. This is the easiest way to go in my opinion, so we’ll grab a vanilla OS container, install cron into it, and run that in foreground mode.

Let’s create a project directory. In there, create a directory called “docker”. In there, create two more directories, “updater” and “diku”. We need this layout to separate the files for each stack, because when we run docker-compose, the containing directory name becomes the project name and is prefixed to the service name when generating container names. If that directory name is already in use by another stack, Docker mistakenly assumes that you are updating that stack’s configuration, creating all sorts of problems. So the main take-away is to use a unique directory name for storing the docker-compose.yml files for each stack in your environment.
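From inside your new project directory, that layout is just:

mkdir -p docker/updater docker/diku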

Create a Dockerfile named Dockerfile.manager in the updater folder:

FROM ubuntu:latest
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y cron && \
  apt-get install -y g++ && \
  apt-get install -y libboost-dev && \
  apt-get install -y libboost-system-dev && \
  apt-get install -y libboost-filesystem-dev && \
  apt-get install -y libboost-regex-dev && \
  apt-get install -y bison && \
  apt-get install -y flex && \
  apt-get install -y make && \
  apt-get install -y git && \
  apt-get install -y docker-compose && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
CMD [ "/sbin/cron", "-f" ]

If you are familiar with the apt-get command, you know that you can specify more than one package to install in a single command. Why only list one package per command, then? If there’s an error, I can easily tell which package is puking. Otherwise I need to weed through the output to identify what’s not working. As for why I picked these specific packages: I ran the container shell commands below and kept getting errors with the compile. I figured that it would be nice for you to have an easier time than I did getting this set up.

From your DikuMUD3 project directory, build and run this image with:

docker build -t diku_manager -f docker/updater/Dockerfile.manager .
docker run --name throw_away -v /var/run/docker.sock:/var/run/docker.sock -d diku_manager

Again, I am starting my process of figuring out what commands need to be run in the Dockerfile in order to build the image. Now connect to the container shell:

docker exec -it throw_away /bin/bash

Using this throw away container, I figured out the commands that I needed to get the source code for this MUD installed. I followed the build directions at https://github.com/Seifert69/DikuMUD3 to clone the repository:

cd /opt; git clone https://github.com/Seifert69/DikuMUD3.git

Next, I ran the compile commands. The -j option, fed the number of cores available to the container, runs make multithreaded. It is a lot faster that way. This is also where I was seeing all sorts of make and compiler errors, including missing libraries and missing build utilities. I have already added these to the Dockerfile above so that we could skip those pain points:

cd /opt/DikuMUD3/vme/src && make all -j$(ls /sys/bus/cpu/devices/|wc -l)

Pre-process the .def files:

cd /opt/DikuMUD3/vme/etc && make all

Now we compile the zones:

cd /opt/DikuMUD3/vme/zone && /opt/DikuMUD3/vme/bin/vmc -m -I/opt/DikuMUD3/vme/include/ *.zon

We should also test out the docker cli commands. Remember that our goal is to have this container manage other containers, so we should test that. Let’s list the running Docker containers:

docker ps|grep throw_away

Good. The container can see itself. Onward!

At this point, per the directions, the game could be started. However, we now need to build our game container infrastructure. We are going to implement several things:

  • Create a script to check for source code updates from git and to download and compile the code as needed.
  • Create a script to notify users then recreate all managed containers.
  • Create a script to regenerate the manager container.
  • Create a Dockerfile.telnet for the telnet server.
  • Create a Dockerfile.websocket for the websocket server.
  • Create a Dockerfile.mud for the game itself.
  • Create a Dockerfile.nginx for the webserver.
  • Create a docker-compose.yml file that defines how our containers work together.
  • Update the Dockerfile.manager to copy all these components into the manager image, so that it can rebuild everything on its own.

The important thing to realize here is that our container will build containers. So we will not run any of the new Dockerfiles from the development box, but instead will run them from the manager container on the development box. So let’s write some files. We will use the updater directory for the following files.

Start with check_updates.sh, which will clone or update from your git repository as needed, compile the code, and rebuild the game images:

#!/bin/bash
echo Running update check at `date`
export GIT_DISCOVERY_ACROSS_FILESYSTEM=1
# Confirm that the git repository exists
if [ ! -d /opt/DikuMUD3/.git ]; then
    echo Cloning repo
    git clone $GIT_REPO /opt/DikuMUD3 && \
    cd /opt/DikuMUD3 && \
    git checkout $GIT_BRANCH && \
    touch /opt/changes-detected && \
    echo Clone created.
fi
# Check for updates
cd /opt/DikuMUD3
if git pull>/tmp/git && ! grep "Already up to date." /tmp/git >/dev/null; then
    echo Repo has been updated
    # Create a flag that there are changes if we successfully download updates.
    touch /opt/changes-detected
    # Remove this flag if we have changes.
    rm /opt/rebuild-now 2>/dev/null    
fi 
# If there are changes and we are not compiling, then try to compile.
if [ -f /opt/changes-detected ] && tempfile -n /opt/updating-now; then
    echo Compiling updated source    
    cd /opt/DikuMUD3/vme/src && \
    make clean && \
    make all  -j$(ls /sys/bus/cpu/devices/|wc -l) && \
    cd /opt/DikuMUD3/vme/etc && \
    make clean && \
    make all && \
    cd /opt/DikuMUD3/vme/zone && \
    /opt/DikuMUD3/vme/bin/vmc -m -I/opt/DikuMUD3/vme/include/ *.zon && \
    touch /opt/rebuild-now && \
    rm /opt/changes-detected && \
    echo Source compiled successfully.
    # Always remove this flag; it allows the compile process to start the next time the script runs.
    rm /opt/updating-now
fi
# If we have a clean compile, then recreate our images.
if [ -f /opt/rebuild-now ]; then
    echo Rebuilding game images
    cd /opt
    # Before rebuilding, tag the current "live" images as "old" so we can
    # roll back to them if the new build misbehaves.
    docker image ls |grep diku_listener |grep live >/dev/null && docker image tag diku_listener:live diku_listener:old
    docker image ls |grep diku_nginx |grep live >/dev/null && docker image tag diku_nginx:live diku_nginx:old
    docker image ls |grep diku_mud |grep live >/dev/null && docker image tag diku_mud:live diku_mud:old
    docker-compose -f ./docker/diku/docker-compose.yml build && \
    date -d'1 hour' > /opt/restart-ready && \
    echo Game images rebuilt 
    # Always remove the rebuild flag.
    rm /opt/rebuild-now
    # The restart flag will get seen by a separate cron job that will notify users for an hour that the world is rebooting, then do the deed.
fi
echo Completed update check at `date`.

An important feature of this script is the pair of GIT_REPO and GIT_BRANCH environment variables. We need these to be able to select the Git repository to download code from. More on this in a bit. Another important feature is the use of tags to rotate the images from live to old: this is our revert path if there are any problems. To revert to your previous images, you would only need to log into the manager container shell and run the following:

docker image tag diku_listener:old diku_listener:live
docker image tag diku_nginx:old diku_nginx:live
docker image tag diku_mud:old diku_mud:live
docker-compose -f /opt/docker/diku/docker-compose.yml up -d

Next, we need a check_restart.sh script. This will look for the /opt/restart-ready flag file and, if present, count down (warning players along the way) and then regenerate the four game containers using the freshly built images.

#!/bin/bash
echo Checking if restart is requested.
if [ -f /opt/restart-ready ]; then
    echo Restart is ready, calculating time left.
    REBOOT_TIME=$(date -d"$(cat /opt/restart-ready)" +%s)
    NOW=$(date +%s)
    MIN_LEFT=$(( (REBOOT_TIME - NOW) / 60 + 1 ))
    if [[ $MIN_LEFT -le 0 ]] || ! docker-compose -f /opt/docker/diku/docker-compose.yml ps|grep container>/dev/null ; then
        echo Restarting MUD now
        cd /opt && \
        rm /opt/restart-ready && \
        docker-compose -f /opt/docker/diku/docker-compose.yml up -d && \
        echo Restart initiated successfully && \
        sleep 60 && \
        nc -q 3 `docker network inspect updater_default|grep Gateway|grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'` 4282 </dev/null  | grep . && \
        curl http://`docker network inspect updater_default|grep Gateway|grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'`/  | grep . && \
        echo Restart was successful. && \
        exit 0
        echo There was a problem restarting the diku stack...
    fi
    if [[ $(( $MIN_LEFT % 15 )) -eq 0 || ( $MIN_LEFT -lt 15 && $(( $MIN_LEFT % 5 )) -eq 0 ) || $MIN_LEFT -le 3 ]]; then
        # Notify players of remaining time
        echo $MIN_LEFT minutes before reboot...
    fi
fi
echo Restart not done.

We also need a restart_manager.sh script. This will cause the manager to update its own image and to trigger its own rebuild. The good news is that this is not so sensitive, because if the manager container crashes, the rest of the game will continue running, without updates.

#!/bin/bash
cd /opt
# Tag the current running manager image as old, rebuild, then relaunch.
docker image tag diku_master:live diku_master:old && \
docker-compose -f ./docker/updater/docker-compose.manager.yml build && \
docker-compose -f /opt/docker/updater/docker-compose.manager.yml up -d
# This container should die if the last command runs successfully while a new one launches in its place.

Last, before we start working on Dockerfiles, we need a crontab file; save it as mud.cron in the updater directory. This will make the above three scripts run on a schedule. You are welcome to adjust it to your own liking. As written here, the check_updates script runs every hour at the 45 minute mark, check_restart runs every minute, and restart_manager runs on the 3rd of every month at 12:05 in the morning. The output of all these commands is written to the STDOUT of the cron process, which is accessible through the docker logs command.

45  *   *   *   *   /opt/docker/updater/check_updates.sh >/proc/1/fd/1 2>&1
*   *   *   *   *   /opt/docker/updater/check_restart.sh >/proc/1/fd/1 2>&1
5   0   3   *   *   /opt/docker/updater/restart_manager.sh >/proc/1/fd/1 2>&1

Now we can start work on our second stack: the game itself. What makes this tricky is that we are automating all the commands we would normally do by hand to compile our app and then create new images based on the finished application. So, just like we would once our program is compiled and ready to be turned into an image, we create Dockerfiles that put files where we need them in order to run correctly. Let’s place all these files into the diku directory we made earlier.

We luck out for the websocket and telnet listeners; they use the same executable with different parameters. We’ll create one generic listener image, then define two containers with the different command line options needed to support each listening mode. However, the container needs to be told the IP or name of the MUD container, so we need a start script to manage that address. Create this file as entrypoint_listener.sh:

#!/bin/bash

echo MUD Host: $MUD_HOST
COMMAND="./mplex -l /dev/stdout -a $MUD_HOST $@"
echo Command: "$COMMAND"

exec $COMMAND

Now write this into Dockerfile.listener:

FROM ubuntu:latest
COPY ./DikuMUD3/vme/bin/ /opt/DikuMUD3/vme/bin/
COPY ./docker/diku/entrypoint_listener.sh /entrypoint_listener.sh
WORKDIR /opt/DikuMUD3/vme/bin
ENTRYPOINT [ "/entrypoint_listener.sh" ]
CMD [ "-w", "-p", "4280" ]

The start script and the ENTRYPOINT instruction force the container to always start the listener; any override command arguments are passed through to that program. We have also defined a couple of mandatory arguments in the start script to force logging to STDOUT, which is what you see when you run docker logs. More on this in a bit. The default CMD holds the arguments needed for the websocket listener. We can pass the alternate arguments -p 4282 to make it work for telnet. We also use the MUD_HOST environment variable as the means to communicate the MUD container’s address to this program.
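Once the manager has built this image later on, you could sanity-check the telnet variant by hand; the IP address below is only a stand-in for wherever your MUD container ends up:

docker run --rm -e MUD_HOST=172.17.0.2 diku_listener:live -p 4282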

We also have the option to use the EXPOSE command in this Dockerfile. This tells Docker that this container should allow for external connections to these specified ports. This is not needed for containers managed by docker-compose, as all containers are added to their own isolated network and can freely communicate on any port with each other. We will use our compose file to specify which ports on which containers are available for external clients to connect to.
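For reference, if you did want to record the ports in the image metadata anyway, the addition to Dockerfile.listener would simply be:

EXPOSE 4280 4282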

The contents of Dockerfile.mud:

FROM ubuntu:latest
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y --no-install-recommends dnsutils && \
  apt-get install -y --no-install-recommends libboost-system1.71 && \
  apt-get install -y --no-install-recommends libboost-filesystem1.71 && \
  apt-get install -y --no-install-recommends libboost-regex1.71 && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
COPY ./DikuMUD3/vme/bin/ /opt/DikuMUD3/vme/bin/
COPY ./DikuMUD3/vme/etc/ /opt/DikuMUD3/vme/etc/
COPY ./DikuMUD3/vme/log/ /opt/DikuMUD3/vme/log/
COPY ./DikuMUD3/vme/zone/ /opt/DikuMUD3/vme/zone/
COPY ./DikuMUD3/vme/lib/ /opt/DikuMUD3/lib/
COPY ./docker/diku/entrypoint_mud.sh /entrypoint_mud.sh
VOLUME [ "/opt/DikuMUD3/vme/lib" ]
WORKDIR /opt/DikuMUD3/vme/bin
ENTRYPOINT [ "/entrypoint_mud.sh" ]
CMD [ "./vme", "-l", "/dev/stdout" ]

Another important observation for this application: we are copying vme/lib into a separate /opt/DikuMUD3/lib directory on the MUD container, outside the vme tree. This is because the lib folder contains the state data for the MUD, including player data. We don’t want to blow that away every time we recompile and rebuild the containers; in fact we will mount a volume at vme/lib to persist the data between compiles. However, there are some initial files that MUST exist there in order for the vme binary to properly start up. So we need a way to selectively copy over the initial set of files. We will manage that using the ENTRYPOINT script, entrypoint_mud.sh:

#!/bin/bash

if [ `realpath $1` == "/opt/DikuMUD3/vme/bin/vme" ]; then
    # First make a backup. Old backup is moved to /tmp, then a new backup 
    # is made in /tmp, then both moved back to lib.
    rm /opt/DikuMUD3/vme/lib/lib.old.tar.xz 2>/dev/null
    mv /opt/DikuMUD3/vme/lib/lib.tar.xz /tmp/lib.old.tar.xz 2>/dev/null
    tar -cJC /opt/DikuMUD3/vme/lib -f /tmp/lib.tar.xz .
    mv /tmp/*.tar.xz /opt/DikuMUD3/vme/lib
    # Copy files in /opt/DikuMUD3/lib if needed
    pushd /opt/DikuMUD3/lib
    # Make all directories
    for file in `find ./* -type d`; do
        mkdir -p /opt/DikuMUD3/vme/lib/$file
    done
    # Copy files if they don't exist.
    for file in `find ./* -type f`; do
        cp -n $file /opt/DikuMUD3/vme/lib/$file
    done
    popd
fi

# Update the server config to allow our container multiplexors to connect.
IP_TELNET=`dig +short $MPLEX_TELNET`
if [ -z "$IP_TELNET" ]; then IP_TELNET=$MPLEX_TELNET; fi
IP_WEBSOCKET=`dig +short $MPLEX_WEBSOCKET`
if [ -z "$IP_WEBSOCKET" ]; then IP_WEBSOCKET=$MPLEX_WEBSOCKET; fi
echo Telnet: $MPLEX_TELNET $IP_TELNET
echo Websockets: $MPLEX_WEBSOCKET $IP_WEBSOCKET
sed -i "s/mplex hosts = .*/mplex hosts = ~${IP_TELNET}~ ~${IP_WEBSOCKET}~/" /opt/DikuMUD3/vme/etc/server.cfg

exec "$@"

From an application management perspective, this script makes a backup of the lib folder every time it starts, and it keeps the last two backups, in case the MUD doesn’t start.
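If the MUD ever fails to come up cleanly after an update, one way back is to restore that backup from a shell in the MUD container. A rough sketch, run from the manager container and assuming the stack and paths above (lib.old.tar.xz holds the next-oldest copy if you need to go further back):

docker-compose -f /opt/docker/diku/docker-compose.yml exec mud-container /bin/bash
# then, inside the MUD container:
cd /opt/DikuMUD3/vme/lib && tar -xJf lib.tar.xz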

This script also uses magical environment variables. In this case, it expects the DNS name or IP address of the telnet and websocket containers. The vme executable actually whitelists incoming connections, so we need to let it know about our websocket and telnet containers. We need the IP address support for manual testing, because we can only rely on container names resolving once everything is running together on a docker-compose network; when testing by hand we have to pass raw IP addresses instead. So we need to support both options. For the record, when testing manually, you will need to get the internal container IP addresses of the listeners before you create the mud container.
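For that manual-testing case, you can pull each listener’s address with docker inspect once its container is running; the container names here are placeholders for whatever you used in your docker run commands:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <telnet_container_name>
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <websocket_container_name>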

Lastly, while I was working on this script, I realized that some of the files in here were more like settings than game data. I reached out to Michael for a list of the files that needed to be replaced when the game was updated, and he was awesome enough to move those into the /etc folder, making the /lib file exclusively for game data. This made this update script simple and this project possible. Thanks, Michael!
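We also need the Dockerfile.nginx that the compose file below expects in the diku directory. A minimal sketch is just the stock nginx image plus the DikuMUD3 web client; the exact location of the client inside the repository is an assumption here, so adjust the source path to match the repo layout:

FROM nginx:latest
# The location of the browser client inside the DikuMUD3 repo is an assumption; adjust as needed.
COPY ./DikuMUD3/html/ /usr/share/nginx/html/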

Last for the game containers, we need a docker-compose.yml file. This is relatively straightforward, connecting the dots between our four containers:

version: "3.4"
volumes:
  dikumud_lib: {}
#  dikumud_log: {}
services:
  mud-container:
    image: diku_mud:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.mud
    restart: always
    volumes:
     - "dikumud_lib:/opt/DikuMUD3/vme/lib"
#     - "dikumud_log:/opt/DikuMUD3/vme/log"
    environment:
     - MPLEX_TELNET=telnet-container
     - MPLEX_WEBSOCKET=websocket-container
#    command: [ "./vme" ]
  telnet-container:
    image: diku_listener:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.listener
    restart: always
    command: [ "-p", "4282" ]    
    environment:
     - MUD_HOST=mud-container
    ports:
     - "4282:4282"
  websocket-container:
    image: diku_listener:live
    restart: always
    environment:
     - MUD_HOST=mud-container
    ports:
     - "4280:4280"
  nginx-container:
    image: diku_nginx:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.nginx
    restart: always
    ports:
     - "80:80"

This docker-compose.yml file exposes two ports: 80 for web access and 4282 for direct telnet access. Per the DikuMUD project maintainer, the telnet access is only intended for debugging. The dikumud_lib volume will store your game data so that it is available after updates.
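Once the stack is up later in this lesson, you can poke at that debug port from the Docker host with any telnet client:

telnet localhost 4282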

If you noticed, the version number is older than the “3.8” I have been using up to this point. That is because the docker-compose installed in the manager image is an older version maintained by the distribution, instead of the current version Docker offers. The Ubuntu package will not process version 3.8 files, but will process 3.4. Since I’m not using any features newer than what’s supported in version 3.4, this all works out.
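If you want to confirm what version the Ubuntu package provides, you can ask the manager image we built earlier:

docker run --rm diku_manager docker-compose version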

I have commented out the pieces needed to have DikuMUD log to a persistent log volume; uncomment all the commented parts of this file to enable that. However, then you become responsible for cleaning up that log. If you leave these lines commented out, the executables all write to STDOUT, which is available through the docker logs command. You can see what’s going on by running:

docker ps
docker logs <container_name>

The downside to using docker logs is that you lose the logs when the container is removed and recreated. It’s your call; I chose not to save the logs to disk, that’s all.

At this point, we only need to update the Dockerfile for the manager container to get it to automatically compile and deploy the other containers and create a docker-compose file to rule – I mean run – them all. Update Dockerfile.manager in the updater directory to the following:

FROM ubuntu:latest
ENV GIT_REPO=https://github.com/Seifert69/DikuMUD3.git
ENV GIT_BRANCH=master
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y cron && \
  apt-get install -y g++ && \
  apt-get install -y libboost1.71-dev && \
  apt-get install -y libboost-system1.71-dev && \
  apt-get install -y libboost-filesystem1.71-dev && \
  apt-get install -y libboost-regex1.71-dev && \
  apt-get install -y bison && \
  apt-get install -y flex && \
  apt-get install -y make && \
  apt-get install -y git && \
  apt-get install -y curl && \
  apt-get install -y netcat-openbsd && \
  apt-get install -y docker-compose && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
COPY ./docker /opt/docker
RUN chmod +x /opt/docker/*/*.sh && \
  crontab < /opt/docker/updater/mud.cron && \
  cp -p /opt/docker/updater/entrypoint_manager.sh /entrypoint_manager.sh
ENTRYPOINT [ "/entrypoint_manager.sh" ]
VOLUME [ "/opt/DikuMUD3" ]
CMD [ "/sbin/cron", "-f" ]

This Dockerfile now contains the ENV command, which sets default values for two of the environment variables used by this container, ensuring that the default repository, if not defined, is the main DikuMUD git repo we started with. It also sets the default branch to master. Why do this? Because someone, somewhere, is going to want to run their own MUD using Docker. After all, who doesn’t want to run their own game? But unless you want to use everything exactly like the main repository does, including the /etc/server.cfg file, which tells EVERYONE ON THE INTERNET the name of your only in-game immortal, you need to be able to point to your own cloned git repository. This will also allow you to add your own zones, items, classes, etc. to the game; you don’t want to make those changes directly in your game container, because you would lose them every time the source code updated and the game container was rebuilt. The only requirement is that your Docker manager container must be able to access your git repository over the network. So you could clone to a new repo on GitHub, or use your dev station if the repo you set up can be accessed over SSH, or even find a Docker git repository image and use that. Managing the repository, including logins and how to craft that into your GIT_REPO URL, is beyond the scope of this tutorial.

Also, the DikuMUD3 directory is now defined as a volume mount. This will keep us from losing the current state of the repository. If we didn’t do this, the whole game would recompile and relaunch every time the manager container was rebuilt each month.
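One more piece: the Dockerfile copies an entrypoint_manager.sh from the updater directory. At minimum that script has to hand control off to the CMD so cron runs in the foreground, so a bare-bones version looks like the placeholder below; feel free to extend it, for example to run an initial update check on boot:

#!/bin/bash
# Minimal placeholder entrypoint: hand control to the container CMD (cron -f).
exec "$@"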

Last, we need a compose file for the manager, called docker-compose.manager.yml:

version: "3.4"
volumes:
  manager_repo: {}
services:
  master-container:
    image: diku_master:live
    build:
      dockerfile: ./docker/updater/Dockerfile.manager
      context: ../..
    restart: always
    volumes:
     - "manager_repo:/opt/DikuMUD3"
     - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
     - GIT_REPO=https://github.com/Seifert69/DikuMUD3.git
     - GIT_BRANCH=master

If you wanted to customize the git repo used for the game’s source code, you would change the environment variables to point at your repo, including any login configuration you would need. Also, if you leverage the git repo similarly to how we tag the images, you can make a tag or branch called “live”, and only update that branch in git when you are ready to have your main game recompile.
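A rough sketch of that flow with plain git, assuming you use a branch (rather than a tag) and a remote named origin:

git checkout -b live
git push -u origin live
# later, when you are ready for the game to pick up what is on master:
git checkout live
git merge master
git push

You would then set GIT_BRANCH=live in docker-compose.manager.yml so the manager only ever builds from that branch.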

Now that we have all of our files staged, we can finally go to our DikuMUD3 project directory and run the following:

docker-compose -f ./docker/updater/docker-compose.manager.yml build
docker-compose -f ./docker/updater/docker-compose.manager.yml up -d
docker-compose -f ./docker/updater/docker-compose.manager.yml logs -f

If you noticed, we used docker-compose to build the image. I was unable to get docker-compose to start a stack that used a custom image unless it was built by docker-compose. However, images built with docker-compose can still be tagged and otherwise manipulated just like any other Docker image.

The two -f parameters on the logs command are tricky, and their placement is important. The first is a parameter for docker-compose, indicating where the config file is located; the command always comes after it. The second -f tells logs to follow, so the software will sit there reporting all log output. As you watch your container’s logs, you should see it download and build a fresh copy of the DikuMUD3 source code from GitHub. You can also adapt this command to watch the logs from all four game containers as well:

docker-compose -f ./docker/diku/docker-compose.yml logs -f

Finally, the moment you’ve been waiting for. Open a browser and go to http://localhost. On the webpage that appears, click Connect, and BAM! You’re in! Welcome to your own copy of DikuMUD!

If you want to continue to work on this MUD, I would suggest that you create a private git repo that the containers can access, clone the project’s code base, and completely rebuild your containers with the docker-compose.manager.yml file updated to point to your new repo. I was able to do this by downloading a container for GitLab. You can also enjoy the game in its current state; I suggest adding some friends to make the journey more enjoyable.

To recap, in this lesson, we created a Docker stack running a web-browser accessible MUD, and a second stack that builds and maintains the first. We started this by developing an image with the tools needed to compile the source code from scratch and planned for how that image would call Docker commands to recreate the other stack. We also created Dockerfiles to deploy the MUD software into images like we would for any app, but then added scripts to the compiling image to build the game images for us and deploy them, too.

For now, I don’t have a next lesson in mind. But I will be back soon with more to share!


Learning Docker – Your home Docker installation

Earlier today I was talking with some of my coworkers about what I learned about Docker over the last three weeks, and it dawned on me that they didn’t have Docker installed on their computers. My first reaction was to want to say, “Oh, it’s so easy!” While this is mostly true, I wanted to review the setups that I’m aware of to help anyone that wants to use Docker at home. If you need Docker installed for your work environment, then you can use this as a starting point, but you will want more than I am going to write about today.

I can think of five broad scenarios that you might install Docker in:

  • Docker on Linux
  • Docker on Mac
  • Docker on Windows Pro or better using Hyper-V
  • Docker on Windows 10 2004 Pro or better using WSL2
  • Running a virtual machine that runs one of the above scenarios

Docker on Linux

Most distributions of Linux offer a Docker package as part of their standard library. This makes the barrier to entry minimal if you use Linux. However, most distributions freeze their version of Docker at whatever version is current when they make a release. Case in point, Ubuntu Xenial offers 18.09.7 and Ubuntu Focal provides Docker 19.03.8, while Docker has released 19.03.12 as of this writing. Please remember, however, that while the distribution packages are of older versions, the package maintainers do backport security updates into their software, so they are normally only lacking in new functionality.
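As a concrete example, the distribution packages on a recent Ubuntu are a couple of commands away (package names vary a little between distributions):

sudo apt-get update
sudo apt-get install -y docker.io docker-compose
# allow your user to talk to the Docker daemon; log out and back in afterwards
sudo usermod -aG docker $USER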

If you want to be on the cutting edge (or even bleeding edge), you can visit the Docker Hub for Linux package repositories and add their latest release or beta versions to your package management software. This has the benefit of providing you with the latest version of Docker. However, if there is a problem, you will likely need to work with the Docker community and your distribution’s community to work through any issues you encounter.

Docker on Mac

Full disclosure: I have no idea; I don’t own a Mac. However, you can download a copy of Docker Desktop from the Docker Desktop download page. My guess is that it works like the Windows counterpart, but since OS X is a Unix flavor, you should not have any problems with using it. At least fewer problems than Windows users, anyway.

Docker on Windows Pro or better using Hyper-V

If you want to download Docker Desktop for Windows, you will need Windows Pro or better. This is because Docker Desktop requires the use of Hyper-V, Microsoft’s virtualization software, to create any Linux containers that you would want to run. This feature is not available in Windows Home. This will also allocate a chunk of RAM from your OS so the VM can breathe while running your containers. If you have 8GB or less of RAM, you will find that you need to manually start (and stop) Docker Desktop so that your other applications have enough RAM while you are not using it. However, it runs very reliably. I’ve read that Docker Desktop downloads a VM of Alpine Linux to run the actual Linux containers and that the Windows service acts as a bridge into that.

Docker on Windows 10 2004 using WSL2

This is extremely new; I actually opted into Microsoft’s beta program so I could get a copy of Windows 10 that runs WSL 2 back in December. Now that Win10 2004 is out, Windows 10 users can get access to WSL2. What is WSL2? It is the Windows Subsystem for Linux version 2, and it allows you to run native Linux binaries on Windows. No joke. It works great. I’ve been using WSL at work now for over a year to get access to common Linux packages like grep and openssl without needing to hunt down a Windows port of the programs. WSL2 allows for tighter integration, and Docker Desktop is able to leverage those new features to use the Linux software you install on Windows to run Docker. On any version of Windows 10 later than 1607, I believe you can go to the Windows Store and download a copy of Ubuntu, SUSE, Kali, and Debian Linux. Unlike Hyper-V, WSL2 is available to all versions of Windows 10, according to Microsoft.

There’s a couple tricks to getting WSL2 working on your computer if it meets the requirements. I found this excellent guide by Scott Hanselman and Microsoft’s WSL2 installation instructions. Do the Dew and get WSL2 installed on your computer, and you’re ready to run native Linux containers on Windows. Back in January, I also tried to launch the Kubernetes on Docker option, which crashed Docker royally. When I tried again at the start of June, it ran flawlessly. I was also able to run kind to install multiple Kubernetes nodes as containers on my Docker Desktop.

Running a virtual machine that runs one of the above scenarios

If you are still running an older version of Windows or have a virtual environment, you can spin up a VM running Linux or Windows Server and run Docker on that. It is just a little more effort, because you need to install and maintain yet another OS. If you have to go this route and you’re not familiar with Docker or Linux, I recommend using a Linux distribution for which you can find a lot of Docker help. (Truth: Linux is getting easier to use, but is still not meant for people that swear by Computers For Dummies.) On Windows, you will likely want to download a copy of VirtualBox from Oracle and will need to make sure that you have enabled virtualization in your BIOS. If you need help, you will need to search for specific directions on how to do this, as it varies widely from manufacturer to manufacturer.

Other tools you will need

This is strange to say, but you only need whatever tools you currently use to write software or configure computers. Except maybe a solid text editor to edit your Dockerfiles and other config files. I like Notepad++, and others at work swear by TextPad. If you’re running Linux or OS X, you also have the software available on those systems. Scott Hanselman can show you how to get VSCode setup so you can develop in your Windows containers. But the reality is that you will likely continue to use the tools you already do. Until you figure out how to get a container to do it for you.

What else do I need for my work environment?

Leveraging containers is more than just installing Docker. It is providing a container infrastructure that can run those containers in production. It is adopting new resources to store images for team and enterprise use and potentially adopting or adjusting change control processes. You need to consider high availability and backup. So if you want to start using Docker at work but your workplace has not picked up on it, your use will be severely limited to your own development, and you will then need to get your work out of the container to deploy it the way your workplace currently does. Based on what I’m seeing at my work, this needs to be a business-level decision with buy-in at all levels: management to get the shiny new toys to run the containers and the okay to move to that architecture, devs to learn “the new way to do the old thing”, ops to figure out and implement the architecture needed to reliably run the containers, and your customers’ patience as you grow into this, because the change will initially slow you and your projects down as you start learning and doing. Don’t worry, you’ll get faster than before when the change is done. Making containers work right, like most things in life, ends up being a team effort.


Learning Docker – Registration, please

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we saw how to use Docker for development and deployment, including how to make a full image that could be used for production based on your development environment and how to upgrade and revert a production container. We also learned a couple different ways to clone a Docker volume, including some of the challenges of cloning a live volume.

This lesson, we will create a private Docker Registry. We will also use the OpenStack Swift storage driver to store your registry in a storage bucket. We will also enable SSL encryption using the Let’s Encrypt service and setup password authentication.

Up to this point, we have used the local computer to store your working images. This is fine for a personal development environment, but at some point, we need a dedicated place to store your images that can be accessed by other systems, and this is a requirement for production environments. This is even helpful if you are working exclusively in development, or even exclusively on your own computer. This is because, if done right, you have a means to use the registry as an image backup system, too.

To make this reality of a “backed up” registry possible, we are going to leverage the Docker Swift driver and mount a bucket to store your images. While you could use a cloud provider, there are other private solutions: my QNAP NAS system supports acting as a Swift and S3 storage provider, and so I am using that to store my images. (For the record, I tried this initially as S3 storage, but there was a protocol error. 😛 ) When I am done, I will have a means to store my images off the computer that can survive a hard drive failure.

Docker has directions for creating a registry at https://docs.docker.com/registry/deploying/ and some configuration options for using the OpenStack drivers at https://github.com/docker/docker.github.io/blob/master/registry/storage-drivers/swift.md. We also cover some authentication options and setup SSL encryption. However, as of June 2020, the registry binary in the Docker image does not use the current version of the Let’s Encrypt ACME v2.0 protocol and has not worked since November 2019, so we’re going to compile a fresh copy for your image.

Let’s start there. Create a folder for this project, and run the following:

docker run --rm -v $(pwd):/mnt -it golang:alpine /bin/bash -c "apk update; apk upgrade; apk add --no-cache bash git openssh;go get github.com/docker/distribution/cmd/registry; mkdir /mnt/bin; cp -v /go/bin/registry /mnt/bin"

or on Windows, updating <Full Windows Path>:

docker run --rm -v "<Full Windows Path>:/mnt" -it golang:alpine /bin/bash -c "apk update; apk upgrade; apk add --no-cache bash git openssh;go get github.com/docker/distribution/cmd/registry; mkdir /mnt/bin; cp -v /go/bin/registry /mnt/bin"

This sets up a go environment to compile the latest version of the registry executable and builds it for us.

With all the changes we need to make, we are now going to build your own custom registry image with your settings baked in. If we didn’t need the updated registry binary, we could instead pass in environment variables to configure the container. So now we need a couple things: a Dockerfile, a config file for the registry, and a password file. We also need to copy in the updated registry executable. Now let’s create your own version of the registry image. Create a Dockerfile:

FROM registry:latest
COPY ./bin/registry /bin/registry

And run:

docker build -t my_registry .

This gets us a stock-configured registry container with a fresh registry binary.

Let’s get a copy of the stock config file. The registry documentation at https://docs.docker.com/registry/configuration/ indicates the file is stored at /etc/docker/registry/config.yml. Let’s create a registry folder to hold your new config files, and make a copy of the one file in the registry folder:

docker run --rm my_registry cat /etc/docker/registry/config.yml > registry/config.yml

Or if on Windows:

docker run --rm my_registry cat /etc/docker/registry/config.yml > registry\config.yml

Now open the file in your text editor of choice (but remember to save with Unix newlines). The default settings are pretty sparse. We are going to add your settings for Swift storage, using Let’s Encrypt for an SSL certificate, and a password file. Set the config.yml to the following, correcting placeholders as needed:

version: 0.1
log:
  accesslog:
    disabled: true
storage:
  cache:
    blobdescriptor: inmemory
#  filesystem:
#    rootdirectory: /var/lib/registry
#    maxthreads: 100
#  s3:
#    accesskey: <accesskey>
#    secretkey: <secretkey>
#    region: US-East-1
#    regionendpoint: <S3 URL>
#    bucket: <bucket_name>
#    secure: true
#    v4auth: true
#    rootdirectory: /
  swift:
    username: <username>
    password: <password>
    authurl: https://storage.myprovider.com/auth/v1.0 or https://storage.myprovider.com/v2.0 or https://storage.myprovider.com/v3/auth
    insecureskipverify: true    
    container: <bucket_name>
    rootdirectory: /
http:
  addr: :5000
  host: https://<dns_name>:5000
  tls:
    letsencrypt:
      cachefile: /etc/docker/registry/letsencrypt.json
      email: <your_email>
      hosts: [ "<dns_name>" ]
  headers:
    X-Content-Type-Options: [nosniff]
auth:
  htpasswd:
    realm: docker-registry
    path: /etc/docker/registry/htpasswd
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3

I left two commented-out storage drivers: the S3 driver (which I think would have worked if my NAS was behaving) and the filesystem driver. You may only have one storage driver configured per registry, but I wanted to share other options as well. The HTTP options should reflect the externally available DNS name for your computer; this is a requirement for Let’s Encrypt. Also, you will need your registry to be available over the Internet at port 443 for the initial certificate handshake to work. If you are using your home network, then you should be able to log into your home router and setup a TCP firewall rule connecting 443 from the Internet to your computer’s port 5000; directions on how to do this are beyond the scope of this tutorial. However, after the certificate is received, you can edit the firewall rule to use port 5000 instead of 443.

Next we will configure authentication. This allows you to password protect your registry. If you don’t plan on this being a web-accessible registry, you could skip this. I’m including this because I am trying to implement some security. Let’s create an htpasswd file, replacing username and password with your choices:

docker run --rm httpd htpasswd -nbB username password >> registry/htpasswd

Or on Windows:

docker run --rm httpd htpasswd -nbB username password >> registry\htpasswd

Run the above command for as many users as you would like.

We can test everything using bind mounts to place your files where needed. On Linux:

docker run --rm -v $(pwd)/registry:/etc/docker/registry -p "5000:5000" --name registry my_registry

And on Windows:

docker run --rm -v "<Full Windows Path>/registry:/etc/docker/registry" -p "5000:5000" --name registry my_registry

Quickly run the following command to force the container to get an SSL certificate:

docker login https://<dns_name>

Once this is done and you can successfully log in, you can update your firewall rule, including deleting it. You will need the rule back when your cert comes up for renewal, though (Let’s Encrypt certificates are only valid for 90 days).

Now test out your login and ability to push files:

docker login <dns_name>:5000
docker image tag my_registry <dns_name>:5000/registry:latest
docker image push <dns_name>:5000/registry:latest

The process to push a file is to log in to the registry (I think this only needs to be done once, until the password expires), then tag the image to the new registry, then push the image with its new tag name.

At this point, it works for me. Now we can make your permanent image, and upload it to the registry for safe keeping. Let’s update the Dockerfile with your two config files:

FROM registry:latest
COPY ./bin/registry /bin/registry
ADD ./registry/config.yml /etc/docker/registry/
ADD ./registry/htpasswd /etc/docker/registry/

Now rebuild your image:

docker build -t my_registry:latest .

Last, let’s create a docker-compose.yml file for a clean container definition:

version: "3.8"
volumes:
  letsencrypt_data: {}
#  registry_data: {}
services:
  registry-container:
    image: my_registry
    restart: always
    ports:
     - "5000:5000"
    volumes:
#     - "registry_data:/var/lib/registry"
     - "letsencrypt_data:/etc/docker/registry/letsencrypt.json"

Again, I left comments here in case you are using your local computer as registry storage. Uncomment these lines in order to build a named volume and to mount it for use with your registry. You don’t need this if you’re using the S3 or Swift storage drivers.

Now let’s stop your test container and launch docker-compose:

docker container stop registry
docker-compose up -d

Now at this point, the container is up, but your SSL certificate is on your local computer. You need to copy it back. Fortunately, there is a docker command to copy files into and out of the container:

docker container ps
docker cp ./registry/letsencrypt.json <container_name>:/etc/docker/registry

Let’s restart the container to get the SSL cert loaded:

docker-compose stop
docker-compose start

Now re-tag your newest my_registry image and push it. For safe keeping.

docker login <dns_name>:5000
docker image tag my_registry <dns_name>:5000/registry:latest
docker image push <dns_name>:5000/registry:latest

You will notice that some layers are already in your registry if you were using the Swift or S3 drivers. This is because the original layers of your image were uploaded to your external storage while we were testing, and so they were skipped.

To summarize, we created a private Docker Registry using the OpenStack Swift storage driver to store your registry in a storage bucket externally. We also enabled SSL encryption with help from Let’s Encrypt and setup password authentication.

For the next lesson, we will create a web-accessible MUD.


Learning Docker – Developing with Docker

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we covered how to write a docker-compose.yml file to manage the desired configuration of multiple containers and how to use the docker-compose command to build, start, and stop those containers.

This lesson, we will examine how to leverage Docker for development and deployment, including making a full image that could be used for production and how to upgrade and revert your production container. We will also cover how to clone a Docker volume.

For application developers, one of the most challenging problems is ensuring that what you develop and test works the same way in production where your customers are using it. I’ve seen it plenty of times where you have a “testing environment” that has all the components of your production environment, but the software versions are out of date, the data in production is not what’s in test, and even “test stopped working”, usually because of a lack of ongoing maintenance in that environment. Normal means of combating this is to make a copy of the production environment that can be used to develop on. However, when you’re looking at cloning a database, 6 application servers and their supporting processes, things can get messy.

However, using Docker can change this paradigm in two ways. First, you can at any time create a new image using the docker container commit command, making a full image of the container that can then be run on any other Docker host; Docker volumes can likewise be cloned by mounting the named volume into a different container and using the tar command to make a full copy (per https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes). Second, if efforts are made to segregate data from the application being worked on, it becomes possible to update the application using the volume attachment method we used in making the MySQL database from the last two lessons.
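To make those two ideas concrete, here is roughly what each looks like on the command line; the names in angle brackets are placeholders, and the alpine image is just a convenient way to get a shell with tar in it:

docker container commit <container_name> <new_image_name>
docker run --rm -v <source_volume>:/from -v <destination_volume>:/to alpine sh -c "cd /from && tar cf - . | tar xf - -C /to"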

So how do we achieve development zen? Divide and conquer, baby.

First, you want to ensure that all application data is stored in its own volume. The problem with docker container commit is that the data in your application gets baked into the image. While this might sound appealing because you’ll always have a point to go back to, when you want to test your app against fresh data, you do not have a clean way to inject it into your app. You end up running docker container commit again and then either copy the fresh data out into your development container, or copy your application changes into the new clone. Both options can be very ugly on a good day.

Second, you want to isolate your application (the code you added to the image to make it an application) from the base image you are developing with. This does not mean another volume, necessarily. Rather, you want to be in a position that you can identify specific directories (not files) that you need to populate for your application to work.

To demonstrate, let’s build a PHP application that allows you to upload and download files. Note that using an app like this in real life is HIGHLY insecure, but the development concepts here can be applied easily. We will use the php:apache image, identify the /var/www/html directory as the place where your application will live, and create the directory /var/www/files as the place where the application data will live, backed by a mounted volume.

So let’s start by creating two of the three volumes:

docker volume create --name files_dev
docker volume create --name files_test

Defining your environments:

  • files_dev is the development environment used while building your application.
  • files_test is the testing environment and will be a copy of production when we are testing how an upgrade will work.
  • files_prod is the production environment and is what your users see today (and hopefully see tomorrow…). This will get created when we make a production container.

Now let’s build your development environment. Start by creating a project directory, a php directory in there, and an empty index.php file in the php directory. We don’t need a custom image; we’ll just use php:apache, attaching your data volume at /var/www/files and the php directory at /var/www/html. Unfortunately, the command is different if you are on Windows vs Linux. Run these commands from inside your project directory.
Linux:

docker run -d -v files_dev:/var/www/files -v $(pwd)/php:/var/www/html -p 8080:80 --name php_dev php:apache

Windows:

docker run -d -v files_dev:/var/www/files -v "<FULL_PATH_TO>\php:/var/www/html" -p 8080:80 --name php_dev php:apache

Connect to the container shell with the following command:

docker exec -it php_dev /bin/bash


You can now run ls /var/www/html and see the empty PHP file there. However, if you run ls -l /var/www, you will notice that the owner of your html and files directories is none other than root. This will break your webserver, so quickly run the following to fix ownership:

chown www-data:www-data /var/www/*

Now exit the container shell and save the following in your index.php file:

<?php
$files_path = '/var/www/files';
$add_path = isset($_GET['dir'])?$_GET['dir']:'/';
?>
<html>
  <head>
    <title>File Share</title>
    <style>
table, th, td { border: 2px solid black }
span { border: 2px solid black; padding:4px 2px}
    </style>
  </head>
  <body>
    <h1>Location: <?=$add_path?></h1>
    <form method="POST">
      <span><input type="file" name="file" /><input type="submit" name="action" value="Add File" /></span>
    </form>
    <table>
      <thead><tr><th>Name</th><th>Size</th><th>Date</th></tr></thead>
    </table>
  </body>
</html>

If you go to http://localhost:8080, you should see your bare-bones webpage. There is no PHP code that has any real effect, so you should see an empty table and a file upload bar.

Now for the real reason we mounted the local directory: you can work on your app to your heart’s content and don’t need to create a new image in order to test code changes. There’s nothing holding you back, and you’re developing on the same software you will deploy to. If you discover that you need more software in your core image, you add to it like we did in previous lessons, build, and redeploy the upgraded image with the same mounts, and then development continues.
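For example, if your app later needed the MySQL driver, extending the core image is a two-line Dockerfile (mysqli here is only an illustration); build it under a new tag and rerun your docker run command with the same mounts:

FROM php:apache
RUN docker-php-ext-install mysqli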

So let’s finish version 1 of the app (entire file):

<?php
$files_path = '/var/www/files';
$add_path = isset($_GET['path'])?$_GET['path']:'';
$action = isset($_POST['action'])?$_POST['action']:'';
$path = $files_path . $add_path;
$realpath = realpath($path);
$status = '';
if ($realpath != $path ) {
    $add_path = '';
    $path = $files_path . $add_path;
    $action='';
}
switch ($action) {
    case 'Add File':
        $struct = $_FILES['file'];
        if ($struct['error'] == UPLOAD_ERR_OK) {
            $tmp_name = $struct['tmp_name'];
            // basename() may prevent filesystem traversal attacks;
            // further validation/sanitation of the filename may be appropriate
            $name = basename($struct['name']);
            if (move_uploaded_file($tmp_name, "$path/$name")) {
                $status = "<span class='ok'>File $name uploaded.</span>";
            } else {
                $status = "<span class='bad'>File $name not uploaded.</span>";
            }
        }
        break;
    case 'Remove File':
        if (!isset($_POST['tgt_path']) || !$_POST['tgt_path']) { break; }
        $tgt_path = $_POST['tgt_path'];
        $name = basename($tgt_path);
        $full_path = realpath($files_path . $tgt_path);
        if (!$full_path) { break; }
        if (unlink($full_path)) {
            $status = "<span class='ok'>File $name deleted.</span>";            
        }
        else {
            $status = "<span class='bad'>File $name not deleted.</span>";            
        }
        break;
    default:
        if ($realpath && is_file($realpath) && !is_dir($realpath)) {
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . basename($realpath) .'"');
            readfile($realpath);
            exit;
        }
}
?>
<html>
  <head>
    <title>File Share</title>
    <style>
table, tr { border: 1px solid black }
span { border: 2px solid black; padding:4px 2px}
form { margin: 10px }
.r { text-align:right }
.g { background-color: lightgray }
.ok { background-color: limegreen; color: green }
.bad { background-color: coral; color: red }
    </style>
  </head>
  <body>
    <?=$status?>
    <h1>Location: <?=$add_path?></h1>    
    <form enctype="multipart/form-data" method="POST">
      <span><input type="file" name="file" /><input type="submit" name="action" value="Add File" /></span>
    </form>    
    <table>
      <thead><tr><th>Name</th><th>Size</th><th>Date</th><th>Action</th></tr></thead><tbody>
      <?php 
$count = 0;
foreach (scandir($path) as $filename) {
    $bg = (++$count % 2) ? 'g' : '';
    if ($filename == '.' || (!$add_path && $filename == '..')) { continue; }    
    $filepath = realpath($path . '/' . $filename); 
    $newpath = str_replace($files_path, '', $filepath);
    $size = filesize($filepath);
    $mod = strftime('%Y-%m-%d %H:%M:%S',filemtime($filepath));        
    $value = 'Remove File';
    echo "<tr class='$bg'><td><a target='_blank' href='?path=$newpath'>$filename</a></td><td class='r'>$size</td><td>$mod</td><td><form method='POST'><input type='hidden' name='tgt_path' value='$newpath'><input type='submit' name='action' value='$value' /></form></td></tr>";
}
      ?>
      </tbody></table>
  </body>
</html>

This working PHP program lists and allows for the upload of files. It will also delete them. Uploading a file with the same name replaces the file. So for now, upload some files, delete one or two, and make sure it works.

Now we’re ready to deploy to production. Create a Dockerfile and save the following into it:

FROM php:apache
COPY ./php/* /var/www/html
RUN mkdir /var/www/files && \
    chown -R www-data:www-data /var/www && \
    chmod a-w,o-rwx /var/www/html && \
    chmod go-rwx /var/www/files

This grabs your base image, copies your PHP application into the correct directory in the image, pre-creates your files directory, and sets ownership and permissions on the folders that need them. Specifically, it removes write access for all accounts and all access for users outside the www-data group from your webserver directory. Let’s build the image now:

docker build -t my_php_app:1.0 .

This builds your image using the Dockerfile as my_php_app and tags it as version 1.0. Since we’re also deploying to production, let’s also create a docker-compose.yml file:

version: "3.8"
volumes:
  files_prod: {}
services:
  php-container:
    image: my_php_app:prod
    restart: always
    ports:
     - "80:80"
    volumes:
     - "files_prod:/var/www/files"

If you are paying attention, the tag on your image should have caught your eye. Your docker-compose.yml file is looking for a my_php_app:prod image, but we just made a my_php_app:1.0 image. Why do this? The answer has to do with recovering from a failed upgrade. We are going to use the “prod” tag to identify the specific image we want to run in prod, even if it is not the latest one. Your docker-compose.yml file is the authoritative configuration for how your production environment should work, and in my experience, configuration files like this should be changed as little as possible. Also, an image can have multiple tags, but a specific tag can only point to one image at a time. So if your new prod image does not work as you want, you can set the prod tag back on the old image and rerun “docker-compose up” to put things back the way they were before. So let’s add the prod tag to your image:

docker tag my_php_app:1.0 my_php_app:prod
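
Since a tag is just a pointer to an image, it’s worth a quick sanity check that both tags now reference the same image:

docker image ls my_php_app

Both the 1.0 and prod tags should show the same IMAGE ID.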

Now let’s deploy your production server:

docker-compose up -d

A look at http://localhost should give you your app in all its file-storing splendor. Watch out, Google! Let’s add some files to this container, but make sure they are not the same files you uploaded to your dev instance. Bringing the stack up also created your production volume. Let’s see what’s out there:

docker volume ls

You may notice that there are already a few volumes here; most of them are from prior lessons. The one we are looking for ends with “_files_prod”. The volume name starts with the name of the project folder your docker-compose.yml file is in, lowercased and with spaces removed. We will need the full name of that volume in a moment.
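
If the list is getting long, docker volume ls can filter by name, and docker volume inspect will show details such as where the data lives on the Docker host. The full volume name in the second command is an assumption based on a project folder named “project”; substitute yours:

docker volume ls --filter name=files_prod
docker volume inspect project_files_prod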

Google sent us a message: Drive is way superior because users can make folders to organize their files. Guess we need to upgrade your app!

Go back to your index.php file and update it to the following:

<?php
$files_path = '/var/www/files';
$add_path = isset($_GET['path'])?$_GET['path']:'';
$action = isset($_POST['action'])?$_POST['action']:'';
$path = $files_path . $add_path;
$realpath = realpath($path);
$status = '';
if ($realpath != $path ) {
    $add_path = '';
    $path = $files_path . $add_path;
    $action='';
}
switch ($action) {
    case 'Add File':
        $struct = $_FILES['file'];
        if ($struct['error'] == UPLOAD_ERR_OK) {
            $tmp_name = $struct['tmp_name'];
            // basename() may prevent filesystem traversal attacks;
            // further validation/sanitation of the filename may be appropriate
            $name = basename($struct['name']);
            if (move_uploaded_file($tmp_name, "$path/$name")) {
                $status = "<span class='ok'>File $name uploaded.</span>";
            } else {
                $status = "<span class='bad'>File $name not uploaded.</span>";
            }
        }
        break;
    case 'New Dir':
        if (!isset($_POST['dir']) || !$_POST['dir']) { break; }        
        $dir = $_POST['dir'];
        $name = basename($dir);        
        $full_path = realpath($files_path);
        if ($dir != $name || !$full_path) { break; }
        mkdir ($full_path . '/' . $name);
        break;
    case 'Remove Dir':
        if (!isset($_POST['tgt_path']) || !$_POST['tgt_path']) { break; }        
        $tgt_path = $_POST['tgt_path'];
        $name = basename($tgt_path);
        $full_path = realpath($files_path . $tgt_path);
        if (!$full_path) { break; }
        if (count(scandir($full_path)) != 2) {
            $status = "<span class='bad'>Directory $name not deleted, is not empty.</span>";            
            break;
        }
        if ($full_path && rmdir($full_path)) {
            $status = "<span class='ok'>Directory $name deleted.</span>";            
        }
        else {
            $status = "<span class='bad'>Directory $name not deleted.</span>";            
        }
        break;
    case 'Remove File':
        if (!isset($_POST['tgt_path']) || !$_POST['tgt_path']) { break; }
        $tgt_path = $_POST['tgt_path'];
        $name = basename($tgt_path);
        $full_path = realpath($files_path . $tgt_path);
        if (!$full_path) { break; }
        if (unlink($full_path)) {
            $status = "<span class='ok'>File $name deleted.</span>";            
        }
        else {
            $status = "<span class='bad'>File $name not deleted.</span>";            
        }
        break;
    default:
        if ($realpath && is_file($realpath) && !is_dir($realpath)) {
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . basename($realpath) .'"');
            readfile($realpath);
            exit;
        }
}
?>
<html>
  <head>
    <title>File Share</title>
    <style>
table, tr { border: 1px solid black }
span { border: 2px solid black; padding:4px 2px}
form { margin: 10px }
.r { text-align:right }
.g { background-color: lightgray }
.ok { background-color: limegreen; color: green }
.bad { background-color: coral; color: red }
    </style>
  </head>
  <body>
    <?=$status?>
    <h1>Location: <?=$add_path?></h1>    
    <form enctype="multipart/form-data" method="POST">
      <span><input type="file" name="file" /><input type="submit" name="action" value="Add File" /></span>
    </form>
    <form method="POST">
      <span><label for="dir">New directory name</label><input id="dir" name="dir" size="20" maxlength="20" /><input type="submit" name="action" value="New Dir" /></span>
    </form>
    <table>
      <thead><tr><th>Name</th><th>Size</th><th>Date</th><th>Action</th></tr></thead><tbody>
      <?php 
$count = 0;
foreach (scandir($path) as $filename) {
    $bg = (++$count % 2) ? 'g' : '';
    if ($filename == '.' || (!$add_path && $filename == '..')) { continue; }    
    $filepath = realpath($path . '/' . $filename); 
    $newpath = str_replace($files_path, '', $filepath);
    $is_dir = is_dir($filepath);
    if ($is_dir){
        $size = '';
        $mod = '';  
        $button = ($filename != '..') ? "<input type='submit' name='action' value='Remove Dir' />" : '';
    }
    else {
        $size = filesize($filepath);
        $mod = strftime('%Y-%m-%d %H:%M:%S',filemtime($filepath));        
        $button = "<input type='submit' name='action' value='Remove File' />";
    }
    echo "<tr class='$bg'><td><a target='_blank' href='?path=$newpath'>$filename</a></td><td class='r'>$size</td><td>$mod</td><td><form method='POST'><input type='hidden' name='tgt_path' value='$newpath'>$button</form></td></tr>";
}
      ?>
      </tbody>
    </table>
  </body>
</html>

Now let’s turn this into an image candidate and test it out. This will take a few steps. First, let’s make the image:

docker build -t my_php_app:2.0 .

Now things get tricky. We are going to create a new container that clones the files_prod data into files_test. What’s tricky is that the files_prod volume is in use, so while we are copying the data, the data can change, or worse, the application can hang. This is a challenge for any application. The safest way is to stop the main application so no data is in use while copying. The way your application is configured may also allow for the use of storage snapshots or backups to recover data in a usable state. However, in this Docker environment, we can safely copy your files over because, unless you are editing a file while the copy is happening, there will be no problems. First, get the name of the volume that has your production file data:

docker volume ls

Now, run the following to clone your data, replacing <prod_volume_name> with the volume name you just found:

docker run --rm -v "<prod_volume_name>:/mnt/prod" -v "files_test:/mnt/test" ubuntu /bin/bash -c "rm -fr /mnt/test/* ; tar -cf - -C /mnt/prod . | tar -xvf - -C /mnt/test"
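
If you are worried about files changing mid-copy, you can follow the “stop the main application” advice from above and pause the prod stack around the clone. A quick sketch, run from the prod project directory:

docker-compose stop     # stop the prod container but keep it defined
# ...run the volume clone command above...
docker-compose start    # bring prod back online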

The Docker website offers the --volumes-from option to grab the volumes. This is a quicker way to mount the volumes on your container, but this will mount all volumes from the specified container in the same locations on your container. I prefer having control of where the volumes are mounted. Also, the Docker website commands will write a tarball on the Docker host. This is normally preferable to a straight volume-to-volume copy, as you can use the tarball file to “start over” if needed. This is doubly helpful if you have to stop your prod environment in order to get your production data. However, since this is a tutorial and Windows users would need to jump through additional hoops, I opted for a direct copy.
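
For reference, here is roughly what that tarball-on-the-host variant looks like. The volume name is a placeholder, the tarball lands in your current directory, and Windows users would need to swap $(pwd) for a suitable path:

docker run --rm -v "<prod_volume_name>:/mnt/prod" -v "$(pwd):/backup" ubuntu tar -czf /backup/files_prod.tar.gz -C /mnt/prod .
docker run --rm -v "files_test:/mnt/test" -v "$(pwd):/backup" ubuntu /bin/bash -c "rm -fr /mnt/test/* ; tar -xzf /backup/files_prod.tar.gz -C /mnt/test"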

Last, create a testing container:

docker run -d -v "files_test:/var/www/files" -p "4444:80" --name test_my_php_app my_php_app:2.0

Notice that I used port 4444 to test with. I tried to make the port number as different from prod and the development container as possible to ensure that I know which container I’m working with. Now visit http://localhost:4444 and see if it works. You should see what looks like a copy of your production environment, but with the ability to create a directory. Make a directory or two. Excellent.

Now let’s upgrade your production website. Apply the prod tag to your newest app image, and rerun docker-compose:

docker tag my_php_app:2.0 my_php_app:prod
docker-compose up -d

You should see docker-compose recreating the container. Remember that while the container is being recreated, the app is down. Once it’s done, go to http://localhost and test your upgraded application.

If you want to simulate a rollback, tag the old version and rerun docker-compose:

docker tag my_php_app:1.0 my_php_app:prod
docker-compose up -d

Now, if you created a directory with version 2.0 in production and then rolled back, you will notice that the app does not work exactly as planned. That is because the v1.0 app did not do anything with directories, but they are now in your file data. This brings up an important point: you are still responsible for fixing any new data that conflicts with your old app. In this case, we could connect to the container and remove any directories that were created. However, sometimes this is a lot harder, like in the case of a failed database upgrade. You will still need to plan for what to do if that happens. Docker only makes it easier to revert your application code in this scenario.
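
For this example, the cleanup could be a single command run against the live container; a rough sketch, using the service name from the docker-compose.yml above:

# remove every directory (and its contents) under the files share
docker-compose exec php-container find /var/www/files -mindepth 1 -type d -exec rm -r {} +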

To summarize, we saw how to use Docker for development and deployment, including how to make a full image that could be used for production based on your development environment and how to upgrade and revert a production container. We also learned a couple of different ways to clone a Docker volume, including some of the challenges of cloning a live volume.

For the next lesson, we will create a private Docker Registry. We will also use the OpenStack Swift storage driver to store the registry in a storage bucket.


Learning Docker – Compose yourself

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we learned how to use volumes to store data outside of a container, enabling us to upgrade an application without losing its data.

This lesson, we cover how to use the docker-compose command and the docker-compose.yml file to manage container configuration. We will also cover how multiple containers “talk” to each other.

Docker allows us to attach volumes and map host ports to containers. While these are all neat features, they also really complicate the command needed to start the container. For example, take a look at the Nginx page on Docker Hub at https://hub.docker.com/_/nginx and see how it talks about mounting two directories if you run the image in read-only mode. Let’s reconfigure it to also listen on port 443 with some SSL certs, map in the local copy of our website so we can develop against it, and add a location to save webserver logs so that we can read them easily.

This is your command:

docker run -d -p 80:80 -p 443:443 --read-only -v $(pwd)/nginx-cache:/var/cache/nginx -v $(pwd)/nginx-pid:/var/run -v $(pwd)/log:/var/log/nginx -v $(pwd)/content:/usr/share/nginx/html:ro -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -v $(pwd)/certs:/etc/nginx/certs:ro nginx

This is 5 directories and a file plus two ports in the command line. Yes, you can use your command history to rerun it later. But this is not well organized, and it does not make things easy for others to run your image. And if you need to start more than one container, did you actually start the right one, or did you try to run the same image twice? Are you sure everything will work together?

The docker-compose command and the docker-compose.yml file can help make this more manageable. The YAML (.yml) file is a written description of how to start one or more containers as a single unit, including defining how volumes and ports are attached to each container. This not only makes things more readable, but it is also more easily shared with others and more easily managed: the file (and not your shell history) holds the most current configuration of your containers. Also, you can use it to visually verify which services will be exposed to the open network.

Let’s get started with a docker-compose.yml file by creating a new project folder. The options for a docker-compose.yml file can be found at https://docs.docker.com/compose/compose-file/. We will start by defining a MySQL container using the directions at https://registry.hub.docker.com/_/mysql:

version: "3.8"
volumes:
  mysql_data: {}
services:
  mysql-container:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
     - "3306:3306"
    volumes:
     - "mysql_data:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: example

YAML (.yml) files use a structured syntax that takes a little getting used to for non-programmers. I’m not going to discuss the syntax, but will highlight what each line does:

  • The first line defines the version of docker-compose syntax you want to use. The version should be at least 3.0; the current version as of this writing is 3.8.
  • The volumes line defines the named volumes that you want created.
  • The following line creates a volume named mysql_data. The curly braces indicate that we are passing no parameters to the volume create command. We might need parameters if we need to specify a driver or if the driver needs additional configuration information.
  • The services line indicates that the following elements define services, and each service starts at least one container copy of an image. A service can specify that more than one copy should be run at the same time, allowing for redundancy.
  • The next line is the name of the service. If Docker is aware of this service, then it will control the existing container that provides this service.
  • The image line defines the name of the image to use. If the desired image does not already exist on the Docker host, it is fetched from known registries. If the image name in the file does not match the name of the current image, then the container is destroyed and rebuilt.
  • The command line replaces the content of the CMD function in the image. It can do this in conjunction with the ENTRYPOINT option in a Dockerfile, which we have not covered yet. This specific line passes a parameter to the MySQL server.
  • This restart line tells Docker that this image should always restart when Docker starts. This is very useful and important if you are using Docker to run a program and do not want to rely on manual intervention to start it back up.
  • The ports line specifies that we are defining ports. Since we could need multiple ports to be accessible, we can define more than one port to map. For example, Nginx can also listen on 443 for HTTPS connections, and we might want to make both ports 80 and 443 available for external use. The line after that specifies a port mapping of host port 3306 to container port 3306. That is how we will access this container externally.
  • Last, the environment line indicates the environment variables that will be set in the container. Again, as shown in the last lesson, these values only work if the image is expecting to use them. This environment variable sets the initial password for the root account in the server.

Now, from the same directory as the .yml file, run:

docker-compose up

You will see Docker spring to life, creating a MySQL server instance. You will also get to see the initialization of MySQL (which we didn’t see last lesson). When it is done (it should say “ready for connections”), use Control-C to terminate the Docker container. Go back to your docker-compose.yml file, and change the image line so that it just uses “mysql” instead of “mysql:8.0”. Run “docker-compose up” again. Note that this time, there is a lot less text. This is because (like last lesson) our database data is on our mysql_data volume, which was already populated by the first run of docker-compose.

Now let’s add our phpMyAdmin image to our docker-compose.yml file. But instead of calling an image we have already built, let’s import a Dockerfile for this resource. In the directory with your docker-compose.yml file, create a new folder called my_phpmyadmin, and copy the Dockerfile from lesson six into that folder. Your file tree will look like this:

Project
 |
 +- docker-compose.yml
 |
 +- my_phpmyadmin
    |
    +- Dockerfile

Next, add the following service to the docker-compose.yml file (note the indent before the service name):

  phpmyadmin-container:
    build: my_phpmyadmin/.
    ports:
     - "8080:80"
    environment:
      IP_ADDRESS: mysql-container

Now run the following two commands to get both containers running:

docker-compose build
docker-compose up

The first command builds all out-of-date images using their Dockerfiles, which is needed for the phpMyAdmin image. The second command then runs both containers at the same time. Now you can go to http://localhost:8080 and log in as root with the password “example”. To stop running your containers, use Control-C.

Important to point out: we gave the name of the mysql service, mysql-container, as the IP_ADDRESS used by our phpMyAdmin container. Containers can reference each other using the service name of the container. This enables the containers to communicate with each other, and it is essential because we don’t know in advance what IP address a container will get. Also, services can communicate with each other even if the port is not exposed to the host. Don’t believe me? Then remove the “ports:” section from the mysql-container service in the docker-compose.yml file, rerun docker-compose up, and log into phpMyAdmin again. It should work fine. Those who are security conscious will immediately jump on this as a security flaw. However, by default, containers can only communicate with each other in this manner if they are defined in the same docker-compose.yml file. I verified this by bringing up a new container manually and then trying to communicate with the other containers. It did not work, even by IP address. Additionally, Docker does allow the creation of virtual networks to further limit which containers in this file can talk to each other. If you want network isolation between sets of containers, you define networks and assign each container to the intended network, as sketched below.
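
Here is a minimal, hedged sketch of what that could look like in a compose file. The network names are made up for illustration, and the other service settings from earlier are omitted for brevity; services that do not share a network cannot reach each other:

version: "3.8"
networks:
  db_net: {}
  web_net: {}
services:
  mysql-container:
    image: mysql:8.0
    networks:
     - db_net
  phpmyadmin-container:
    build: my_phpmyadmin/.
    networks:
     - db_net
     - web_net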

That wasn’t too bad. With a 20 line file, we created two containers and a volume, and defined how they all work together. To run them as a background process and then stop them, you can run these two commands from inside your project directory:

docker-compose up -d
docker-compose down

To summarize, we covered how to write a docker-compose.yml file to manage the desired configuration of multiple containers and how to use the docker-compose command to build, start, and stop those containers.

For the next lesson, we will examine how to leverage Docker for development, including making a full image that could be used for production.


Learning Docker – Using volumes

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we created an image by injecting an application into the image without using packages and set the image up to use environment variables as a way to change settings in the container when the container is created.

This lesson, we cover how to use volumes to keep data even after deleting a container.

Normally, a Docker container is ephemeral: the container and all its contents are intended to be thrown away, and application developers are expected to design their docker applications for this. So, if someone wants to upgrade, they destroy the old container and install the new version. But then how does this benefit an application like MySQL?

The solution is in the use of volumes. A volume is a chunk of storage that is “attached” to a container. It can be a file or folder on the Docker host that is mounted into the container, allowing the container to store data on “the outside”. Another option is a named volume managed by Docker, which is mounted over a folder inside the container. So for this lesson, we are going to create a volume that will store our MySQL data, create a container that runs an older version of MySQL, add some data, then delete and upgrade to a new version without losing our data.

We can easily create a volume with docker volume create:

docker volume create --name mysql

Now we can create our container. We are going to use the docker run command line to get this running, borrowing some directions from https://registry.hub.docker.com/_/mysql:

docker run --name mysql_80 -d -v "mysql:/var/lib/mysql" -e "MYSQL_ROOT_PASSWORD=example" -p "3306:3306" mysql:8.0 --default-authentication-plugin=mysql_native_password

The new -v option specifies a volume to use and where to mount it. You can specify an absolute path or, if using a *nix system, a relative path for the volume, and Docker will attach the Docker host’s directory or file at the location you specify after the colon inside the container. Volume names that are not a path (do not start with “/” or “./” and are not a valid full Windows path) are expected to be volumes that were created in Docker, referenced either by the full volume ID or by the name assigned when creating the volume. You may also make the volume read-only inside the container by adding “:ro” to the end of the volume argument. You may have more than one volume attached to a container, if desired.
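
A few illustrative forms of the -v argument, using throwaway containers that just list the mount (the host path is an example, and the named volume is the mysql volume we created above):

docker run --rm -v "$(pwd):/data" ubuntu ls /data        # host directory (bind mount)
docker run --rm -v "mysql:/data" ubuntu ls /data         # named volume
docker run --rm -v "mysql:/data:ro" ubuntu ls /data      # same volume, mounted read-only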

The parameters after the mysql:8.0 image name are command arguments that are passed into the image. One Dockerfile statement we have not seen yet is ENTRYPOINT, which defines the specific program to run; the CMD statement then supplies arguments to the program indicated by ENTRYPOINT. If you add parameters after the image name, these replace the contents of the CMD statement from the image’s Dockerfile. In this case, the new command value directs MySQL to use the older mysql_native_password authentication plugin instead of the newer default the image would otherwise use.

If you have your phpMyAdmin container running, you can now visit http://localhost:8080 and log in as user “root”, password “example”. If not, you can build the image and run it following the directions from the last lesson.

Now that we have a MySQL server running, let’s add some data so that we can clearly see that the data is stored in the volume we created instead of in the container. We’re going to connect to a shell in the container and create a new database and table and store a row of data.

docker exec -it mysql_80 /bin/bash
mysql -uroot -pexample
create database vehicle;
create table vehicle.cars (model varchar(16));
insert into vehicle.cars values ("Pinto");

Before we leave the mysql command and the container shell, let’s confirm that our data is there.

select * from vehicle.cars;

If it says “Pinto”, then we’re set. Exit out of the mysql shell and the container shell. Next, stop and delete the MySQL container:

docker container stop mysql_80
docker container rm mysql_80

Now if we had not attached a volume to /var/lib/mysql, our precious Pinto would be lost. But we attached a volume there, and so our data should safely still be in that volume. So now let’s create a new container using the latest version of MySQL. Well, it happens to be the same version, but the data should still be there because of the volume, even though we have a new container:

docker run --name mysql_latest -v "mysql:/var/lib/mysql" -p "3306:3306" mysql --default-authentication-plugin=mysql_native_password

Now you might have noticed that we did not include the environment variable for the root password. This is only needed when the database is being created. Since we already have the database files in our volume, we should not need it anymore. To confirm this, we are running the docker command without the -d parameter, so the MySQL server runs in the foreground for as long as the docker run command does. If the last lines say “ready for connections” and there is no complaint about a missing MYSQL_ROOT_PASSWORD, then we are in good shape. Hit Control-C to get your shell prompt back, then run the following to get into a container shell:

docker start mysql_latest
docker exec -it mysql_latest /bin/bash

From here, we run our select query to see what’s inside:

echo "select * from vehicle.cars;"|mysql -uroot -pexample

And our Pinto should still be there, even though we deleted the original container and are in a new one. Volumes rock!

There are two important things to point out about volumes. First, it might be possible to share a volume between two containers at the same time, depending on the driver used to create the volume. The driver will specify the conditions for sharing the volume; some will permit read-write access to multiple containers at the same time, some only allow read-write to one container but read-only to others, and still others only allow access to one container at a time. Read up on the driver documentation to see what you can do with a volume.

Second, depending on the driver, your volume may be accessible from multiple docker hosts at the same time. In an enterprise environment, this allows you to “move” your container from one host to another, which is extremely helpful for resource allocation and management, as well as conducting server maintenance. However, even if the volume can be accessed from many hosts, the limitations mentioned above may still apply. For example, the volume might only be accessible by one container at a time, but since the volume is accessible from many hosts, you have your choice of host to run the container (and access the volume) on.

To summarize, we showed how to use a volume with a container to persist data separately from the container that created and uses the data.

For the next lesson, we will learn how to use docker-compose to define an application that spans multiple containers.


Learning Docker – A PHP app

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we reviewed how to make a program in a container available to the network and how to run a container in the background.

This lesson, we learn how to add an application to an image without using the package manager. We will also cover how to setup an image so we can customize it at runtime.

This lesson is a huge apple to take a bite out of. Up to this point, we have worked with only a couple files or commands to make docker do what we want it to. Today, we are creating our own phpMyAdmin image with a running webserver. This is a lot more complicated than I originally thought, and hopefully we’ll cover a bit about the challenges of building an image.

I’m covering this because there are three main ways to get an application into an image: the package manager from the image base OS, copying it off your computer into the image, or having the Dockerfile run the OS commands needed to get the application you want into the image. In the first case, you are letting the OS provider handle this for you. It is normally a good option. However there are some cases when you want to have more control over what goes into your image. In that case, it needs to be done on your computer and copied in, or you need to get the Dockerfile to do it for you.

My original thought for this lesson was, “Hey, phpMyAdmin is just a set of PHP files that get copied over and a config file updated so it works. This will be easy-peasy.” However, we will need to add some functionality to our PHP image, as it does not have all the extensions that phpMyAdmin requires. We’ll put this all together in a moment.

Because I believe in automation (I’m lazy), I’m going to have the Dockerfile set up phpMyAdmin for me. The advantage is that if I want an updated application, I may only need to change a couple of parameters in the Dockerfile to get the latest software. In some cases, I may not need to change anything at all and can simply rebuild the image.

So before I dive into the Dockerfile, let’s review how I normally setup the LATEST version of phpMyAdmin (meaning we need to download a fresh copy of phpMyAdmin):

  1. We have a server that has a webserver and a recent version of PHP installed, including all needed PHP libraries.
  2. We download a copy of the phpMyAdmin source from the Internet.
  3. We extract the phpMyAdmin tarball to where we want to run the pages from.
  4. We configure the webserver, if needed, to enable web users access to the phpMyAdmin site.
  5. We configure phpMyAdmin to work with our MySQL (or MariaDB) server.

So how can we do this with Docker? Let’s break down the process:

  1. We can grab a PHP image with a working webserver baked in. Looking at https://hub.docker.com/_/php I can see that there is php:apache that we can use, or if needed, we could start with an OS image and use the package manager to install a web server and PHP.
  2. We can use wget or curl to download the latest phpMyAdmin using their GitHub URL, https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.xz
  3. We can pipe the downloaded file directly into tar to extract the files where we need to. This will save us from needing to delete the downloaded file later, again saving space.
  4. If we drop phpMyAdmin directly onto the root of the webserver, then we should not need any additional configuration for the webserver. This also follows the Docker philosophy of only one application per container.
  5. We need a way to set up the phpMyAdmin configuration to point to a MySQL server. We could copy in a replacement config file or even map a config file from the local host into the container, but there has got to be a better way…

One thing we should do is test the image we think that we’ll use before we use it with docker build. We can start the basic php on apache image:

docker run --name php -d -p "8080:80" php:apache

Now we can connect to a shell in the container and try to install anything that’s missing so we know what to do for our Dockerfile:

docker exec -it php /bin/bash

First, we need to download and extract the phpMyAdmin file, and trying to do it by hand in the container helped me figure out a command to use in the Dockerfile, too. Run this to get phpMyAdmin into place:

curl -sL https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.xz|tar -xvJC /tmp && mv /tmp/phpMyAdmin*/* /var/www/html/ && rmdir /tmp/phpMyAdmin*

This downloads the archive with curl, untars it into a versioned directory under /tmp, moves the extracted files from that directory into /var/www/html, and removes the now-empty directory the archive created.

After extracting the phpMyAdmin files, I opened http://localhost:8080 to see what would happen. I got a PHP error:

phpMyAdmin - Error
The mysqli extension is missing. Please check your PHP configuration.

Hmm. Missing mysqli extension? I did a search on Google for “docker php with mysqli” and found https://github.com/docker-library/php/issues/776 with a hint:

You’ll need to extend the image with your own Dockerfile:

FROM php:7.2-apache-stretch
RUN docker-php-ext-install mysqli

hairmare on https://github.com/docker-library/php/issues/776

Wait… There’s a separate command to load additional PHP extensions? Looking back at the PHP page on Docker Hub, there is a section that covers the docker-php-ext-install command at https://registry.hub.docker.com/_/php under the header “How to install more PHP extensions”.

So let’s add the mysqli library and see what happens:

docker-php-ext-install mysqli

Next, we need to restart the webserver. Do this by exiting from your container shell, then stopping and restarting the container:

docker container stop php
docker container start php

Taking a step back, this would be a great point to build a new base image of PHP that includes the mysqli library, in case we had another project that used it. But since I want to only focus on one Dockerfile in this lesson, I’m not going to do that. You can if you want to for practice; we’ve covered how to in the previous lessons up to this point!

Going to the webpage still gives us an error. What’s happening?! Then I realized that while we are connected to the container, we are running as the root user. This means that the files we put in place are owned by root, and their permissions may not let other accounts read them. In our case, the webserver runs as the www-data user, so we need to set all the application files to be owned by that account instead. Connect to your container’s shell again and run the following:

chown -R www-data:www-data /var/www/html

Visiting the website again finally gives us a phpMyAdmin login page. This is great. Now we just need to configure the app to point to a MySQL server.

If you don’t have your own MySQL server, then you’ll need to trust that this will work until our next lesson, where we’ll create a MySQL container. If you do have your own server, then follow along and let’s see if we can get this container to work with your server.

phpMyAdmin uses the config.inc.php file to define the database servers it can connect to. For this to work, you will need the network IP address of the MySQL server. If you are running Windows, open the Command Prompt and run ipconfig. On Linux or Mac, open a terminal or a shell and run ifconfig. Newer Linux OSes can also run ip addr. Run the following command to create your config.inc.php file:

sed 's/localhost/<IP_ADDRESS>/' /var/www/html/config.sample.inc.php | sed "s/'blowfish_secret'] = ''/'blowfish_secret'] = 'weneedakeyheretologin'/"> /var/www/html/config.inc.php

Don’t forget to replace <IP_ADDRESS> with your MySQL server’s IP address. You can also change the blowfish key if you want, but we just need something there so we can log in.

Now let’s visit the webpage and login. I know my MySQL server’s root account name and password, so that’s what I logged in with. And it appears to be working! I can see all the databases that are on my server.

Now we have a working container running phpMyAdmin and connected to a MySQL server. Now we need to turn our steps into a Dockerfile. But we have not covered one important piece: how to allow the image user to supply the server IP without needing to add their own config.inc.php file to the image. For this, we will use environment variables and a script to process them.

Environment variables are values stored in the shell environment of the OS (or container). Most shells allow these values to be set when a process starts, and this enables a Docker image to receive customized information for the container it is running in. This would allow us to have one phpMyAdmin image run in multiple containers, each connected to a different database server.

But we need to get our image set up to accept environment variables. To do that, we need to add something that will process these values for us. For inspiration, I looked through the phpMyAdmin image at https://hub.docker.com/r/phpmyadmin/phpmyadmin and https://github.com/phpmyadmin/docker to see how they did it. What? You wondered why I didn’t just use that one? Because we’re learning how to build images, not just use them!

Looking at https://github.com/phpmyadmin/docker/blob/master/Dockerfile-debian.template (our image is based on Debian), they have a line to copy a local config.inc.php file into the image. Our solution is in that config file:

if (isset($_ENV['PMA_QUERYHISTORYDB'])) {
    $cfg['QueryHistoryDB'] = boolval($_ENV['PMA_QUERYHISTORYDB']);
}

Their config file uses the $_ENV special variable in PHP to set the value used! We can do the same to set the server IP address. We can use the following command to create the desired config.inc.php file:

sed "s/'localhost'/\$_ENV['IP_ADDRESS']/" /var/www/html/config.sample.inc.php | sed "s/'blowfish_secret'] = ''/'blowfish_secret'] = 'weneedakeyheretologin'/"> /var/www/html/config.inc.php

Now whatever the environment variable IP_ADDRESS is set to is what phpMyAdmin will use as the server address.
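
If you ran that sed inside your test container, you can eyeball the result from the host (the test container from earlier is still named php); the server host line should now reference $_ENV['IP_ADDRESS'] instead of a hard-coded address:

docker exec php grep "host" /var/www/html/config.inc.php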

Before we build and run our own image, let’s stop and clean up the container we were testing in. From the host’s shell prompt:

docker container stop php
docker container rm php

Let’s take everything we’ve learned and turn it into a Dockerfile:

FROM php:apache
RUN docker-php-ext-install mysqli \
&& curl -sL https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.xz|tar -xvJC /tmp \
&& mv /tmp/phpMyAdmin*/* /var/www/html/ \
&& rmdir /tmp/phpMyAdmin* \
&& sed "s/'localhost'/\$_ENV['IP_ADDRESS']/" /var/www/html/config.sample.inc.php | sed "s/'blowfish_secret'] = ''/'blowfish_secret'] = 'weneedakeyheretologin'/"> /var/www/html/config.inc.php \
&& chown -R www-data:www-data /var/www/html

One thing I did differently from earlier files: I used the backslash to continue the RUN command across multiple lines. Putting all of this onto one line makes it harder to read, and we still have two commands that are ridiculously long (the curl and sed commands). You can use the backslash to split these up too, but I felt that having all the piped commands on one line helps me logically break down the process “one step at a time”. The RUN command:

  • installs the mysqli library into PHP,
  • downloads and extracts the latest phpMyAdmin software into a temporary directory,
  • moves the files into /var/www/html where the webserver expects them,
  • deletes the unused folder in /tmp,
  • creates our config.inc.php file with the $_ENV variable, and
  • sets the file owner to www-data so the webserver can see them.

Now let’s create our image and fire up a container. Change to the directory that has your Dockerfile and run the following:

docker build -t phpmyadmin .
docker run --name my_phpmyadmin -d -p "8080:80" -e "IP_ADDRESS=<IP_ADDRESS>" phpmyadmin

The “-e” parameter allows you to set an environment variable, in a “name=value” format. You can have as many environment variable parameters as you need; just include multiple “-e” options. Don’t forget to replace the <IP_ADDRESS> with your server’s IP address so this container will work correctly. If you are running MySQL on your development box like I am, then you need to use your actual IP address, not localhost or your Docker virtual IP address.
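
To double-check that the variable actually made it into the container, you can ask the running container directly:

docker exec my_phpmyadmin printenv IP_ADDRESS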

If everything is working, you should now be able to visit http://localhost:8080 and login with an account from your MySQL server. And you didn’t hard code the server address into the image; instead you set the address when you ran the image. Great work!

To summarize, we created a Dockerfile that downloads and installs a fresh copy of phpMyAdmin and configured the image to accept the server address when the container is created. We were able to do this by starting a copy of the container we wanted to base our image from and using a shell in the container to learn how we needed to setup our Dockerfile.

For the next lesson, we cover how to use volumes to keep data even after deleting a container.


Learning Docker – Getting out

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we covered how to use an existing custom image as a building block for an application. Easy-peasy.

This lesson, we are covering how to make a container accessible on the network. We are also covering how to use a generic application image from the Docker Hub to build an application.

Docker has a couple neat network features that make it network friendly:

  1. Programs in a container are free to access the network outside the container, host and network firewall permissions permitting.
  2. Programs in a container that can accept network traffic may only do so if Docker is told that the program may be accessed from outside the container.

Some security experts will question the usefulness of the first point, but it makes your job as an application developer easier if your application needs to get data over the network from somewhere else. The second feature is cool because it helps protect your application’s containers from unwanted access. However, this means that you are responsible for identifying how the application can be accessed.

We are going to grab an image of the Nginx HTTP server, stash a custom home page into it, and allow access to the server from the network. Note, however, that the Dockerfile does not control network access to the container; that is decided when the container is run.

Docker splits out the data of the application (your image) from the definition of how to run it. This provides flexibility on how to run an image: you can mount directories from the docker host into the container to allow for data persistence (does not go away when the container is deleted) or for rapid development purposes (use the web server image to test your application from your local computer’s filesystem, before baking it into one application image). You can define persistent volumes, which allows you to carry forward container data from one image to another (like upgrading MySQL). And you can specify what ports you want the local host to use to allow network communication with a program in your container.

The good (even great) news is that you do not have to use the same host port number as the program in your image expects. This means that you could run four copies of nginx at the same time; even though all four expect to use port 80 in each container, you can map the container host’s ports 8000, 8080, 8888, and of course, 80 to each of the different containers. The admin running your containers becomes responsible for communicating to users which host port connects with which container. If you are running the containers on your computer, then you are that admin 🙂
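
A quick sketch of that claim, if you want to try it (the container names and host ports are arbitrary, and the last command cleans up):

docker run -d --name web_8000 -p 8000:80 nginx
docker run -d --name web_8080 -p 8080:80 nginx
docker run -d --name web_8888 -p 8888:80 nginx
docker run -d --name web_80 -p 80:80 nginx
docker rm -f web_8000 web_8080 web_8888 web_80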

Let’s get started. First, create a project directory and create an index.html file:

<html>
<head><title>Hello world!</title></head>
<body><p>You tagged me!</p></body>
</html>

Next, create your Dockerfile:

FROM nginx
COPY index.html /usr/share/nginx/html

We omit the CMD statement, so the CMD statement from the base image is used instead. The makers of this image intended for their image to be used as a building block for other developers’ applications, so there are directions on how to use this image at https://hub.docker.com/_/nginx. The highlights:

  1. The web files are stored at /usr/share/nginx/html, so you can copy your website there or mount a local directory that has the files to serve.
  2. You can replace the Nginx config file at /etc/nginx/nginx.conf.
  3. They have a template system for embedding environment variable values into the configuration of the webserver. This is useful if you want to further customize how the webserver runs without making a new image every time.
  4. There is an image that supports Perl scripts, allowing you to write a Perl web application hosted on Nginx without any additional software needed.

Generate and run your image:

docker build -t my_nginx .
docker run -d --name go_nginx -p 8080:80 my_nginx

This docker run command has a couple different options worth talking about. First, the --name parameter is used to name the container. The container still gets its randomly generated ID, but you can also refer to it by name now. This will make it easier to stop the container when we are done. We gave the container a slightly different name than the image to make it clear which was the container name and which was the image name.

Second, there is -d instead of -it. This container is being run as a daemon -d instead of interactive with a terminal -it. Without the -d, docker would wait until the application stopped on its own (it won’t for this image) or until you used Control-C to terminate it. Instead, you get the full identifier of your new container and your prompt back immediately; Docker runs this container in the background.

Last, there’s the -p, which maps a host port into the container. In this case, we are mapping port 8080 on the container host to port 80 inside the container, where Nginx is expecting network traffic. If you get an error about the port being in use by another program, you can always select a different host port, like 8888, 12345, or anything between 1025 and 49151 (don’t ask how I know those numbers). Do not change the :80; Nginx is set up to listen on that port inside the container, and you would otherwise need to reconfigure it to use a different port.

Let’s test that this is working. Visit http://localhost:8080 on the computer your container is running on. If you can connect to that computer over the local network, you can try http://<ip_address>:8080 from another computer on your network. You should see your webpage.

You can stop your container from running with the following:

docker container stop go_nginx

You can also restart it later:

docker container start go_nginx

Note that this command does not need the -d option; your container knows that it runs in the background.

When you are finished, you can clean up your container with the following command:

docker container rm -f go_nginx

To summarize, we used an image from the Docker Hub that is designed to be used as the basis for an application to build a webserver with our (one page) website. We enabled computers on the network to access the webserver running in the container. We also learned how to run the container as a background (daemon) process.

For the next lesson, we cover how to add an application to an image without using a package manager and how to add options that can be set when an image is run.


Learning Docker – Reusing images

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we covered how to add a package using the image’s package manager to add software to an image.

This lesson, we are going to use an image we created as the base for a new image. You will need the image created in the previous lesson for this one.

One of the main benefits of Docker is reusability. If you break out a large project into installed applications and customization layers, then you have the potential to reuse your applications in additional projects. This is fundamentally the way most Docker image creators operate:

  1. Find a base image that has the application software needed for the project.
  2. Inject customizations into the base to provide the desired functionality, e.g., adding source code for a language runtime like Java, PHP, or Python.
  3. Bake into an image and deploy in a container to confirm that it works as expected, or when ready, for actual use.

However, sometimes you can’t find a base image that suits you. In that case, you can use a low level base image (like ubuntu, busybox, or scratch), and build the core application software your project needs into it. An example of this might be installing a custom compiled version of PHP with support libraries that are not a part of the normal PHP image.

For this lesson, we’re going to use the Python 3 image created in the previous lesson to create a Python application that writes the first 100 Fibonacci numbers. Yes, we could use the Python 3 base image from the Docker Hub, but knowing how you can reuse your own images will allow you to save yourself work by creating reusable images, instead of taking one giant Dockerfile from an old project and reusing it for a new project.

Start by creating a new folder for this project, and create a file called fib.py:

#!/bin/python3
now = 1
last = 0
for count in range(100):
  print (last)
  now, last = now + last, now

If you are on Windows, ensure that you save the file with Unix newlines using a text editor like Notepad++.

Next, make your Dockerfile:

FROM my_python
COPY fib.py /bin
RUN chmod +x /bin/fib.py
CMD [ "fib.py" ]

The RUN chmod statement is a shell command to make the fib.py file executable. If you are on a Linux box creating the image, this needs to be done so the fib.py program can be run as a command (“fib.py”) instead of needing to be run as a Python argument (“python3 fib.py”). And putting this file in /bin puts the program in the container’s PATH, so you don’t need to be in the same folder to call it (or run “./fib.py”). On Windows, as a side effect of Windows not using the same permission system as Linux, Docker copies the file over with read and execute permissions already set, which can be a security issue if you don’t want a file to be runnable as a program.

Generate and run your image:

docker build -t fib .
docker run --rm -it fib

Seeing some of those big numbers makes me feel like a math wiz.
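
If you want to see the file permissions discussed above for yourself, you can override the image’s CMD with a quick ls against the image you just built:

docker run --rm fib ls -l /bin/fib.py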

A couple of things to point out about how the last image and this image work together to make our application. First, only the CMD statement of the image being run (the last layer) gets run with docker run. Remember that in our last lesson, we had CMD ["python3"]. That did not run with this application. We only used the executables and libraries that make Python work from the last image. The second point is that if we update the Python image from our last lesson, it does not automatically update the Python used by our application. Instead, our application image is permanently tied to the Python image that was CURRENT at the time this image was created. In order to update the Python that our application uses, we need to recreate the application image after we recreate the Python image.
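
In practice, that means rebuilding in dependency order. A rough sketch, assuming the two project folders from these lessons sit side by side (the folder names are placeholders):

docker build -t my_python ./add-a-package     # rebuild the base image first
docker build -t fib ./reusing-images          # then rebuild the application image on top of it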

To summarize, we showed how to use your own, previously built image as a base image for a new project. We covered which CMD statement is used when Docker runs an image. Last, we discussed how your application image will use the version of all parent images that was available when your image was built, and in order to update a dependency in your application, you will need to update the dependencies AND your application image.

For the next lesson, we cover how to make a Docker application network accessible.


Learning Docker – Add a package

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we covered how to add files to an image and run them. This included having the file that was run call other files we copied in.

This lesson, we are going to add a package to the image and setup Docker to launch that software when it creates a container.

Packages are Operating System specific bundles of software. Depending on the version of Linux you run, you could leverage RPM, DEB, or APK packages to deploy new programs to your computer. Containers, since they are based on an operating system, have the ability to install packages using the base OS package manager. For both Windows- and Linux-based containers, software can also be added to an image by running an installation executable in the Dockerfile.

To get started, you must know how to use the underlying package manager for the base OS you plan to use for your Dockerfile. For this tutorial, I will use the ubuntu/latest image, as I am comfortable with Ubuntu package management (and I don’t feel like learning how Alpine Linux manages software today). Most distributions make it easy to install the software you want; each package identifies the packages it needs to run correctly. To take advantage of this fact, we will create an image that contains Python 3. We will then create a second image that uses the first to print the obligatory “Hello world!” to the console.

Before we go creating Docker files, let’s play with the Ubuntu image first, shall we? Run the following:

docker run -it ubuntu

This will drop you into a bash shell, even on Windows (that’s slick). Note that the user is root, and the host name is a random alphanumeric value. That value is the container ID, which you can get with the commands:

docker ps
docker container ls

This bash shell is an Ubuntu bash shell, and by default has the core Ubuntu software installed, including the apt package management suite. From here, we can test a few commands to see how to install the software we want into our image. If you are looking to install a specific piece of software through the package manager, you can usually find the package you want with a well-formed Google search. Since we want to install Python 3, the following should work if my memory serves me:

apt-get -y install python3

Wait, what’s this?

root@ce0b6cd38d85:/# apt-get -y install python3
Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package python3
root@ce0b6cd38d85:/#

That package can’t be found. However, the reason why is very simple: we haven’t updated the package manager with what packages are available. So we need to run the following command first:

apt-get update

A list of package mirrors will spring to life as apt-get fetches the available package indexes. It will take a while too: Ubuntu has a lot of good software to choose from, and we are downloading the entire catalog of it.

When that’s done, rerun the apt-get install command that failed the first time, and it will install Python 3 into our container. Note that this is an important point: the container is your base image, plus a “working” layer that tracks your running changes beyond the initial source image. If you stop and restart the container, you will lose no data. But if you destroy and recreate your container, you will lose all your changes and have to start from ground zero. You can, however, convert your container to an image, but then your “working” layer becomes a permanent part of your image, and you potentially introduce bloat into your image. But there are ways to work around that. 🙂
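
That “convert your container to an image” step is the docker commit command. A minimal example, run from another shell on the host (substitute your container’s ID or name):

docker commit <container_id_or_name> my_python_snapshot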

Let’s test our python installation out:

echo 'print ("Hello world!")' | python3

If you get “Hello world!” again, then Python 3 installed correctly into our container. Now let’s make this into an image. When you’re ready, run “exit” to leave your docker container, and you can remove it if you would like:

docker container ls -a
docker container rm <container_id_or_name>

Create an empty folder for your python image. In there, create a Dockerfile with the following:

FROM ubuntu
RUN apt-get update && apt-get install -y python3
CMD [ "python3" ]

This introduces the RUN command. RUN takes the place of you connecting to the shell in your container and typing in the commands. Reading the Docker documentation at https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run, they indicate that it is best practice to join your run commands into one RUN call, as each RUN call generates an intermediary image layer.

It is important to point out that whatever you do with the RUN command stays in your image, including any temporary files that you do not want to be a part of your image. Our particular RUN command will download the entire Ubuntu package catalog and LEAVE IT IN YOUR IMAGE. You are responsible for cleaning up after any RUN command you use. If you want to clean up in this case, you can substitute the following RUN command in your Dockerfile, which adds a command to clean the package cache:

RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*

Now change your directory to where your Dockerfile is and let's build and run this image:

docker build -t my_python .
docker run -it --rm my_python

The CMD statement causes the Python shell to launch when the image is run. You can run the following Python command:

print ("Hello world!")

and Python obediently echoes the string on the console. To get out, hit Control-D then enter.

Because it helps to understand why cleanup is important, I created two Dockerfiles, each using one of the two RUN statements in this tutorial, so that I could see the difference in size. Here’s what I saw:

D:\Desktop\Learning Docker\03. Add A Package>docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
my_clean_python     latest              a4f160c1518a        5 minutes ago       113MB
my_python           latest              23c7fae3b613        7 minutes ago       135MB

The cleanup cut 22MB out of my image. That is a space savings of 16%. Cleanup pays.
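
If you are curious exactly which layer that space lives in, docker history breaks an image down layer by layer and shows the size each one contributes, so you can compare the two images yourself:

docker history my_python
docker history my_clean_python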

To summarize, we learned how to launch a base image to see how it works. We covered how to use package managers to add new software into a container and an image. And we wrote a Dockerfile that built a new image with that software already installed.

For the next lesson, we cover how to use our own image as a base image for a new project.


Learning Docker – Copy a file

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the previous lesson, we had a brief primer on Docker and building images, created a two-line Dockerfile, and built and ran an image from that.

This lesson, we are going to add our own files into the image we want to create. After all, I don’t know anyone that wants to write their entire application as a sh script in the CMD line of a Dockerfile. So we will build two images. The first image will move the sh command into a file and run that. The second will expand on that and run a couple of different shell scripts.

Before starting, we need to review how files get stored in an image and discuss layers. An image is a virtual file system composed of layers. An image can have multiple layers, and an image that is based on another image uses all the layers of that base image. Docker has two features that make this really neat: image layers overlay into the virtual filesystem, and each layer only contains changes. The first point means that the virtual filesystem is built by taking the files in the first layer, then adding the files from the second layer, and so on. This is like extracting a zip file, then extracting a second zip on top, then extracting a third on that. The files from the last layer win in case of conflict, so if two layers had /bin/ls, the ls file from the last layer is used. The second point means that the layers (and by extension the image) are small: only changes are tracked. In our last lesson, there were no new files, so that layer was really tiny. For this lesson, the layer only needs to store our one file and the code to run the script; the busybox image has all the layers we need for this to work.
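
If you want to see these layers for yourself, docker image inspect can list the layer digests of any image you already have locally, for example the busybox image from the previous lesson:

docker image inspect --format "{{json .RootFS.Layers}}" busybox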

For the first image, start by creating an empty folder for your image “project”. In there, create a file called helloworld.sh and add the following:

#!/bin/sh
echo Hello World!

NOTE: If you are on Windows, the .sh files must be saved with Unix-style newlines. I was able to use Notepad++ and clicked on Edit->EOL Conversion->Unix to get this to work. Otherwise you get the error ‘standard_init_linux.go:211: exec user process caused “no such file or directory”‘ when you run the image in a container.

Also create a Dockerfile file in the folder:

FROM busybox
COPY ./helloworld.sh /helloworld.sh
CMD ["/helloworld.sh"]

This introduces the COPY command. COPY works very similarly to the cp (or copy) shell command. The COPY command expects one or more source paths and one destination path. The source paths are relative to the path given to docker build. The destination must be absolute, and uses the paths of the image's virtual file system. Note that the files you want to copy must be present in the current or a child directory from where you run docker build, as this directory gets temporarily copied into Docker for image creation.
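
As an aside, COPY can take several sources at once, as long as the destination is a directory ending in a slash. A quick sketch (goodbyeworld.sh is just a made-up second script, not part of this lesson):

# copy two scripts from the build context into /scripts/ in the image;
# with more than one source, the destination must end with a /
COPY ./helloworld.sh ./goodbyeworld.sh /scripts/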

Now change your directory to where your Dockerfile is and let's build and run this image:

docker build -t copyfile1 .
docker run -it --rm copyfile1

Did you get that “SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host” message? This warning appears because Windows does not use the Linux permission system, so Docker grants every user account in the container read and execute permissions on all files and directories that you copied over. This ensures that your image will run, but from a security perspective it makes it easier for malicious software and users to access your data. We will cover how to correct this in a later lesson.

Now that we have run the file that we added to the image, let’s make things more complicated. Create a new folder and add the following three files: whoareyou.sh, greet.sh, and a Dockerfile.

whoareyou.sh

#!/bin/sh
echo What is your name?
read name
/greet.sh $name

greet.sh

#!/bin/sh
clear
echo Hello, $1!
echo At $(date) I said hello to $1.
ls /

Dockerfile

FROM busybox:glibc
COPY ./*.sh /
CMD ["/whoareyou.sh"]

Run the following in the new Dockerfile directory to bake your image and run it:

docker build -t copyfile2 .
docker run -it --rm copyfile2

Now you should be prompted for your name, and when you hit enter, the screen clears, your image greets you, tells you when it greeted you, and lists the root of the image’s virtual filesystem. Not bad for a few lines of text in three files, right?

To summarize, we reviewed how to add files into an image and execute them. We also covered how the image is made of layers and why this saves space. Last, we explained why a shell script written on Windows might fail in an image if the newlines are not in Unix format.

For the next lesson, we will explore how to install software packages into our image.


Learning Docker – Hello World!

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

Before diving in, I want to give a brief overview of Docker and some of the concepts we will need to apply. Docker works by providing an environment, called a container, for applications to run in, regardless of the underlying OS. Each container runs an image. There are many images available free for use, especially those listed on Docker Hub at https://hub.docker.com.

However, if you want your application to run in Docker, or any other container platform that uses Docker containers such as Kubernetes, you will need to make your own images. Every image can be used as the basis for a new image, e.g. the Tomcat images are based on Java images. So a good way to start building your own image is to identify an existing image that has the tools you will need, and then add to it. This also means that you can make your own “starter images” containing the common software that you need for multiple applications.
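
As a peek ahead at what that looks like, a Dockerfile for one of your applications could simply start FROM your own starter image and add only the application-specific pieces. All of these names are made up for illustration:

# hypothetical starter image of yours with your common tooling baked in
FROM registry.example.com/my-base-tools:1.0
# add only what this particular application needs on top of it
COPY ./app /app
CMD ["/app/start.sh"]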

Last, as a note, it is good development practice to isolate the top-level programs that make up your application into separate images that run as individual containers, e.g. building a Python program and the RabbitMQ broker that it uses as separate images. This enables you to update each component separately and can cut down on development time. You don’t have to follow this guidance, but you may discover that you can reuse that RabbitMQ image in another application.
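
As another peek ahead, this time at docker-compose (which this series eventually relies on), that Python-plus-RabbitMQ split could look something like the sketch below. The app image name is made up for illustration, while rabbitmq:3 is a real image on Docker Hub:

version: "3.8"
services:
  app:
    image: my-python-app:latest   # hypothetical image holding your Python program
    depends_on:
      - broker
  broker:
    image: rabbitmq:3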

First, we will create the dreaded “Hello world” image. There is one on Docker Hub, but that image comes with a binary that is run, and the application primarily confirms that your Docker installation is working. My goal is to build my first image.

To build our “Hello World” image (or any image), we need to create a Dockerfile file. This file tells Docker how to build your image. There is a reference on the commands that you can place into a Dockerfile (and a good best practices guide to boot) at https://docs.docker.com/develop/develop-images/dockerfile_best-practices/. However, we only need two commands for your Dockerfile: FROM and CMD.

Make a directory on your computer, and create a file called Dockerfile with the following content:

FROM busybox
CMD ["sh", "-c", "echo Hello world!"]

The FROM statement identifies which image name (repository) and optional tag you want to use as the starting point for your image. By default, this will pull from Docker Hub. However, you can attach other registries (including your own) and get your images from there. We chose the “busybox” image, as it is small (1.22MB) and provides a shell that we can run a command against. “busybox” specifies no tag, so Docker assumes that we want “busybox:latest”. The people who built the busybox image can also tag different versions of their image, using different architectures, libraries, and versions. At the time of this writing, the image contained busybox 1.31.1 and could be compiled using one of three different C libraries: musl, glibc, and uclibc. The image maintainers identified the tags “busybox:musl”, “busybox:glibc”, and “busybox:uclibc” so that you as the image developer could pick which kind of busybox binary you want. It also happens that the “busybox:latest” tag is assigned to the same image as “busybox:musl”, so that is the busybox version we are using.
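
If you wanted one of those specific flavors instead of whatever latest points at, you would simply name the tag in the FROM line. For example, this hypothetical variation of our Dockerfile pins the glibc build:

FROM busybox:glibc
CMD ["sh", "-c", "echo Hello world!"]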

The CMD statement is the program to run inside the Docker container. In the case of Apache HTTPD or Tomcat, we would supply the command we would type at a shell prompt to run the server. For our “Hello World” image, we are only going to call the sh shell (which busybox provides) and pass it the -c and “echo Hello world!” arguments, which causes sh to write “Hello world!” to the console.
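
For comparison, a server image keeps its server in the foreground from CMD. Something along these lines is roughly how an Apache HTTPD image would do it (the exact command varies by image, so treat this as a sketch):

# run Apache in the foreground so it stays the container's main process
CMD ["httpd", "-DFOREGROUND"]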

Now that we have a working Dockerfile file, we can bake our first image! From a shell prompt, change to the directory you created for your Dockerfile and run this command:

docker build -t helloworld .

Docker will spew out a bit of information regarding its progress, and will download the busybox image we specified in the Dockerfile if it is not already on your system. If it is successful, you should see two lines similar to the following:

Successfully built <image_id>
Successfully tagged helloworld:latest

The first line gives you the image identifier, and this number is unique for every image on your system. The number it displayed for you is YOUR OWN image identifier. You can use this identifier to specify an image to manipulate when using docker image manipulation commands. The second line indicates the tag applied to your image. The -t argument for docker build specifies the tag you want to give the image. You can also add your own subtag, like “helloworld:v1”, “helloworld:busybox”, or even “helloworld:im_awesome”. You can also use a tag to identify an image to manage with the docker image commands.
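
If you want to try a subtag of your own, the workflow is identical; only the name changes:

docker build -t helloworld:v1 .
docker run -it --rm helloworld:v1
docker image ls helloworld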

If you get a “SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host” message, don’t worry about this for now. We will cover this in the next lesson.

You have your image. Let’s try it out! Run the following Docker command:

docker run -it --rm helloworld

If all is well, you will see “Hello world!” in your shell. If not, the Dockerfile is probably mistyped. The -it tells Docker to run interactively and to use the console for input and output. Otherwise you won’t see a thing. The --rm tells Docker to destroy the container it creates to run your image once it has finished running. For now, we don’t need these containers littering your disk.

So now you have your first Docker image. But it is using the output that I provided you. Edit your Dockerfile so that it outputs your own personalized message, save it, and rerun the docker build command from above. Once you see “Successfully built”, run the docker run command from above. Now you have built and run your first customized Docker image.

Now as you begin building your own images, what happens to the old ones? Did you change the tag name in your docker build command? If not, what happened to the old image? Let’s find out. Run the following:

docker image ls

Your output should look something like this:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
helloworld          latest              0021c983fd24        About an hour ago   1.22MB
<none>              <none>              9cde4988b759        About an hour ago   1.22MB
busybox             latest              1c35c4412082        2 days ago          1.22MB

I didn’t change the tag in my build command, and Docker only allows one image to have a given tag. The last image built gets the tag. So what can I do with the image that has no tag? I can do a few things. I can run it:

docker run -it 9cde4988b759

I can also delete it:

docker image rm 9cde4988b759

If you get a message that there are stopped containers using the image, you can prune the stopped containers first:

docker container prune

So to wrap up, we briefly covered what a Docker container and an image are. We created a simple Dockerfile that writes “Hello world!” using the latest busybox image off Docker Hub and talked about tags. We built an image from the Dockerfile and ran it. We made a second image with a custom message and ran that one, too. And we listed the images on our system and deleted an unused image. I hope you found this helpful, and for the next lesson, we will add custom files into our images.
