Learning Docker – Fun in MUD

Welcome to my tutorials on how to leverage Docker to deploy your application! This is meant as much to help me learn, so it doubles as a “what worked for me” series.

In the last lesson, we created a private Docker Registry using the OpenStack Swift storage driver to store our registry in an external storage bucket. We also enabled SSL encryption with help from Let’s Encrypt and set up password authentication.

For this lesson, we will create a web-accessible MUD as a well-architected Docker application. This is a HUGE project with two Docker stacks.

What’s a MUD? MUD is short for Multi-User Dungeon, and it is an online text game based on the telnet protocol (which has been around for over 35 years). MUD software is usually designed to let players participate in the game and to let builders (empowered players) modify and improve it. It involves job classes, monsters (mobs for short), items, and most importantly, other people.

So why do this? Well, I’ve always wanted to run my own MUD. And I needed a topic for this Docker lesson… So I’m going to create a Docker Compose application that will run and update the MUD software. And while searching, I happened to come across a new release (5 days old) from someone who converted DikuMUD to use websockets. The project is at https://github.com/Seifert69/DikuMUD3 and we’re going to use that for this tutorial. And a special thanks to Michael Seifert, who made one very important change to his source code that made running DikuMUD in Docker (and this tutorial) possible.

This project will require five containers as two Docker stacks. The first container will be its own stack and will act as the manager for the application: it will check for updated source code, compile the code as needed, and update the application. The other stack will run the other four containers. One container will run the actual MUD. Two containers will provide telnet and websocket access to the MUD container. The last container will be an nginx webserver to host the website.

The manager container is going to do a lot. It will need a cron application, Git, a C++ compiler, some Docker management binaries, and access to the Docker host running the application. Let’s start with the cron application, as I feel this will be the most difficult. Why? Because cron normally runs as a background service, and Docker containers discourage background services, since those processes may not properly terminate when the main container program finishes. One possibility I came across is how the GlusterFS team built their container: they run the init process, the native Linux program for managing background services. But they made this work by starting with a full CentOS image and removing services from it, and I don’t want to create a container that looks like a hacked OS installation. There’s also the Supervisord container, which acts as a supervisor service for background jobs. But a third and common option is to run the service in the foreground as the only task for the container. This is the easiest way to go in my opinion, so we’ll grab a vanilla OS container, install cron into it, and run that in foreground mode.

Let’s create a project directory. In there, create a directory called “docker”. In there, create two more directories, “updater” and “diku”. We need this layout to separate the files for each stack because when we run docker-compose, the containing directory name is prepended to the image name when generating the container name. If the directory name is already in use by another stack, Docker mistakenly assumes that you are updating that stack’s configuration, creating all sorts of problems. So the main take-away is to use a unique directory name for storing the docker-compose.yml files for each stack in your environment.
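
If it helps, here is that layout as shell commands; the top-level directory name is just a placeholder, so call yours whatever you like:

# "dikumud-project" is a placeholder for your own project directory name.
mkdir -p dikumud-project/docker/updater
mkdir -p dikumud-project/docker/diku
cd dikumud-project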

Create a Dockerfile named Dockerfile.manager in the updater folder:

FROM ubuntu:latest
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y cron && \
  apt-get install -y g++ && \
  apt-get install -y libboost-dev && \
  apt-get install -y libboost-system-dev && \
  apt-get install -y libboost-filesystem-dev && \
  apt-get install -y libboost-regex-dev && \
  apt-get install -y bison && \
  apt-get install -y flex && \
  apt-get install -y make && \
  apt-get install -y git && \
  apt-get install -y docker-compose && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
CMD [ "/sbin/cron", "-f" ]

If you are familiar with the apt-get command, you know that you can specify more than one package to install in a single command. Why only list one package per command? If there’s an error, I can easily tell which package is puking. Otherwise I need to weed through the output to identify what’s not working. As for why I picked these specific packages, I ran the container shell commands below and kept getting compile errors until all of these were present. I figured it would be nice for you to have an easier time than I did getting this set up.

From your DikuMUD3 project directory, build and run this image with:

docker build -t diku_manager -f docker/updater/Dockerfile.manager .
docker run --name throw_away -v /var/run/docker.sock:/var/run/docker.sock -d diku_manager

Again, this is me working out which commands need to run so that I can later bake them into the image and its scripts. Now connect to the container shell:

docker exec -it throw_away /bin/bash

Using this throw away container, I figured out the commands that I needed to get the source code for this MUD installed. I followed the build directions at https://github.com/Seifert69/DikuMUD3 to clone the repository:

cd /opt; git clone https://github.com/Seifert69/DikuMUD3.git

Next, I ran the compile commands. The -j option runs make with multiple parallel jobs; the shell expression counts the CPU cores available to the container so we get one job per core. It is a lot faster that way. This is also where I was seeing all sorts of make and compiler errors, including missing libraries and missing build utilities. I have already added these to the Dockerfile above so that we could skip those pain points:

cd /opt/DikuMUD3/vme/src && make all -j$(ls /sys/bus/cpu/devices/|wc -l)
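
As an aside, counting the entries under /sys/bus/cpu/devices is just one way to get the core count; nproc (part of coreutils in the Ubuntu base image) usually reports the same number, so this should be equivalent:

cd /opt/DikuMUD3/vme/src && make all -j$(nproc)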

Pre-process the .def files:

cd /opt/DikuMUD3/vme/etc && make all

Now we compile the zones:

cd /opt/DikuMUD3/vme/zone && /opt/DikuMUD3/vme/bin/vmc -m -I/opt/DikuMUD3/vme/include/ *.zon

We should also test out the docker CLI commands. Remember that our goal is to have this container manage other containers, so we should test that. Let’s list the running Docker containers:

docker ps|grep throw_away

Good. The container can see itself. Onward!

At this point, per the directions, the game could be started. However, we now need to build our game container infrastructure. We are going to implement several things:

  • Create a script to check for source code updates from git, downloading and compiling as needed
  • Create a script to notify users and then recreate all managed containers
  • Create a script to regenerate the manager container
  • Create a Dockerfile.telnet for the telnet server
  • Create a Dockerfile.websocket for the websocket server
  • Create a Dockerfile.mud for the game server itself
  • Create a Dockerfile.nginx for the webserver
  • Create a docker-compose.yml file that defines how our containers work together
  • Update the Dockerfile.manager to copy all these components into the manager image, so that it can rebuild everything itself

The important thing to realize here is that our container will build containers. So we will not run any of the new Dockerfiles from the development box directly; instead we will run them from the manager container on the development box. So let’s write some files. We will use the updater directory for the following files.

Start with check_updates.sh, which will clone or update from your git repository as needed and recreate the other containers:

#!/bin/bash
echo Running update check at `date`
export GIT_DISCOVERY_ACROSS_FILESYSTEM=1
# Confirm that the git repository exists
if [ ! -d /opt/DikuMUD3/.git ]; then
    echo Cloning repo
    git clone $GIT_REPO /opt/DikuMUD3 && \
    cd /opt/DikuMUD3 && \
    git checkout $GIT_BRANCH && \
    touch /opt/changes-detected && \
    echo Clone created.
fi
# Check for updates
cd /opt/DikuMUD3
if git pull>/tmp/git && ! grep "Already up to date." /tmp/git >/dev/null; then
    echo Repo has been updated
    # Create a flag that there are changes if we successfully download updates.
    touch /opt/changes-detected
    # Clear any pending rebuild flag; it gets recreated once the new code compiles cleanly.
    rm /opt/rebuild-now 2>/dev/null
fi 
# If there are changes and we are not compiling, then try to compile.
if [ -f /opt/changes-detected ] && tempfile -n /opt/updating-now; then
    echo Compiling updated source    
    cd /opt/DikuMUD3/vme/src && \
    make clean && \
    make all  -j$(ls /sys/bus/cpu/devices/|wc -l) && \
    cd /opt/DikuMUD3/vme/etc && \
    make clean && \
    make all && \
    cd /opt/DikuMUD3/vme/zone && \
    /opt/DikuMUD3/vme/bin/vmc -m -I/opt/DikuMUD3/vme/include/ *.zon && \
    touch /opt/rebuild-now && \
    rm /opt/changes-detected && \
    echo Source compiled successfully.
    # Always remove this flag; it allows the compile process to start the next time the script runs.
    rm /opt/updating-now
fi
# If we have a clean compile, then recreate our images.
if [ -f /opt/rebuild-now ]; then
    echo Rebuilding game images
    cd /opt
    # Tag the current live images as old so we can revert if the new build misbehaves,
    # then rebuild the live images from the freshly compiled source.
    docker image ls |grep diku_listener |grep live >/dev/null && docker image tag diku_listener:live diku_listener:old
    docker image ls |grep diku_nginx |grep live >/dev/null && docker image tag diku_nginx:live diku_nginx:old
    docker image ls |grep diku_mud |grep live >/dev/null && docker image tag diku_mud:live diku_mud:old
    docker-compose -f ./docker/diku/docker-compose.yml build && \
    date -d'1 hour' > /opt/restart-ready && \
    echo Game images rebuilt 
    # Always remove the rebuild flag.
    rm /opt/rebuild-now
    # The restart flag will get seen by a separate cron job that will notify users for an hour that the world is rebooting, then do the deed.
fi
echo Completed update check at `date`.

Important features of this script are the GIT_REPO and GIT_BRANCH environment variables. We need these to be able to select a Git repository to download code from. More on this in a bit. Another important feature is the use of tags to rotate the images from live to old: this is our revert path if there are any problems. To revert to your previous images, you would only need to log into the manager container shell and run the following:

docker image tag diku_listener:old diku_listener:live
docker image tag diku_nginx:old diku_nginx:live
docker image tag diku_mud:old diku_mud:live
docker-compose -f /opt/docker/diku/docker-compose.yml up -d

Next, we need a check_restart.sh script. This will look for the /opt/restart-ready flag file, and if present, will count down (notifying players) and eventually regenerate the four game containers using the freshly built images.

#!/bin/bash
echo Checking if restart is requested.
if [ -f /opt/restart-ready ]; then
    echo Restart is ready, calculating time left.
    REBOOT_TIME=$(date -d"$(cat /opt/restart-ready)" +%s)
    NOW=$(date +%s)
    MIN_LEFT=$(( (REBOOT_TIME - NOW) / 60 + 1 ))
    if [[ $MIN_LEFT -le 0 ]] || ! docker-compose -f /opt/docker/diku/docker-compose.yml ps|grep container>/dev/null ; then
        echo Restarting MUD now
        cd /opt && \
        rm /opt/restart-ready && \
        docker-compose -f /opt/docker/diku/docker-compose.yml up -d && \
        echo Restart initiated successfully && \
        sleep 60 && \
        nc -q 3 `docker network inspect updater_default|grep Gateway|grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'` 4282 </dev/null  | grep . && \
        curl http://`docker network inspect updater_default|grep Gateway|grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'`/  | grep . && \
        echo Restart was successful. && \
        exit 0
        echo There was a problem restarting the diku stack...
    fi
    if [[ $(( $MIN_LEFT % 15 )) -eq 0 || ( $MIN_LEFT -lt 15 && $(( $MIN_LEFT % 5 )) -eq 0 ) || $MIN_LEFT -le 3 ]]; then
        # Notify players of remaining time
        echo $MIN_LEFT minutes before reboot...
    fi
fi
echo Restart not done.

We also need a restart_manager.sh script. This will cause the manager to update its own image and to trigger its own rebuild. The good news is that this is not so sensitive, because if the manager container crashes, the rest of the game will continue running, without updates.

#!/bin/bash
cd /opt
# Tag the current running manager image as old, rebuild, then relaunch.
docker image tag diku_master:live diku_master:old && \
docker-compose -f ./docker/updater/docker-compose.manager.yml build && \
docker-compose -f /opt/docker/updater/docker-compose.manager.yml up -d
# This container should die if the last command runs successfully while a new one launches in its place.

Last, before we start working on Dockerfiles, we need a crontab file; save it as mud.cron in the updater directory. This will make the above three scripts run on a schedule. You are welcome to adjust it to your own liking. As written here, the check_updates script runs every hour at the 45 minute mark, check_restart runs every minute, and restart_manager runs on the 3rd of every month at 12:05 in the morning. Because this file gets installed with the crontab command (see the Dockerfile below), it uses the per-user format with no user column. The output of all these commands is redirected to the STDOUT of the cron process (PID 1), which makes it accessible through the docker logs command.

45  *   *   *   *   /opt/docker/updater/check_updates.sh >/proc/1/fd/1 2>&1
*   *   *   *   *   /opt/docker/updater/check_restart.sh >/proc/1/fd/1 2>&1
5   0   3   *   *   /opt/docker/updater/restart_manager.sh >/proc/1/fd/1 2>&1

Now we can start work on our second stack: the game itself. What makes this tricky is that we are automating all the commands we would normally do by hand to compile our app and then create new images based on the finished application. So, just like we would once our program is compiled and ready to be turned into an image, we create Dockerfiles that put files where we need them in order to run correctly. Let’s place all these files into the diku directory we made earlier.

We luck out for the websocket and telnet listeners; they use the same executable with different parameters. We’ll create one generic listener image and then define two containers with the different command line options needed to support each listening mode. However, the container needs to be told the IP or name of the MUD container, so we need a start script to pass that along. Create this file as entrypoint_listener.sh:

#!/bin/bash

echo MUD Host: $MUD_HOST
COMMAND="./mplex -l /dev/stdout -a $MUD_HOST $@"
echo Command: "$COMMAND"

exec $COMMAND

Now write this into Dockerfile.listener:

FROM ubuntu:latest
COPY ./DikuMUD3/vme/bin/ /opt/DikuMUD3/vme/bin/
COPY ./docker/diku/entrypoint_listener.sh /entrypoint_listener.sh
WORKDIR /opt/DikuMUD3/vme/bin
ENTRYPOINT [ "/entrypoint_listener.sh" ]
CMD [ "-w", "-p", "4280" ]

The start script and this ENTRYPOINT setting force the container to always start the listener; any override arguments are passed through to the program. We have also defined a couple of mandatory arguments in the start script to force logging to STDOUT, which is what you see when you run docker logs. More on this in a bit. The default CMD holds the arguments needed for the websocket listener. We can pass the alternate arguments -p 4282 to make it work for telnet. We also use the MUD_HOST environment variable as a means to communicate the MUD container’s address to this program.
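
To make the ENTRYPOINT/CMD combination concrete, here is a rough sketch of running the listener by hand once the image exists (the compose file below is what actually builds it); the MUD address here is a placeholder for whatever IP your MUD container gets:

# Websocket mode: the default CMD (-w -p 4280) is appended after the entrypoint.
docker run -d --rm -e MUD_HOST=172.18.0.2 -p 4280:4280 diku_listener:live
# Telnet mode: override the CMD with the telnet arguments.
docker run -d --rm -e MUD_HOST=172.18.0.2 -p 4282:4282 diku_listener:live -p 4282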

We also have the option to use the EXPOSE command in this Dockerfile. This documents which ports the containerized application listens on so that they can be published to external clients. It is not needed for containers managed by docker-compose, as all of the containers are added to their own isolated network and can freely communicate with each other on any port. We will use our compose file to specify which ports on which containers are available for external clients to connect to.

The contents of Dockerfile.mud:

FROM ubuntu:latest
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y --no-install-recommends dnsutils && \
  apt-get install -y --no-install-recommends libboost-system1.71 && \
  apt-get install -y --no-install-recommends libboost-filesystem1.71 && \
  apt-get install -y --no-install-recommends libboost-regex1.71 && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
COPY ./DikuMUD3/vme/bin/ /opt/DikuMUD3/vme/bin/
COPY ./DikuMUD3/vme/etc/ /opt/DikuMUD3/vme/etc/
COPY ./DikuMUD3/vme/log/ /opt/DikuMUD3/vme/log/
COPY ./DikuMUD3/vme/zone/ /opt/DikuMUD3/vme/zone/
COPY ./DikuMUD3/vme/lib/ /opt/DikuMUD3/
COPY ./docker/diku/entrypoint_mud.sh /entrypoint_mud.sh
VOLUME [ "/opt/DikuMUD3/vme/lib" ]
WORKDIR /opt/DikuMUD3/vme/bin
ENTRYPOINT [ "/entrypoint_mud.sh" ]
CMD [ "./vme", "-l", "/dev/stdout" ]

Another important observation for this application: we are copying vme/lib from the build context (the repository checked out on the manager container) directly into the DikuMUD3 directory on the MUD image. This is because the lib folder contains the state data for the MUD, including player data. We don’t want to blow that away every time we recompile and rebuild the containers; in fact we will mount a volume there to persist the data between compiles. However, there are some initial files that MUST exist there in order for the vme binary to properly start up. So we need a way to selectively copy over the initial set of files. We will manage that using the ENTRYPOINT script, entrypoint_mud.sh:

#!/bin/bash

if [ `realpath $1` == "/opt/DikuMUD3/vme/bin/vme" ]; then
    # First make a backup. Old backup is moved to /tmp, then a new backup 
    # is made in /tmp, then both moved back to lib.
    rm /opt/DikuMUD3/vme/lib/lib.old.tar.xz 2>/dev/null
    mv /opt/DikuMUD3/vme/lib/lib.tar.xz /tmp/lib.old.tar.xz 2>/dev/null
    tar -cJC /opt/DikuMUD3/vme/lib -f /tmp/lib.tar.xz .
    mv /tmp/*.tar.xz /opt/DikuMUD3/vme/lib
    # Copy files in /opt/DikuMUD3/lib if needed
    pushd /opt/DikuMUD3/lib
    # Make all directories
    for file in `find ./* -type d`; do
        mkdir -p /opt/DikuMUD3/vme/lib/$file
    done
    # Copy files if they don't exist.
    for file in `find ./* -type f`; do
        cp -n $file /opt/DikuMUD3/vme/lib/$file
    done
    popd
fi

# Update the server config to allow our container multiplexors to connect.
IP_TELNET=`dig +short $MPLEX_TELNET`
if [ -z "$IP_TELNET" ]; then IP_TELNET=$MPLEX_TELNET; fi
IP_WEBSOCKET=`dig +short $MPLEX_WEBSOCKET`
if [ -z "$IP_WEBSOCKET" ]; then IP_WEBSOCKET=$MPLEX_WEBSOCKET; fi
echo Telnet: $MPLEX_TELNET $IP_TELNET
echo Websockets: $MPLEX_WEBSOCKET $IP_WEBSOCKET
sed -i "s/mplex hosts = .*/mplex hosts = ~${IP_TELNET}~ ~${IP_WEBSOCKET}~/" /opt/DikuMUD3/vme/etc/server.cfg

exec "$@"

From an application management perspective, this script makes a backup of the lib folder every time it starts, and it keeps the last two backups, in case the MUD doesn’t start.
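
If you ever need to roll back to one of those backups, a minimal sketch, run from a shell inside the MUD container (or anywhere the dikumud_lib volume is mounted at the same path), would be:

# Restore the most recent backup; use lib.old.tar.xz for the one before that.
cd /opt/DikuMUD3/vme/lib && tar -xJf lib.tar.xz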

This script also relies on a couple of magical environment variables; in this case, it expects the DNS name or IP address of the telnet and websocket containers. The vme executable whitelists incoming connections, so we need to tell it about our websocket and telnet containers. The IP address support is needed for manual testing, because the container DNS names only resolve once everything is running together under docker-compose. So we support both options. For the record, when testing manually, you will need to get the internal container IP addresses of the listeners before you create the mud container.
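
For that manual test, one way to grab a running listener’s internal IP is docker inspect with a format string; the container name is whatever you gave yours:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <listener_container_name>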

Lastly, while I was working on this script, I realized that some of the files in lib were more like settings than game data. I reached out to Michael for a list of the files that needed to be replaced when the game was updated, and he was awesome enough to move those into the /etc folder, making the /lib folder exclusively for game data. This made the update script simple and this project possible. Thanks, Michael!

Last for the game containers, we need a docker-compose.yml file. This is relatively straightforward, connecting the dots between our four containers:

version: "3.4"
volumes:
  dikumud_lib: {}
#  dikumud_log: {}
services:
  mud-container:
    image: diku_mud:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.mud
    restart: always
    volumes:
     - "dikumud_lib:/opt/DikuMUD3/vme/lib"
#     - "dikumud_lib:/opt/DikuMUD3/vme/log"
    environment:
     - MPLEX_TELNET=telnet-container
     - MPLEX_WEBSOCKET=websocket-container
#    command: [ "./vme" ]
  telnet-container:
    image: diku_listener:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.listener
    restart: always
    command: [ "-p", "4282" ]    
    environment:
     - MUD_HOST=mud-container
    ports:
     - "4282:4282"
  websocket-container:
    image: diku_listener:live
    restart: always
    environment:
     - MUD_HOST=mud-container
    ports:
     - "4280:4280"
  nginx-container:
    image: diku_nginx:live
    build:
      context: ../..
      dockerfile: ./docker/diku/Dockerfile.nginx
    restart: always
    ports:
     - "80:80"

This docker-compose.yml file exposes two ports: 80 for web access and 4282 for direct telnet access. Per the DikuMUD project maintainer, the telnet access is only intended for debugging. The dikumud_lib volume will store your game data so that it is available after updates.

If you noticed, the version number is older than the “3.8” I have been using up to this point. That is because the docker-compose installed in the manager image is the older version maintained by the distribution, instead of the current version Docker offers. That older docker-compose will not process version 3.8 files, but it will process 3.4. Since I’m not using any features newer than what’s supported in version 3.4, this all works out.
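
If you want to confirm what the distribution package supports, ask it for its version from inside the manager container (or wherever it is installed):

docker-compose version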

I have included, commented out, the lines needed to have DikuMUD log to a persistent log volume; uncomment all the commented parts of this file to enable it. However, then you become responsible for cleaning up that log. If you leave those lines commented out, the executables all write to STDOUT, which is available through the docker logs command. You can see what’s going on by running:

docker ps
docker logs <container_name>

The downside to using docker logs is that you lose the logs when the container is recreated. It’s your call; I chose not to save the logs to disk, that’s all.

At this point, we only need to update the Dockerfile for the manager container to get it to automatically compile and deploy the other containers and create a docker-compose file to rule – I mean run – them all. Update Dockerfile.manager in the updater directory to the following:

FROM ubuntu:latest
ENV GIT_REPO=https://github.com/Seifert69/DikuMUD3.git
ENV GIT_BRANCH=master
RUN export DEBIAN_FRONTEND=noninteractive && \
  apt-get update && \
  apt-get install -y cron && \
  apt-get install -y g++ && \
  apt-get install -y libboost1.71-dev && \
  apt-get install -y libboost-system1.71-dev && \
  apt-get install -y libboost-filesystem1.71-dev && \
  apt-get install -y libboost-regex1.71-dev && \
  apt-get install -y bison && \
  apt-get install -y flex && \
  apt-get install -y make && \
  apt-get install -y git && \
  apt-get install -y curl && \
  apt-get install -y docker-compose && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
COPY ./docker /opt/docker
RUN chmod +x /opt/docker/*/*.sh && \
  crontab < /opt/docker/updater/mud.cron && \
  cp -p /opt/docker/updater/entrypoint_manager.sh /entrypoint_manager.sh
ENTRYPOINT [ "/entrypoint_manager.sh" ]
VOLUME [ "/opt/DikuMUD3" ]
CMD [ "/sbin/cron", "-f" ]

This Dockerfile now contains the ENV command, which sets default values for two of the environment variables used by this container, ensuring that the default repository, if not otherwise defined, is the main DikuMUD git repo we started with. It also sets the default branch to master. Why do this? Because someone, somewhere, is going to want to run their own MUD using Docker. After all, who doesn’t want to run their own game? But unless you want to use everything exactly like the main repository does, including the /etc/server.cfg file, which tells EVERYONE ON THE INTERNET the name of your only in-game immortal, then you need to be able to point to your own cloned git repository. This also allows you to add your own zones, items, classes, etc. to the game. You don’t want to make those changes directly in your game container, because every time the source code updates you would lose them when the game container gets rebuilt. The only requirement is that your Docker manager container must be able to reach your git repository over the network. So you could clone to a new repo on GitHub, or use your dev station if the repo you set up can be accessed over SSH, or even find a Docker git repository image and use that. Managing the repository, including logins and how to craft that into your GIT_REPO URL, is beyond the scope of this tutorial.

Also, the DikuMUD3 directory is now defined as a volume mount. This will keep us from losing the current state of the repository. If we didn’t do this, the whole game would recompile and relaunch each month when the manager container is rebuilt.
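
One gap to flag: the Dockerfile copies and runs an entrypoint_manager.sh that I haven’t listed here. At a minimum it just needs to hand control over to the container command (cron in the foreground); a bare-bones placeholder along those lines would look like the following, with any first-run setup you want layered on top:

#!/bin/bash
# Minimal assumed entrypoint: run whatever CMD was provided (cron -f by default).
exec "$@"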

Last, we need a compose file for the manager, called docker-compose.manager.yml:

version: "3.4"
volumes:
  manager_repo: {}
services:
  master-container:
    image: diku_master:live
    build:
      dockerfile: ./docker/updater/Dockerfile.manager
      context: ../..
    restart: always
    volumes:
     - "manager_repo:/opt/DikuMUD3/vme/lib"
     - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
     - GIT_REPO=https://github.com/Seifert69/DikuMUD3.git
     - GIT_BRANCH=master

If you wanted to customize the git repo used for the game’s source code, you would change the environment variables to point at your repo, including any login configuration you would need. Also, if you leverage the git repo similarly to how we tag the images, you can make a tag or branch called “live”, and only update that branch in git when you are ready to have your main game recompile.
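
As a sketch, the override in docker-compose.manager.yml might end up looking like this; the repository URL is just a placeholder for wherever your clone lives:

    environment:
     - GIT_REPO=https://gitlab.example.com/yourname/DikuMUD3.git
     - GIT_BRANCH=live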

Now that we have all of our files staged, we can finally go to our DikuMUD3 project directory and run the following:

docker-compose -f ./docker/updater/docker-compose.manager.yml build
docker-compose -f ./docker/updater/docker-compose.manager.yml up -d
docker-compose -f ./docker/updater/docker-compose.manager.yml logs -f

If you noticed, we used docker-compose to build the image. I was unable to get docker-compose to start a stack that used a custom image unless it was built by docker-compose. However, images built with docker-compose can still be tagged and otherwise manipulated just like any other Docker image.

The two -f parameters on the logs command are tricky, and their placement is important. The first is a parameter for docker-compose, indicating where the config file is located; the command always comes after it. The second -f tells it to follow the log, so the software will sit there reporting all log output. As you watch your container’s logs, you should see it download and build a fresh copy of the DikuMUD3 source code from GitHub. You can adapt this command to watch the logs from all four game containers as well:

docker-compose -f ./docker/diku/docker-compose.yml logs -f

Finally, the moment you’ve been waiting for. Open a browser and go to http://localhost. On the webpage that appears, click Connect, and BAM! You’re in! Welcome to your own copy of DikuMUD!
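
If you also want to poke at the raw telnet port (which, again, the maintainer intends only for debugging), point any telnet client at port 4282:

telnet localhost 4282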

If you want to continue to work on this MUD, I would suggest that you create a private git repo that the containers can access, clone the project’s code base into it, and completely rebuild your containers with the docker-compose.manager.yml file updated to point to your new repo. I was able to do this by downloading a container for GitLab. You can also enjoy the game in its current state; I suggest adding some friends to make the journey more enjoyable.

To recap, in this lesson, we created a Docker stack running a web-browser accessible MUD, and a second stack that builds and maintains the first. We started this by developing an image with the tools needed to compile the source code from scratch and planned for how that image would call Docker commands to recreate the other stack. We also created Dockerfiles to deploy the MUD software into images like we would for any app, but then added scripts to the compiling image to build the game images for us and deploy them, too.

For now, I don’t have a next lesson in mind. But I will be back soon with more to share!
