
How to Backup Docker Containers? Docker Container Backup Methods of 2024.

Updated 16th January 2024, Rob Morrison

While Docker – and containers in general – are typically newer to a production IT environment than most other technologies, there is still a need to back up these containers, their applications and their persistent data. If a production IT system produces persistent data, that data will likely have some value; it may even be of critical importance. Safeguarding it against disasters such as data breaches or human error is therefore likely to be necessary.

This topic covers both Docker’s own built-in backup capabilities and the ability of various third-party solutions, such as Bacula Enterprise, to create comprehensive Docker backups.

Docker container backup and restore

This usually begins with committing the container in question as an image, using the following command:


# docker commit -p [container-id] backup01
sha256:89682d4xxxxxx

This image can be saved as a .tar file with another command:


# docker save -o backup01.tar backup01
# ls -al | grep back
-rw------- 1 root root 178697728 Mar 31 23:35 backup01.tar

The .tar file can also be saved to an NFS mount point. An alternative is to push the image in question (backup01) directly to your local registry. To do that, the image first has to be tagged appropriately:


# docker tag backup01 localhost:5000/backup-image:v1

In this case, localhost is the registry host name and 5000 is its port number; both can be changed if needed. It is also important to remember that the repository name must be lowercase for the tag to be applied correctly. The process is completed by pushing the image:


# docker push localhost:5000/backup-image:v1
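The push above assumes a registry is already listening on localhost:5000. If there is not one, a minimal sketch of starting one with the official registry image could look like this (the container name is just an example):


# Start a throwaway local registry listening on port 5000
# (the container name "local-registry" is just an example)
docker run -d -p 5000:5000 --name local-registry registry:2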

Since we have discussed two backup methods, there are two restore methods as well. To restore the backup image from a .tar file, run the following command:


# docker load -i /tmp/backup01.tar

The command line should show status output similar to the following if the command completed correctly:


ff91b8b5abb1: Loading layer [======================>] 2.56 kB/2.56 kB
Loaded image: backup01:latest

The docker run command can then be used to create a container from this image.
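As a minimal sketch, assuming the image loaded above (backup01:latest), a new container could be started like this; note that port mappings, volume mounts and environment variables are not stored in the image and have to be supplied again, and the container name here is just an example:


# Start a container from the restored image
# (re-specify ports, volumes and environment variables as needed)
docker run -d --name restored-backup01 backup01:latest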

A pushed image can be pulled directly with a relatively simple command:


# docker pull localhost:5000/backup-image:v1

As with the previous example, both the localhost name and the port number are subject to change if needed.

Docker volume backup and restore

Another type of Docker backup targets volumes – the persistent storage mechanism for Docker containers. These volumes need to be backed up for data continuity.

Docker volumes are the recommended way to manage persistent data for running Docker containers. This approach offers several advantages. For example, data stored in a volume is effectively isolated from the rest of the file system, making it much harder to affect with system-wide cyberattacks. However, that same isolation also makes the volumes harder to back up.

Additionally, volumes reduce the need to worry about UID and GID mismatches between the container and the host OS. Volumes are also portable across different Docker installations, meaning there is no need to worry about the host’s operating system. At the same time, this portability makes it possible to back volumes with various external storage – such as bucket-based object storage, NFS, and so on.
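As an illustration of the latter point, a volume can be backed by an NFS share using Docker’s built-in local driver; the server address and export path below are placeholders:


# Create a volume backed by an NFS export (address and path are placeholders)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/dckr-data \
  dckr-nfs-volume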

Since Docker volumes are all but necessary for running containers with persistent data, it is only natural to back them up as well – this data is at least as important as any other data in your system, if not more so.

Of course, using Docker volumes to store container data is not always convenient, and there are some growing pains here and there. For example, you have to learn dedicated commands to do something as simple as copying information out of a container, or to open a shell inside a running container to check its current state.
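For reference, these are the kinds of commands in question; the container name and file path below are hypothetical:


# Copy a file out of a running container to the host (names are hypothetical)
docker cp my-container:/var/lib/app/data.db ./data.db

# Open an interactive shell inside the running container to check its state
docker exec -it my-container bash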

Docker volume data can also be harder to see from your host’s file system, since it is stored differently than traditional storage locations – and it needs to be backed up for the same reasons you back up your regular data.

Docker volumes are usually managed by the Docker daemon, but we will not be interacting with the daemon directly. The idea of a Docker volume backup is to get a copy of the volume as an archive file in a local directory. This copy is the backup we are looking for.

In this example, the container is called dckr-site and its volume, dckr-volume, is mounted at /var/lib/dckr/content/, where all of the data is stored.

The first step is to stop the container using the following command:


$ docker stop dckr-site

The next step mounts the container’s volumes into a temporary container and backs up their contents:


$ mkdir ~/backup
$ docker run --rm --volumes-from dckr-site -v ~/backup:/backup ubuntu bash -c "cd /var/lib/dckr/content && tar cvf /backup/dckr-site.tar ."

In this scenario:

  • docker run creates a new temporary container;
  • --rm tells Docker to remove that container once the operation is complete;
  • --volumes-from dckr-site mounts the volumes of the dckr-site container into the new temporary container;
  • bash -c "cd /var/lib/dckr/content && tar cvf /backup/dckr-site.tar ." archives the contents of /var/lib/dckr/content into /backup/dckr-site.tar, which ends up in the host’s ~/backup directory.

The recovery process for these backups isn’t that complicated. It begins with creating a new volume with the following command:


$ docker volume create dckr-volume-2

Then a temporary container can be used to restore the volume’s contents from the .tar file:


$ docker run --rm -v dckr-volume-2:/recover -v ~/backup:/backup ubuntu bash -c "cd /recover && tar xvf /backup/dckr-site.tar"

Of course, you’ll have to mount this new volume to the new container for it to work properly:


$ docker run -d -v dckr-volume-2:/var/lib/dckr/content -p 80:2368 dckr:latest

If the procedure is done correctly, the entire state of the application should be restored. It is important to mention that this method should not be relied on as the only backup source, since the backup data is still stored on the host and would therefore be lost in a disaster that affects the host as well.
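To reduce that risk, the archive can be verified and copied off the host; a rough sketch, with the remote host name and path as placeholders:


# Sanity-check the restored volume's contents with a throwaway container
docker run --rm -v dckr-volume-2:/recover ubuntu ls -la /recover

# Copy the backup archive to another machine (host and path are placeholders)
rsync -av ~/backup/dckr-site.tar backup-user@backup-host:/srv/docker-backups/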

Alternative approach to Docker volume backup & restore

It is natural for all modern systems to evolve and change over time, and Docker is no exception, especially with such a massive community of users and developers. As an example, there is an alternative approach to backing up Docker volumes that some users might prefer over the first option for one reason or another.

The first step is to define the volume name in a shell variable. The syntax differs between Windows (PowerShell) and Linux/macOS: the former uses $VOLUME="name_volume", while the latter uses VOLUME="name_volume".
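Side by side, the same step looks like this (the volume name is just an example):


# Windows (PowerShell)
$VOLUME="name_volume"

# Linux / macOS (bash)
VOLUME="name_volume"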

Once that is complete, the Backup command can be run:


docker run --rm \
-v "${VOLUME}:/data" \
-v "${PWD}:/backup-dir" \
ubuntu \
tar cvzf /backup-dir/backup.tar.gz /data

The /data part is the path at which the volume is mounted inside the temporary container (and which tar then archives); it can be modified if necessary.

The Restore command is different, as the example below shows:


docker run --rm \
-v "${VOLUME}:/data" \
-v "${PWD}:/backup-dir" \
ubuntu \
bash -c "rm -rf /data/{*,.*}; cd /data && tar xvzf /backup-dir/backup.tar.gz --strip 1"

It should be noted that the rm -rf /data/{*,.*} part exists to delete all existing files prior to the restoration. The shell’s brace expansion turns it into two patterns – one matching names that start with a dot and one matching names that do not. However, every directory also contains the two special entries "." and "..", which are not real files and therefore cannot be deleted.

As such, the first two output messages after running the Restore command will complain about the inability to remove these entries – this is normal and does not affect the rest of the process.

There are a few key differences between this method and the one described before:

  • The tar arguments used previously (c, v, and f, as in cvf) are now cvzf, so that the archive is gzip-compressed as it is created.
  • The backup volume and its target path are specified explicitly in every backup command, making them easier to locate afterwards. This differs from the --volumes-from approach, which does not show volume names or their target paths to the user.
  • The restoration process deletes all of the existing volume data before restoring, to ensure a clean restore. There is also a reason a semicolon (rather than &&, as in the first method) is used after the rm command: with &&, the entire command chain would stop at the first failure, and rm is expected to fail on the "." and ".." entries.

Of course, this particular command is only suitable as a manual example of Docker volume backups, since every execution produces a full backup – something that would be highly inefficient in terms of both time and storage space if it were automated and run on a regular basis (compression alone cannot solve this issue).
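If the command is nevertheless run repeatedly by hand, one small adjustment – shown here only as a sketch for a Linux/macOS shell – is to timestamp the archive name so that each full backup is kept instead of overwriting the previous one:


# Timestamped archive name; each run keeps a separate full backup
docker run --rm \
  -v "${VOLUME}:/data" \
  -v "${PWD}:/backup-dir" \
  ubuntu \
  tar cvzf "/backup-dir/backup-$(date +%Y%m%d-%H%M%S).tar.gz" /data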

Docker volume management best practices

Working with Docker volumes can be quite a challenge – they are very useful, but they have their own nuances that most users need to get used to sooner or later. To make this slightly easier, here are a few best practices for Docker volume management (several of them map directly onto standard CLI commands, as shown after the list):

  • Implement consistent methods for attaching and detaching volumes to containers.
  • Employ consistent and meaningful naming conventions for easy identification and management of volumes.
  • Avoid creating unnecessary volumes, as each volume consumes resources.
  • Regularly check volumes to identify and address excessive data storage or capacity issues.
  • Avoid simultaneous write access to the same drive by multiple containers to prevent data corruption.
  • Implement appropriate security measures to safeguard sensitive data stored in volumes.
  • Assign clear and descriptive names to volumes for better organization and management.
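
Several of these points translate into standard Docker CLI commands; the volume name and label below are just examples:


# Create a volume with a descriptive name and a label for easier identification
docker volume create --label project=dckr-site dckr-site-content

# Review existing volumes and inspect one to check its mount point and options
docker volume ls
docker volume inspect dckr-site-content

# Remove volumes that are no longer used by any container (frees resources)
docker volume prune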

Custom scripts and Docker images

Since Docker has a rather large community of highly skilled individuals, it should not be that surprising to learn that there are plenty of custom-made scripts and images for Docker backup purposes.

The offen/docker-volume-backup image is a good example. It is a lightweight companion container that can perform one-off or recurring Docker volume backups to a chosen target – a local directory, Azure Blob Storage, S3, Dropbox, WebDAV, and so on. It can also rotate away older backups, encrypt backups with GPG, and send notifications when a backup task fails.

The initial setup process is fairly easy (full details can be found in the project’s documentation), and an example command line for a single backup operation looks like this:


docker run --rm -v data:/backup/data \
  --env AWS_ACCESS_KEY_ID="<key_id>" \
  --env AWS_SECRET_ACCESS_KEY="<access_key>" \
  --env AWS_S3_BUCKET_NAME="<bucket_name>" \
  --entrypoint backup \
  offen/docker-volume-backup:v2
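
The same image can also run as a long-lived companion container that performs backups on a schedule. A sketch based on the project’s documented cron-style environment variable and its local storage target (check the current documentation for exact option names; the paths and schedule below are examples) might look like this:


# Back up everything mounted under /backup every day at 02:00,
# writing the archives to the directory mounted at /archive
docker run -d \
  --name volume-backup \
  -v data:/backup/data:ro \
  -v /srv/backups:/archive \
  --env BACKUP_CRON_EXPRESSION="0 2 * * *" \
  offen/docker-volume-backup:v2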

Of course, this is just one example; there are many other options available for users who are willing to spend the time and effort to learn them.

Volumes Backup & Share

Users who are less comfortable with the command-line interface used for all of the tasks above may be looking for an easier way to work with Docker containers as a whole. Luckily, there is a solution – the official Docker Desktop application, which offers a straightforward GUI for image management, container management, and more.

The same app also has an extension marketplace offering plenty of extensions that expand or modify its functionality. Extensions can be either official or community-made, and the one we are interested in is provided by Docker itself.

Volumes Backup & Share is the official Docker volume management extension for Docker Desktop, offering a seamless way to work with volumes, including their backup and restore operations, as well as cloning, sharing, and more.

The solution in question offers four different backup types:

  • A new image.
  • A file backup (a tarball compressed with gzip, similar to the one we have discussed before).
  • An image registry.
  • An existing image.

Other features include the ability to import or restore volumes and to transfer volumes from one host to another – all with zero CLI commands involved, which makes it a great option for a specific target audience.

SnapShooter from DigitalOcean

Moving our focus toward third-party backup options for Docker, we can start with SnapShooter. SnapShooter is a relatively young backup solution that initially targeted DigitalOcean offerings specifically. It later expanded to other providers and was acquired by DigitalOcean at the beginning of 2023.

As its name suggests, SnapShooter’s main offering is the ability to perform snapshots, though it has since expanded to include file backups, database backups, application backups, and more. As for its Docker capabilities, SnapShooter supports Docker volume backups as well as backups of MongoDB, MySQL, and PostgreSQL databases running in Docker.

SnapShooter also offers a user-friendly interface for performing a variety of tasks, including backup creation and restoration for Docker volumes.

Duplicati

A somewhat different third-party backup option for Docker volumes is Duplicati. Duplicati is a well-known free, open-source backup solution that can create backups and send them to a variety of storage targets, be it remote servers, NAS, or cloud services.

As a free and open-source offering, Duplicati is a bit more difficult for an average user to work with. However, it can still be used to set up regular Docker volume backups using a specific sequence of commands. Note that this example uses a specific Docker image – linuxserver/duplicati.

The first step is to run id $user to confirm that the current user has root permissions for the tasks ahead. The command and its response should look like this:


root@debian:~# id $user
uid=0(root) gid=0(root) groups=0(root)

A value of zero in both the uid and gid fields indicates a root user.

The next command runs the aforementioned linuxserver/duplicati container on your system:


docker run -d \
  --name=duplicati \
  -e PUID=0 \
  -e PGID=0 \
  -e TZ=Etc/UTC \
  -p 8200:8200 \
  -v duplicati-config:/config \
  -v /tmp/backups:/backups \
  -v /:/source \
  --restart unless-stopped \
  lscr.io/linuxserver/duplicati:latest

As soon as this command is entered, you can open Duplicati’s web UI by entering http://<server_ip>:8200 in the address bar of your web browser.

Once the initial run has completed, stop the container with the docker stop <id_cont> command, where <id_cont> is the container’s unique ID.
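Since the container was started with --name=duplicati, its name can be used in place of the ID:


# Stop the Duplicati container by the name given at launch
docker stop duplicati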

Most graphical Docker management interfaces can also perform the next step, but we will go over the CLI version for the sake of this example. Here, we modify the previously used launch command, replacing the single /source mount with an explicit mount for each volume you want backed up.


docker run -d \
  --name=duplicati \
  -e PUID=0 \
  -e PGID=0 \
  -e TZ=Etc/UTC \
  -p 8200:8200 \
  -v duplicati-config:/config \
  -v /tmp/backups:/backups \
  -v volume1:/source/volume1 \
  -v volume2:/source/volume2 \
  --restart unless-stopped \
  lscr.io/linuxserver/duplicati:latest

Once this process is complete, it is recommended to restart the Duplicati container before continuing. The last step is to use Duplicati’s web UI to create backup jobs: use the Add backup button and follow the backup wizard. You will have to point it at the exact data you want backed up – the Docker volumes you have mapped should appear under the Source data category in the wizard. The backups themselves are saved in the /tmp/backups directory (which can be changed).

We have gone over a small-scale Docker backup solution and a free Docker volume backup option; now it is time to see how a large-scale enterprise backup solution handles the same task.

Docker backup and restore with Bacula Enterprise

Bacula Enterprise uses its modular architecture to natively integrate with various systems and services, including Docker. Bacula is comprehensive, network-based backup and recovery software designed to carry the heavy workloads of medium and large enterprises. It also offers especially high security levels compared to other backup vendors’ solutions and, as such, is relied on by the vast majority of Western military, defense and government organizations. Bacula’s Docker module provides a number of useful additional features alongside the core backup and recovery ones.

Docker backup with Bacula Enterprise consists of three simple steps:

  1. The current state of the container is saved to a new image
  2. The Docker utility is executed and the data is saved
  3. The snapshot in question is removed to free up space

The backup can be performed on a container in any state, and Bacula shows the status of the backup process at each step. Each container or image backup produces one more .tar file. Image backups are kept under /@docker/image//.tar, and container backups under /@docker/container//.tar.

The restoration process of Docker backups with Bacula Enterprise is slightly more complicated and can be done in two different ways:

  • Restore to a local directory, which uses the where=/some/path Bacula parameter to specify the full path to which the backup is restored as an archive file or files;
  • Restore to the Docker service, meaning that the backup data is restored via the where= parameter directly as a new container, without creating an archive file first.

The restoration process can also be customized via several parameters, such as container_create, container_run, and more.

At the time of writing, Bacula is one of the very few enterprise-grade backup and recovery solutions – possibly the only one – able to perform a full and comprehensive backup of a Docker environment. It also safeguards a broad range of other container technologies and is recommended for demanding and mission-critical IT environments due to its scalability, reliability and resilience.

Overall, the availability of specialist Docker backup solutions, Docker’s own built-in capabilities, and Bacula’s Docker module means that IT managers have a range of choices for safeguarding their production Docker deployments.

About the author
Rob Morrison
Rob Morrison is the marketing director at Bacula Systems. He started his IT marketing career with Silicon Graphics in Switzerland, performing strongly in various marketing management roles for almost 10 years. In the next 10 years Rob also held various marketing management positions in JBoss, Red Hat and Pentaho ensuring market share growth for these well-known companies. He is a graduate of Plymouth University and holds an Honours Digital Media and Communications degree, and completed an Overseas Studies Program.