Introduction
If you are running applications with Docker on a server, backups should not be treated as something optional or something to “set up later.” Containers can be recreated, images can be rebuilt, and application code can usually be restored from Git, but the real risk often lives inside Docker volumes. That is where uploaded files, database data, and other persistent content are stored.
For a personal site or a game-related project, losing those volumes can mean losing articles, project entries, messages, uploads, and media that are not easily recreated. A small mistake in configuration, a bad deployment, a server issue, or an exposed service can quickly become a painful lesson if there is no backup system in place.
This article explains a practical way to set up automatic backups for Docker volumes on a Linux server. It focuses on a setup where MongoDB and uploaded files are stored in Docker volumes and shows how to back them up safely and regularly. It also explains why backing up MongoDB with mongodump is better than relying only on raw volume archives.
Why Docker Volumes Matter
Docker containers themselves are usually not the most important thing to protect. In many cases, a container can be deleted and recreated from an image in seconds. What matters is the data that survives container restarts and rebuilds.
That data usually lives in Docker volumes.
Typical examples include:
- database files
- uploaded images and media
- CMS content files
- attachments
- local persistent application data
If the application is rebuilt but the volume is lost, the service may still run, but the real content may be gone.
That is why backing up Docker volumes is one of the most important parts of running applications on a server.
The Backup Strategy
A practical backup plan should do two things:
- Back up file-based Docker volumes, such as uploads
- Back up MongoDB using a logical export rather than only raw data files
The reason for using both methods is simple.
Raw volume backups are useful for uploaded files and for quick full-volume recovery. But for MongoDB, a logical dump with mongodump is better because:
- it is easier to restore
- it is more portable across environments
- it reduces version-related risks
- it is safer than copying raw database files while relying only on storage engine state
The setup described here uses:
- compressed .tar.gz backups for uploads volumes
- .archive backups for MongoDB using mongodump
- cron to automate the process
- simple rotation to keep only the most recent backups
Step 1: Create a Backup Folder
The first step is to create a directory on the server where backups will be stored.
A good location is:
/opt/docker-volume-backups
Create it with:
sudo mkdir -p /opt/docker-volume-backups
sudo chown $USER:$USER /opt/docker-volume-backups
This creates the directory and gives ownership to the current user.
It is useful to keep backups in one predictable place so they are easy to inspect, copy, rotate, and restore later.
Step 2: Identify the Volumes That Matter
Before writing scripts, identify the Docker volumes that actually need protection.
For example, in a setup with two projects, the important volumes might be:
- eneaslaricom_mongodb-data
- eneaslaricom_uploads
- larixgamescom_mongodb-data
- larixgamescom_uploads
You can list all volumes on the server with:
docker volume ls
The output helps confirm the exact names Docker is using.
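If it is unclear whether a name from the list is the volume the application actually writes to, docker volume inspect shows where its data lives on disk. The volume name below is one of the examples from this setup; substitute your own:

```shell
# Print the host path where Docker stores this volume's data.
docker volume inspect eneaslaricom_uploads --format '{{ .Mountpoint }}'
```

On a default Linux install this typically resolves to a path under /var/lib/docker/volumes/.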
For file uploads, raw volume backups are usually enough.
For MongoDB, it is better to back up the database using mongodump instead of trusting only raw Mongo storage files.
Step 3: Create a Script for Raw Docker Volume Backups
The next step is to create a shell script that archives the selected volumes into compressed .tar.gz files.
Create the script:
nano ~/backup-docker-volumes.sh
Paste the following:
#!/usr/bin/env bash
set -e
BACKUP_DIR="/opt/docker-volume-backups"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
VOLUMES=(
  "eneaslaricom_mongodb-data"
  "eneaslaricom_uploads"
  "larixgamescom_mongodb-data"
  "larixgamescom_uploads"
)
mkdir -p "$BACKUP_DIR"
for VOLUME in "${VOLUMES[@]}"; do
  echo "Backing up $VOLUME ..."
  docker run --rm \
    -v "${VOLUME}:/from:ro" \
    -v "${BACKUP_DIR}:/to" \
    alpine \
    sh -c "cd /from && tar -czf /to/${VOLUME}_${TIMESTAMP}.tar.gz ."
done
# Keep only the newest 7 backups per volume
for VOLUME in "${VOLUMES[@]}"; do
  ls -1t "${BACKUP_DIR}/${VOLUME}_"*.tar.gz 2>/dev/null | tail -n +8 | xargs -r rm -f
done
echo "Backup completed at ${TIMESTAMP}"
Save the file and make it executable:
chmod +x ~/backup-docker-volumes.sh
How This Script Works
This script does a few important things.
First, it defines the backup directory and generates a timestamp so each backup file gets a unique name.
Then it loops through the selected Docker volumes and mounts each one read-only into a temporary Alpine container. Inside that temporary container, it creates a compressed .tar.gz archive of the volume contents and stores it in the backup folder.
Finally, it rotates the backups by keeping only the newest seven archives for each volume. Older backups are removed automatically.
This makes the process simple, repeatable, and easy to automate.
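The rotation step is plain shell plumbing, so it can be sanity-checked without Docker. A minimal sketch that exercises the same ls/tail/xargs pipeline on throwaway files in a temp directory (not real backups):

```shell
#!/usr/bin/env bash
# Demonstrate the rotation pipeline: create 10 dummy archives,
# then keep only the newest 7 - same command shape as the backup script.
set -e
DEMO_DIR=$(mktemp -d)
for i in 01 02 03 04 05 06 07 08 09 10; do
  touch "$DEMO_DIR/myvolume_${i}.tar.gz"
  sleep 0.01   # distinct modification times so ls -t has a stable order
done
# Newest first; skip the first 7; delete the rest.
ls -1t "$DEMO_DIR"/myvolume_*.tar.gz | tail -n +8 | xargs -r rm -f
ls "$DEMO_DIR" | wc -l   # prints 7
rm -rf "$DEMO_DIR"
```

Running this once makes it easy to trust that tail -n +8 really means "keep seven" before pointing the same pipeline at real backups.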
Step 4: Test the Volume Backup Script Manually
Before automating anything, always test the script manually.
Run:
~/backup-docker-volumes.sh
Then inspect the backup folder:
ls -lah /opt/docker-volume-backups
You should see files similar to:
eneaslaricom_mongodb-data_2026-03-27_20-15-00.tar.gz
eneaslaricom_uploads_2026-03-27_20-15-00.tar.gz
larixgamescom_mongodb-data_2026-03-27_20-15-00.tar.gz
larixgamescom_uploads_2026-03-27_20-15-00.tar.gz
If those files are created successfully, the raw volume backup process is working.
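It is also worth confirming that an archive is readable and actually contains files. tar -tzf lists contents without extracting anything; the sketch below builds a tiny stand-in archive first so it can be run anywhere, but the verification command works the same against a real file in /opt/docker-volume-backups:

```shell
#!/usr/bin/env bash
set -e
# Build a small throwaway archive as a stand-in for a real backup:
SRC=$(mktemp -d)
touch "$SRC/upload1.jpg" "$SRC/upload2.png"
tar -czf /tmp/demo_backup.tar.gz -C "$SRC" .
# The actual verification step - list contents without extracting:
tar -tzf /tmp/demo_backup.tar.gz
rm -rf "$SRC" /tmp/demo_backup.tar.gz
```

An empty or unreadable listing at this point is a much cheaper surprise than one discovered during a restore.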
Why Raw Volume Backups Are Not Enough for MongoDB
At this point, it is important to understand a limitation.
Backing up a MongoDB volume as raw files is possible, but it is not the most reliable restore method. MongoDB uses internal storage formats such as WiredTiger, and raw copies can be more sensitive to:
- MongoDB version changes
- storage engine state
- lock files
- unclean shutdowns
- portability across environments
This does not mean raw volume backups are useless. They are still valuable as an additional safety layer. But for the actual database content, mongodump is the better backup method.
Step 5: Create a MongoDB Backup Script with mongodump
Create a dedicated Mongo backup script:
nano ~/backup-mongo.sh
Paste this:
#!/usr/bin/env bash
set -e
BACKUP_DIR="/opt/docker-volume-backups/mongo"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
mkdir -p "$BACKUP_DIR"
docker exec eneaslari_db mongodump --db eneaslariDB --archive="/tmp/eneaslariDB_${TIMESTAMP}.archive"
docker cp "eneaslari_db:/tmp/eneaslariDB_${TIMESTAMP}.archive" "${BACKUP_DIR}/eneaslariDB_${TIMESTAMP}.archive"
docker exec eneaslari_db rm -f "/tmp/eneaslariDB_${TIMESTAMP}.archive"
ls -1t "${BACKUP_DIR}/eneaslariDB_"*.archive 2>/dev/null | tail -n +8 | xargs -r rm -f
Then make it executable:
chmod +x ~/backup-mongo.sh
How the Mongo Backup Script Works
This script creates a timestamped MongoDB archive dump from the running container eneaslari_db.
It uses mongodump inside the container, writes the archive temporarily into /tmp, copies it out to the backup directory on the host, and then deletes the temporary file from the container.
It also rotates backups by keeping only the seven newest archive files.
This type of backup is especially useful because it captures the database contents in a portable format designed for restore operations.
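One caveat: the script above assumes the Mongo container accepts unauthenticated local connections. If authentication is enabled, mongodump needs credentials. A sketch, where the username, password, and authentication database are placeholders and not values from this setup:

```shell
# mongodump with authentication (all credential values are placeholders).
docker exec eneaslari_db mongodump \
  --db eneaslariDB \
  --username admin \
  --password 'changeme' \
  --authenticationDatabase admin \
  --archive="/tmp/eneaslariDB_${TIMESTAMP}.archive"
```

Avoid committing a script with an inline password to version control; reading it from a file or environment variable kept outside the repository is safer.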
Step 6: Test the Mongo Backup Script
Run the script manually:
~/backup-mongo.sh
Then check the backup folder:
ls -lah /opt/docker-volume-backups/mongo
You should see files like:
eneaslariDB_2026-03-27_20-15-00.archive
If the archive appears, MongoDB backup is working.
Step 7: Automate Both Scripts with Cron
Once both scripts work manually, automate them with cron.
Open the current user’s crontab:
crontab -e
A simple schedule is:
30 3 * * * /path/to/backup-docker-volumes.sh >> /var/log/docker-volume-backup.log 2>&1
45 3 * * * /path/to/backup-mongo.sh >> /var/log/mongo-backup.log 2>&1
Replace /path/to/ with the actual location of the scripts, for example the current user's home directory.
This schedule runs:
- raw volume backups at 3:30 AM
- Mongo logical backups at 3:45 AM
Spacing them slightly apart keeps the two jobs from competing for resources and makes failures easier to attribute in the logs.
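An optional refinement: wrapping each cron command in flock prevents a slow backup from overlapping with the next scheduled run. The script paths are the same placeholders as above:

```shell
# crontab entries using flock; -n skips the run if the previous one
# still holds the lock, instead of starting a second copy.
30 3 * * * flock -n /tmp/volume-backup.lock /path/to/backup-docker-volumes.sh >> /var/log/docker-volume-backup.log 2>&1
45 3 * * * flock -n /tmp/mongo-backup.lock /path/to/backup-mongo.sh >> /var/log/mongo-backup.log 2>&1
```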
Step 8: Check the Logs
After the cron jobs run, inspect the logs:
cat /var/log/docker-volume-backup.log
cat /var/log/mongo-backup.log
These logs help confirm that backups are running correctly and make it easier to catch failures early.
It is a good habit to inspect backup logs occasionally instead of assuming the schedule is working forever.
Restoring a Raw Volume Backup
To restore a file-based volume such as uploads, stop the app first:
docker compose down
Then restore the volume:
docker run --rm \
-v eneaslaricom_uploads:/to \
-v /opt/docker-volume-backups:/from \
alpine \
sh -c "cd /to && find . -mindepth 1 -delete && tar -xzf /from/eneaslaricom_uploads_YYYY-MM-DD_HH-MM-SS.tar.gz -C /to"
After restoring, restart the stack:
docker compose up -d
This method replaces the contents of the volume with the selected backup.
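Before bringing the app back up, it can be reassuring to peek inside the restored volume. A throwaway read-only container is enough; the volume name is the same example as above:

```shell
# List the restored volume's contents via a temporary Alpine container.
docker run --rm -v eneaslaricom_uploads:/data:ro alpine ls -lah /data
```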
Restoring a MongoDB Archive Backup
Mongo restore is cleaner with mongorestore.
First copy the archive into the Mongo container:
docker cp /opt/docker-volume-backups/mongo/eneaslariDB_YYYY-MM-DD_HH-MM-SS.archive eneaslari_db:/tmp/eneaslariDB.archive
Then restore it:
docker exec eneaslari_db mongorestore --drop --archive=/tmp/eneaslariDB.archive
After restore, verify that the data is present:
docker exec eneaslari_db mongosh --quiet --eval "const d = db.getSiblingDB('eneaslariDB'); print(d.articles.countDocuments(), d.projects.countDocuments(), d.users.countDocuments());"
This helps confirm that the restore actually worked before depending on the application interface alone.
Keeping Only On-Server Backups Is Not Enough
Backing up to /opt/docker-volume-backups is a good start, but it is not the final solution.
If the server is lost, the disk fails, or the machine is badly compromised, backups stored only on that same server can disappear too. That means local server backups should be treated as the first layer of protection, not the only layer.
A stronger backup plan should eventually include:
- copying backups to another machine
- syncing them to cloud storage
- using rsync, scp, or a dedicated backup service
- keeping at least one off-server copy
Even a weekly copy to a local machine is far better than keeping all backups only on the same server.
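A minimal off-server copy can be a single rsync invocation, run from cron on the same schedule. The remote user, host, and destination path below are placeholders for illustration:

```shell
# Mirror the backup directory to another machine over SSH.
# user@backup-host and the destination path are placeholders.
rsync -az --delete /opt/docker-volume-backups/ user@backup-host:/backups/myserver/
```

The --delete flag keeps the remote copy in lockstep with local rotation; drop it if the remote side should retain older archives as extra history.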
A Practical Recommendation
For a setup like this, the most practical approach is:
- use mongodump for MongoDB
- use raw .tar.gz backups for uploads and other file-based volumes
- rotate backups automatically
- run them every night with cron
- copy backups off-server regularly
This gives both convenience and resilience.
The raw volume backup helps recover file storage exactly as it existed. The Mongo logical dump makes database restore safer and more portable. Together, they create a much better recovery path than having no backups at all.
Final Thoughts
Backups are one of those things that are easy to postpone when everything is running normally. But the moment something goes wrong, they become the difference between inconvenience and disaster.
For Docker-based projects, protecting volumes should be treated as part of the application itself, not as an extra chore. A clean backup folder, two small scripts, and a cron schedule are enough to create a solid baseline. That alone can save a project from unnecessary loss.
The most important thing is not to wait until after a failure to think about backups. A backup strategy should exist before the next mistake, before the next bad deployment, and before the next unexpected attack.
Because once the data is gone, the value of a backup becomes very clear.