
We can use the following command to create a dump of your entire database, all collections included. This assumes your source database is (also) running in a container, but the command would be roughly identical if your source database is installed directly on your (virtual) machine too.

Method one: Mongodump's default individual collection dump files

By default a mongodump export generates individual files, one for each collection. If this is the method you'd like to use, we won't be able to rely on shell piping to get the files directly out of our database container. For this we can use an intermediary step of storing the dumped database files in a directory within the container, which we can then copy out to our host using docker cp – or if you already have a host directory mounted as a volume, you can use that and skip the docker cp step altogether, of course.

Here's an example that dumps your entire database to the /dump directory within your container:

❯ docker exec -i <container name> /usr/bin/mongodump --username <username> --password <password> --authenticationDatabase admin --db <database> --out /dump

Or if you use a mongodb+srv or similar URL to connect to your database instead:

❯ docker exec -i <container name> /usr/bin/mongodump --uri "<connection url>" --out /dump

Now that we have the resulting files we can copy them out of the container and to our host. Here I am using the ~/Downloads destination as an example:

❯ docker cp <container name>:/dump ~/Downloads/dump
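To make the placeholders concrete, here is a hypothetical end-to-end run of method one; the container name my-mongo, the user appuser, the password s3cret and the database mydb are made-up values for illustration, so substitute your own:

# dump the mydb database into the /dump directory inside the my-mongo container
❯ docker exec -i my-mongo /usr/bin/mongodump --username appuser --password s3cret --authenticationDatabase admin --db mydb --out /dump
# copy the resulting dump directory out of the container onto the host
❯ docker cp my-mongo:/dump ~/Downloads/dump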

Method two: Pipe results directly out of the container into a single dump

There is a flag we can use that changes the behavior of mongodump to instead send everything out as a single archive, either to a file or to stdout, which we can use to directly pipe everything we need out of the container into a file on our host machine – or indeed onto whatever machine we are running these container commands from. This is what that would look like:

❯ docker exec -i <container name> /usr/bin/mongodump --username <username> --password <password> --authenticationDatabase admin --db <database> --archive > ~/Downloads/mongodb.dump

Or again, if you are using a connection URL:

❯ docker exec -i <container name> /usr/bin/mongodump --uri "<connection url>" --archive > ~/Downloads/mongodb.dump

Keep in mind that in these examples, the ~/Downloads/mongodb.dump file will be created on the host machine, or at least on whatever machine you are running these commands from, not within the container.
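Using the same hypothetical my-mongo container, appuser account and mydb database as before, a filled-in version of the piped archive dump might look like this:

# --archive without a filename writes the whole dump to stdout,
# which the shell then redirects into a single file on the host
❯ docker exec -i my-mongo /usr/bin/mongodump --username appuser --password s3cret --authenticationDatabase admin --db mydb --archive > ~/Downloads/mongodb.dump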
Now that we have the database dump files ready, let's go ahead and import (or, restore) them. Now we can basically do the exact opposite in our new container. Whether this is a fresh and new container or you're restoring a backup, the process is the same.

Ensure a user is created first

Before we continue, we should make sure we have a database user account ready with read and write access to the database you want to restore into. If you don't have one already, let's go ahead and create one now. You can run the following in your MongoDB (CLI) client of choice to create a new user. A prompt will appear after running this command in which you can specify the user's password.
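The exact command isn't shown here, but a minimal sketch of such a createUser call in mongosh, assuming a made-up appuser account and a target database called mydb, could look like this:

// switch to the admin database, which is where the user record will live
use admin
// passwordPrompt() asks for the password interactively instead of putting it in the command
db.createUser({
  user: "appuser",
  pwd: passwordPrompt(),
  roles: [ { role: "readWrite", db: "mydb" } ]   // read and write access to the target database only
})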

One part that seems somewhat confusing, and that threw me off previously, is that by default when you try to connect to a MongoDB server it expects admin to be the database in which the user exists – even if said user has no access permissions for this database. There are ways to change this behavior, but that is outside the scope of this guide. Since the roles you assign determine which database(s) the user actually has access to, the user can exist in the admin database without having permission to do anything there, so it seems to not be that big of a deal. Slightly confusing, but that seems to be the Mongo way.
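In practice this just means passing admin as the authentication database (or authSource) when you connect, while your data still lives in your own database. A hypothetical mongosh connection for the appuser account sketched above might look like either of these:

❯ mongosh "mongodb://localhost:27017/mydb" --username appuser --authenticationDatabase admin
# or, equivalently, as a single connection string
❯ mongosh "mongodb://appuser@localhost:27017/mydb?authSource=admin"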

Restore Method one: Mongodump's default individual collection dump files

If you used the first method and have multiple individual files that make up your database dump, then we should first make the dumped database files available within the docker container by copying them into the container:

❯ docker cp ~/Downloads/dump <container name>:/dump
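Once the files are inside the container, the dump can be handed to mongorestore. A minimal sketch, again using the hypothetical my-mongo container, appuser account and mydb database from the earlier examples:

# restore the BSON files from /dump/mydb inside the container into the mydb database
❯ docker exec -i my-mongo /usr/bin/mongorestore --username appuser --password s3cret --authenticationDatabase admin --db mydb /dump/mydb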
