Docker is a container manager, but that doesn't mean much if you don't know what containers are.
Containers are basically isolated apps. Take Nextcloud, for example. Nextcloud can run in a Docker container, which means it runs in an isolated environment separated from the rest of the user's system.
If Nextcloud breaks, the rest of the server isn't affected, because the app is running in isolation.
Why is this useful? Mainly because you no longer have to manage the app's dependencies yourself. Nextcloud, for example, depends on PHP. If you install Nextcloud directly on your server, you need to make sure PHP 8 and the required PHP extensions are installed and set up properly, or Nextcloud won't work. And if a future Nextcloud update requires a newer PHP version (PHP 9 or 10 down the line), you'll have to upgrade PHP manually as well.
All of that dependency management disappears with containers. The container image ships with a complete, properly configured environment for the app. In Nextcloud's case, the PHP binaries, extensions, and everything else are already included, so you don't have to set any of it up yourself. Run one command to pull the new image and your entire Nextcloud instance is updated.
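As a rough sketch of what that one command looks like (assuming a Compose-managed setup; the service name "nextcloud" is just an example), updating is something like:

    # pull the newer image published by the maintainers
    docker compose pull nextcloud
    # recreate the container from the new image; data in volumes is kept
    docker compose up -d nextcloud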
Also, if server software running in a container gets compromised, the container will hopefully keep the compromise from spreading to the rest of the system.
If the container has no external volumes and sits on its own network without any other containers, malware inside it shouldn't be able to reach or affect the host server, because it's isolated.
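For example (just a sketch; the names are made up), you can put a container on its own user-defined network so it can only talk to whatever you explicitly attach to that network:

    # create a dedicated network for this one app
    docker network create nextcloud-net
    # run the container attached only to that network
    docker run -d --name nextcloud --network nextcloud-net nextcloud

If you want to cut it off from networking entirely, --network none does that.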
Even with external volumes, I don't think there's any mechanism for a container to escape a bind mount and affect the rest of the host filesystem. I use bind mounts all the time, far more than Docker volumes.
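For reference, the difference is just how you specify the mount (the paths here are placeholders):

    # bind mount: a host directory mapped into the container
    docker run -d -v /srv/nextcloud/data:/var/www/html/data nextcloud
    # named volume: storage created and managed by Docker itself
    docker run -d -v nextcloud_data:/var/www/html/data nextcloud

Either way, the container only sees what's mounted at that path, not the rest of the host filesystem.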
How does the container know what's safe to update? Nextcloud (in this example) may need to stay on a specific version of some package, and updating everything would break it.
The Dockerfile used to build the image controls exactly what ends up in the container. It's "infrastructure as code": you write a script that builds the environment the application needs.
If you need a newer version of PHP, you update the Dockerfile to use the new version and publish a new image.
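A stripped-down sketch of what that could look like (this is not the real Nextcloud Dockerfile, just an illustration):

    # base image pins the PHP version; bumping it is a one-line change
    FROM php:8.2-apache
    # install the PHP extensions the app needs
    RUN docker-php-ext-install pdo_mysql bcmath
    # copy the application code into the image
    COPY . /var/www/html

If a new release needed PHP 8.3, the maintainer would change the FROM line to php:8.3-apache, rebuild, and publish the new image. As a user, you just pull it.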
I only use Docker images supplied by the devs themselves or community-maintained ones (e.g. LinuxServer.io), so they essentially tell Docker what needs to be installed in the container, not me. It takes the hassle out of figuring out what I need to do to get the service running. If they update their app, they'll know best what else needs updating and will do that in the image. You are relying on them to keep everything updated, but they're far more knowledgeable than me, and if there is a vulnerability, it's confined to that one container rather than your other services.
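In practice that just means pointing your compose file at their image (the image name is real; the host paths are illustrative and the exact mount points depend on the image's docs):

    services:
      nextcloud:
        image: lscr.io/linuxserver/nextcloud:latest
        volumes:
          - /srv/nextcloud/config:/config
          - /srv/nextcloud/data:/data

Everything inside the image (PHP version, extensions, web server config) is the maintainers' responsibility; you just pull new tags as they publish them.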