Currently I’m planning to dockerize some web applications, but I haven’t found a reasonably easy way to create the images to be hosted in my repository so I can pull them onto my server.
What I currently have is:
A local computer with a directory where the application that I want to dockerize is located
A “docker server” running Portainer without shell/ssh access
A place where I can upload/host the Docker images and where I can pull the images from on the “Docker server”
Basic knowledge on how to write the needed Dockerfile
What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer.
Ideally something where I can build the images and upload them but without that something “littering Docker-related files all over my system”.
Something like a VM that resets on every start maybe? So … build the image, upload to repository, close the terminal window, and forget that anything ever happened.
What is YOUR solution to create and upload Docker images in a clean and sane way?
Careful, this will also delete your unused volumes (a volume whose container is stopped for whatever reason counts as unused, since it isn’t attached to a running container). For this reason alone, always use bind mounts for volumes you care about.
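For example (a minimal sketch; the host path and image name are just placeholders), a bind mount keeps the data in a host directory you control, so no prune command can classify it as an unused volume:

```
# Bind mount: data lives at a host path you chose and is never touched
# by volume pruning. Paths and image name are illustrative.
docker run -d \
  -v /srv/myapp/data:/var/lib/myapp \
  myapp:latest
```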
I use Gitea and a Runner to build Docker images from the projects in the git repo. Since I'm lazy and only have one machine, I just run the runner on the target machine and mount the docker socket.
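Roughly like this, if the runner itself runs as a container (a sketch only; the instance URL and registration token are placeholders for your own setup):

```
# Run the Gitea act_runner with the host's Docker socket mounted, so the
# jobs it starts can build images using the host daemon.
docker run -d --name gitea-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e GITEA_INSTANCE_URL=https://git.example.com \
  -e GITEA_RUNNER_REGISTRATION_TOKEN=<token> \
  gitea/act_runner:latest
```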
BTW: If you manage to "litter your system with docker related files", you have fundamentally misused Docker. That's exactly what Docker is supposed to prevent.
Self-hosting your own CI/CD is the key for OP. Littering is solved too, because litter is only a problem on long-running servers, which are an anti-pattern in a CI/CD environment.
I already have Forgejo (a soft fork of Gitea) in a Docker container. I guess I need to check how I can access the exact same Docker server it is itself hosted on …
By littering I mean the various Docker dotfiles and dot-directories in the user’s home directory and other system-wide locations. When I installed Docker on my local computer, it created various images, containers, and volumes whenever I built an image.
This is what I want to prevent. I neither want nor need a fully featured Docker environment on my local computer.
Maybe you should read up a bit on how Docker works; you seem to misunderstand a lot here.
For example, the "various images" are kind of the point of Docker. Images are layered, and intermediate layers can show up as images in their own right, so you might end up with 3 or 4 images despite only building one.
This is something you can't really prevent. It's just how Docker works.
Anyway, you can mount the Docker socket into a container, and through that socket you can then build an image from within the running container. That's essentially how most CI/CD systems work.
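A sketch of that pattern using the official docker CLI image (registry and image names are placeholders):

```
# Build from inside a container by talking to the host daemon through the
# mounted socket; the resulting image lands in the host's image store.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/workspace -w /workspace \
  docker:cli \
  docker build -t registry.example.com/myapp:latest .
```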
You could maybe look into Podman and Buildah; as far as I know, these can build images without a running Docker daemon. That might be a tad "cleaner", but it comes with other problems (like no caching).
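If you go that route, the daemonless equivalent looks roughly like this (registry and image name are placeholders):

```
# Daemonless build and push with Buildah; "podman build" / "podman push"
# work the same way. Assumes a Dockerfile in the current directory.
buildah bud -t registry.example.com/myapp:latest .
buildah push registry.example.com/myapp:latest
```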
Do you mean that you want to build the Docker image on one computer, export it to a different computer where it's going to run, and leave no traces of the build process on the first computer? Perhaps it's possible with the --output option. Otherwise you could write a small script that combines the commands for docker build, export to file, delete the local image, and clean up the system.
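Something along these lines (a sketch; the image name and tarball path are placeholders):

```
#!/usr/bin/env sh
# Build, export to a tarball, then remove the local traces of the build.
set -e
IMAGE=myapp:latest
docker build -t "$IMAGE" .          # build from the Dockerfile here
docker save -o myapp.tar "$IMAGE"   # export the image to a file
docker rmi "$IMAGE"                 # delete the local image
docker system prune -f              # clean up build cache and dangling layers
```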
For local testing: build and run tests on whatever computer I'm developing on.
For deployment: I have a self-hosted GitLab instance in a Kubernetes cluster. It comes with a registry already set up. Push the project, then let the CI/CD pipeline build, test, and deploy through staging into prod.
GitLab has a great set of CI tools for deploying Docker images, and it includes an internal image registry automatically tied to your repo and available in CI.
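Inside a pipeline job, pushing to that built-in registry boils down to something like this (a sketch; the CI_REGISTRY_* variables are predefined by GitLab, the tag scheme is up to you):

```
# Log in to the project's registry and push the freshly built image,
# using GitLab's predefined CI variables.
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```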
I build, configure, and deploy them with Nix flakes for maximum reproducibility. It’s the way you should be doing it for archival purposes. With this tech, you can rebuild any Docker image identical to today’s in 100 years.
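Assuming the flake exposes an image output built with something like pkgs.dockerTools.buildImage (the attribute name below is just an example), using it looks roughly like:

```
# Build the image derivation from the flake, then load the resulting
# tarball into Docker. ".#dockerImage" is an illustrative attribute name.
nix build .#dockerImage
docker load < result
```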
Nowadays, I build them locally and upload stable releases to a registry. I have in the past used GitHub runners to do it, but building locally is just easier and faster for testing.
For development, I have a single image per project tagged "dev" running locally in WSL that I overwrite over and over again.
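In practice that's just rebuilding the same tag in place on every iteration (project name and port are placeholders):

```
# Overwrite the "dev" tag with the latest build and run it.
docker build -t myproject:dev .
docker run --rm -p 8080:8080 myproject:dev
```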
For real builds, I use pipelines on my Azure DevOps server to build the image on an agent via a remote BuildKit container and push it to my internal registry. All three components are hosted in the same Kubernetes cluster.
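A sketch of the remote-BuildKit part using the buildx remote driver (endpoint, builder name, and image tag are placeholders):

```
# Register a remote BuildKit endpoint as a buildx builder, then build and
# push directly to the registry.
docker buildx create --name remote-builder --driver remote tcp://buildkitd.internal:1234
docker buildx build --builder remote-builder \
  -t registry.internal/myapp:1.0.0 --push .
```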