Docker
- i2p front end inaccessible after its container is restarted
I'm experimenting with an i2p and librewolf container setup in Docker Compose. However, the i2p web console front end (127.0.0.1:7657) becomes inaccessible if the container itself is restarted. This can be remedied by removing the directories that get created by the volume mappings in the compose file, but that is obviously not ideal. Anyone have experience with this problem? I've seen hints from people online suggesting that the data in those directories is somehow getting corrupted, but I have not yet investigated that further.
```
version: "3.5"
services:
  i2p_router:
    image: geti2p/i2p:latest
    environment:
      - JVM_XMX=256m
    volumes:
      - ./i2phome:/i2p/.i2p
      - ./i2ptorrents:/i2psnark
    ports:
      - 4444:4444
      - 6668:6668
      - 7657:7657
      - 9001:12345
      - 9002:12345/udp

  libre_wolf:
    image: linuxserver/librewolf
    ports:
      - 9300:3000
      - 9301:3001

volumes:
  i2phome:
  i2ptorrents:

networks:
  frontend:
    driver: bridge
```
- Docker multi-stage builds with Rust
Are you shipping your dev environment? Let me show you how we can use Buildx in Docker to go from 2.5 GB to just 5 MB.
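For context, a rough sketch of the kind of multi-stage Dockerfile this is about, built with something like docker buildx build -t myapp . (the crate name, toolchain tag and musl target are my assumptions, not the author's exact setup):

```Dockerfile
# build stage: the full Rust toolchain (gigabytes of image)
FROM rust:1.79 AS builder
WORKDIR /app
COPY . .
# target musl so the binary is statically linked and can run in an empty image
RUN apt-get update && apt-get install -y musl-tools \
 && rustup target add x86_64-unknown-linux-musl \
 && cargo build --release --target x86_64-unknown-linux-musl

# final stage: nothing but the binary (a few megabytes)
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/myapp /myapp
ENTRYPOINT ["/myapp"]
```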
- Volume empty after mounting
I have a docker compose file with a bind volume. It basically mounts /media/user/drive/media to the container's /mnt.
It works as expected when /media/user/drive/ is mounted and its media folder has the files I want the container to see.
However, as it's a network drive, the container usually tries to start before it is mounted and throws an error that /media/user/drive/media doesn't exist. So, while the drive was not mounted, I created an empty folder called media in /media/user/drive so that the container at least starts with /mnt empty until the network drive gets mounted and all the files appear at /media/user/drive/media.
To my surprise, when the drive gets mounted, the container still sees /mnt as empty, even though ls /media/user/drive/media lists the drive contents correctly.
How would I go about getting the drive files inside the docker container when it automatically starts?
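One thing worth trying, as a hedged sketch based on bind-mount propagation: with the default propagation mode, a filesystem the host mounts after the container has started does not become visible inside the container. Binding a directory above the mount point with the rslave flag is the usual workaround (the service name and image here are placeholders):

```yaml
services:
  mycontainer:
    image: example/app:latest
    volumes:
      # bind one level above the mount point; with rslave, the network drive
      # mounted later at /media/user/drive should appear inside the container,
      # so the app would read /mnt/drive/media
      - /media/user:/mnt:rslave
```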
- Network problems with Plex in docker
I am hoping that you awesome people can help me with something I've noticed in my Plex logs. Quick notes on my set up:
Mini PC running Ubuntu 22.04, portainer, Plex, arrs and calibre all running in docker. All of these except Plex are using a bridge network that I created in portainer. The PC is connected to the router by ethernet cable, and I have set up a static IP in the router settings. I have also added the static IP info into the network settings in Ubuntu.
The following text is repeated over and over in the Plex Media Server log, about 6 seconds apart. My playback is mostly fine, but I have been experiencing buffering. Regardless, this can't be right!
n.b. I did post elsewhere but I feel that this is not necessarily Plex related and you can likely help with this more technical question.
DEBUG - NetworkInterface: received Netlink message len=88, type=RTM_DELADDR, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Netlink address message family=2, index=3, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - Network change.
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Notified of network changed (force=0)
Aug 22, 2024 18:51:49.637 [139016208919352] DEBUG - Network change notification but nothing changed.
- calibre integration with readarr problems in docker
I have readarr and all the other arrs working in Ubuntu with docker and portainer. I followed the TRaSH guides and LinuxServer.io guides to get me this far. I want to expand my book library, so I have added calibre.
After having calibre import my book library, I went to readarr to delete the root, and re-add it with the new path to the calibre library. I am having problems with the Calibre Settings on the Add Root page.
The calibre server is listening at 172.18.0.2, port 8081, HTTP. I have created a user account on the calibre "sharing over the net" page. In readarr, I have set the Calibre Host to 172.18.0.2 and the Calibre Port to 8081. When I click save, I get the error Unknown exception: Http request timed out.
Most of the guides I have found are 3 or 4 years old. One guide set the Calibre Host to calibre. That doesn't work. Setting the Host to the IP of my server doesn't work either.
Can anyone help? I don't know if I have a permissions or firewall problem, or if I am just doing something wrong. The calibre logs are not showing any issues. I have copied the .yaml files used below.
services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    container_name: calibre
    security_opt:
      - seccomp:unconfined
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CLI_ARGS= #optional
    volumes:
      - /data/calibre:/config
      - /data/Media/calibre:/library
      - /data/Media/books:/upload
    ports:
      - 8080:8080
      - 8081:8081
    restart: unless-stopped

services:
  readarr:
    image: lscr.io/linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /data/readarr:/config
      - /data/Media/calibre:/library
      - /data/Media/downloads:/downloads
    ports:
      - 8787:8787
    restart: unless-stopped
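One thing worth checking: if those two files are brought up as separate stacks, the containers land on different default networks and readarr may not be able to reach 172.18.0.2 at all. Attaching both services to one shared, pre-created network lets readarr use the container name as the Calibre Host instead of a bridge IP that can change; a rough sketch (the network name is made up):

```yaml
# add to both compose files (calibre in one, readarr in the other)
services:
  calibre:
    networks:
      - media_net

networks:
  media_net:
    external: true   # create once with: docker network create media_net
```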
- [Question] Docker and Databases: Why choose one over another? Does it matter?
Hi everyone!
Intro
It's been a long ride since I started my first docker container 3 years ago. I've learned a lot: building my custom image with a Dockerfile, loading my own configuration files into the container, getting along with docker-compose, traefik and YAML syntax... and and and!
However, while tinkering with vaultwarden's config and switching to PostgreSQL, there's something that's really bugging me...
Questions
---
- How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?
- Does consistency in database containers make sense? I mean, changing all my containers to ONLY postgres (or MariaDB, whatever)?
- Does it make sense to update the database image regularly? Or is the application bound to a specific version and will it break after any update?
- Can I switch from one to another even if you/devs chose to use e.g. MariaDB? Or is it baked/hardcoded into the application image, and does switching to another database require extra programming skills?
Maybe not directly related to databases but that one is also bugging me for some time now:
- What's redis's role in all of this? I can't for the life of me understand what it does and how it fits between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the redis container logs were dead silent; it seemed very "useless" or inactive from my perspective. I'm always wondering "Hmm, redis... what are you doing here?".
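For what it's worth, the typical wiring looks like the sketch below: the application is pointed at redis and uses it as an in-memory cache/session store in front of the database, so if the app never sends it anything, its logs really do stay quiet. The REDIS_HOST and POSTGRES_* variable names come from the official nextcloud image docs; everything else (passwords, tags) is made up:

```yaml
services:
  nextcloud:
    image: nextcloud:latest
    environment:
      - REDIS_HOST=redis          # where the cache lives
      - POSTGRES_HOST=db          # where the actual data lives
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=example
  redis:
    image: redis:7
  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=example
```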
Thanks :)
- sonarr in docker with seedbox help please - solved
Edit - marking as solved.
- Remote path: /home/seedit4me
- local path: /data
This is now working, I don't know why it wasn't before.
---
I have followed the docs and have the recommended folder structures for my Plex and arrs setup.
sonarr has a volume set as /data, which gives it access to e.g. /data/usenet/downloads. This is working fine with SABnzbd.
I am using a seedbox for torrents. Looking at ruTorrent on the seedbox, I can see that the local download folder there is set to: /home/seedit4me/torrents/rtorrent
sonarr is reporting "No files found are eligible for import in:
- /home/seedit4me/torrents/rtorrent/Completed/tv-sonarr/filename.mkv
I have set a remote path in the download clients page in sonarr as follows:
- Host - ****.seedit4.me
- Remote path: /home/seedit4me
- local path: /data
I have ftp'd the mkv file to the actual folder structure:
- /data/torrents/rtorrent/Completed/tv-sonaar/filename.mkv
The permissions on this file are:
- -rw-rw-r--
the folder permissions are:
- drwxrwxr-x 2 myacct myacct 4096 Aug 2 11:41 .
- drwxrwxr-x 3 myacct myacct
My uid is 1000 (my acct), same for gid. I have set these as the PUID and PGID env variables in sonarr.
The log file in sonarr is reporting: |Error|DownloadedEpisodesImportService|Import failed, path does not exist or is not accessible by Sonarr: /home/(removed)/torrents/rtorrent/Completed/tv-sonarr/filename.mkv
Seeing this, I tried mapping /home/(removed)/ to /data/ but that doesn't work either.
Can anyone guide me on what I am doing wrong? I feel like I've checked everything, so I can't understand the issue at all.
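For anyone comparing notes, the way I understand remote path mappings (this sketch is my own illustration, not from the docs): the mapping only rewrites the path reported by the remote download client, and the rewritten path must then exist inside the sonarr container via a volume. With Remote path /home/seedit4me and Local path /data, a reported /home/seedit4me/torrents/rtorrent/Completed/tv-sonarr/filename.mkv becomes /data/torrents/rtorrent/Completed/tv-sonarr/filename.mkv, so /data has to be mounted:

```yaml
# sonarr compose sketch; the host-side path is a placeholder
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /srv/data:/data   # makes the rewritten /data/torrents/... path resolve in the container
```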
- Renew ssl certificate in docker container using certbot?
I am working on this Django docker project template with the following certbot setup. Dockerfile:
```docker
FROM certbot/certbot:v1.27.0

COPY certify-init.sh /opt/
RUN chmod +x /opt/certify-init.sh

ENTRYPOINT ["/opt/certify-init.sh"]
```
entrypoint:
```bash
#!/bin/sh
set -e

echo "Getting certificate..."

certbot certonly \
  --webroot \
  --webroot-path "/vol/www/" \
  -d "$DOMAIN" \
  --email $EMAIL \
  --rsa-key-size 4096 \
  --agree-tos \
  --noninteractive

if [ $? -ne 0 ]; then
  echo "Certbot encountered an error. Exiting."
  exit 1
fi

# for copying the certificate and configuration to the volume
if [ -f "/etc/letsencrypt/live/${DOMAIN}/fullchain.pem" ]; then
  echo "SSL cert exists, enabling HTTPS..."
  envsubst '${DOMAIN}' < /etc/nginx/nginx.prod.conf > /etc/nginx/conf.d/default.conf
  echo "Reloading Nginx configuration..."
  nginx -s reload
else
  echo "Certbot unable to get SSL cert, serving HTTP only..."
fi

echo "Setting up auto-renewal..."
apk add --no-cache dcron
echo "0 12 * * * /usr/bin/certbot renew --quiet" | crontab -
crond -b
```
The problem with this setup is that certbot exits after the initial run of getting the certificate, and when it's time to renew, it requires manual intervention.

Now there are two choices:

- Set restart: unless-stopped in the docker compose file so it keeps restarting the container, with a cron job to renew the certificate when required.
- Set a cron job on the host machine to restart the container.

Are there any other options to tackle this situation?
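A third option I've seen used (a sketch, not part of the template in question; the paths are assumptions): keep the certbot container alive with a renewal loop as its entrypoint, so neither a host cron job nor container restarts are needed:

```yaml
services:
  certbot:
    image: certbot/certbot:v1.27.0
    volumes:
      - ./certbot/www:/vol/www
      - ./certbot/conf:/etc/letsencrypt
    # `certbot renew` is a no-op until certificates approach expiry,
    # so looping twice a day is cheap
    entrypoint: /bin/sh -c 'trap exit TERM; while true; do certbot renew --webroot -w /vol/www; sleep 12h & wait $${!}; done'
    restart: unless-stopped
```

nginx would still need a reload after a successful renewal, which this sketch doesn't cover.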
- Docker Desktop 4.32: Beta Releases of Compose File Viewer, Terminal Shell Integration, and Volume Backups to Cloud Providers (www.docker.com)
Discover the powerful new features in Docker Desktop 4.32, including the Compose File Viewer, terminal integration, and enterprise-grade volume backups, designed to enhance developer productivity and streamline workflows.
- Release v27.0.2 · moby/moby · GitHub (github.com)
27.0.2 For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones: docker/cli, 27.0.2 milestone moby/moby, 27.0.2 milestone Deprecated and removed featur...
cross-posted from: https://lazysoci.al/post/15099881
> A surprise Docker update!
- How to hot deploy?
I want to run Tomcat in docker and hot deploy my Java stuff, as was requested here 11 years ago: https://stackoverflow.com/questions/31246526/how-to-hot-deploy-java-ee-applications-in-docker-containers and as has recently been implemented by IntelliJ.
Any directions?
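One low-tech approach, sketched with an assumed image tag and host path (not taken from the linked answer): bind-mount your build output into Tomcat's webapps directory and let its default auto-deploy pick up WAR files as they change:

```yaml
services:
  tomcat:
    image: tomcat:10.1
    ports:
      - 8080:8080
    volumes:
      # Tomcat watches webapps/ and redeploys WARs that change there
      - ./target:/usr/local/tomcat/webapps
```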
- How can I use SMB share in Docker Compose for storage?
I am running Fedora Server with Docker installed, and it has a folder that connects to my NAS via SMB. I will have all of my Docker files (and Compose configs) stored on my NAS, since it has a lot more storage. I am worried that Docker will glitch out and cause a mess, since my NAS starts ~2 minutes later than my server from a reboot. Is there something that I can do to make sure Docker is able to connect to the SMB share safely?
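One option, if it fits your setup (a sketch; the share path, credentials and mount options are placeholders): let Docker mount the SMB share itself as a named volume, so the mount happens when the container starts rather than depending on the host's share being up:

```yaml
volumes:
  nas_data:
    driver: local
    driver_opts:
      type: cifs
      device: "//nas.local/docker"
      o: "username=myuser,password=mypass,vers=3.0,uid=1000,gid=1000"

services:
  someapp:
    image: example/app:latest
    volumes:
      - nas_data:/data
```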
- Is there a way to export a Docker container's configuration?
I have a NAS where I tested to see if some apps could run on it without a server. They overloaded the CPU, so I am now wanting to move them over to a more powerful workstation. I'm used to Compose files/configs, but it seems that my NAS uses plain Docker. Is there a way to extract the configs/long terminal setup commands? I have Portainer installed, if that makes it easier.
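docker inspect can get you most of the way to reconstructing a Compose service by hand; a rough sketch of the fields I'd pull (the container name is a placeholder):

```bash
# image, environment, mounts, published ports and restart policy of an existing container
docker inspect --format '{{.Config.Image}}' mycontainer
docker inspect --format '{{json .Config.Env}}' mycontainer
docker inspect --format '{{json .Mounts}}' mycontainer
docker inspect --format '{{json .HostConfig.PortBindings}}' mycontainer
docker inspect --format '{{json .HostConfig.RestartPolicy}}' mycontainer
```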
- Release v26.1.4 · moby/moby · GitHub (github.com)
26.1.4 For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones: docker/cli, 26.1.4 milestone moby/moby, 26.1.4 milestone Deprecated and removed featur...
cross-posted from: https://lazysoci.al/post/14373858
> cross-posted from: https://lazysoci.al/post/14373856 > > > Docker got updated.
- Thanks to you all, I did it!
cross-posted from: https://lazysoci.al/post/14279205
> I built my first image locally and now I'm dancing around my desk to myself in satisfaction. I was anxious AF and so that meant I had a million extra questions along the way and everyone helped me. I'm truly grateful. Thanks for teaching me/holding my hand. I can't put into words my gratitude, but truly, thank you so so much.
- Please teach me about images
cross-posted from: https://lazysoci.al/post/14145485
> There's a service that I want to use, however for reasons, it no longer has any builds available. Consequently, I am thinking of building it myself. How does one go about doing that and then afterwards, how do I get it up on Docker hub? Can I just create an account and upload?
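The basic flow, for reference (the image name is a placeholder, and building assumes the project ships a Dockerfile):

```bash
# build the image locally, then publish it to a Docker Hub account
docker build -t youruser/theservice:latest .
docker login                         # yes, you just create an account and log in
docker push youruser/theservice:latest
```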
- Shared Volumes Question
cross-posted from: https://lazysoci.al/post/13966618
> If I have
>
> version: "3.8"
>
> services:
>   example1:
>     image: example.com/example1:latest
>     ports:
>       - 8000:80
>     volumes:
>       - shared_example:/data
>   example2:
>     image: example.com/example2:latest
>     ports:
>       - 8080:80
>     volumes:
>       - shared_example:/data
>
> volumes:
>   shared_example:
>     driver_opts:
>       type: nfs
>       o: "192.100.1.100, nolock,soft,rw"
>       device: ":/local/shared"
>
> Will that slow things down, or is the proper solution to have
>
> volumes:
>   shared_example1:
>     driver_opts:
>       type: nfs
>       o: "192.100.1.100, nolock,soft,rw"
>       device: ":/local/shared"
>   shared_example2:
>     driver_opts:
>       type: nfs
>       o: "192.100.1.100, nolock,soft,rw"
>       device: ":/local/shared"
>
> Or even
>
> volumes:
>   shared_example1:
>   shared_example2:
>     driver_opts:
>       type: nfs
>       o: "192.100.1.100, nolock,soft,rw"
>       device: ":/local/shared"
- [Nevermind] Is it possible to move a service to a new network and stack within the same compose file
I have my main compose file which has a bunch of services in and while it makes it easier to manage, it's also limiting when I wanna use postgres:// to access a database rather than exposing a port. I'm wondering if I can remedy this by moving it to a new network and(?) stack?
If so, is it just as simple as adding

```
networks:
  - new network name
stacks:
  - new stacks name
```
I'm still curious as to the answer, but it's not something I need.
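For reference, the part that usually matters is just the network: services that share a user-defined network can reach each other by service name (e.g. postgres://db:5432/...) without publishing the port to the host. A minimal sketch with made-up names (as far as I know, stacks: isn't a compose key; stacks are a Portainer/Swarm-level concept):

```yaml
services:
  db:
    image: postgres:16
    networks:
      - db_net
  app:
    image: example/app:latest
    networks:
      - db_net

networks:
  db_net:
    driver: bridge
```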
- Release v26.1.1 · moby/moby · GitHub (github.com)
26.1.1 For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones: docker/cli, 26.1.1 milestone moby/moby, 26.1.1 milestone Deprecated and removed featur...
- Release 2.19.5 · portainer/portainer · GitHub (github.com)
2.19.5 See Upgrading Portainer instructions. Overview of changes New Portainer CE 2.19.5 release Portainer Resolved CVE-2024-29296 by creating uniform response time for login attempts
- Get started with the latest updates for Dockerfile syntax (v1.7.0) | Docker (www.docker.com)
Dockerfiles are fundamental tools for developers working with Docker, serving as a blueprint for creating Docker images. Learn about new Dockerfile (v1.7.0) capabilities and how you can leverage them in your projects to further optimize your Docker workflows.
- unable to open the database file (SQLITE_CANTOPEN)
I have a Bluesky PDS running successfully. Now I'm trying to set up GoToSocial, an ActivityPub server that also uses sqlite. When I run
sudo docker compose up -d
I get the following error in the docker log for GoToSocial:
Error executing command: error creating dbservice: sqlite ping: Unable to open the database file (SQLITE_CANTOPEN)
Is this more likely to be a conflict between the two docker applications or something specific to GoToSocial? (I've gone through the sqlite issues I've been able to find in GoToSocial's GitHub.)
If something to do with running sqlite in two containers, do you have any tips to resolve the issue?
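The first thing I'd rule out is generic rather than GoToSocial-specific (paths and names below are placeholders): whether the bind-mounted database directory is writable by the UID the container runs as.

```bash
# owner/permissions of the host directory holding sqlite.db
ls -ln /path/to/gotosocial/data
# the user the container is configured to run as (empty means root)
docker inspect --format '{{.Config.User}}' gotosocial
# if they don't line up, chown the host directory to that UID:GID
sudo chown -R 1000:1000 /path/to/gotosocial/data
```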
- Confused about image digests
I've been thinking about writing a script that would alert me if there was an updated version of an image I was running.
DockerHub shows an image digest on the page for that tag.
And I can extract the digest for an image I am running with:
docker inspect --format='{{index .RepoDigests 0}}' jc21/nginx-proxy-manager:latest
This matches the one from the DockerHub screenshot. But I can't see a CLI way to get the image digest from a registry. It seems like:
docker manifest inspect jc21/nginx-proxy-manager:latest
should do it, but it pulls out the digest of each of the architecture builds for that tag instead of the one shown in dockerhub.
Is there a way to compare the current local image with one in a registry from the command line? Or perhaps there's a more sensible way to do this?
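One command-line way I know of to read the digest the registry advertises for a tag (the multi-arch manifest-list digest, which is what the DockerHub page shows) without pulling anything, for comparison with the local RepoDigests value:

```bash
# ships with recent Docker; prints the manifest-list digest plus per-arch entries
docker buildx imagetools inspect jc21/nginx-proxy-manager:latest

# local digest, as in the post, for comparison
docker inspect --format='{{index .RepoDigests 0}}' jc21/nginx-proxy-manager:latest
```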
- Sudden Issues
cross-posted from: https://lazysoci.al/post/12093283
> Suddenly, things aren't loading properly. For example, Heimdall takes forever to load and Navidrome is timing out.
>
> When I do
docker-compose pull
> it says
> Error response from daemon: Get "https://registry-1.docker.io/v2/" net/http: request cancelled while waiting for connection (Client.Timeout exceeded while awaiting headers)
>
> Anyone know what's up or how to fix it?
- [Wishlist] BOINC Screensaver with WebUI
I have found several docker containers that allow you to run a BOINC server as a container, but I haven’t seen any that give you access to the BOINC screensaver. My ideal would be if the screensaver itself was shown on a dedicated port so it could be used easily as a display without any controls popping up.
Sadly, I have no idea how to make this or how to get a custom container made. I guess I’m just throwing this idea out to the universe in hope that it happens someday.
- [Solved] Docker not accepting environment variable
I'm trying to create a postgres container, I have the following in my Docker Compose:
db:
  container_name: db
  image: postgres
  restart: always
  environment:
    #POSTGRES_USER="postgres"
    POSTGRES_PASSWORD: HDFnWzVZ5bGI
  ports:
    - 5432:5432
  volumes:
    - pgdata:/var/lib/postgresql/data

adminer:
  container_name: adminer
  image: adminer
  restart: always
  ports:
    - 8338:8080
And yet Docker keeps saying that the database is uninitialized and that the superuser password is not specified. Where am I going wrong?
I've tried with and without equals, a hyphen, quotation marks. No matter what I try, it won't see it.
# Solution:

Find:

volumes:
  - pgdata:/var/lib/postgresql/data

Replace with:

volumes:
  - /opt/postgres/data:/var/lib/postgresql/data
More info: https://lazysoci.al/comment/8597610
- How to Detect Cache Misses Using Observability - Digma (digma.ai)
In this article, we'll examine cache misses using observability, learn about the caching concept and how to implement it in Spring Boot.
cross-posted from: https://programming.dev/post/11703185
> cross-posted from: https://programming.dev/post/11703178 > > > In this article, we’ll examine cache misses and, in general, learn about the caching concept and how to implement it in Spring Boot.
- 100+ Docker Concepts you Need to Know - Fireship
YouTube Video
Note: video sponsored by Docker
- Release v25.0.4 · moby/moby · GitHub (github.com)
v25.0.4 For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones: docker/cli, 25.0.4 milestone moby/moby, 25.0.4 milestone Deprecated and removed featu...
- Looking for feedback/review on my project starter template (DRF + Nuxt + Docker compose) (github.com: pcouy/drf-nuxt-template)
An opinionated Django project template including DRF for the back-end, Nuxt for the front-end and docker-compose files - pcouy/drf-nuxt-template
cross-posted from : https://lemmy.pierre-couy.fr/post/350920
> I am trying to come up with a reusable template to quickly start new projects using my preferred tools and frameworks, and I'm happy with what I got. However, using Docker is quite new for me and I've probably done some weird or unconventional stuff in my docker-compose.yml or my Dockerfiles. I'd love to learn from people with more experience with Docker, so feel free to tell me everything that is wrong with my setup.
>
> I'm more confident about the stuff I did with Python/Django and Nuxt, but all criticism is welcome. This also applies to the readme: I'd like to provide detailed instructions about working with this project template, so please report anything that is unclear or missing.
>
> Thank you to anyone who takes the time to check it out and help me make this useful to as many people as possible.
- A working volume on a single file, not a directory?
Is it possible to create a volume that is a file, not a directory?
I am trying to make a simple structured nginx instance with docker. Using this command below to create the container...
docker container create -v ./nginx.conf:/etc/nginx/conf.d/nginx.conf -v .:/app/ -p 80:80 docker.io/nginx
And this is the file structure of it...
```
- www
  - public
    - index.html
- nginx.conf
```
And this is the nginx.conf
```
server {
    server_name localhost;
    listen 80;

    root /app/www/public;

    index index.html index.htm;
    autoindex on;
}
```
However, the index.html is not served when I go to localhost.
When I change the docker command to the following, it does work, but it also mirrors all of the files and folders from my file structure into the container's /etc/nginx/conf.d/ directory:
docker container create -v .:/etc/nginx/conf.d/ -v .:/app/ -p 80:80 docker.io/nginx
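A single-file bind mount is supported, so the first command should work in principle; one variant I'd try (the target filename is my assumption, since the stock image ships a default.conf that also listens on port 80):

```
docker container create -v ./nginx.conf:/etc/nginx/conf.d/default.conf:ro -v .:/app/ -p 80:80 docker.io/nginx
```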
- Effective Coding with Java Observability - Digma (digma.ai)
Learn how to implement and use observability when coding in Java. Discover how to effectively code with observability in java.
cross-posted from: https://programming.dev/post/10005452
> cross-posted from: https://programming.dev/post/10005448 > > > Things you can do right now to learn new and valuable things that can improve your code.
- See 2-10x Faster File Operation Speeds with Synchronized File Shares in Docker Desktop (www.docker.com)
Learn about the latest Docker Desktop feature, synchronized file shares, which provides native file system performance, improving file operation speeds by 2-10x.
- Streamline Dockerization with Docker Init GA (www.docker.com)
The Docker team announces the general availability of docker init, with support for multiple languages and stacks, making it simpler than ever to containerize your applications.
- Does a docker image minimizer like this exist?
I am looking for something that can take a Dockerfile, like the following as an input:
---
```Dockerfile
FROM --platform=linux/amd64 debian:latest
ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y curl unzip libsecret-1-0 jq

COPY entrypoint.sh .
ENTRYPOINT [ "/entrypoint.sh" ]
```
---
And produce a multi-stage Dockerfile where the last stage is built from scratch, with the dependencies for the script in the ENTRYPOINT (or CMD) copied over, like this:

---
```Dockerfile
FROM --platform=linux/amd64 debian:latest as builder
ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y curl unzip libsecret-1-0 jq

FROM --platform=linux/amd64 scratch as app
SHELL ["/bin/bash"]

# the binaries executed in entrypoint.sh
COPY --from=builder /bin/bash /bin/bash
COPY --from=builder /usr/bin/curl /usr/bin/curl
COPY --from=builder /usr/bin/jq /usr/bin/jq
COPY --from=builder /usr/bin/sleep /usr/bin/sleep

# shared libraries of the binaries
COPY --from=builder /lib/x86_64-linux-gnu/libjq.so.1 /lib/x86_64-linux-gnu/libjq.so.1
COPY --from=builder /lib/x86_64-linux-gnu/libcurl.so.4 /lib/x86_64-linux-gnu/libcurl.so.4
COPY --from=builder /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libz.so.1

# ...a bunch of other shared libs...

# entrypoint
COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT [ "/entrypoint.sh" ]
```
---
I've had pretty decent success creating images like this manually (using ldd to find the dependencies) based on this blog. To my knowledge, there's nothing out there that automates producing an image built from scratch, specifically. If something like this doesn't exist, I'm willing to build it myself.
- Release v25.0.2 · moby/moby · GitHub (github.com)
25.0.2 For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones: docker/cli, 25.0.2 milestone moby/moby, 25.0.2 milestone Security This release contai...
- Is it possible to inherit declarations from one Docker-Compose.yaml to another?
I just installed Immich, and while all my other containers have just required me to add them to the existing yaml, Immich requires its own yaml. That's fine I guess, but I want to host the library on my NAS, so I made the volume in my main Docker-Compose.yaml. The Immich yaml was all like, "what you talking about Willis?" because in my Immich environment I tried to point to something created in my main yaml. I thought I could work around this by adding an empty volume declaration, but now I can't find my uploads 😂 Any idea on the correct methodology/workaround?
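If the volume really does live in the main stack, one approach (a sketch; the volume name is a guess, since compose prefixes names with the project name, so check docker volume ls; the container path is taken from Immich's sample compose) is to declare it as external in the Immich file instead of an empty declaration:

```yaml
volumes:
  mainstack_media:
    external: true   # use the volume created by the other compose project as-is

services:
  immich-server:
    volumes:
      - mainstack_media:/usr/src/app/upload
```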
- 8 Top Docker Tips & Tricks for 2024 (www.docker.com)
Whether you’re a Docker expert or new to the Docker community, you may be wondering about the best ways to optimize or get started quicker on Docker. Docker Captain Vladimir Mikhalev rounds up top Docker tips to help you supercharge developer productivity in 2024.
- What made you first install Docker?
How did your journey begin? I feel like Docker has been getting mentioned for aeons and I've only just started using it now.
- Can't grasp the idea that I need more than an adblocker for docker.
I know this seems like a typical "Your opinion, man" post, but honestly, is there anything really useful beyond hosting an adblocker? (Read: complementary, not found elsewhere, something that enhances one's personal needs and/or life as a whole.) And yes, I'm aware of what can be hosted -- picture editors, video editors, webtops, dhcpcd, emulators, and so on.
But the question is -- why would I need all that if this stuff can be found on any ordinary PC out there? Even phones. Hell, even on a smart TV. That is like "trying to reinvent the wheel" for no necessary purpose other than "to look cool". There's also "because it's fun", but is it really fun doing pretty much the same thing over and over again? There isn't a "learning gap" between these hosting options -- all of them have (pretty much) the same procedure to get things running.
With that said... I've been trying really, REALLY hard to host more stuff but I can't go beyond hosting (only) an adblocker.