There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.
But now I am curious, what are your counts? I would guess those of you running k*s would win out by pod scaling
docker ps | wc -l
For those wanting a quick count.
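(Note that includes the header line, so it’s one higher than your actual count. docker ps -q | wc -l lists just container IDs if you want the exact number, and docker ps -aq | wc -l counts stopped ones too.)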
I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.
I know using work as an example is cheating, but around 1400-1500 to 5000-6000 depending on load throughout the day.
At home it’s 12.
I was watching a video yesterday where an org was churning through 30K containers a day because they didn’t profile their application correctly and scaled their containers based on a misunderstanding of how Linux handles CPU scheduling.
Yeah that shit is more common than people think.
A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.
There was also a large amount of useless bullshit that I’ve needed to cut down since being hired at my current spot, but the number of containers is actually warranted. We do have that traffic, which is both happy and sad: business is booming, but I have to deal with this.
61 containers in 26 docker files.
49. I imagine running all of those bare metal would be hard with the dependencies.
- Because I’m old, crusty, and prefer software deployments in a similar manner.
I salute you and wish you the best in never having a dependency conflict.
I’ve been resolving them since the late 90s, no worries.
I use Debian
My worst dependency conflict was a libcurlssl error when trying to build on a precompiled base docker image.
Isn’t that harder?
It depends a lot on what you want to do and a little on what you’re used to. It’s some configuration overhead so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine to not worry about it until it becomes a clear need.
Agreed. I’m tired after work. Debian/YunoHost is good enough.
At work it’s hundreds of Docker containers, but CI/CD takes care of all that.
Me too!
13 in a Docker LXC; most of my stuff runs on 13 other dedicated LXCs.
Zero. Either it’s just a service with no wrappers, or a full VM.
Why a full VM? That seems like a ton of overhead.
Uh… Probably somewhere around 150?
12 LXCs and 2 VMs on Proxmox. Big fan of managing all the backups with the web UI (it’s very easy to back up to my NAS), and the helper scripts are pretty nice too. Nothing on Docker right now, although I used to have a couple in a Portainer LXC.
None. I run my services the way they are meant to be run. There is no point in containers for a small setup. It’s kinda lazy and you miss out on learning how to install them.
Small setups can very easily turn into large setups without you noticing.
The only bare-metal setup I’d trust to be scalable is Nix flakes (which I’m actually very interested in migrating to at some point).
I’ve never even heard of Nix flakes before today. It looks like another solution in search of a problem. I trust Debian, and I trust bare metal more than any container setup. I run multiple services on one machine. I currently have two machines to run all my services. No problems and no downtime other than a weekly update and reload. All crontabbed, all automatic.
At work I have multiple services all running in KVM, including some Windows domain controllers. Also no problems, and weekly full backups are worry-free, only requiring me to check them for consistency.
In short, as much as people try to push containers, they are only useful if you are dealing with more than a few services. No home setup should be that large unless someone is hosting for others.
I disagree that Nix is a solution in search of a problem, in fact it solves arguably the two biggest problems in software deployment: dependency hell and reproducibility (i.e. the “It works on my machine” problem)
Every package gets access to the exact versions of all the dependencies it needs (without needless duplication like Flatpaks would have), and sharing a flake with another machine means you can replicate that exact setup and guarantee it will be exactly the same.
Containers try to solve the same problems, and succeed to a somewhat decent extent, although with some overhead of course.
I’m not trying to criticize you or your setup at all, if Debian alone works for you, that’s fine. The beauty of open source and self hosting is that we can use whatever tools we want, however we want. I do though think it’s good practice to be aware of what alternatives are out there should our needs change, or should our tools change to no longer align with our needs.
All containers do that. It’s nothing new, just another implementation of the idea with its own opinion about what is best. It only saves resources, in the form of time, if it’s a large-scale operation, and it’s just the latest in a long line of similar solutions.
I still haven’t figured out containers. 🙁
How come? What do you use to run them and what is it you have a hard time with?
I’m using Docker. Tried to set up Jellyfin in one but I couldn’t for the life of me figure out how to get it to work, even following the official documentation. Ended up just running the Jellyfin package from my distro’s repo, which worked fine for me. Also tried running a Tor Snowflake, which worked, but there was some issue with the NAS being restricted and I couldn’t figure out how to fix that. I kinda gave up at that point and saved the whole container thing to figure out another day. I only switched to Linux and started self-hosting last year, so I’m still pretty new to all of this.
If you do decide to look into containers again and get stuck, please make a post; we are glad to help out. A tip for when you’re asking for help: tell us what system you are using and how, e.g. Docker with Compose files, Portainer, or something else. If you’re using Compose, also include the YAML file you are using.
I will definitely try again at some point in the next year, so I will keep that in mind! I appreciate the kind words. A lot of what you said is over my head at the moment though, so I’ve got my work cut out for me. 😅
Docker Compose is really the easiest way to self-host.
Copy a file, usually provided by the developers of the app you want to run, change some values if instructed by the # comments, run `docker compose up`, and it “just works”.
And I say that as someone who has done everything from distro-provided packages to compiling from source, Nix, Podman with systemd, and currently running a full-blown multi-node distributed-storage Kubernetes cluster at home.
Just use docker compose.
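To give an idea, a bare-bones Jellyfin compose file looks something like this (just a sketch, not the official one; adjust the media path and check Jellyfin’s docs for their current recommended file):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"               # web UI
    volumes:
      - ./config:/config           # server settings and database
      - ./cache:/cache
      - /path/to/media:/media:ro   # point this at your own library
    restart: unless-stopped
```

Save it as docker-compose.yml and run docker compose up -d in that folder.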
I’m pretty sure I was at the same point years ago. The good thing is, next time you look into containers it’ll likely be really easy and you’ll wonder where you got stuck a year or two ago.
At least that’s what has happened to me more times than I can remember.
Haha, fingers crossed.
I don’t use them. I’m using OpenBSD on my server, which doesn’t support this feature.
No jails?
That’s a FreeBSD feature.
140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:
- 118 Auto-updates (low chance of breaking updates or non-critical service that only I would notice if it breaks)
- 55 Manual-updates (either it’s family-facing e.g. Jellyfin, or it’s got a high chance of breaking updates, or it updates very infrequently so I want to know when that happens, or it’s something I want to keep particular note of or control over what time it updates e.g. Jellyfin when nobody’s in the middle of watching something)
I subscribe to all their github release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself and I skim those release notes just to keep aware of any surprises. Manual usually has 1-5 releases each day so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.
Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.
I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?
I added the bookmarklet to my bookmarks bar so it’s pretty easy to just navigate to the releases page on github and hit the button. I change the “visibility” setting to “show in its category” so things stay in their lanes rather than all go in a communal main feed but otherwise leave it as default.
I did have to add some filters to the categories so it wouldn’t flag all the -dev/-rc releases but that’s it. The filters that work for me are:
intitle:prototype- intitle:-build-number intitle:rc5 intitle:rc6 intitle:rc7 intitle:rc8 intitle:rc9 intitle:-dev. intitle:Beta intitle:preview- intitle:rc1 intitle:rc2 intitle:rc3 intitle:rc4 intitle:"Release Candidate" intitle:Alpha intitle:-rc intitle:-alpha intitle:-beta intitle:develop- intitle:"Development release" intitle:Pre-Release

I just added this URL for Jellyfin and it “just worked”: https://github.com/jellyfin/jellyfin/releases
if not, adding .rss or .atom should do the trick:
https://github.com/jellyfin/jellyfin/releases.atom https://github.com/jellyfin/jellyfin/releases.rss
Thanks, I’ll look into it. Much appreciated.
All of you bragging about 100+ containers, may I inquire as to what the fuck that’s about? What are you doing with all of those?
Kube makes it easy to have a lot, since anything that needs to run on every node just gets deployed to every node. As odd as it sounds, the number of containers provides redundancy that makes the hobby easy. If a Zimaboard dies or messes up, I just nuke it, and I don’t care what’s on it.
A little of this, a little of that…I may also have a problem… >_>;
The List
Quickstart
- dockersocket
- ddns-updater
- duckdns
- swag
- omada-controller
- netdata
- vaultwarden
- GluetunVPN
- crowdsec
Databases
- postgresql14
- postgresql16
- postgresql17
- Influxdb
- redis
- Valkey
- mariadb
- nextcloud
- Ntfy
- PostgreSQL_Immich
- postgresql17-postgis
- victoria-metrics
- prometheus
- MySQL
- meilisearch
Database Admin
- pgadmin4
- adminer
- Chronograf
- RedisInsight
- mongo-express
- WhoDB
- dbgate
- ChartDB
- CloudBeaver
Database Exporters
- prometheus-qbittorrent-exporter
- prometheus-immich-exporter
- prometheus-postgres-exporter
- Scraparr
Networking Admin
- heimdall
- Dozzle
- Glances
- it-tools
- OpenSpeedTest-HTML5
- Docker-WebUI
- web-check
- networking-toolbox
Legally Acquired Media Display
- plex
- jellyfin
- tautulli
- Jellystat
- ErsatzTV
- posterr
- jellyplex-watched
- jfa-go
- medialytics
- PlexAniSync
- Ampcast
- freshrss
- Jellyfin-Newsletter
- Movie-Roulette
Education
- binhex-qbittorrentvpn
- flaresolverr
- binhex-prowlarr
- sonarr
- radarr
- jellyseerr
- bazarr
- qbit_manage
- autobrr
- cleanuparr
- unpackerr
- binhex-bitmagnet
- omegabrr
Books
- BookLore
- calibre
- Storyteller
Storage
- LubeLogger
- immich
- Manyfold
- Firefly-III
- Firefly-III-Data-Importer
- OpenProject
- Grocy
Archival Storage
- Forgejo
- docmost
- wikijs
- ArchiveTeam-Warrior
- archivebox
- ipfs-kubo
- kiwix-serve
- Linkwarden
Backups
- Duplicacy
- pgbackweb
- db-backup
- bitwarden-export
- UnraidConfigGuardian
- Thunderbird
- Open-Archiver
- mail-archiver
- luckyBackup
Monitoring
- healthchecks
- UptimeKuma
- smokeping
- beszel-agent
- beszel
Metrics
- Unraid-API
- HDDTemp
- telegraf
- Varken
- nut-influxdb-exporter
- DiskSpeed
- scrutiny
- Grafana
- SpeedFlux
Cameras
- amcrest2mqtt
- frigate
- double-take
- shinobipro
HomeAuto
- wyoming-piper
- wyoming-whisper
- apprise-api
- photon
- Dawarich
- Dawarich-Sidekiq
Specific Tasks
- QDirStat
- alternatrr
- gaps
- binhex-krusader
- wrapperr
Other
- Dockwatch
- Foundry
- RickRoll
- Hypermind
Plus a few more that I redacted.
I look at this list and cry a little bit inside. I can’t imagine having to maintain all of this as a hobby.
From a quick glance I can imagine many of those services don’t need much maintenance if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.
In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.
Doing it this way each one gets their own address and I don’t have to worry about port numbers. I can just type http://cars/ (Yes, I know. Not secure. Not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.
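For the curious, the pattern per service looks roughly like this (a sketch, not my exact file; the image name and env vars are from memory, so check the Tailscale and LubeLogger docs):

```yaml
services:
  tailscale-cars:
    image: tailscale/tailscale
    hostname: cars                      # becomes the machine name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx     # your own auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale   # keep login state across restarts
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  lubelogger:
    image: ghcr.io/hargata/lubelogger   # image name may differ; check the project docs
    network_mode: service:tailscale-cars   # share the Tailscale container's network
    depends_on:
      - tailscale-cars
    restart: unless-stopped
```

The service container publishes no ports of its own; everything reaches it through the Tailscale sidecar’s address.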
On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone has something like 5 or 6 containers it spins up, in addition to the master container. Tends to inflate the container numbers.
Ironic that Nextcloud AIO spins up multiple…
deleted by creator
Possibly. I don’t remember that being an option when I was setting things up last time.
From what I’m reading, it sounds like it’s just acting as a slightly simplified DNS server/reverse proxy for individual services on the tailnet. Sounds interesting. I’m not sure it’s something I’d want to use on the backend (what happens if Tailscale goes down? Does that DNS go down too?), but for family members I’ve set up on the tailnet, it sounds like an interesting option.
Much as I like Tailscale, it seems like using this may introduce a few too many failure points that rely on a single provider. Especially one that isn’t charging me anything for what they provide.
Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.
Things and stuff. There is the web front end, the API for the back end, the database, the Redis cache, the MQTT message queues.
And that is just for one of my web crawlers.
/S
100 containers isn’t really a lot. Projects often use 2-3 containers. That’s only something like 30-50 services.
“Only”
13 with podman on openSUSE MicroOS.
I used to have a few more but wasn’t using them enough, so I cut them.