Every 15 minutes exactly, in whatever terminal window(s) I have connected to my server, I’m getting these system-wide broadcast messages:
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 00:45:00 PST):
systemd[291622]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 00:45:00 ...
systemd[291622]:Failed to allocate manager object: Too many open files
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:00:01 PST):
systemd[330416]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 01:00:01 ...
systemd[330416]:Failed to allocate manager object: Too many open files
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:15:01 PST):
systemd[367967]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 01:15:01 ...
systemd[367967]:Failed to allocate manager object: Too many open files
The only thing I found online that’s even kind of similar is this forum thread, but this doesn’t seem like an OOM issue. I could totally be wrong about that, but I have plenty of free physical RAM and swap. I have no idea where to even begin troubleshooting this, so any help would be greatly appreciated.
ETA: I’m not even sure this is necessarily a bad thing, but it definitely doesn’t look good, so I’d rather figure out what it is now before it bites me in the ass later.
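ETA 2: for reference, this is roughly how I’ve been trying to see where the open files are going, in case I’m checking the wrong things entirely (commands cobbled together from man pages and other threads, so they may not even be the right things to look at):

    # soft and hard open-file limits for my shell
    ulimit -Sn
    ulimit -Hn

    # system-wide file handles: allocated, free, max
    cat /proc/sys/fs/file-nr

    # systemd's default per-service open-file limit
    systemctl show -p DefaultLimitNOFILE

    # rough open-file count per command name (lsof also lists
    # memory-mapped files and such, so treat these as upper bounds)
    sudo lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head -n 15

No idea yet whether any of these numbers are the ones that actually matter for that systemd error.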


I just reduced my global max connections from infinity to 500 (somehow that increased my upload speed?) and it dropped qBittorrent’s count by a couple thousand, but it’s still in the many thousands. I’m assuming the number on the left is literally just how many files it has open, in which case, how could that ever get below 1024? Not to mention I have many other services that are also above 1024 (see below). In any case, I’m only using 14 GB of memory out of my 32 GB of RAM and 16 GB of swap, so I think it would be fine to increase the limit, but that does worry me a bit.
    18841 python3
    16294 qbittorre
    14064 docker-pr
    11900 Sonarr
     8940 Radarr
     8441 Cleanupar
     8246 Prowlarr
     6130 java
     5836 postgres
     3532 container
     2766 gunicorn
     1980 dockerd

Your number of python file descriptors went up after that change. Have you looked at what python stuff is running? Something isn’t closing files or sockets after it is done with them.
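If you want to narrow down which python process it is, something along these lines should do it (typing this from memory, so double-check the flags; you’ll need root for processes you don’t own):

    # open fd count per python3 pid, biggest first
    for pid in $(pgrep -x python3); do
        printf '%s %s\n' "$pid" "$(sudo ls /proc/$pid/fd 2>/dev/null | wc -l)"
    done | sort -k2 -rn

    # then see what the worst offender actually has open,
    # and ps -fp <pid> will tell you which script/service it is
    sudo lsof -p <pid>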
You can also look at how many network sockets you have open and where they are connecting.
netstat -an will give you a quick look. lsof can help you figure out what is using those ports. If you really need that many connections, there are some TCP tunables you can tweak to help them be more efficient.
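For example, roughly (the port number and exact flags are just illustrations, adjust to your setup):

    # count sockets by TCP state
    ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
    # or the netstat version
    netstat -an | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn

    # who owns the established connections on a given port
    # (51413 here is just an example BitTorrent port)
    sudo lsof -iTCP:51413 -sTCP:ESTABLISHED

    # the sort of tunables I mean (read them first, change with care)
    sysctl net.ipv4.ip_local_port_range
    sysctl net.ipv4.tcp_fin_timeout
    sysctl net.ipv4.tcp_tw_reuse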
I went to sleep last night after posting this and left an SSH connection open to see what it would look like in the morning. When I woke up and checked, I found that, coincidentally, the messages stopped almost exactly when I went to sleep, even though I don’t think I was doing anything that would cause that. I have no idea why it stopped, but it hasn’t started again either.
Network hardware is sensitive to lots of small packets going over many connections; some cheap routers can straight up overheat from that. And especially if your WiFi router doesn’t support full duplex, uploads will compete with downloads for bandwidth, which includes metadata chatter like “hey, how much of that torrent have you got?”
Interesting! I guess that makes sense.