Every 15 minutes exactly, in whatever terminal window(s) I have connected to my server, I’m getting these system-wide broadcast messages:
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 00:45:00 PST):
systemd[291622]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 00:45:00 ...
systemd[291622]: Failed to allocate manager object: Too many open files
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:00:01 PST):
systemd[330416]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 01:00:01 ...
systemd[330416]: Failed to allocate manager object: Too many open files
Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:15:01 PST):
systemd[367967]: Failed to allocate manager object: Too many open files
Message from syslogd@localhost at Feb 15 01:15:01 ...
systemd[367967]: Failed to allocate manager object: Too many open files
The only thing I found online that’s kind of similar is this forum thread, but it doesn’t seem like this is an OOM issue. I could totally be wrong about that, but I have plenty of available physical RAM and swap. I have no idea where to even begin troubleshooting this, so any help would be greatly appreciated.
ETA: I’m not even sure this is necessarily a bad thing, but it definitely doesn’t look good, so I’d rather figure out what it is now before it bites me in the ass later.


If it’s happening every 15 minutes, it’s probably a systemd timer trying to kick off a unit on a schedule. Check for .timer files in your system and user systemd configuration and see if there are any configured to run every 15 minutes.
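For example, assuming systemctl is available, you can list every timer and its schedule for both the system and your user session (<name> below is a placeholder for whatever timer you find):

    systemctl list-timers --all          # system timers, with last/next trigger times
    systemctl --user list-timers --all   # timers belonging to your user session
    systemctl cat <name>.timer           # show the OnCalendar=/OnUnitActiveSec= schedule of one timer

Anything triggering on a 15-minute interval is a good suspect.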
Whatever process is trying to start is probably exceeding the open-files ulimit. ulimits can be set system-wide, per user, and per service (systemd units carry their own LimitNOFILE= setting).
The ulimit may be too low, there may be some process leaking file handles (opening files periodically but never closing them), or the unit might be configured to run under the wrong user or cgroup.
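To see where the limit is actually being hit, a few things are worth checking (the <PID> and <unit> below are placeholders you would fill in):

    ulimit -n                              # soft open-files limit for your current shell
    grep 'open files' /proc/<PID>/limits   # limits actually applied to a running process
    systemctl show <unit> -p LimitNOFILE   # limit configured on a specific systemd unit
    cat /proc/sys/fs/file-nr               # allocated vs. maximum file handles system-wide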
If a reboot gets rid of the problem temporarily, it’s most likely a file handle leak. Remember that objects like network sockets also count as files for the purposes of the open files ulimit.
A tool like lsof can help you track down processes with a lot of open file handles.
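For example, a rough way to rank processes by how many handles they have open, assuming lsof is installed (counting entries under /proc/<PID>/fd also works and includes sockets):

    # count lsof entries per command/PID pair, highest first (rough, since mapped files etc. are included)
    lsof -n | awk '{print $1, $2}' | sort | uniq -c | sort -rn | head
    # or, for one specific process, count its open file descriptors directly
    ls /proc/<PID>/fd | wc -l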
I went to sleep last night after posting this and left an SSH connection open to see what it looked like in the morning. When I woke up and checked, the messages had stopped, coincidentally almost exactly when I went to sleep, even though I don’t think I did anything that would have caused that. I have no idea why it stopped, but it hasn’t started again either.