I've recently been working on a Go program: a proxy service serving roughly one to two thousand users online simultaneously.
Performance is satisfactory; on average it consumes one full CPU core and a negligible amount of memory.
However, I've run into a frustrating issue. Occasionally the log floods with errors like this:
cannot accept new connection: too many open files
This is a common issue on web servers: each HTTP connection occupies at least one file descriptor (a proxy typically holds two per user, one for the client side and one for the upstream side), so once the process exhausts its open-file limit, accepting new connections fails with this error.
First, check both the process limit and the system limit.
cat /proc/$pid/limits | grep files
Max open files 4096 4096 files
ulimit -n
4096
Under normal circumstances, the next step is to raise the system-wide limit.
sysctl -w fs.file-max=500000
After this, you can check the new system-wide limit.
cat /proc/sys/fs/file-max
500000
Note that fs.file-max caps open files across the whole kernel; it is separate from the per-process limit that ulimit -n reports, which is raised with ulimit -n 500000 (or persistently via /etc/security/limits.conf).
However, the limit of the already-running process remains unchanged. Due to time constraints, I decided to temporarily raise the running process's limit directly.
prlimit --pid=$pid --nofile=1000000:1000000
Ultimately, the root cause was systemd. Services managed by systemd do not inherit the shell's ulimit settings; they get a default open-file limit, which is usually 4096. You can adjust this in the service's unit file (process.service here).
[Service]
LimitNOFILE=1000000
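If you'd rather not edit the packaged unit file, a drop-in override works too. The commands below assume the unit is named process.service as above; a daemon-reload plus restart is needed before the new limit takes effect:

```shell
# Create a drop-in override instead of editing process.service itself.
sudo mkdir -p /etc/systemd/system/process.service.d
printf '[Service]\nLimitNOFILE=1000000\n' | \
    sudo tee /etc/systemd/system/process.service.d/limits.conf

# Reload unit definitions and restart the service to apply the limit.
sudo systemctl daemon-reload
sudo systemctl restart process.service

# Verify the limit systemd will apply.
systemctl show process.service -p LimitNOFILE
```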
Done.