Limits on the number of file descriptors
I'm trying to install 389-ds, and it gives me this warning:
WARNING: There are only 1024 file descriptors (hard limit) available, which limit the number of simultaneous connections.
I understand about file descriptors, but I don't understand about soft and hard limits.
When I run `cat /proc/sys/fs/file-max`, I get back 590432. This should imply that I can open up to 590432 files (i.e. have up to 590432 file descriptors).
But when I run `ulimit`, it gives me different results:

```
$ ulimit
unlimited
$ ulimit -Hn   # Hard limit
4096
$ ulimit -Sn   # Soft limit
1024
```
But what are the hard / soft limits from `ulimit`, and how do they relate to the number stored in /proc/sys/fs/file-max?
According to the kernel documentation,
/proc/sys/file-max is the maximum, total, global number of file descriptors the kernel will allocate before choking. This is the kernel's limit, not your current user's. So you can open 590432, provided you're alone on an idle system (single-user mode, no daemons running).
Note that the documentation is out of date: the file has been /proc/sys/fs/file-max for a long time. Thanks to Martin Jambon for pointing this out.
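As an illustration, you can compare the kernel-wide ceiling with your own shell's per-process limits (a sketch assuming a Linux system; your numbers will differ):

```shell
# Kernel-wide ceiling on open file handles:
cat /proc/sys/fs/file-max

# Per-process limits for the current shell:
ulimit -Sn   # soft limit on open file descriptors
ulimit -Hn   # hard limit on open file descriptors
```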
The difference between soft and hard limits is answered here, on SE. You can raise or lower a soft limit as an ordinary user, provided you don't overstep the hard limit. You can also lower a hard limit (but you can't raise it again for that process). As the superuser, you can raise and lower both hard and soft limits. The dual limit scheme is used to enforce system policies, but also allow ordinary users to set temporary limits for themselves and later change them.
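For example (a sketch assuming bash; the exact numbers depend on your system's configured limits):

```shell
ulimit -Hn        # suppose this prints 4096
ulimit -Sn 2048   # an ordinary user may raise the soft limit up to the hard limit
ulimit -Sn        # now 2048
ulimit -Sn 512    # ...and lower it again freely
ulimit -Hn 2048   # lowering the hard limit also works, but is irreversible
                  # for this process unless you are the superuser
```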
Note that if you try to lower a hard limit below the soft limit (and you're not the superuser), you'll get `EINVAL` back (Invalid Argument).
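You can see this from bash: `setrlimit(2)` rejects any call that would leave the soft limit above the hard limit (a sketch; bash's exact error wording varies by version):

```shell
ulimit -Sn 1024   # soft limit is now 1024 (lowering it is always allowed)
ulimit -Hn 512    # attempt to push the hard limit below the soft limit:
                  # setrlimit(2) fails with EINVAL ("Invalid argument")
```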
So, in your particular case, `ulimit` (which is the same as `ulimit -Sf`) says you don't have a soft limit on the size of files written by the shell and its subprocesses. (That's probably a good idea in most cases.)
Your other invocation, `ulimit -Hn`, reports on the `-n` limit (maximum number of open file descriptors), not the `-f` limit, which is why the soft limit seems higher than the hard limit. If you enter `ulimit -Hf`, you'll also get ‘unlimited’.
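To make the distinction concrete, here are the two resources side by side (a sketch; values are system-dependent):

```shell
ulimit       # no options: same as `ulimit -Sf`, the soft limit on file size
ulimit -Hf   # hard limit on file size (often also unlimited)
ulimit -Sn   # soft limit on open file descriptors: a different resource
ulimit -Hn   # hard limit on open file descriptors
```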
Please, does the hard limit `ulimit -Hn` correspond to the system-wide limit on how many file descriptors can be allocated?
@Webman: no, it doesn't. `ulimit` only affects the limits for the current *process*. The limits of the current process are inherited by its child processes too, but each process has a separate count. E.g. with `ulimit -Hn 10`, you can only have 10 file descriptors open at any one time, and each child process you create can likewise have at most 10 open. Only the superuser may raise a hard limit once it has been lowered. If you set one too low, your only option may be to kill your shell process and start a new one.
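A quick way to see that limits are per-process (a sketch assuming bash; the child inherits the parent's limits but keeps its own copy):

```shell
ulimit -Sn                             # parent's soft limit, e.g. 1024
bash -c 'ulimit -Sn 256; ulimit -Sn'   # child lowers its own copy: prints 256
ulimit -Sn                             # parent is unaffected
```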
The `select` system call is one of the many terrible, brain-dead design decisions of Unix that makes even Windows 95 still look good in comparison. It should have been banned 20 years ago, and then we might by now have the ability to use an unlimited number of file descriptors without problems.
You can increase the number of file descriptors easily with kernel configuration and `ulimit`, BUT remember that if any library uses the `select` system call, your program will become unstable (memory corruption) and fail. `select` can only handle file descriptors from 0 to 1023; if you feed it a higher value, it will poke randomly in your memory, and `select` will never report the descriptor as ready. Unfortunately, many libraries use `select`.
Your comment is a useful warning, but instead of taking a ranting tone, it would have been far more useful to quote the `fd_set(3)` man page and note that the limit comes from `FD_SETSIZE`. Best of all would have been to suggest a replacement call like `poll(3)`, as in this answer.