Typical Unix System Parameters
Copyright(c) Management Analytics, 1995 - All Rights Reserved
Copyright(c), 1990, 1995 Dr. Frederick B. Cohen - All Rights Reserved
Most UNIX systems have modifiable operating system parameters
that allow the systems administrator to control performance to meet
usage requirements. We will now list some typical parameters (taken
from System 5 UNIX), and describe how they might affect performance.
- MAXSLICE is the maximum timeslice for a process in clock
`ticks'. Larger timeslices let each process run longer, so that
more processing gets done between time-consuming process swaps.
Smaller timeslices assure that each process gets activated more
often, which improves response time when each process has a very
small task to do relatively often.
- NINODE is the maximum number of inodes that can be opened at
one time on the system. Each inode is stored in a fixed location
in the operating system memory area, so the more inodes, the less
space is available for user process space. On the other
hand, with too few available inodes, file opens will fail.
Normally, we allow the smallest number of inodes we can get by
with while keeping the number of failures sufficiently low for
normal operation.
- NMOUNT is the maximum number of mounted file-systems at any
one time. Each file-system requires Kernel memory space, and
just like the case for files, we don't want to have too few for
the system configuration. Since the number of mounted
file-systems rarely changes, this parameter can be tuned relatively
easily.
- NPROC is the maximum number of processes that can be
operating on the system at one time. If this number is too low,
users will keep being interrupted, and failures will be
commonplace. If it is too high, the kernel memory consumed by the
process table reduces the space available for users and slows
performance.
- MAXUP is the maximum number of processes that a single
Uid can have on the system at one time. Since UNIX
users can login on multiple terminals at the same time,
this number should be high enough to allow normal processing with
typical usage. If this number is too high, a single user could
dominate the process table, but if it is too low, normal usage may
be overly restricted.
- FLCKREC is the size of a `record' used for the purposes of
file region locking. UNIX allows different
processes to lock different sections of a file
simultaneously, thus permitting very good performance for large
databases accessed at random points. If this number is too small, the
operating system has to work much harder to provide this protection,
while if it is too large, the likelihood of database accesses being
delayed due to a file lock increases.
- NAUTOUP is the delay for cached writes. If the same areas are
being written repeatedly, larger delays save unnecessary writes (writes
that would be overwritten again very soon), while if this number gets
too large, writes are delayed so long that large amounts of data are
still only in the cache if and when the system fails.
- NOFILES is the maximum number of files that can be opened by
a single process at one time. UNIX normally has many
processes per user, and thus a single process rarely needs
very many open files at once.
- ULIMIT is the maximum file size permitted by the operating
system. It is not unusual for a runaway process to create
files with unbounded length. If this constant is too high, these
files can become enormous, while if this constant is too low,
databases, and other programs requiring large files, will be
limited in their utility. Large file sizes are particularly
useful for hashing algorithms, because UNIX systems tend to have
good facilities for sparse files, and hash files tend to be
large but sparsely populated.
There are usually about 50 tunable parameters on a UNIX
system, and we have only touched on a few of them here. We are not
trying to be comprehensive, and parameters differ from system to system
so these won't necessarily apply to your system, but the concepts are
the same. Each parameter impacts performance in a way that depends on
the available physical resources and the system usage patterns. When
combined, these parameters form a complex tradeoff space, but there are
some guiding principles that will help you along the way.
- You almost always end up trading time for space when you tune
parameters. It is helpful to think in terms of which is the scarcer
resource in your environment.
- Change as few parameters at a time as possible. This lets you
observe the effects of tuning each parameter.
- Don't make dramatic changes. They tend to create unusable systems,
and they may skip past optimal performance points rather than help you
reach them.
- Use hill climbing to work towards optimality. Hill climbing
assumes that parameters are independent, which they are not, but tends
to work well in practice.
- Read the manual before making a change. UNIX manuals tell
you a lot about the effects of parameter changes, which parameters
depend on which other parameters, and how to recover from unusable
configurations.
- Be sure to test new configurations under a range of expected
loading conditions to assure that there are no dramatic effects on
performance under special load conditions.
Most UNIX systems provide performance analysis tools to help
understand what is dictating system performance. For example,
performance tools will tell you what percentage of the system time is
spent in paging or swapping, how many file open failures occur as
a result of inode limits, and other information that will help
you find bottlenecks. Once a bottleneck has been identified, it should
be fairly straightforward to find parameters related to the bottleneck
and do simple experiments to find techniques for improvement.
Ultimately, the physical time and space available may simply be
inadequate for your needs. In this case, you may have to add physical
resources to improve performance. For example, if the number of paging
faults is enormous and all related parameters have been adjusted, and
the software is at its best configuration to minimize paging, you may
need to add memory. If this is not possible, you may have to purchase
another computer or change your usage characteristics to get the desired
performance.