Xref: sserve comp.os.linux:48376 comp.os.386bsd.questions:3871 comp.sys.ibm.pc.hardware:60754 comp.windows.x.i386unix:2560
Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!haven.umd.edu!uunet!mcsun!sun4nl!tuegate.tue.nl!viper.es.ele.tue.nl!woensel.es.ele.tue.nl!woensel.es.ele.tue.nl!raymond
From: raymond@woensel.es.ele.tue.nl (Raymond Nijssen)
Newsgroups: comp.os.linux,comp.os.386bsd.questions,comp.sys.ibm.pc.hardware,comp.windows.x.i386unix
Subject: Re: SUMMARY: 486DX2/66 for Unix conclusions (fairly long)
Date: 15 Jul 1993 23:12:21 GMT
Organization: Eindhoven University of Technology
Lines: 92
Message-ID: <RAYMOND.93Jul15231221@woensel.es.ele.tue.nl>
References: <CA3pv5.56D@implode.rain.com> <PCG.93Jul13210635@decb.aber.ac.uk>
	<michaelv.742625634@ponderous.cc.iastate.edu>
	<CA62J8.7Fs@ra.nrl.navy.mil>
NNTP-Posting-Host: woensel.es.ele.tue.nl
In-reply-to: eric@tantalus.nrl.navy.mil's message of Wed, 14 Jul 1993 18:11:31 GMT
In article <CA62J8.7Fs@ra.nrl.navy.mil> eric@tantalus.nrl.navy.mil (Eric Youngdale) writes:
>In article <michaelv.742625634@ponderous.cc.iastate.edu> michaelv@iastate.edu (Michael L. VanLoon) writes:
>>4.3BSD *pages* when system load is light. This means it takes from a
>>[...]
>>If system load is very heavy, however, paging would take more time
>>than actually running processes, so the system *swaps*. Swapping
>>[...]
>	Thanks for the explanation. As has been pointed out before, linux does
>not swap in the traditional sense - since the linux memory manager is not
>written to be a clone of some other memory manager, things are different in a
>number of ways from a classical memory manager. One major difference is that
I don't think there's anything new with the linux memory manager, really.
>there is no minimum number of pages that the memory manager wants to keep in
>memory for each process. This means that a sleeping daemon can in fact have
>all of its pages removed from memory (the kernel stack page and the
>upage are not removed from memory as apparently some other schemes allow).
Nothing new here. BTW: there are good reasons for keeping a minimum number of
pages in core for a process.
>	When the linux kernel needs memory, it goes through and looks for pages
>that have not been accessed recently. If the page is clean, this means that
>instead of writing it to the swap file, it can simply be reused immediately.
>Linux demand loads binaries and shared libraries, and the idea is that any
>clean page can simply be reloaded by demand loading instead of pulling it from
>a swap file. Thus it tends to be only dirty pages that make their way into the
>swap files, but it also means that the kernel can free up some memory by
>reusing some code pages without ever having to write them out to disk.
What's new about this scheme? I know of no modern unix not using it. I think
even OS/2 uses it.
>	Linux tends to share pages whenever possible. For example, all
>processes running emacs will share clean pages for both the emacs binary and
>sharable libraries (these pages are also shared with the buffer cache). This
Nothing new here, either.
>means that swapping out a process that is running the same binary as some other
This is irrelevant. Remember that swapping is not limited to the text
segment. Generally, the data segment is many orders of magnitude larger.
>process gains very little since much of the actual memory cannot be freed.
Not true, unless the system uses a stupid swapping algorithm.
>	Paging still works well in this scheme, because it is still easy to find
>out which pages have not been used recently by a particular process, and we can
>easily remove unused pages from the page tables for processes on the
>system. Once the usage count for a particular page goes to 0 (i.e. not in
>anyone's page tables, and not in the buffer cache), we can reclaim the page
>entirely to be used for something else.
Again, nothing new.
>	I guess the way I see it, the only advantage of swapping is that you
>are effectively keeping particular processes out of memory longer than would
>otherwise be the case, which tends to reduce thrashing. The only time when the
Swapping works adequately if you have many non-interactive processes with
laaarge data sets.
>linux approach breaks down is when you have too many computable processes
>fighting for access to memory, and in principle the linux scheduler could be
>modified to temporarily lower the priority of some of these processes and
>ultimately achieve the same result with paging. With the current kernel, any
This approach would definitely lead to a lower throughput, maybe even to
starvation of large processes.
And if the processes grow even larger, you might even consider process queues.
Note that paging is expensive, regardless of the OS you're talking about.
Jobs fighting for resources causes overhead. The amount of overhead may even
exceed the amount of actual work done.
>idle processes will always be "swapped" via paging as it is, so it is not that
>clear that this needs to be done.
If you're not running any *heavy* jobs on your system, you may live perfectly
well without swapping. You won't even miss it.
-Raymond
--
Raymond X.T. Nijssen raymond@woensel.es.ele.tue.nl
Ceterum censeo statuam "Bomber" Harris esse delendam.