*BSD News Article 90504


Return to BSD News archive

Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!news.rmit.EDU.AU!goanna.cs.rmit.edu.au!news.apana.org.au!cantor.edge.net.au!news.teragen.com.au!news.access.net.au!news.mel.connect.com.au!munnari.OZ.AU!news.Hawaii.Edu!news.lava.net!coconut!www.nntp.primenet.com!nntp.primenet.com!news.mathworks.com!cam-news-hub1.bbnplanet.com!news.bbnplanet.com!news.idt.net!mr.net!newsfeed.direct.ca!nntp.portal.ca!news.bc.net!rover.ucs.ualberta.ca!gpu3.srv.ualberta.ca!not-for-mail
From: jgg@gpu3.srv.ualberta.ca (J Gunthorpe)
Newsgroups: comp.os.linux.misc,comp.os.linux.networking,comp.os.linux.setup,comp.unix.bsd.bsdi.misc,comp.unix.bsd.misc
Subject: Re: User-space file systems.  (Re: Linux vs BSD)
Followup-To: comp.os.linux.misc,comp.os.linux.networking,comp.os.linux.setup,comp.unix.bsd.bsdi.misc,comp.unix.bsd.misc
Date: 6 Mar 1997 01:43:43 GMT
Organization: University of Alberta
Lines: 82
Message-ID: <5fl7gf$urs@pulp.ucs.ualberta.ca>
References: <5e6qd5$ivq@cynic.portal.ca> <5evsnm$1200@usenet1y.prodigy.net> <5f283t$667@cynic.portal.ca> <5fj9q4$s0i@pulp.ucs.ualberta.ca> <5fjek4$gtm@cynic.portal.ca>
NNTP-Posting-Host: gpu3.srv.ualberta.ca
X-Newsreader: TIN [UNIX 1.3 950824BETA PL0]
Xref: euryale.cc.adfa.oz.au comp.os.linux.misc:163112 comp.os.linux.networking:71069 comp.os.linux.setup:101282 comp.unix.bsd.bsdi.misc:6236 comp.unix.bsd.misc:2736

In article <5fjek4$gtm@cynic.portal.ca> you wrote:
: In article <5fj9q4$s0i@pulp.ucs.ualberta.ca>,
: J Gunthorpe <jgg@gpu4.srv.ualberta.ca> wrote:
: 
: >I think this discussion is quite interesting, but one point that no-one
: >has brought up is quite simply, why does linux have such a slow protection
: >switch time?
: 
: Um...because Linux, like all the other operating systems out there,
: has no way to make a 386 switch protection modes any faster than
: it currently does?

The raw switch time, perhaps, but follow the code path through and I
imagine you'll observe that it's not just a protection switch being
issued. I've seen benchmarks of context switch time (a closely related
operation) run on various Intel systems, and Linux is considerably
slower than the fastest result. Does anyone know how rapidly RT Linux
can do a task/task switch at its RT level?

See:
http://www.cuug.ab.ca:8001/~zimmerm/ts.results.html

Full User-User Switch Times:
QNX 4.2 (P5/133)         1.73 secs, 57.65/millisec  17 microsec/switch
Linux   (P5/120)         5.59 secs, 17.90/millisec  56 microsec/switch

Unfortunately the person running these tests wasn't able to run them on
the same machine :< It does show a LARGE difference between the two OSes.

-- NOTE: I deliberately chose QNX to compare with Linux because it is a
microkernel OS whose authors constantly claim it has an insanely low
context switch time. It is a POSIX-compliant OS, if you're wondering.
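
For the curious, the usual way to measure this is a ping-pong test.
Here's a minimal sketch (NOT the benchmark behind the numbers above,
just the general idea): two processes bounce a byte over a pair of
pipes, so each round trip costs roughly two context switches plus four
system calls.

/* ctxbench.c - rough user-user switch timer.  Bounce a byte between
 * two processes over a pair of pipes and time the round trips.
 * The numbers only mean anything compared on the same machine. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

#define ROUNDS 100000

int main(void)
{
    int p1[2], p2[2], i;
    char tok = 'x';
    struct timeval t0, t1;
    double us;

    if (pipe(p1) < 0 || pipe(p2) < 0) {
        perror("pipe");
        return 1;
    }

    switch (fork()) {
    case -1:
        perror("fork");
        return 1;
    case 0:                             /* child: echo the token back */
        for (i = 0; i < ROUNDS; i++) {
            read(p1[0], &tok, 1);
            write(p2[1], &tok, 1);
        }
        _exit(0);
    default:                            /* parent: drive and time */
        gettimeofday(&t0, NULL);
        for (i = 0; i < ROUNDS; i++) {
            write(p1[1], &tok, 1);
            read(p2[0], &tok, 1);
        }
        gettimeofday(&t1, NULL);
    }

    us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.2f microsec/switch\n", us / (ROUNDS * 2.0));
    return 0;
}

The syscall overhead is folded in, so this overstates the pure switch
cost, but it's the round trip a real server actually pays.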

: Well, how can you minimise the boundary crossings?
:   1. Kernel receives request. Kernel passes request to user process.
:      [ kernel -> user transition ]
:   2. User process asks kernel to read disk block.
:      [ user -> kernel transition ]
:   3. Kernel passes disk block back to user process.
:      [ kernel -> user transition ]
:   4. User process passes response back to kernel.
:      [ user -> kernel transition ]

As a trivial example, consider a new kernel call, 'transfer block from
fd to fd'; then the sequence would be only steps 1 and 2. I can imagine
lots of uses for a function like that, since nearly any server spends
gobs of time reading and writing. This kind of function might not be
practical for Linux, but it illustrates what I mean.
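
To make that concrete, here's a sketch. The fdxfer() call is purely
hypothetical (no kernel I know of has it); the point is the crossing
count next to the loop every user-space server runs today:

#include <unistd.h>

/* Hypothetical kernel call: move count bytes from in_fd to out_fd
 * without the data ever touching user space.  One user->kernel
 * crossing for the whole transfer, and no copies through a buffer. */
extern long fdxfer(int out_fd, int in_fd, unsigned long count);

/* What a user-space server must do now: every block costs two
 * syscalls and two copies (kernel->user on the read, user->kernel
 * on the write). */
long copy_block(int out_fd, int in_fd, unsigned long count)
{
    char buf[8192];
    long total = 0;

    while (count > 0) {
        long len = count < sizeof buf ? (long)count : (long)sizeof buf;
        long n = read(in_fd, buf, len);     /* one syscall, one copy */
        if (n <= 0)
            return n < 0 ? -1 : total;
        if (write(out_fd, buf, n) != n)     /* one syscall, one copy */
            return -1;
        total += n;
        count -= (unsigned long)n;
    }
    return total;
}

An NFS or HTTP server built on something like fdxfer() could spend far
less of its time crossing the protection boundary.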

: into userland.  The latter is obviously not terribly practical,
: except perhaps for a dedicated NFS serving machine.

I'm not sure I see how moving things into userland would help the
situation, unless they were made a single task with the NFS daemon
(even more impractical). To do a user/user switch you still need to
make a kernel call to initiate the switch, and then the kernel must
context switch, so things are no better.

: kernel. But these protocols aren't generally used for that sort of
: thing, and thus don't have the sort of performance issues that an
: NFS server has to deal with. (If you do have that sort of performance
: issue with http, such a solution has been developed: Network
: Appliance boxen now do HTTP as well as NFS.)

What about other filesystems: SMB (Samba), AFS, etc.? They all have the
same needs. If you ask me, there is a big gain in having high-speed
user->kernel->wire->kernel->user transfers, much larger than having a
fast NFS in the kernel.

: As for the `monolithic bloatware kernel,' you already have that,
: according to the microkernel people. On the other hand, you also
: have a faster kernel than the microkernel people do. :-)

I'm not sure about that (I've never seen any benchmarks either way);
there is a lot more to 'speed' than simply where the code is placed. It
is undeniably more complex and slower to write and debug a kernel-level
driver than a user-level daemon. Not all drivers are ideal code, and I
suspect someone who can use good profilers, debuggers, libraries, etc.
on their driver would be able to squeeze out that extra 1% of speed.

Jason