*BSD News Article 34086


Xref: sserve comp.os.386bsd.questions:12220 comp.os.386bsd.misc:3106
Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!yeshua.marcam.com!MathWorks.Com!panix!berke
From: berke@panix.com (Wayne Berke)
Newsgroups: comp.os.386bsd.questions,comp.os.386bsd.misc
Subject: Re: Whats wrong with Linux networking ???
Date: 9 Aug 1994 17:51:30 GMT
Organization: Nalgame Dios, Ltd.
Lines: 18
Message-ID: <328fn2$i9p@news.panix.com>
References: <Cu107E.Mz3@curia.ucc.ie> <31trcr$9n@euterpe.owl.de> <3256t1$rbn@ra.nrl.navy.mil> <327nj0$sfq@sundog.tiac.net>
NNTP-Posting-Host: berke.dialup.access.net
X-Newsreader: NN version 6.5.0 #1

bill@bhhome.ci.net (Bill Heiser) writes:

>cmetz@sundance.itd.nrl.navy.mil (Craig Metz) writes:

>>>- NFS was *slooow*
>>	No arguing this one. Linux's NFS is still in need of serious work.

>The *speed* of LINUX NFS isn't the real problem.  The reason it's so slow
>is that by default it uses a 1K blocksize.  You can increase the rsize and
>wsize to 8K, like Sun, and the performance improves dramatically.
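[For reference, the tuning Bill describes is done with the standard NFS
rsize/wsize mount options; a hedged sketch (server name and mount point
are made up):]

```shell
# Hypothetical example: mount an NFS export with 8K read/write sizes
# instead of the 1K default (server:/export and /mnt are placeholders)
mount -o rsize=8192,wsize=8192 server:/export /mnt
```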

Could you explain why this should be the case?  Since 8K blocks will typically
(e.g. on an Ethernet) be fragmented by IP down to ~1.5K packets, why should
these bigger blocks be an advantage?  If anything, I would suspect that
reassembly and retransmission costs would make the sub-MTU packets better.
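[To make the arithmetic concrete, a back-of-the-envelope sketch, assuming
UDP transport over Ethernet with the standard 1500-byte MTU and 20/8-byte
IP/UDP headers -- the numbers are illustrative, not measured:]

```python
# How many IP fragments does one NFS block need on the wire?
import math

MTU = 1500      # Ethernet MTU in bytes
IP_HDR = 20     # minimal IPv4 header
UDP_HDR = 8     # UDP header

def fragments(block_size):
    """IP fragments needed for a UDP datagram carrying block_size bytes."""
    datagram = block_size + UDP_HDR   # UDP header + NFS block
    per_frag = MTU - IP_HDR           # 1480 bytes of IP payload per fragment
    return math.ceil(datagram / per_frag)

print(fragments(1024))   # 1 fragment  -> no reassembly needed
print(fragments(8192))   # 6 fragments -> losing any one costs the whole 8K
```

So an 8K block rides in six fragments, and one dropped fragment forces the
whole 8K to be retransmitted, which is the reassembly/retransmission cost
in question.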
--
Wayne Berke
berke@panix.com