Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!spool.mu.edu!howland.reston.ans.net!cs.utexas.edu!swrinde!elroy.jpl.nasa.gov!decwrl!pa.dec.com!depot.mro.dec.com!enomem.lkg.dec.com!usenet
From: matt@3am-software.com (Matt Thomas)
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Subject: Re: TCP latency
Date: 14 Jul 1996 22:12:43 GMT
Organization: Digital Equipment Corporation
Lines: 36
Sender: thomas@netrix.lkg.dec.com (Matt Thomas)
Message-ID: <4sbrcr$rqd@enomem.lkg.dec.com>
References: <4paedl$4bm@engnews2.eng.sun.com> <4rlf6i$c5f@linux.cs.helsinki.fi> <31DEA3A3.41C67EA6@dyson.iquest.net> <Du681x.2Gy@kroete2.freinet.de> <31DFEB02.41C67EA6@dyson.iquest.net> <4rpdtn$30b@symiserver2.symantec.com> <x7ohlq78wt.fsf@oberon.di.fc.ul.pt> <Pine.LNX.3.91.960709020017.19115I-100000@reflections.mindspring.com> <x74tnfn35s.fsf@oberon.di.fc.ul.pt> <4s33mj$fv2@innocence.interface-business.de>
NNTP-Posting-Host: netrix.lkg.dec.com
X-Newsreader: knews 0.9.3
In-Reply-To: <4s33mj$fv2@innocence.interface-business.de>
To: joerg_wunsch@interface-business.de (Joerg Wunsch), roque@di.fc.ul.pt
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:45201 comp.unix.bsd.netbsd.misc:4044 comp.unix.bsd.freebsd.misc:23565

In article <4s33mj$fv2@innocence.interface-business.de>, j@ida.interface-business.de (J Wunsch) writes:
>Pedro Roque Marques <roque@di.fc.ul.pt> wrote:
>
>> Like I mentioned in a previous post, BSD-style timers tend to be
>> cheaper, but they are less correct than what one would normally
>> desire. One of the finest remarks I've heard on this issue was "if
>> most of the time the retransmit timer won't expire, why set it in
>> the first place?" The tough part is that when the timer does expire,
>> you want it to expire with the greatest precision you can achieve.
>
>Sorry for my ignorance, network code is arguably not my field of
>knowledge. What's the (user-visible) drawback of the current scheme?
>I do know that the BSD TCP code sometimes suffers from poor recovery
>behaviour on lossy lines (packet loss >~ 50 %, as it can often be
>observed on transatlantic or transpacific links). Is this what you
>mean?

There are a number of problems with the way timers are implemented in
TCP. The first is granularity: a slow/fast timeout has an inaccuracy
of up to 500ms/200ms. That can cause inaccuracies to creep into
round-trip time estimates. The second problem is that TCP traverses
all of its PCBs looking for timeouts, even though only a small number
of PCBs typically have timeouts pending. When supporting large numbers
of connections (>20,000), the TCP timeout routines can consume a great
deal of CPU time while actually doing very little.

-- 
Matt Thomas                    Internet:  thomas@lkg.dec.com
U*X Networking                 WWW URL:   http://ftp.dec.com/%7Ethomas/
Digital Equipment Corporation  Disclaimer: This message reflects my
Littleton, MA                  own warped views, etc.