Xref: sserve comp.os.386bsd.misc:2872 comp.os.linux.misc:20446
Newsgroups: comp.os.386bsd.misc,comp.os.linux.misc
Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!msuinfo!agate!howland.reston.ans.net!EU.net!sun4nl!cs.vu.nl!philip
From: philip@cs.vu.nl (Philip Homburg)
Subject: Re: I hope this wont ignite a major flame war, but Ive got to know!
Message-ID: <CtnLDs.6zG@cs.vu.nl>
Sender: news@cs.vu.nl
Organization: Fac. Wiskunde & Informatica, VU, Amsterdam
References: <CtKBJ5.77B@rex.uokhsc.edu> <3163r7$440@quagga.ru.ac.za> <CtMnq1.C8@rex.uokhsc.edu>
Date: Thu, 28 Jul 1994 13:56:16 GMT
Lines: 23

In article <CtMnq1.C8@rex.uokhsc.edu> benjamin-goldsteen@uokhsc.edu writes:
%csgr@cs.ru.ac.za (Geoff Rehmet) writes:
%Actually, though, I think the fact that Linux doesn't base their TCP/IP
%code on BSD is a good thing for TCP/IP. Rewriting something from
%scratch based on the standards documents is a good way to find bugs and
%imprecision.

What I remember from my TCP implementation for Minix is that the URG
behaviour was the most annoying part. I think I had another problem with
zero window probes, but that's basically it for unexpected differences.
A well-known property of BSD code is of course the use of x.x.x.0 as
a broadcast address. But I have to admit, I grew up in a SunOS (BSD)
environment.

Now if somebody without any IP experience were to write a TCP/IP
implementation, that would be interesting (he might end up implementing
802.2 instead of Ethernet).

My decision not to use the socket interface is far more problematic...
(for porting existing code)

Philip Homburg
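
[Editorial note on the x.x.x.0 remark above: 4.2BSD-derived stacks
historically treated a host part of all zeros as the broadcast address,
while the standard form (RFC 919, later reaffirmed in RFC 1122) uses a
host part of all ones. A minimal C sketch of the two forms follows; the
address and netmask are placeholder values, and the program is only an
illustration, not code from any of the stacks discussed in the thread.]

    #include <stdio.h>
    #include <arpa/inet.h>

    /* Illustrative only: derive both the standards-conformant (all-ones)
     * and the old 4.2BSD-style (all-zeros) broadcast address for the
     * network a host sits on. */
    int main(void)
    {
        struct in_addr addr, mask, bcast_ones, bcast_zeros;

        inet_aton("192.0.2.17", &addr);      /* some host on the network */
        inet_aton("255.255.255.0", &mask);   /* its netmask */

        /* RFC 919/1122 style: host bits all ones, e.g. 192.0.2.255 */
        bcast_ones.s_addr = addr.s_addr | ~mask.s_addr;

        /* old 4.2BSD style: host bits all zeros, e.g. 192.0.2.0 */
        bcast_zeros.s_addr = addr.s_addr & mask.s_addr;

        printf("standard broadcast: %s\n", inet_ntoa(bcast_ones));
        printf("old BSD broadcast:  %s\n", inet_ntoa(bcast_zeros));
        return 0;
    }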
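
[Editorial note on the socket-interface remark: most existing networking
code is written directly against the BSD socket calls, so a stack that
exposes a different interface forces each of those calls to be mapped to
something else when the code is ported. The sketch below shows the calls
such code typically assumes, using plain BSD sockets; nothing in it is
Minix-specific, and the address and port are made-up placeholders.]

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Sketch of the BSD socket calls that typical portable code relies on. */
    int main(void)
    {
        struct sockaddr_in sin;
        char buf[512];
        int s, n;

        s = socket(AF_INET, SOCK_STREAM, 0);          /* create a TCP socket */
        if (s < 0) { perror("socket"); return 1; }

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(7);                      /* echo port, for example */
        sin.sin_addr.s_addr = inet_addr("192.0.2.1"); /* placeholder address */

        if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("connect");
            return 1;
        }

        write(s, "hello\n", 6);                       /* send some data */
        n = read(s, buf, sizeof(buf));                /* and read the reply */
        if (n > 0)
            fwrite(buf, 1, n, stdout);

        close(s);
        return 0;
    }

Code written this way moves between BSD-derived systems almost unchanged;
against a non-socket interface, every one of these calls has to be
translated, which is presumably the porting problem referred to above.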