Path: sserve!newshost.anu.edu.au!munnari.oz.au!sgiblab!swrinde!cs.utexas.edu!asuvax!ncar!csn!hellgate.utah.edu!fcom.cc.utah.edu!u.cc.utah.edu!cs.weber.edu!terry
From: terry@cs.weber.edu (A Wizard of Earth C)
Newsgroups: comp.os.386bsd.questions
Subject: Re: How different is VM twixt NetBSD and FreeBSD?
Date: 2 Jan 1994 06:34:46 GMT
Organization: Weber State University, Ogden, UT
Lines: 93
Message-ID: <2g5pu6$n96@u.cc.utah.edu>
References: <2fjgs0$5bt@news.service.uci.edu> <JKH.93Dec26154313@whisker.lotus.ie> <DERAADT.93Dec26153823@newt.fsa.ca>
NNTP-Posting-Host: cs.weber.edu

In article <DERAADT.93Dec26153823@newt.fsa.ca> deraadt@fsa.ca (Theo de Raadt) writes:
>In article <JKH.93Dec26154313@whisker.lotus.ie> jkh@whisker.lotus.ie (Jordan K. Hubbard) writes:
> bob@nemesis.ps.uci.edu (bob prohaska ) writes:
> > How different are the virtual memory system in NetBSD and FreeBSD?
>
> Close to identical.
>
>i disagree. there are numerous different changes and bug fixes.
>
> I think the 4MB decrease is your problem - it's a known thing that gcc2
> doesn't live well with 4MB systems in FreeBSD 1.02 (this has undergone
> significant work in the upcoming FreeBSD 1.1)
>
>and here jordan indicates that the freebsd system *is* different. netbsd
>never had any problems with 4 meg systems (but watch out, things are tight
>in a 2M system :)
>
>so, the correct answer is: the vm systems are based on the same code but
>have numerous other changes made to them. bugs have been fixed in different
>ways.

And different bugs have been fixed. For instance, both VM systems, being MACH derived and thus far unrepaired, suffer from memory overcommit and from using the program file as a swap store; basically, this means that an NFS client can be crashed by activity on an NFS server (arguably a "BAD THING"(tm)). SVR4 suffers from the same problem, as does MACH, so it's not like it's anything new.
I believe that the 4M memory problems have been resolved in FreeBSD IFF you are pulling down "current" sources and recompiling them. This is somewhat of a chore, and it is *very* recent (and there may still be an uncommitted swap pager bug in the current sources -- wait a week). NetBSD does suffer from the 4M problem, actually; it is just very dependent on your chipset; the NetBSD group apparently doesn't tend to buy shitty hardware, but there are some bad cache chipsets out there. Generally, a PCI or a VESA VLB board guarantees a working cache chipset, since it needs one to get itself certified (there are good reasons for not using local bus cards otherwise).

One example known to have problems, because of the way DMA buffers are incompletely handled in both FreeBSD and NetBSD, is:

	UMC UM82C482AF	9302-US NB0368
	UMC UM82C481BF	9303-US NB1861
	UMC UM82C206F	9303-CA NB0978

This is not a downer on the chipset; it simply won't work for NetBSD or FreeBSD, especially with regard to the unpatched VM. SVR4.0 used to have problems with this as well, but the problem went away with a stripped-out kernel (a non-stripped kernel will *always* take more than 4M just to run SVR4) in SVR4.2. There were some early problems with the SiS chipset, but I haven't seen them lately.

Another potential problem is "the bounce buffer problem": basically, if you have a controller card in your machine (like a SCSI controller, for instance) that does device-initiated DMA, *AND* you have more than 16M of memory, *AND* your machine has an ISA bus, then *IF* the memory buffer for a DMA I/O is over 16M, you will blow up. Because of the allocation order of DMA buffers in the kernels, initial I/O buffers are allocated in locore in NetBSD; this means that for about 70% of the cases, the problem "goes away"; however, if you have a user program that does I/O *AND* the page allocated to the program for the I/O is over 16M, then you can see the blowup again (especially if the I/O is caused by a reference to an mmap'ed region -- like a shared library).
So that problem is still in NetBSD. The FreeBSD VM system does not preallocate, so using more than 16M will cause it to blow up (like NetBSD under the circumstances above), except it will blow up all the time. There *IS* a "bounce buffer aware" SCSI driver in alpha right now that is probably a limited fix to the problem (when a more general fix is really what is desirable, since the majority of the cache problems could be resolved at the same time with little more effort if a general mechanism were implemented).

Just to establish my impartiality, I haven't been running either group's VM system since the end of May, 1993... I've been running my own (which has its own problems, chief among them that it can't be distributed without a lot of work -- and maybe not even then).

					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.