Path: sserve!newshost.anu.edu.au!munnari.oz.au!yarrina.connect.com.au!harbinger.cc.monash.edu.au!msuinfo!agate!howland.reston.ans.net!europa.eng.gtefsd.com!MathWorks.Com!news2.near.net!news.delphi.com!usenet
From: John Dyson <dysonj@delphi.com>
Newsgroups: comp.os.386bsd.questions
Subject: Re: Info on NetBSD & FreeBSD
Date: Mon, 25 Jul 94 13:20:54 -0500
Organization: Delphi (info@delphi.com email, 800-695-4005 voice)
Lines: 48
Message-ID: <5O+QD4e.dysonj@delphi.com>
References: <30ur2j$ege@wsiserv.informatik.uni-tuebingen.de> <hkyT78Z.dysonj@delphi.com> <michaelv.775126252@ponderous.cc.iastate.edu>
NNTP-Posting-Host: bos1f.delphi.com
X-To: Michael L. VanLoon <michaelv@iastate.edu>

Michael L. VanLoon <michaelv@iastate.edu> writes:

>demand paging and copy-on-write.  Demand paging means that text pages
>are faulted into memory only as the program tries to execute them.
>This means that if a certain section of code lies completely over a
>page, and never gets executed, that page will never enter memory (by
>executing the program).  Likewise, if a certain bit of code were
>resident completely in certain pages, and this code didn't get
>executed until much later in the program, these pages would not get
>loaded at program startup, but only later when they were actually
>accessed.  Page faulting in a text page is based on the inode number,

FreeBSD has some enhancements to fault in additional pages in the
region of the faulting page.  This *greatly* improves program startup,
where there is a flurry of page faults.  Unlike disk-clustering-based
read-ahead, the FreeBSD clustering also reads pages prior to the
faulting page, which makes the page-fault clustering much more
effective.  Most OSes try to do this in order to minimize disk seeks.
The downside is that sometimes unnecessary I/O is done, but all in all
it is a net win if done intelligently.

In FreeBSD V2.0, all disk I/O will be performed through the VM I/O
methods (in fact, they will be new methods).  The read/write-based I/O
will be special-cased in the VM system for read-aheads, etc.  Current
studies show no decrease in performance, and the change will provide
mmap & I/O consistency.  The issue of sharing memory between the disk
cache and the VM system has essentially been resolved.

The reason I am talking about this is to show that the VM system
(properly implemented) can be leveraged against many problems.  It
will be nice to eliminate one of the two methods for disk I/O.  We
might not actually *fully* implement the new stuff for V2.0, but it
will have the cache consistency, and V2.1 will probably not even have
a vfs_bio (or anything directly equivalent, except for the minimum to
support the allocation of buffers for driver stuff only -- no FS
stuff).

>accessed.  Page faulting in a text page is based on the inode number,
>device number, and index into the file, so there is one, and only one,
>of each page of each file in memory, no matter how many processes have
>references to the page.

Not only are text pages faulted in that way, but .data pages are too.
.data pages are copy-on-write, and I believe that the page read
directly from disk sticks around, available for the next process
invocation.  (I'll have to refer to the code -- but I believe that it
*must* work that way, until I am corrected or I correct myself :-)).
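
A minimal user-space sketch of the demand-paging behavior quoted
above, assuming a POSIX-style system with mmap() and mincore() (the
vector argument is char * on BSD, unsigned char * on Linux).  Note
that a page can already show as resident before being touched if the
file is still cached from an earlier run:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
        struct stat st;
        int fd;

        if (argc != 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
            return 1;

        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        size_t npg = ((size_t)st.st_size + pgsz - 1) / pgsz;
        char *base = mmap(NULL, (size_t)st.st_size, PROT_READ,
                          MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED)
            return 1;

        char *vec = malloc(npg);
        mincore(base, (size_t)st.st_size, vec);  /* before any access */
        printf("page 0 resident before touch: %d\n", vec[0] & 1);

        volatile char c = base[0];               /* fault in page 0 only */
        (void)c;

        mincore(base, (size_t)st.st_size, vec);  /* after the fault */
        printf("page 0 resident after touch:  %d\n", vec[0] & 1);
        free(vec);
        return 0;
    }

Run it twice against a large file: on the second run the "before"
residency often flips to 1 because the page is still cached.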
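
The fault-time clustering described in the reply can be sketched as a
simple window computation around the faulting page.  This is only an
illustration of the idea, not the actual FreeBSD fault code; the names
cluster_window, FAULT_BEHIND, and FAULT_AHEAD are invented here:

    #include <stddef.h>

    #define FAULT_BEHIND 4          /* pages to read before the fault */
    #define FAULT_AHEAD  8          /* pages to read after the fault  */

    /*
     * Given the faulting page index and the object's size in pages,
     * compute a contiguous window [*first, *last] extending both
     * behind and ahead of the fault, clipped to the object.  The
     * caller would issue one clustered read for the window, skipping
     * pages that are already resident.
     */
    static void
    cluster_window(size_t fault_pg, size_t obj_pages,
                   size_t *first, size_t *last)
    {
        *first = (fault_pg > FAULT_BEHIND) ? fault_pg - FAULT_BEHIND : 0;
        *last = fault_pg + FAULT_AHEAD;
        if (*last >= obj_pages)
            *last = obj_pages - 1;
    }

Reading behind the fault is what distinguishes this from ordinary
sequential read-ahead: at program startup the faults jump around, and
a window that only extends forward would miss most of the nearby pages.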
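
The mmap & I/O consistency being described is easy to test.  On a
unified cache, a store through a MAP_SHARED mapping is visible to a
following read() of the same file because both paths use the same
physical page; on a system with a separate buffer cache, the read()
can return stale data unless msync() is called first, which is exactly
the problem the new VM I/O methods address.  A small sketch, with the
file name as a placeholder:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("scratchfile", O_RDWR | O_CREAT | O_TRUNC, 0600);
        char buf[3];

        if (fd < 0 || write(fd, "old", 3) != 3)
            return 1;

        char *p = mmap(NULL, 3, PROT_READ | PROT_WRITE, MAP_SHARED,
                       fd, 0);
        if (p == MAP_FAILED)
            return 1;
        p[0] = 'O';                  /* store through the mapping */

        pread(fd, buf, 3, 0);        /* does the read() path see it? */
        printf("%.3s\n", buf);       /* "Old" on a unified cache */

        munmap(p, 3);
        close(fd);
        return 0;
    }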
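
The quoted point about page identity -- one copy per (device, inode,
offset) triple, no matter how many processes map the file -- amounts
to saying the page cache is keyed on that triple.  A purely
illustrative sketch of such a key (the names are invented, not kernel
code):

    #include <stdbool.h>
    #include <stdint.h>

    struct page_key {
        uint32_t dev;               /* device number            */
        uint32_t ino;               /* inode number             */
        uint64_t off;               /* page-aligned file offset */
    };

    /* Two faults on the same triple resolve to the same physical
     * page, so each file page exists in memory at most once. */
    static bool
    page_key_equal(const struct page_key *a, const struct page_key *b)
    {
        return a->dev == b->dev && a->ino == b->ino && a->off == b->off;
    }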
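
The copy-on-write behavior of .data pages can also be watched from
user space: after a fork(), parent and child share the .data page
until one of them writes, at which point the writer is given a private
copy.  A minimal sketch:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 42;               /* initialized data: a .data page */

    int
    main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            counter = 99;           /* write fault: child gets a copy */
            printf("child:  %d\n", counter);    /* prints 99 */
            _exit(0);
        }
        wait(NULL);
        printf("parent: %d\n", counter);        /* still 42 */
        return 0;
    }

If the post's reading of the code is right, the clean copy read from
disk is exactly such a (device, inode, offset) page, which is why it
can stay cached for the next invocation of the program.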