Path: sserve!manuel.anu.edu.au!munnari.oz.au!spool.mu.edu!agate!dog.ee.lbl.gov!horse.ee.lbl.gov!torek
From: torek@horse.ee.lbl.gov (Chris Torek)
Newsgroups: comp.unix.bsd
Subject: cache terms (was Adding Swapspace ??)
Date: 25 Oct 1992 14:33:14 GMT
Organization: Lawrence Berkeley Laboratory, Berkeley
Lines: 59
Message-ID: <26965@dog.ee.lbl.gov>
References: <Bw7H4L.LLB@cosy.sbg.ac.at> <1992Oct16.162729.3701@ninja.zso.dec.com> <1992Oct16.201806.21519@fcom.cc.utah.edu> <Bw8Mw5.IFC@pix.com> <1992Oct18.082017.22382@fcom.cc.utah.edu> <BwLLxp.7Bt@flatlin.ka.sub.org> <1992Oct25.111525.25782@fcom.cc.utah.edu>
Reply-To: torek@horse.ee.lbl.gov (Chris Torek)
NNTP-Posting-Host: 128.3.112.15

In <1992Oct18.082017.22382@fcom.cc.utah.edu> terry@cs.weber.edu (A Wizard of Earth C) claimed:
>>>the write to the disk is done through a write-through cache.

In article <BwLLxp.7Bt@flatlin.ka.sub.org> bad@flatlin.ka.sub.org (Christoph Badura) pointed out:
>>The UNIX FS buffer cache has since its invention been write-behind
>>and not write-through.

In article <1992Oct25.111525.25782@fcom.cc.utah.edu> terry@cs.weber.edu (A Wizard of Earth C) writes:
>I tend to use these terms synonymously.  When can a cache be write through
>but not write behind?

The various terms for describing caches are pretty standard.  In hardware, a `write through' cache is one where each write updates both the cache and main memory `simultaneously'.  In contrast, in a `write back' cache, writes update only the cache line; main memory is updated only when the line is kicked out, either by an explicit cache flush or by replacement with new contents.

In the Unix kernel, the buffer cache code simulates a hardware `write back' cache.  All else being equal, write-back caches are usually more efficient than write-through.  (All else is rarely equal.)  In this case, a cache `flush' occurs only on sync() or fsync() calls or, in some systems, through timers.  Replacement occurs when a buffer is reused.
The BSD kernel does not, however, use a strict write-back policy.  Instead, whenever it seems important for consistency (directory operations and indirect blocks), and/or whenever it seems likely that a block will not be rewritten soon, the kernel uses a synchronous bwrite() call or an asynchronous but immediate bawrite() call.  More detail can be found in the Bach and BSD books.

>Just curious as to why you draw such a sharp distinction, the point being
>that there is negligible overhead in a cached writes for swap no matter
>how you slice the pie.

This is not really true, since swapping/paging occurs mainly when the machine is low on memory.  This tends to coincide with the machine being `active', which implies that every bit of overhead counts.  With unified VM/buffer caches, the effect is even worse: `heavy paging' and `overloaded buffers' can become completely synonymous without some sort of policy to prevent the buffer cache from taking over all of physical memory.  (Current BSD systems have an enforced limit on buffer cache size, namely `bufpages' in machdep.c.)

This is the core of the idea behind `dribble' buffer write policies (the timers mentioned above): the machine can best afford the writes when it is not busy doing other stuff.  At the time a write occurs, it is busy (obviously so: someone is busy writing).  If the write is merely cached, a huge queue can build up, and then when demand increases *everyone* will have to wait.  A `dribble-back' cache avoids all of this, but requires extra mechanism and trades off total throughput for decreased latency.  Systems with big queues tend to have greater overall throughput.
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 510 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov