Path: sserve!manuel!munnari.oz.au!network.ucsd.edu!swrinde!mips!newsun!gateway.novell.com!terry
From: terry@npd.Novell.COM (Terry Lambert)
Newsgroups: comp.unix.bsd
Subject: Re: Take out fs cache!
Message-ID: <1992Jul14.164209.14031@gateway.novell.com>
Date: 14 Jul 92 16:42:09 GMT
References: <1992Jul9.141010.2324@ntuix.ntu.ac.sg> <wutcd.710795614@hrz.tu-chemnitz.de> <l640uuINN4cj@neuro.usc.edu>
Sender: news@gateway.novell.com (NetNews)
Organization: Novell NPD -- Sandy, UT
Lines: 56
Nntp-Posting-Host: thisbe.eng.sandy.novell.com

In article <l640uuINN4cj@neuro.usc.edu> merlin@neuro.usc.edu (merlin) writes:
>How about implementing an optional configurable file system cache?
>
>In some environments (e.g. end user systems target to support more
>or less one user running a dedicated application which requires a
>unix environment [unix utilities, unix system calls, gcc, TCP/IP,
>RPC, NFS, X11R4/R5]) it might make sense to completely disable the
>buffer cache so power failures don't generate extensive filesystem
>cleaning problems.  In other environments a static cache would be
>appropriate.  In yet other environments perhaps a dynamic cache.

This is what O_WRITESYNC does; it just does it on a per-application
basis, so you can run both kinds of apps on the same box at the same
time instead of committing all of the apps to be one or the other
(see the sketch at the end of this post).

>Would eliminating the file system cache in the presence of a smart
>disk controller (Adaptec 1542b SCSI) create horrendous performance
>problems in the single user, foreground/background multiapplication
>environment?  Or would it achieve the desirable end of not messing
>up the filesystem (which consequently requires cleaning) when power
>is either accidentally or intentionally removed before more formal
>shutdown processes are executed?

Anything in a cache can be lost when the power goes, regardless of
whether that cache is located in main memory or in the subsidiary
memory of some controller; when the power goes, it's gone.

The disadvantage any "smart controller" cache has compared to a main
memory cache is that the data has to be moved over the generally
slower device bus the controller lives on.  Thus if I have a 33 MHz
386 with a local memory bus (clocking at 33 MHz), a main memory cache
is 3 to 4 times faster than a controller cache that has to be
accessed over a 12 MHz or 8 MHz EISA/ISA bus (33/12 ~= 2.8, 33/8 ~=
4.1).  If I get a lot of cache hits (and if I never get any, why have
a cache?), then I spend far less time getting the information from
local memory than from device memory.

The one advantage I see to device (controller) based caching is the
ability to stream data using predictive read-ahead.  This rarely
happens, and when it does, it's usually the result of a backup or
large copy operation, which has to hit the device bus twice anyway
(thus dividing the apparent transfer rate by 2.x, where x represents
fractional bus contention overhead).  So this is basically zero sum.

If you were to implement system calls as bus instructions, and your
controller were aware of the requests it was satisfying, then device
caching could possibly be a win; however, you would pay a stiff
penalty for pseudo-devices in doing this.

					Terry Lambert
					terry_lambert@gateway.novell.com
					terry@icarus.weber.edu
---
Disclaimer:  Any opinions in this posting are my own and not those of
my present or previous employers.
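
[Sketch referred to above -- a minimal illustration of per-application
synchronous writes.  It assumes the POSIX O_SYNC spelling of the open(2)
flag; the O_WRITESYNC name in the post appears as O_FSYNC or O_SYNC on
various systems of the period, and the file name "journal.dat" is just an
example, not anything from the original discussion.]

    /*
     * Open a file for synchronous writes: the kernel pushes each
     * write() through the buffer cache to the device before the call
     * returns, so only this process pays the synchronous-I/O cost
     * while other processes keep normal (asynchronous) buffer-cache
     * behavior.
     */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const char *msg = "committed before write() returns\n";
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, strlen(msg)) < 0) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }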