Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!simtel!daffy!uwvax!uchinews!vixen.cso.uiuc.edu!howland.reston.ans.net!news.cac.psu.edu!news.pop.psu.edu!hudson.lm.com!netline-fddi.jpl.nasa.gov!nntp.et.byu.edu!cwis.isu.edu!news.cc.utah.edu!cs.weber.edu!terry
From: terry@cs.weber.edu (Terry Lambert)
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: Can FreeBSD execute programs in the disk cache?
Date: 16 Jun 1995 21:43:58 GMT
Organization: Weber State University, Ogden, UT
Lines: 101
Message-ID: <3rstuu$2pi@news.cc.utah.edu>
References: <3rpu4v$28q@park.uvsc.edu> <3rqa0t$c6h@marina.cinenet.net> <3rr4bl$ho2@shell1.best.com>
NNTP-Posting-Host: cs.weber.edu
dillon@best.com (Matt Dillon) wrote:
] The answer to the original question, without making too many
] wording jokes about it, is simply:
]
] YES
]
] The more sophisticated answer is: YES, but the amount of caching
] depends on how heavily you are using the machine's memory (to run
] other programs, data file accesses, etc...)
Despite your dislike for my "wording jokes", the answer is that
the question is at best a non sequitur and at worst too non-specific
to answer.
"Is it shorter to New York or by bus? A simple yes or no will do".
Let us assume that the proper term was used, and that the question
itself wasn't as badly constructed as its lack of elaboration. So
on to providing an answer for each of the possible meanings of
"disk cache":
1) "disk cache" == the in core images of the on disk blocks
as maintained by the operating system's VM system.
Simple: YES
Elaborate: Yes, this is how a virtual memory system
operates.
2) "disk cache" == the memory on the disk controller that
has been mapped into the system address space.
Simple: NO
		Elaborate:	No. This would be a serious waste of
				resources. The purpose of such a mapped
				cache implementation is to provide a
				transfer buffer mechanism. Mapping the
				pages into an application means that you
				cannot use the memory for its intended
				purpose. Further, the memory is mapped
				over the bus, so each access incurs bus
				contention that would not be present in
				accessing main memory. Further, the
				expansion bus is generally implemented
				as *significantly* slower than the
				machine's memory bus.
3) "disk cache" == cache memory local to the controller
Simple: NO
		Elaborate:	No. In order to execute code, it must be
				loaded into the processor itself. If the
				memory is not addressable by the processor,
				the code contained in it can't be executed.
				The exception to this is if the processor
				doing the execution is as ambiguously
				defined as your use of the term "disk cache",
				in which case, depending on the controller
				design, it *is* possible to have the CPU on
				the controller itself execute the code. Of
				course, if you did this, it would make the
				controller useless for the purpose for
				which it was intended.
4) "disk cache" == cache memory local to the disk
Simple: NO
Elaborate: Hell no.
[ ... ]
] Sillyness is right! In general, the more on-disk and in-controller
] caching you have, the slower your disk accesses are.
Exception: (probably why you chose the term "generally") write
caching drives significantly speed write performance.
] Most SCSI disks have some caching. Most IDE disks have very little.
] Most disk controllers have none and, in fact, you don't want them
] to have any for the reasons mentioned above. Direct DMA is the best
] way to go, especially on a Pentium/PCI-bus controller.
Another exception: track caching. Most modern disks will by default
reverse the sector ordering numbers on the disk and start reading as
soon as they hit the track and stop after they have read the desired
block.
The point in doing this is to have the sequential blocks after the
requested block in cache, and to prevent the read from necessarily
being a physical one for sequential access. Said cache is
typically a track buffer on the disk itself.
Terry Lambert
terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.