Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!math.ohio-state.edu!howland.reston.ans.net!noc.near.net!ceylon!genesis!steve2
From: steve2@genesis.nred.ma.us (Steve Gerakines)
Newsgroups: comp.os.386bsd.bugs
Subject: Re: Forward: QIC-80
Message-ID: <CED28C.G57@genesis.nred.ma.us>
Date: 4 Oct 93 07:02:34 GMT
References: <1993Sep26.203329.10048@gmd.de> <1993Sep30.233456.11034@fcom.cc.utah.edu>
Organization: Genesis Public Access Unix +1 508 664 0149
Lines: 90

>A Wizard of Earth C says:
>The streaming floppy tape support is an altogether different issue, and is
>based on some bad architectural choices by the designers of the devices.

It's bad depending on what OS you're using. :-)  The QIC-40/80 drives
really were designed for use with DOS and a dedicated CPU.  Sitting in a
tight loop for 2.5ms and performing after-the-fact error correction under
DOS is no big deal.  Ripping some of the more complicated logic out of the
hardware and into the tape driver is what makes the drives so cheap.

>[low scheduling timer resolution]
>
>This tends to be bad; just like the original Archive/Computone tape
>drivers for SCO systems, actively running the tape drive kills everything
>else on the system as the tape driver buzz-loops so as not to miss its
>timing window, and nothing else is allowed to run.  Among other things,
>this results in QIC-40/80 drives being totally useless for network
>backup, and extremely slow, because the archiving programs (like tar and
>cpio) are not allowed to interleave their execution with that of the tape
>drive -- nor is the disk driver, for that matter.

For QIC-40/80 drives (or at least my driver), this just isn't so.  Segment
I/O is broken up into 32 1K sector transfers.  While each sector is en
route, the driver waits for an interrupt from the fdc.  During that waiting
period, there's no reason other I/O can't go on.
I agree you probably wouldn't want this as a network drive, though (unless
you can do _lots_ of buffering on your card).

>The canonical solution is to either up the LBOLT clock resolution to
>1 or 2 ms (there is a 2.5ms timing window for a particular operation) and
>add a real-time scheduling queue that gets examined before all other run
>queues.  This is the solution taken by several UNIX variants, mostly those
>with real-time extensions intrinsic to the OS.  This works, but is, all
>things considered, an unsatisfactory approach, since timing differences
>still must be resolved by buzz-looping at about 30% of your system (at 1ms
>resolution -- HZ=1000) or 60% of your system (HZ=500).

Not only is there the problem of resolving timing differences, but there's
also the problem of consistency.  If two drivers are battling for a short
delay period, one might not give back the CPU in time to schedule the other
critically delayed driver.  In other words, a request to wait 2.5ms might
in some instances end up taking 2.7ms.  This might not always be
acceptable.

>Finally, no matter what implementation is used, it should be noted that
>each vendor is not sufficiently constrained by the wording of the
>standard; this means that it is nearly impossible to write a single driver
>that operates, for instance, a Colorado Systems Jumbo 250 and some other
>vendor's QIC-40/80 drive without rewriting, at a minimum, the
>initialization code.

This is not completely true.  I have reports of several different drive
types working and being recognized by the driver.  The only shortfall of
the QIC-117 common command set standard was that it failed to standardize
the drive activation commands.  Fortunately, however, it appears that
vendors chose only a few different combinations.  Thus, it is quite
possible to write a driver that handles many different models; you just
need to perform a few different probes to find the right one.  Otherwise,
I didn't find the command set to be a big issue.
>Just because something is ubiquitous doesn't mean that it is standard...
>QIC-40/80 drives are just the best example we have so far.

The QIC standards are pretty specific when it comes to how the drive
should operate mechanically.  On the software side, however, there is an
annoying lack of examples.  From a standards standpoint, you would think
examples would be crucial.  There should be some reference at which the
implementor can point and say "ah! that's how it's done!", not "I think
this is what they meant."

The driver is not even half the battle; the rest is the high-level data
layout, which is horribly complex, much more so than it needs to be.  For
example, here's some of what the standard says a backup program should be
able to handle: multiple tape formats, multiple compression types,
multiple compressed segment layouts, multiple bad sector mappings,
multiple OS save data, an optional compression map (which may or may not
be present, in one of two types), and finally optional extension
information, which by now most people probably ignore anyhow. :-)

What were these people thinking?  It's almost as if there were either no
(or perhaps too many :-)) programmers at the wheel when these standards
were derived.  The program is obviously not impossible to write.  It just
bugs me: after designing a piece of hardware that already puts a large
burden on the programmer, why make her job even more difficult?

So, to the person who wondered why certain hardware gets "snubbed": it
seems to me that the cheaper the hardware is, the more popular it is, and
the more painful it is to program for.  On top of that, many vendors are
hesitant to reveal how their hardware operates in the first place.

- Steve
steve2@genesis.nred.ma.us