Xref: sserve comp.unix.aix:48670 comp.unix.bsd:15525 comp.unix.pc-clone.32bit:7696 comp.unix.solaris:28136 comp.unix.unixware:15045
Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!msunews!agate!howland.reston.ans.net!gatech!newsxfer.itd.umich.edu!zip.eecs.umich.edu!caen!usenet.coe.montana.edu!bsd.coe.montana.edu!nate
From: nate@bsd.coe.montana.edu (Nate Williams)
Newsgroups: comp.unix.aix,comp.unix.bsd,comp.unix.pc-clone.32bit,comp.unix.solaris,comp.unix.unixware
Subject: Re: Unix for PC
Date: 10 Dec 1994 17:55:15 GMT
Organization: Montana State University, Bozeman Montana
Lines: 100
Distribution: inet
Message-ID: <3ccq23$mba@pdq.coe.montana.edu>
References: <199411210319.TAA18133@nic.cerf.net> <3c81c7$h1o@fido.asd.sgi.com> <3c8aqi$l4i@pdq.coe.montana.edu> <3cale6$6ep@fido.asd.sgi.com>
NNTP-Posting-Host: bsd.coe.montana.edu

In article <3cale6$6ep@fido.asd.sgi.com>,
Larry McVoy <lm@slovax.engr.sgi.com> wrote:
>I wrote:
>: >I hate to burst your bubble, but I worked at Sun in the systems group for
>: >a few years (and then in the server group). They had *no* regression
>: >test other than the binaries that shipped with the OS. Since 5.x,
>: >they use the POSIX test suites but those (were) are pathetic and
>: >certainly don't cover everything.
>
>Nate Williams (nate@bsd.coe.montana.edu) wrote:
>: Then somebody should take the manager of that group out and shoot him.
>: Either that or his manager for giving unreasonable deadlines that can't
>: be met with decent testing.
>
>Until you've actually implemented a Unix regression test, don't be so
>naive. If it was just a matter of a dumbass manager, do you think they
>wouldn't have figured that out by now?

It may be difficult to write, but it can't be any more difficult to write
than some of the features that are touted to be great in SunOS. There is
*NO* excuse for no testing whatsoever.

>Real testing is hard, hard, hard. Let me know when you have a solution.

Even a 'make world' is better than nothing. Reduce the amount of memory
in the machine and run some pretty hefty scientific applications on it.
Yes, you won't find all the bugs, but you'll hopefully find some of the
more obvious ones. Heck, get one of the benchmark programs out there
that stress-tests lots of the system and run it for days on end. Even
the lmbench benchmarks are better than none.
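Just to give a feel for the sort of thing I mean, here is a rough sketch
made up on the spot (it is not anybody's real test suite): it forks a few
workers that thrash the VM and the filesystem until a timer runs out.
The runtime, buffer sizes, and temp-file names are arbitrary placeholders.

/*
 * Hypothetical sketch only, not a real regression suite: fork a few
 * workers that thrash the VM and the filesystem until a timer runs out.
 * The sizes, runtime, and file names are made-up placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define RUNTIME  (4 * 60 * 60)          /* run for four hours */
#define VM_BYTES (16 * 1024 * 1024)     /* touch 16MB over and over */

static void vm_worker(void)             /* keeps the pager busy */
{
    time_t stop = time(NULL) + RUNTIME;
    char *buf = malloc(VM_BYTES);

    if (buf == NULL)
        _exit(1);
    while (time(NULL) < stop)
        memset(buf, 0xa5, VM_BYTES);    /* dirty every page */
    _exit(0);
}

static void fs_worker(void)             /* hammers the disk sub-system */
{
    time_t stop = time(NULL) + RUNTIME;
    char name[64], block[8192];
    int i;

    memset(block, 0x5a, sizeof(block));
    sprintf(name, "stress.%ld.tmp", (long)getpid());
    while (time(NULL) < stop) {
        FILE *fp = fopen(name, "w");

        if (fp == NULL)
            _exit(1);
        for (i = 0; i < 1024; i++)      /* write and rewrite ~8MB */
            fwrite(block, 1, sizeof(block), fp);
        fclose(fp);
        remove(name);
    }
    _exit(0);
}

int main(void)
{
    pid_t pid;
    int i, status;

    for (i = 0; i < 4; i++) {           /* two of each kind of worker */
        pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            if (i % 2)
                fs_worker();
            else
                vm_worker();
        }
    }
    while (wait(&status) > 0)           /* wait for all the workers */
        ;
    return 0;
}

Run something like that on a box with the memory dialed down and it won't
catch everything, but it should shake out a lot of the obvious stuff.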
>: If microsoft does one thing right it's test it's software. Granted, it
>: still never seems to find the bugs that the average user find the first
>: time around, but they do indeed spend alot of their resources trying to
>: break their own products.
>
>Reread that paragraph and tell me the content? I'm supposed to (a)
>believe that Microsoft does good testing but (b) accept that they can't
>find the bugs that the average user encounters? And that is good
>testing? Maybe the problem here is your definition of testing, I fail
>to see what it is.

You mention what the problem is below. Most of the bugs are found in
'strange' hardware combinations that are prevalent in PC's. At least
with workstations from Sun, HP, DEC, etc., these sorts of problems are
not as common. And at least they've made an attempt at finding bugs
internally, which apparently the Sun systems group didn't bother with.

>: >The commercial OS release mechanism is pretty similar to Linux. You send
>: >out some alpha junk to a few sites and see what the reaction is. The closer
>: >you get to beta the less you allow in. After beta only show stoppers get
>: >in.
>
>: That might be the way SUN does things, but I know of a couple other OS
>: vendors that don't do things that way. :)
>
>Name them.

The FreeBSD group, and from what I've read BSDi seems to be doing things
better than other folks. Cygnus and Tektronix as well; I'd need to do
more research to name more names.

>: >Stability is important but there is little in the way of testing done to
>: >insure stability. SMCC might be fixing that, I understand they have/had
>: >Solaris 2.4 months before they shipped it. :-)
>
>: Stability can be tested by putting real world applications on your
>: hardware. The way FreeBSD tests it's software is by sticking it on
>: wcarchive.cdrom.com. Having 250-350 ftp's is one way of seeing how
>: the system reacts under load. :)
>
>That tests 250-350 ftps. So what? Is that it? That's great if I want to
>be doing a bunch of ftps. How about a bunch of ftps and a make? Or a
>untar with a make? Or X and an untar and SLIP but no ftps?

That'll test your applications, but that kind of load also exercises the
VM system, the memory system, the networking code, and how well the disk
sub-system holds up. And there are folks on the core team who do test
their boxes with 4 makes running (one over NFS), a couple of ftp's, a
couple of SLIP lines, and a couple of benchmarks running just for fun
(a rough sketch of that kind of mixed-load driver is tacked on at the end
of this post). No, this isn't complete testing, but it seems to find
*most* of the bugs that affect folks.

>Oh, did I mention that all the interesting bugs are ones that only show
>up under some weirdball combo of unrelated things?

Those are hard to find, and those kinds of bugs will mostly only be found
by end users. But NOT doing any regression testing at all is simply
foolish, and ANYONE who develops software that way ought to be laughed
out of the business.


Nate

-- 
nate@bsd.coe.montana.edu | FreeBSD dude and all around tech.
nate@cs.montana.edu      | weenie.
work #: (406) 994-5980   | Unemployed, looking for permanent work in
home #: (406) 586-0579   | CS/EE field.
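P.S. Here is the mixed-load sketch mentioned above. Again, it is just an
illustration, not how the FreeBSD team (or anybody else) actually drives
their testing, and every command in the job list (the make directories,
fetch-loop.sh, run-benchmarks.sh) is a made-up placeholder for whatever
real builds, ftp traffic, and benchmarks are handy.

/*
 * Hypothetical sketch only: kick off a fixed list of long-running jobs
 * in parallel (stand-ins for the makes, ftp fetches, and benchmarks
 * mentioned above) and report anything that exits badly.  Every command
 * string below is a made-up placeholder.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static const char *jobs[] = {
    "make -C /usr/src/build1",      /* placeholder build #1 */
    "make -C /usr/src/build2",      /* placeholder build #2 */
    "make -C /nfs/src/build3",      /* placeholder build over NFS */
    "sh fetch-loop.sh",             /* placeholder ftp traffic */
    "sh run-benchmarks.sh",         /* placeholder benchmark pass */
};
#define NJOBS (int)(sizeof(jobs) / sizeof(jobs[0]))

int main(void)
{
    pid_t pids[NJOBS];
    int i, status, failures = 0;

    for (i = 0; i < NJOBS; i++) {
        pids[i] = fork();
        if (pids[i] < 0) {
            perror("fork");
            return 1;
        }
        if (pids[i] == 0) {
            /* child: hand the command line to the shell */
            execl("/bin/sh", "sh", "-c", jobs[i], (char *)NULL);
            perror("execl");
            _exit(127);
        }
    }

    for (i = 0; i < NJOBS; i++) {       /* reap each job and check it */
        if (waitpid(pids[i], &status, 0) < 0 ||
            !WIFEXITED(status) || WEXITSTATUS(status) != 0) {
            fprintf(stderr, "job failed: %s\n", jobs[i]);
            failures++;
        }
    }
    printf("%d of %d jobs failed\n", failures, NJOBS);
    return failures ? 1 : 0;
}

Crude, but left running for a day or two it approximates the kind of
mixed load described above, and anything that falls over gets flagged.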