Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.rmit.EDU.AU!news.unimelb.EDU.AU!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!vic.news.telstra.net!act.news.telstra.net!imci3!imci4!newsfeed.internetmci.com!in1.uu.net!news.artisoft.com!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: FreeBSD as a router
Date: Thu, 16 May 1996 18:04:05 -0700
Organization: Me
Lines: 100
Message-ID: <319BD085.3EE5FAF9@lambert.org>
References: <4lfm8j$kn3@nuscc.nus.sg> <317CAABE.7DE14518@FreeBSD.org> <4lt098$erq@itchy.serv.net> <Pine.SUN.3.90.960427140735.3161C-100000@tulip.cs.odu.edu> <4mj7f2$mno@news.clinet.fi> <318E6BB1.6A71C39B@lambert.org> <4mtfsg$14l8@serra.unipi.it> <319407B4.32F4B8B6@lambert.org> <4nc6v9$jib@news.siemens.at>
NNTP-Posting-Host: hecate.artisoft.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.01 (X11; I; Linux 1.1.76 i486)

Ingo Molnar wrote:
] Shared IRQ contra no-shared IRQ:
]
] logical events: board 1 issues an IRQ
]                 board 2 issues an IRQ
]
] discrete interrupt system:
]
] board 1 issues IRQ1
] board 2 issues IRQ2 (or IRQ1 a bit later if they are serialised
] internally)
]
] shared interrupt system:
]
] board1/2 issues IRQ1, one cycle to decide which one it was
] board1/2 issues IRQ1, one cycle to decide which one it was
]
] There is an overhead of deciding which one it was.
] But if there is heavy traffic, then it's MUCH cheaper to have
] shared interrupts. Interrupt frequency is to be kept low.

The only thing you are doing by checking board 2 after processing
board 1's interrupt is delaying the ack, so that board 1 is
unavailable to process data because its interrupt has not yet been
ack'ed in the shared case.

I fail to see how this is a win.

] Internal serialization is a clear loss. Interrupt latencies can
] be up to several tens of usecs. Checking for a board is a few PCI
] cycles, much faster.
You are serializing latency by delaying the first board generating
its next interrupt until you have processed the interrupt for the
second board. This is more harmful than having to handle two
interrupts.

] (maybe not true): And for PC compatible machines, interrupt processing
] is still working the old way (even with PCI cards): the XT PIC hands
] over the IRQ vector in an ISA cycle. (not sure about this last one)
] very ineffective. (this was no pain for Intel since DOS was almost
] never interrupt-driven)
]
] So theoretically you are right. But on PC compatible systems you are
] not.

I think that depends on PCI being bridged from the ISA instead of
the other way around. The DEC, Motorola, Apple, and new IBM PCI/ISA
bridge chips expect the native bus to be PCI. This is, in general,
a necessary optimization for PnP support of direct bus interconnect
ISA devices (ie: onboard multi I/O and IDE, SCSI, and ethernet
controllers).

] no we are talking about network cards. IRQ latency on a 3Com509
] card is about 100 usecs, from the point on where a packet arrives and
] the board raises the (ISA) IRQ line, to the point where the first
] instruction in the IRQ handler starts. (with a call gate in the
] IDT, a ring switch and with minimal (read, ~1-2 usecs) preprocessing).
]
] it's a clear win if we "merge" interrupts in busy systems.
] Shared IRQ systems with proper OS support do exactly this.

And a clear loss if the "busy system" is busy because of boards
which happen to be on the same IRQ, since it delays the ability for
the processor to process data generated by one board (for instance,
as the result of a DMA completion) until the second board is polled.

] : Of course all this assumes you care about performance... which
] : you do, or you'd be running cheap ISA cards for everything (and
] : not sharing interrupts because ISA is incapable of it anyway).
]
] well, the possibility is there, and many ISPs use shared ISA IRQ
] serial cards.
] (well, serial cards are not a fair example, IRQ
] merging is just too ideal there, since many lightweight interrupts
] occur)

That, and the interrupts are shared intra-card, not inter-card, so
there is no bus arbitration latency for that particular example;
that makes it a good example for supporting your point, I suppose.
8-).

IMO, the name of the game is concurrency. The processor has to stay
as busy as possible, not executing bus interaction instructions.
This is especially true of, for instance, a P166, which runs a
33MHz PCI and takes 5 CPU clocks for each bus clock to arbitrate
interrupt sharing.

Yes, this is low overhead if you have a crappy PCI implementation,
but the whole point I was trying to make was: don't buy crappy
hardware if you require performance.

This should be intuitively obvious. 8-).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.