Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!newshost.nla.gov.au!act.news.telstra.net!psgrain!library.ucla.edu!nnrp.info.ucla.edu!info.ucla.edu!galaxy.ucr.edu!noise.ucr.edu!news.service.uci.edu!unogate!mvb.saic.com!news.mathworks.com!newsfeed.internetmci.com!uwm.edu!news.sol.net!uniserve!van-bc!news.wimsey.com!not-for-mail
From: jhenders@wimsey.com (John Henders)
Newsgroups: comp.unix.bsd.bsdi.misc
Subject: Re: news 'history' file is HUGE
Date: 20 Mar 1996 06:26:05 -0800
Organization: Wimsey Information Services
Lines: 28
Message-ID: <4ip4ht$mlc@vanbc.wimsey.com>
References: <4i34od$ctl@pegasus.starlink.com> <4i3ni6$qsu@jaxnet.jaxnet.com> <4i6te0$l7r@vanbc.wimsey.com> <4ih7t8$h2k@jaxnet.jaxnet.com>
NNTP-Posting-Host: vanbc.wimsey.com
X-Newsreader: NN version 6.5.0 #3 (NOV)

In <4ih7t8$h2k@jaxnet.jaxnet.com> krenaut@jax.jaxnet.com (Karl Renaut) writes:

>Anybody know a technique that I could use to split the load of my news
>server across multiple machines? Can I NFS mount the /var/news/spool
>tree on another machine? How do large ISP's handle not only a full news
>feed, but hundreds of people requesting articles?

NFS mounting is probably not the best way to do it: you'll run into a
limit on how many processes can access the spool before the system bogs
down, unless you can allocate huge amounts of cache memory to NFS (if
that's even possible). There's a discussion of hardware NFS servers for
this on news.software.nntp, and Netcom says they're moving away from
that approach to multiple servers.

The most effective way to manage multiple machines for news seems to be
to use one master machine for your incoming and outgoing feeds, which
can get by with a very short-term spool, and have it feed reader
machines in slave mode. Unfortunately, BSDI's emulation of SysV shared
memory doesn't work with the nnrp shared-active patches for nntpd, or
we could get much more efficient use of memory for news reading.
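For what it's worth, under INN a feed from the master to a reader box
looks something like the following (the hostname and label are made up;
see newsfeeds(5) for the flags on your version):

```
# In the master's newsfeeds file -- reader1.example.com is hypothetical.
# Tf  = write entries to a batch file for this site
# Wnm = record the article's storage name and Message-ID in the batch,
#       so nntpsend/innxmit can push the articles over NNTP later.
reader1.example.com:*:Tf,Wnm:reader1
```

Then nntpsend (driven from cron) flushes and transmits the batch, and
the reader machine only keeps the master's active file in sync rather
than taking a full independent feed.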
As it is, each nnrpd has its own copy of the active file in memory,
which can cost you over a meg of memory per nnrpd if you have a huge
active file. This can be reduced considerably by trimming your active
file of the thousands of groups that see essentially no traffic, but
if you're competing in a market where customers judge you by the
quantity of newsgroups you carry rather than the quality of your feed,
that may not be an option.

-- 
John Henders
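A quick sketch of the trimming step mentioned above (the sample data
and filenames are made up; the standard active format is "group high
low flag", and a group that has never seen an article typically has
high = low - 1):

```shell
# Build a tiny sample active file -- hypothetical groups for illustration.
printf '%s\n' \
  'comp.unix.bsd.bsdi.misc 0000004231 0000003995 y' \
  'alt.dead.group 0000000000 0000000001 y' > active.sample

# Keep only groups whose high water mark is at least the low water mark,
# i.e. groups that have actually carried an article. The "+ 0" forces awk
# to compare the zero-padded fields numerically, not as strings.
awk '($2 + 0) >= ($3 + 0)' active.sample > active.trimmed

cat active.trimmed
```

You'd obviously want to run something like this against a copy of the
real active file, and cross-check the result against your newsfeeds
before swapping it in.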