Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!metro!metro!asstdc.scgt.oz.au!nsw.news.telstra.net!act.news.telstra.net!psgrain!iafrica.com!pipex-sa.net!hole.news.pipex.net!pipex!tank.news.pipex.net!pipex!www.nntp.primenet.com!nntp.primenet.com!news1.best.com!nntp1.best.com!flash.noc.best.net!not-for-mail
From: dillon@best.com (Matthew Dillon)
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: FreeBSD as news-server??
Date: 16 Oct 1996 17:32:47 -0700
Organization: Best Internet Communications, Inc. (info@best.com)
Lines: 238
Distribution: world
Message-ID: <543urf$ar3@flash.noc.best.net>
References: <537ddl$3cc@amd40.wecs.org> <53pm5c$5ks@twwells.com> <53u1ic$61i@flash.noc.best.net> <53ucuj$8qh@twwells.com>
NNTP-Posting-Host: flash.noc.best.net

:In article <53ucuj$8qh@twwells.com>, T. William Wells <bill@twwells.com> wrote:
:>Before I get into this, one thing I don't recall being mentioned,
:>so I'll mention it just in case: be sure to compile the kernel on
:>your news machine to allow lots of open files and lots of child
:>processes. Otherwise, you _will_ run out of resources.....
:>
:>In article <53u1ic$61i@flash.noc.best.net>,
:>Matthew Dillon <dillon@best.com> wrote:
:>: :In article <53pm5c$5ks@twwells.com>, T. William Wells <bill@twwells.com> wrote:
:>: :>No, it wouldn't. Almost certainly, INN is slower for a single
:>: :>incoming newsfeed than C news. In this day of huge news spool
:>: :>directories, it is absolutely necessary that the process
:>: :>accepting incoming NNTP *not* write the articles to the spool.
:>: :>The latency this introduces into the protocol slows it down way
:>: :>too much. (No, streaming doesn't help -- many providers have
:>: :>found quite the opposite and have stopped using it....)
:>: :>
:>: :>With bare INN, you cannot even get 2 articles/second on typical
:>:
:>:    woa woa!  Not true any more!  Just make sure all of your feeds
:>:    understand INN's streaming mode.
:>
:>Streaming mode is bad unless you have *just one* feed. Otherwise,
:>it steps on itself with latency. Even worse than nonstreaming
:>feeds do. And not all providers will send you a streaming feed....
:>
:>Also, experience (and my theoretical analysis) shows that multiple
:>parallel feeds generally work better than streaming.

    Well, I've definitely never had a problem running streaming mode,
    and I *have* tested it with and without.  The system definitely
    runs a whole lot better with streaming on.  It catches up much
    faster (if I, say, kill innd for 30 minutes then restart it with
    streaming on, then repeat with it off).  Perhaps the machine you
    were running it on wasn't tuned for it.

    I find that you get much better results with larger TCP window
    sizes... it tends to make the streaming much more efficient.  With
    tiny TCP window sizes, one might as well not use streaming at all,
    but with big TCP window sizes INN can 'dwell' on a single
    descriptor for a dozen articles before going on to the next
    descriptor.  This works especially well if you have multiple feeds
    and one or two of them get out of synchronization.

:>:    understand INN's streaming mode.  I get about a 5 articles/sec
:>:    transfer rate from my main news machine to my nntp machine
:>:    under medium load conditions (around 200 nnrpd's users).
:>
:>I'm going to bet that you aren't using "typical PC hardware". :-)
:>When I was doing INN with streaming mode, I wasn't even getting 2
:>articles/second.

    We used to use a Pentium-90 running FreeBSD.  At the moment we are
    using a Challenge S.  The newsfeeds work equally well on both
    platforms.  It's pretty run-of-the-mill hardware.
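    Regarding those TCP window sizes: at the socket level it is nothing
    more than a couple of setsockopt() calls made on the feed connection
    before it starts streaming.  This is only a rough sketch of the idea,
    not INN source, and the 64K figure is just an example:

/* Sketch only -- not INN source.  Larger per-connection socket buffers
 * let a streaming feed keep more articles in flight before the sender
 * has to stop and wait for ACKs.  The 64K figure is just an example.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int widen_tcp_windows(int sock)
{
    int bufsize = 64 * 1024;    /* example value; tune to taste */

    /* Do this before the connection ramps up for the full effect. */
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   (char *)&bufsize, sizeof(bufsize)) < 0 ||
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (char *)&bufsize, sizeof(bufsize)) < 0) {
        perror("setsockopt");
        return -1;
    }
    return 0;
}

    You can also raise the system-wide defaults with the
    net.inet.tcp.sendspace and net.inet.tcp.recvspace sysctls, if the
    machine has the RAM to spare for it.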
:>There is something you want to keep in mind, regarding newsfeeds. A
:>typical article requires 64K of disk activity to write just the
:>article. (Or, did, last year. This is an O(n) in the size of the
:>newsfeed effect -- which means that article disk activity is
:>O(n**2) in the newsfeed size.)
:>
:>What this means is that optimizations regarding the history file
:>are generally pointless. Keeping the history file in memory cuts
:>out at most 8K per article of disk activity -- while INN spends
:>time waiting on that 64K (it's mostly directory stuff, so INN
:>doesn't get buffer cache benefits for it). Since these two
:>operations can be done somewhat asynchronously, you don't get
:>much "win" by minimizing history accesses.

    Perhaps it is pointless with a single feed, but it certainly is
    NOT pointless if you have multiple redundant feeds.  The reason is
    simple: take us, with 6 or 7 full feeds coming in.  That means we
    will already have any given article 6 out of 7 times it is
    offered.  History file caching is EXTREMELY important, because 6
    out of 7 responses to IHAVE requests can then be answered straight
    from the cache (the response being 'I've already got the
    article'), and thus involve *NO* disk activity whatsoever.

    Anybody trying to run INND, streaming or not, who is not properly
    caching his history file lookups is not going to be very happy if
    he has multiple incoming feeds.

:>Anyhow, regarding C news -- you need less RAM than you do with
:>INN precisely because its components use less memory. Sure, that
:>means more CPU time spent on, say, kernel calls in expire. But it
:>doesn't increase disk activity (and may reduce it, actually.)
:>
:>Related to that, if you can at all do it, don't have innd accept
:>nnrpd connections. Use a separate daemon like connectd to do it
:>on a different IP address than innd's. This will not only make
:>the initial connection *much* faster, it'll cut down on the space
:>used as innd forks, and thus on swapping.
:...
:>: The partition containing the history file MUST have at *least* 1GB
:>: free.  The reason is that it must not only support the potentially
:>: 100-200MB history file, it must also support the expire run's
:>: history file rebuild *AND* support active references to unlinked
:>: history files by nnrpd and other programs that will prevent the
:>: 'old' history file's space from being reclaimed.
:>
:>This is unnecessary. You need space for just two copies of the
:>history file, plus lots of log space. 300M free works. For now.
:>:-)

    Better to be safe than sorry.  Disk space is so cheap, and the
    history file so important, that it definitely does not pay to be
    stingy here.  There are a thousand things that can create
    references to unlinked history files... literally!  I've seen
    everything from blown nnrpd processes to blown expire processes
    still hanging around... I've seen nnrpd users stay online for
    three days straight with the same process!  Without enough room to
    maneuver, any one of these can completely destroy your history
    file.
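    If you want a belt-and-suspenders check, it is easy to have cron
    refuse to kick off expire when the history partition is getting
    tight.  This is only a rough sketch of the idea, not anything from
    the INN distribution -- the path and the 1GB threshold below are
    just examples:

/* Sketch only -- not part of INN.  Refuse to run expire if the history
 * partition doesn't have room for the rebuild plus plenty of slop.
 * The path and the 1GB threshold are example values.
 */
#include <sys/param.h>
#include <sys/mount.h>
#include <stdio.h>

#define HISTORY_DIR "/news/etc"                 /* example path     */
#define MIN_FREE    (1024.0 * 1024.0 * 1024.0)  /* ~1GB of headroom */

int main(void)
{
    struct statfs fs;
    double avail;

    if (statfs(HISTORY_DIR, &fs) < 0) {
        perror("statfs");
        return 2;
    }
    avail = (double)fs.f_bavail * (double)fs.f_bsize;
    if (avail < MIN_FREE) {
        fprintf(stderr, "only %.0f MB free on the history partition, "
                "skipping expire\n", avail / (1024.0 * 1024.0));
        return 1;
    }
    return 0;    /* enough room -- let the nightly expire run proceed */
}

    A nonzero exit just means 'don't run expire tonight'; whatever
    script drives your nightly expire can check for it and complain
    loudly instead of shredding the history file.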
:>: (c) Overview... it is not strictly necessary to put the overview
:>:     files on a separate physical disk if you (1) have three or more
:>:     disks for your main spool and (2) buffer the overview records
:>:     in the newsfeeds file correctly.
:>
:>But be sure to put the overview files in a separate directory tree
:>-- otherwise overchan spends a lot of time directory searching.

    The overview file is normally near the beginning of the directory,
    so statistically it's a wash.  Besides, overchan is an
    asynchronous process.  It does not really matter if it takes a
    little extra overhead... it's in the noise, because the directory
    in question has *already* been cached by the act of writing out
    the article file in the first place.  The namei caching works for
    .overview files as well.

:>: * If you normally have more than a dozen or so active NNTP users,
:>:   have a *second* machine.  That is, use one machine for your
:>:   newsfeeds machine with a minimal spool, and a second machine for
:>:   your reader machine with a huge spool.  nnrpd processes *kill* INN.
:>
:>This is a relatively low time for me and I have 17 nnrpds. I've
:>seen like 40 and it doesn't bother my innd at all.
:>
:>Then again, one of the major hacks in my server is that articles
:>are stored in subdirectories of the newsgroup tree. Instead of
:>"%ld", artnum, they're stored as "%07ld.a/%03ld", artnum / 1000,
:>artnum % 1000. This made a *huge* difference in efficiency, both
:>increasing the speed of innd and decreasing the effects of a
:>large number of nnrpds.

    I'm beginning to wonder.... what kind of hardware are you running
    this stuff on?

:>: (a) use the history file page table in-core option or the history
:>:     file mmap() option.  I actually suggest the page table in-core
:>:     option because most UNIX system's buffer caching algorithms
:>:     seem to work better with lseek()/read() than with mmap()/access,
:>:     even though the overhead is greater with lseek()/read().
:>
:>As I said, I don't think this makes much difference anymore. For
:>sure, on the system I have, it makes things *much* worse to have
:>a large data segment for innd.

    Any UNIX that implements vfork() will not care at all, and FreeBSD
    doesn't care whether you use fork() *or* vfork().  It's a big zero
    time-wise, even with huge data segments.  Even so, if you are
    getting a lot of nnrpd starts, running a separate server is a good
    idea.  I've hacked numerous system daemons to run 'in the
    background' rather than be run from inetd.conf, to avoid the
    exec() entirely and just do an accept()/fork().

:>: (b) Buffer writes to the log file.  It's another configuration
:>:     option.  Be generous :-)  This allows you to put the logs on
:>:     the same physical disk as the history file.
:>
:>It's not a configuration option. It's an argument to innd. If you
:>don't specify it, you get buffering.

    Hmm.... ah.  Well, there's one configuration option,
    DEFAULT_TIMEOUT, for log file flushing.  I guess the log buffering
    is just a stdio descriptor or something.  I thought there was a
    buffer size parameter but I don't see one.

:>: (c) Use the absolute latest INN release, with the streaming mode
:>:     extensions.
:>
:>That's inn1.4unoff4, I believe. There is a 1.5beta.....

    Probably.  inn1.4unoff4 is what I use.

:>: (d) If you run nnrpd, for gods sake use the shared-active patched
:>:     version!
:>
:>I might give that a try, but for now, I haven't seen a whole lot
:>of evidence that it'll make much difference in my system. Then
:>again, I don't swap much. If I were, I'd probably want that extra
:>space consumed in each nnrpd.

    You are only running a few nnrpd's, so it doesn't matter.  But
    shared-active saves a huge amount of startup processing plus, if
    you have a full active file, memory on the order of the size of
    the active file (500K to 1MB usually) per nnrpd process.  It more
    than halves the data segment size for nnrpd, which means you can
    run twice as many before you start to swap.  This may not be
    readily apparent, since most 'ps' programs do not distinguish
    between the shared and non-shared data set size, but the savings
    are definitely there.  At peak, we have on the order of 250
    nnrpd's running on the newsreader machine.  Before putting in
    shared-active, I couldn't get better than 150 before the machine
    started to swap.
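    By the way, the inetd hack I mentioned above is nothing fancy: the
    daemon just holds its own listen socket and forks per connection,
    so you pay a fork() but never an exec().  A bare-bones sketch of
    the general shape (not any particular daemon's code; the port and
    the trivial handler are just placeholders):

/* Sketch only -- the shape of a standalone daemon that would otherwise
 * be started from inetd.  One fork() per connection, no exec().
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void reap(int sig)
{
    int saved = errno;

    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)     /* collect exited children */
        ;
    errno = saved;
}

static void serve_client(int fd)               /* placeholder handler */
{
    const char banner[] = "200 hello\r\n";

    write(fd, banner, sizeof(banner) - 1);
    close(fd);
}

int main(void)
{
    struct sockaddr_in sin;
    int s, one = 1;

    signal(SIGCHLD, reap);

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        exit(1);
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (char *)&one, sizeof(one));

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(1119);                /* example port */

    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0 || listen(s, 64) < 0)
        exit(1);

    for (;;) {
        int fd = accept(s, NULL, NULL);
        if (fd < 0)
            continue;                          /* e.g. EINTR from SIGCHLD */
        if (fork() == 0) {                     /* child: handle and exit  */
            close(s);
            serve_client(fd);
            _exit(0);
        }
        close(fd);                             /* parent: keep listening  */
    }
}

    The win over inetd is exactly the exec() you never do.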
:>: to such people that (a) channels have to fork/exec too, and with
:>: much greater overhead doing so from innd rather than cron, and
:>: (b) unless you have > 20 feeds, doing 20 fork/exec's from cron once
:>: every 5 minutes has almost no overhead, and you can even stagger them
:>: from cron to create less disk contention.  This is versus the real
:>: time channel feeds which, even when buffered, give you NO ability
:>: to stagger their operational starts to reduce disk contention.
:>
:>And no ability to ensure that 20 * 60M won't really, really,
:>screw you up memory-wise. Basically, it's a bad idea to run
:>channel feeds. For that matter, I think I'm going to remove the
:>last of mine (for overview). Then innd will *never* fork -- and
:>that's one less thing to get in the way of shovelling articles as
:>fast as possible. :-)

    Hmm..  file batching overview :-) :-) :-)

                                                -Matt

-- 
    Matthew Dillon    Engineering, BEST Internet Communications, Inc.
    <dillon@best.net>

    [always include a portion of the original email in any response!]