Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!agate!usenet.ins.cwru.edu!odin!chet
From: chet@odin.ins.cwru.edu (Chet Ramey)
Newsgroups: comp.os.386bsd.bugs
Subject: Re: VM problems w/unlimited memory?
Date: 17 Mar 1993 20:27:36 GMT
Organization: Case Western Reserve University, Cleveland OH (USA)
Lines: 56
Message-ID: <1o81joINNieh@usenet.INS.CWRU.Edu>
References: <1o33btINNqoa@usenet.INS.CWRU.Edu> <1993Mar16.031317.8533@fcom.cc.utah.edu> <1o4spvINNl1v@usenet.INS.CWRU.Edu> <1993Mar16.221837.10302@fcom.cc.utah.edu>
NNTP-Posting-Host: odin.ins.cwru.edu

In article <1993Mar16.221837.10302@fcom.cc.utah.edu> terry@cs.weber.edu
(A Wizard of Earth C) writes:

>Sorry, but the fcntl fails not because it found allocated memory for the
>closed fd, but because the fd exceeds fd_lastfile in the per process
>open file table.  The location, since it doesn't exist, isn't addressable.

This requires that the programmer be aware of the implementation technique
used by the kernel to manage the process `open file table'.  It's a kernel
problem -- we all agree on that.

>The safest would be the call I suggested to return the largest open fd
>and add one to it for your use of the fd; I don't understand from the
>code why it's necessary to get anything other than the next available
>descriptor; the ones the shell cares about, 0, 1, and 2, are already
>taken at that point; if nothing else a search starting at 3 would be
>reasonable.

Nope.  Not reasonable.  If a user were to use fd 3 in a redirection spec,
perhaps to save one of the open file descriptors to, or to open an
auxiliary input, the file descriptor we so carefully opened to /dev/tty
and count on to be connected to the terminal would suddenly be invalid.

Bash attempts to choose an `unlikely' file descriptor -- nothing is
totally safe, but this minimizes problems.  Bash uses the same technique
in shell.c to protect the file descriptor it's using to read a script --
all shells do pretty much the same thing.  (Well, at least it does now.
The distributed version of 1.12 does not do this.)

Before Posix.2 it was `safe' to assume that fd 19 could be used for
this -- a process was guaranteed to have at least 20 file descriptors
available to it, and there were no multi-digit file descriptors in
redirections.  This is no longer the case.

Since bash currently uses stdio for reading scripts, and stdio provides
no portable way to change the file descriptor associated with a given
FILE *, bash attempts to avoid having to do so.  We used to just use the
file descriptor returned from opening the script, but ran into a number
of mysterious problems as a result.

>I do *NOT* suggest 19, as this would tend to be bad for
>most SVR4 systems, as it would have a tendency to push the number of
>fd's over the bucket limit if NFPCHUNK were 20 and allocation were done
>from index 20 on up.

I don't really see how a loop going down starting at 19 will cause the
fd `table' to grow beyond 20.  (That is the code we were talking about,
right?)

I'll probably put something in the code to clamp the max file descriptor
to something reasonable, like 256, and press for its inclusion in the
next release of bash, whenever that is.

Chet
-- 
``The use of history as therapy means the corruption of history as
history.''
	-- Arthur Schlesinger

Chet Ramey, Case Western Reserve University	Internet: chet@po.CWRU.Edu
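
For reference, below is a minimal sketch of the `unlikely file descriptor'
technique the post describes: duplicate an open descriptor onto the highest
free slot below some clamp so that user redirections on small fd numbers
don't clobber it.  This is an illustration only, not bash's actual code; the
name move_to_high_fd() and the HIGH_FD_MAX clamp of 255 are assumptions
loosely based on the "something reasonable, like 256" remark above.

    /* Sketch only -- illustrative, not bash source. */
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define HIGH_FD_MAX 255     /* assumed clamp on the search */

    /* Duplicate FD onto the highest unused descriptor <= HIGH_FD_MAX,
       closing the original on success.  On failure, return FD unchanged. */
    static int
    move_to_high_fd(int fd)
    {
        int target;

        for (target = HIGH_FD_MAX; target > fd; target--) {
            /* F_GETFD fails with EBADF when TARGET is not open,
               so that slot is free for us to use. */
            if (fcntl(target, F_GETFD, 0) == -1 && errno == EBADF) {
                if (dup2(fd, target) != -1) {
                    close(fd);
                    return target;
                }
            }
        }
        return fd;      /* no free high slot found; keep the original */
    }

Searching downward from the clamp, rather than upward from 3, is what keeps
the descriptor out of the range a script is likely to name in a redirection.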