Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!olivea!uunet!enterpoop.mit.edu!cambridge-news.cygnus.com!athena.mit.edu!raeburn
From: raeburn@athena.mit.edu (Ken Raeburn)
Newsgroups: comp.os.386bsd.bugs
Subject: Re: gcc 2.3.3 bug?
Message-ID: <RAEBURN.93Mar30122651@cambridge.mit.edu>
Date: 30 Mar 93 17:26:51 GMT
References: <1p7c3mINNdh5@urmel.informatik.rwth-aachen.de>
Organization: Massachusetts Institute of Technology
Lines: 35
NNTP-Posting-Host: cambridge.cygnus.com
In-reply-to: kuku@acds.physik.rwth-aachen.de's message of 29 Mar 1993 17:36:54 GMT

In article <1p7c3mINNdh5@urmel.informatik.rwth-aachen.de> kuku@acds.physik.rwth-aachen.de (Christoph Kukulies) writes:

> #include <limits.h>
>
> int i = -2147483648;
>
> main()
> {
>     printf("%d %d\n", INT_MIN, i);
> }
>
> My gcc 2.3.3 issues a warning:
>
>     integer constant so large that it is unsigned.
>
> Obviously it is in the range of INT_MIN and INT_MAX.

Actually, no, it isn't.  According to ANSI C, the minus sign isn't part
of the number, so it parses as UNARY_MINUS NUMBER, and the NUMBER is
large enough that it becomes unsigned.  Then this large unsigned
constant gets negated, which produces another large unsigned constant.

A proper definition for INT_MIN on most machines would be something
like

    #define INT_MIN (- INT_MAX - 1)

since it's a compile-time constant, but is signed.

~~ Ken Raeburn
~~ raeburn@cygnus.com, raeburn@mit.edu
~~ "Your Jedi mind tricks won't work on me, monkey boy!"
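Here's a minimal test of that definition (a sketch assuming a 32-bit
two's-complement int; the variable name "min" is arbitrary):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* -INT_MAX - 1 never spells out 2147483648, so every constant
           stays in range and keeps type int; no "integer constant so
           large that it is unsigned" warning. */
        int min = -INT_MAX - 1;

        /* Both print the most negative int, -2147483648 on a
           32-bit machine. */
        printf("%d %d\n", INT_MIN, min);
        return 0;
    }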