Re: Undefined Behavior ...
Tim Rentsch wrote:
Exactly - it serves to make an extremely implausible but legal
implementation very marginally more plausible.
Yes - I can imagine reasons for wanting INT_MAX to be a power of 10; I
find it much harder to come up with reasons for wanting INT_MAX to be
2 times a power of 10. Again, it's just a matter of making an
intrinsically implausible implementation a little more plausible.
Which position - mine or theirs? My position is based upon the fact
that the standard explicitly allows for trap representations, and says
nothing to limit how many any given type may have. The opposing
position is based upon the claim that 6.2.6.2p2 defines the only trap
representation involving value bits that a signed integer type is
allowed to have. As I read it, 6.2.6.2p2 serves primarily to explain
the fact that the bit pattern that would otherwise represent negative
zero in 1's complement or sign-magnitude representations is allowed to
be a trap representation. This clears up any ambiguity that might
arise due to the fact that 0 has two distinct representations for such
types. It doesn't imply in any way that negative zero is the only
allowed trap representation. The fact that it also defines a bit
pattern for 2's complement representations that is allowed to be a
trap representation is a weak point in my argument. If my argument is
correct, that clause is redundant; but it doesn't directly contradict
my conclusion.
Tim Rentsch wrote:
James Kuyper <jameskuyper@verizon.net> writes:
I must admit to having trouble with this one. What's the basis for
the position you state? In the example given the presence of a
padding bit seems completely irrelevant (except perhaps to make the
number of bits a multiple of 8 while preserving round limits?)
[...]
The standard explicitly states (6.2.6.2p2) that any given signed integer
type has one bit pattern that might or might not be a trap
representation - it's up to the implementation to decide (and to
document their decision). For types that use a one's complement or
sign-magnitude representation, this is the bit pattern that would
otherwise represent negative 0. If the type uses a twos-complement
representation, this is the bit pattern that would otherwise represent
-2^N, where N is the number of value bits in the type.
Some people read that clause as allowing only that one trap
representation, and requiring that all other bit patterns must be valid.
I don't read it that way. It seems to me that what it says still allows
for the possibility of other trap representations as well. An
implementation that used 1 padding bit, 1 sign bit, and 30 value bits
for 'int' could set INT_MAX to 1000000000, and INT_MIN to -1000000000,
and declare that all bit patterns that would seem to represent values
outside that range are actually trap representations. It's been argued
that this violates the requirement that for any signed type "Each bit
that is a value bit shall have the same value as the same bit in the
object representation of the corresponding unsigned type." But every
value bit does have that value, in every non-trap representation that
has that bit set.
Is there any difference between this example and one using 31 value bits
to represent values in [-2000000000 .. 2000000000]?
... It seems like
all you are saying is that you think some combinations of value
bits are allowed to be trap representations whereas other people
think they aren't (not counting the distinguished ones explicitly
identified in the Standard, of course). What's the argument to
support this position?