I'm learning about the binary representation of integers and tried to write a function that returns an int multiplied by 2 using saturation. The idea is that if the value overflows positively the function returns INT_MAX, and conversely, if it overflows negatively it returns INT_MIN. In all other cases the binary value is left shifted by 1. What I'm wondering is why I have to cast the value 0xC0000000 to an int in order to get my function to work correctly when I pass the argument x = 1.
Here is my function:
int timestwo(int x){
    if(x >= 0x40000000)             // INT_MAX/2 + 1
        return 0x7fffffff;          // INT_MAX
    else if(x < (int) 0xC0000000)   // INT_MIN/2
        return 0x80000000;          // INT_MIN
    else
        return x << 1;
    return 0;
}
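A minimal test harness that exercises it (a sketch, assuming 32-bit two's complement int and that the function above is in the same file; expected outputs are in the comments):

#include <limits.h>
#include <stdio.h>

int timestwo(int x);   /* the function above */

int main(void)
{
    printf("%d\n", timestwo(1));               /* expected: 2 */
    printf("%d\n", timestwo(-5));              /* expected: -10 */
    printf("%d\n", timestwo(0x40000000));      /* expected: INT_MAX (2147483647) */
    printf("%d\n", timestwo(INT_MIN/2 - 1));   /* expected: INT_MIN (-2147483648) */
    return 0;
}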
Hexadecimal (and octal) constants in C take the smallest type of rank int or higher, signed or unsigned, that can represent their value. This differs from decimal constants, which stay within the signed types if they have no u/U suffix, and within the unsigned types otherwise (6.4.4.1p5).
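You can see this with C11's _Generic (a quick sketch; the result for the decimal constant assumes a platform where int is 32 bits and long is 64 bits):

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),      \
    int: "int",                         \
    unsigned int: "unsigned int",       \
    long: "long",                       \
    unsigned long: "unsigned long",     \
    default: "something else")

int main(void)
{
    /* hex constant: takes unsigned int because the value fits there */
    printf("0xC0000000 -> %s\n", TYPE_NAME(0xC0000000));   /* unsigned int */
    /* same value in decimal: skips unsigned int, goes to long */
    printf("3221225472 -> %s\n", TYPE_NAME(3221225472));   /* long, on the assumed platform */
    return 0;
}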
On a system with 32-bit int this makes 0xC0000000 an unsigned int, and pairing an unsigned operand with a signed operand of the same rank through an operator converts the signed one to unsigned (6.3.1.8). So without the (int) cast you get an implicit (unsigned int)x < (unsigned int)0xC0000000, which for x = 1 is true, so the function takes the INT_MIN branch.
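A short sketch showing both forms of the comparison side by side (assuming 32-bit int and a two's complement result for the cast):

#include <stdio.h>

int main(void)
{
    int x = 1;

    /* both operands converted to unsigned int: 1u < 3221225472u */
    printf("x < 0xC0000000      -> %d\n", x < 0xC0000000);        /* prints 1 */

    /* signed comparison: 1 < -1073741824 (typical two's complement value) */
    printf("x < (int)0xC0000000 -> %d\n", x < (int)0xC0000000);   /* prints 0 */

    return 0;
}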
The value of the constant 0xC0000000 will not fit in an int (assuming 32 bits), but it does fit in an unsigned int, so the type of this constant is unsigned int. In the comparison, x is converted to unsigned int as well, and for x = 1 the condition 1 < 0xC0000000 is true, so without the cast the function wrongly returns INT_MIN instead of 2.
The result of the cast to int is actually implementation-defined, although on a two's complement system it will typically be what you would expect, namely INT_MIN/2 (-1073741824).
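For instance, on a typical two's complement implementation (a sketch; the exact output is not guaranteed by the standard):

#include <stdio.h>

int main(void)
{
    /* out-of-range unsigned-to-signed conversion: implementation-defined result */
    printf("(int)0xC0000000 = %d\n", (int)0xC0000000);   /* typically -1073741824 */
    return 0;
}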