Here's the code segment. It's performing x/2^n, rounding towards 0. The first print statement computes the correct value (-7 in this case), but the second statement, which is just the first statement with bias replaced by ((x>>31) & ((1<<n)+0xffffffff)) (which is exactly what bias computes anyway), produces 9. What's going on here?
#include <stdio.h>
int main(void) {
    int x = 0x80000004;
    int n = 0x1c;
    int bias = ((x>>31) & ((1<<n)+0xffffffff));
    printf("%d\n", (x + bias) >> n);
    printf("%d\n", (x + ((x>>31) & ((1<<n)+0xffffffff))) >> n);
    return 0;
}
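For reference, plain integer division with C's / operator truncates toward zero, so the quotient I expect is -7. A minimal check, assuming 0x80000004 ends up as -2147483644 in a 32-bit two's-complement int:
#include <stdio.h>
int main(void) {
    /* Assumption: 0x80000004 stored in a 32-bit two's-complement int is -2147483644. */
    long long x = -2147483644LL;
    long long d = 1LL << 28;      /* 2^n with n = 0x1c = 28 */
    printf("%lld\n", x / d);      /* / truncates toward zero: prints -7 */
    return 0;
}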
The expressions x + bias and x + ((x>>31) & ((1<<n)+0xffffffff)) have different types: the first is an int, the second an unsigned int. The constant 0xffffffff does not fit in an int, so it has type unsigned int; the usual arithmetic conversions then make the whole inline expression unsigned, whereas assigning the same value to the int variable bias converts it back to a signed int. The operator >> preserves the sign bit for a negative int (strictly, that result is implementation-defined, but compilers commonly perform an arithmetic shift that does this), while for an unsigned operand it always shifts in zeros. To see clearly what's going on, I have expanded the code a little:
#include <stdio.h>
int main(void) {
    int x = 0x80000004;
    int n = 0x1c;
    int bias = ((x>>31) & ((1<<n)+0xffffffff));
    printf("%d\n", (x + bias) >> n);
    printf("%d\n", (x + ((x>>31) & ((1<<n)+0xffffffff))) >> n);
    printf("%d\n", (x + (int)((x>>31) & ((1<<n)+0xffffffff))) >> n);  /* cast forces the sum back to int */
    printf("(int) %d\n", x + bias);                                   /* the signed sum */
    printf("(unsigned) %u\n", x + ((x>>31) & ((1<<n)+0xffffffff)));   /* the unsigned sum */
    return 0;
}
The output is:
-7
9
-7
(int) -1879048189
(unsigned) 2415919107
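The root cause, then, is that 0xffffffff drags the inline expression into unsigned arithmetic, so the final >> is a logical rather than an arithmetic shift. As a sketch (assuming a 32-bit two's-complement int and an arithmetic right shift for negative values), you can confirm the types with C11's _Generic and keep everything signed by writing the mask as (1 << n) - 1 instead:
#include <stdio.h>

/* Map an expression to the name of its type (C11 _Generic). */
#define TYPE_NAME(e) _Generic((e), int: "int", unsigned int: "unsigned int", default: "other")

int main(void) {
    int x = 0x80000004;   /* implementation-defined conversion; -2147483644 on the assumed platform */
    int n = 0x1c;

    /* 0xffffffff does not fit in a 32-bit int, so it has type unsigned int,
       which makes the whole bias subexpression unsigned. */
    printf("%s\n", TYPE_NAME((1 << n) + 0xffffffff));                 /* unsigned int */
    printf("%s\n", TYPE_NAME((x >> 31) & ((1 << n) + 0xffffffff)));   /* unsigned int */

    /* Writing the mask as (1 << n) - 1 keeps every operand signed... */
    printf("%s\n", TYPE_NAME((x >> 31) & ((1 << n) - 1)));            /* int */

    /* ...so the inline expression now behaves like the bias variable: prints -7. */
    printf("%d\n", (x + ((x >> 31) & ((1 << n) - 1))) >> n);
    return 0;
}
On such an implementation this prints unsigned int, unsigned int, int and -7, matching the behaviour of the bias variable.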