int x = 25;
unsigned int g = x & 0x80000000;
How does this code read the most significant bit of the value stored in x? Does the mask 0x80000000, i.e. binary 1000 0000 0000 0000 0000 0000 0000 0000, accomplish that task, or is it something else?
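For context, a minimal sketch of what that mask actually tests, assuming a 32-bit int (the variable values here are illustrative): the AND keeps only bit 31 of the value of x, not anything about its address.

#include <stdio.h>

int main(void)
{
    int x = 25;
    /* 0x80000000 has only bit 31 set, so the AND keeps just the
       most significant bit of the 32-bit value of x. */
    unsigned int g = x & 0x80000000;
    printf("x = %d -> 0x%08X\n", x, g);   /* prints 0x00000000 */

    x = -25;  /* negative in two's complement: bit 31 is set */
    g = x & 0x80000000;
    printf("x = %d -> 0x%08X\n", x, g);   /* prints 0x80000000 */
    return 0;
}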
For char, the most significant bit (bit 7) is typically the sign bit, as per two's complement, so the highest value bit you can meaningfully test is bit 6; this should be:
char x = 25;
unsigned int msb = x & (1 << 6);
Here (1 << 6) means bit 6, counting from 0 (or the 7th bit, counting from 1). It's the second-to-top bit and is equivalent to 0x40.
Since 25 is 0b00011001, you won't get a bit set; you'll need a value >= 64.
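A quick check of that mask, assuming an 8-bit signed char (a minimal sketch; the test values are illustrative):

#include <stdio.h>

int main(void)
{
    char small = 25;   /* 0b00011001: bit 6 is clear */
    char large = 100;  /* 0b01100100: bit 6 is set   */

    /* Mask out everything except bit 6 (0x40). */
    unsigned int msb_small = small & (1 << 6);
    unsigned int msb_large = large & (1 << 6);

    printf("25  & 0x40 = 0x%02X\n", msb_small);  /* prints 0x00 */
    printf("100 & 0x40 = 0x%02X\n", msb_large);  /* prints 0x40 */
    return 0;
}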