I have the number -16777216 in an Int32, and its hex representation is FF000000. On shifting this number right by 24 I get 0xFFFFFFFF, but on masking it with 0xFF000000 and then shifting right by 24 I get 0x000000FF.
Int32 a = -16777216;                        // 0xFF000000 in hex
Int32 b = a >> 24;                          // 0xFFFFFFFF
Int32 c = (Int32)((a & 0xFF000000) >> 24);  // 0x000000FF; the cast is needed because the masked expression is not an Int32
Why are b and c different numbers?
When you shift a negative number to the right, 1 bits are shifted in at the top, not 0 bits. So when you shift the value 0xFF000000 to the right you get 0xFFFFFFFF (which is -1, by the way). This arithmetic shift ensures that a negative number stays negative and does not suddenly become positive because of the shift operation.
However, this only applies to the Int32 values you have written there. With the expression (a & 0xFF000000) you get a result of type Int64 (long), not int (or Int32). So instead of 0xFF000000 you actually have 0x00000000FF000000, a positive number. If you shift it to the right you get 0x00000000000000FF, which is the positive number 255.
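You can verify the promotion directly (a small sketch; the variable name masked is illustrative):

int a = -16777216;
var masked = a & 0xFF000000;          // 0xFF000000 is a UInt32 literal, so both operands are promoted and masked is long
Console.WriteLine(masked.GetType());  // System.Int64
Console.WriteLine(masked >> 24);      // 255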
The value 0xFF000000 is a UInt32 literal, because it does not fit into the positive range of Int32. Combining it with an Int32 value using the & operator results in an Int64 value, since Int64 is the smallest type that can represent all values of both operands.
int a = 4;
uint b = 15;
object c = a & b;  // int & uint: both operands are promoted to long
Console.WriteLine($"{a} - {a.GetType()}");
Console.WriteLine($"{b} - {b.GetType()}");
Console.WriteLine($"{c} - {c.GetType()}");
This results in the following output:
4 - System.Int32
15 - System.UInt32
4 - System.Int64
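If the goal is to extract the top byte as 255, one option (a sketch, not the only way) is to mask after the shift, or to shift an unsigned view of the value:

int a = -16777216;
int c1 = (a >> 24) & 0xFF;      // shift first, then mask off the sign-extended bits: 255
int c2 = (int)((uint)a >> 24);  // unsigned shift pads with zeros: 255
Console.WriteLine($"{c1} {c2}");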
No, this is actually not a computer question - the number systems themselves do not allow this. However, the same number can have many possible string representations. In your case you seem to be mixing up 0xFF000000 and 0x000000FF? Or are you just wondering why the right shift pads with 1s? It is hard to tell.
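To illustrate that one 32-bit value has several string representations (a small example for clarity):

int a = -16777216;
Console.WriteLine(a);                      // -16777216 (signed decimal)
Console.WriteLine(a.ToString("X8"));       // FF000000 (hexadecimal)
Console.WriteLine(Convert.ToString(a, 2)); // 11111111000000000000000000000000 (two's-complement binary)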