Here is the line of code I'm confused at:
mMaskRowBytes = (mWidth + 0x0000000F) & ~0x0000000F;
~ is the NOT operator, right?
mWidth is 960, or 1111000000 in binary.
So what we get here is
00000000000000000000001111001111 & 11111111111111111111111111110000
(that is, 960 + 0xF = 975 on the left, and ~0xF on the right), which would just result in
00000000000000000000001111000000, i.e. 960 again.
Is the purpose to convert a decimal number into a binary number?
'~' is the bitwise complementation operator.
Applied to mWidth, all the expression does is round upwards to the next multiple of 16. (Since 960 is already a multiple of 16, it remains unchanged.)
There are clearer ways of doing that, although the specifics would be down to your particular language.
The ~ operator takes the complement:
So for example: 0110 would become 1001
What the purpose is I'm not sure - you'd need to show how mMaskRowBytes is being used. But since ~0xF clears the low four bits, it looks as if the code deals in 16-byte chunks and needs each row of mask data to start on a 16-byte boundary. That kind of row-stride padding is common in image code, and 16-byte alignment is a typical requirement for SIMD instructions.

Compilers do something similar to keep the stack aligned so they can move the stack pointer up and down in efficient steps: on 32-bit systems the alignment is typically 4 bytes, and on 64-bit systems 8 or 16 bytes depending on the ABI. On a modern OS this is all taken care of by the compiler and the OS; if you're down at the system level then you might have to do this all for yourself.