I have a function, and I am having a hard time understanding how its math is being performed:
unsigned long Fat_AuthAnswer(char b1, char b2, char b3, char b4)
{
    unsigned short StartCRC = b1 + b2*256;
    unsigned long ret = crcbuf(StartCRC, b3, &AuthBlock[b4]);
    ret = (ret & 0x0000ffff) | (crcbuf(StartCRC, b4, &AuthBlock[b3]) << 16);
    return ret;
}
With b1 = 0xAF and b2 = 0x50, executing StartCRC = b1 + b2*256; yields StartCRC = 0x4FAF. I would have expected StartCRC to be 0x50AF.
My question is: why does it seem that b2 is reduced by one? Any help would be appreciated. Thanks.
It seems char in your environment is signed, so the value 0xAF is sign-extended to a value like 0xFFFFFFAF. This effectively adds 0xFF to the byte that b2 occupies, and therefore it looks like b2 is reduced by one.
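Concretely, with the values from your question: b1 is promoted to int as 0xFFFFFFAF (-81), b2*256 is 0x00005000, and adding them at the bit level gives 0xFFFFFFAF + 0x00005000 = 0x00004FAF (the carry out of bit 31 is discarded). Truncating that to unsigned short produces the 0x4FAF you observed instead of 0x50AF; note how 0x50 + 0xFF carries to leave 0x4F in the high byte.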
You should cast b1 to unsigned char to avoid this:
unsigned short StartCRC = static_cast<unsigned char>(b1) + b2*256;
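For reference, here is a minimal sketch showing both behaviors, assuming a platform where plain char is signed (as it is on most x86 compilers):

#include <cstdio>

int main()
{
    char b1 = static_cast<char>(0xAF); // holds -81 when char is signed
    char b2 = 0x50;

    // Without the cast, b1 is promoted to int as 0xFFFFFFAF (-81),
    // so the 16-bit result comes out as 0x4FAF.
    unsigned short without_cast = b1 + b2 * 256;

    // With the cast, b1 contributes 0x00AF, giving the expected 0x50AF.
    unsigned short with_cast = static_cast<unsigned char>(b1) + b2 * 256;

    std::printf("without cast: 0x%04X\n", without_cast); // prints 0x4FAF
    std::printf("with cast:    0x%04X\n", with_cast);    // prints 0x50AF
}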