Why does *ptr print 4 bytes instead of 1 byte when ptr is a char*?


I wrote a program to find the endianness of a system. It doesn't work: the value printed through the pointer is 0xffffffef instead of 0xef. Why is it 0xffffffef? I declared ptr as a char*, which should refer to only 1 byte. I can fix the problem with *ptr & 0xff, but I don't understand why the value comes out as 4 bytes instead of 1 byte.

#include <stdio.h>
#include <stdint.h>

int main() {

    uint32_t value = 0xdeadbeef;
    char *ptr;
    ptr = (char*)&value;

    if(*ptr == 0xef){
        printf("it is little endianness");
    }

    printf(" 0x%x *ptr\n",);

    return 0;
}
asked on Stack Overflow Sep 13, 2019 by user968000

1 Answer


printf(" 0x%x *ptr\n",);

should be

printf(" 0x%x\n", *ptr);

but the reason it prints 0xffffffef instead of 0xef is that %x expects an unsigned int, so the char value is first promoted to the size of an int, and on platforms where char is signed that promotion sign-extends the value. 0xef is binary 11101111; notice the high bit is set. When the value is sign-extended, that high bit gets repeated in the added bits:

11111111 11111111 11111111 11101111

aka 0xffffffef.
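To see the promotion in isolation, here is a minimal sketch (assuming a platform where plain char is signed, as on typical x86 systems; the variable names are just for illustration):

#include <stdio.h>

int main(void) {
    char c = 0xef;              /* implementation-defined conversion; typically stores -17 when char is signed */
    int promoted = c;           /* default argument promotion does the same thing when c is passed to printf   */
    printf("0x%x\n", promoted); /* prints 0xffffffef: the sign bit was copied into the upper three bytes       */
    return 0;
}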

To avoid the sign extension, cast the char value to unsigned char first so it gets zero-extended instead:

00000000 00000000 00000000 11101111

aka 0x000000ef, or just 0xef when leading zeros are omitted:

printf(" 0x%x\n", (unsigned char) *ptr);

Alternatively, in C99 and later (or C++11), you can use %hhx instead, which tells printf to treat the value as an unsigned char, so you can avoid the explicit cast:

printf(" 0x%hhx\n", *ptr);

answered on Stack Overflow Sep 13, 2019 by Remy Lebeau • edited Sep 14, 2019 by Remy Lebeau

User contributions licensed under CC BY-SA 3.0