I'm confused about the type of an integer constant, as described here: according to the first row of the table, if a constant has no 'u' suffix, why must a decimal constant have a signed type, while an octal or hexadecimal constant may have an unsigned type?
I think that falling back to an unsigned type when the signed type doesn't fit causes problems, for example:
long long l1 = 0xffffffff + 0xffffffff; // 0xffffffff is unsigned int
long long l2 = 4294967295 + 4294967295; // 4294967295 is signed long (or long long)
l1 is 0xfffffffe, while l2 is 0x1fffffffe, and obviously l1 is wrong.
If I had to guess, I'd say that hexadecimal and octal constants represent bit patterns more closely than decimal ones do, and therefore the C standard committee decided that hex and octal constants may be unsigned even without a U suffix.
Think about how many people would write code like this:
uint32_t b = a & 0xFFFFFFF0;
uint32_t b = a & 4294967280; // or -15?
The issue causes problems more because the wrong type is used for the operations than because the constants themselves are the wrong type.
// some_wide_type = some_narrow_type + some_narrow_type --> trouble
long long l1 = 0xffffffff + 0xffffffff;
long long l2 = 4294967295 + 4294967295;
Instead, do the math using the target type:
long long l1 = 0LL + 0xffffffff + 0xffffffff;
long long l2 = 0LL + 4294967295 + 4294967295;
or use 1 type (long long) rather than the 3 (long long, unsigned long, long):
long long l1 = 0xffffffffLL + 0xffffffffLL;
long long l2 = 4294967295LL + 4294967295LL;
User contributions licensed under CC BY-SA 3.0