I have a snippet of C code
unsigned int x = 0xDEADBEEF;
unsigned short y = 0xFFFF;
signed int z = -1;

if (x > (signed short) y) printf("Hello");
if (x > z) printf("World");
I want to know whether the comparison in the first if statement evaluates to
0xDEADBEEF > 0xFFFFFFFF
Am I right to assume that y, which was an unsigned short, is first explicitly cast to a signed short, with the bit representation remaining the same? Then, for the sake of the comparison, y is interpreted as a sign-extended integer, which is why it becomes 0xFFFFFFFF.
Also, can the underlying bit representation change during explicit casts? How are small types extended to be compared to big types? A short has only 2 allocated bytes while an int has 4. I am confused!
C 2018 6.5.4 5 tells us the cast operator in
x > (signed short) y performs a conversion:
Preceding an expression by a parenthesized type name converts the value of the expression to the unqualified version of the named type. This construction is called a cast…
6.3.1.3 tells us about conversions. The result depends on whether the value of y,
0xFFFF (which is 65535), can be represented in
signed short. The C standard only requires
signed short to represent values up to 32767, but an implementation may support more. If 65535 is representable, then paragraph 1 tells us the result has the same value. Otherwise, paragraph 3 says:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
If signed short is 16 bits, then,
(signed short) y has an implementation-defined value or a signal is raised.
In many C implementations, the result will be −1. This
signed short value is automatically promoted to
int (due to the usual integer promotions in 6.3.1.1 2), and
x > (signed short) y is effectively
x > -1. At that point, the specification of the
> operator in 6.5.8 3 tells us (by reference to the usual arithmetic conversions in 6.3.1.8) the
int is converted to
unsigned int to match
x. The conversion is performed according to 6.3.1.3 2:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
This causes −1 to be converted to
UINT_MAX, and the expression is effectively
0xDEADBEEF > UINT_MAX, which is false because
UINT_MAX is at least 0xDEADBEEF (otherwise x could not hold that value), so "Hello" is not printed.
Am I right to assume that y, which was an unsigned short, is first explicitly cast to a signed short? The bit representation remains the same.
No, the bit representation is not required to remain the same.
Also, can the underlying bit representation change during explicit casts?
Yes. Casts in C are largely defined by how they affect the value. They are not defined to be a reinterpretation of the bits of the source value or, generally, to preserve the bits of the source value.
How are small types extended to be compared to big types?
There are three cases when converting an integer value to another integer type (C 2018 6.3.1.3):

1. The value can be represented in the new type: the result has the same value.
2. The new type is unsigned and the value cannot be represented in it: the value is reduced modulo one more than the maximum value representable in the new type.
3. The new type is signed and the value cannot be represented in it: the result of the cast is implementation-defined, or an implementation-defined signal is raised.
6.3.1.3 [Signed and unsigned integers]
1 When a value with integer type is converted to another integer type other than _Bool [....]
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.