I have a snippet of C code:

```c
unsigned int x = 0xDEADBEEF;
unsigned short y = 0xFFFF;
signed int z = -1;
if (x > (signed short) y)
    printf("Hello");
if (x > z)
    printf("World");
```

I wanted to know if the comparison in the first statement evaluates to

```
DEADBEEF > FFFFFFFF
```

Am I right to assume that `y`, which was an `unsigned short`, is first explicitly cast to a `signed short`? The bit representation remains the same. Then, for the sake of comparison, `y` is interpreted as a sign-extended integer, which is why it becomes `FFFFFFFF` from `FFFF`.

Also, can the underlying bit representation change during explicit casts? How are small types extended to be compared to big types? A `short` has only 2 allocated bytes while an `int` has 4. I am confused!

C 2018 6.5.4 5 tells us the cast operator in `x > (signed short) y` performs a conversion:

> Preceding an expression by a parenthesized type name converts the value of the expression to the unqualified version of the named type. This construction is called a *cast*…

6.3.1.3 tells us about conversions. The result depends on whether the value in `y`, `0xFFFF` (which is 65535), can be represented in `signed short`. The C standard requires `signed short` to represent up to 32767, but it could be more. If so, then paragraph 1 tells us the result has the same value. Otherwise, paragraph 3 says:

> Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

Thus, if `signed short` is 16 bits, then `(signed short) y` has an implementation-defined value, or a signal is raised.

In many C implementations, the result will be −1. This `signed short` value is automatically promoted to `int` (due to the *integer promotions* in 6.3.1.1 2), and `x > (signed short) y` is effectively `x > -1`. At that point, the specification of the `>` operator in 6.5.8 3 tells us (by reference to the *usual arithmetic conversions* in 6.3.1.8) that the `int` is converted to `unsigned int` to match `x`. The conversion is performed according to 6.3.1.3 2:

> Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

This causes −1 to be converted to `UINT_MAX`, and the expression is effectively `0xDEADBEEF > UINT_MAX`, which is false because `UINT_MAX` is at least `0xFFFFFFFF`.

> Am I right to assume that y, which was an unsigned short, is first explicitly cast to a signed short. The bit representation remains the same.

No, the bit representation is not required to remain the same.

> Also, can the underlying bit representation change during explicit casts?

Yes. Casts in C are largely defined by how they affect the **value**. They are not defined to be a reinterpretation of the bits of the source value or, generally, to preserve the bits of the source value.

> How are small types extended to be compared to big types?

There are three cases for converting a narrower integer to a wider integer:

- If both types are signed or both are unsigned, the result is the same value.
- If an unsigned type is converted to a wider signed type, the result is the same value.
- If a signed type is converted to an unsigned type, the result is the same if it is not negative. If it is negative, the rule from 6.3.1.3 2 quoted above applies.

When the result has the same value:

- For positive integers, the resulting value is represented with additional zero bits.
- For negative integers, the resulting value is represented with additional one bits if two’s complement is used. However, the C standard permits one’s complement and sign-and-magnitude representations, so esoteric or ancient C implementations using those would produce whatever bit patterns are needed to represent the required value.

answered on Stack Overflow Jun 27, 2019 by Eric Postpischil • edited Jun 27, 2019 by Eric Postpischil

The result of the cast is implementation defined.

6.3.1.3 [Signed and unsigned integers]

> 1 When a value with integer type is converted to another integer type other than _Bool [....]
>
> 3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

answered on Stack Overflow Jun 27, 2019 by n. 'pronouns' m.

User contributions licensed under CC BY-SA 3.0