TL;DR: Is it right to assume, given enum NAME {...};, that enum NAME n is the same as int n during execution? Can n be operated on as if it were a signed int, even though it is declared as enum NAME? The reason: I really want to use enum types for return flags, as a type 'closed' with respect to bit operations.
For example: let typedef enum FLAGS { F1 = 0x00000001, F2 = 0x00000002, F3 = 0x00000004 } FLAGS;. Then FLAGS f = F1 | F2; assigns 3 to f, throwing no related errors or warnings. This and numerous other compiler-permitted usage scenarios, such as f++, make me think I could legitimately treat f as if it were a signed int. Compiler used: MSVC 2019, 16.9.1, with the setting "C17 (2018) Standard (/std:c17)".
I searched the standard (the draft here) and looked at other related questions, but found no mention of what I suspected (and wished) to be a "silent promotion" of enum NAME x to signed int x, even though the identifiers have that type. This leads me to believe that the way an enum behaves when assigned a value that isn't a member is implementation-dependent. I'm asking, in part, to confirm or deny this claim.
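For reference, here is a complete program matching the snippet above (all names taken from the question); a conforming C17 compiler should accept it without diagnostics:

#include <stdio.h>

typedef enum FLAGS {
    F1 = 0x00000001,
    F2 = 0x00000002,
    F3 = 0x00000004
} FLAGS;

int main(void)
{
    FLAGS f = F1 | F2; /* f holds 3; no diagnostic is required */
    f++;               /* also accepted; f now holds 4, i.e. the value of F3 */
    printf("f = %d\n", (int)f);
    return 0;
}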
C 2018 6.7.2.2 4 says:
Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined, but shall be capable of representing the values of all the members of the enumeration…
So the answer to "Can I treat an enum variable as an int in C17?" is no, as an object with an enumerated type might effectively be a char or another integer type different from int.
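You can inspect the choice your implementation made, for instance with the following sketch. The output is implementation-defined; GCC's -fshort-enums option, for example, can shrink the type below int:

#include <stdio.h>

typedef enum FLAGS { F1 = 0x1, F2 = 0x2, F3 = 0x4 } FLAGS;

int main(void)
{
    /* The compatible integer type is implementation-defined;
       many ABIs pick int here, but nothing in C17 requires it. */
    printf("sizeof(FLAGS) = %zu, sizeof(int) = %zu\n",
           sizeof(FLAGS), sizeof(int));
    return 0;
}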
However, it is effectively an integer type, so FLAGS f = F1 | F2; will work: the FLAGS type must be capable of representing its values F1 and F2, so whatever type is used for FLAGS must contain all the bits of F1 and of F2, and therefore all the bits of F1 | F2.
Technically, you could construct a trap representation by manipulating bits, so it is not guaranteed that the type is closed under bit operations. For example, if a C implementation used two's complement for 32-bit int but reserved the bit pattern 1000…0000 as a trap representation, then INT_MIN & -2 would be a trap representation. (INT_MIN would have the bit pattern 1000…0001, for −(2³¹ − 1), and -2 would have the pattern 1111…1110.) This does not occur in C implementations without trap representations in their integer types.
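On a typical two's-complement implementation with no trap representations (an assumption, but true of mainstream compilers), INT_MIN itself has the pattern 1000…0000, so the same expression is harmless:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Assuming two's complement and no trap representations:
       INT_MIN = 1000...0000, -2 = 1111...1110;
       the AND clears bit 0, which is already clear. */
    printf("%d\n", INT_MIN & -2); /* prints INT_MIN */
    return 0;
}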
We might question whether the fact that two types (an enumeration and its implementation-defined integer type) are compatible means we can use one as the other. Two types are compatible if they are the same (6.2.7 1), and the only things that can make types compatible but not the same involve qualifiers (like const) that are not an issue here, or involve other properties (such as array dimensions) that are not relevant to simple integer types.
This is in section 6.4.4.3 of the PDF you linked:
An identifier declared as an enumeration constant has type int.
Your thought of a promotion of enum NAME x to signed int x is not really accurate: it is the enumeration constants (such as F1) that have type int. A variable x declared with enum NAME has the enumerated type itself, whose underlying integer type is implementation-defined.
Additionally, integer promotion takes place in integer operations.
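Both points can be observed with C11's _Generic. This is a sketch; the line for f reports whichever compatible type the implementation chose, and the last line typically reports int after the usual arithmetic conversions:

#include <stdio.h>

typedef enum FLAGS { F1 = 0x1, F2 = 0x2, F3 = 0x4 } FLAGS;

#define TYPE_NAME(x) _Generic((x),      \
    int:          "int",                \
    unsigned int: "unsigned int",       \
    char:         "char",               \
    default:      "something else")

int main(void)
{
    FLAGS f = F1;
    printf("F1     : %s\n", TYPE_NAME(F1));     /* always "int": enumeration
                                                   constants have type int */
    printf("f      : %s\n", TYPE_NAME(f));      /* implementation-defined */
    printf("f | F2 : %s\n", TYPE_NAME(f | F2)); /* typically "int" after the
                                                   integer promotions */
    return 0;
}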
EDIT
Some compilers are quite serious about the difference between enum and int, especially if they have an option to reduce the bit width to the smallest possible. For example, the one I'm using in a project at work automatically inserts checks on each use of an enum value against the defined values. Additionally, IIRC, it rejects all implicit conversions; we need to cast explicitly, similar to:
FLAGS f = (FLAGS)((int)F1 | (int)F2);
But this is an extension of that special beast, enabled by specific safety options...
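Under such a compiler, one pattern (a hypothetical sketch; the helper names are mine, not from any particular toolchain) is to confine the required casts to small inline helpers so call sites stay readable:

typedef enum FLAGS { F1 = 0x1, F2 = 0x2, F3 = 0x4 } FLAGS;

/* Hypothetical helpers: keep the explicit casts in one place. */
static inline FLAGS flags_or(FLAGS a, FLAGS b)
{
    return (FLAGS)((int)a | (int)b);
}

static inline int flags_has(FLAGS f, FLAGS bit)
{
    return ((int)f & (int)bit) != 0;
}

/* Usage: FLAGS f = flags_or(F1, F2); if (flags_has(f, F2)) { ... } */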