Bit order in struct is not what I would have expected


I have a framework which uses 16-bit floats, and I wanted to separate a value into its components to then use them for 32-bit floats. In my first approach I used bit shifts and similar operations, and while that worked, it was wildly chaotic to read.
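
For context, the shift-based version was roughly of this shape (a simplified sketch, assuming the usual binary16 layout of sign in bit 15, exponent in bits 14..10, mantissa in bits 9..0; the real framework code was spread out and messier):

#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t h = 0x153F;

    // All inline with magic numbers: it works, but it is hard to read.
    printf("Mantissa: %#010x\n",  h        & 0x03FF);
    printf("Exponent: %#010x\n", (h >> 10) & 0x001F);
    printf("Sign:     %#010x\n", (h >> 15) & 0x0001);
    return 0;
}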

I then wanted to use a struct with custom-sized bit-fields instead, and a union to write to that struct.

The code to reproduce the issue:

#include <cstdint>
#include <cstdio>

union float16_and_int16
{
    struct
    {
        uint16_t    Mantissa : 10;
        uint16_t    Exponent : 5;
        uint16_t    Sign : 1;
    } Components;

    uint16_t bitMask;
};

int main()
{
    uint16_t input = 0x153F;

    float16_and_int16 result;
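    // Write the raw bit pattern, then read it back through the bit-field members.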
    result.bitMask = input;

    printf("Mantissa: %#010x\n", result.Components.Mantissa);
    printf("Exponent: %#010x\n", result.Components.Exponent);
    printf("Sign:     %#010x\n", result.Components.Sign);
    return 0;
}

In the example I would expect the Mantissa to be 0x00000054, the Exponent to be 0x0000001F, and the Sign to be 0x00000001.

Instead I get Mantissa: 0x0000013f, Exponent: 0x00000005, Sign: 0x00000000

Which means that from my bit mask the Sign was taken first (the most significant bit), the next 5 bits went to the Exponent, and then 10 bits to the Mantissa, so the order is the inverse of what I wanted. Why is that happening?

c++
endianness
bit-fields
asked on Stack Overflow Jun 4, 2019 by SinisterMJ • edited Jun 4, 2019 by SinisterMJ

2 Answers


The worst part is that a different compiler could give you the expected order. The standard has never specified the implementation details of bit-fields, and in particular not their order. The rationale, as usual, is that this is an implementation detail and that programmers should not rely on it.

The downside is that bit-fields cannot be used portably in cross-language programs, and that programmers cannot use them to process data with a well-known bit layout (for example network protocol headers), because there is no guarantee of how the implementation will lay them out.

For that reason I have always considered bit-fields an unusable feature, and I only use bit masks on unsigned types instead. But that last part is no more than my own opinion...
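
For illustration, here is a minimal sketch of that mask-and-shift style applied to the 16-bit float from the question (the constant and function names are mine, chosen for the usual binary16 layout):

#include <cstdint>
#include <cstdio>

// The layout is spelled out explicitly, so it does not depend on how the
// compiler chooses to allocate bit-fields.
constexpr int      kExponentShift = 10;
constexpr int      kSignShift     = 15;
constexpr uint16_t kMantissaMask  = 0x03FF;
constexpr uint16_t kExponentMask  = 0x001F;

constexpr uint16_t mantissa(uint16_t h) { return h & kMantissaMask; }
constexpr uint16_t exponent(uint16_t h) { return (h >> kExponentShift) & kExponentMask; }
constexpr uint16_t sign(uint16_t h)     { return h >> kSignShift; }

int main()
{
    const uint16_t input = 0x153F;
    printf("Mantissa: %#010x\n", mantissa(input));
    printf("Exponent: %#010x\n", exponent(input));
    printf("Sign:     %#010x\n", sign(input));
    return 0;
}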

answered on Stack Overflow Jun 4, 2019 by Serge Ballesta • edited Jun 4, 2019 by Serge Ballesta

I would say your input is incorrect, for this compiler anyway. This is what the float16_and_int16 layout looks like:

 sign   exponent  mantissa
 [15]   [14:10]    [9:0]

or

SGN |  E  X  P  O  N  E  N  T|     M  A   N   T   I   S   S   A                |
 15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |

If input = 0x153F then bitMask ==

SGN |  E  X  P  O  N  E  N  T|     M  A   N   T   I   S   S   A                |
 15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 0     0    0    1    0    1    0    1    0    0    1    1    1    1    1    1

so

MANTISSA == 0100111111  (0x13F)
EXPONENT == 00101 (0x5)
SIGN == 0 (0x0)

If you want the mantissa to be 0x54, the exponent 0x1f, and the sign 0x1, you need

SGN |  E  X  P  O  N  E  N  T|     M  A   N   T   I   S   S   A                |
 15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 1     1    1    1    1    1    0    0    0    1    0    1    0    1    0    0

or

input = 0xFC54
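
As a quick cross-check, composing the fields from the bit positions shown above gives the same value (a small sketch of the arithmetic):

#include <cstdint>
#include <cstdio>

int main()
{
    const unsigned sign     = 0x1;   // bit 15
    const unsigned exponent = 0x1F;  // bits 14..10
    const unsigned mantissa = 0x54;  // bits 9..0

    // Assemble the 16-bit pattern from the desired fields.
    const uint16_t input = static_cast<uint16_t>((sign << 15) | (exponent << 10) | mantissa);

    printf("input = %#06x\n", input);  // prints 0xfc54
    return 0;
}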
answered on Stack Overflow Jun 4, 2019 by pm101

User contributions licensed under CC BY-SA 3.0