Problem converting endianness


I'm following this tutorial for using OpenAL in C++:

As you can see in the tutorial, they leave a few methods unimplemented, and I am having trouble implementing file_read_int32_le(char*, FILE*) and file_read_int16_le(char*, FILE*). Apparently each one should load 4 bytes from the file (or 2 in the case of int16, I guess), convert them from little-endian to big-endian and return the result as an unsigned integer. Here's the code:

static unsigned int file_read_int32_le(char* buffer, FILE* file) {
    size_t bytesRead = fread(buffer, 1, 4, file);
    printf("%x\n", (unsigned int)*buffer);
    unsigned int* newBuffer = (unsigned int*)malloc(4);
    *newBuffer = ((*buffer << 24) & 0xFF000000U) | ((*buffer << 8) & 0x00FF0000U) |
                 ((*buffer >> 8) & 0x0000FF00U) | ((*buffer >> 24) & 0x000000FFU);
    printf("%x\n", *newBuffer);
    return (unsigned int)*newBuffer;
}

When debugging (in Xcode), it says that the hexadecimal value of *buffer is 0x72, which is only one byte. When I create newBuffer using malloc(4), I get a 4-byte buffer (*newBuffer is something like 0xC0000003), which then, after the operations, becomes 0x72000000. I assume the result I'm looking for is 0x00000027 (edit: actually 0x00000072), but how would I achieve this? Is it something to do with converting between the char* buffer and the unsigned int* newBuffer?

asked on Stack Overflow Jul 28, 2011 by benwad

3 Answers


There's a whole family of functions, htons/htonl/ntohs/ntohl, whose sole purpose in life is to convert from "host" to "network" byte order.

Each function has a reciprocal that does the opposite.

Now, these functions won't necessarily help you, because they intrinsically convert from your host's specific byte order, so please just use this answer as a starting point to find what you need. Generally, code should never make assumptions about what architecture it's on.

Intel == "Little Endian". Network == "Big Endian".

Hope this starts you out on the right track.

answered on Stack Overflow Jul 28, 2011 by Harry Seward

I've used the following for integral types. On some platforms, it's not safe for non-integral types.

template <typename T> T byte_reverse(T in) {
   T out;
   char* in_c = reinterpret_cast<char *>(&in);
   char* out_c = reinterpret_cast<char *>(&out);
   std::reverse_copy(in_c, in_c + sizeof(T), out_c);
   return out;
}

So, to put that in your file reader (why are you passing the buffer in, when it appears it could be a temporary?):

static unsigned int file_read_int32_le(FILE* file) {
    unsigned int int_buffer;
    size_t bytesRead = fread(&int_buffer, 1, sizeof(int_buffer), file);
    /* An error or a short read (fewer than 4 bytes) should be checked here */
    return byte_reverse(int_buffer);
}
answered on Stack Overflow Jul 28, 2011 by Dave S • edited Jul 28, 2011 by Dave S

Yes, *buffer will read in Xcode's debugger as 0x72, because buffer is a pointer to a char.

If the first four bytes in the memory block pointed to by buffer are (hex) 72 00 00 00, then the return value should be 0x00000072, not 0x00000027. The bytes should get swapped, but not the two "nybbles" that make up each byte.

This code leaks the memory you malloc'd, and you don't need to malloc here anyway.

Your byte-swapping is correct on a PowerPC or 68K Mac, but not on an Intel Mac or ARM-based iOS. On those platforms, you don't have to do any byte-swapping because they're natively little-endian.

Core Foundation provides a way to do this all much more easily:

static uint32_t file_read_int32_le(char* buffer, FILE* file) {
    fread(buffer, 1, 4, file);            // Get four bytes from the file
    uint32_t val = *(uint32_t*)buffer;    // Turn them into a 32-bit integer

    // Swap on a big-endian Mac; do nothing on a little-endian Mac or iOS
    return CFSwapInt32LittleToHost(val);
}
answered on Stack Overflow Jul 28, 2011 by Bob Murphy

User contributions licensed under CC BY-SA 3.0