Suppose I have a function that should be able to extract any integer type (char, int, long, etc.) from any other integer type, because some data-outputting device likes to pack small data into bigger data types (e.g. 4 chars in a uint32_t). I wrote a function that branches on the size of the source type and the size of the destination type; the snippet below is the uint32_t-to-char branch.
Does this snippet of the function make sense? I am losing confidence the more I think about it.
if(data_type_size == SIZE_OF_UINT32){
    uint32_t d = 0;
    if(vector_type_size == SIZE_OF_CHAR){
        for(size_t i = 0; i < length; i++){
            d = data[i];
            if(big_endian){
                // emit the most significant byte first
                v.push_back((d & 0xFF000000) >> 24);
                v.push_back((d & 0x00FF0000) >> 16);
                v.push_back((d & 0x0000FF00) >> 8);
                v.push_back((d & 0x000000FF));
            }
            // little endian: emit the least significant byte first
            else {
                v.push_back((d & 0x000000FF));
                v.push_back((d & 0x0000FF00) >> 8);
                v.push_back((d & 0x00FF0000) >> 16);
                v.push_back((d & 0xFF000000) >> 24);
            }
        }
    }
}
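For what it's worth, I also considered writing that branch as a single loop that derives each shift from the byte index, so the mask/shift pairs don't have to be spelled out for every width. This is just a sketch, assuming the same surrounding variables (data, length, v, big_endian) as above:

    // Sketch of an equivalent loop-based version; behavior should match
    // the hard-coded masks above for the uint32_t-to-char case.
    for(size_t i = 0; i < length; i++){
        uint32_t d = data[i];
        for(size_t b = 0; b < sizeof(uint32_t); b++){
            // big endian emits the most significant byte first
            size_t shift = big_endian ? 8 * (sizeof(uint32_t) - 1 - b) : 8 * b;
            v.push_back((d >> shift) & 0xFF);
        }
    }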
Given a uint32_t array
data[0] = 0x0A0B0C0D;
data[1] = 0x0E0F1011;
the original snippet outputs the following (big-endian branch on the first line, little-endian branch on the second):
0xa,0xb,0xc,0xd,0xe,0xf,0x10,0x11
0xd,0xc,0xb,0xa,0x11,0x10,0xf,0xe
Which is what I expect. I suppose where I started to get paranoid is wondering whether 0x4F actually becomes 0xF4 in little endianness. But I don't think so, because endianness is byte-level, not nibble-level.
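One way I tried to reassure myself was to inspect the object representation of a uint32_t directly. A minimal standalone check (the value 0x0A0B0C4F is just an example, and the printed order assumes a little-endian host):

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main(){
        uint32_t x = 0x0A0B0C4F;
        unsigned char bytes[sizeof(x)];
        std::memcpy(bytes, &x, sizeof(x)); // copy out the raw bytes in memory
        // On a little-endian host this prints: 4f 0c 0b 0a
        // The low byte is still 0x4F, not 0xF4: endianness permutes whole
        // bytes; it never reverses the nibbles inside a byte.
        for(size_t i = 0; i < sizeof(x); i++){
            std::printf("%02x ", bytes[i]);
        }
        std::printf("\n");
        return 0;
    }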