I've got a vector of four bytes:
std::vector<int8_t> src = { 0x0, 0x0, 0x0, 0xa };
which in hexadecimal is 0x0000000a. How do I type-pun it into an int32_t using reinterpret_cast, so that I get 10?
This can be done easily using bit shifts, like this:

(int32_t)((src[offset] << 24) | (src[offset + 1] << 16) | (src[offset + 2] << 8) | (src[offset + 3]));

But as far as I understand how reinterpret_cast works, it should be a perfect fit here; I just can't figure out how to express it in code. Something like:

reinterpret_cast<int32_t*>(&0x0000000a);
P.S.: This is not just about int32_t; the bytes could be reinterpreted as any type I wish. That's the point.
This is not possible; the attempt would be undefined behaviour due to a strict aliasing violation. An array of int8_t cannot be accessed as some other type (except for the specific cases listed in the rule, such as uint8_t).
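To illustrate the asymmetry of the rule, here is a minimal sketch (my own example, not part of the original answer): the exception lets you read an int32_t object through an unsigned char (i.e. uint8_t) pointer, but there is no exception in the other direction.

#include <cstdint>

int main() {
    int32_t n = 10;
    // OK: unsigned char may alias any object, so inspecting n's bytes
    // through this pointer is well-defined.
    const unsigned char *bytes = reinterpret_cast<const unsigned char*>(&n);
    unsigned char low = bytes[0]; // one byte of n's object representation

    int8_t buf[4] = { 0x0, 0x0, 0x0, 0xa };
    // NOT OK: an int32_t glvalue may not access int8_t objects.
    // int32_t bad = *reinterpret_cast<int32_t*>(buf); // undefined behaviour

    (void)low;
    return 0;
}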
Using bit shifts or memcpy is the correct way (as is the idea of S.M.'s answer, shown below):
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int8_t> src = { 0x0, 0x0, 0x27, 0x10 };
    // Fold the big-endian bytes into one value: each step shifts the
    // accumulator up by one byte (* 256) and adds the next byte.
    std::cout <<
        std::accumulate(src.begin(), src.end(), // or src.begin() + sizeof(int32_t)
                        0u, [](uint32_t a, uint8_t b) { return a * 256 + b; });
    std::cout << std::endl;
    return 0;
}
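For completeness, a minimal sketch of the memcpy route (my own addition; it assumes a C++20 compiler for std::endian). memcpy copies the object representation without any aliasing violation, but it preserves memory order, so the question's big-endian bytes must be swapped on a little-endian host:

#include <bit> // std::endian (C++20)
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

int main() {
    std::vector<int8_t> src = { 0x0, 0x0, 0x0, 0xa };

    int32_t value = 0;
    // Well-defined: memcpy reads the bytes without an aliasing violation.
    std::memcpy(&value, src.data(), sizeof(value));

    if constexpr (std::endian::native == std::endian::little) {
        // The source bytes are big-endian, so swap them here.
        uint32_t u = static_cast<uint32_t>(value);
        u = (u << 24) | ((u & 0xFF00u) << 8) | ((u >> 8) & 0xFF00u) | (u >> 24);
        value = static_cast<int32_t>(u);
    }

    std::cout << value << std::endl; // prints 10
    return 0;
}

Unlike the reinterpret_cast attempt, compilers recognize this fixed-size memcpy pattern and typically compile it down to a single load, so nothing is lost by staying within defined behaviour.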