I'm working on an assignment in which I have to generate random numbers of varying sizes, store each individual byte in an array, and then reconstruct the number by concatenating the bytes.

For example, if our number is the 16-bit binary `1100001111110000`, we would have one function which puts this number into an array. Our array would contain 2 entries: `11000011` and `11110000`. Then we have another function that knows the size and combines the two entries to recreate the original number.

We then have to calculate the sum of all of the 16-bit integers in our array. Here is what I have for loading the data and summing it; `mem[]` is a block of `size` bytes allocated with `malloc`:

```
void loadhalfwdata(char mem[], int size) {
    int i, result;
    for (i = 0; i < (size >> 1); i++) {
        result = (rand() & 0x7fff);
        *mem = (char)(result >> 8);
        *(mem + 1) = (char)(result & 0xff);
        mem += 2;
    }
}

int sumhalfwdata(char mem[], int size) {
    int i, sum, result;
    sum = 0;
    for (i = 0; i < (size >> 1); i++) {
        result = *mem << 8;
        result |= (*(mem + 1) & 0xff);
        sum += result;
        mem += 2;
    }
    return sum;
}
```

This works, and everything is great! But when I try to extend this to a 32-bit integer, things don't work so nicely.

```
void loadworddata(char mem[], int size) {
    int i, result;
    for (i = 0; i < (size >> 2); i++) {
        result = (rand() & 0x7fffffff);
        *mem = (char)(result >> 24);
        *(mem + 1) = (char)(result >> 16 & 0xff);
        *(mem + 2) = (char)(result >> 8 & 0xff);
        *(mem + 3) = (char)(result & 0xff);
        mem += 4;
    }
}

int sumworddata(char mem[], int size) {
    int i, sum, result;
    sum = 0;
    for (i = 0; i < (size >> 2); i++) {
        result = (*mem << 24) | ( *(mem + 1) << 16 ) | ( *(mem + 2) << 8 ) | ( *(mem + 3) );
        sum += result;
        mem += 4;
    }
    return sum;
}
```

I found a function online to help me convert these integers to binary, and when I load the number this is the output:

```
LOADING WORD:
Word: 1102520059
As Binary: 01000001101101110001111011111011
First 8 bits: 01000001
Second 8 bits: 10110111
Third 8 bits: 00011110
Fourth 8 bits: 11111011
```

However when I do the same thing in the sum function:

```
SUMMING WORD:
Word: -14
As Binary: 11111111111111111111111111110010
First 8 bits: 00101110
Second 8 bits: 10110001
Third 8 bits: 01000001
Fourth 8 bits: 11110010
```

I'm assuming it has something to do with the following statement:
`result = (*mem << 24) | ( *(mem + 1) << 16 ) | ( *(mem + 2) << 8 ) | ( *(mem + 3) );`

I just cannot for the life of me figure out what it is! Thanks, all!

asked on Stack Overflow Sep 23, 2020 by JohnnyLeek

I would say your guess about the line is correct.

If you look closely at `(*mem << 24)`, you'll notice that `*mem` is a **char**, which is signed on most platforms. Before the shift it is promoted to `int`, and that promotion sign-extends the value, so any byte with its high bit set fills the upper bits of `result` with 1s when ORed in. That's a different type than in your first conversion, where you masked with `& 0xff` before combining.

I think this question might shed some light on the situation.

answered on Stack Overflow Sep 23, 2020 by hiddenAlpha

This may not be the best approach, but it's what ended up working for me in the context of what I needed for my assignment, and for simplicity's sake:

I had to grab the lower 8 bits (via masking) before shifting them, as follows:

```
int sumworddata(char mem[], int size) {
    int i, sum, result;
    sum = 0;
    for (i = 0; i < (size >> 2); i++) {
        // COMBINE EACH SET OF 4 BYTES
        result = (*mem & 0xff) << 24;
        result |= (*(mem + 1) & 0xff) << 16;
        result |= (*(mem + 2) & 0xff) << 8;
        result |= (*(mem + 3) & 0xff);
        sum += result;
        mem += 4;
    }
    return sum;
}
```

I then extended this further to work with a double-word (long long) data type:

```
long long sumdoublewdata(char mem[], int size) {
    int i;
    long long sum = 0;
    long long result;
    for (i = 0; i < (size >> 3); i++) {
        result = (long long)(*mem & 0xff) << 56;
        result |= (long long)(*(mem + 1) & 0xff) << 48;
        result |= (long long)(*(mem + 2) & 0xff) << 40;
        result |= (long long)(*(mem + 3) & 0xff) << 32;
        result |= (long long)(*(mem + 4) & 0xff) << 24;
        result |= (long long)(*(mem + 5) & 0xff) << 16;
        result |= (long long)(*(mem + 6) & 0xff) << 8;
        result |= (long long)(*(mem + 7) & 0xff);
        sum += result;
        mem += 8;
    }
    return sum;
}
```

I appreciate all the help from everyone here and in the comments!

answered on Stack Overflow Sep 24, 2020 by JohnnyLeek

I would do something like this to avoid operating on signed integers (note that it stores the bytes in little-endian order, least significant byte first):

```
#include <stdio.h>
#include <limits.h>

void code(unsigned char *buff, int x)
{
    union
    {
        unsigned int u;
        int i;
    } ui = {.i = x};
    for (size_t index = 0; index < sizeof(x); index++)
    {
        *buff++ = ui.u;       /* low byte is stored first */
        ui.u >>= CHAR_BIT;
    }
}

int decode(unsigned char *buff)
{
    union
    {
        unsigned int u;
        int i;
    } ui = {.u = 0};
    for (size_t index = 0; index < sizeof(ui); index++)
    {
        ui.u |= (unsigned int)*buff++ << index * CHAR_BIT;
    }
    return ui.i;
}

int main(void)
{
    int x = -45678, y;
    unsigned char buff[sizeof(x)];

    code(buff, x);
    y = decode(buff);
    printf("%d\n", y);
}
```

answered on Stack Overflow Sep 23, 2020 by P__J__

User contributions licensed under CC BY-SA 3.0