Binary addition erasing a previously added binary-encoded number

-1

I have C firmware for a device that sends a set of 6 flags, encoded in binary, to a desktop application. At a given moment, the firmware creates an unsigned int variable and adds six numbers to it, each ranging from 0 to 30 and encoded in 5 bits. The code used is similar to the following:

KVAr_estagios_1_to_6 = 1;
KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (2 * (0x00000020));  /* 0x20      == 1 << 5  */
KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (3 * (0x00000400));  /* 0x400     == 1 << 10 */
KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (4 * (0x00008000));  /* 0x8000    == 1 << 15 */
KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (5 * (0x00100000));  /* 0x100000  == 1 << 20 */
KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (6 * (0x02000000));  /* 0x2000000 == 1 << 25 */

While testing, I found that the first of the six numbers always comes out wrong: it was always zero, independently of the value assigned to it (1 in my code example above). Moreover, this only happened after the sixth, last line was added to the code; if only 5 additions were made, no problem appeared.

So in the code example above, I'd expect to see the decimal number 206703681, equivalent to the binary 001100010100100000110001000001. Yet it instead shows 206703680, or 1100010100100000110001000000. And if I don't insert the last line (KVAr_estagios_1_to_6 = KVAr_estagios_1_to_6 + (6 * (0x02000000));), the problem goes away and the first number is set properly, but the message then lacks the last flag I want to send.

I tried many different ways of doing the binary addition, without success. My impression is that adding the extra information somehow "overflows" the variable, something that shouldn't happen since I'm working with an unsigned int, that is, 32 bits are available.

Any help is appreciated. Feel free to ask clarifying questions; it was hard to describe what is happening.

c
binary
integer-overflow
asked on Stack Overflow Sep 3, 2018 by Momergil

2 Answers

0

Cannot reproduce (see code below).

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t result = 0;
    for (int i = 0; i < 6; i++) {
        uint32_t factor = 1u << (i * 5); // shift 1 left by "i times 5 bits"
        result += (i + 1) * factor;
        printf("factor for position %d: %08X\n", i + 1, factor);
    }

    printf("result: %u\n", result);

    for (int i = 0; i < 6; i++) {
        uint32_t value = result & 0x1F; // extract the low 5-bit field
        printf("value for position %d: %d\n", i + 1, value);
        result >>= 5;
    }
    return 0;
}

Output:

factor for position 1: 00000001
factor for position 2: 00000020
factor for position 3: 00000400
factor for position 4: 00008000
factor for position 5: 00100000
factor for position 6: 02000000
result: 206703681
value for position 1: 1
value for position 2: 2
value for position 3: 3
value for position 4: 4
value for position 5: 5
value for position 6: 6
answered on Stack Overflow Sep 3, 2018 by Stephan Lechner
0

First of all, I'm sorry if I wasn't able to describe my problem appropriately. It's not even the kind of programming I normally work with.

A colleague of mine managed to discover the problem: at a given moment, the information was being converted to a float variable, and this somehow screwed things up. That is probably related to the bits used for the significand, although I'm not sure exactly how, since the conversion was done independently of the addition of the 6th number. When the float conversion was removed, the problem ceased to appear, and things are working fine now.

answered on Stack Overflow Oct 2, 2018 by Momergil

User contributions licensed under CC BY-SA 3.0