I am debugging production code written in C; its simplest form can be reduced to the following:
void
test_fun(int sr)
{
    int hr = 0;
#define ME 65535
#define SE 256
    sr = sr / SE;            /* with sr == 128 this should yield 0 */
    if (sr == 1)
        hr = ME;
    else
        hr = (ME + 1) / sr;  /* division by zero -- we should crash here */
}
We are passing sr as 128, which should trigger a divide-by-zero error in the processor. Instead, I see that the division completes successfully with a quotient of 0x7fffffff (hr ends up with this value).
This does not happen when I compile and run the same code on an Intel platform with gcc; there it crashes when it attempts the division by zero.
I want to know the principle behind this large quotient. I am not sure if it is just some other bug I still need to uncover. Can someone help me with another program that reproduces the same behaviour?
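For reference, here is a minimal standalone sketch of the same situation, assuming a 32-bit int: on x86 with gcc it is expected to die with SIGFPE, while on PowerPC the divide instruction simply produces whatever undefined value the hardware happens to generate.

#include <stdio.h>

/* Same situation as test_fun above: an integer division whose divisor
 * becomes 0 at run time. Per C11 6.5.5p5 this is undefined behaviour. */
int main(void)
{
    volatile int divisor = 128 / 256;  /* 0; volatile keeps the compiler from
                                          folding the final division away */
    int quotient = 65536 / divisor;    /* traps on x86 (SIGFPE), typically
                                          returns an arbitrary value on PPC */
    printf("quotient = 0x%x\n", quotient);
    return 0;
}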
Division by zero is undefined behaviour; see the C11 standard, 6.5.5 paragraph 5 (final draft).
Getting a trap or SIGFPE is just a courtesy of the CPU/OS. PowerPC, as a typical RISC CPU, does not catch it, since the condition can safely be detected by a simple check of the divisor right before the actual division. x86, on the other hand, does catch it - typical CISC behaviour.
If a higher-layer standard requires it, you have probably missed a compiler option that emits this check automatically. POSIX, for instance, does not mandate SIGFPE for this; it is optional.
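As a sketch of what such a check looks like when done by hand (the helper name safe_div is made up here, and the fallback value is just one possible choice):

#include <limits.h>

/* Guard the divisor explicitly -- essentially what a compiler-emitted check
 * would do on a CPU whose divide instruction does not fault on zero. */
static int safe_div(int dividend, int divisor)
{
    if (divisor == 0)
        return INT_MAX;              /* or raise SIGFPE / abort() instead */
    if (dividend == INT_MIN && divisor == -1)
        return INT_MAX;              /* INT_MIN / -1 overflows as well */
    return dividend / divisor;
}

With gcc or clang, -fsanitize=integer-divide-by-zero (included in -fsanitize=undefined) inserts such checks automatically and reports at run time when one fires.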
Per the PPC architecture manual (which you can get from IBM), divide by 0 on a PPC does not result in any kind of signal or trap; instead, you just get some undefined value that varies from processor to processor. In your case, it looks like the particular PPC variant you have generates MAXINT (the largest positive integer) when dividing a positive number by 0.