I have been going crazy trying to read a binary file that was written using a Java program (I am porting a Java library to C# and want to maintain compatibility with the Java version).

The author of the component chose to use a `float` along with multiplication to determine the start/end offsets of a piece of data. Unfortunately, floating-point arithmetic behaves differently in .NET than in Java. In Java, the library uses `Float.intBitsToFloat(someInt)`, where the value of `someInt` is `1080001175`.

```
int someInt = 1080001175;
float result = Float.intBitsToFloat(someInt);
// result (as viewed in Eclipse): 3.4923456
```
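For what it's worth, the Eclipse display (`3.4923456`) is just a rounded rendering; the bit pattern denotes one exact binary value, which can be printed in full with `java.math.BigDecimal`. A quick diagnostic sketch:

```java
import java.math.BigDecimal;

public class ExactFloatValue {
    public static void main(String[] args) {
        float f = Float.intBitsToFloat(1080001175);
        // The BigDecimal(double) constructor converts the exact binary
        // value with no decimal rounding; the float widens to double losslessly.
        System.out.println(new BigDecimal(f)); // 3.4923455715179443359375
    }
}
```

So neither `3.4923456` (Eclipse) nor `3.49234557` (.NET) is the value used in the arithmetic; both are shortened decimal renderings of the same float.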

Later, this number is multiplied by a value to determine start and end position. In this case, the problem occurs when the index value is `2025`.

```
int idx = 2025;
long result2 = (long)(idx * result);
// result2: 7072
```
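The place where Java rounds is worth spelling out: `idx * result` is an `int * float` multiplication, so `idx` is promoted to `float` and the product is rounded to the nearest representable `float` *before* the cast to `long`. Near 7072 the float spacing (`Math.ulp`) is about 0.000488, and the exact product (just under 7072) lands within half of that, so it rounds up to exactly `7072.0f`:

```java
public class JavaSideRounding {
    public static void main(String[] args) {
        float result = Float.intBitsToFloat(1080001175);
        int idx = 2025;

        // int * float promotes idx to float; the product is rounded to
        // the nearest float before any cast happens.
        float product = idx * result;

        System.out.println(product);           // 7072.0
        System.out.println(Math.ulp(7072.0f)); // 4.8828125E-4
        System.out.println((long) product);    // 7072
    }
}
```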

According to my calculator, the result of this calculation should be `7071.99984`. But in Java it is *exactly* `7072` before it is cast to a `long`, in which case it is still `7072`. In order for the product to be *exactly* `7072`, the value of the float would have to be `3.492345679012346`.

Why would Java effectively use `3.492345679012346` instead of `3.4923456` (the value shown in Eclipse)? Now, I am searching for a way to get the exact same result in .NET. So far, I have only been able to read this one file using a hack, and I am not entirely certain the hack will work for *any* file generated by the Java library.

According to *intBitsToFloat method in Java VS C#?*, the equivalent functionality is:

```
int someInt = 1080001175;
float result = BitConverter.ToSingle(BitConverter.GetBytes(someInt), 0);
// result: 3.49234557
```

This makes the calculation:

```
int idx = 2025;
long result2 = (long)(idx * result);
// result2: 7071
```

The result before casting to `long` is `7071.99977925`, which is just shy of the `7072` that Java yields.

From there, I assumed there must be some difference in the math between `Float.intBitsToFloat(someInt)` and `BitConverter.ToSingle(BitConverter.GetBytes(value), 0)` to produce such different results. So, I consulted the javadocs for `intBitsToFloat(int)` to see if I could reproduce the Java results in .NET. I ended up with:

```
public static float Int32BitsToSingle(int value)
{
    if (value == 0x7f800000)
    {
        return float.PositiveInfinity;
    }
    else if ((uint)value == 0xff800000)
    {
        return float.NegativeInfinity;
    }
    else if ((value >= 0x7f800001 && value <= 0x7fffffff) ||
             ((uint)value >= 0xff800001 && (uint)value <= 0xffffffff))
    {
        return float.NaN;
    }
    int bits = value;
    int s = ((bits >> 31) == 0) ? 1 : -1;
    int e = ((bits >> 23) & 0xff);
    // Per the javadoc: subnormals use (bits & 0x7fffff) << 1
    int m = (e == 0) ? (bits & 0x7fffff) << 1 : (bits & 0x7fffff) | 0x800000;
    //double r = (s * m * Math.Pow(2, e - 150));
    // value of r: 3.4923455715179443
    float result = (float)(s * m * Math.Pow(2, e - 150));
    // value of result: 3.49234557
    return result;
}
```
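As a sanity check on the decomposition itself, the same javadoc formula can be written back in Java and compared against `Float.intBitsToFloat` directly. A quick sketch (it skips the infinity/NaN branches for brevity):

```java
public class BitsDecomposition {
    // The s * m * 2^(e-150) formula from the Float.intBitsToFloat javadoc,
    // valid for finite (normal and subnormal) values only.
    static double decompose(int bits) {
        int s = ((bits >> 31) == 0) ? 1 : -1;
        int e = (bits >> 23) & 0xff;
        int m = (e == 0) ? (bits & 0x7fffff) << 1 : (bits & 0x7fffff) | 0x800000;
        return s * m * Math.pow(2, e - 150);
    }

    public static void main(String[] args) {
        int bits = 1080001175;
        double exact = decompose(bits);
        // The double result matches the reported r value, and narrowing it
        // to float reproduces Float.intBitsToFloat exactly.
        System.out.println(exact); // 3.4923455715179443
        System.out.println((float) exact == Float.intBitsToFloat(bits)); // true
    }
}
```

So the decomposition and the bit conversion agree on both platforms; the divergence has to come from the later multiplication, not from the conversion.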

As you can see, the result is *exactly* the same as when using `BitConverter`, and before casting to a `float` the number is quite a bit lower (`3.4923455715179443`) than the presumed Java value of `3.492345679012346` that is needed for the result to be exactly `7072`.

I tried this solution, but the resultant value is exactly the same, `3.49234557`.

I also tried rounding and truncating, but of course that makes all of the other values that are not very close to a whole number wrong.

I was able to hack through this by changing the calculation when the float value is within a certain range of a whole number, but as there could be other places where the calculation is very close to the whole number, this solution probably won't work universally.

```
float avg = (idx * averages[block]);
avgValue = (long)avg; // yields 7071
if ((avgValue + 1) - avg < 0.0001)
{
    avgValue = Convert.ToInt64(avg); // yields 7072
}
```

Note that the `Convert.ToInt64` function doesn't work in most cases either, but it has the effect of rounding in this particular case.

How can I make a function in .NET that returns *exactly* the same result as `Float.intBitsToFloat(int)` in Java? Or, how can I otherwise normalize the differences in float calculation so that this result is `7072` (not `7071`) given the values `1080001175` and `2025`?

Note: It should work the same as Java for all other possible integer values as well. The above case is just one of potentially many places where the calculation is different in .NET.

I am using .NET Framework 4.5.1 and .NET Standard 1.5, and it should produce the same results in both `x86` and `x64` environments.

The definition of a 4-byte floating-point number in C# and Java (and any other decent programming platform) is based on the IEEE 754 standard, so the binary format is the same.

So the conversion itself should work. And in fact it does work, but only for x64 targets (my earlier comments about .NET 2 and 4 may or may not be right; I can't really test old platform binaries).

If you want it to work for all targets, you'll have to define it like this:

```
long result2 = (long)(float)(idx * result);
```

If you look at the generated IL, the cast adds a supplemental `conv.r4` opcode after the multiplication. I guess this forces the value to be narrowed to single precision in the compiled x86 code. I suppose it's a JIT optimization issue.
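The effect of that `conv.r4` can be reproduced in Java, where intermediate precision is explicit in the types: keeping the product in `double` truncates to 7071, while narrowing it to `float` first yields 7072. This is a sketch of the same arithmetic, not the .NET JIT itself:

```java
public class IntermediatePrecision {
    public static void main(String[] args) {
        float f = Float.intBitsToFloat(1080001175);
        int idx = 2025;

        // Product kept at double precision (roughly what the x86 JIT does
        // without conv.r4): 7071.99978..., which truncates to 7071.
        long viaDouble = (long) (idx * (double) f);

        // Product narrowed to float first (what conv.r4 forces): the
        // nearest float to 7071.99978... is exactly 7072.0f.
        long viaFloat = (long) (float) (idx * (double) f);

        System.out.println(viaDouble); // 7071
        System.out.println(viaFloat);  // 7072
    }
}
```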

I don't know enough about JIT optimization to determine whether it's a bug or not. The funny thing is that the Visual Studio 2017 IDE even grays out the `(float)` cast and reports it as "redundant" or "unnecessary", so it doesn't smell good.

answered on Stack Overflow May 16, 2017 by Simon Mourier

FYI: this is now available directly in .NET.

```
float myFloat = BitConverter.Int32BitsToSingle(myInt);
```

So... (as in the question)

```
int someInt = 1080001175;
float result = BitConverter.Int32BitsToSingle(someInt);
```

Returns `3.49234557` (almost the same as the value Eclipse shows).

User contributions licensed under CC BY-SA 3.0