I have a char byte that I want to convert to an int. I am reading the value (which is 0x13) from a file opened with fopen and storing it into a char buffer called buff.
I am doing the following:
//assume buff[17] = 0x13
v->infoFrameSize = (int)buff[17] * ( 128^0 );
infoFrameSize is a type int that is stored in a structure called 'v'.
The value I get for v->infoFrameSize is 0x00000980. Shouldn't this be 0x00000013?
If I take out the multiplication by 128 ^ 0, I get the output I expect:
v->infoFrameSize = 0x00000013
Any info or suggested reading material on what is happening here would be great. Thanks!
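For reference, here is a minimal sketch of what I'm doing (the file name and buffer size below are simplified stand-ins; the real code just needs buff[17] to hold 0x13):

#include <stdio.h>

int main(void)
{
    char buff[32];
    FILE *fp = fopen("frame.dat", "rb");   /* stand-in file name */
    if (fp == NULL)
        return 1;
    fread(buff, 1, sizeof buff, fp);
    fclose(fp);

    int infoFrameSize = (int)buff[17] * ( 128^0 );
    printf("0x%08X\n", infoFrameSize);     /* prints 0x00000980 when buff[17] is 0x13 */
    return 0;
}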
^ is the bitwise XOR operator, not exponentiation.
The ^ operator in C performs a bitwise operation: XOR. 128 XOR 0 equals 128.
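For example, a tiny program that just prints a few XOR results:

#include <stdio.h>

int main(void)
{
    printf("%d\n", 128 ^ 0);    /* prints 128: XOR with 0 changes nothing */
    printf("%d\n", 128 ^ 128);  /* prints 0:   XOR of equal values is 0   */
    printf("%d\n", 5 ^ 3);      /* prints 6:   101 ^ 011 = 110            */
    return 0;
}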
128^0 is not doing what you think it does.
cout << (128^0)
prints 128. Try pow(128,0)
instead. Then add the following to the top of your code:
#include <math.h>
Also, note that pow returns a double, so you'll need to cast your final answer to an int:
(int)(buff[17] * pow(128,0));
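Putting it together, a self-contained sketch (buff17 below is just a stand-in for your buff[17]):

#include <math.h>
#include <stdio.h>

int main(void)
{
    char buff17 = 0x13;                                /* stand-in for buff[17] */
    int infoFrameSize = (int)(buff17 * pow(128, 0));   /* pow(128, 0) is 1.0, a double */
    printf("0x%08X\n", infoFrameSize);                 /* prints 0x00000013 */
    return 0;
}

(With gcc you may need to link with -lm.)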
In C, 128 ^ 0
evaluates to the bitwise XOR of 128 and 0; it doesn't raise 128 to the power of 0 (which would just be 1).
A char is simply an integer consisting of a single byte. To "convert" it to an int (which isn't really converting; you're just storing the byte into a larger data type), you do:
char c = 5;
int i = (int)c;
Tada.
There is no point in the ^0 term. Anything XOR'd with zero remains unchanged (so 128^0 is 128).
The value you get is correct; when you multiply 0x13 (aka 19) by 128 (aka 0x80), you get 0x0980 (aka 2432).
Why would you expect the assignment to ignore the multiplication?
To convert a char to an int, you merely cast it:
char c = ...;
int x = (int) c;
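A quick check of both points, using the 0x13 value from your question:

#include <stdio.h>

int main(void)
{
    char buff17 = 0x13;                  /* stand-in for buff[17] */
    printf("0x%X\n", 0x13 * 128);        /* prints 0x980: 19 * 128 = 2432 */
    printf("0x%08X\n", (int)buff17);     /* prints 0x00000013: the plain cast */
    return 0;
}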
K&R would have you read the one byte from the file using getc() and store it directly into an int, which eliminates any issues you might be seeing. However, if you are reading from the file into an array of bytes, simply cast to int as follows:
v->infoFrameSize = (int)buff[17];
I'm not sure why you're multiplying by 128^0.
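A rough sketch of the getc() approach (the file name and the 17 skipped bytes are assumptions based on your buff[17] index):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("frame.dat", "rb");   /* stand-in file name */
    if (fp == NULL)
        return 1;

    for (int i = 0; i < 17; i++)           /* skip the first 17 bytes */
        getc(fp);

    int c = getc(fp);                      /* the 18th byte, already an int (0..255 or EOF) */
    fclose(fp);

    if (c != EOF)
        printf("0x%08X\n", c);             /* 0x00000013 for the file described above */
    return 0;
}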
The only problem I know of when converting from char to int is that char can be signed or unsigned, depending on the platform. If it happens to be signed, a byte value with the high bit set may end up being treated as negative. When you print it, it will either show up as a negative number or as an abnormally big number (if you print it as an unsigned integer).
The solution is simply to use signed char or unsigned char explicitly in cases like this one.
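For example (0x90 is an arbitrary byte with the high bit set, not a value from your file):

#include <stdio.h>

int main(void)
{
    char          c  = 0x90;   /* arbitrary byte with the high bit set */
    unsigned char uc = 0x90;

    /* Where plain char is signed, the first line typically prints -112;
       the second always prints 144. */
    printf("%d\n", (int)c);
    printf("%d\n", (int)uc);
    return 0;
}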
"^" is a bitwise XOR Operation, if you want to do an exponent use
pow(128,0);
Why are you multiplying by one?
You can convert from a char to an int by simply defining an int and setting it like so:
char x = 0x13;
int y;
y = (int)x;