Computers use two's complement to store integers. For a signed 32-bit int, 0xFFFFFFFF represents -1. Based on this, it is easy to write C code that initializes a signed integer to -1:
int a = 0xffffffff; printf("%d\n", a);
Obviously, the result is -1.
However, in Go, the same logic dumps differently.
c := int(0xffffffff)
fmt.Printf("%d\n", c)
The code snippet prints
4294967295, the maximum value a uint32 can hold. Even if I cast
c explicitly with
fmt.Printf("%d\n", int(c)), the result is still the same.
The same problem occurs when bit operations are applied to a signed integer: the signed value appears to become unsigned.
So, what happens to Go in such a situation?
The problem here is that the size of
int is not fixed; it is platform dependent, either 32 or 64 bits. In the latter case, assigning
0xffffffff to it is equivalent to assigning
4294967295 to it, which is what you see printed.
Now if you convert that value to
int32 (which is 32 bits), you'll get your expected -1:
a := int(0xffffffff)
fmt.Printf("%d\n", a)
b := int32(a)
fmt.Printf("%d\n", b)
This will output (try it on the Go Playground):
4294967295
-1
Also note that in Go it is not possible to assign
0xffffffff directly to a value of type
int32, because the value would overflow; nor is it valid to create a typed constant with an illegal value, such as
int32(0xffffffff). Spec: Constants:
The values of typed constants must always be accurately representable by values of the constant type.
So this gives a compile-time error:
var c int32 = 0xffffffff // constant 4294967295 overflows int32
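Note that this overflow check applies to constants only. Converting a non-constant value between integer types of the same size simply reinterprets the bits, so a runtime conversion wraps rather than failing. A small sketch:

```go
package main

import "fmt"

func main() {
	// u is a variable, not a constant, so no compile-time overflow check applies.
	u := uint32(0xffffffff)
	// Converting the non-constant uint32 to int32 keeps the bit pattern:
	c := int32(u)
	fmt.Println(c) // -1
}
```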
But you may simply do:
var c int32 = -1
You may also do:
var c = ^int32(0) // -1
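Both forms produce the same value, since ^int32(0) sets all 32 bits, which is the two's-complement representation of -1. A quick check:

```go
package main

import "fmt"

func main() {
	var c int32 = -1
	d := ^int32(0) // bitwise NOT of 0 sets all 32 bits
	fmt.Println(c == d)            // true
	fmt.Printf("%#x\n", uint32(c)) // 0xffffffff
}
```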
User contributions licensed under CC BY-SA 3.0