Does anybody know how Python manages the int and long types internally?
How should I understand the code below?
>>> print type(65535)
<type 'int'>
>>> print type(65536*65536)
<type 'long'>
>>> print type(0x7fffffff)
<type 'int'>
>>> print type(0x80000000)
<type 'long'>
int and long were "unified" a few versions back (PEP 237). Before that, arithmetic on an int could overflow; since the unification, results too large for an int are automatically promoted to long.
3.x has further advanced this by eliminating long altogether and only having int.
sys.maxint contains the maximum value a Python 2 int can hold (it was removed in Python 3).
sys.maxsize contains the maximum size a container can have, i.e. the largest value of a Py_ssize_t; it is not a limit on int values, but it is often used as a practical stand-in in Python 3.
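A quick sketch (Python 3) showing the difference, assuming a CPython build where sys.maxsize reflects the platform word size:

```python
import sys

# sys.maxsize is the largest value a Py_ssize_t can hold (platform word size);
# on a 64-bit build it is 2**63 - 1. It is NOT a limit on int values.
print(sys.maxsize)

# Python 3 ints happily exceed it -- the type stays int:
big = sys.maxsize + 1
print(type(big))                # <class 'int'>

# sys.maxint no longer exists in Python 3:
print(hasattr(sys, 'maxint'))   # False
```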
Python 2 will automatically set the type based on the size of the value. A guide of max values can be found below.
The max value of the default int in Python 2 is sys.maxint (2147483647 on 32-bit builds, 9223372036854775807 on 64-bit builds); anything above that will be a long.
>>> print type(65535)
<type 'int'>
>>> print type(65536*65536)
<type 'long'>
In Python 3 the long datatype has been removed and all integer values are handled by the int class, which is arbitrary precision. The default internal size of an int depends on your CPU architecture (word size), but it is not a hard limit on values.
The min/max values for the common machine word sizes are:

32-bit: -2147483648 to 2147483647
64-bit: -9223372036854775808 to 9223372036854775807
If a value exceeds the limits mentioned above, Python automatically changes its internal representation and allocates more memory to handle the larger value. Where Python 2 would convert the value into a long, Python 3 simply keeps it as an int and stores it with more internal digits.

Example: on a 32-bit build of Python 2, the max value of an int is 2147483647 by default. If a value of 2147483648 or more is assigned, the type changes to long. In Python 3 the type stays int regardless of the value.
There are different ways to check the size of an int and its memory allocation.

Note: In Python 3, the built-in type() function will always return <class 'int'>, no matter how large the value is.
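One way to see the extra memory being allocated (in CPython 3) is sys.getsizeof; the exact byte counts are an implementation detail, so this sketch avoids hard-coding them:

```python
import sys

# sys.getsizeof reports the size of the int object in bytes.
# CPython stores ints as a variable-length array of "digits", so
# larger values occupy more bytes even though type() stays int.
for value in (0, 1, 2**30, 2**60, 2**120, 2**1000):
    print(value.bit_length(), sys.getsizeof(value), type(value).__name__)
```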
On my machine:
>>> print type(1<<30)
<type 'int'>
>>> print type(1<<31)
<type 'long'>
>>> print type(0x7FFFFFFF)
<type 'int'>
>>> print type(0x7FFFFFFF+1)
<type 'long'>
Python uses ints (fixed-size signed integers, a C long under the hood in CPython, so 32 or 64 bits depending on the platform) for values that fit in a machine word, but automatically switches to longs (arbitrarily large numbers of bits, i.e. bignums) for anything larger. I'm guessing this speeds things up for smaller values while avoiding any overflows, with a seamless transition to bignums.
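In Python 3 the same seamlessness applies, just without any visible type change; a quick sketch showing arithmetic stays exact well past the machine word size:

```python
# Arbitrary-precision arithmetic: no overflow, results stay exact.
x = 2**64                        # already past any 64-bit machine word
assert x * x == 2**128
assert (x + 1) - x == 1          # no wrap-around
print(type(x), x.bit_length())   # <class 'int'> 65
```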
Since Python 3.x, the unified integer implementation is even smarter than in older versions. On my (i7, Ubuntu) box I got the following:
>>> import math
>>> type(math.factorial(30))
<class 'int'>
For implementation details, refer to the files Include/longintrepr.h, Objects/longobject.c and Modules/mathmodule.c. The last one is a dynamic module (compiled to an .so file). The code is well commented and easy to follow.
Just to add to all the answers that were given here, especially @James Lanes's:
the range of an N-bit signed integer type can be expressed by this formula:

total range = 2 ^ N
lower limit = -(2 ^ N) / 2
upper limit = ((2 ^ N) / 2) - 1
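The formula above can be checked directly; signed_limits is a hypothetical helper for illustration:

```python
# Compute the signed-integer limits from the formula above
# for a given bit width N.
def signed_limits(bits):
    total = 2 ** bits
    return -total // 2, total // 2 - 1

print(signed_limits(32))   # (-2147483648, 2147483647)
print(signed_limits(64))   # (-9223372036854775808, 9223372036854775807)
```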
It manages them because int and long are sibling class definitions. They have appropriate methods for +, -, *, /, etc., that produce results of the appropriate class.
>>> a = 1<<30
>>> type(a)
<type 'int'>
>>> b = a*2
>>> type(b)
<type 'long'>
In this case, the class int has a __mul__ method (the one that implements *) which creates a long result when required.
User contributions licensed under CC BY-SA 3.0