I am learning Python and I'm a little confused about the int class constructor.
Does the int class constructor accept a bytes or bytearray instance?
From the doc: https://docs.python.org/3/library/functions.html#int
class int([x])
class int(x, base=10)
If x is not a number or if base is given, then x must be a string, bytes, or bytearray instance representing an integer literal in radix base.
If I pass a bytes instance, I get the error below.
i = 10
b = bytes(i)
val = int(b)
Output:
Traceback (most recent call last):
File "test.py", line 4, in <module>
val = int(b)
ValueError: invalid literal for int() with base 10: b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
That refers to bytes of text:
>>> int(b'123')
123
You seem to be looking for int.from_bytes.
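A quick sketch of the difference: `bytes(10)` creates ten zero-valued bytes rather than an encoding of the number 10, which is why `int()` rejects it as a literal, while `int.from_bytes` happily decodes it.

```python
i = 10

# bytes(10) creates ten zero-valued bytes, not the number 10,
# so decoding them as an integer yields 0
b = bytes(i)
print(int.from_bytes(b, byteorder='big'))   # 0

# to round-trip the value 10 itself, encode it explicitly first
b2 = (10).to_bytes(2, byteorder='big')
print(int.from_bytes(b2, byteorder='big'))  # 10
```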
Related
In the code below, int(x) throws an exception. I understand that x should be a string, but should it be a numeric or a non-numeric string?
def temp_convert(var):
    try:
        return int(var)
    except ValueError, Argument:
        print "The argument does not contain numbers\n", Argument

# Call above function here.
temp_convert("xyz")
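The snippet above uses Python 2 syntax; for anyone running it on Python 3, an equivalent sketch would be:

```python
def temp_convert(var):
    try:
        return int(var)
    except ValueError as argument:
        # in Python 3, 'except ... as name' replaces 'except ..., name'
        # and print is a function
        print("The argument does not contain numbers\n", argument)

temp_convert("xyz")
# prints the message plus: invalid literal for int() with base 10: 'xyz'
```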
The string you supply as the function argument has to be representable as an integer. What would you consider the numerical representation of "xyz" to be?
If you pass the function string representations of numbers, positive or negative, then you won't trigger the exception.
When numbers are encoded as strings, there are no problems:
>>> int("10")
10
>>> int("-10")
-10
When a string that can't be interpreted as a number is supplied, the exception is triggered:
>>> int("-10a")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '-10a'
int(x) does not accept strings representing floating-point numbers either:
>>> int("10.0")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '10.0'
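If a float-formatted string does need to become an int, one common workaround is to go through float() first, at the cost of truncating any fractional part:

```python
# int() rejects "10.0" directly, but float() accepts it,
# and int() then truncates the float toward zero
print(int(float("10.0")))   # 10
print(int(float("10.9")))   # 10
print(int(float("-10.9")))  # -10
```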
i = 0x0800
What I understand here is that 0x0800 is a hexadecimal literal, where '0x' denotes hexadecimal notation and the following digits '0800' form a 2-byte hexadecimal number. After assigning it to the variable 'i', I got this error when checking its type:
>>> type(i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
Here I make out that 'i' is supposed to be an int object. I got more confused when I tried this:
>>> print i
2048
What is '2048' exactly? Can anyone throw some light here?
i is an integer, but you redefined type:
>>> i = 0x0800
>>> i
2048
>>> type(i)
<type 'int'>
>>> type = 42
>>> type(i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> del type
>>> type(i)
<type 'int'>
Note the type = 42 line; I created a new global name type and that is found before the built-in. You could also use import __builtin__; __builtin__.type(i) in Python 2, or import builtins; builtins.type(i) in Python 3 to access the original built-in type() function:
>>> import __builtin__
>>> type = 42
>>> __builtin__.type(type)
<type 'int'>
>>> type(type)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> del type
>>> type(type)
<type 'type'>
The 0x notation is just one of several ways of specifying an integer literal. You are still producing a regular integer, only the syntax for how you define the value differs here. All of the following notations produce the exact same integer value:
0x0800 # hexadecimal
0o04000 # octal, Python 2 also accepts 0400
0b100000000000 # binary
2048 # decimal
See the Integer Literals reference documentation.
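The equivalence of the notations is easy to check directly; hex(), oct() and bin() format an int back into literal-style strings, and int() with an explicit base converts in the other direction:

```python
# all four literal notations denote the same value
values = [0x0800, 0o04000, 0b100000000000, 2048]
assert all(v == 2048 for v in values)

# int() with an explicit base parses a string in that radix
assert int('0800', 16) == 2048

# and hex()/oct()/bin() go the other way
print(hex(2048), oct(2048), bin(2048))  # 0x800 0o4000 0b100000000000
```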
I'll quickly put down the answer that I figured out.
i = 0x0800 will assign an int equivalent of the hexadecimal number (0800) to i.
So if we break it down into pieces, this looks like:
>>> i
2048
>>>
>>> (pow(16,3) * 0) + ( pow(16,2) * 8 ) + (pow (16,1) * 0 ) + (pow(16,0) * 0)
2048
In Python, when you have an object you can convert it to an integer using the int function.
For example int(1.3) will return 1. This works internally by using the __int__ magic method of the object, in this particular case float.__int__.
In Python Fraction objects can be used to construct exact fractions.
from fractions import Fraction
x = Fraction(4, 3)
Fraction objects lack an __int__ method, but you can still call int() on them and get a sensible integer back. I was wondering how this was possible with no __int__ method being defined.
In [38]: x = Fraction(4, 3)
In [39]: int(x)
Out[39]: 1
The __trunc__ method is used.
>>> class X(object):
        def __trunc__(self):
            return 2.
>>> int(X())
2
__float__ does not work
>>> class X(object):
        def __float__(self):
            return 2.
>>> int(X())
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
int(X())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'X'
The CPython source shows when __trunc__ is used.
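This is easy to confirm on Fraction itself: it defines `__trunc__`, which `math.trunc()` uses as well (and which `int()` historically delegated to), and truncation rounds toward zero, unlike floor:

```python
from fractions import Fraction
import math

x = Fraction(4, 3)

# Fraction defines __trunc__; math.trunc() delegates to it
assert hasattr(x, '__trunc__')
assert math.trunc(x) == 1
assert int(x) == 1

# truncation rounds toward zero, unlike floor, for negative values
assert int(Fraction(-4, 3)) == -1
assert math.floor(Fraction(-4, 3)) == -2
```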
I was going through the code for six.py in the Django utils, which, for non-Jython implementations, tries to find MAXSIZE for int. Now, the way this is done is interesting: instead of catching an exception on the statement itself, the statement is wrapped in a __len__ method of a custom class. What may be the reason(s) to do so?
class X(object):
    def __len__(self):
        return 1 << 31

try:
    len(X())
except OverflowError:
    # 32-bit
    MAXSIZE = int((1 << 31) - 1)
else:
    # 64-bit
    MAXSIZE = int((1 << 63) - 1)
del X
If I'm not wrong, the same could have been shortened to the below as well, right?
try:
    1 << 31
except OverflowError:
    # 32-bit
    MAXSIZE = int((1 << 31) - 1)
else:
    # 64-bit
    MAXSIZE = int((1 << 63) - 1)
int in Python 3 is a polyglot sort of class that can represent machine ints as well as big-ints, a feature that supersedes the distinction between int and long in Python 2. In Python 3, the construction int(1 << n) never throws an error, no matter how large n is.
So to get around that, six uses a neat trick that forces Python to cram a value into a machine-sized int. The len built-in always tries to convert the return value of __len__ into a machine-sized integer:
>>> class Lengthy(object):
... def __init__(self, x):
... self.x = x
... def __len__(self):
... return self.x
...
>>> int(1<<100)
1267650600228229401496703205376L
>>> type(int(1<<100))
<type 'long'>
>>> len(Lengthy(1<<100))
Traceback (most recent call last):
File "<ipython-input-6-6b1b77348950>", line 1, in <module>
len(Lengthy(1<<100))
OverflowError: long int too large to convert to int
>>>
or, in Python 3, the exception is slightly different:
>>> len(Lengthy(1<<100))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'int' into an index-sized integer
>>>
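Putting the pieces together, six's detection trick can be sketched in modern Python 3 as follows; the `len()` call must fit the result into a C `Py_ssize_t`, which is exactly the machine-sized constraint being probed. On CPython the result agrees with `sys.maxsize` (which six avoided relying on directly, e.g. because of Jython):

```python
import sys

class X(object):
    def __len__(self):
        # len() must fit this into a C Py_ssize_t, so on a
        # 32-bit build this raises OverflowError
        return 1 << 31

try:
    len(X())
except OverflowError:
    MAXSIZE = (1 << 31) - 1   # 32-bit
else:
    MAXSIZE = (1 << 63) - 1   # 64-bit

print(MAXSIZE == sys.maxsize)  # True on CPython
```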
Is there a way to disable silent conversions in numpy?
import numpy as np
a = np.empty(10, int)
a[2] = 4 # OK
a[3] = 4.9 # Will silently convert to 4, but I would prefer a TypeError
a[4] = 4j # TypeError: can't convert complex to long
Can numpy.ndarray objects be configured to raise a TypeError when assigning any value that is not an instance of the array's element type?
If not, would the best alternative be to subclass numpy.ndarray (and override __setattr__ or __setitem__)?
Unfortunately numpy doesn't offer this feature at array creation; you can control whether casting is allowed only when converting an array (check the documentation for numpy.ndarray.astype and its casting parameter).
You could use that feature or subclass numpy.ndarray, but also consider using the array module offered by Python itself to create a typed array:
from array import array
a = array('i', [0] * 10)
a[2] = 4 # OK
a[3] = 4.9 # TypeError: integer argument expected, got float
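If subclassing is preferred, the `__setitem__`-override idea from the question can be sketched on top of array.array (used here instead of numpy.ndarray only to keep the example dependency-free; the same override would apply to an ndarray subclass):

```python
from array import array

class StrictIntArray(array):
    # single-item assignment only; slice assignment would
    # need separate handling
    def __setitem__(self, index, value):
        if not isinstance(value, int):
            raise TypeError('expected int, got %s'
                            % type(value).__name__)
        super().__setitem__(index, value)

a = StrictIntArray('i', [0] * 10)
a[2] = 4            # OK
try:
    a[3] = 4.9      # rejected instead of silently truncated
except TypeError as e:
    print(e)        # expected int, got float
```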
Just an idea.
#Python 2.7.3
>>> def test(value):
... if '.' in str(value):
... return str(value)
... else:
... return value
...
>>> a[3]=test(4.0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for long() with base 10: '4.0'