Unable to create a tensor using torch.Tensor - python

I was trying to create a tensor as below:
import torch
t = torch.tensor(2,3)
I got the following error:
TypeError Traceback (most recent call
last) in ()
----> 1 a=torch.tensor(2,3)
TypeError: tensor() takes 1 positional argument but 2 were given
So, I tried the following:
import torch
t = torch.Tensor(2,3)
# No error while creating the tensor
# When I print it, I get an error
print(t)
I get the following error:
RuntimeError Traceback (most recent call
last) in ()
----> 1 print(a)
D:\softwares\anaconda\lib\site-packages\torch\tensor.py in
__repr__(self)
55 # characters to replace unicode characters with.
56 if sys.version_info > (3,):
---> 57 return torch._tensor_str._str(self)
58 else:
59 if hasattr(sys.stdout, 'encoding'):
D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in
_str(self)
216 suffix = ', dtype=' + str(self.dtype) + suffix
217
--> 218 fmt, scale, sz = _number_format(self)
219 if scale != 1:
220 prefix = prefix + SCALE_FORMAT.format(scale) + ' ' * indent
D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in
_number_format(tensor, min_sz)
94 # TODO: use fmod?
95 for value in tensor:
---> 96 if value != math.ceil(value.item()):
97 int_mode = False
98 break
RuntimeError: Overflow when unpacking long
But according to this SO post, he was able to create a tensor. Am I missing something here? Also, why was I able to create a tensor with Tensor (capital T) but not with tensor (small t)?

torch.tensor() expects a sequence or array_like to create a tensor, whereas the torch.Tensor() class can create a tensor from just shape information.
Here's the signature of torch.tensor():
Docstring:
tensor(data, dtype=None, device=None, requires_grad=False) -> Tensor
Constructs a tensor with :attr:data.
Args:
data (array_like): Initial data for the tensor. Can be a list, tuple,
NumPy ndarray, scalar, and other types.
dtype (:class:torch.dtype, optional): the desired data type of returned tensor.
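For example, a quick sketch of both constructors (the shapes and values here are just illustrative):
import torch

# torch.Tensor(...) treats positional ints as a shape: a 2x3 tensor of
# uninitialized values, equivalent to torch.empty(2, 3)
t1 = torch.Tensor(2, 3)

# torch.tensor(...) takes the data itself, so pass a sequence
t2 = torch.tensor([2, 3])            # 1-D tensor holding the values 2 and 3
t3 = torch.tensor([[1, 2, 3],
                   [4, 5, 6]])       # explicit 2x3 tensor with known values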
Regarding the RuntimeError: I cannot reproduce the error on Linux distros; printing the tensor works perfectly fine from the IPython terminal.
Taking a closer look at the error, this seems to be a problem only on Windows. As mentioned in the comments, have a look at issues/6339: Error when printing tensors containing large values

Related

How to fix the error of this code: "dirac[:N / 2] = 1"?

I got this Python code from the internet; it calculates the modulation transfer function (MTF) from an input image. Here is the full code.
The problem is that the code is not working on my PC because of an error in this line:
TypeError Traceback (most recent call last)
<ipython-input-1-035feef9e484> in <module>
54 N = 250
55 dirac = np.zeros(N)
---> 56 dirac[:N / 2] = 1
57
58 # Filter edge
TypeError: slice indices must be integers or None or have an __index__ method
In Python 3, / is true division and returns a float, so simply make N/2 an integer again:
dirac[:int(N/2)] = 1
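Floor division is an equivalent fix that avoids the cast (a stylistic alternative, not from the original answer):
dirac[:N // 2] = 1   # // is integer division, so the slice index stays an int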

zip argument #2 must support iteration - ColumnTransformer error

I am using the ColumnTransformer for the first time and I keep getting a TypeError. Is something wrong with the code?
Code:
transformer = ColumnTransformer(('cat',OrdinalEncoder(),['job_industry_category','job_title','wealth_segmenr','gender']),
('numb',MinMaxScaler(),['tenure', 'age'])
data_impute = transformer.fit_transform(data_impute)
error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-109-ca10f9ee762b> in <module>
----> 1 data_impute = transformer.fit_transform(data_impute)
~\anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py in fit_transform(self, X, y)
512 self._feature_names_in = None
513 X = _check_X(X)
--> 514 self._validate_transformers()
515 self._validate_column_callables(X)
516 self._validate_remainder(X)
~\anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py in _validate_transformers(self)
271 return
272
--> 273 names, transformers, _ = zip(*self.transformers)
274
275 # validate names
TypeError: zip argument #2 must support iteration
A wrong construction of the ColumnTransformer object results in passing wrong arguments to the internal zip call, which produces an error message that is useless to the mere mortal.
The constructor is:
class sklearn.compose.ColumnTransformer(transformers, *, remainder='drop', sparse_threshold=0.3, n_jobs=None, transformer_weights=None, verbose=False)
The wrong construction code passes transformers as ('cat', OrdinalEncoder(), ['job_industry_category', 'job_title', 'wealth_segmenr', 'gender']), then remainder as ('numb', MinMaxScaler(), ['tenure', 'age']), and then it fails miserably...
From an example in the docs:
ct = ColumnTransformer(
    [("norm1", Normalizer(norm='l1'), [0, 1]),
     ("norm2", Normalizer(norm='l1'), slice(2, 4))])
You must pass a list of tuples, not the tuples as separate arguments.
Fix:
transformer = ColumnTransformer([  # note the [ that starts the list
    ('cat', OrdinalEncoder(), ['job_industry_category', 'job_title', 'wealth_segmenr', 'gender']),
    ('numb', MinMaxScaler(), ['tenure', 'age'])
])  # note the ] that ends the list
The data science libraries generally lack strong type hints or type checking. Read the docs and copy/adapt the examples instead of starting from a blank page!
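For reference, here is a self-contained sketch of the same pattern on a made-up DataFrame (the column names and values are illustrative, not from the question):
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder, MinMaxScaler

df = pd.DataFrame({
    'gender': ['F', 'M', 'F'],
    'tenure': [1, 5, 3],
    'age':    [25, 40, 31],
})

# transformers is a single list of (name, transformer, columns) tuples
transformer = ColumnTransformer([
    ('cat', OrdinalEncoder(), ['gender']),
    ('num', MinMaxScaler(), ['tenure', 'age']),
])

transformed = transformer.fit_transform(df)
print(transformed)   # numpy array with the encoded and scaled columns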

df_data not defined, unsure of the cause

I'm working through a tutorial that is supposed to help students do the assignment, but I'm encountering a problem. I'm using Python in a notebook project on IBM. Right now the section is simply data exploration, but this error keeps occurring and I'm not sure how to fix it. No one else seemed to have this problem in the class, and the teacher is rather slow to help, so I came here!
I tried just defining the variable before it's called, but no dice either way.
All the code prior to this is just importing libraries and then parsing the data.
# Infer the data type of each column and convert the data to the inferred data type
from ingest import *
eu = ExtensionUtils(sqlContext)
df_data_1 = eu.convertTypes(df_data_1)
df_data_1.printSchema()
The error I'm getting is:
TypeError Traceback (most recent call last)
<ipython-input-14-33250ae79106> in <module>()
2 from ingest import *
3 eu = ExtensionUtils(sqlContext)
----> 4 df_data_1 = eu.convertTypes(df_data_1)
5 df_data_1.printSchema()
/opt/ibm/third-party/libs/python3/ingest/extension_utils.py in convertTypes(self, input_obj, dictVal)
304 """
305
--> 306 checkEnrichType_or_DataFrame("input_obj",input_obj)
307 self.logger = self._jLogger.getLogger(__name__)
308 methodname = str(inspect.stack()[0][3])
/opt/ibm/third-party/libs/python3/ingest/extension_utils.py in checkEnrichType_or_DataFrame(param, paramval)
81 if not isinstance(paramval,(EnrichType ,DataFrame)):
82 raise TypeError("%s should be a EnrichType class object or DataFrame, got type %s"
---> 83 % (str(param), type(paramval)))
84
85
TypeError: input_obj should be a EnrichType class object or DataFrame, got type <class 'NoneType'>
The solution was not in the code itself but in the notebook: a code snippet from a built-in function needed to be inserted and run before this cell.

numpy TypeError: ufunc 'invert' not supported for the input types, and the inputs

For the code below:
def makePrediction(mytheta, myx):
    # -----------------------------------------------------------------
    pr = sigmoid(np.dot(myx, mytheta))
    pr[pr < 0.5] = 0
    pr[pr >= 0.5] = 1
    return pr
    # -----------------------------------------------------------------
# Compute the percentage of samples I got correct:
pos_correct = float(np.sum(makePrediction(theta,pos)))
neg_correct = float(np.sum(np.invert(makePrediction(theta,neg))))
tot = len(pos)+len(neg)
prcnt_correct = float(pos_correct+neg_correct)/tot
print("Fraction of training samples correctly predicted: %f." % prcnt_correct)
I get this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-33-f0c91286cd02> in <module>()
13 # Compute the percentage of samples I got correct:
14 pos_correct = float(np.sum(makePrediction(theta,pos)))
---> 15 neg_correct = float(np.sum(np.invert(makePrediction(theta,neg))))
16 tot = len(pos)+len(neg)
17 prcnt_correct = float(pos_correct+neg_correct)/tot
TypeError: ufunc 'invert' not supported for the input types, and the inputs
Why is it happening and how can I fix it?
np.invert requires ints or bools; it is not defined for the float inputs this code passes it.
From the documentation:
Parameters:
x : array_like.
Only integer and boolean types are handled.
Your original array is floating point (the return value of sigmoid()); setting values in it to 0 and 1 won't change the type. You need to cast it, e.g. with astype(bool) (on a boolean array np.invert is a logical NOT, so the flipped 0/1 predictions still sum correctly; an integer cast would give a bitwise NOT, i.e. -1/-2):
neg_correct = float(np.sum(np.invert(makePrediction(theta, neg).astype(bool))))
should do it (untested).
Doing that, the float() cast you have also makes more sense. Though I would just remove the cast, and rely on Python doing the right thing.
In case you are still using Python 2 (but please use Python 3), just add
from __future__ import division
to let Python do the right thing (it won't hurt if you do it in Python 3; it just doesn't do anything). With that (or in Python 3 anyway), you can remove numerous other float() casts you have elsewhere in your code, improving readability.
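As a small demonstration of the dtype requirement (the values are illustrative, not from the question's data):
import numpy as np

preds = np.array([0.0, 1.0, 1.0, 0.0])    # float predictions, like makePrediction returns

# np.invert(preds)                        # raises: ufunc 'invert' not supported...
print(np.invert(preds.astype(bool)))      # [ True False False  True] -> logical NOT
print(np.invert(preds.astype(int)))       # [-1 -2 -2 -1] -> bitwise NOT, not what you want here
print(float(np.sum(np.invert(preds.astype(bool)))))   # 2.0 correctly counts the 0-predictions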

rpy2: how to convert str vector into numeric vector

Does anyone know how to convert a str vector in rpy2 into a numeric vector?
r('x_num = as.numeric(x)')
works, but x_num is not in the Python environment, so I can't call it from Python.
I tried:
x_num = base.as_numeric(x)
r('class(x_num)')
which shows:
'<StrVector - Python:0x7fe602a54d88 / R:0xa06bb28>
[str]'
The reason I want to do this is that when I pass a numpy array to robjects.FloatVector, the class of the resulting object is a str vector, which causes problems for my further analysis.
e.g.
x = pd.read_csv('x.csv', index_col=0).values.flatten()
x_ro = robjects.FloatVector(x)
r('class(x_ro)')
'<StrVector - Python:0x7fe605062098 / R:0xa16c158>
[str]'
Thank you very much!
Edit: I've already added x_ro to the environment; I forgot to copy it here:
robjects.globalenv["x_ro"] = x_ro
Regarding your first problem: if the x_num variable is in the format you want in the R environment, you can get a view of it in Python using the numpy.asarray() method (as stated in the documentation), so changes you make to this array in Python will also act on the underlying R vector:
my_view = numpy.asarray(r("x_num"))
It can also be done automatically if you add these lines of code:
from rpy2.robjects import numpy2ri
numpy2ri.activate()
So calling r("x_num") should return a numpy array when possible.
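A minimal sketch of that round trip (assuming the rpy2 2.x API used elsewhere in this answer):
import numpy as np
import rpy2.robjects as ro
from rpy2.robjects import numpy2ri

numpy2ri.activate()

ro.r("x <- c('1', '2', '3')")        # a character vector on the R side
ro.r("x_num <- as.numeric(x)")       # convert it to numeric in R
arr = np.asarray(ro.r("x_num"))      # view it from Python as a numpy array
print(arr, arr.dtype)                # [ 1.  2.  3.] float64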
Also, in your last snippet of code, are you sure that this is the "same" x_ro object, as you are not setting it in the R environment?
I guess you should do something like :
x_ro = robjects.FloatVector(x)
robjects.globalenv["x_ro"] = x_ro
then try again r('class(x_ro)') and see if you have the correct output.
It is easier to identify an issue with a fully working example. Without one, I am tempted to say that it is working as expected.
In [1]: import rpy2.robjects as ro
In [2]: ro.vectors.FloatVector((1,2,3,4,5))
Out[2]:
<FloatVector - Python:0x7f3541c68788 / R:0x3541468>
[1.000000, 2.000000, 3.000000, 4.000000, 5.000000]
In [3]: ro.vectors.FloatVector(('1','2','3','4','5'))
Out[3]:
<FloatVector - Python:0x7f353bff7d88 / R:0x3541398>
[1.000000, 2.000000, 3.000000, 4.000000, 5.000000]
In [4]: ro.vectors.FloatVector(('1','2','3','a','5'))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-263bdc61f184> in <module>()
----> 1 ro.vectors.FloatVector(('1','2','3','a','5'))
/usr/local/lib/python3.5/dist-packages/rpy2/robjects/vectors.py in __init__(self, obj)
454
455 def __init__(self, obj):
--> 456 obj = FloatSexpVector(obj)
457 super(FloatVector, self).__init__(obj)
458
ValueError: Error while trying to convert element 3 to a double.
In [5]: ro.vectors.FloatVector(ro.vectors.StrVector(('1','2','3','a','5')))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-26578834d7ec> in <module>()
----> 1 ro.vectors.FloatVector(ro.vectors.StrVector(('1','2','3','a','5')))
/usr/local/lib/python3.5/dist-packages/rpy2/robjects/vectors.py in __init__(self, obj)
454
455 def __init__(self, obj):
--> 456 obj = FloatSexpVector(obj)
457 super(FloatVector, self).__init__(obj)
458
ValueError: Invalid SEXP type '16' (should be 14).
Having established that we are able to build R vectors of float from Python, we can look at whether binding it to a symbol in R and accessing that object from R makes any difference. It does not:
In [1]: import rpy2.robjects as ro
In [2]: v = ro.vectors.FloatVector((1,2,3,4,5))
In [3]: ro.globalenv['v'] = v
In [4]: ro.r("print(v)")
[1] 1 2 3 4 5
Out[4]:
<FloatVector - Python:0x7fb4791e5f08 / R:0x2f7eed0>
[1.000000, 2.000000, 3.000000, 4.000000, 5.000000]
In [5]: ro.r("class(v)")
Out[5]:
<StrVector - Python:0x7fb4791e5548 / R:0x2d02658>
['numeric']
