I declared ReLU function like this:
def relu(x):
    return (x if x > 0 else 0)
and a ValueError occurred; its traceback message is
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
But if I change ReLU function with numpy, it works:
def relu_np(x):
return np.maximum(0, x)
Why doesn't this function (relu(x)) work? I cannot understand it...
================================
Used code:
>>> x = np.arange(-5.0, 5.0, 0.1)
>>> y = relu(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "filename", line, in relu
return (x if x > 0 else 0)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
TL;DR: Your first function is not vectorized, which means it expects a single float/int value as input, while your second function takes advantage of NumPy's vectorization.
Vectorization in NumPy
Your second function uses numpy functions which are vectorized and run on each individual element of the array.
import numpy as np
arr = np.arange(-5.0, 5.0, 0.5)
def relu_np(x):
    return np.maximum(0, x)
relu_np(arr)
# array([0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 1. ,
# 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
Your first function, on the other hand, uses a ternary expression (x if x > 0 else 0), which expects a single value as input and outputs a single value. This is why it works when you pass a single element, but fails when you pass an array, because it does not run on each element independently.
def relu(x):
    return (x if x > 0 else 0)

relu(-8)
## 0

relu(arr)
## ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Note: The reason for this error is the ternary expression you are using (x if x > 0 else 0). The condition x > 0 evaluates to a single True or False for a given integer/float value. When you pass an array, however, it produces an array of booleans, and you would need something like any() or all() to aggregate those values into a single boolean before you can apply the if/else clause.
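For example (a quick illustration, reusing the arr defined above), the comparison yields a boolean array, and any()/all() reduce it to the single boolean that an if statement needs:
import numpy as np

arr = np.arange(-5.0, 5.0, 0.5)

mask = arr > 0   # boolean array, one True/False per element
mask.any()       # True  -> at least one element is > 0
mask.all()       # False -> not every element is > 0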
Solutions -
There are a few ways you can make this work -
1. Using np.vectorize (not recommended, lower performance than pure numpy approach)
import numpy as np
arr = np.arange(-5.0, 5.0, 0.5)
def relu(x):
    return (x if x > 0.0 else 0.0)
relu_vec = np.vectorize(relu)
relu_vec(arr)
# array([0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 1. ,
# 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
2. Iteration over the array with list comprehension
import numpy as np
arr = np.arange(-5.0, 5.0, 0.5)
def relu(x):
    return (x if x > 0 else 0)

np.array([relu(i) for i in arr])
# array([0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 1. ,
# 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
Keep in mind that x > 0 is an array of booleans, a mask if you like:
array([False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True])
So it does not make sense to write if x > 0, since x contains several elements, each of which can be True or False. This is the source of your error.
Your second (numpy) implementation is good! Another implementation (maybe clearer?) might be:
def relu(x):
    return x * (x > 0)
In this implementation, we do an elementwise multiplication of x (a range of values along the x axis) by 0 wherever the element of x is not greater than 0, and by 1 wherever it is.
Disclaimer: please someone correct me if I'm wrong, I'm not 100% sure about how numpy does things.
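Here is a quick self-contained check of this version (my own example values, not from the question):
import numpy as np

def relu(x):
    return x * (x > 0)   # (x > 0) is a boolean array that acts as 0/1 in the product

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu(x)
# array([0. , 0. , 0. , 1.5, 3. ])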
Your function relu expects a single numerical value, compares it to 0, and returns whichever is larger. x if x > 0 else 0 is equivalent to max(x, 0), where max is a built-in Python function.
relu_np, on the other hand, uses the numpy function maximum, which accepts two numbers, arrays, or iterables. That means you can pass your numpy array x and it applies the maximum function to every single item automatically. I believe this is called 'vectorized'.
To make your relu function work the way it is, you would need to call it differently: you would have to apply it to every element manually. You could do something like y = np.array(list(map(relu, x))).
Related
Hello, I have the following code, which transposes a matrix and compares it to the original matrix in order to check whether the matrix is symmetrical.
def sym(x):
    mat = x
    transpose = 0
    (m, n) = x.shape
    if(m != n):
        print("Matrix must be square")
        return
    else:
        transpose = ([[x[j][i] for j in range(len(x))] for i in range(len(x[0]))])
        print(transpose)
        print(x)
        if(transpose == mat):
            print("Symmetrical")
        else:
            print("Not symmetrical")
    return
A = np.random.rand(5,5)
SymMatrix = (A + A.T)/2
sym(SymMatrix)
I receive the following data from the prints:
[[0.17439677739055337, 0.4578837676040824, 0.35842887026076997, 0.8610456087667133, 0.2967753611380975], [0.4578837676040824, 0.6694101430064164, 0.6718596412137644, 0.5107862111816033, 0.6429698779871544], [0.35842887026076997, 0.6718596412137644, 0.5387701626024015, 0.708555677626843, 0.5756758392540096], [0.8610456087667133, 0.5107862111816033, 0.708555677626843, 0.37095928395815847, 0.7062962356554356], [0.2967753611380975, 0.6429698779871544, 0.5756758392540096, 0.7062962356554356, 0.3807024190850993]]
[[0.17439678 0.45788377 0.35842887 0.86104561 0.29677536]
[0.45788377 0.66941014 0.67185964 0.51078621 0.64296988]
[0.35842887 0.67185964 0.53877016 0.70855568 0.57567584]
[0.86104561 0.51078621 0.70855568 0.37095928 0.70629624]
[0.29677536 0.64296988 0.57567584 0.70629624 0.38070242]]
along with this error:
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The first issue I see is that the transposed matrix has extra decimals in each value, which I don't understand, and I am not sure whether this is the cause of the error. Any help on this would be appreciated. Thanks!
What you are doing is trying to compare two ndarrays, which results in something like this:
transpose == mat
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]])
The if statement does not know which boolean it should pick: should it be True if any of the values are True, or only if all values are True?
To indicate that all values should be True, add .all() to your statement:
if((transpose == mat).all()):
This results in a single boolean (in this case: True).
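Putting it together, a minimal sketch of the symmetry check (my own rewrite, using x.T so that both sides are ndarrays; np.allclose would also work if you want a floating point tolerance). The extra decimals you see in the first print are only a display difference between Python's list repr and numpy's rounded array repr; the underlying values are the same.
import numpy as np

def is_symmetric(x):
    m, n = x.shape
    if m != n:
        print("Matrix must be square")
        return False
    # elementwise comparison gives a boolean matrix; .all() reduces it to one bool
    return bool((x == x.T).all())

A = np.random.rand(5, 5)
SymMatrix = (A + A.T) / 2
print(is_symmetric(SymMatrix))   # True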
I am working with the implementation of Advances in Financial Machine Learning in order to get the scores of a cross-validation in Python. The code I have is the following:
cv = PurgedKFold(n_splits = 10,
samples_info_sets = pd.Series(train['close_datetime'].values, index = train['opendatetime'].values),
pct_embargo = 0.02)
scores = ml_cross_val_score(classifier = classifier,
X = X, y = y, cv_gen = cv)
The problem is that when I run the last line, I get the following error:
IndexError: .iloc requires numeric indexers, got [array([False, False, False, ..., False, False, False])
array([False, False, False, ..., False, False, False])
array([False, False, False, ..., False, False, False]) ...
array([False, False, False, ..., True, False, False]) 8428
array([False, False, False, ..., False, False, True])]
Something is going wrong with my code; maybe I am setting up the format of the X and y dataframes incorrectly for the cross-validator. Can anyone help me understand why this error is being raised?
After some trials, I found the solution. This error happens because PurgedKFold needs the index values to be unique. If two index values ('opendatetime') are equal, an error is raised when the data is split into the different partitions.
The solution is to check whether there are rows with the same index. If you change the index values of those equal occurrences so that they differ, it works!
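A quick way to check for duplicated index values with pandas (a sketch; it assumes train is the DataFrame from the snippet above and that 'opendatetime' is the column used as the index):
import pandas as pd

# assumed: 'train' is the DataFrame used when building samples_info_sets above
info_sets = pd.Series(train['close_datetime'].values,
                      index=train['opendatetime'].values)

dupes = info_sets.index.duplicated(keep=False)   # True for every repeated index value
print(info_sets.index[dupes])                    # the offending 'opendatetime' values

# one possible fix (the answer above instead suggests making these values unique):
train = train.drop_duplicates(subset='opendatetime', keep='first')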
I have an array like this:
k = np.array([[ 1. , -120.8, 39.5],
[ 0. , -120.5, 39.5],
[ 1. , -120.4, 39.5],
[ 1. , -120.3, 39.5]])
I am trying to remove the following row, which is at index 1:
b=np.array([ 0. , -120.5, 39.5])
I have tried the traditional methods like the following:
k == b  # tried to get all True values at index 1, but got this instead
array([[False, False, False],
[ True, False, False],
[False, False, False],
[False, False, False]])
Another thing I tried:
k[~(k[:,0]==0.) & (k[:,1]==-120.5) & (k[:,1]==39.5)]
Got the result like this:
array([], shape=(0, 3), dtype=float64)
I am really surprised that the above methods are not working. By the way, in the first method I am just trying to get the index so that I can use np.delete later. Also, for this problem I am assuming I don't know the index.
Both k and b are floats, so equality comparisons are subject to floating point inaccuracies. Use np.isclose instead:
k[~np.isclose(k, b).all(axis=1)]
# array([[ 1. , -120.8, 39.5],
# [ 1. , -120.4, 39.5],
# [ 1. , -120.3, 39.5]])
Where
np.isclose(k, b).all(axis=1)
# array([False, True, False, False])
tells you which row of k matches b.
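If you also want the row index (e.g. to use np.delete, as mentioned in the question), a small sketch:
import numpy as np

k = np.array([[ 1. , -120.8,  39.5],
              [ 0. , -120.5,  39.5],
              [ 1. , -120.4,  39.5],
              [ 1. , -120.3,  39.5]])
b = np.array([ 0. , -120.5,  39.5])

idx = np.where(np.isclose(k, b).all(axis=1))[0]   # row indices that match b
# array([1])

np.delete(k, idx, axis=0)                         # same rows as the boolean-mask version
# array([[   1. , -120.8,   39.5],
#        [   1. , -120.4,   39.5],
#        [   1. , -120.3,   39.5]])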
Given a numpy array:
x = np.array([False, True, True, False, False, False, False, False, True, False])
How do I find the number of times the value transitions from False to True?
For the above example, the answer would be 2. I don't want to include transitions from True to False in the count.
From the answers to How do I identify sequences of values in a boolean array?, the following produces the indices at which the values are about to change, which is not what I want as this includes True-False transitions.
np.argwhere(np.diff(x)).squeeze()
# [0 2 7 8]
I know that this can be done by looping through the array, however I was wondering if there was a faster way to do this?
Get one-off slices: x[:-1] (starting from the first element and ending at the second-to-last) and x[1:] (starting from the second element and going to the end). Then look for places where the first slice is less than the second one, i.e. catch the pattern [False, True], and finally get the count with ndarray.sum() or np.count_nonzero():
(x[:-1] < x[1:]).sum()
np.count_nonzero(x[:-1] < x[1:])
Another way would be to look for the first slice being False and the second one being True, the idea again being to catch that pattern of [False, True]:
(~x[:-1] & x[1:]).sum()
np.count_nonzero(~x[:-1] & x[1:])
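For the x in the question, both expressions give 2 (a quick check, reconstructing the setup):
import numpy as np

x = np.array([False, True, True, False, False, False, False, False, True, False])

(x[:-1] < x[1:]).sum()             # 2
np.count_nonzero(~x[:-1] & x[1:])  # 2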
I kind of like to use the numpy function roll for this kind of problem...
np.roll rotates the array to the left by some step length (-1, -2, ...) or to the right (1, 2, ...):
import numpy as np
np.roll(x,-1)
...this will give x but shifted one step to the left:
array([ True, True, False, False, False, False, False, True, False, False],
dtype=bool)
A False followed by a True can then be expressed as:
~x & np.roll(x,-1)
array([ True, False, False, False, False, False, False, True, False, False],
dtype=bool)
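To turn that into the count, sum the boolean result. Note that np.roll wraps around (the last element gets paired with the first), so it is safer to drop the final position before summing (my own addition, not part of the original answer):
import numpy as np

x = np.array([False, True, True, False, False, False, False, False, True, False])

(~x & np.roll(x, -1))[:-1].sum()   # drop the wrapped-around last position
# 2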
I have a large amount of data in a matrix x and I need to analyze some submatrices.
I am using the following code to select the submatrix:
>>> import numpy as np
>>> x = np.random.normal(0,1,(20,2))
>>> x
array([[-1.03266826, 0.04646684],
[ 0.05898304, 0.31834926],
[-0.1916809 , -0.97929025],
[-0.48837085, -0.62295003],
[-0.50731017, 0.50305894],
[ 0.06457385, -0.10670002],
[-0.72573604, 1.10026385],
[-0.90893845, 0.99827162],
[ 0.20714399, -0.56965615],
[ 0.8041371 , 0.21910274],
[-0.65882317, 0.2657183 ],
[-1.1214074 , -0.39886425],
[ 0.0784783 , -0.21630006],
[-0.91802557, -0.20178683],
[ 0.88268539, -0.66470235],
[-0.03652459, 1.49798484],
[ 1.76329838, -0.26554555],
[-0.97546845, -2.41823586],
[ 0.32335103, -1.35091711],
[-0.12981597, 0.27591674]])
>>> index = x[:,1] > 0
>>> index
array([ True, True, False, False, True, False, True, True, False,
True, True, False, False, False, False, True, False, False,
False, True], dtype=bool)
>>> x1 = x[index, :] #x1 is a copy of the submatrix
>>> x1
array([[-1.03266826, 0.04646684],
[ 0.05898304, 0.31834926],
[-0.50731017, 0.50305894],
[-0.72573604, 1.10026385],
[-0.90893845, 0.99827162],
[ 0.8041371 , 0.21910274],
[-0.65882317, 0.2657183 ],
[-0.03652459, 1.49798484],
[-0.12981597, 0.27591674]])
>>> x1[0,0] = 1000
>>> x1
array([[ 1.00000000e+03, 4.64668400e-02],
[ 5.89830401e-02, 3.18349259e-01],
[ -5.07310170e-01, 5.03058935e-01],
[ -7.25736045e-01, 1.10026385e+00],
[ -9.08938455e-01, 9.98271624e-01],
[ 8.04137104e-01, 2.19102741e-01],
[ -6.58823174e-01, 2.65718300e-01],
[ -3.65245877e-02, 1.49798484e+00],
[ -1.29815968e-01, 2.75916735e-01]])
>>> x
array([[-1.03266826, 0.04646684],
[ 0.05898304, 0.31834926],
[-0.1916809 , -0.97929025],
[-0.48837085, -0.62295003],
[-0.50731017, 0.50305894],
[ 0.06457385, -0.10670002],
[-0.72573604, 1.10026385],
[-0.90893845, 0.99827162],
[ 0.20714399, -0.56965615],
[ 0.8041371 , 0.21910274],
[-0.65882317, 0.2657183 ],
[-1.1214074 , -0.39886425],
[ 0.0784783 , -0.21630006],
[-0.91802557, -0.20178683],
[ 0.88268539, -0.66470235],
[-0.03652459, 1.49798484],
[ 1.76329838, -0.26554555],
[-0.97546845, -2.41823586],
[ 0.32335103, -1.35091711],
[-0.12981597, 0.27591674]])
>>>
but I would like x1 to be only a pointer or something like that. Copying the data every time I need a submatrix is too expensive for me.
How can I do that?
EDIT:
Apparently there is no solution with numpy arrays. Are pandas data frames better from this point of view?
The information for your array x is summarized in the .__array_interface__ property
In [433]: x.__array_interface__
Out[433]:
{'descr': [('', '<f8')],
'strides': None,
'data': (171396104, False),
'typestr': '<f8',
'version': 3,
'shape': (20, 2)}
It has the array shape, strides (default here), and pointer to the data buffer. A view can point to the same data buffer (possibly further along), and have its own shape and strides.
But indexing with your boolean mask can't be summarized in those few numbers. Either it has to carry the index array all the way through, or copy the selected items from the x data buffer. numpy chooses to copy. You have a choice of when to apply the index: now, or further down the calling stack.
Since index is an array of type bool, you are doing advanced indexing. And the docs say: "Advanced indexing always returns a copy of the data."
This makes a lot of sense. Compared to basic indexing, where you only need to know the start, stop and step, advanced indexing can pick any values from the original array without such a simple rule. Supporting a view here would mean storing a lot of extra metadata about which indices are referenced, which might use more memory than a copy.
If you can manage with a traditional slice such as
x1 = x[3:8]
then it will be just a pointer (a view into the same data).
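You can verify the difference (a quick check of my own, not from the original answer):
import numpy as np

x = np.random.normal(0, 1, (20, 2))

x1 = x[3:8]                      # basic slicing -> a view, no data copied
print(np.shares_memory(x, x1))   # True

x2 = x[x[:, 1] > 0]              # boolean (advanced) indexing -> a copy
print(np.shares_memory(x, x2))   # False

x1[0, 0] = 1000
print(x[3, 0])                   # 1000.0, the change is visible through x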
Have you looked at using masked arrays? You might be able to do exactly what you want.
x = np.array([[0.12, 0.23],
              [1.23, 3.32],
              ...
              [0.75, 1.23]])
data = np.array([[False, False],
[True, True],
...
[True, True]])
x1 = np.ma.array(x, mask=data)
## x1 can be worked on and only includes elements of x where data==False
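With a masked array, reductions skip the masked entries, e.g. (a small self-contained sketch, since the x and data arrays above are abbreviated):
import numpy as np

x = np.array([[0.12, 0.23],
              [1.23, 3.32],
              [0.75, 1.23]])
mask = np.array([[False, False],
                 [ True,  True],
                 [ True,  True]])

x1 = np.ma.array(x, mask=mask)
print(x1.mean())   # ~0.175, computed only from the unmasked 0.12 and 0.23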