How to modify the value of numpy array based on another array? - python

I have a numpy array. I want to modify one column of it, but only at rows selected by another array. For example:
import numpy as np
t1 = np.ones((10,3))
t2 = np.arange(10)
t1[np.where(t2>5)][:,2] = 10
print(t1)
What I want t1 to be is:
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 10.],
[1., 1., 10.],
[1., 1., 10.],
[1., 1., 10.]])
But the output of t1 is:
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
What is the problem there?

The indexing is backwards: t1[np.where(t2>5)] uses fancy (advanced) indexing, which returns a copy, so the [:,2] = 10 assignment writes into that temporary copy and t1 itself is left untouched. It should be:
t1[:,2][np.where(t2>5)] = 10
output:
array([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 10.],
[ 1., 1., 10.],
[ 1., 1., 10.],
[ 1., 1., 10.]])
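The difference comes down to NumPy's copy-vs-view rules: fancy indexing produces a new array, while a basic slice like t1[:,2] is a view that writes through to the original. A small sketch illustrating both (same t1 and t2 as in the question):

```python
import numpy as np

t1 = np.ones((10, 3))
t2 = np.arange(10)

sub = t1[np.where(t2 > 5)]      # fancy indexing -> a new array (copy)
sub[:, 2] = 10                  # modifies only the copy
print(t1[:, 2])                 # t1 is unchanged: all ones

col = t1[:, 2]                  # basic slicing -> a view into t1
col[np.where(t2 > 5)] = 10      # writes through the view into t1
print(t1[:, 2])                 # last four entries are now 10
```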

The most pythonic way to do this is probably
t1[:, 2] = np.where(t2 > 5, # where t2 > 5
10, # put 10
t1[:, 2]) # into the third column of t1
Besides the added clarity, for very large arrays this can also be faster: there is no intermediate fancy-indexing step and no chained Python-level indexing - the selection and assignment for the whole column happen in a single vectorized, C-compiled call.

You can do:
t1[np.where(t2>5), 2] = 10
Syntax: array[<rows>, <col>] - the row indices and the column index go in a single indexing expression.
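A variant of this answer: a boolean mask can be used directly in place of np.where, combining the row mask and the column index in one assignment:

```python
import numpy as np

t1 = np.ones((10, 3))
t2 = np.arange(10)

# Boolean mask selects the rows, 2 selects the column; no np.where needed
t1[t2 > 5, 2] = 10
print(t1)
```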

Related

For a binary array sum(array) and numpy.count_nonzero(array) give different answers for big arrays, when array is uint8. Why?

I have 3D arrays filled with ones and zeros (created through pyellipsoid). The array dtype is uint8. I wanted to know the number of 1s, so I used sum(sum(sum(array))), and it worked fine for small arrays (up to approx. 5000 entries).
I compared sum(sum(sum(array))) to numpy.count_nonzero(array) for a known number of nonzero entries. For bigger arrays the answers from "sum" are always wrong and lower than they should be.
If I use float64 arrays it works fine with big arrays; if I change the data type to uint8 it does not.
Why is that? I am sure there is a very simple reason, but I can't find an answer.
Small array example:
test = numpy.zeros((2,2,2))
test[0,0,0] = 1
test[1,0,0] = 1
In: test
Out:
array([[[1., 0.],
[0., 0.]],
[[1., 0.],
[0., 0.]]])
In: sum(sum(sum(test)))
Out: 2.0
Big example (8000 entries, only one zero, 7999 ones):
test_big=np.ones((20,20,20))
test_big[0,0,0] = 0
test_big
Out:
array([[[0., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]],
[[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]],
[[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]],
...,
[[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]],
[[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]],
[[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]]])
In: sum(sum(sum(test_big)))
Out: 7999.0
So far so good. Here, the data type of the sum output is float64. But if I now change the data type of the array to the type that is used with pyellipsoid (uint8)...
In: test_big = test_big.astype('uint8')
In: sum(sum(sum(test_big)))
Out: 2879
So obviously 2879 is not 7999. Here, the data type of the sum output is int32 (-2147483648 to 2147483647) so this should be big enough for 7999, right? I guess it has something to do with the data type, but how? Why?
(I am using Spyder in Anaconda on Windows).
The issue is, as you guessed, integer overflow - but it happens in the intermediate sums, not in the final result. The built-in sum() adds the uint8 sub-arrays elementwise, and each of those partial sums is still a uint8 array, so any entry that exceeds 255 wraps around modulo 256. If you take a look at sum(sum(test_big)), you will notice the values are already wrong there.
Use np.sum() (or array.sum()) instead: for integer inputs narrower than the platform's default integer, NumPy accumulates in a wider dtype, so it gives the correct total regardless of the array's data type.
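A quick reproduction of the question's setup, comparing the two approaches on the same uint8 array:

```python
import numpy as np

test_big = np.ones((20, 20, 20), dtype=np.uint8)
test_big[0, 0, 0] = 0

# Built-in sum: the elementwise uint8 partial sums wrap modulo 256
wrong = sum(sum(sum(test_big)))

# NumPy accumulates in a wider integer dtype, so the total is correct
right = np.sum(test_big)
print(wrong, right)
```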

I don't know the meaning of the code "[:, row:row]"

I have the code:
g, g_err = data[:, 4:6].T
I don't know the meaning of [:, 4:6], especially the first :. And does .T mean transpose?
You have a 2D array called data. The code takes all rows (that is what the first : means), then only the elements at indices 4 and 5 in the second dimension (the slice 4:6 stops before index 6), something like this:
>>> np.ones( (7,7 ))
array([[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1.]])
>>> np.ones( (7,7 ))[:,4:6]
array([[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.]])
>>>
And yeah, .T means transpose.
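Putting it together with the unpacking from the question (data here is a made-up stand-in for the real array):

```python
import numpy as np

data = np.arange(42).reshape(7, 6)   # hypothetical example data

cols = data[:, 4:6]    # all rows, columns 4 and 5 -> shape (7, 2)
g, g_err = cols.T      # transpose to shape (2, 7), then unpack the two rows
print(g)
print(g_err)
```

Transposing first is what makes the two-variable unpacking work: iterating over the transposed array yields the two columns one at a time.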

Set default value to sparse scipy matrix

Is there any way to set 1 as the default value for sparse matrix in scipy?
For example:
>>>M=scipy.sparse.dok_matrix((5,5), dtype=np.float, [default=1.])
>>>M.A
array([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
You can use np.ones((5,5)):
M = scipy.sparse.dok_matrix(np.ones((5,5)))

Numpy array change value of inner elements

I create an array in numpy as
a = np.ones([5 , 5])
I will then get an output as 5x5 array full of 1s. I would like to keep the outer elements as 1, and inner elements as 0. So I would like output to be:
[[ 1. 1. 1. 1. 1.]
[ 1. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 1.]
[ 1. 1. 1. 1. 1.]]
Is there any way to do this in a single line? (I have read about inner() but I don't know how to get it working with this single array.)
Yes, we can use slicing for this:
a[1:-1, 1:-1] = 0
Or for a generic multidimensional array:
a[(slice(1, -1),) * a.ndim] = 0
though usually it would be better to construct such a matrix directly rather than overwrite it. The slicing approach produces:
>>> a = np.ones([5 , 5])
>>> a[1:-1, 1:-1] = 0
>>> a
array([[1., 1., 1., 1., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 1., 1., 1., 1.]])
and for instance for a 3d case (imagine some sort of cube):
>>> a = np.ones([5 , 5, 5])
>>> a[(slice(1, -1),) * a.ndim] = 0
>>> a
array([[[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]],
[[1., 1., 1., 1., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 1., 1., 1., 1.]],
[[1., 1., 1., 1., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 1., 1., 1., 1.]],
[[1., 1., 1., 1., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 0., 0., 0., 1.],
[1., 1., 1., 1., 1.]],
[[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]]])
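One way to construct the matrix directly, instead of creating a ones array and overwriting its interior, is to pad a zeros array with a one-element border (a sketch using np.pad with its default constant mode):

```python
import numpy as np

# Zeros in the interior, padded on every side with a border of 1s
a = np.pad(np.zeros((3, 3)), pad_width=1, constant_values=1)
print(a)
```

This generalizes to any number of dimensions without building the slice tuple by hand.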

In numpy, why does multi-dimensional indexing d[0][0:2][0][0] not return two elements?

In [136]: d = np.ones((1,2,3,4))
In [167]: d[0][0][0:2][0]
Out[167]: array([ 1., 1., 1., 1.])
As shown above, why is it not returning exactly 2 elements?
Look at the array itself; it should be self-explanatory:
>>> d
array([[[[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]],
[[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]]]])
# first you grab the first and only element
>>> d[0]
array([[[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]],
[[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]]])
# then you get the first element out of the two groups
>>> d[0][0]
array([[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]])
# thirdly, you get the first two elements (still an array)
>>> d[0][0][0:2]
array([[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]])
# finally, you get the first element of that slice
>>> d[0][0][0:2][0]
array([ 1., 1., 1., 1.])
