Set default value to sparse scipy matrix - python

Is there any way to set 1 as the default value for sparse matrix in scipy?
For example:
>>>M=scipy.sparse.dok_matrix((5,5), dtype=np.float, [default=1.])
>>>M.A
array([[1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.]])

You can use np.ones((5,5)), but note that this stores every entry explicitly, so the result is effectively dense and you lose the memory advantage of a sparse format:
M = scipy.sparse.dok_matrix(np.ones((5,5)))
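Scipy's sparse formats have no default-value parameter; they only pay off when most entries equal the implicit zero. If most entries share some other constant, one workaround is to store just the deviations from that constant and add it back when materializing. A sketch of that idea (the default/offset convention here is our own, not a scipy feature):

```python
import numpy as np
import scipy.sparse as sp

default = 1.0
S = sp.dok_matrix((5, 5), dtype=float)  # stores deviations from `default`

# Logically set M[2, 3] = 4.0 by storing the deviation 4.0 - 1.0:
S[2, 3] = 4.0 - default

# Materialize the dense equivalent only when needed:
M = S.toarray() + default
print(M[0, 0])  # 1.0 -- the "default" value
print(M[2, 3])  # 4.0
```

Only one entry is stored explicitly, so the memory behavior stays sparse as long as most of the matrix equals the chosen default.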


For a binary array sum(array) and numpy.count_nonzero(array) give different answers for big arrays, when array is uint8. Why?

I have 3D arrays filled with ones and zeros (created through pyellipsoid). The arrays are uint8. I wanted to know the number of 1s, so I used sum(sum(sum(array))), and it worked fine for small arrays (up to approx. 5000 entries).
I compared sum(sum(sum(array))) to numpy.count_nonzero(array) for a known number of nonzero entries. For bigger arrays, the answers from "sum" are always wrong and lower than they should be.
If I use float64 arrays it works fine with big arrays. If I change the data type to uint8 it does not work.
Why is that? I am sure there is a very simple reason, but I can't find an answer.
Small array example:
test = numpy.zeros((2,2,2))
test[0,0,0] = 1
test[1,0,0] = 1
In: test
Out:
array([[[1., 0.],
        [0., 0.]],

       [[1., 0.],
        [0., 0.]]])
In: sum(sum(sum(test)))
Out: 2.0
Big example (8000 entries, only one zero, 7999 ones):
test_big=np.ones((20,20,20))
test_big[0,0,0] = 0
test_big
Out[77]:
array([[[0., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]],

       [[1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]],

       [[1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]],

       ...,

       [[1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]],

       [[1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]],

       [[1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        ...,
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.],
        [1., 1., 1., ..., 1., 1., 1.]]])
In: sum(sum(sum(test_big)))
Out: 7999.0
So far so good. Here, the data type of the sum output is float64. But if I now change the data type of the array to the type that is used with pyellipsoid (uint8)...
In: test_big = test_big.astype('uint8')
In: sum(sum(sum(test_big)))
Out: 2879
So obviously 2879 is not 7999. Here, the data type of the sum output is int32 (-2147483648 to 2147483647) so this should be big enough for 7999, right? I guess it has something to do with the data type, but how? Why?
(I am using Spyder in Anaconda on Windows).
The issue is, as you guessed, integer overflow. If you take a look at sum(sum(test_big)), you will notice that the values are already wrong there.
The built-in sum() reduces one axis at a time, and the intermediate partial-sum arrays keep the uint8 dtype, so they wrap around at 256 long before the final total is formed.
Use np.sum() (or test_big.sum()) instead: NumPy promotes small integer dtypes to the platform default integer for its accumulator, so it returns the correct total regardless of the input dtype.
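To see the failure mode concretely, here is a minimal reproduction of the question's setup alongside the np.sum fix (the exact wrapped total varies with the NumPy version, but the built-in sum is always wrong here):

```python
import numpy as np

# 7999 ones and a single zero, stored as uint8:
test_big = np.ones((20, 20, 20), dtype=np.uint8)
test_big[0, 0, 0] = 0

# Built-in sum() reduces one axis at a time; the intermediate
# partial-sum arrays keep the uint8 dtype and wrap around at 256.
wrong = sum(sum(sum(test_big)))

# np.sum() accumulates in the platform default integer, so no overflow.
right = np.sum(test_big)

print(wrong == 7999)  # False: the partial sums overflowed
print(right == 7999)  # True
```

np.count_nonzero(test_big) gives the same correct count and is even faster for this "count the ones" use case.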

How to modify the value of numpy array based on another array?

I have a numpy array. I want to modify part of it at positions chosen by the elements of another array. For example:
import numpy as np
t1 = np.ones((10,3))
t2 = np.arange(10)
t1[np.where(t2>5)][:,2] = 10
print(t1)
What I want t1 to be:
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.]])
But the output of t1 is:
array([[1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.]])
What is the problem here?
The indexing order is backwards; it should be:
t1[:,2][np.where(t2>5)] = 10
Here t1[:,2] is a basic slice, which is a view of t1, so the fancy-indexed assignment into it modifies t1 itself. In your version, t1[np.where(t2>5)] is fancy indexing, which returns a copy, so the chained [:,2] = 10 writes into that copy and t1 is never touched.
output:
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.],
       [ 1.,  1., 10.]])
The three-argument form of np.where is arguably the clearest way to write this:
t1[:, 2] = np.where(t2 > 5,    # where t2 > 5
                    10,        # put 10
                    t1[:, 2])  # else keep the third column of t1
Besides the added clarity, for very large arrays this avoids materializing the integer index arrays that np.where(t2 > 5) returns; the condition is evaluated once in compiled code and only the resulting column is assigned back into t1.
You can do:
t1[np.where(t2>5), 2] = 10
Syntax: array[<row>, <col>]
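Another option is a boolean mask combined with the column index in a single indexing expression, which avoids chained indexing altogether — a small sketch:

```python
import numpy as np

t1 = np.ones((10, 3))
t2 = np.arange(10)

# The boolean mask t2 > 5 selects rows 6-9; combined with the column
# index 2 in one bracket, the assignment writes directly into t1:
t1[t2 > 5, 2] = 10

print(t1)  # rows 6-9 now have 10 in the third column
```

np.where(t2 > 5) is only needed when you want the integer indices themselves; for masking, the boolean array works directly.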

I don't know the meaning of the code "[:, row:row]"

I have the code:
g, g_err = data[:, 4:6].T
I don't understand the meaning of [:, 4:6], especially the first :.
Also, does .T mean transpose?
You have a 2D array called data. Your code takes all elements along the first dimension (the :), then only elements 4 and 5 along the second (the 4:6 slice), like this:
>>> np.ones( (7,7 ))
array([[ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1.]])
>>> np.ones( (7,7 ))[:,4:6]
array([[ 1., 1.],
       [ 1., 1.],
       [ 1., 1.],
       [ 1., 1.],
       [ 1., 1.],
       [ 1., 1.],
       [ 1., 1.]])
And yeah, .T means transpose.
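Putting the pieces together, the pattern in the question unpacks two columns into two 1-D variables — a small sketch with made-up data (a 6x4 table, with columns 2 and 3 standing in for the question's 4 and 5):

```python
import numpy as np

# Hypothetical stand-in for the question's `data`:
data = np.arange(24.0).reshape(6, 4)

# data[:, 2:4] keeps all rows (:) and columns 2 and 3, giving a (6, 2)
# array; .T transposes it to (2, 6), so each row of the transpose is one
# original column, and tuple unpacking assigns one variable per column.
g, g_err = data[:, 2:4].T

print(g.shape)  # (6,)
```

This is a common idiom for pulling a value column and its error column out of a data table in one line.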

In numpy, why is the multi-dimension array expression d[0][0:2][0][0] not returning two elements?

In [136]: d = np.ones((1,2,3,4))
In [167]: d[0][0][0:2][0]
Out[167]: array([ 1., 1., 1., 1.])
As shown above, why does it not return exactly 2 elements?
Look at the array itself; it should be self-explanatory:
>>> d
array([[[[ 1., 1., 1., 1.],
         [ 1., 1., 1., 1.],
         [ 1., 1., 1., 1.]],

        [[ 1., 1., 1., 1.],
         [ 1., 1., 1., 1.],
         [ 1., 1., 1., 1.]]]])
# first you grab the first and only element
>>> d[0]
array([[[ 1., 1., 1., 1.],
        [ 1., 1., 1., 1.],
        [ 1., 1., 1., 1.]],

       [[ 1., 1., 1., 1.],
        [ 1., 1., 1., 1.],
        [ 1., 1., 1., 1.]]])
# then you get the first element out of the two groups
>>> d[0][0]
array([[ 1., 1., 1., 1.],
       [ 1., 1., 1., 1.],
       [ 1., 1., 1., 1.]])
# thirdly, you get the first two elements as an array
>>> d[0][0][0:2]
array([[ 1., 1., 1., 1.],
       [ 1., 1., 1., 1.]])
# finally, you get the first element of that array
>>> d[0][0][0:2][0]
array([ 1., 1., 1., 1.])
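The takeaway is that each chained [...] indexes axis 0 of whatever the previous step returned; indexing every axis in a single bracket makes the intended axis explicit. A quick sketch:

```python
import numpy as np

d = np.ones((1, 2, 3, 4))

# Each chained bracket applies to axis 0 of the previous result, so the
# 0:2 slice selects rows of a (3, 4) block, not two scalar elements:
chained = d[0][0][0:2][0]
print(chained.shape)  # (4,)

# Indexing all axes in one bracket says exactly which axis to slice,
# e.g. two elements along the last axis:
two = d[0, 0, 0, 0:2]
print(two.shape)      # (2,)
```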

Adding numpy arrays

How can I add a small numpy array to a part of big numpy array?
My code is like:
import numpy as np
x = np.ones((10, 15))
I want to add a 3 by 3 np array to the middle or some location that I can designate.
If by adding you mean assigning values from the 3x3 matrix into your x matrix, then you can assign to a slice of x. Example -
x[row:row+3,col:col+3] = np.array([[1,2,3],[4,5,6],[7,8,9]]) #Your 3x3 array on right side.
Demo -
In [98]: x = np.ones((10,15))
In [99]: x[3:6,3:6] = np.array([[1,2,3],[4,5,6],[7,8,9]])
In [100]: x
Out[100]:
array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 2., 3., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 4., 5., 6., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 7., 8., 9., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.],
       [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1.]])
If by add, you meant to add the values up at the corresponding indexes, you can use += in the above slicing assignment. Example -
x[row:row+3,col:col+3] += np.array([[1,2,3],[4,5,6],[7,8,9]]) #Your 3x3 array on right side.
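If you want the small array placed in the middle rather than at a hard-coded offset, the top-left corner can be computed from the two shapes — a sketch assuming the small array fits inside x:

```python
import numpy as np

x = np.ones((10, 15))
small = np.arange(1, 10).reshape(3, 3)  # the 3x3 block to place

# Integer division centers the block (off by half a cell when the
# leftover space is odd):
row = (x.shape[0] - small.shape[0]) // 2  # (10 - 3) // 2 = 3
col = (x.shape[1] - small.shape[1]) // 2  # (15 - 3) // 2 = 6

x[row:row + small.shape[0], col:col + small.shape[1]] = small
print(x[3:6, 6:9])  # the placed 3x3 block
```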
