Say I have the array [[1,0,0],[0,1,0],[0,0,1]] (let's call it So), created with numpy.eye(3).
How can I set the elements below the main diagonal to 2 and 3, like this: [[1,0,0],[2,1,0],[3,2,1]]? In other words, how can I assign whole diagonals of an array to a different set of values?
I know I could use numpy.concatenate to join 3 vectors and I know how to change rows/columns but I can't figure out how to change diagonals below the main diagonal.
I tried np.diagonal(So,-1) = 2*np.diagonal(So,-1) to change the diagonal right below the main one, but I get the error message "cannot assign to function call".
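For reference, those sub-diagonals can also be written directly with integer index arrays (a minimal sketch; the error above appears because the left-hand side of an assignment cannot be a function call):
import numpy as np
So = np.eye(3, dtype=int)
i = np.arange(1, 3)
So[i, i - 1] = 2   # first diagonal below the main one
So[2, 0] = 3       # second diagonal below (a single element in the 3x3 case)
# So is now [[1, 0, 0], [2, 1, 0], [3, 2, 1]]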
I would not start from numpy.eye but rather from numpy.ones, and use numpy.tril + cumsum to compute the numbers on the lower triangle:
import numpy as np
np.tril(np.ones((3,3))).cumsum(axis=0).astype(int)
output:
array([[1, 0, 0],
[2, 1, 0],
[3, 2, 1]])
Reversed output (from a comment), assuming the array is square:
n = 3
a = np.tril(np.ones((n,n)))
(a*(n+2)-np.eye(n)*n-a.cumsum(axis=0)).astype(int)
Output:
array([[1, 0, 0],
[3, 1, 0],
[2, 3, 1]])
Output for n=5:
array([[1, 0, 0, 0, 0],
[5, 1, 0, 0, 0],
[4, 5, 1, 0, 0],
[3, 4, 5, 1, 0],
[2, 3, 4, 5, 1]])
You can use np.fill_diagonal and index the matrix so that the diagonal you want becomes the principal diagonal of the indexed view. This is a good solution, supposing you want to put in values other than just 2 and 3:
import numpy as np
q = np.eye(3)
# If you want the first diagonal below the principal one,
# index q[1:,:] (a 2x3 view, but np.fill_diagonal also works on non-square arrays)
val = 2
np.fill_diagonal(q[1:,:], val)
# Note that you can pass a single value 'val' or
# an array with values of the corresponding size, e.g.
# np.fill_diagonal(q[1:,:], [2, 2])
# Then do the same for the next diagonal down:
np.fill_diagonal(q[2:,:], 3)
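Running the above (the eye array is float, so it prints with decimals), q ends up as:
array([[1., 0., 0.],
       [2., 1., 0.],
       [3., 2., 1.]])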
You could follow this approach:
import numpy as np
def func(n):
    return np.array([np.array(list(range(i, 0, -1)) + [0,] * (n - i)) for i in range(1, n + 1)])
func(3)
Output:
array([[1, 0, 0],
[2, 1, 0],
[3, 2, 1]])
Let's say I have a matrix that contains 5 rows and 4 columns:
arr = [[1,1,1,1],
[2,2,2,2],
[3,3,3,3],
[4,4,4,4],
[5,5,5,5]]
and I want to randomly zero out/mask a certain percentage of this matrix row-wise. So if I set the percentage to be zeroed to 40%, I will get the following:
arr = [[0,0,0,0],
[2,2,2,2],
[3,3,3,3],
[0,0,0,0],
[5,5,5,5]]
what would be a good way to achieve this? Thanks!
One way to achieve your task is the following (set num_zero_rows to the number of rows to zero out):
import random
arr = [[1,1,1,1],
[2,2,2,2],
[3,3,3,3],
[4,4,4,4],
[5,5,5,5]]
num_zero_rows = 2
zero_idxs = set(random.sample(range(len(arr)), num_zero_rows))
arr = [([0] * len(arr[i]) if i in zero_idxs else arr[i])
for i in range(len(arr))]
print(arr)
Output:
[[0, 0, 0, 0], [2, 2, 2, 2], [0, 0, 0, 0], [4, 4, 4, 4], [5, 5, 5, 5]]
Or a slightly shorter/cleaner/faster variant of the same code:
import random
arr = [[1,1,1,1],
[2,2,2,2],
[3,3,3,3],
[4,4,4,4],
[5,5,5,5]]
num_zero_rows = 2
for i in random.sample(range(len(arr)), num_zero_rows):
    arr[i] = [0] * len(arr[i])
print(arr)
You can sample an indicator vector using torch.bernoulli:
torch.bernoulli(0.4 * torch.ones_like(arr[:, :1]))
Once you have this vector, you can multiply it with arr:
out = torch.bernoulli(0.4 * torch.ones_like(arr[:, :1])) * arr
And get the sampled array you want.
You should also look at dropout functions.
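A self-contained sketch of that idea (assuming arr is a float tensor; note that torch.bernoulli(p) yields 1 with probability p, so p is the keep-probability and about 40% of rows are zeroed when p = 0.6):
import torch
arr = torch.tensor([[1., 1., 1., 1.],
                    [2., 2., 2., 2.],
                    [3., 3., 3., 3.],
                    [4., 4., 4., 4.],
                    [5., 5., 5., 5.]])
keep = torch.bernoulli(0.6 * torch.ones_like(arr[:, :1]))  # (5, 1) column of 0s and 1s
out = keep * arr  # broadcasts across columns, zeroing whole rows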
Simple. If your matrix has N rows, pick a list of indices from [0, N-1]:
inds = np.arange(N)
M = int(N * 40 / 100) # 40% of rows
inds = np.random.choice(inds, M, replace=False) # without replacement
...then just assign zeros to those specific rows, across all columns:
arr[inds, :] = 0.
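Put together as a runnable sketch (assuming arr is converted to a numpy array first; names as above):
import numpy as np
arr = np.array([[1,1,1,1],
                [2,2,2,2],
                [3,3,3,3],
                [4,4,4,4],
                [5,5,5,5]])
N = arr.shape[0]
M = int(N * 40 / 100)  # 40% of rows
inds = np.random.choice(np.arange(N), M, replace=False)  # without replacement
arr[inds, :] = 0
print(arr)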
You can do this,
import random
arr = [[1,1,1,1],
[2,2,2,2],
[3,3,3,3],
[4,4,4,4],
[5,5,5,5]]
for i in arr:
    if random.randint(0, 1):  # if 1 is hit, the entire row will be zeroed out
        for j in range(len(i)):
            i[j] = 0
output try 1:
[[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3],
[0, 0, 0, 0],
[0, 0, 0, 0]]
output try 2:
[[0, 0, 0, 0],
[2, 2, 2, 2],
[0, 0, 0, 0],
[0, 0, 0, 0],
[5, 5, 5, 5]]
import numpy as np
import itertools as it
SPIN_POS = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1],
[2, 2, 0], [3, 3, 0], [3, 2, 1], [2, 3, 1],
[2, 0, 2], [3, 1, 2], [3, 0, 3], [2, 1, 3],
[0, 2, 2], [1, 3, 2], [1, 2, 3], [0, 3, 3]
]) / 4
def gen_posvecs(xdim:int, ydim:int, zdim:int):
    """
    Generates position vectors of site pairs in the lattice of size xdim,ydim,zdim
    :param x,y,z is the number of unit cells in the x,y,z directions;
    :returns array containing the position vectors
    """
    poss = np.zeros((xdim,ydim,zdim,16,3))
    for x,y,z,s in it.product(range(xdim), range(ydim), range(zdim), range(16)):
        poss[x,y,z,s] = np.array([x,y,z]) + SPIN_POS[s]
    return poss
A = gen_posvecs(4,4,4) # A.shape = (4,4,4,16,3)
B = np.subtract.outer(A[...,-1], A) # my attempt at a soln
assert all(A[1,2,0,12] - A[0,1,3,11] == B[1,2,0,12,0,1,3,11]) # should give true
Consider the above code. I have an array A of shape (4,4,4,16,3), which represents 3D position vectors in a lattice (the last axis of dim 3 are the x,y,z coordinates). The first 4 dimensions index the site in the lattice.
What I want
I would like to generate from A an array containing all possible separation vectors between sites in the lattice. This means an output array B of shape (4,4,4,16,4,4,4,16,3): the first 4 dimensions index site i, the next 4 index site j, and the last dimension holds the (x,y,z) components of the position-vector difference.
i.e., A[a,b,c,d]: shape (3,) is the (x,y,z) of the first site; A[r,s,t,u]: shape (3,) is the (x,y,z) of the second site; then I want B[a,b,c,d,r,s,t,u] to be the (x,y,z) difference between the two.
My attempt
I know about the ufunc.outer function, as you can see in my attempted code. But I'm stuck on applying it while performing the element-wise subtraction only on the last axis (the (x,y,z)) of A.
In my attempt, B has the correct dimensions I want, but it is obviously wrong. Any hints? (barring the use of any for-loops)
I think you just need to do:
B = (A[:, :, :, :, np.newaxis, np.newaxis, np.newaxis, np.newaxis] -
A[np.newaxis, np.newaxis, np.newaxis, np.newaxis])
In your code:
import numpy as np
import itertools as it
SPIN_POS = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1],
[2, 2, 0], [3, 3, 0], [3, 2, 1], [2, 3, 1],
[2, 0, 2], [3, 1, 2], [3, 0, 3], [2, 1, 3],
[0, 2, 2], [1, 3, 2], [1, 2, 3], [0, 3, 3]
]) / 4
def gen_posvecs(xdim:int, ydim:int, zdim:int):
    """
    Generates position vectors of site pairs in the lattice of size xdim,ydim,zdim
    :param x,y,z is the number of unit cells in the x,y,z directions;
    :returns array containing the position vectors
    """
    poss = np.zeros((xdim,ydim,zdim,16,3))
    for x,y,z,s in it.product(range(xdim), range(ydim), range(zdim), range(16)):
        poss[x,y,z,s] = np.array([x,y,z]) + SPIN_POS[s]
    return poss
A = gen_posvecs(4,4,4) # A.shape = (4,4,4,16,3)
B = A[:, :, :, :, np.newaxis, np.newaxis, np.newaxis, np.newaxis] - A[np.newaxis, np.newaxis, np.newaxis, np.newaxis]
assert all(A[1,2,0,12] - A[0,1,3,11] == B[1,2,0,12,0,1,3,11])
# Does not fail
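Equivalently (a small stylistic variant, not from the original answer), the four new axes can be written after an Ellipsis, so the expression does not depend on counting the leading dimensions; the second operand also needs no explicit newaxis, since broadcasting pads missing leading axes with 1s:
B = A[..., None, None, None, None, :] - A  # same (4,4,4,16,4,4,4,16,3) result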
I have two arrays of sub-arrays, and I want to compute the accuracy: count the sub-arrays whose elements match in the same order. I'd rather not write a for loop that goes through each element with zip and sums them up; is there an easier way to do this?
The two arrays are as follows, and my current code for calculating the sum is below.
yp = [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
y = [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
sums = np.sum(yp == y)
I am getting an accuracy of zero.
Using your example:
yp = [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
y = [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
# First make the two arrays in question numpy arrays.
yp = np.array(yp)
y = np.array(y)
array_length = y.shape[1] # store length of sub arrays
equal_elements = np.array(yp) == np.array(y) # check all equal elements
sums = np.sum(equal_elements, 1) # sum the number of equal elements in each sub array, use axis 1 as each array/sample is axis 0
equal_arrays = np.where(sums==array_length)[0] # returns a tuple, so index first element immediately
number_equal_arrays = equal_arrays.shape[0] # how many sub arrays are fully equal
print('Number of equal arrays %d' % number_equal_arrays)
print('Accuracy %0.2f' % (number_equal_arrays/yp.shape[0]))
prints
Number of equal arrays 6
Accuracy 1.00
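A more compact route to the same numbers (a sketch assuming yp and y have already been converted to numpy arrays of equal shape) is to compare row-wise with np.all and average:
equal_rows = np.all(yp == y, axis=1)  # True where a whole sub array matches
print('Number of equal arrays %d' % equal_rows.sum())
print('Accuracy %0.2f' % equal_rows.mean())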
I would like to scale an array of shape (h, w) by a factor of n, resulting in an array of shape (h*n, w*n), with each original value expanded into an n-by-n block.
Say that I have a 2x2 array:
array([[1, 1],
[0, 1]])
I would like to scale the array to become 4x4:
array([[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1]])
That is, the value of each cell in the original array is copied into 4 corresponding cells in the resulting array. Assuming arbitrary array size and scaling factor, what's the most efficient way to do this?
You should use the Kronecker product, numpy.kron:
Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first
import numpy as np
a = np.array([[1, 1],
[0, 1]])
n = 2
np.kron(a, np.ones((n, n), dtype=a.dtype))  # keep the integer dtype so the output matches below
which gives what you want:
array([[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1]])
You could use repeat:
In [6]: a.repeat(2,axis=0).repeat(2,axis=1)
Out[6]:
array([[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1]])
I am not sure if there's a neat way to combine the two operations into one.
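If you do want them fused, one option (a broadcasting-based sketch, not part of the original answer) is to insert length-1 axes and let a single reshape produce the block-expanded array:
h, w = a.shape
np.broadcast_to(a[:, None, :, None], (h, n, w, n)).reshape(h * n, w * n)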
scipy.misc.imresize can scale images. It can be used to scale numpy arrays, too:
#!/usr/bin/env python
import numpy as np
import scipy.misc
def scale_array(x, new_size):
    min_el = np.min(x)
    max_el = np.max(x)
    y = scipy.misc.imresize(x, new_size, mode='L', interp='nearest')
    y = y / 255 * (max_el - min_el) + min_el
    return y
x = np.array([[1, 1],
[0, 1]])
n = 2
new_size = n * np.array(x.shape)
y = scale_array(x, new_size)
print(y)
To scale efficiently I use the following approach. It works 5 times faster than repeat and 10 times faster than kron. First, initialise the target array, so the scaled array can be filled in place, and predefine the slices to save a few cycles:
import numpy
h, w = a.shape                                    # a is the original (small) array
K = 2                                             # scale factor
a_x = numpy.zeros((h * K, w * K), dtype=a.dtype)  # upscaled array
Y = a_x.shape[0]
X = a_x.shape[1]
myslices = []
for y in range(0, K):
    for x in range(0, K):
        s = slice(y, Y, K), slice(x, X, K)
        myslices.append(s)
Now this function will do the scaling:
def scale(A, B, slices):  # fill A with B through slices
    for s in slices:
        A[s] = B
Or the same thing simply in one function:
def scale(A, B, k):  # fill A with B scaled by k
    Y = A.shape[0]
    X = A.shape[1]
    for y in range(0, k):
        for x in range(0, k):
            A[y:Y:k, x:X:k] = B
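A quick usage sketch with the names from the snippets above (assuming the set-up with a, K and a_x has already run):
scale(a_x, a, K)   # second variant; or scale(a_x, a, myslices) with the first
print(a_x)         # the 2x2 example becomes the expected 4x4 block array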