I have to insert a small matrix into a big matrix of zeros. I was trying to do it in a loop, but every time I get the error: ValueError: could not broadcast input array from shape (6,6) into shape (4,4)
There are two issues:
how to insert the small matrix into the big zeros matrix (i.e. specifying the location inside the big matrix), and
how to place that matrix starting from the 23rd row of the 40*40 zeros matrix.
import numpy as np

ndofs = 39
k = np.array([[ 1,   0,  1,  0,   0, 0],
              [ 0,  12,  6,  0, -12, 6],
              [ 0,   6,  4,  0,  -6, 2],
              [ 1,   0,  0,  1,   0, 0],
              [ 0, -12, -6,  0,  12, 6],
              [ 0,   6,  2,  0,  -6, 4]])
K = np.zeros((ndofs + 1, ndofs + 1))
print(K.shape)
# for each element, changes to global coordinates
for i in range(ndofs):
    K_temp = np.zeros((ndofs + 1, ndofs + 1))
    K_temp[3*i:3*i+6, 3*i:3*i+6] = k
    K += K_temp
print(K)
You just overwrite the indices in the bigger array:
import numpy as np

a = np.zeros((50, 50))
b = np.ones((10, 10))
a[2:12, 2:12] = b  # insert b with its top-left corner at (2, 2)
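The same slice assignment answers the second issue. A minimal sketch, assuming the 6x6 matrix k from the question should have its top-left corner at row 23, column 23 of the 40x40 matrix:

import numpy as np

k = np.ones((6, 6))    # stand-in for the 6x6 element matrix
K = np.zeros((40, 40))
K[23:29, 23:29] = k    # 23 + 6 = 29 <= 40, so the block fits

Note that the target slice must lie fully inside K. That is exactly what goes wrong in the question's loop: once 3*i + 6 > 40 (first at i = 12), the slice is clipped to a smaller shape such as (4,4), and broadcasting the (6,6) array into it fails with the reported error.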
So I have a 2d array
X = [[ 7.3571296   0.49626    ]
     [-0.7747436   3.14599    ]
     [ 3.7817762   4.1808457  ]
     [ 4.5332413   6.8228664  ]
     [ 7.4655724  -0.11392868 ]
     [ 2.416418    4.692072   ]]
and a cluster label array.
y = [1 3 2 2 1 3]
Then I have an algorithm that predicts labels for the rows of the 2d array, returning them as a dictionary keyed by label:
Z = {1: array([[ 7.3571296   0.49626   ],
               [ 7.4655724  -0.11392868]]),
     2: array([[ 3.7817762   4.1808457 ],
               [ 2.416418    4.692072  ]]),
     3: array([[-0.7747436   3.14599   ],
               [ 4.5332413   6.8228664 ]])}
I want to match my predicted labels with the original labels to measure my algorithm's accuracy. But how can I convert the dictionary format back into a label array (i.e. y_pred = [1 3 2 3 1 2])?
You can use the keys() method of the dictionary and cast it to a list.
import numpy as np

Z = {1: np.asarray([[7.3571296, 0.49626], [7.4655724, -0.11392868]]),
     2: np.asarray([[3.7817762, 4.1808457], [2.416418, 4.692072]]),
     3: np.asarray([[-0.7747436, 3.14599], [4.5332413, 6.8228664]])}
print(list(Z.keys()))  # [1, 2, 3]
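The keys alone only give you the distinct labels, not the per-row predictions. To build y_pred in the same order as X, you can match each row of X against the rows stored under each key. A minimal sketch, assuming the arrays in Z are exact copies of rows of X (as they are here), so exact float comparison is safe:

import numpy as np

X = np.array([[7.3571296, 0.49626],
              [-0.7747436, 3.14599],
              [3.7817762, 4.1808457],
              [4.5332413, 6.8228664],
              [7.4655724, -0.11392868],
              [2.416418, 4.692072]])
Z = {1: np.asarray([[7.3571296, 0.49626], [7.4655724, -0.11392868]]),
     2: np.asarray([[3.7817762, 4.1808457], [2.416418, 4.692072]]),
     3: np.asarray([[-0.7747436, 3.14599], [4.5332413, 6.8228664]])}

y_pred = np.empty(len(X), dtype=int)
for label, points in Z.items():
    # rows of X that appear among this label's points
    matches = (X[:, None, :] == points[None, :, :]).all(axis=-1).any(axis=-1)
    y_pred[matches] = label
print(y_pred)  # [1 3 2 3 1 2]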
So I'm trying to generate a list of possible adjacent movements within a 3d array (preferably n-dimensional).
What I have works as it's supposed to, but I was wondering if there's a more numpythonic way to do so.
def adjacents(loc, bounds):
    adj = []
    bounds = np.array(bounds) - 1
    if loc[0] > 0:
        adj.append((-1, 0, 0))
    if loc[1] > 0:
        adj.append((0, -1, 0))
    if loc[2] > 0:
        adj.append((0, 0, -1))
    if loc[0] < bounds[0]:
        adj.append((1, 0, 0))
    if loc[1] < bounds[1]:
        adj.append((0, 1, 0))
    if loc[2] < bounds[2]:
        adj.append((0, 0, 1))
    return np.array(adj)
Here are some example outputs:
adjacents((0, 0, 0), (10, 10, 10))
= [[1 0 0]
   [0 1 0]
   [0 0 1]]
adjacents((9, 9, 9), (10, 10, 10))
= [[-1  0  0]
   [ 0 -1  0]
   [ 0  0 -1]]
adjacents((5, 5, 5), (10, 10, 10))
= [[-1  0  0]
   [ 0 -1  0]
   [ 0  0 -1]
   [ 1  0  0]
   [ 0  1  0]
   [ 0  0  1]]
Here's an alternative which is vectorized and uses a constant, prepopulated array:
# all possible moves
_moves = np.array([
    [-1,  0,  0],
    [ 0, -1,  0],
    [ 0,  0, -1],
    [ 1,  0,  0],
    [ 0,  1,  0],
    [ 0,  0,  1]])

def adjacents(loc, bounds):
    loc = np.asarray(loc)
    bounds = np.asarray(bounds)
    mask = np.concatenate((loc > 0, loc < bounds - 1))
    return _moves[mask]
This uses asarray() instead of array() because it avoids copying if the input happens to be an array already. Then mask is constructed as an array of six bools corresponding to the original six if conditions. Finally, the appropriate rows of the constant data _moves are returned.
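Since the question asks for a preferably n-dimensional version: the hard-coded _moves table generalizes by building it from an identity matrix instead. A minimal sketch (the function name adjacents_nd is mine):

import numpy as np

def adjacents_nd(loc, bounds):
    loc = np.asarray(loc)
    bounds = np.asarray(bounds)
    eye = np.eye(len(loc), dtype=int)
    moves = np.concatenate((-eye, eye))  # one -1/+1 unit step per axis
    mask = np.concatenate((loc > 0, loc < bounds - 1))
    return moves[mask]

print(adjacents_nd((0, 0, 0), (10, 10, 10)))
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]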
But what about performance?
The vectorized approach above, while it will appeal to some, actually runs only half as fast as the original. If it's performance you're after, the best simple change you can make is to remove the line bounds = np.array(bounds) - 1 and subtract 1 inside each of the last three if conditions. That gives you a 2x speedup (because it avoids creating an unnecessary array).
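For reference, a sketch of that change applied to the original function (only the last three conditions differ):

def adjacents(loc, bounds):
    adj = []
    if loc[0] > 0:
        adj.append((-1, 0, 0))
    if loc[1] > 0:
        adj.append((0, -1, 0))
    if loc[2] > 0:
        adj.append((0, 0, -1))
    # subtract 1 inline instead of building a temporary array up front
    if loc[0] < bounds[0] - 1:
        adj.append((1, 0, 0))
    if loc[1] < bounds[1] - 1:
        adj.append((0, 1, 0))
    if loc[2] < bounds[2] - 1:
        adj.append((0, 0, 1))
    return np.array(adj)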
I want to do some force calculations between vertices, and because the forces are symmetric I have a list of vertex pairs that need those forces added. I am sure it's possible with fancy indexing, but so far I can only get it to work with a slow Python for-loop. For symmetry reasons, the right-hand side of the index array needs a negative sign when adding the forces.
Consider the vertex index array:
>>> I = np.array([[0,1],[1,2],[2,0]])
I = [[0 1]
     [1 2]
     [2 0]]
and the x,y forces array for each pair:
>>> F = np.array([[3,6],[4,7],[5,8]])
F = [[3 6]
     [4 7]
     [5 8]]
The wanted operation could be described as:
"vertex #0 sums the force vectors (3,6) and (-5,-8),
vertex #1 sums the force vectors (-3,-6) and (4,7),
vertex #2 sums the force vectors (-4,-7) and (5,8)"
Desired result:
    [ 3  6]   [ 0  0]   [-5 -8]   [-2 -2]   // resulting force on vertex #0
A = [-3 -6] + [ 4  7] + [ 0  0] = [ 1  1]   // resulting force on vertex #1
    [ 0  0]   [-4 -7]   [ 5  8]   [ 1  1]   // resulting force on vertex #2
edit:
my ugly for-loop solution:
import numpy as np

I = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[3, 6], [4, 7], [5, 8]])

A = np.zeros((3, 2))
A_x = np.zeros((3, 2))
A_y = np.zeros((3, 2))
for row in range(len(F)):
    A_x[I[row][0], 0] = F[row][0]
    A_x[I[row][1], 1] = -F[row][0]
    A_y[I[row][0], 0] = F[row][1]
    A_y[I[row][1], 1] = -F[row][1]
A = np.hstack((np.sum(A_x, axis=1).reshape((3, 1)),
               np.sum(A_y, axis=1).reshape((3, 1))))
print(A)
A = [[-2. -2.]
     [ 1.  1.]
     [ 1.  1.]]
Your current "push-style" interpretation of I is
For row-index k in I, take the forces from F[k] and add/subtract them to out[I[k], :]
I = np.array([[0, 1], [1, 2], [2, 0]])
out = np.zeros_like(F)
for k, d in enumerate(I):
    out[d[0], :] += F[k]
    out[d[1], :] -= F[k]
out
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
However, you can also turn the meaning of I on its head and make it "pull-style", so that it says
For row-index k in I, set vertex out[k] to be the difference of F[I[k]]
I = np.array([[0, 2], [1, 0], [2, 1]])
out = np.zeros_like(F)
for k, d in enumerate(I):
    out[k, :] = F[d[0], :] - F[d[1], :]
out
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
In which case the operation simplifies quite easily to mere fancy indexing:
out = F[I[:, 0], :] - F[I[:, 1], :]
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
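If you need to keep the original push-style I (for instance when a vertex can appear on the same side of several pairs), the loop also vectorizes with NumPy's unbuffered scatter operations. A sketch using np.add.at and np.subtract.at:

import numpy as np

I = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[3, 6], [4, 7], [5, 8]])

out = np.zeros_like(F)
np.add.at(out, I[:, 0], F)       # accumulate +F[k] at each pair's first vertex
np.subtract.at(out, I[:, 1], F)  # accumulate -F[k] at each pair's second vertex
print(out)
# [[-2 -2]
#  [ 1  1]
#  [ 1  1]]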
You can preallocate an array to hold the shuffled forces and then use the index like so:
>>> N = I.max() + 1
>>> out = np.zeros((N, 2, 2), F.dtype)  # axes: (vertex, neg/pos slot, xy)
>>> out[I, [1, 0]] = F[:, None, :]      # out[I[k,0], 1] = F[k]; out[I[k,1], 0] = F[k]
>>> np.diff(out, axis=1).squeeze()      # pos slot minus neg slot, per vertex
array([[-2, -2],
       [ 1,  1],
       [ 1,  1]])
or, equivalently,
>>> out = np.zeros((2, N, 2), F.dtype)
>>> out[[[1], [0]], I.T] = F
>>> np.diff(out, axis=0).squeeze()
array([[-2, -2],
       [ 1,  1],
       [ 1,  1]])
The way I understand the question, the values in the I array represent the vertex number, or the name of the vertex; they are not an actual positional index. Based on this thought, I have a different solution that uses the original I array. It does not quite come without loops, but should be OK for a reasonable number of vertices:
I = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[3, 6], [4, 7], [5, 8]])

pos = I[:, 0]
neg = I[:, 1]
A = np.zeros_like(F)
unique = np.unique(I)
for i, vertex_number in enumerate(unique):
    A[i] = F[np.where(pos == vertex_number)] - F[np.where(neg == vertex_number)]
# produces the expected result
# [[-2 -2]
#  [ 1  1]
#  [ 1  1]]
Maybe this loop can also be replaced by some numpy magic.
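One candidate, assuming each vertex appears exactly once in pos and once in neg (as in this example): sorting F by each column of I lines the positive and negative contributions up per vertex, so the loop collapses to a single subtraction. A sketch:

import numpy as np

I = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[3, 6], [4, 7], [5, 8]])

# row v of each reordered copy holds the force whose pos (resp. neg) vertex is v
A = F[np.argsort(I[:, 0])] - F[np.argsort(I[:, 1])]
print(A)
# [[-2 -2]
#  [ 1  1]
#  [ 1  1]]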
I'm not sure how to phrase the question but here's what I'm trying to do.
import numpy as np
import random

arr_first = np.array([[0,0,0,0],[0,0,0,0],[1,1,1,0],[1,1,1,0],[1,1,1,0],[1,1,2,0],[1,1,2,0],[2,2,2,0]])
arr_second = np.array([[0,0,0],[1,1,1],[1,1,2],[2,2,2]])
I am trying to filter arr_first by the first three elements of arr_second, resulting in...
[array([0, 0, 0, 0]), array([0, 0, 0, 0])]
[array([1, 1, 1, 0]), array([1, 1, 1, 0]), array([1, 1, 1, 0])]
[array([1, 1, 2, 0]), array([1, 1, 2, 0])]
[array([2, 2, 2, 0])]
and then, with the filtered 2d arrays, add 32 to the fourth element of one randomly chosen row in each group, like this:
[[ 0  0  0  0]
 [ 0  0  0 32]
 [ 1  1  1  0]
 [ 1  1  1  0]
 [ 1  1  1 32]
 [ 1  1  2  0]
 [ 1  1  2 32]
 [ 2  2  2 32]]
and save that data to the original arr_first.
The method I am currently using to do that is Python list-comprehension syntax:
for i in range(len(arr_second)):
    filtered = [row for row in arr_first if
                arr_second[i][0] == row[0] and arr_second[i][1] == row[1] and arr_second[i][2] == row[2]]
    choosen_block = random.choice(filtered)
    choosen_block[3] += 32
print(arr_first)
This works, but it can be very slow in large data sets. Therefore, I tried filtering by using numpy's in1d:
for i in range(len(arr_second)):
    filtered = arr_first[np.in1d(arr_first[:, 0], arr_second[i][0]) &
                         np.in1d(arr_first[:, 1], arr_second[i][1]) &
                         np.in1d(arr_first[:, 2], arr_second[i][2])]
    choosen_block = random.choice(filtered)
    choosen_block[3] += 32
But the problem with this method is that the changes are no longer saved in arr_first, unlike with the list-comprehension method: filtered no longer refers back into arr_first.
I was wondering if someone could give me some guidance on how to fix this, so that changes made through filtered also show up in arr_first, instead of having to build another list and append each filtered result in a loop.
You can use Pandas to groupby, sample, and update arr_first.
import pandas as pd

df = pd.DataFrame(arr_first)
inner_len = len(arr_first[0, :])
update_amt = 32
update_ix = 3
df.iloc[(df.groupby(list(range(inner_len)))
           .apply(lambda x: x.sample().index.values[0]).values),
        update_ix] += update_amt
arr_first
[[ 0  0  0  0]
 [ 0  0  0 32]
 [ 1  1  1  0]
 [ 1  1  1 32]
 [ 1  1  1  0]
 [ 1  1  2 32]
 [ 1  1  2  0]
 [ 2  2  2 32]]
Explanation
Pandas lets us group arr_first by the unique sets of row values, e.g. [1,1,1,0]. I abbreviated the groupby procedure with range(), but the command really just says: "Group by column 0, then column 1, then column 2, then column 3". That effectively groups by the full set of values for each row in arr_first. This seems to effectively mimic your approach of matching arr_first rows by the values in arr_second.
Once we've got the rows in groups, we can sample one of the rows in each group, and grab its index.
Then, use the selected indices for the addition update step.
Even though we're updating df, arr_first is also updated, as it is (sort of) passed by reference in the creation of df.
I tend to think in Pandas, but there may be a Numpy equivalent to these steps.
Here is how to make your approach work.
First, why does the list comp work in place, whereas the in1d version doesn't? The list comp operates on individual rows of arr_first; each such row is a "view", i.e. a reference into arr_first. By contrast, the in1d solution creates a mask which is then applied to the array. Using masks is one form of "fancy" (or "advanced") indexing. Since the subset of the original array that fancy indexing refers to typically cannot be represented by offsets and strides, this forces a copy, and whatever you do afterwards will not affect the original array.
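A quick illustration of that difference (this snippet is mine, not from the original post):

import numpy as np

a = np.arange(6).reshape(2, 3)
row = a[0]       # basic indexing: a view into a
row[0] = 99      # writes through to a
print(a[0, 0])   # 99

sub = a[a > 3]   # boolean-mask (fancy) indexing: a copy
sub[:] = 0       # leaves a untouched
print(a[1])      # [3 4 5]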
One easy fix is to not apply the mask. Instead convert it to a vector of row indices and use random.choice directly on this vector:
import numpy as np
import random
arr_first = np.array([[0,0,0,0],[0,0,0,0],[1,1,1,0],[1,1,1,0],[1,1,1,0],[1,1,2,0],[1,1,2,0],[2,2,2,0]])
arr_second = np.array([[0,0,0],[1,1,1],[1,1,2],[2,2,2]])
for i in range(len(arr_second)):
    filtered_idx = np.where(np.in1d(arr_first[:, 0], arr_second[i][0]) &
                            np.in1d(arr_first[:, 1], arr_second[i][1]) &
                            np.in1d(arr_first[:, 2], arr_second[i][2]))[0]
    choosen_block = random.choice(filtered_idx)
    arr_first[choosen_block, 3] += 32
print(arr_first)
Sample output:
[[ 0  0  0  0]
 [ 0  0  0 32]
 [ 1  1  1 32]
 [ 1  1  1  0]
 [ 1  1  1  0]
 [ 1  1  2  0]
 [ 1  1  2 32]
 [ 2  2  2 32]]