I want to do some force calculations between vertices, and because the forces are symmetrical I have a list of vertex pairs that need those forces added. I am sure it's possible with fancy indexing, but I can only get it to work with a slow Python for-loop. For symmetry reasons, the right-hand side of the index array needs a negative sign when adding the forces.
Consider the vertex index array:
>>> I = np.array([[0,1],[1,2],[2,0]])
I = [[0 1]
     [1 2]
     [2 0]]
and the x,y forces array for each pair:
>>> F = np.array([[3,6],[4,7],[5,8]])
F = [[3 6]
     [4 7]
     [5 8]]
The wanted operation can be described as:
"vertex #0 sums the force vectors (3,6) and (-5,-8),
vertex #1 sums the force vectors (-3,-6) and (4,7),
vertex #2 sums the force vectors (-4,-7) and (5,8)"
Desired results:
    [ 3  6]   [ 0  0]   [-5 -8]   [-2 -2]   // resulting force on vertex #0
A = [-3 -6] + [ 4  7] + [ 0  0] = [ 1  1]   // resulting force on vertex #1
    [ 0  0]   [-4 -7]   [ 5  8]   [ 1  1]   // resulting force on vertex #2
Edit:
My ugly for-loop solution:
import numpy as np

I = np.array([[0,1],[1,2],[2,0]])
F = np.array([[3,6],[4,7],[5,8]])
A = np.zeros((3,2))
A_x = np.zeros((3,2))
A_y = np.zeros((3,2))
for row in range(0, len(F)):
    A_x[I[row][0], 0] = F[row][0]
    A_x[I[row][1], 1] = -F[row][0]
    A_y[I[row][0], 0] = F[row][1]
    A_y[I[row][1], 1] = -F[row][1]
A = np.hstack((np.sum(A_x, axis=1).reshape((3,1)), np.sum(A_y, axis=1).reshape((3,1))))
print(A)
A = [[-2. -2.]
     [ 1.  1.]
     [ 1.  1.]]
Your current "push-style" interpretation of I is
For row index k of I, add the forces F[k] to out[I[k,0], :] and subtract them from out[I[k,1], :].
I = np.array([[0,1],[1,2],[2,0]])
out = np.zeros_like(F)
for k, d in enumerate(I):
    out[d[0], :] += F[k]
    out[d[1], :] -= F[k]
out
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
However, you can also turn the meaning of I on its head and make it "pull-style", so that it says
For row index k of I, set out[k] to the difference F[I[k,0], :] - F[I[k,1], :].
I = np.array([[0,2],[1,0],[2,1]])
out = np.zeros_like(F)
for k, d in enumerate(I):
    out[k, :] = F[d[0], :] - F[d[1], :]
out
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
In which case the operation simplifies quite easily to mere fancy indexing:
out = F[I[:, 0], :] - F[I[:, 1], :]
# array([[-2, -2],
#        [ 1,  1],
#        [ 1,  1]])
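If you'd rather derive that pull-style index from the original push-style I instead of writing it by hand, a minimal sketch (assuming every vertex appears exactly once on each side of a pair, as in this example) could look like:

import numpy as np

I_push = np.array([[0, 1], [1, 2], [2, 0]])
N = I_push.max() + 1

# For each vertex v, record which pair row supplies its +F and which its -F.
I_pull = np.empty((N, 2), dtype=int)
I_pull[I_push[:, 0], 0] = np.arange(len(I_push))  # row whose force is added
I_pull[I_push[:, 1], 1] = np.arange(len(I_push))  # row whose force is subtracted
print(I_pull)
# [[0 2]
#  [1 0]
#  [2 1]]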
You can preallocate an array to hold the shuffled forces and then use the index like so:
>>> N = I.max() + 1
>>> out = np.zeros((N, 2, 2), F.dtype)
>>> out[I, [1, 0]] = F[:, None, :]
>>> np.diff(out, axis=1).squeeze()
array([[-2, -2],
       [ 1,  1],
       [ 1,  1]])
or, equivalently,
>>> out = np.zeros((2, N, 2), F.dtype)
>>> out[[[1], [0]], I.T] = F
>>> np.diff(out, axis=0).squeeze()
array([[-2, -2],
       [ 1,  1],
       [ 1,  1]])
The way I understand the question, the values in the I array represent the vertex number, or the name of the vertex; they are not an actual positional index. Based on this thought, I have a different solution that uses the original I array. It does not quite come without loops, but should be OK for a reasonable number of vertices:
I = np.array([[0,1],[1,2],[2,0]])
F = np.array([[3,6],[4,7],[5,8]])
pos = I[:, 0]
neg = I[:, 1]
A = np.zeros_like(F)
unique = np.unique(I)
for i, vertex_number in enumerate(unique):
    A[i] = F[np.where(pos == vertex_number)] - F[np.where(neg == vertex_number)]

# produces the expected result
# [[-2 -2]
#  [ 1  1]
#  [ 1  1]]
Maybe this loop can also be replaced by some numpy magic.
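One candidate for that numpy magic is np.add.at, which performs an unbuffered scatter-add and therefore accumulates correctly even when a vertex appears several times on the same side of a pair. A minimal sketch, assuming the vertex numbers are the integers 0 to N-1 as in the example:

import numpy as np

I = np.array([[0, 1], [1, 2], [2, 0]])
F = np.array([[3, 6], [4, 7], [5, 8]])

A = np.zeros((I.max() + 1, 2), dtype=F.dtype)
np.add.at(A, I[:, 0], F)   # add each force to its "positive" vertex
np.add.at(A, I[:, 1], -F)  # subtract it from its "negative" vertex
print(A)
# [[-2 -2]
#  [ 1  1]
#  [ 1  1]]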
I am coding a function to create a generator matrix for Reed-Solomon encoding in Python. It currently uses for loops, but I was wondering whether there is a more efficient way to do this. My code is:
def ReedSolomon(k, p):
    M = np.zeros((k, p))
    for i in range(k):
        for j in range(p):
            M[i][j] = j**i
    return M
The encoding uses a generator matrix whose entry at row i, column j is j**i.
I believe my function works, but it may not scale well to large p and k.
The generalized equation for an element at index r, c in your matrix is c**r.
For a matrix of shape k, p, you can create two aranges -- a row vector from 0 to p-1, and a column vector from 0 to k-1, and have numpy automatically broadcast the shapes:
def ReedSolomon(k, p):
    rr = np.arange(k).reshape((-1, 1))
    cc = np.arange(p)
    return cc**rr
Calling this function with e.g. k=5, p=3 gives:
>>> ReedSolomon(5, 3)
array([[ 1,  1,  1],
       [ 0,  1,  2],
       [ 0,  1,  4],
       [ 0,  1,  8],
       [ 0,  1, 16]])
You can also use numpy to build the two index grids explicitly with np.repeat:
p=5
k=7
A = np.arange(p).reshape((1,p))
A = np.repeat(A, k, axis=0)
B = np.arange(k).reshape((k,1))
B = np.repeat(B, p, axis=1)
M = A**B
print(M)
[[   1    1    1    1    1]
 [   0    1    2    3    4]
 [   0    1    4    9   16]
 [   0    1    8   27   64]
 [   0    1   16   81  256]
 [   0    1   32  243 1024]
 [   0    1   64  729 4096]]
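For comparison, the explicit repeats are not strictly necessary; broadcasting the two aranges directly (in the spirit of the previous answer) produces the same matrix:

import numpy as np

p, k = 5, 7
M = np.arange(p) ** np.arange(k).reshape((k, 1))  # broadcasts to shape (k, p)
print(M)  # same output as above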
I have a large point cloud in Open3D and I want to make a 3D grid and bin the points based on which cube they fall in. Others have called it "binning in 3D space."
Example image with grids only in one direction (I want to split into 3D volumes)
Better Image of what I'm trying to do
Example:
import numpy as np
A = np.array([[ 0, -1, 10],
              [ 1, -2, 11],
              [ 2, -3, 12],
              [ 3, -4, 13],
              [ 4, -5, 14],
              [ 5, -6, 15],
              [ 6, -7, 16],
              [ 7, -8, 17],
              [ 8, -9, 18]])
#point 1: X,Y,Z
#point 2: X,Y,Z
print(A)
X_segments = np.linspace(0,8,3) #plane at beginning, middle and end - this creates 2 sections where data can be
Y_segments = np.linspace(-9,-1,3)
Z_segments = np.linspace(10,18,3)
#all of these combined form 4 cuboids where data can be
#its also possible for the data to be outside these cuboids but we can ignore that
bin1 = A where A[:,0] is > X_segments[0] and < X_segments[1]
       and A where A[:,1] is > Y_segments[0] and < Y_segments[1]
       and A where A[:,2] is > Z_segments[0] and < Z_segments[1]
bin2 = A where A[:,0] is > X_segments[1] and < X_segments[2]
       and A where A[:,1] is > Y_segments[0] and < Y_segments[1]
       and A where A[:,2] is > Z_segments[0] and < Z_segments[1]
bin3 = A where A[:,0] is > X_segments[1] and < X_segments[2]
       and A where A[:,1] is > Y_segments[1] and < Y_segments[2]
       and A where A[:,2] is > Z_segments[0] and < Z_segments[1]
bin4 = A where A[:,0] is > X_segments[1] and < X_segments[2]
       and A where A[:,1] is > Y_segments[1] and < Y_segments[2]
       and A where A[:,2] is > Z_segments[1] and < Z_segments[2]
Thanks yall!
You can try the following:
import numpy as np
A = np.array([[ 0, -1, 10],
              [ 1, -2, 11],
              [ 2, -3, 12],
              [ 3, -4, 13],
              [ 4, -5, 14],
              [ 5, -6, 15],
              [ 6, -7, 16],
              [ 7, -8, 17],
              [ 8, -9, 18]])
X_segments = np.linspace(0,8,3)
Y_segments = np.linspace(-9,-1,3)
Z_segments = np.linspace(10,18,3)
edges = [X_segments, Y_segments, Z_segments]
print(np.array(edges))  # just to show the edges of the bins
We get:
[[ 0.  4.  8.]
 [-9. -5. -1.]
 [10. 14. 18.]]
Next, apply np.digitize along each coordinate axis:
coords = np.vstack([np.digitize(A.T[i], b, right=True) for i, b in enumerate(edges)]).T
print(coords)
This gives:
[[0 2 0]
 [1 2 1]
 [1 2 1]
 [1 2 1]
 [1 1 1]
 [2 1 2]
 [2 1 2]
 [2 1 2]
 [2 0 2]]
Rows of this array describe positions of the corresponding rows of A in the grid of bins. For example, the second row [1, 2, 1] indicates that the second row of A, i.e. [1, -2, 11] is in the first bin along the X-axis (since 0 < 1 <= 4), in the second bin along the Y-axis (since -5 < -2 <= -1), and in the first bin along the Z-axis (since 10 < 11 <= 14). Then you can pick elements belonging to each cuboid:
# select rows of A that are in the cuboid with coordinates [1, 2, 1]
A[np.all(coords == [1, 2, 1], axis=1)]
This gives:
array([[ 1, -2, 11],
       [ 2, -3, 12],
       [ 3, -4, 13]])
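If you want every non-empty cuboid at once instead of querying one coordinate triple at a time, a possible follow-up sketch (assuming coords and A from above) groups the rows of A by their bin coordinates:

import numpy as np

# unique bin coordinates and, for every point, the index of its bin
unique_bins, inverse = np.unique(coords, axis=0, return_inverse=True)
bins = {tuple(b): A[inverse == i] for i, b in enumerate(unique_bins)}

for b, pts in bins.items():
    print(b, pts.tolist())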
Here is a 3-dimensional numpy array:
import numpy as np
m = np.array([
    [
        [1, 2, 3, 2], [4, 5, 6, 3]
    ],
    [
        [7, 8, 9, 4], [1, 2, 3, 5]
    ]
])
For each tuple, I need to multiply the first three values by the last one (divided by 10 and rounded), and then to keep only the 3 results. For example in [1,2,3,2]:
The 1 becomes: round(1 * 2 / 10) = 0
The 2 becomes: round(2 * 2 / 10) = 0
The 3 becomes: round(3 * 2 / 10) = 1
So, [1,2,3,2] becomes: [0,0,1].
And the complete result will be:
[
[
[0,0,1], [1,2,2]
],
[
[3,3,4], [1,1,2]
]
]
I tried to separate the last value of each tuple into an alpha variable, and the first 3 values into an rgb variable.
alpha = m[:, :, 3] / 10
rgb = m[:, :, :3]
But I'm a beginner in Python, and after that I really don't know how to process these arrays.
A little help from an experienced Python user would be most welcome.
Try this
n = np.rint(m[:,:,:3] * m[:,:,[-1]] / 10).astype(int)
Out[192]:
array([[[0, 0, 1],
        [1, 2, 2]],

       [[3, 3, 4],
        [0, 1, 2]]])
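Note that np.rint rounds halves to the nearest even integer, which is why the last row comes out as [0, 1, 2] rather than the [1, 1, 2] expected in the question (0.5 rounds to 0). If half-up rounding is wanted, a possible variant (a sketch, not from the answer above) is:

import numpy as np

m = np.array([[[1, 2, 3, 2], [4, 5, 6, 3]],
              [[7, 8, 9, 4], [1, 2, 3, 5]]])

# floor(x + 0.5) rounds halves up (for non-negative values) instead of to even
n = np.floor(m[:, :, :3] * m[:, :, [-1]] / 10 + 0.5).astype(int)
print(n)
# [[[0 0 1]
#   [1 2 2]]
#  [[3 3 4]
#   [1 1 2]]]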
I would like to compute all possible pairwise differences (without repetition) between the columns of a matrix. What's an efficient / pythonic way to do this?
mat = np.random.normal(size=(10, 3))
mat
array([[ 1.57921282,  0.76743473, -0.46947439],
       [ 0.54256004, -0.46341769, -0.46572975],
       [ 0.24196227, -1.91328024, -1.72491783],
       [-0.56228753, -1.01283112,  0.31424733],
       [-0.90802408, -1.4123037 ,  1.46564877],
       [-0.2257763 ,  0.0675282 , -1.42474819],
       [-0.54438272,  0.11092259, -1.15099358],
       [ 0.37569802, -0.60063869, -0.29169375],
       [-0.60170661,  1.85227818, -0.01349722],
       [-1.05771093,  0.82254491, -1.22084365]])
In this matrix there are 3 pairwise differences (N choose 2 unique column combinations, where order doesn't matter).
pair_a = mat[:, 0] - mat[:, 1]
pair_b = mat[:, 0] - mat[:, 2]
pair_c = mat[:, 1] - mat[:, 2]
is one (ugly) way. You can easily imagine using nested for loops for larger matrices, but I am hoping there's a nicer way.
I would like the result to be another matrix, with scipy.misc.comb(mat.shape[1], 2) columns and mat.shape[0] rows.
Combinations of length 2 can be found using the following trick:
N = mat.shape[1]
I, J = np.triu_indices(N, 1)
result = mat[:,I] - mat[:,J]
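A quick sanity check of what those indices produce in the 3-column case (a small sketch using an arbitrary random matrix rather than the exact values above):

import numpy as np

mat = np.random.normal(size=(10, 3))   # any (rows, 3) matrix will do

N = mat.shape[1]
I, J = np.triu_indices(N, 1)
print(I, J)  # [0 0 1] [1 2 2] -> column pairs (0,1), (0,2), (1,2)

result = mat[:, I] - mat[:, J]
# the three columns match the hand-written pair_a, pair_b, pair_c
assert np.allclose(result[:, 0], mat[:, 0] - mat[:, 1])
assert np.allclose(result[:, 1], mat[:, 0] - mat[:, 2])
assert np.allclose(result[:, 2], mat[:, 1] - mat[:, 2])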
In [6]: m, n = 5, 4

In [7]: arr = np.arange(m*n).reshape((m, n))

In [8]: arr
Out[8]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19]])
In [9]: from itertools import combinations
In [10]: def diffs(arr):
   ....:     arr = np.asarray(arr)
   ....:     n = arr.shape[1]
   ....:     for i, j in combinations(range(n), 2):
   ....:         yield arr[:, i] - arr[:, j]
   ....:
In [11]: for x in diffs(arr): print(x)
[-1 -1 -1 -1 -1]
[-2 -2 -2 -2 -2]
[-3 -3 -3 -3 -3]
[-1 -1 -1 -1 -1]
[-2 -2 -2 -2 -2]
[-1 -1 -1 -1 -1]
If you need them in an array, then just preallocate the array and assign the rows (or columns, as desired).
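A minimal sketch of that preallocation step, assuming the diffs generator and arr from above, with one row per column pair:

from itertools import combinations
import numpy as np

n_pairs = len(list(combinations(range(arr.shape[1]), 2)))
out = np.empty((n_pairs, arr.shape[0]), dtype=arr.dtype)
for k, d in enumerate(diffs(arr)):
    out[k] = d   # each yielded difference fills one preallocated row
print(out)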
Incidentally, here is the solution I came up with. Much less elegant than moarningsun's.
def pair_diffs(mat):
    n_pairs = int(sp.misc.comb(mat.shape[1], 2))
    pairs = np.empty([mat.shape[0], n_pairs])
    this_pair = 0
    # compute all differences:
    for i in np.arange(mat.shape[1] - 1):
        for j in np.arange(i + 1, mat.shape[1]):
            pairs[:, this_pair] = mat[:, i] - mat[:, j]
            this_pair += 1
    return pairs
I want to change elements to [0,0,0] if the pixel at that position is blue. The code below works, but it is extremely slow:
for row in range(w):
    for col in range(h):
        if np.array_equal(image[row][col], [255, 0, 0]):
            image[row][col] = (0, 0, 0)
        else:
            image[row][col] = (255, 255, 255)
I know np.where works for one-dimensional arrays, but how can I use that function to replace values in a 3-dimensional array?
Since you brought up numpy.where, this is how you'd do it using numpy.where:
import numpy as np
# Make an example image
image = np.random.randint(0, 255, (10, 10, 3))
image[2, 2, :] = [255, 0, 0]
# Define the color you're looking for
pattern = np.array([255, 0, 0])
# Make a mask to use with where
mask = (image == pattern).all(axis=2)
newshape = mask.shape + (1,)
mask = mask.reshape(newshape)
# Finish it off
image = np.where(mask, [0, 0, 0], [255, 255, 255])
The reshape is in there so that numpy will broadcast the (10, 10, 1) mask against the two replacement colors inside np.where.
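An equivalent variant that skips np.where entirely (a sketch, under the same setup) uses the 2-D mask with boolean indexing:

import numpy as np

image = np.random.randint(0, 255, (10, 10, 3))
image[2, 2, :] = [255, 0, 0]

mask = (image == [255, 0, 0]).all(axis=2)  # True where the pixel matches exactly
out = np.full_like(image, 255)             # start with an all-white image
out[mask] = [0, 0, 0]                      # matching pixels become black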
The simplest thing you could do is multiply the element you want to zero out by zero. An example of this property for a three-dimensional array is shown below.
x = np.array([[[1, 2, 3], [2, 3, 4]],
              [[1, 2, 3], [2, 3, 4]],
              [[1, 2, 3], [2, 3, 4]],
              [[1, 2, 3], [2, 3, 4]]])
print(x)
x[0] = x[0] * 0
print(x)
This will yield two printouts:
[[[1 2 3]
  [2 3 4]]

 [[1 2 3]
  [2 3 4]]...
and
[[[0 0 0]
  [0 0 0]]

 [[1 2 3]
  [2 3 4]]...
This method will work for both image[row] and image[row][column] in your example. Your example, reworked, would look like:
for row in range(w):
    for col in range(h):
        if np.array_equal(image[row][col], [255, 0, 0]):
            image[row][col] = 0
        else:
            image[row][col] = 255