I have a NumPy array with equations solved symbolically, with constants a and b. Here's an example of the cell at index (2,0) in my array "bounds_symbolic":
-a*sqrt(1/(a**6*b**2+1))
I also have an array called "a_values" that I would like to substitute into my "bounds_symbolic" array, along with a b-value of 1. Keeping the top row of the arrays intact would also be nice.
In other words, for the cell indexed at (2,0) in "bounds_symbolic", I want to substitute all of my a and b-values into the equation, while extending the column to contain the substituted equations. I then want to do this operation for the entirety of the "bounds_symbolic" array.
Here is the code that I have so far:
import sympy
import numpy as np
a, b, x, y = sympy.symbols("a b x y")
# Equation of the ellipse solved for y
ellipse = sympy.sqrt((b ** 2) * (1 - ((x ** 2) / (a ** 2))))
# Functions to be tested
test_functions = np.array(
[(a * b * x), (((a * b) ** 2) * x), (((a * b) ** 3) * x), (((a * b) ** 4) * x), (((a * b) ** 5) * x)])
# Equating ellipse and test_functions so their intersection can be symbolically solved for
equate = np.array(
[sympy.Eq(ellipse, test_functions[0]), sympy.Eq(ellipse, test_functions[1]), sympy.Eq(ellipse, test_functions[2]),
sympy.Eq(ellipse, test_functions[3]), sympy.Eq(ellipse, test_functions[4])])
# Calculating the intersection points of the ellipse and the testing functions
# Array that holds the bounds of the integral solved symbolically
bounds_symbolic = np.array([])
for i in range(0, 5):
    bounds_symbolic = np.append(bounds_symbolic, sympy.solve(equate[i], x))
# Array of a-values to plug into the bounds of the integral
a_values = np.array(np.linspace(-10, 10, 201))
# Setting b equal to a constant of 1
b = 1
integrand = np.array([])
for j in range(0, 5):
    integrand = np.append(integrand, (ellipse - test_functions[j]))
# New array with a-values substituted into the bounds
bounds_a = bounds_symbolic
# for j in range(0, 5):
# bounds_a = np.append[:, ]
Thank you!
Numpy arrays are the best choice when working with pure numerical data, for which they can help speed up many types of calculations. Once you start mixing sympy expressions, things can get very messy. You'll also lose all the speed advantages of numpy arrays.
Apart from that, np.append is a very slow operation, as it needs to recreate the complete array every time it is executed. When creating a new numpy array, the recommended way is to first create an empty array (e.g. with np.zeros()) that already has its final size.
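For example, a minimal sketch of the difference (illustrative sizes only):
import numpy as np

# np.append copies the whole array on every call, so this loop is quadratic
out = np.array([])
for i in range(5):
    out = np.append(out, i**2)

# Preallocating once avoids the repeated copies
out = np.zeros(5)
for i in range(5):
    out[i] = i**2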
You should also check out Python's list comprehensions, as they ease the creation of lists. In "pythonic" code, indices are used as little as possible. List comprehensions may look a bit weird when you are used to other programming languages, but you quickly get used to them, and from then on you'll certainly prefer them.
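For example, building a list of squares with an explicit loop and as a comprehension:
squares = []
for n in range(5):
    squares.append(n ** 2)

squares = [n ** 2 for n in range(5)]  # the same, as a list comprehension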
In your example code, numpy is useful for the np.linspace command, which creates an array of numbers (converting it again with np.array isn't necessary). And at the end, you might want to convert the substituted values to a numpy array. Note that this won't work when solve returns a different number of solutions for some of the equations, as numpy arrays need all their elements to have the same size. Also note that an explicit conversion from sympy's numerical type to a dtype understood by numpy might be needed. (Sympy often works with higher precision, not caring about the loss of speed.)
Also note that if you assign b = 1, you create a new variable and lose the name pointing to the sympy symbol. Just writing b = 1 will not change the value of the symbol, so it's recommended to use another name and to use subs to substitute symbols with values.
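A minimal illustration of the difference:
import sympy

a, b = sympy.symbols("a b")
expr = a * b
b = 1                                    # only rebinds the Python name; expr still contains the symbol
print(expr)                              # a*b
print(expr.subs(sympy.Symbol("b"), 1))   # a -- subs actually replaces the symbol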
Summarizing, your code could look like this:
import sympy
import numpy as np
a, b, x, y = sympy.symbols("a b x y")
# Equation of the ellipse solved for y
ellipse = sympy.sqrt((b ** 2) * (1 - ((x ** 2) / (a ** 2))))
# Functions to be tested
test_functions = [a * b * x, ((a * b) ** 2) * x, ((a * b) ** 3) * x, ((a * b) ** 4) * x, ((a * b) ** 5) * x]
# Equating ellipse and test_functions so their intersection can be symbolically solved for
# Array that holds the bounds of the integral solved symbolically
bounds_symbolic = [sympy.solve(sympy.Eq(ellipse, fun), x) for fun in test_functions]
# Array of a-values to plug into the bounds of the integral
a_values = np.linspace(-10, 10, 201)
# Setting b equal to a constant of 1
b_val = 1
# New array with a-values substituted into the bounds
bounds_a = [[[bound.subs({a: a_val, b: b_val}) for bound in bounds]
for bounds in bounds_symbolic]
for a_val in a_values]
bounds_a = np.array(bounds_a, dtype='float') # shape: (201, 5, 2)
The values of the resulting array can for example be used for plotting:
import matplotlib.pyplot as plt
for i, (test_func, color) in enumerate(zip(test_functions, plt.cm.Set1.colors)):
    plt.plot(a_values, bounds_a[:, i, 0], color=color, label=test_func)
    plt.plot(a_values, bounds_a[:, i, 1], color=color, alpha=0.5)
plt.legend()
plt.margins(x=0)
plt.xlabel('a')
plt.ylabel('bounds')
plt.show()
Or filled:
for i, (test_func, color) in enumerate(zip(test_functions, plt.cm.Set1.colors)):
    plt.plot(a_values, bounds_a[:, i, :], color=color)
    plt.fill_between(a_values, bounds_a[:, i, 0], bounds_a[:, i, 1], color=color, alpha=0.1)
I have a complex matrix C with dimensions (r, r), as well as a complex vector v of size r. I need to compute a new matrix from C and v following this equation:

K_{m,n} = \sum_{i=1}^{r} \mathrm{Im}\left( C_{i,m} \, C_{i,n}^{*} \, \mathrm{sgn}(\mathrm{Im}\, v_i) \right)

where K is also a square matrix of dimensions (r, r). Here is the code to compute K with three loops:
import numpy as np
import matplotlib.pyplot as plt
r = 9
# Create random matrix
C = np.random.rand(r,r) + np.random.rand(r,r) * 1j
v = np.random.rand(r) + np.random.rand(r) * 1j
# Original loops
K = np.zeros((r, r))
for m in range(r):
    for n in range(r):
        for i in range(r):
            K[m,n] += np.imag( C[i,m] * np.conj(C[i,n]) * np.sign(np.imag(v[i])) )
plt.figure()
plt.imshow(K)
plt.show()
Removing the loop with i is relatively easy:
# First optimization
K = np.zeros((r, r))
for m in range(r):
    for n in range(r):
        K[m,n] = np.imag(np.sum(C[:,m] * np.conj(C[:,n]) * np.sign(np.imag(v))))
but I am not sure how to proceed to vectorize the two remaining loops. Is it actually possible in this case?
I had a lot of these problems, and here is how I usually proceed to find vectorized formulations.
Here is what I have noticed about your summation. The cool conclusion is that you probably do not need vectorization at all, as you can express your whole calculation as a single product of 2D matrices. Here is how.
Let's first define the following matrices:

A_{i,j} = c_{i,j}, \qquad B_{i,j} = c_{i,j} \, \mathrm{sgn}(\mathrm{Im}\, v_i)

Then you can write your summation as:

k_{m,n} = \mathrm{Im}\left( \sum_{i=1}^{r} c_{i,m} \, \mathrm{sgn}(\mathrm{Im}\, v_i) \, c_{i,n}^{*} \right) = \mathrm{Im}\left( \sum_{i=1}^{r} B_{i,m} \, A_{i,n}^{*} \right) = \mathrm{Im}\left( \sum_{i=1}^{r} B^{T}_{m,i} \, A_{i,n}^{*} \right)

The expression inside Im(.) is, by the definition of matrix multiplication, equivalent to:

k_{m,n} = \mathrm{Im}\left( (B^{T} A^{*})_{m,n} \right)
This means that your matrix k can be expressed as the product of the transpose of B with the conjugate of A. In your code, the matrix A is already assigned to the variable C, so the vectorization could be done as follows:
C = np.random.rand(r, r) + np.random.rand(r, r) * 1j
v = np.random.rand(r) + np.random.rand(r) * 1j
B = np.sign(np.imag(v))[:, None] * C  # scale the rows of C by sgn(Im(v_i))
k = np.imag(B.T @ np.conj(C))
And you have avoided both nasty loops and convoluted expressions.
This looks like matrix multiplication:
out = np.imag((C * np.sign(np.imag(v))[:, None]).T @ np.conj(C))
Or you can use np.einsum:
out = np.imag(np.einsum('im,in,i', C, np.conj(C), np.sign(np.imag(v))))
Verification with your approach:
np.all(np.abs(out-K) < 1e-6)
# True
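With floating-point data, np.allclose expresses the same check a bit more directly:
np.allclose(out, K)
# True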
I found something that can work for now. However, one loop remains, and since the resulting matrix is antisymmetric, there is still some optimization to be made.
Instead of removing the i loop, I removed the two other ones:
K = np.zeros((r, r), dtype=np.complex128)
for i in range(r):
    # rank-1 term for index i: conj(C[i, m]) * sgn(Im(v[i])) * C[i, n]
    K += adjointMatrix(C[i:i+1]) @ (np.sign(np.imag(v[i])) * C[i:i+1])
K = -np.imag(K)  # the conjugation in the adjoint flips the sign of the imaginary part
with:
def adjointMatrix(X):
    return np.conjugate(np.transpose(X))
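As a quick sanity check (a sketch), the result of the remaining loop should agree with the fully vectorized form from the other answers:
B = np.sign(np.imag(v))[:, None] * C  # scale rows of C by sgn(Im(v_i))
assert np.allclose(K, np.imag(B.T @ np.conj(C)))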
This question is intended to be a canonical duplicate target
Given two arrays X and Y of shapes (i, n) and (j, n), representing lists of n-dimensional coordinates,
def test_data(n, i, j, r = 100):
    X = np.random.rand(i, n) * r - r / 2
    Y = np.random.rand(j, n) * r - r / 2
    return X, Y
X, Y = test_data(3, 1000, 1000)
what are the fastest ways to find:
The distance D with shape (i,j) between every point in X and every point in Y
The indices k_i and distance k_d of the k nearest neighbors against all points in X for every point in Y
The indices r_i, r_j and distance r_d of every point in X within distance r of every point j in Y
Given the following sets of restrictions:
Only using numpy
Using any python package
Including the special case:
Y is X
In all cases distance primarily means Euclidean distance, but feel free to highlight methods that allow other distance calculations.
#1. All Distances
only using numpy
The naive method is:
D = np.sqrt(np.sum((X[:, None, :] - Y[None, :, :])**2, axis = -1))
However, this takes up a lot of memory, since it creates an (i, j, n)-shaped intermediate array, and it is very slow.
However, thanks to a trick from @Divakar (eucl_dist package, wiki), we can use a bit of algebra and np.einsum to decompose as such: (X - Y)**2 = X**2 - 2*X*Y + Y**2
D = np.sqrt(                                  # (X - Y) ** 2
    np.einsum('ij, ij ->i', X, X)[:, None] +  # = X ** 2
    np.einsum('ij, ij ->i', Y, Y) -           # + Y ** 2
    2 * X.dot(Y.T))                           # - 2 * X * Y
Y is X
Similar to above:
XX = np.einsum('ij, ij ->i', X, X)
D = np.sqrt(XX[:, None] + XX - 2 * X.dot(X.T))
Beware that floating-point imprecision can make the diagonal terms deviate very slightly from zero with this method. If you need them to be exactly zero, you'll have to set them explicitly:
np.einsum('ii->i', D)[:] = 0
Any Package
scipy.spatial.distance.cdist is the most intuitive builtin function for this, and far faster than bare numpy
from scipy.spatial.distance import cdist
D = cdist(X, Y)
cdist can also deal with many, many distance measures as well as user-defined distance measures (although these are not optimized). Check the documentation linked above for details.
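For example, a couple of sketches ('cityblock' is one of the builtin metrics; the callable is an arbitrary illustration):
D_cityblock = cdist(X, Y, metric='cityblock')                     # builtin Manhattan distance
D_custom = cdist(X, Y, metric=lambda u, v: np.abs(u - v).max())   # user-defined, not optimized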
Y is X
For self-referring distances, scipy.spatial.distance.pdist works similarly to cdist, but returns a 1-D condensed distance array, saving space on the symmetric distance matrix by storing each term only once. You can convert this to a square matrix using squareform:
from scipy.spatial.distance import pdist, squareform
D_cond = pdist(X)
D = squareform(D_cond)
#2. K Nearest Neighbors (KNN)
Only using numpy
We could use np.argpartition to get the k-nearest indices and use those to get the corresponding distance values. So, with D as the array holding the distance values obtained above, we would have -
if k == 1:
    k_i = D.argmin(0)[None]  # keep a leading axis so take_along_axis works below
else:
    k_i = D.argpartition(k, axis = 0)[:k]
k_d = np.take_along_axis(D, k_i, axis = 0)
However we can speed this up a bit by not taking the square roots until we have reduced our dataset. np.sqrt is the slowest part of calculating the Euclidean norm, so we don't want to do that until the end.
D_sq = np.einsum('ij, ij ->i', X, X)[:, None] +\
       np.einsum('ij, ij ->i', Y, Y) - 2 * X.dot(Y.T)
if k == 1:
    k_i = D_sq.argmin(0)[None]
else:
    k_i = D_sq.argpartition(k, axis = 0)[:k]
k_d = np.sqrt(np.take_along_axis(D_sq, k_i, axis = 0))
Now, np.argpartition performs an indirect partition and doesn't necessarily give us the elements in sorted order; it only makes sure that the first k elements are the smallest ones. So, for a sorted output, we need to use argsort on the output from the previous step -
sorted_idx = k_d.argsort(axis = 0)
k_i_sorted = np.take_along_axis(k_i, sorted_idx, axis = 0)
k_d_sorted = np.take_along_axis(k_d, sorted_idx, axis = 0)
If you only need k_i, you never need the square root at all:
D_sq = np.einsum('ij, ij ->i', X, X)[:, None] +\
       np.einsum('ij, ij ->i', Y, Y) - 2 * X.dot(Y.T)
if k == 1:
    k_i = D_sq.argmin(0)[None]
else:
    k_i = D_sq.argpartition(k, axis = 0)[:k]
k_d_sq = np.take_along_axis(D_sq, k_i, axis = 0)
sorted_idx = k_d_sq.argsort(axis = 0)
k_i_sorted = np.take_along_axis(k_i, sorted_idx, axis = 0)
X is Y
In the above code, replace:
D_sq = np.einsum('ij, ij ->i', X, X)[:, None] +\
np.einsum('ij, ij ->i', Y, Y) - 2 * X.dot(Y.T)
with:
XX = np.einsum('ij, ij ->i', X, X)
D_sq = XX[:, None] + XX - 2 * X.dot(X.T)
Any Package
KD-Tree is a much faster method to find neighbors and constrained distances. Be aware that while a KDTree is usually much faster than the brute force solutions above for 3D (as long as you have more than 8 points or so), in n dimensions a KDTree only scales well if you have more than 2**n points. For discussion and more advanced methods for high dimensions, see here.
The most recommended method for implementing KDTree is to use scipy's scipy.spatial.KDTree or scipy.spatial.cKDTree
from scipy.spatial import KDTree
X_tree = KDTree(X)
k_d, k_i = X_tree.query(Y, k = k)
Unfortunately scipy's KDTree implementation is slow and has a tendency to segfault for larger data sets. As pointed out by @HansMusgrave here, pykdtree increases the performance a lot, but is not as common an include as scipy and currently can only deal with Euclidean distance (while the KDTree in scipy can handle Minkowski p-norms of any order).
X is Y
Use instead:
k_d, k_i = X_tree.query(X, k = k)
Arbitrary metrics
A BallTree has similar algorithmic properties to a KDTree. I'm not aware of a parallel/vectorized/fast BallTree in Python, but using sklearn we can still have reasonable KNN queries for user-defined metrics. If available, builtin metrics will be much faster.
import sklearn.neighbors

def d(a, b):
    return max(np.abs(a - b))

tree = sklearn.neighbors.BallTree(X, metric=d)
k_d, k_i = tree.query(Y)
This answer will be wrong if d() is not a metric. The only reason a BallTree is faster than brute force is because the properties of a metric allow it to rule out some solutions. For truly arbitrary functions, brute force is actually necessary.
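In that case, a minimal brute-force sketch, assuming a hypothetical scoring function f(a, b) and the k smallest scores per point in Y:
scores = np.array([[f(y, x) for x in X] for y in Y])  # shape (len(Y), len(X))
k_i = scores.argpartition(k, axis=1)[:, :k]           # indices of the k best per row
k_d = np.take_along_axis(scores, k_i, axis=1)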
#3. Radius search
Only using numpy
The simplest method is just to use boolean indexing:
mask = D_sq < r**2
r_i, r_j = np.where(mask)
r_d = np.sqrt(D_sq[mask])
Any Package
Similar to above, you can use scipy.spatial.KDTree.query_ball_point
r_ij = X_tree.query_ball_point(Y, r = r)
or scipy.spatial.KDTree.query_ball_tree
Y_tree = KDTree(Y)
r_ij = X_tree.query_ball_tree(Y_tree, r = r)
Unfortunately r_ij ends up being a list of index arrays that are a bit difficult to untangle for later use.
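If you do need flat arrays, the ragged result can be untangled with a sketch like this (r_ij[j] holds the indices into X for the j-th point of Y):
r_j = np.repeat(np.arange(len(r_ij)), [len(idx) for idx in r_ij])
r_i = np.concatenate(r_ij).astype(int)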
Much easier is to use cKDTree's sparse_distance_matrix, which can output a coo_matrix
from scipy.spatial import cKDTree
X_cTree = cKDTree(X)
Y_cTree = cKDTree(Y)
D_coo = X_cTree.sparse_distance_matrix(Y_cTree, max_distance=r, output_type='coo_matrix')
r_i = D_coo.row
r_j = D_coo.col
r_d = D_coo.data
This is an extraordinarily flexible format for the distance matrix: it stays an actual (sparse) matrix and, if converted to CSR, can also be used for many vectorized operations.
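For example, counting how many neighbors each point of X has within r becomes a one-liner on the CSR form (a sketch):
D_csr = D_coo.tocsr()
neighbor_counts = np.diff(D_csr.indptr)  # neighbors per point in X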
a,b=np.ogrid[0:n:1,0:n:1]
A=np.exp(1j*(np.pi/3)*np.abs(a-b))
a,b=np.diag_indices_from(A)
A[a,b]=1-1j/np.sqrt(3)
is my basis. It produces a grid which acts as an n*n matrix.
My issue is I need to replace a column in the grid, say for example where b=17.
I need for this column to be:
A=np.exp(1j*(np.pi/3)*np.abs(a-17+geo_mean(x)))
except for where a=b where it needs to stay as:
A[a,b]=1-1j/np.sqrt(3)
geo_mean(x) is just a geometric average of 50 values determined from a pseudo random number generator, defined in my code as:
x = [random.uniform(0, 0.5) for p in range(0, 50)]

def geo_mean(iterable):
    a = np.array(iterable)
    return a.prod() ** (1.0 / len(a))
So how do I go about replacing a column to include the geo_mean in the exponent formula, and do it without changing the diagonal value?
Let's start by saying that diag_indices_from() is kind of useless here, since we already know that the diagonal elements are those with equal indices i and j, running up to n. Therefore, let's simplify the code a little bit at the beginning:
a, b = np.ogrid[0:n:1, 0:n:1]
A = np.exp(1j * (np.pi / 3) * np.abs(a - b))
diag = np.arange(n)
A[diag, diag] = 1 - 1j / np.sqrt(3)
Now, let's say you would like to set the column k values, except for the diagonal element, to
np.exp(1j * (np.pi/3) * np.abs(a - 17 + geo_mean(x)))
(I guess a in the above formula is the row index).
This can be done using integer indices, especially that they are almost computed: we already have diag and we just need to remove from it the index of the diagonal element that needs to be kept unchanged:
r = np.delete(diag, k)
Then
x = np.random.uniform(0, 0.5, (r.size, 50))
A[r, k] = np.exp(1j * (np.pi/3) * np.abs(r - k + geo_mean(x)))
However, for the above to work, you need to rewrite your geo_mean() function in such a way that it works with 2D input arrays (I will also add some checks and conversions to make it backward compatible):
def geo_mean(x):
    x = np.asarray(x)
    dim = len(x.shape)
    x = np.atleast_2d(x)
    v = np.prod(x, axis=1) ** (1.0 / x.shape[1])
    return v[0] if dim == 1 else v
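For instance, a quick check of both input shapes:
geo_mean([1.0, 4.0])                          # 2.0, matching the original 1D version
geo_mean(np.random.uniform(0, 0.5, (3, 50)))  # array of 3 geometric means, one per row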
Suppose I have two 2D NumPy arrays A and B, I would like to compute the matrix C whose entries are C[i, j] = f(A[i, :], B[:, j]), where f is some function that takes two 1D arrays and returns a number.
For instance, if def f(x, y): return np.sum(x * y) then I would simply have C = np.dot(A, B). However, for a general function f, are there NumPy/SciPy utilities I could exploit that are more efficient than doing a double for-loop?
For example, take def f(x, y): return np.sum(x != y) / len(x), where x and y are not simply 0/1-bit vectors.
Here is a reasonably general approach using broadcasting.
First, insert new axes so that the shared coordinate axis of length n lines up, while the i axis of A and the j axis of B can broadcast against each other.
A = A[:, :, None]   # shape (i, n, 1)
B = B[None, :, :]   # shape (1, n, j)
Second, apply your function element by element without performing any reduction.
C = f(A, B)  # e.g. A != B
Reshaping the arrays this way allows numpy to broadcast; the resulting tensor C has shape (i, n, j).
Third, apply any desired reduction by summing over the shared axis you want to discard:
C = C.sum(axis=1) / C.shape[1]  # shape (i, j); the division matches len(x) in f
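A quick end-to-end check of this recipe against explicit loops, using the example f from the question (a sketch with arbitrary shapes):
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 3, (4, 5))   # shape (i, n)
B = rng.integers(0, 3, (5, 6))   # shape (n, j)

C_broadcast = (A[:, :, None] != B[None, :, :]).sum(axis=1) / A.shape[1]
C_loops = np.array([[np.sum(A[i] != B[:, j]) / A.shape[1]
                     for j in range(B.shape[1])] for i in range(A.shape[0])])
assert np.allclose(C_broadcast, C_loops)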
consider my code
a,b,c = np.loadtxt ('test.dat', dtype='double', unpack=True)
a,b, and c are the same array length.
for i in range(len(a)):
    q[i] = 3*10**5*c[i]/100
    x[i] = q[i]*math.sin(a[i])*math.cos(b[i])
    y[i] = q[i]*math.sin(a[i])*math.sin(b[i])
    z[i] = q[i]*math.cos(a[i])
I am trying to find all the combinations of the difference between 2 points in x, y, z to iterate this equation: (xi-xj) + (yi-yj) + (zi-zj) = r
I use this combination code
for combinations in it.combinations(x, 2):
    xdist = (combinations[0] - combinations[1])
for combinations in it.combinations(y, 2):
    ydist = (combinations[0] - combinations[1])
for combinations in it.combinations(z, 2):
    zdist = (combinations[0] - combinations[1])
r = (xdist + ydist + zdist)
This takes a long time for the large file I have, and I am wondering if there is a faster way to get my array for r, preferably using a nested loop?
Such as
if i in range(?):
if j in range(?):
Since you're apparently using numpy, let's actually use numpy; it'll be much faster. It's almost always faster and usually easier to read if you avoid python loops entirely when working with numpy, and use its vectorized array operations instead.
a, b, c = np.loadtxt('test.dat', dtype='double', unpack=True)
q = 3e5 * c / 100 # why not just 3e3 * c?
x = q * np.sin(a) * np.cos(b)
y = q * np.sin(a) * np.sin(b)
z = q * np.cos(a)
Now, your example code after this doesn't do what you probably want it to do - notice how you just say xdist = ... each time? You're overwriting that variable and not doing anything with it. I'm going to assume you want the squared euclidean distance between each pair of points, though, and make a matrix dists with dists[i, j] equal to the distance between the ith and jth points.
The easy way, if you have scipy available:
from scipy.spatial.distance import pdist, squareform

# stack the points into a num_pts x 3 matrix
pts = np.hstack([thing.reshape((-1, 1)) for thing in (x, y, z)])
# get squared euclidean distances in a matrix
dists = squareform(pdist(pts, 'sqeuclidean'))
If your list is enormous, it's more memory-efficient not to use squareform, but then the result is in a condensed format that's a little harder to use when looking up specific pairs of distances.
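If you do stay with the condensed form, the flat position of the pair (i, j) can be computed directly; a sketch assuming i < j and n points:
def condensed_index(n, i, j):
    # position of pair (i, j), i < j, in the pdist output for n points
    return n * i - i * (i + 1) // 2 + (j - i - 1)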
Slightly harder, if you can't / don't want to use scipy:
pts = np.hstack([thing.reshape((-1, 1)) for thing in (x, y, z)])
sqnorms = np.sum(pts ** 2, axis=1)
dists = sqnorms.reshape((-1, 1)) - 2 * np.dot(pts, pts.T) + sqnorms
which basically implements the formula (a - b)^2 = a^2 - 2ab + b^2, but vectorized.
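One caveat with this decomposition: floating-point cancellation can leave tiny negative entries, so it's worth clamping before taking any square roots:
dists = np.maximum(dists, 0)  # guard against small negatives from cancellation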
Apologies for not posting a full solution, but you should avoid nesting calls to range(), as in Python 2 it builds a new list every time it is called. You are better off either calling range() once and storing the result, or using a loop counter instead.
For example, instead of:
max = 50
for number in range(0, max):
    doSomething(number)
...you would do:
max = 50
current = 0
while current < max:
    doSomething(current)
    current += 1
Well, the complexity of your calculation is pretty high. Also, you need huge amounts of memory if you want to store all r values in a single list. Often you don't need a list, and a generator might be enough for what you want to do with the values.
Consider this code:
from itertools import combinations

def calculate(x, y, z):
    for xi, xj in combinations(x, 2):
        for yi, yj in combinations(y, 2):
            for zi, zj in combinations(z, 2):
                yield (xi - xj) + (yi - yj) + (zi - zj)
This returns a generator that computes only one value each time you advance it with next().
gen = calculate(range(10), range(10, 20), range(20, 30))
next(gen)  # returns -3
next(gen)  # returns -4, and so on
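The three nested loops can also be collapsed with itertools.product; a sketch that behaves identically:
from itertools import combinations, product

def calculate(x, y, z):
    pairs = (combinations(s, 2) for s in (x, y, z))
    for (xi, xj), (yi, yj), (zi, zj) in product(*pairs):
        yield (xi - xj) + (yi - yj) + (zi - zj)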