Using a Python array for calculations - python

I have a numpy array named distance.
It holds distances from the center of a circle, divided into equal intervals of 0.12627551.
array([0.        , 0.12627551, 0.25255103, 0.37882654, 0.50510206,
       0.63137757, 0.75765309, 0.8839286 , 1.01020411, 1.13647963,
       1.26275514])
I need to use this to find the area of each annulus of the circle. The formula is:
math.pi*(R**2-r**2)
wherein "R" denotes the large radii and "r" the small radii. Example for area of second annuli is math.pi(0.25255103^2-0.12627551^2)
I need to repeat this for the entire distance array and I would like to know how?

>>> import math
>>> import numpy as np
>>> a = np.array([0.        , 0.12627551, 0.25255103, 0.37882654, 0.50510206,
...               0.63137757, 0.75765309, 0.8839286 , 1.01020411, 1.13647963,
...               1.26275514])
>>> [math.pi*(R**2 - r**2) for R, r in zip(a[1:], a)]
[0.050094279561751477, 0.15028285455350326, 0.25047140574288157, 0.35065999660288272, 0.45084853192401186, 0.55103713865226189, 0.65122565810514155, 0.75141421722864576, 0.85160284775926787, 0.95179134340977567]
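Since the radii already live in a numpy array, the loop can be vectorized as well. A minimal sketch of the same computation (np.diff takes consecutive differences, so np.diff(a**2) is R**2 - r**2 for every adjacent pair):

import numpy as np

a = np.array([0.        , 0.12627551, 0.25255103, 0.37882654, 0.50510206,
              0.63137757, 0.75765309, 0.8839286 , 1.01020411, 1.13647963,
              1.26275514])

# pi * (R**2 - r**2) for all consecutive radii at once
annulus_areas = np.pi * np.diff(a**2)
print(annulus_areas)  # same ten values as the list comprehension above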

Related

Min/max scaling with additional points

I'm trying to normalize an array within a range, e.g. [10, 100].
But I also want to manually specify additional points in my result array, for example:
num = [1,2,3,4,5,6,7,8]
num_expected = [min(num), 5, max(num)]
expected_range = [10, 20, 100]
result_array = normalize(num, num_expected, expected_range)
Intended results:
Values from 1 to 5 are normalized to the range [10, 20].
5 in the num array is mapped to 20 in the expected range.
Values from 6 to 8 are normalized to the range (20, 100].
I know I can do it by normalizing the array twice, but I might have many additional points to add. I was wondering if there's any built-in function in numpy or scipy to do this?
I've checked MinMaxScaler in sklearn, but did not find the functionality I want.
Thanks!
Linear interpolation will do exactly what you want:
import scipy.interpolate
interp = scipy.interpolate.interp1d(num_expected, expected_range)
Then just pass numbers or arrays of numbers that you want to interpolate:
In [20]: interp(range(1, 9))
Out[20]:
array([ 10.        ,  12.5       ,  15.        ,  17.5       ,
        20.        ,  46.66666667,  73.33333333, 100.        ])
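For a plain 1-D piecewise-linear mapping like this, numpy's built-in np.interp works too, without importing scipy; a sketch using the variables from the question:

import numpy as np

num = [1, 2, 3, 4, 5, 6, 7, 8]
num_expected = [min(num), 5, max(num)]  # anchor points: [1, 5, 8]
expected_range = [10, 20, 100]          # where those anchors should land

# Linear interpolation through every anchor point.
result_array = np.interp(num, num_expected, expected_range)
print(result_array)
# [ 10.   12.5  15.   17.5  20.   46.66666667  73.33333333 100. ]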

How to multiply a 3D matrix with a 2D matrix efficiently in numpy

I have two multidimensional arrays, which I want to multiply with each other. One has the shape N,N,3 and the other has the shape N,N.
Let me set the stage:
I have an array of atom positions of the shape N,3:
atom_positions = [[x1, y1, z1],
                  [x2, y2, z2],
                  [x3, y3, z3],
                  ...
                 ]
From these I calculate an upper triangular matrix of distance vectors so that the resulting N,N,3 matrix contains all unique pair distance vectors r_ij of the vectors inside atom_positions:
pair_distance_vectors = [[[0,0,0], [x2-x1,y2-y1,z2-z1], [x3-x1,y3-y1,z3-z1], ...],
                         [[0,0,0], [0,0,0],             [x3-x2,y3-y2,z3-z2], ...],
                         ...
                        ]
Now I want to normalize each of these pair distance vectors. For that I want to use my N,N pair_distances array, which contains the length of every vector inside pair_distance_vectors.
The formula for a single vector is:
r_ij/|r_ij|
I want to do that with a matrix operation, where every entry in the N,N array becomes a scalar by which the corresponding vector inside the N,N,3 array is multiplied. I'm pretty sure this can be achieved with numpy, using numpy.dot() or a different function, but I just can't find the answer myself. Also, I'm afraid that even if I do find a transformation which allows for this, my maths will be faulty.
Here's some demonstration code, which achieves what I want in a very inefficient fashion:
import numpy as np

pair_distance_vectors = np.ones(shape=(2, 2, 3))
pair_distances = np.array(((1, 2), (3, 4)))
normalized_pair_distance_vectors = np.zeros(shape=(2, 2, 3))
for i, vec_list in enumerate(pair_distance_vectors):
    for j, vec in enumerate(vec_list):
        normalized_pair_distance_vectors[i, j] = vec * pair_distances[i, j]
print(normalized_pair_distance_vectors)
Thanks in advance.
EDIT: Maybe this is clearer:
distance_vectors = [[[x11,y11,z11], [x12,y12,z12], [x13,y13,z13], ...],
                    [[x21,y21,z21], [x22,y22,z22], [x23,y23,z23], ...],
                    ...]
distance_matrix = [[r_11, r_12, r_13, ...],
                   [r_21, r_22, r_23, ...],
                   ...]
norm_distance_vectors = some_operation(distance_vectors, distance_matrix)
norm_distance_vectors = [[r_11*[x11,y11,z11], r_12*[x12,y12,z12], r_13*[x13,y13,z13], ...],
                         [r_21*[x21,y21,z21], r_22*[x22,y22,z22], r_23*[x23,y23,z23], ...],
                         ...]
You won't need a loop. The trick is to expand your pair_distances into the 3rd dimension by repeating it m times (m being the dimension of your vectors, here 3) and then dividing the two arrays element-wise (this works for any m-dimensional vectors; replace 3 with m):
pair_distances = np.repeat(pair_distances[:, :, None], 3, axis=2)
normalized_pair_distance_vectors = np.nan_to_num(pair_distance_vectors / pair_distances)
Output for your example inputs:
[[[1.         1.         1.        ]
  [0.5        0.5        0.5       ]]

 [[0.33333333 0.33333333 0.33333333]
  [0.25       0.25       0.25      ]]]
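Note that numpy broadcasting makes the explicit np.repeat unnecessary: a trailing length-1 axis is stretched automatically during the division. A sketch with the same inputs:

import numpy as np

pair_distance_vectors = np.ones(shape=(2, 2, 3))
pair_distances = np.array(((1, 2), (3, 4)))

# pair_distances[:, :, None] has shape (2, 2, 1); broadcasting stretches
# the last axis to length 3 without materializing any repeated copies.
normalized = np.nan_to_num(pair_distance_vectors / pair_distances[:, :, None])
print(normalized)  # same output as above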

Calculate Euclidean distance between two python arrays

I want to write a function to calculate the Euclidean distance between each coordinate in list_a and each coordinate in list_b, producing an array of distances with a rows and b columns (where a is the number of coordinates in list_a and b is the number of coordinates in list_b).
NB: I do not want to use any libraries other than numpy, for simplicity.
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
Running the function would generate:
>>> np.array([[0., 5.830951894845301],
...           [2.236, 3.605551275463989],
...           [5.830951894845301, 0.],
...           [5.830951894845301, 2.8284271247461903],
...           [4.123105625617661, 2.23606797749979]])
I have been trying to run the code below:
def run_euc(list_a, list_b):
    euc_1 = [np.subtract(list_a, list_b)]
    euc_2 = sum(sum([i**2 for i in euc_1]))
    return np.sqrt(euc_2)
But I am getting the following error:
ValueError: operands could not be broadcast together with shapes (5,2) (2,2)
Thank you.
Here, you can just use np.linalg.norm to compute the Euclidean distance. Your bug is due to np.subtract expecting the two inputs to have the same shape (or shapes that broadcast together).
import numpy as np
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
def run_euc(list_a, list_b):
    return np.array([[np.linalg.norm(i - j) for j in list_b] for i in list_a])
print(run_euc(list_a, list_b))
The code produces:
[[0.         5.83095189]
 [2.23606798 3.60555128]
 [5.83095189 0.        ]
 [5.83095189 2.82842712]
 [4.12310563 2.23606798]]
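For larger inputs, the double Python loop can also be replaced by broadcasting; a sketch that produces the same matrix in a single vectorized call:

import numpy as np

list_a = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]])
list_b = np.array([[0, 1], [5, 4]])

# (5, 1, 2) minus (1, 2, 2) broadcasts to (5, 2, 2): every pairwise
# difference, then the norm over the last (coordinate) axis.
dists = np.linalg.norm(list_a[:, None, :] - list_b[None, :, :], axis=-1)
print(dists)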
I wonder what is stopping you from using SciPy. Since you are already using numpy, perhaps you can give SciPy a try; it is not a heavy dependency.
Why?
It has many mathematical functions with efficient implementations to make good use of your computing power.
With that in mind, here is a distance_matrix function exactly for the purpose you've mentioned.
Concretely, it takes your list_a (m x k matrix) and list_b (n x k matrix) and outputs m x n matrix with p-norm (p=2 for euclidean) distance between each pair of points across the two matrices.
from scipy.spatial import distance_matrix
distances = distance_matrix(list_a, list_b)
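A quick sanity check with the arrays from the question (the printed 5 x 2 matrix should match the expected output above):

import numpy as np
from scipy.spatial import distance_matrix

list_a = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]])
list_b = np.array([[0, 1], [5, 4]])

# Rows follow list_a, columns follow list_b.
print(distance_matrix(list_a, list_b))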
I think this works
import numpy as np

def distance(x, y):
    x = np.array(x)
    y = np.array(y)
    p = np.sum((x - y)**2)
    d = np.sqrt(p)
    return d
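As written, this returns the distance between a single pair of points; to get the a-by-b matrix the question asks for, you would still loop over both lists, e.g. (a usage sketch building on the distance() function above):

import numpy as np

list_a = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]])
list_b = np.array([[0, 1], [5, 4]])

# One call of distance() per pair, collected into the 5 x 2 result.
result = np.array([[distance(i, j) for j in list_b] for i in list_a])
print(result)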
Another way you can do this is:
np.array(
    [np.sqrt((list_a[:, 1] - list_b[i, 1])**2 + (list_a[:, 0] - list_b[i, 0])**2)
     for i in range(len(list_b))]
).T
Output:
array([[0.        , 5.83095189],
       [2.23606798, 3.60555128],
       [5.83095189, 0.        ],
       [5.83095189, 2.82842712],
       [4.12310563, 2.23606798]])
This code could be written in a simpler and more efficient way, so if you find anything that could be improved, please let me know in the comments.
I hope this answers the question, but it is a repeat of:
Minimum Euclidean distance between points in two different Numpy arrays, not within
# Import package
import numpy as np
# Define unequal matrices
xy1 = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
xy2 = np.array([[0,1],[5,4]])
P = np.add.outer(np.sum(xy1**2, axis=1), np.sum(xy2**2, axis=1))
N = np.dot(xy1, xy2.T)
dists = np.sqrt(P - 2*N)
print(dists)
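One caveat worth adding (my note, not part of the original answer): because of floating-point round-off, P - 2*N can come out very slightly negative when a point appears in both arrays, turning the square root into NaN. Clamping at zero first avoids that:

# Guard against tiny negative values from round-off before the sqrt.
dists = np.sqrt(np.maximum(P - 2*N, 0))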
Using scipy, you could compute distance between each pair as follows:
import numpy as np
from scipy.spatial import distance
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
dist = distance.cdist(list_a, list_b, 'euclidean')
print(dist)
Result:
array([[0.        , 5.83095189],
       [2.23606798, 3.60555128],
       [5.83095189, 0.        ],
       [5.83095189, 2.82842712],
       [4.12310563, 2.23606798]])

Calculate average weighted euclidean distance between values in numpy

I searched a bit around and found comparable questions/answers, but none of them returned the correct results for me.
Situation:
I have an array with a number of clumps of values == 1, while the rest of the cells are set to zero. Each cell is a square (width=height).
Now I want to calculate the average distance between all 1 values.
The formula should be like this: d = sqrt ( (( x2 - x1 )*size)**2 + (( y2 - y1 )*size)**2 )
Example:
import numpy as np
from scipy.spatial.distance import pdist
a = np.array([[1, 0, 1],
              [0, 0, 0],
              [0, 0, 1]])
# Given that each cell is 10 m wide/high
val = 10
d = pdist(a, lambda u, v: np.sqrt((((u - v) * val)**2).sum()))
d
array([ 14.14213562,  10.        ,  10.        ])
After that I would calculate the average via d.mean(). However, the result in d is obviously wrong: the distance between the two cells in the top row alone should already be 20 (two cells crossed * 10). Is there something wrong with my formula, math or approach?
You need the actual coordinates of the non-zero markers, to compute the distance between them:
>>> import numpy as np
>>> from scipy.spatial.distance import squareform, pdist
>>> a = np.array([[1, 0, 1],
...               [0, 0, 0],
...               [0, 0, 1]])
>>> np.where(a)
(array([0, 0, 2]), array([0, 2, 2]))
>>> x, y = np.where(a)
>>> coords = np.vstack((x, y)).T
>>> coords
array([[0, 0],   # That's the coordinate of the "1" in the top left,
       [0, 2],   # top right,
       [2, 2]])  # and bottom right.
Next you want to calculate the distance between these points. You use pdist for this, like so:
>>> dists = pdist(coords) * 10  # Uses the Euclidean distance metric by default.
>>> squareform(dists)
array([[ 0.        , 20.        , 28.28427125],
       [20.        ,  0.        , 20.        ],
       [28.28427125, 20.        ,  0.        ]])
In this last matrix you will find (above the diagonal) the distance between each marked point in a and every other marked point. In this case, you had 3 coordinates, so it gives you the distance between node 0 (a[0,0]) and node 1 (a[0,2]), node 0 and node 2 (a[2,2]), and finally between node 1 and node 2. To put it in different words, if S = squareform(dists), then S[i,j] returns the distance between the coordinates on row i of coords and row j.
The values in the upper triangle of that last matrix are also present in the variable dists, from which you can derive the mean easily, without having to perform the relatively expensive computation of the squareform (shown here just for demonstration purposes):
>>> dists
array([ 20.        ,  28.28427125,  20.        ])
>>> dists.mean()
22.761423749153966
Remark that your computed solution "looks" nearly correct (aside from a factor of 2), because of the example you chose. What pdist does is take the Euclidean distance between the first point in the n-dimensional space and the second, then between the first and the third, and so on. In your example, that means it computes the distance between a point on row 0: that point has coordinates in 3-dimensional space given by [1,0,1]. The 2nd point is [0,0,0]. The Euclidean distance between those two is sqrt(2) ≈ 1.4. Then, the distance between the first and the 3rd coordinate (the last row in a) is only 1. Finally, the distance between the 2nd coordinate (row 1: [0,0,0]) and the 3rd (last row, row 2: [0,0,1]) is also 1. So remember: pdist interprets its first argument as a stack of coordinates in n-dimensional space, n being the number of elements in the tuple of each node.
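Putting the whole answer together, a compact sketch of the mean pairwise distance (np.argwhere returns the same coordinates as the np.where/vstack combination above):

import numpy as np
from scipy.spatial.distance import pdist

a = np.array([[1, 0, 1],
              [0, 0, 0],
              [0, 0, 1]])
cell_size = 10  # each cell is 10 m wide/high

coords = np.argwhere(a)  # coordinates of the cells containing 1
print(pdist(coords).mean() * cell_size)  # ~22.76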

Why do I get rows of zeros in my 2D fft?

I am trying to replicate the results from a paper.
"Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al(2006)
The figures published show "the log of the variance of the 2D-FT".
I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array.
Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:
import numpy as np
from numpy import ma
from matplotlib import pyplot as plt
from Scientific.IO.NetCDF import NetCDFFile
### input directory
indir = '/home/nicholas/data/'
### get the flux data which is in
### [time(5day ave for 10 years),latitude,longitude]
nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r')
cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:]
cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land
nc.close()
cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s
### create an array that consists of the seasonal signal for each pixel
year_stack = np.split(cflux, 10, axis=0)
year_stack = np.array(year_stack)
signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask
### average the array over latitude(or longitude)
signal_time_lon = ma.mean(signal_array, axis=1)
### do a 2D Fourier Transform of the time/space image
ft = np.fft.fft2(signal_time_lon)
mgft = np.abs(ft)
ps = mgft**2
log_ps = np.log(mgft)
log_mgft= np.log(mgft)
Every second row of the ft consists completely of zeros. Why is this?
Would it be acceptable to add a small random number to the signal to avoid this?
signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05
EDIT: Adding images and clarifying meaning
The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason that I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal. So the ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image.
Here is the starting image, generated with the following code:
plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')
Here is the result, using the following code:
ft = np.fft.rfft2(signal_time_lon)
mgft = np.abs(ft)
ps = mgft**2
log_ps = np.log1p(mgft)
plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')
It may not be clear in the image, but it is only every second row that contains nothing but zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
Consider the following example:
In [59]: from scipy import absolute, fft
In [60]: absolute(fft([1,2,3,4]))
Out[60]: array([ 10.        ,   2.82842712,   2.        ,   2.82842712])
In [61]: absolute(fft([1,2,3,4, 1,2,3,4]))
Out[61]:
array([ 20.        ,   0.        ,   5.65685425,   0.        ,
         4.        ,   0.        ,   5.65685425,   0.        ])
In [62]: absolute(fft([1,2,3,4, 1,2,3,4, 1,2,3,4]))
Out[62]:
array([ 30.        ,   0.        ,   0.        ,   8.48528137,
         0.        ,   0.        ,   6.        ,   0.        ,
         0.        ,   8.48528137,   0.        ,   0.        ])
If X[k] = fft(x) and Y[k] = fft([x x]), then Y[2k] = 2*X[k] for k in {0, 1, ..., N-1}, and Y is zero at the odd indices.
Therefore, I would look into how your signal_time_lon is being tiled. That may be where the problem lies.
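The same pattern is easy to reproduce with plain numpy (a sketch; np.tile plays the role of the repeated seasonal cycle):

import numpy as np

x = np.array([1, 2, 3, 4])

# Tiling the signal twice concentrates all its energy in the even bins;
# every odd bin of the FFT comes out exactly zero.
print(np.abs(np.fft.fft(np.tile(x, 2))))
# [20.  0.  5.65685425  0.  4.  0.  5.65685425  0.]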
