I'm using doatools.py library (https://github.com/morriswmz/doatools.py)
Now, my code looks like:
import numpy as np
from scipy import constants as const
import math
import doatools.model as model
import doatools.estimation as estimation
def calculate_wavelength(frequency):
return const.speed_of_light / frequency
# Uniform circular array
# X
# |
# X---------X
# |
# X
NUMBER_OF_ELEMENTS = 4 # elements are shown as "X"
RADIUS = 0.47 / 2
FREQ_MHZ = 315
freq = FREQ_MHZ * const.mega
wavelength = calculate_wavelength(freq)
antenna_array = model.UniformCircularArray(NUMBER_OF_ELEMENTS, RADIUS)
# Create a MUSIC-based estimator.
grid = estimation.FarField1DSearchGrid()
estimator = estimation.MUSIC(antenna_array, wavelength, grid)
R = np.array([[1.5, 2, 3, 4], [4, 5, 6, 5], [45, 5, 5, 6], [5, 1, 0, 5]])
_, estimates = estimator.estimate(R, 1, return_spectrum=False, refine_estimates=True)
print('Estimates: {0}'.format(estimates.locations))
I can generate a signal with this library, but how do I use my own? For example, a signal from an ADC (like this:
-> Switching to antenna 0 : [0, 4, 7, 10]
-> Switching to antenna 1 : [5, 6, 11, 83]
-> Switching to antenna 2 : [0, 23, 2, 34]
-> Switching to antenna 3 : [23, 105, 98, 200]
)
I think your question is how you should feed the real data from antennas, right?
Presumably your data are ordered in time. I mean, in the case of "antenna 0 : [0, 4, 7, 10]", 0 is the first sample in, then 4 and 7 in order, and 10 is the last one in time.
If yes, you can keep them as a simple 4x4 matrix, just like what you typed above (one row per antenna, one column per time sample):
r = 4x4 matrix of
0, 4, 7, 10
5, 6, 11, 83
0, 23, 2, 34
23, 105, 98, 200
so that r(0,0) = 0, r(0,1) = 4, r(0,2) = 7, r(0,3) = 10, r(1,0) = 5, r(1,1) = 6, and so on.
R is then the product of r and its Hermitian (conjugate) transpose, which for a numpy array is r.conj().T:
R = r @ r.conj().T
(optionally divided by the number of snapshots; the scaling does not change the MUSIC subspaces). This is the covariance matrix that you need to pass as the 1st argument of estimator.estimate().
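Putting that together with the snippet from the question, a minimal sketch might look like this. It assumes the four streams are simultaneous snapshots from the four elements and reuses the estimator built exactly as in the question; with a switched, single-channel receiver the columns are not truly simultaneous, so treat this only as a starting point:
import numpy as np
# ADC readings from the question: one row per antenna element, one column per snapshot
r = np.array([[0, 4, 7, 10],
              [5, 6, 11, 83],
              [0, 23, 2, 34],
              [23, 105, 98, 200]], dtype=float)
# Sample covariance matrix (the 1/N factor does not affect the MUSIC subspaces)
R = r @ r.conj().T / r.shape[1]
# Feed it to the MUSIC estimator from the snippet above
_, estimates = estimator.estimate(R, 1, return_spectrum=False, refine_estimates=True)
print('Estimates: {0}'.format(estimates.locations))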
I'm trying to build a grid world using numpy.
The grid is 4*4 and laid out in a square.
The first and last squares (i.e. 1 and 16) are terminal squares.
At each time step you can move one step in any direction: up, down, left or right.
Once you enter one of the terminal squares no further moves are possible and the game terminates.
The first and last columns are the left and right edges of the square whilst the first and last rows represent the top and bottom edges.
If you are on an edge, for example the left one and attempt to move left, instead of moving left you stay in the square you started in. Similarly you remain in the same square if you try and cross any of the other edges.
Although the grid is a square I've implemented it as an array.
States_r calculates the position of the states after a move right. 1 and 16 stay where they are because they are terminal states (note the code uses zero based counting so 1 and 16 are 0 and 15 respectively in the code).
The rest of the squares are increased by one. The code for states_r works, however the squares on the right edge, i.e. (4, 8, 12), should also stay where they are, but the states_r code doesn't do that.
states_l is my attempt to include the edge condition for the left edge of the square. The logic is the same: the terminal states (1, 16) should not move, nor should the squares on the left edge (5, 9, 13). I think the general logic is correct but it's producing an error.
states = np.arange(16)
states_r = states[np.where((states + 1 <= 15) & (states != 0), states + 1, states)]
states_l = states[np.where((max(1, (states // 4) * 4) <= states - 1) & (states != 15), states - 1, states)]
The first example states_r works, it handles the terminal state but does not handle the edge condition.
The second example is my attempt to include the edge condition, however it is giving me the following error:
"The truth value of an array with more than one element is ambiguous."
Can someone please explain how to fix my code?
Or alternatively suggest another solution; ideally I want the code to be fast (so I can scale it up), so I want to avoid for loops if possible.
If I understood correctly you want arrays which indicate for each state where the next state is, depending on the move (right, left, up, down).
If so, I guess your implementation of states_r is not quite right. I would suggest switching to a 2D representation of your grid, because a lot of the things you describe are easier and more intuitive to handle if you have x and y directly (at least for me).
import numpy as np
n = 4
states = np.arange(n*n).reshape(n, n)
states_r, states_l, states_u, states_d = (states.copy(), states.copy(),
states.copy(), states.copy())
states_r[:, :n-1] = states[:, 1:]
states_l[:, 1:] = states[:, :n-1]
states_u[1:, :] = states[:n-1, :]
states_d[:n-1, :] = states[1:, :]
# up:
# [[ 0,  1,  2,  3],
#  [ 0,  1,  2,  3],
#  [ 4,  5,  6,  7],
#  [ 8,  9, 10, 11]]
#
# left:                  state:                 right:
# [[ 0,  0,  1,  2],     [[ 0,  1,  2,  3],     [[ 1,  2,  3,  3],
#  [ 4,  4,  5,  6],      [ 4,  5,  6,  7],      [ 5,  6,  7,  7],
#  [ 8,  8,  9, 10],      [ 8,  9, 10, 11],      [ 9, 10, 11, 11],
#  [12, 12, 13, 14]]      [12, 13, 14, 15]]      [13, 14, 15, 15]]
#
# down:
# [[ 4,  5,  6,  7],
#  [ 8,  9, 10, 11],
#  [12, 13, 14, 15],
#  [12, 13, 14, 15]]
If you want to exclude the terminal states, you can do something like this:
terminal_states = np.zeros((n, n), dtype=bool)
terminal_states[0, 0] = True
terminal_states[-1, -1] = True
states_r[terminal_states] = states[terminal_states]
states_l[terminal_states] = states[terminal_states]
states_u[terminal_states] = states[terminal_states]
states_d[terminal_states] = states[terminal_states]
If you prefer the 1D approach:
import numpy as np
n = 4
states = np.arange(n*n)
valid_s = np.ones(n*n, dtype=bool)
valid_s[0] = False
valid_s[-1] = False
states_r = np.where(np.logical_and(valid_s, states % n < n-1), states+1, states)
states_l = np.where(np.logical_and(valid_s, states % n > 0), states-1, states)
states_u = np.where(np.logical_and(valid_s, states > n-1), states-n, states)
states_d = np.where(np.logical_and(valid_s, states < n**2-n), states+n, states)
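As a quick usage sketch, each of these arrays can be indexed by the current state to get its successor:
print(states_r[6])   # 7  -> moving right from state 6
print(states_u[6])   # 2  -> moving up from state 6
print(states_r[3])   # 3  -> state 3 is on the right edge, so it stays put
print(states_l[0])   # 0  -> state 0 is terminal, so it never moves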
Another way of doing it without preallocating arrays:
states = np.arange(16).reshape(4,4)
states_l = np.hstack((states[:,0][:,None],states[:,:-1],))
states_r = np.hstack((states[:,1:],states[:,-1][:,None]))
states_d = np.vstack((states[1:,:],states[-1,:]))
states_u = np.vstack((states[0,:],states[:-1,:]))
To get them all in 1-D, you can always flatten()/ravel()/reshape(-1) the 2-D arrays.
[[ 0 1 2 3]
[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[ 0 0 1 2] [[ 0 1 2 3] [[ 1 2 3 3]
[ 4 4 5 6] [ 4 5 6 7] [ 5 6 7 7]
[ 8 8 9 10] [ 8 9 10 11] [ 9 10 11 11]
[12 12 13 14]] [12 13 14 15]] [13 14 15 15]]
[[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[12 13 14 15]]
And for corners you can do:
states_u[-1,-1] = 15
states_l[-1,-1] = 15
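The other terminal corner (state 0) needs the same treatment for the moves that would otherwise leave it:
states_d[0,0] = 0
states_r[0,0] = 0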
I have a number of x/y data points of varying sizes and I need to rescale each one to the same fixed size.
For example, given two sets of x/y data, where the first has 12 data points and the second 6; the maximum y value of the first is 80 and of the second 55:
X1 = np.array([ 1, 2, 3, 4, 5, 6 ,7, 8, 9, 10, 11, 12 ])
Y1 = np.array([ 10, 20, 50, 55, 70, 77 ,78, 80, 55, 50, 21, 12 ])
X2 = [ 1, 2, 3, 4, 5, 6 ]
Y2 = [ 10, 20, 50, 55, 50, 10 ]
How can I rescale this data so that they both have 8 data points and the maximum y value is 60? I'm developing in python with numpy/matplotlib.
If you want to add/remove points from a data set my first idea would be to do a regression on the data set with np.polyfit or scipy.optimize.curve_fit (depending on what kind of function you expect your points to follow), then generate new points from that regression.
new_x_points = [1, 2, 3, 4, 5, 6, 7, 8]
coeff = np.polyfit(X1, Y1, deg = 2)
new_y_points = np.polyval(coeff, new_x_points)
Moving points from an interval (a,b) to the interval (c,d) is purely a mathematical problem. If x is on the interval (a,b) then
f(x) = (x - a) * h + c
where
h = (d - c)/(b - a)
is a linear map to the interval (c, d).
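For instance, here is a small numpy sketch of that map applied to the example data; mapping the y values onto (Y1.min(), 60) is just one possible choice of target interval, and np.interp is used only to resample to 8 points:
import numpy as np
X1 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
Y1 = np.array([10, 20, 50, 55, 70, 77, 78, 80, 55, 50, 21, 12])
def rescale(x, a, b, c, d):
    # linear map from (a, b) to (c, d): f(x) = (x - a) * h + c with h = (d - c) / (b - a)
    h = (d - c) / (b - a)
    return (x - a) * h + c
# rescale the y values so the maximum becomes 60 (keeping the minimum where it is)
Y1_scaled = rescale(Y1, Y1.min(), Y1.max(), Y1.min(), 60.0)
# resample to 8 points with plain linear interpolation
new_x = np.linspace(X1.min(), X1.max(), 8)
new_y = np.interp(new_x, X1, Y1_scaled)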
I have a numpy 2d array and I need to transform it in a way that the first row remains the same, the second row moves by one position to right, (it can wrap around or just have zero padded to the front). Third row shifts 3 positions to the right, etc.
I can do this with a for loop, but that is not very efficient. I am guessing there should be a filtering matrix that, multiplied by the original one, will have the same effect, or maybe a numpy trick that will help me do this? Thanks!
I have looked into numpy.roll() but I don't think it can work on each row separately.
import numpy as np
p = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
'''
p = [ 1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16]
desired output:
p'= [ 1 2 3 4
0 5 6 7
0 0 9 10
0 0 0 13]
'''
We can extract sliding windows from a zero-padded version of the input for a memory-efficient, and hence performant, approach. To get those windows, we can leverage scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided.
Hence, the solution would be -
from skimage.util.shape import view_as_windows
def slide_by_one(p):
m,n = p.shape
z = np.zeros((m,m-1),dtype=p.dtype)
a = np.concatenate((z,p),axis=1)
w = view_as_windows(a,(1,p.shape[1]))[...,0,:]
r = np.arange(m)
return w[r,r[::-1]]
Sample run -
In [60]: p # generic sample of size mxn
Out[60]:
array([[ 1, 5, 9, 13, 17],
[ 2, 6, 10, 14, 18],
[ 3, 7, 11, 15, 19],
[ 4, 8, 12, 16, 20]])
In [61]: slide_by_one(p)
Out[61]:
array([[ 1, 5, 9, 13, 17],
[ 0, 2, 6, 10, 14],
[ 0, 0, 3, 7, 11],
[ 0, 0, 0, 4, 8]])
We can leverage the regular ramp pattern of the shifts for a more efficient approach with a more direct usage of np.lib.stride_tricks.as_strided, like so -
def slide_by_one_v2(p):
m,n = p.shape
z = np.zeros((m,m-1),dtype=p.dtype)
a = np.concatenate((z,p),axis=1)
s0,s1 = a.strides
return np.lib.stride_tricks.as_strided(a[:,m-1:],shape=(m,n),strides=(s0-s1,s1))
Another one with some masking -
def slide_by_one_v3(p):
m,n = p.shape
z = np.zeros((len(p),1),dtype=p.dtype)
a = np.concatenate((p,z),axis=1)
return np.triu(a[:,::-1],1)[:,::-1].flat[:-m].reshape(m,-1)
Here is a simple method based on zero-padding and reshaping. It is fast because it avoids advanced indexing and other overheads.
def pp(p):
m,n = p.shape
aux = np.zeros((m,n+m-1),p.dtype)
np.copyto(aux[:,:n],p)
return aux.ravel()[:-m].reshape(m,n+m-2)[:,:n].copy()
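As a quick usage check against the example from the question:
import numpy as np
p = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
print(pp(p))
# [[ 1  2  3  4]
#  [ 0  5  6  7]
#  [ 0  0  9 10]
#  [ 0  0  0 13]]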
If I have an array, let's say np.array([4,8,-2,9,6,0,3,-6]), and I would like to add the previous number to the next element, how do I do it?
And every time the number 0 shows up the addition of elements 'restarts'.
An example with the above array, I should get the following output when I run the function:
stock = np.array([4,12,10,19,25,0,3,-3]) is the right output if the above array is passed in as transactions.
def cumulativeStock(transactions):
# insert your code here
return stock
I can't think of a method to solving this problem. Any help would be very appreciated.
I believe you mean something like this?
z = np.array([4,8,-2,9,6,0,3,-6])
n = z == 0
[False False False False False True False False]
res = np.split(z, np.where(n)[0])
[array([ 4, 8, -2, 9, 6]), array([ 0, 3, -6])]
res_total = [np.cumsum(x) for x in res]
[array([ 4, 12, 10, 19, 25]), array([ 0, 3, -3])]
np.concatenate(res_total)
[ 4 12 10 19 25 0 3 -3]
another vectorized solution:
import numpy as np
stock = np.array([4, 8, -2, 9, 6, 0, 3, -6])
breaks = stock == 0
tmp = np.cumsum(stock)
brval = np.diff(np.concatenate(([0], -tmp[breaks])))
stock[breaks] = brval
np.cumsum(stock)
# array([ 4, 12, 10, 19, 25, 0, 3, -3])
import numpy as np
stock = np.array([4, 12, 10, 19, 25, 0, 3, -3, 4, 12, 10, 0, 19, 25, 0, 3, -3])
def cumsum_stock(stock):
## Detect all Zero's first
zero_p = np.where(stock==0)[0]
## Create empty array to append final result
final_stock = np.empty(shape=[0, len(zero_p)])
for i in range(len(zero_p)):
## First Zero detection
if(i==0):
stock_first_part = np.cumsum(stock[:zero_p[0]])
stock_after_zero_part = np.cumsum(stock[zero_p[0]:zero_p[i+1]])
final_stock = np.append(final_stock, stock_first_part)
final_stock = np.append(final_stock, stock_after_zero_part)
## Last Zero detection
elif(i==(len(zero_p)-1)):
stock_last_part = np.cumsum(stock[zero_p[i]:])
final_stock = np.append(final_stock, stock_last_part, axis=0)
## Intermediate Zero detection
else:
intermediate_stock = np.cumsum(stock[zero_p[i]:zero_p[i+1]])
final_stock = np.append(final_stock, intermediate_stock, axis=0)
return(final_stock)
final_stock = cumsum_stock(stock).astype(int)
#Output
final_stock
Out[]: array([ 4, 16, 26, ..., 0, 3, 0])
final_stock.tolist()
Out[]: [4, 16, 26, 45, 70, 0, 3, 0, 4, 16, 26, 0, 19, 44, 0, 3, 0]
def cumulativeStock(transactions):
def accum(x):
acc=0
for i in x:
if i==0:
acc=0
acc+=i
yield acc
stock = np.array(list(accum(transactions)))
return stock
for your input np.array([4,8,-2,9,6,0,3,-6])
it returns
array([ 4, 12, 10, 19, 25, 0, 3, -3])
I assume you mean you want to separate the list at every zero?
from itertools import groupby
import numpy
def cumulativeStock(transactions):
#split list on item 0
groupby(transactions, lambda x: x == 0)
all_lists = [list(group) for k, group in groupby(transactions, lambda x: x == 0) if not k]
# cumulative the items
stock = []
for sep_list in all_lists:
for item in numpy.cumsum(sep_list):
stock.append(item)
return stock
print(cumulativeStock([4,8,-2,9,6,0,3,-6]))
Which will return:
[4, 12, 10, 19, 25, 3, -3]
I am trying to reduce a numpy array to a smaller size by taking the average of elements, for example taking the average of each 5x5 sub-array in a 100x100 array to create a 20x20 array. As I have huge data to manipulate, is there an efficient way to do that?
I have tried this for smaller arrays, so test it with yours:
import numpy as np
nbig = 100
nsmall = 20
big = np.arange(nbig * nbig).reshape([nbig, nbig]) # 100x100
small = big.reshape([nsmall, nbig//nsmall, nsmall, nbig//nsmall]).mean(3).mean(1)
An example with 6x6 -> 3x3:
nbig = 6
nsmall = 3
big = np.arange(36).reshape([6,6])
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
small = big.reshape([nsmall, nbig//nsmall, nsmall, nbig//nsmall]).mean(3).mean(1)
array([[ 3.5, 5.5, 7.5],
[ 15.5, 17.5, 19.5],
[ 27.5, 29.5, 31.5]])
This is pretty straightforward, although I feel like it could be faster:
from __future__ import division
import numpy as np
Norig = 100
Ndown = 20
step = Norig//Ndown
assert step == Norig/Ndown # ensure Ndown is an integer factor of Norig
x = np.arange(Norig*Norig).reshape((Norig,Norig)) #for testing
y = np.empty((Ndown,Ndown)) # for testing
for yr,xr in enumerate(np.arange(0,Norig,step)):
for yc,xc in enumerate(np.arange(0,Norig,step)):
y[yr,yc] = np.mean(x[xr:xr+step,xc:xc+step])
You might also find scipy.signal.decimate interesting. It applies a more sophisticated low-pass filter than simple averaging before downsampling the data, although you'd have to decimate one axis, then the other.
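A minimal sketch of that two-pass use of decimate (the FIR filter choice and the 100x100 test array are just illustrative):
import numpy as np
from scipy import signal
big = np.arange(100 * 100, dtype=float).reshape(100, 100)
# decimate by a factor of 5 along each axis in turn; this low-pass filters
# before downsampling, so it is not a plain block average
small = signal.decimate(signal.decimate(big, 5, ftype='fir', axis=0), 5, ftype='fir', axis=1)
print(small.shape)  # (20, 20)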
Average a 2D array over sub-arrays of size NxN:
height, width = data.shape
data = np.average(np.split(np.average(np.split(data, width // N, axis=1), axis=-1), height // N, axis=1), axis=-1)
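A self-contained usage sketch of that one-liner (the 100x100 array and N = 5 are just example values):
import numpy as np
N = 5
data = np.arange(100 * 100, dtype=float).reshape(100, 100)
height, width = data.shape
small = np.average(np.split(np.average(np.split(data, width // N, axis=1), axis=-1), height // N, axis=1), axis=-1)
print(small.shape)  # (20, 20)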
Note that eumiro's approach does not work for masked arrays, because .mean(3).mean(1) assumes that each mean along axis 3 was computed from the same number of values. If there are masked elements in your array, this assumption no longer holds. In that case, you have to keep track of the number of values used to compute .mean(3) and replace .mean(1) by a weighted mean, where the weights are the normalized numbers of values used to compute .mean(3).
Here is an example:
import numpy as np
def gridbox_mean_masked(data, Nbig, Nsmall):
# Reshape data
rshp = data.reshape([Nsmall, Nbig//Nsmall, Nsmall, Nbig//Nsmall])
# Compute mean along axis 3 and remember the number of values each mean
# was computed from
mean3 = rshp.mean(3)
count3 = rshp.count(3)
# Compute weighted mean along axis 1
mean1 = (count3*mean3).sum(1)/count3.sum(1)
return mean1
# Define test data
big = np.ma.array([[1, 1, 2],
[1, 1, 1],
[1, 1, 1]])
big.mask = [[0, 0, 0],
[0, 0, 1],
[0, 0, 0]]
Nbig = 3
Nsmall = 1
# Compute gridbox mean
print(gridbox_mean_masked(big, Nbig, Nsmall))
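For this small example the function returns a 1x1 array containing 1.125, i.e. the mean of the eight unmasked values.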