Numerical operations on arrays while reading from CSV file - python

I am trying to do a few numerical operations on a few arrays while reading some values from CSV files.
I have the coordinates of a receiver which is fixed and I read coordinates of the heliostats from a CSV file which track the Sun.
The coordinates of the receiver:
import numpy as np

# co-ordinates of Receiver
XT = 0       # X co-ordinate of Receiver
YT = 0       # Y co-ordinate of Receiver
ZT = 207.724 # Z co-ordinate of Receiver, this is the height of the tower
A = np.array(([XT], [YT], [ZT]))
print(A, " are the co-ordinates of the target i.e. the receiver")
The coordinates of the ten heliostats:
I read this data from a CSV file with the following contents:
#X,Y,Z
#-1269.56,-1359.2,5.7
#1521.28,-68.0507,5.7
#-13.6163,1220.79,5.7
#-1388.76,547.708,5.7
#1551.75,-82.2342,5.7
#405.92,-1853.83,5.7
#1473.43,-881.703,5.7
#1291.73,478.988,5.7
#539.027,1095.43,5.7
#-1648.13,-73.7251,5.7
I read the coordinates of the CSV as follows:
import csv

# Reading data from csv file
with open('Heliostat Field Layout Large heliostat.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    X = []
    Y = []
    Z = []
    for row in readCSV:
        X_coordinates = row[0]
        Y_coordinates = row[1]
        Z_coordinates = row[2]
        X.append(X_coordinates)
        Y.append(Y_coordinates)
        Z.append(Z_coordinates)

Xcoordinate = [float(X[c]) for c in range(1, len(X))]
Ycoordinate = [float(Y[c]) for c in range(1, len(Y))]
Zcoordinate = [float(Z[c]) for c in range(1, len(Z))]
Now, when I try to print the co-ordinates of the ten heliostats, I get three big arrays with all Xcoordinate, Ycoordinate and Zcoordinate grouped into one instead of ten different outputs.
[[[-1269.56 1521.28 -13.6163 -1388.76 1551.75 405.92 1473.43
1291.73 539.027 -1648.13 ]]
[[-1359.2 -68.0507 1220.79 547.708 -82.2342 -1853.83
-881.703 478.988 1095.43 -73.7251]]
[[ 5.7 5.7 5.7 5.7 5.7 5.7 5.7
5.7 5.7 5.7 ]]] are the co-ordinates of the heliostats
I used:
B = np.array(([Xcoordinate],[Ycoordinate],[Zcoordinate]))
print(B," are the co-ordinates of the heliostats")
What is the mistake?
Further, I would like to have an array containing B - A,
for which I use:
#T1 = matrix(A)- matrix(B)
#print(T1," is the target vector for heliostat 1, T1")
How should I do a numerical operation on arrays A and B? I tried a matrix operation here. Is that wrong?

Your code is correct
The following output is the way numpy arrays are displayed.
[[-1359.2 -68.0507 1220.79 547.708 -82.2342 -1853.83
-881.703 478.988 1095.43 -73.7251]]
Despite the illusion that the values are stuck together, they are perfectly distinct in the array. You can access a single value with
print(B[1, 0, 0]) # print Y[0]
The subtraction of arrays A and B that you want to perform will work:
T1 = np.matrix(A)- np.matrix(B)
print(T1," is the target vector for heliostat 1, T1")
May I make two suggestions?
You can read a numpy array written as a matrix in a text file (which is the case here) with numpy's loadtxt function:
your_file = 'Heliostat Field Layout Large heliostat.csv'
B = np.loadtxt(your_file, delimiter=',', skiprows=1)
The result will be a (10, 3) numpy array.
You can perform broadcasting operations directly on numpy arrays (so you don't need to convert them to matrices). You just need to be careful with the dimensions.
In your original script you just need to write:
T1 = A - B
If you get array B with loadtxt as suggested, you will get a (10, 3) array, while A is a (3, 1) array. The array B must first be transposed to a (3, 10) array:
B = B.T
T1 = A - B
EDIT: compute the norm of each 3D vector of T1
norm_T1 = np.sqrt( np.sum( np.array(T1)**2, axis=0 ) )
Note that in your code T1 is a matrix, so T1**2 is a matrix product. In order to compute sqrt( v[0]**2 + v[1]**2 + v[2]**2 ) for each vector v of T1, I first convert it to a numpy array.
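Putting the two suggestions together, a minimal end-to-end sketch might look like this (the file name and single header row are taken from the question; treating the loadtxt result as a (10, 3) array that needs transposing is an assumption about the file layout):
import numpy as np

# receiver co-ordinates, shape (3, 1)
A = np.array([[0.0], [0.0], [207.724]])

# heliostat co-ordinates: skip the header row, then transpose to (3, 10)
B = np.loadtxt('Heliostat Field Layout Large heliostat.csv', delimiter=',', skiprows=1)
B = B.T

T1 = A - B                               # broadcasting, shape (3, 10)
norm_T1 = np.sqrt(np.sum(T1**2, axis=0)) # length of each target vector
print(norm_T1)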

Related

how to restructure data from h5py written in python

I am reading functions from an existing file using the h5py library.
readFile = h5py.File('File', 'r')
Using readFile.keys(), I obtained the list of the functions stored in 'File'. One of these functions is the function phi. To print the function phi, I did
phi = numpy.array(readFile['phi'])[:,0,:,:]
In [:,0,:,:], the indices reflect how the data is stored: [blocks, z, y, x]. z = 0 because it is a 2D case. x is divided into 2 blocks, and y is divided into 2 blocks. Each x block is divided into nxb cells (x1, x2, ..., x20), and each y block into nyb cells. (nxb and nyb can also be obtained directly from the file using h5py, as they are also stored in the file. The domain of the data is also stored in the file and is called ['bounding box'].)
Then, the code for the grid will be:
nxb = numpy.array(readFile['integer scalars'])[0][1]
nyb = numpy.array(readFile['integer scalars'])[1][1]
X = numpy.zeros([block, nxb, nyb])
Y = numpy.zeros([block, nxb, nyb])
for block in range(block):
    x_min, x_max = numpy.array(readFile['bounding box'])[block,0,:]
    y_min, y_max = numpy.array(readFile['bounding box'])[block,1,:]
    X[block,:,:], Y[block,:,:] = numpy.meshgrid(numpy.linspace(x_min,x_max,nxb),
                                                numpy.linspace(y_min,y_max,nyb))
My question is that I am trying to restructure the data (see the figure). I want to bring the data of block 2 above the data of block 1, not next to it. This means that I need to create new coordinates I' and J' related to the old coordinates I and J. I tried this, but it is not working:
for i in range(X):
    for j in range(Y):
        i' = i - len(X[0:1,:,:]
        j' = j + len(Y[0:1,:,:]
        phi(i',j') = phi
When working with HDF5 data, it's important to understand your data schema before you start writing code. Here are my initial observations and suggestions.
Your question is a little hard to follow. (For example, you are using the term "functions" to describe HDF5 datasets.) HDF5 organizes data in datasets and groups. Your data of interest is in 2 datasets: 'phi' and 'integer scalars'.
You can simplify the code to access the datasets as Numpy arrays using the following:
with h5py.File('File', 'r') as readFile:
    # to get the axis dimensions for 'phi':
    print(f"Shape of Dataset phi: {readFile['phi'].shape}")
    phi_ds = readFile['phi']       # to get a dataset object
    phi_arr = readFile['phi'][()]  # to read the dataset as a numpy array
    # to get the axis dimensions for 'integer scalars'
    nxb, nyb = readFile['integer scalars'].shape
I don't understand what you mean by "blocks". Are you referring to the axis dimensions? Also, why are you using meshgrid? If you simply want to change dimensions, use Numpy's .reshape() method to change the axis dimensions of the Numpy array.
Here is a simple example that creates a 2x2 dataset, then reads it into a new array and reshapes it to 4x1. I think this is what you want to do. Change the values of a0 and a1 if you want to increase the size. The reshape operation will read the shape from the first array and reshape the new array to (N,1), where N is your nxb*nyb value.
import h5py
import numpy as np

with h5py.File('SO_72340647.h5', 'w') as h5f:
    a0, a1 = 2, 2
    arr = np.arange(a0*a1).reshape(a0, a1)
    h5f.create_dataset('ds_2x2', data=arr)

with h5py.File('SO_72340647.h5', 'r') as h5f:
    print(f"Shape of Dataset ds_2x2: {h5f['ds_2x2'].shape}")
    ds_arr = h5f['ds_2x2'][()]
    print(ds_arr)
    ds0, ds1 = ds_arr.shape
    new_arr = ds_arr.reshape(ds0*ds1, 1)
    print(f"Shape of new (reshaped) array: {new_arr.shape}")
    print(new_arr)
Note: h5py dataset objects "behave like" Numpy arrays. So, you frequently don't have to read into an array to use the data.
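For example, a minimal sketch of slicing the 'phi' dataset lazily (the dataset name comes from the question; the slice indices are just placeholders):
import h5py

with h5py.File('File', 'r') as readFile:
    # Only the requested block is read from disk, not the whole dataset.
    phi_block0 = readFile['phi'][0, 0, :, :]
    print(phi_block0.shape, phi_block0.dtype)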

Random access in a saved-on-disk numpy array

I have one big numpy array A of shape (2_000_000, 2000) of dtype float64, which takes 32 GB.
(or alternatively the same data split into 10 arrays of shape (200_000, 2000), it may be easier for serialization?).
How can we serialize it to disk such that we can have fast random read access to any part of the data?
More precisely, I need to be able to read ten thousand windows of shape (16, 2000) from A at random starting indexes i:
L = []
for i in range(10_000):
    i = random.randint(0, 2_000_000 - 16)
    window = A[i:i+16, :]  # window of A of shape (16, 2000) starting at a random index i
    L.append(window)

WINS = np.concatenate(L)  # shape (10_000, 16, 2000) of float64, ie: ~ 2.4 GB
Let's say I only have 8 GB of RAM available for this task; it's totally impossible to load the whole 32 GB of A in RAM.
How can we read such windows in a serialized-on-disk numpy array? (.h5 format or any other)
Note: The fact the reading is done at randomized starting indexes is important.
This example shows how you can use an HDF5 file for the process you describe.
First, create an HDF5 file with a dataset of shape (2_000_000, 2000) and dtype=float64 values. I used variables for the dimensions so you can tinker with it.
import numpy as np
import h5py
import random

h5_a0, h5_a1 = 2_000_000, 2_000

with h5py.File('SO_68206763.h5', 'w') as h5f:
    dset = h5f.create_dataset('test', shape=(h5_a0, h5_a1), dtype='float64')
    incr = 1_000
    a0 = h5_a0 // incr
    for i in range(incr):
        arr = np.random.random(a0*h5_a1).reshape(a0, h5_a1)
        dset[i*a0:i*a0+a0, :] = arr
    print(dset[-1, 0:10])  # quick dataset check of values in last row
Next, open the file in read mode, read 10_000 random array slices of shape (16,2_000) and append to the list L. At the end, convert the list to the array WINS. Note, by default the array will have 2 axes -- you need to use .reshape() if you want 3 axes per your comment (reshape also shown).
with h5py.File('SO_68206763.h5', 'r') as h5f:
    dset = h5f['test']
    L = []
    ds0, ds1 = dset.shape[0], dset.shape[1]
    for i in range(10_000):
        ir = random.randint(0, ds0 - 16)
        window = dset[ir:ir+16, :]  # window from dset of shape (16, 2000) starting at a random index ir
        L.append(window)

    WINS = np.concatenate(L)  # shape (160_000, 2_000) of float64
    print(WINS.shape, WINS.dtype)
    WINS = np.concatenate(L).reshape(10_000, 16, ds1)  # reshaped to (10_000, 16, 2_000) of float64
    print(WINS.shape, WINS.dtype)
The procedure above is not memory efficient. You wind up with 2 copies of the randomly sliced data: in both list L and array WINS. If memory is limited, this could be a problem. To avoid the intermediate copy, read the random slice of data directly into an array. Doing this simplifies the code and reduces the memory footprint. That method is shown below (WINS2 is a 2 axis array, and WINS3 is a 3 axis array).
with h5py.File('SO_68206763.h5', 'r') as h5f:
    dset = h5f['test']
    ds0, ds1 = dset.shape[0], dset.shape[1]
    WINS2 = np.empty((10_000*16, ds1))
    WINS3 = np.empty((10_000, 16, ds1))
    for i in range(10_000):
        ir = random.randint(0, ds0 - 16)
        WINS2[i*16:(i+1)*16, :] = dset[ir:ir+16, :]
        WINS3[i, :, :] = dset[ir:ir+16, :]
An alternative solution to h5py datasets that I tried and that works is using memmap, as suggested in @RyanPepper's comment.
Write the data as binary
import numpy as np

with open('a.bin', 'wb') as A:
    for f in range(1000):
        x = np.random.randn(10*2000).astype('float32').reshape(10, 2000)
        A.write(x.tobytes())
    A.flush()
Open later as memmap
A = np.memmap('a.bin', dtype='float32', mode='r').reshape((-1, 2000))
print(A.shape) # (10000, 2000)
print(A[1234:1234+16, :]) # window

Using Mann Kendall in python with a lot of data

I have a set of 46 years worth of rainfall data. It's in the form of 46 numpy arrays each with a shape of 145, 192, so each year is a different array of maximum rainfall data at each lat and lon coordinate in the given model.
I need to create a global map of tau values by doing an M-K test (Mann-Kendall) for each coordinate over the 46 years.
I'm still learning python, so I've been having trouble finding a way to go through all the data in a simple way that doesn't involve me making 27840 new arrays for each coordinate.
So far I've looked into how to use scipy.stats.kendalltau and using the definition from here: https://github.com/mps9506/Mann-Kendall-Trend
EDIT:
To clarify and add a little more detail, I need to perform a test for each coordinate and not just each file individually. For example, for the first M-K test, I would want my x=46 and I would want y=data1[0,0],data2[0,0],data3[0,0]...data46[0,0]. Then I would repeat this process for every single coordinate in each array. In total the M-K test would be done 27840 times and leave me with 27840 tau values that I can then plot on a global map.
EDIT 2:
I'm now running into a different problem. Going off of the suggested code, I have the following:
for i in range(145):
    for j in range(192):
        out[i,j] = mk_test(yrmax[:,i,j], alpha=0.05)
print out
I used numpy.stack to stack all 46 arrays into a single array (yrmax) with shape (46L, 145L, 192L). I've tested it out, and it calculates p and tau correctly if I change the code from out[i,j] to just out. However, doing this messes up the for loop so it only keeps the results from the last coordinate instead of all of them. And if I leave the code as it is above, I get the error: TypeError: list indices must be integers, not tuple
My first guess was that it has to do with mk_test and how the information is supposed to be returned in the definition. So I've tried altering the code from the link above to change how the data is returned, but I keep getting errors relating back to tuples. So now I'm not sure where it's going wrong and how to fix it.
EDIT 3:
One more clarification I thought I should add. I've already modified the definition in the link so it returns only the two number values I want for creating maps, p and z.
I don't think this is as big an ask as you may imagine. From your description it sounds like you don't actually want the scipy kendalltau, but the function in the repository you posted. Here is a little example I set up:
from time import time
import numpy as np
from mk_test import mk_test

data = np.array([np.random.rand(145, 192) for _ in range(46)])
mk_res = np.empty((145, 192), dtype=object)

start = time()
for i in range(145):
    for j in range(192):
        mk_res[i, j] = mk_test(data[:, i, j], alpha=0.05)
print(f'Elapsed Time: {time() - start} s')
Elapsed Time: 35.21990394592285 s
My system is a MacBook Pro 2.7 GHz Intel Core I7 with 16 GB Ram so nothing special.
Each entry in the mk_res array (shape 145, 192) corresponds to one of your coordinate points and contains an entry like so:
array(['no trend', 'False', '0.894546014835', '0.132554125342'], dtype='<U14')
One thing that might be useful would be to modify the code in mk_test.py to return all numerical values. So instead of 'no trend'/'positive'/'negative' you could return 0/1/-1, and 1/0 for True/False and then you wouldn't have to worry about the whole object array type. I don't know what kind of analysis you might want to do downstream but I imagine that would preemptively circumvent any headaches.
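As an illustration (not from the original answer), assuming mk_test returns its four values in the order shown in the sample entry above (trend, h, p, z), the string results could also be mapped to plain numeric arrays after the loop:
import numpy as np

# Hypothetical post-processing sketch; assumes each mk_res[i, j] holds (trend, h, p, z).
p_map = np.empty((145, 192))
z_map = np.empty((145, 192))
for i in range(145):
    for j in range(192):
        trend, h, p, z = mk_res[i, j]
        p_map[i, j] = float(p)
        z_map[i, j] = float(z)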
Thanks to the answers provided and some work I was able to work out a solution that I'll provide here for anyone else that needs to use the Mann-Kendall test for data analysis.
The first thing I needed to do was flatten the original array I had into a 1D array. I know there is probably an easier way to go about doing this, but I ultimately used the following code, based on the code Grr suggested.
x = 46
out1 = np.empty(x)
out = np.empty((0))
for i in range(145):
    for j in range(192):
        out1 = yrmax[:,i,j]
        out = np.append(out, out1, axis=0)
Then I reshaped the resulting array (out) as follows:
out2 = np.reshape(out,(27840,46))
I did this so my data would be in a format compatible with scipy.stats.kendalltau. 27840 is the total number of coordinates that will be on my map (i.e. it's just 145*192), and 46 is the number of years the data spans.
I then used the following loop, modified from Grr's code, to find Kendall's tau and its respective p-value at each latitude and longitude over the 46-year period.
x = range(46)
y = np.zeros((0))
for j in range(27840):
    b = sc.stats.kendalltau(x, out2[j,:])
    y = np.append(y, b, axis=0)
Finally, I reshaped the data one more time, as shown: newdata = np.reshape(y,(145,192,2)), so the final array is in a suitable format to be used to create a global map of both tau and p-values.
Thanks everyone for the assistance!
Depending on your situation, it might just be easiest to make the arrays.
You won't really need them all in memory at once (not that it sounds like a terrible amount of data). Something like this only has to deal with one "copied out" coordinate trend at once:
SIZE = (145, 192)

year_matrices = load_years()  # list of one 145x192 array per year
result_matrix = numpy.zeros(SIZE)

for x in range(SIZE[0]):
    for y in range(SIZE[1]):
        coord_trend = map(lambda d: d[x][y], year_matrices)
        result_matrix[x][y] = analyze_trend(coord_trend)

print result_matrix
Now, there are things like itertools.izip that could help you if you really want to avoid actually copying the data.
Here's a concrete example of how Python's "zip" might work with data like yours (although as if you'd used ndarray.flatten on each year):
year_arrays = [
    ['y0_coord0_val', 'y0_coord1_val', 'y0_coord2_val', 'y0_coord3_val'],
    ['y1_coord0_val', 'y1_coord1_val', 'y1_coord2_val', 'y1_coord3_val'],
    ['y2_coord0_val', 'y2_coord1_val', 'y2_coord2_val', 'y2_coord3_val'],
]
assert len(year_arrays) == 3
assert len(year_arrays[0]) == 4

coord_arrays = zip(*year_arrays)  # i.e. `zip(year_arrays[0], year_arrays[1], year_arrays[2])`
# original data is essentially transposed
assert len(coord_arrays) == 4
assert len(coord_arrays[0]) == 3
assert coord_arrays[0] == ('y0_coord0_val', 'y1_coord0_val', 'y2_coord0_val')
assert coord_arrays[1] == ('y0_coord1_val', 'y1_coord1_val', 'y2_coord1_val')
assert coord_arrays[2] == ('y0_coord2_val', 'y1_coord2_val', 'y2_coord2_val')
assert coord_arrays[3] == ('y0_coord3_val', 'y1_coord3_val', 'y2_coord3_val')

flat_result = map(analyze_trend, coord_arrays)
The example above still copies the data (and all at once, rather than a coordinate at a time!) but hopefully shows what's going on.
Now, if you replace zip with itertools.izip and map with itertools.imap, then the copies needn't occur: itertools wraps the original arrays and keeps track of where it should be fetching values from internally.
There's a catch, though: to take advantage of itertools you need to access the data only sequentially (i.e. through iteration). In your case, it looks like the code at https://github.com/mps9506/Mann-Kendall-Trend/blob/master/mk_test.py might not be compatible with that. (I haven't reviewed the algorithm itself to see if it could be.)
Also please note that in the example I've glossed over the numpy ndarray stuff and just shown flat coordinate arrays. It looks like numpy has some of its own options for handling this instead of itertools, e.g. this answer says "Taking the transpose of an array does not make a copy". Your question was somewhat general, so I've tried to give some general tips as to ways one might deal with larger data in Python.
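For example, a minimal numpy-based sketch of the same idea (here yrmax stands for the stacked (46, 145, 192) array from the question's edit, and analyze_trend is the placeholder function from the snippet above):
import numpy as np

# Sketch only: yrmax has shape (46, 145, 192); analyze_trend takes a 1-D time series.
n_years, n_lat, n_lon = yrmax.shape
series_view = yrmax.reshape(n_years, -1).T   # reshape and transpose return views, not copies
flat_result = [analyze_trend(series) for series in series_view]
result_matrix = np.array(flat_result).reshape(n_lat, n_lon)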
I ran into the same task and have managed to come up with a vectorized solution using numpy and scipy.
The formulas are the same as on this page: https://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm.
The trickiest part is to work out the adjustment for the tied values. I modified the code as in this answer to compute the number of tied values for each record, in a vectorized manner.
Below are the 2 functions:
import copy
import numpy as np
from scipy.stats import norm

def countTies(x):
    '''Count number of ties in rows of a 2D matrix

    Args:
        x (ndarray): 2d matrix.
    Returns:
        result (ndarray): 2d matrix with same shape as <x>. In each
            row, the number of ties are inserted at (not really) arbitrary
            locations.
            The locations of the tie numbers are not important, since
            they will be subsequently put into a formula of sum(t*(t-1)*(2t+5)).

    Inspired by: https://stackoverflow.com/a/24892274/2005415.
    '''
    if np.ndim(x) != 2:
        raise Exception("<x> should be 2D.")

    m, n = x.shape
    pad0 = np.zeros([m, 1]).astype('int')

    x = copy.deepcopy(x)
    x.sort(axis=1)
    diff = np.diff(x, axis=1)
    cated = np.concatenate([pad0, np.where(diff==0, 1, 0), pad0], axis=1)
    absdiff = np.abs(np.diff(cated, axis=1))

    rows, cols = np.where(absdiff==1)
    rows = rows.reshape(-1, 2)[:, 0]
    cols = cols.reshape(-1, 2)
    counts = np.diff(cols, axis=1)+1
    result = np.zeros(x.shape).astype('int')
    result[rows, cols[:,1]] = counts.flatten()

    return result

def MannKendallTrend2D(data, tails=2, axis=0, verbose=True):
    '''Vectorized Mann-Kendall tests on 2D matrix rows/columns

    Args:
        data (ndarray): 2d array with shape (m, n).
    Keyword Args:
        tails (int): 1 for 1-tail, 2 for 2-tail test.
        axis (int): 0: test trend in each column. 1: test trend in each
            row.
    Returns:
        z (ndarray): If <axis> = 0, 1d array with length <n>, standard scores
            corresponding to data in each row in <x>.
            If <axis> = 1, 1d array with length <m>, standard scores
            corresponding to data in each column in <x>.
        p (ndarray): p-values corresponding to <z>.
    '''
    if np.ndim(data) != 2:
        raise Exception("<data> should be 2D.")

    # always put records in rows and do M-K test on each row
    if axis == 0:
        data = data.T

    m, n = data.shape
    mask = np.triu(np.ones([n, n])).astype('int')
    mask = np.repeat(mask[None,...], m, axis=0)
    s = np.sign(data[:,None,:]-data[:,:,None]).astype('int')
    s = (s * mask).sum(axis=(1,2))

    #--------------------Count ties--------------------
    counts = countTies(data)
    tt = counts * (counts - 1) * (2*counts + 5)
    tt = tt.sum(axis=1)

    #-----------------Sample Gaussian-----------------
    var = (n * (n-1) * (2*n+5) - tt) / 18.
    eps = 1e-8  # avoid dividing by 0
    z = (s - np.sign(s)) / (np.sqrt(var) + eps)
    p = norm.cdf(z)
    p = np.where(p>0.5, 1-p, p)

    if tails == 2:
        p = p*2

    return z, p
I assume your data come in the layout of (time, latitude, longitude), and you are examining the temporal trend for each lat/lon cell.
To simulate this task, I synthesized a sample data array of shape (50, 145, 192). The 50 time points are taken from Example 5.9 of the book Wilks 2011, Statistical methods in the atmospheric sciences. And then I simply duplicated the same time series 27840 times to make it (50, 145, 192).
Below is the computation:
x = np.array([0.44,1.18,2.69,2.08,3.66,1.72,2.82,0.72,1.46,1.30,1.35,0.54,
              2.74,1.13,2.50,1.72,2.27,2.82,1.98,2.44,2.53,2.00,1.12,2.13,1.36,
              4.9,2.94,1.75,1.69,1.88,1.31,1.76,2.17,2.38,1.16,1.39,1.36,
              1.03,1.11,1.35,1.44,1.84,1.69,3.,1.36,6.37,4.55,0.52,0.87,1.51])

# create a big cube with shape: (T, Y, X)
arr = np.zeros([len(x), 145, 192])
for i in range(arr.shape[1]):
    for j in range(arr.shape[2]):
        arr[:, i, j] = x
print(arr.shape)

# re-arrange into tabular layout: (Y*X, T)
arr = np.transpose(arr, [1, 2, 0])
arr = arr.reshape(-1, len(x))
print(arr.shape)

import time
t1 = time.time()
z, p = MannKendallTrend2D(arr, tails=2, axis=1)
p = p.reshape(145, 192)
t2 = time.time()
print('time =', t2-t1)
The p-value for that sample time series is 0.63341565, which I have validated against the pymannkendall module result. Since arr contains merely duplicated copies of x, the resultant p is a 2d array of size (145, 192), with all 0.63341565.
And it took me only 1.28 seconds to compute that.
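For reference, a minimal sketch of the validation step mentioned above, assuming the pymannkendall package is installed and that its original_test function is the one used for the comparison:
import pymannkendall as mk

# Hypothetical check; res.p is the two-sided p-value reported by pymannkendall.
res = mk.original_test(x)
print(res.p)  # expected to be close to 0.63341565 for the sample series above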

Finding relative maximums of a 2-D numpy array

I have a 2-D numpy array that can be subdivided into 64 boxes (think of a chessboard).
The goal is a function that returns the position and value of the maximum in each box. Something like:
FindRefs(array) --> [(argmaxX00, argmaxY00, Max00), ...,(argmaxX63, argmaxY63, Max63)]
where argmaxXnn and argmaxYnn are the indexes of the whole array (not of the box), and Maxnn is the max value in each box. In other words,
Maxnn = array[argmaxYnn, argmaxXnn]
I've tried the obvious "nested-for" solution:
def FindRefs(array):
    Height, Width = array.shape
    plumx = []
    plumy = []
    lum = []
    w = int(Width/8)
    h = int(Height/8)
    for n in range(0,8):          # iterate over boxes
        x0 = n*w
        x1 = (n+1)*w
        for m in range(0,8):
            y0 = m*h
            y1 = (m+1)*h
            subflatind = array[y0:y1,x0:x1].argmax()  # flattened index within the box
            y, x = np.unravel_index(subflatind, (h, w))
            X = x0 + x
            Y = y0 + y
            lum.append(array[Y,X])
            plumx.append(X)
            plumy.append(Y)
    refs = []
    for pt in range(0,len(plumx)):
        ptx = plumx[pt]
        pty = plumy[pt]
        refs.append((ptx,pty,lum[pt]))
    return refs
It works, but is neither elegant nor efficient.
So I've tried this more Pythonic version:
def FindRefs(a):
    box = [(n*w, m*h) for n in range(0,8) for m in range(0,8)]
    flatinds = [a[b[1]:h+b[1], b[0]:w+b[0]].argmax() for b in box]
    unravels = np.unravel_index(flatinds, (h, w))
    ur = [(unravels[1][n], unravels[0][n]) for n in range(0,len(box))]
    absinds = [map(sum, zip(box[n], ur[n])) for n in range(0,len(box))]
    refs = [(absinds[n][0], absinds[n][1], a[absinds[n][1],absinds[n][0]]) for n in range(0,len(box))]
    return refs
It works fine but, to my surprise, is not more efficient than the previous version!
The question is: Is there a more clever way to do the task?
Note that efficiency matters, as I have many large arrays for processing.
Any clue is welcome. :)
Try this:
from numpy.lib.stride_tricks import as_strided as ast
import numpy as np

def FindRefs3(a):
    box = tuple(x//8 for x in a.shape)
    z = ast(a,
            shape=(8,8)+box,
            strides=(a.strides[0]*box[0], a.strides[1]*box[1])+a.strides)
    v3 = np.max(z, axis=-1)
    i3r = np.argmax(z, axis=-1)
    v2 = np.max(v3, axis=-1)
    i2 = np.argmax(v3, axis=-1)
    i2x = np.indices(i2.shape)
    i3 = i3r[np.ix_(*[np.arange(x) for x in i2.shape])+(i2,)]
    i3x = np.indices(i3.shape)
    ix0 = i2x[0]*box[0]+i2
    ix1 = i3x[1]*box[1]+i3
    return zip(np.ravel(ix0), np.ravel(ix1), np.ravel(v2))
Note that your first FindRefs reverses indices, so that for a tuple (i1,i2,v), a[i1,i2] won't return the right value, whereas a[i2,i1] will.
So here's what the code does:
It first calculates the dimensions that each box needs to have (box) given the size of your array. Note that this doesn't do any checking: you need to have an array that can be divided evenly into an 8 by 8 grid.
Then z with ast is the messiest bit. It takes the 2d array, and turns it into a 4d array. The 4d array has dimensions (8,8,box[0],box[1]), so it lets you choose which box you want (the first two axes) and then what position you want in the box (the next two). This lets us deal with all the boxes at once by doing operations on the last two axes.
v3 gives us the maximum values along the last axis: in other words, it contains the maximum of each column in each box. i3r contains the index of which row in the box contained that max value.
v2 takes the maximum of v3 along its own last axis, which is now dealing with rows in the box: it takes the column maxes, and finds the maximum of them, so that v2 is a 2d array containing the maximum value of each box. If all you wanted were the maximums, this is all you'd need.
i2 is the index of the column in the box that holds the maximum value.
Now we need to get the index of the row in the box... that's trickier. i3r contains the row index of the max of each column in the box, but we want the row for the specific column that's specified in i2. We do this by choosing an element from i3r using i2, which gives us i3.
At this point, i2 and i3 are 8 by 8 arrays containing the row and column indexes of the maximums relative to each box. We want the absolute indexes. So we create i2x and i3x (actually, this is pointless; we could just create one, as they are the same), which are just arrays of what the indexes for i2 and i3 are (0,1,2,...,8 etc in one dimension, and so on). We then multiply these by the box sizes, and add the relative max indexes, to get the absolute max indexes.
We then combine these to get the same output that you had. Note that if you keep them as arrays, though, instead of making tuples, it's much faster.
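As a cross-check, here is a minimal sketch of the same block-wise maximum search using plain reshape/swapaxes instead of as_strided (assuming, as above, that the array divides evenly into an 8 by 8 grid; this version returns tuples in the question's (x, y, value) order):
import numpy as np

def find_refs_reshape(a):
    H, W = a.shape
    h, w = H // 8, W // 8
    boxes = a.reshape(8, h, 8, w).swapaxes(1, 2)  # shape (8, 8, h, w): one box per (i, j)
    flat = boxes.reshape(8, 8, h * w)
    idx = flat.argmax(axis=-1)                    # flat index of the max inside each box
    by, bx = np.unravel_index(idx, (h, w))        # row/col of the max within its box
    gy = np.arange(8)[:, None] * h + by           # absolute row indices, shape (8, 8)
    gx = np.arange(8)[None, :] * w + bx           # absolute column indices, shape (8, 8)
    vals = a[gy, gx]
    return list(zip(gx.ravel(), gy.ravel(), vals.ravel()))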

Efficient way to find median value of a number of RGB images

I'm playing around with a script in Python where I want to find the median of a number of images of the same dimensions. That is, I want to take all (red, green and blue) pixels in position [x,y] and construct a new image with their median values.
My current method uses Python PIL (the imaging library), but it is quite slow! I would very much like to use the OpenCV (cv2) interface, since it loads every image directly as a numpy array. However, I keep getting indices wrong when stacking x images of dimension (2560,1920,3). Any help?
My current, inefficient code with PIL, is the following:
from PIL import Image, ImageChops, ImageDraw, ImageFilter
import sys, glob, math, shutil, time, os, errno, numpy, string
import cv
from os import *

inputs = ()
path = str(os.getcwd())
BGdummyy = 0

os.chdir(path)
for files in glob.glob("*.png"):
    inputs = inputs + (str(str(files)),)

BGdummy = 0
for file in inputs:
    BGdummy = BGdummy+1
    im = cv.LoadImage(file)
    cv.CvtColor(im, im, cv.CV_BGR2RGB)
    img = Image.fromstring("RGB", cv.GetSize(im), im.tostring())
    vars()["file"+str(BGdummy)] = img.load()

imgnew = Image.new("RGB", (2560,1920))
pixnew = imgnew.load()

for x in range(2560):
    for y in range(1920):
        R = []; G = []; B = []
        for z in range(len(inputs)):
            R.append(vars()["file"+str(z+1)][x,y][0])
            G.append(vars()["file"+str(z+1)][x,y][1])
            B.append(vars()["file"+str(z+1)][x,y][2])
        R = sorted(R)
        G = sorted(G)
        B = sorted(B)
        mid = int(len(inputs)/2.)
        Rnew = R[mid]
        Gnew = G[mid]
        Bnew = B[mid]
        pixnew[x,y] = (Rnew, Gnew, Bnew)
BGdummyy = BGdummyy+1

imgnew.save("NewBG.png")
I will demonstrate how to do it with 5 small arrays of size (3,3,3).
First I will create 5 arrays, then keep them in a list X. In your case you will keep your 30 images in this list. (I am doing it in a single line.)
X = [a,b,c,d,e] = [np.random.randint(0,255,(3,3,3)) for i in xrange(5)]
Next you flatten each image to a long single row. So earlier your image would be like
[R1G1B1 R2G2B2 R3G3B3,
R4G4B4 R5G5B5 R6G6B6,
R7G7B7 R8G8B8 R9G9B9]
This will change into [R1 G1 B1 R2 G2 B2 R3 G3 B3 ... R9 G9 B9]. Then you stack all these flattened images to form a big 2D array. In that array, you see, all the first red pixels come in the first column, and so on. Then you can simply apply np.median to that.
Y = np.vstack([x.ravel() for x in X])
I flattened each image and stacked them. In my case, Y is an array of size 5x27 (rows - number of images, columns - number of pixels in an image).
Now I find the median of this Y and reshape it to our original image shape:
Z = np.median(Y,axis = 0)
Z = np.uint8(Z.reshape(a.shape))
Done.
Just to make sure it is working fine, let's check the value of an arbitrary pixel, say Z[0,1,2]:
In [50]: G1 = [x[0,1,2] for x in X]
In [51]: G1
Out[51]: [225, 65, 26, 182, 51]
In [52]: Z[0,1,2]
Out[52]: 65.0
Yes, the data is correct.
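An equivalent sketch without the flatten/reshape round trip, assuming the images are already loaded as same-shaped numpy arrays in a list X (as above):
import numpy as np

# Stack along a new first axis, then take the per-pixel, per-channel median.
stack = np.stack(X)   # shape (n_images, height, width, 3)
Z = np.median(stack, axis=0).astype(np.uint8)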
