I'm facing a problem with vectorizing a function so that it applies efficiently to a numpy array.
My program's inputs:
A pos_part 2D array with Nb_particles rows and 3 columns (basically x, y, z coordinates; only z is relevant for the part that bothers me). Nb_particles can be up to several hundred thousand.
A prop_part 1D array with Nb_particles values. This part I have covered; it is created with some nice numpy functions, and here I just use a basic distribution that resembles the real values.
A z_distances 1D array, a simple np.arange between z=0 and z=z_max.
Then comes the calculation that takes time, because I can't find a way to do things properly with only numpy array operations. What I want to do is:
For every distance z_i in z_distances, sum all values from prop_part whose corresponding particle coordinate satisfies z_particle < z_i. This returns a 1D array the same length as z_distances.
My ideas so far:
Version 0: a for loop with enumerate and np.where to retrieve the indices of the values I need to sum. Obviously quite slow.
Version 1: a mask on a new array (a combination of z coordinates and particle properties), then summing the masked array. Seems better than v0.
Version 2: another mask and np.vectorize, although I understand it's not efficient since vectorize is basically a for loop. Still seems better than v0.
Version 3: I'm trying to use a mask inside a function that I can apply directly to z_distances, but it's not working so far.
So, here I am. There may be something to do with a sort and a cumulative sum, but I don't know how to do it, so any help would be greatly appreciated. Please find the code below to make things clearer.
Thanks in advance.
import numpy as np
import time
import matplotlib.pyplot as plt
# Creation of particles' positions
Nb_part = 150_000
pos_part = 10*np.random.rand(Nb_part,3)
pos_part[:,0] = pos_part[:,1] = 0
# useful property creation
beta = 1/1.5
prop_part = (1/beta)*np.exp(-pos_part[:,2]/beta)
z_distances = np.arange(0,10,0.1)
#my version 0
t0=time.time()
result = np.empty(len(z_distances))
for index_dist, val_dist in enumerate(z_distances):
    positions = np.where(pos_part[:,2]<val_dist)[0]
    result[index_dist] = sum(prop_part[i] for i in positions)
print("v0 :",time.time()-t0)
#A graph to help understand
plt.figure()
plt.plot(z_distances,result, c="red")
plt.ylabel("Sum of particles' usefull property for particles with z-pos<d")
plt.xlabel("d")
#version 1 ??
t1=time.time()
combi = np.column_stack((pos_part[:,2],prop_part))
result2 = np.empty(len(z_distances))
for index_dist, val_dist in enumerate(z_distances):
    mask = (combi[:,0]<val_dist)
    result2[index_dist]=sum(combi[:,1][mask])
print("v1 :",time.time()-t1)
plt.plot(z_distances,result2, c="blue")
#version 2
t2=time.time()
def themask(a):
    mask = (combi[:,0]<a)
    return sum(combi[:,1][mask])
thefunc = np.vectorize(themask)
result3 = thefunc(z_distances)
print("v2 :",time.time()-t2)
plt.plot(z_distances,result3, c="green")
### This does not work so far
# version 3
# =============================
# t3=time.time()
# def thesum(a):
#     mask = combi[combi[:,0]<a]
#     return sum(mask[:,1])
# result4 = thesum(z_distances)
# print("v3 :",time.time()-t3)
# =============================
You can get a lot more performance by writing your first version entirely in numpy. Replace Python's sum with np.sum. Instead of the for i in positions generator expression, simply pass the boolean positions mask you are creating anyway.
Indeed, the np.where is not necessary and my best version looks like:
#my version 0
t0=time.time()
result = np.empty(len(z_distances))
for index_dist, val_dist in enumerate(z_distances):
    positions = pos_part[:, 2] < val_dist
    result[index_dist] = np.sum(prop_part[positions])
print("v0 :",time.time()-t0)
# out: v0 : 0.06322097778320312
You can get a bit faster if z_distances is very long by using numba.
Running calc for the first time usually incurs some compilation overhead, which we can get rid of by running the function once on a small subset of z_distances.
The below code achieves roughly a factor of two speedup over pure numpy on my laptop.
import numba as nb
@nb.njit(parallel=True)
def calc(result, z_distances):
    n = z_distances.shape[0]
    for ii in nb.prange(n):
        pos = pos_part[:, 2] < z_distances[ii]
        result[ii] = np.sum(prop_part[pos])
    return result
result4 = np.zeros_like(result)
# warm up the JIT compilation on a small subset so it is not timed below
calc(result4, z_distances[:10])
t3 = time.time()
result4 = calc(result4, z_distances)
print("v3 :", time.time()-t3)
plt.plot(z_distances, result4)
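For completeness, the sort-and-cumulative-sum idea hinted at in the question also works and avoids the per-distance loop entirely. Below is a minimal sketch (my addition, not part of the original answer), reusing pos_part, prop_part and z_distances from above; np.searchsorted counts how many sorted z values fall strictly below each distance.
# sort particles by z and accumulate their property values
order = np.argsort(pos_part[:, 2])
z_sorted = pos_part[order, 2]
prop_cumsum = np.cumsum(prop_part[order])
# number of particles with z < z_i, for every z_i at once
counts = np.searchsorted(z_sorted, z_distances, side='left')
# a count of 0 must map to a sum of 0, hence the prepended zero
result5 = np.concatenate(([0.0], prop_cumsum))[counts]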
The page https://pypi.python.org/pypi/fancyimpute has the line
# Instead of solving the nuclear norm objective directly, instead
# induce sparsity using singular value thresholding
X_filled_softimpute = SoftImpute().complete(X_incomplete_normalized)
which kind of suggests that I need to normalize the input data. However, I did not find any details on the internet about what exactly is meant by that. Do I have to normalize my data beforehand, and what exactly is expected?
Yes, you should definitely normalize the data. Consider the following example:
from fancyimpute import SoftImpute
import numpy as np
v=np.random.normal(100,0.5,(5,3))
v[2,1:3]=np.nan
v[0,0]=np.nan
v[3,0]=np.nan
SoftImpute().complete(v)
The result is
array([[ 81.78428587, 99.69638878, 100.67626769],
[ 99.82026281, 100.09077899, 99.50273223],
[ 99.70946085, 70.98619873, 69.57668189],
[ 81.82898539, 99.66269922, 100.95263318],
[ 99.14285815, 100.10809651, 99.73870089]])
Note that the places where I put nan are completely off. However, if instead you run
from fancyimpute import SoftImpute
import numpy as np
v=np.random.normal(0,1,(5,3))
v[2,1:3]=np.nan
v[0,0]=np.nan
v[3,0]=np.nan
SoftImpute().complete(v)
(same code as before, the only difference is that v is normalized) you get the following reasonable result:
array([[ 0.07705556, -0.53449412, -0.20081351],
[ 0.9709198 , -1.19890962, -0.25176222],
[ 0.41839224, -0.11786451, 0.03231515],
[ 0.21374759, -0.66986997, 0.78565414],
[ 0.30004524, 1.28055845, 0.58625942]])
Thus, when you are using SoftImpute, don't forget to normalize your data (you can do that by making the mean of every column 0 and its standard deviation 1).
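A minimal sketch of that normalization step (my own addition, not part of the original answer), using the same v as in the first example; the imputed values are mapped back to the original scale afterwards:
import numpy as np
from fancyimpute import SoftImpute

v = np.random.normal(100, 0.5, (5, 3))
v[2, 1:3] = np.nan
v[0, 0] = np.nan
v[3, 0] = np.nan
# column-wise normalization that ignores the missing entries
col_mean = np.nanmean(v, axis=0)
col_std = np.nanstd(v, axis=0)
v_norm = (v - col_mean) / col_std
# impute on the normalized data, then undo the normalization
v_filled = SoftImpute().complete(v_norm) * col_std + col_mean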
The image (test.tif) is attached.
The np.nan values are the whitest regions.
How can I fill those whitest regions using some gap-filling algorithm that uses values from the neighbours?
from scipy import ndimage
data = ndimage.imread('test.tif')
As others have suggested, scipy.interpolate can be used. However, it requires fairly extensive index manipulation to get this to work.
Complete example:
from pylab import *
import numpy
import scipy.ndimage
import scipy.interpolate
import pdb
data = scipy.ndimage.imread('data.png')
# a boolean array of shape (height, width): True where values are valid (non-missing), False where they are missing
mask = ~( (data[:,:,0] == 255) & (data[:,:,1] == 255) & (data[:,:,2] == 255) )
# array of (number of points, 2) containing the x,y coordinates of the valid values only
xx, yy = numpy.meshgrid(numpy.arange(data.shape[1]), numpy.arange(data.shape[0]))
xym = numpy.vstack( (numpy.ravel(xx[mask]), numpy.ravel(yy[mask])) ).T
# the valid values in the first, second, third color channel, as 1D arrays (in the same order as their coordinates in xym)
data0 = numpy.ravel( data[:,:,0][mask] )
data1 = numpy.ravel( data[:,:,1][mask] )
data2 = numpy.ravel( data[:,:,2][mask] )
# three separate interpolators for the separate color channels
interp0 = scipy.interpolate.NearestNDInterpolator( xym, data0 )
interp1 = scipy.interpolate.NearestNDInterpolator( xym, data1 )
interp2 = scipy.interpolate.NearestNDInterpolator( xym, data2 )
# interpolate the whole image, one color channel at a time
result0 = interp0(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
result1 = interp1(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
result2 = interp2(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
# combine them into an output image
result = numpy.dstack( (result0, result1, result2) )
imshow(result)
show()
Output:
This passes to the interpolator all values we have, not just the ones next to the missing values (which may be somewhat inefficient). It also interpolates every point in the output, not just the missing values (which is extremely inefficient). A better way is to interpolate just the missing values, and then patch them into the original image. This is just a quick working example to get started :)
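A minimal sketch of that "missing values only" variant for one color channel (my own addition, assuming the xym, mask, xx, yy and data0 arrays defined above):
# coordinates of the missing pixels only (mask is True where data is valid)
missing = ~mask
xym_missing = numpy.vstack((xx[missing], yy[missing])).T
# fit on the valid pixels, evaluate only at the gaps, patch into the original
patched0 = data[:, :, 0].copy()
patched0[missing] = scipy.interpolate.NearestNDInterpolator(xym, data0)(xym_missing)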
I think viena's question is more related to an inpainting problem.
Here are some ideas:
In order to fill the gaps in B/W images you can use some filling algorithm like scipy.ndimage.morphology.binary_fill_holes. But you have a gray level image, so you can't use it.
I suppose that you don't want to use a complex inpainting algorithm. My first suggestion is: don't try to use the nearest gray value (you don't know the real value of the NaN pixels); using the nearest value will give a dirty result. Instead, I would suggest filling the gaps with some other value (e.g. the mean of the column). You can do that without coding it yourself by using scikit-learn:
Source:
>>> import numpy as np
>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(strategy="mean")
>>> a = np.random.random((5,5))
>>> a[(1,4,0,3),(2,4,2,0)] = np.nan
>>> a
array([[ 0.77473361, 0.62987193, nan, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, nan, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ nan, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, nan]])
>>> a = imp.fit_transform(a)
>>> a
array([[ 0.77473361, 0.62987193, 0.24346087, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, 0.24346087, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ 0.51259188, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, 0.30317394]])
The dirty solution that uses the nearest values could look like this (a sketch follows the list):
1) Find the perimeter points of the NaN regions
2) Compute all the distances between the NaN points and the perimeter
3) Replace each NaN with the gray value of its nearest perimeter point
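A sketch of how scipy can do all three steps in one call with a distance transform (my own addition; data here is a hypothetical 2D gray-level array with NaNs in the gaps):
import numpy as np
from scipy import ndimage

# hypothetical gray-level image with a NaN gap
data = np.random.rand(100, 100)
data[40:60, 40:60] = np.nan
# for every pixel, get the indices of the nearest non-NaN pixel
invalid = np.isnan(data)
idx = ndimage.distance_transform_edt(invalid, return_distances=False,
                                     return_indices=True)
# replace each NaN with the gray value of its nearest valid neighbour
filled = data[tuple(idx)]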
If you want values from the nearest neighbours, you could use the NearestNDInterpolator from scipy.interpolate. There are other interpolators you could consider as well.
You can locate the X,Y index values for the NaN values with:
import numpy as np
nan_locs = np.where(np.isnan(data))
There are some other options for the interpolation as well. One option is to replace NaN values with the results of a median filter (but your areas are kind of large for this). Another option might be grayscale dilation. The correct interpolation depends on your end domain.
If you haven't used a SciPy ND interpolator before, you'll need to provide X, Y, and value data to fit the interpolator to, and then X and Y data for the points to interpolate at. You can do this using the np.where example above as a template.
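A minimal sketch of that fit-then-evaluate workflow on a single band (my own addition; data is a hypothetical array standing in for test.tif):
import numpy as np
import scipy.interpolate

# hypothetical single-band image with a NaN gap (stand-in for test.tif)
data = np.random.rand(200, 200)
data[80:120, 80:120] = np.nan
# fit the interpolator on the valid pixels
yy, xx = np.mgrid[0:data.shape[0], 0:data.shape[1]]
valid = ~np.isnan(data)
interp = scipy.interpolate.NearestNDInterpolator(
    np.column_stack((xx[valid], yy[valid])), data[valid])
# evaluate only at the NaN locations found with np.where
nan_y, nan_x = np.where(np.isnan(data))
filled = data.copy()
filled[nan_y, nan_x] = interp(np.column_stack((nan_x, nan_y)))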
OpenCV has some image in-painting algorithms that you could use. You just need to provide a binary mask which indicates which pixels should be in-painted.
import cv2
import numpy as np
from scipy import ndimage

data = ndimage.imread("test.tif")
# cv2.inpaint expects an 8-bit image and an 8-bit single-channel mask,
# so (assuming the pixel values fit in 8 bits) replace the NaNs and cast
mask = np.isnan(data).astype(np.uint8)
img = np.nan_to_num(data).astype(np.uint8)
inpainted_img = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
Variance image in GDAL
I want a local variance image with a 3x3 window of a geospatial raster image using Python. My approach so far was to read in the raster band as an array, then use matrix notation to run a moving window and write the array into a new raster image. This approach worked well for a high-pass filter, as described in this tutorial: http://www.gis.usu.edu/~chrisg/python/2009/lectures/ospy_slides6.pdf
Then I tried to calculate the variance with several approaches, the last one using numpy (as np), but I just get a gray image with the same value everywhere.
I am open to any kind of solution. If it gives me the average local variance in the end, that would be even better.
cols = srcDS.RasterXSize
rows = srcDS.RasterYSize
#read in as array
data = srcBand.ReadAsArray(0, 0, cols, rows).astype(int)
#calculate the variance for a 3x3 window
outVariance = np.zeros((rows, cols), float)
outVariance[1:rows-1,1:cols-1] = np.var([(data[0:rows-2,0:cols-2]),
(data[0:rows-2,1:cols-1]),
(data[0:rows-2,2:cols] ),
(data[1:rows-1,0:cols-2]),
(data[1:rows-1,1:cols-1]),
(data[1:rows-1,2:cols] ),
(data[2:rows,0:cols-2] ),
(data[2:rows,1:cols-1] ),
(data[2:rows,2:cols] )])
#output
outDS = driver.Create(outFN, cols, rows, 1, GDT_Float32)
outDS.SetGeoTransform(srcDS.GetGeoTransform())
outDS.SetProjection(srcDS.GetProjection())
outBand = outDS.GetRasterBand(1)
outBand.WriteArray(outVariance,0,0)
...
You could try SciPy; it has a function for running local filters on an array.
from scipy import ndimage
outVariance = ndimage.generic_filter(data, np.var, size=3)
It has a 'mode=' keyword for how the edges should be handled.
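For example, to treat everything outside the image as a constant zero instead of the default reflection (just an illustration of the keyword, not from the original answer, reusing the data array from above):
outVariance = ndimage.generic_filter(data, np.var, size=3,
                                     mode='constant', cval=0.0)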
edit:
You can test it yourself, declare a 3x3 array:
a = np.random.rand(3,3)
a
[[ 0.01869967 0.14037373 0.32960675]
[ 0.17213158 0.35287243 0.13498175]
[ 0.29511881 0.46387688 0.89359801]]
For a 3x3 window, the variance of the center cell of the array will simply be:
print np.var(a)
0.058884734425985602
That value should be equal to the center cell of the array returned by SciPy:
print ndimage.generic_filter(a, np.var, size=3)
print ndimage.generic_filter(a, np.var, size=(3,3))
print ndimage.generic_filter(a, np.var, footprint=np.ones((3,3)))
[[ 0.01127325 0.01465338 0.00959321]
[ 0.02001052 0.05888473 0.07897385]
[ 0.00978547 0.06966683 0.09633447]]
Note that all other values in the array are 'edge-values' so the result depends on how Scipy handles the edges. It defaults to mode='reflect'.
See the documentation for more detailed information:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.generic_filter.html
Simpler solution and also faster: use ndimage.uniform_filter and the "variance trick" explained here: http://imagej.net/Integral_Image_Filters (the variance is the mean of the squares minus the square of the mean).
import numpy as np
from scipy import ndimage
rows, cols = 500, 500
win_rows, win_cols = 5, 5
img = np.random.rand(rows, cols)
win_mean = ndimage.uniform_filter(img,(win_rows,win_cols))
win_sqr_mean = ndimage.uniform_filter(img**2,(win_rows,win_cols))
win_var = win_sqr_mean - win_mean**2
The generic_filter is about 40 times slower than the strides-based approach...
Is there an easy way to calculate a running variance filter on an image using Python/NumPy/Scipy? By running variance image I mean the result of calculating sum((I - mean(I))^2)/nPixels for each sub-window I in the image.
Since the images are quite large (12000x12000 pixels), I want to avoid the overhead of converting the arrays between formats just to be able to use a different library and then convert back.
I guess I could do this manually by finding the mean using something like
kernel = np.ones((winSize, winSize))/winSize**2
image_mean = scipy.ndimage.convolve(image, kernel)
diff = (image - image_mean)**2
# Calculate sum over winSize*winSize sub-images
# Subsample result
but it would be much nicer to have something like the stdfilt-function from Matlab.
Can anyone point me in the direction of a library that has this functionality AND supports numpy arrays, or hint at/provide a way to do this in NumPy/SciPy?
Simpler solution and also faster: use SciPy's ndimage.uniform_filter
import numpy as np
from scipy import ndimage
rows, cols = 500, 500
win_rows, win_cols = 5, 5
img = np.random.rand(rows, cols)
win_mean = ndimage.uniform_filter(img, (win_rows, win_cols))
win_sqr_mean = ndimage.uniform_filter(img**2, (win_rows, win_cols))
win_var = win_sqr_mean - win_mean**2
The "stride trick" is beautiful trick, but 4 slower and not that readable.
the generic_filter is 20 times slower than the strides...
You can use numpy.lib.stride_tricks.as_strided to get a windowed view of your image:
import numpy as np
from numpy.lib.stride_tricks import as_strided
rows, cols = 500, 500
win_rows, win_cols = 5, 5
img = np.random.rand(rows, cols)
win_img = as_strided(img, shape=(rows-win_rows+1, cols-win_cols+1,
win_rows, win_cols),
strides=img.strides*2)
And now win_img[i, j] is the (win_rows, win_cols) array with the top left corner at position [i, j]:
>>> img[100:105, 100:105]
array([[ 0.34150754, 0.17888323, 0.67222354, 0.9020784 , 0.48826682],
[ 0.68451774, 0.14887515, 0.44892615, 0.33352743, 0.22090103],
[ 0.41114758, 0.82608407, 0.77190533, 0.42830363, 0.57300759],
[ 0.68435626, 0.94874394, 0.55238567, 0.40367885, 0.42955156],
[ 0.59359203, 0.62237553, 0.58428725, 0.58608119, 0.29157555]])
>>> win_img[100,100]
array([[ 0.34150754, 0.17888323, 0.67222354, 0.9020784 , 0.48826682],
[ 0.68451774, 0.14887515, 0.44892615, 0.33352743, 0.22090103],
[ 0.41114758, 0.82608407, 0.77190533, 0.42830363, 0.57300759],
[ 0.68435626, 0.94874394, 0.55238567, 0.40367885, 0.42955156],
[ 0.59359203, 0.62237553, 0.58428725, 0.58608119, 0.29157555]])
You have to be careful, though, not to convert your windowed view of the image into a windowed copy of it: in my example that would require 25 times more storage. I believe numpy 1.7 lets you select more than one axis, so you could then simply do:
>>> np.var(win_img, axis=(-1, -2))
I am stuck with numpy 1.6.2, so I cannot test that. The other option, which may fail with not-so-large windows, would be to do, if I remember my math correctly:
>>> win_mean = np.sum(np.sum(win_img, axis=-1), axis=-1)/win_rows/win_cols
>>> win_sqr_mean = np.sum(np.sum(win_img**2, axis=-1), axis=-1)/win_rows/win_cols
>>> win_var = win_sqr_mean - win_mean**2
And now win_var is an array of shape
>>> win_var.shape
(496, 496)
and win_var[i, j] holds the variance of the (5, 5) window with top left corner at [i, j].
After a bit of optimization we came up with this function for a generic 3D image:
import numpy
from numpy.lib.stride_tricks import as_strided

def variance_filter(img, VAR_FILTER_SIZE):
    # window edge length; VAR_FILTER_SIZE is the half-width of the window
    WIN_SIZE = (2 * VAR_FILTER_SIZE) + 1
    if not VAR_FILTER_SIZE % 2 == 1:
        print('Warning, VAR_FILTER_SIZE must be an ODD integer')
    # hack -- this could probably be an input to the function but Alessandro is lazy
    WIN_DIMS = [WIN_SIZE, WIN_SIZE, WIN_SIZE]
    # Check that the input is a 3D image.
    if len(img.shape) != 3:
        print("\t variance_filter: Are you sure that you passed me a 3D image?")
        return -1
    else:
        DIMS = img.shape
        # Set up a windowed view on the data... this has a border removed compared to img
        img_strided = as_strided(
            img,
            shape=(DIMS[0]-WIN_DIMS[0]+1, DIMS[1]-WIN_DIMS[1]+1, DIMS[2]-WIN_DIMS[2]+1,
                   WIN_DIMS[0], WIN_DIMS[1], WIN_DIMS[2]),
            strides=img.strides*2)
        # Calculate the variance, vectorially
        win_mean = numpy.sum(numpy.sum(numpy.sum(img_strided, axis=-1), axis=-1), axis=-1) / (WIN_DIMS[0]*WIN_DIMS[1]*WIN_DIMS[2])
        # As per http://en.wikipedia.org/wiki/Variance, we remove the mean from every window,
        # then square the result.
        # Cast to 64-bit float inside, because the numbers (at least for our images) get pretty big.
        win_var = numpy.sum(numpy.sum(numpy.sum(((img_strided.T.astype('<f8') - win_mean.T.astype('<f8'))**2).T, axis=-1), axis=-1), axis=-1) / (WIN_DIMS[0]*WIN_DIMS[1]*WIN_DIMS[2])
        # Prepare an output image of the right size, to restore the border removed by the windowed view
        out_img = numpy.zeros(DIMS, dtype='<f8')
        # copy the interior result into the output, leaving the borders at zero
        out_img[WIN_DIMS[0]//2:DIMS[0]-WIN_DIMS[0]+1+WIN_DIMS[0]//2,
                WIN_DIMS[1]//2:DIMS[1]-WIN_DIMS[1]+1+WIN_DIMS[1]//2,
                WIN_DIMS[2]//2:DIMS[2]-WIN_DIMS[2]+1+WIN_DIMS[2]//2] = win_var
        # output
        return out_img.astype('>f4')
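A hypothetical usage example (my addition), just to show the expected call on a small random volume:
vol = numpy.random.rand(40, 40, 40)
# VAR_FILTER_SIZE=3 gives a (2*3+1) = 7x7x7 window
var_img = variance_filter(vol, VAR_FILTER_SIZE=3)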
You can use scipy.ndimage.generic_filter. I can't test against Matlab's stdfilt, but perhaps this gives you what you're looking for:
import numpy as np
import scipy.ndimage as ndimage
subs = 10 # this is the size of the (square) sub-windows
img = np.random.rand(500, 500)
img_std = ndimage.filters.generic_filter(img, np.std, size=subs)
You can make the sub-windows of arbitrary sizes using the footprint keyword. See this question for an example.
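For instance, a hypothetical cross-shaped footprint instead of a full square window might look like this (my own illustration, reusing img and the imports from the snippet above):
footprint = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)
img_std_cross = ndimage.filters.generic_filter(img, np.std, footprint=footprint)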