Perlin noise in Python's noise library

I have a problem with generating Perlin noise for my project. As I wanted to understand how to use the library properly, I tried to follow this page step by step: https://medium.com/@yvanscher/playing-with-perlin-noise-generating-realistic-archipelagos-b59f004d8401
In the first part, there is this code:
import noise
import numpy as np
from scipy.misc import toimage
shape = (1024,1024)
scale = 100.0
octaves = 6
persistence = 0.5
lacunarity = 2.0
world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=0)

toimage(world).show()
I copy-pasted it with a small change at the end (toimage is obsolete), so I have:
import noise
import numpy as np
from PIL import Image
shape = (1024,1024)
scale = 100
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0,100)
world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)

Image.fromarray(world, mode='L').show()
I tried a lot of different modes, but this noise is not even close to coherent noise. My result looks something like this (mode='L'). Could someone explain to me what I am doing wrong?

Here is the working code. I took the liberty of cleaning it up a little; see the comments for details. As a final piece of advice: when testing code, use matplotlib for visualization. Its imshow() function is way more robust than PIL.
import noise
import numpy as np
from PIL import Image
shape = (1024,1024)
scale = .5
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0,100)
world = np.zeros(shape)
# make coordinate grid on [0,1]^2
x_idx = np.linspace(0, 1, shape[0])
y_idx = np.linspace(0, 1, shape[1])
world_x, world_y = np.meshgrid(x_idx, y_idx)
# apply perlin noise, instead of np.vectorize, consider using itertools.starmap()
world = np.vectorize(noise.pnoise2)(world_x/scale,
                                    world_y/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)
# here was the error: one needs to normalize the image first. Could be done without copying the array, though
img = np.floor((world + .5) * 255).astype(np.uint8) # <- Normalize world first
Image.fromarray(img, mode='L').show()
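Following the matplotlib advice above, a minimal sketch (reusing the world array just computed) sidesteps the normalization question entirely, because imshow autoscales float input:

import matplotlib.pyplot as plt

plt.imshow(world, cmap='gray')  # imshow autoscales floats, no uint8 conversion needed
plt.colorbar()                  # handy for checking the actual value range
plt.show()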

If someone comes after me: with the noise library you should rather normalize with
img = np.floor((world + 1) * 127).astype(np.uint8)
This way there will not be any spots of abnormal colour, opposite to what they should be.
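To see which normalization a given parameter set actually needs, it can help to inspect the raw pnoise2 output range before converting; a small sketch, reusing world from above:

print(world.min(), world.max())  # pnoise2 values fall within [-1, 1]
# a range-proof alternative: stretch whatever range actually occurs onto 0-255
img = ((world - world.min()) / (world.max() - world.min()) * 255).astype(np.uint8)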

Related

Why am I getting this underflow error while slicing numpy ndarrays?

I'm trying to make a grid image with pyplot--specifically one that I can use as a "gallery" of a 2d array containing some smaller images of uniform size. I've mostly succeeded, but the blank image I used to put the grid over gets random noise on it. This happens even when I go down to a 4x4 input image :(
When I run this I get the error message
C:\Users\[me]\anaconda3\lib\site-packages\matplotlib\image.py:491:
RuntimeWarning: underflow encountered in true_divide
vrange /= ((a_max - a_min) / frac)
From the relatively orderly noise I thought it was going to be an overflow error, but I guess not... What's going on here, and how can I fix it?
import numpy as np
import matplotlib.pyplot as plt

def grid_binim(a):
    bweight = 1
    bcolor = .5
    #shift = 0
    plt.figure()
    plt.axis("off")
    n = a.shape[0]
    size = n*n + 2*n*bweight
    out = np.ndarray((size,size))
    for i in range(n):
        pixeli = n*i+(i)*bweight*2
        out[pixeli,:] = bcolor
        out[pixeli+n+1,:] = bcolor
        for j in range(n):
            pixelj = n*j+(j)*bweight*2
            out[pixeli:pixeli+n+1,pixelj] = bcolor
            out[pixeli:pixeli+n+1,pixelj+n+1] = bcolor
    plt.imshow(out,cmap=plt.cm.gray,vmin=0.0,vmax=1.0)

blank = np.zeros((4,4,4,4))
grid_binim(blank)
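A tentative observation on the "random noise": np.ndarray((size,size)) allocates the array without initializing it, so every pixel the loops don't explicitly write keeps whatever bytes happened to be in memory. Starting from an explicit constant gives a genuinely blank canvas (a sketch; pick whichever background value you want):

out = np.zeros((size, size))      # all-black background
# or, for a white background:
out = np.full((size, size), 1.0)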

Image reconstruction with compressed sensing

I'm trying to code a demonstration of compressed sensing for my final year project but am getting poor image reconstruction when using the Lasso algorithm. I've relied on the following as a reference: http://www.pyrunner.com/weblog/2016/05/26/compressed-sensing-python/
However my code has some differences:
I use scikit-learn to perform a lasso optimisation (basis pursuit) as opposed to using cvxpy to perform an l_1 minimisation with an equality constraint as in the article.
I construct psi differently/more simply; testing seems to show that it's correct.
I use a different package to read and write the image.
import numpy as np
import scipy.fftpack as spfft
import scipy.ndimage as spimg
import imageio
from sklearn.linear_model import Lasso
x_orig = imageio.imread('gt40.jpg', pilmode='L') # read in grayscale
x = spimg.zoom(x_orig, 0.2) #zoom for speed
ny,nx = x.shape
k = round(nx * ny * 0.5) #50% sample
ri = np.random.choice(nx * ny, k, replace=False)
y = x.T.flat[ri] #y is the measured sample
# y = np.expand_dims(y, axis=1) ---- this doesn't seem to make a difference, was presumably required with cvxpy
psi = spfft.idct(np.identity(nx*ny), norm='ortho', axis=0) #my construction of psi
# psi = np.kron(
#     spfft.idct(np.identity(nx), norm='ortho', axis=0),
#     spfft.idct(np.identity(ny), norm='ortho', axis=0)
# )
# psi = 2*np.random.random_sample((nx*ny,nx*ny)) - 1
theta = psi[ri,:] #equivalent to phi*psi
lasso = Lasso(alpha=0.001, max_iter=10000)
lasso.fit(theta, y)
s = np.array(lasso.coef_)
x_recovered = psi @ s
x_recovered = x_recovered.reshape(nx, ny).T
x_recovered_final = x_recovered.astype('uint8') #recovered image is float64 and has negative values..
imageio.imwrite('gt40_recovered.jpg', x_recovered_final)
Unfortunately I'm not allowed to post images yet so here is a link to the original zoomed image, the image recovered with lasso and the image recovered with cvxpy (described later):
https://imgur.com/a/LROSug6
As you can see, not only is the recovery poor but the image is completely corrupted - the colours seem to be negative and the detail from the 50% sample lost. I think I've managed to track down the problem to the Lasso regression - it returns a vector that, when inverse transformed, has values that are not necessarily in the 0-255 range expected for the image. So the conversion from dtype float64 to uint8 is rather random (e.g. -55 wraps around to 256-55=201).
Following this I tried swapping out lasso for the same optimisation as in the article (minimising the l_1 norm subject to theta*s=y using cvxpy):
import cvxpy as cvx
x_orig = imageio.imread('gt40.jpg', pilmode='L') # read in grayscale
x = spimg.zoom(x_orig, 0.2)
ny,nx = x.shape
k = round(nx * ny * 0.5)
ri = np.random.choice(nx * ny, k, replace=False)
y = x.T.flat[ri]
psi = spfft.idct(np.identity(nx*ny), norm='ortho', axis=0)
theta = psi[ri,:] #equivalent to phi*psi
#NEW CODE STARTS:
vx = cvx.Variable(nx * ny)
objective = cvx.Minimize(cvx.norm(vx, 1))
constraints = [theta @ vx == y]
prob = cvx.Problem(objective, constraints)
result = prob.solve(verbose=True)
s = np.array(vx.value).squeeze()
x_recovered = psi @ s
x_recovered = x_recovered.reshape(nx, ny).T
x_recovered_final = x_recovered.astype('uint8')
imageio.imwrite('gt40_recovered_altopt.jpg', x_recovered_final)
This took nearly 6 hours, but finally I got a somewhat satisfactory result. However, I would like to demonstrate lasso if possible. Any help in getting the lasso to return appropriate values, or in converting its result appropriately, would be very much appreciated.
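On the conversion point, a minimal sketch of a safer cast (assuming the goal is just to force the reconstruction back into display range; np.clip avoids the modular wrap-around of a bare astype):

x_recovered_final = np.clip(x_recovered, 0, 255).astype('uint8')  # clamp out-of-range values
# or map whatever range actually occurs onto 0-255:
lo, hi = x_recovered.min(), x_recovered.max()
x_recovered_final = ((x_recovered - lo) / (hi - lo) * 255).astype('uint8')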

Gaussian notch filter in Python

I'm trying to design a Gaussian notch filter in Python to remove periodic noise. I tried implementing the following formula:
Gaussian Notch Filter
And here is the code:
import math
import numpy as np

def gaussian_bandpass_filter(image):
    image_array = np.array(image)
    #Fourier Transform
    fourier_transform = np.fft.fftshift(np.fft.fft2(image_array))
    #Size of Image
    m = np.shape(fourier_transform)[0]
    n = np.shape(fourier_transform)[1]
    u = np.arange(m)
    v = np.arange(n)
    # Find the center
    u0 = int(m/2)
    v0 = int(n/2)
    # Bandwidth
    D0 = 10
    gaussian_filter = np.zeros(np.shape(fourier_transform))
    for x in u:
        for y in v:
            D1 = math.sqrt((x-m/2-u0)**2 + (y-n/2-v0)**2)
            D2 = math.sqrt((x-m/2+u0)**2 + (y-n/2+v0)**2)
            gaussian_filter[x][y] = 1 - math.exp(-0.5 * D1*D2/(D0**2))
    #Apply the filter
    fourier_transform = fourier_transform + gaussian_filter
    image_array = np.fft.ifft2(np.fft.ifftshift(fourier_transform))
    return image_array
this function is supposed to apply the Gaussian notch filter to an image and return the filtered image, but it doesn't seem to work. I don't know where I went wrong (maybe I didn't understand the formula correctly?), so if anyone could help me I would really appreciate it.
Edit:
As an example, here is a noisy image.
Using the existing gaussian_filter function in scipy.ndimage library, I get this, which is acceptable.
But my function returns this. (I'm using PIL.Image.fromarray function to convert array to image)
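For comparison, a hedged sketch of the usual textbook Gaussian notch-reject form: the filter should multiply the shifted spectrum rather than be added to it, and (u0, v0) is the offset of the noise spike from the spectrum centre, not the centre itself, so the centre should not be subtracted twice. Names and the notch location are illustrative:

import numpy as np

def gaussian_notch_reject(image_array, u0, v0, D0=10):
    # H(u,v) = 1 - exp(-D1*D2 / (2*D0**2)), with D1, D2 the distances to the
    # notch pair at +/-(u0, v0) around the spectrum centre
    M, N = image_array.shape
    u = np.arange(M)[:, None] - M/2
    v = np.arange(N)[None, :] - N/2
    D1 = np.sqrt((u - u0)**2 + (v - v0)**2)
    D2 = np.sqrt((u + u0)**2 + (v + v0)**2)
    H = 1 - np.exp(-0.5 * D1 * D2 / D0**2)
    F = np.fft.fftshift(np.fft.fft2(image_array))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))  # multiply, don't add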

Creating a 2D Gaussian random field from a given 2D variance

I've been trying to create a 2D map of blobs of matter (a Gaussian random field) using a variance I have calculated. This variance is a 2D array. I have tried using numpy.random.normal, since it allows for a 2D input of the variance, but it doesn't really create a map with the trend I expect from the input parameters. One of the important input constants, lambda_c, should manifest itself as the physical size (diameter) of the blobs. However, when I change my lambda_c, the size of the blobs barely changes, if at all. For example, if I set lambda_c = 40 parsecs, the map needs blobs that are 40 parsecs in diameter. A MWE to produce the map using my variance:
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib.pyplot import show, plot
import scipy.integrate as integrate
from scipy.interpolate import RectBivariateSpline
n = 300
c = 3e8
G = 6.67e-11
M_sun = 1.989e30
pc = 3.086e16 # parsec
Dds = 1097.07889283e6*pc
Ds = 1726.62069147e6*pc
Dd = 1259e6*pc
FOV_arcsec_original = 5.
FOV_arcmin = FOV_arcsec_original/60.
pix2rad = ((FOV_arcmin/60.)/float(n))*np.pi/180.
rad2pix = 1./pix2rad
x_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
y_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
X_pix,Y_pix = np.meshgrid(x_pix,y_pix)
conc = 10.
M = 1e13*M_sun
r_s = 18*1e3*pc
lambda_c = 40*pc ### The important parameter that doesn't seem to manifest itself in the map when changed
rho_s = M/((4*np.pi*r_s**3)*(np.log(1+conc) - (conc/(1+conc))))
sigma_crit = (c**2*Ds)/(4*np.pi*G*Dd*Dds)
k_s = rho_s*r_s/sigma_crit
theta_s = r_s/Dd
Renorm = (4*G/c**2)*(Dds/(Dd*Ds))
#### Here I just interpolate and zoom into my field of view to get better resolutions
A = np.sqrt(X_pix**2 + Y_pix**2)*pix2rad/theta_s
A_1 = A[100:200,0:100]
n_x = n_y = 100
FOV_arcsec_x = FOV_arcsec_original*(100./300)
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x1 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x)
y1 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y)
X1,Y1 = np.meshgrid(x1,y1)
n_x_2 = 500
n_y_2 = 500
x2 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_2)
y2 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_2)
X2,Y2 = np.meshgrid(x2,y2)
interp_spline = RectBivariateSpline(y1,x1,A_1)
A_2 = interp_spline(y2,x2)
A_3 = A_2[50:450,0:400]
n_x_3 = n_y_3 = 400
FOV_arcsec_x = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x_3))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y_3))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x3 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_3)
y3 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_3)
X3,Y3 = np.meshgrid(x3,y3)
n_x_4 = 1000
n_y_4 = 1000
x4 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_4)
y4 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_4)
X4,Y4 = np.meshgrid(x4,y4)
interp_spline = RectBivariateSpline(y3,x3,A_3)
A_4 = interp_spline(y4,x4)
############### Function to calculate variance
variance = np.zeros((len(A_4),len(A_4)))
def variance_fluctuations(x):
    for i in xrange(len(x)):
        for j in xrange(len(x)):
            if x[j][i] < 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 -1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.)+ (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccosh(1./x[j][i]))/(np.sqrt(1-x[j][i]**2.)))))
            elif x[j][i] > 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 -1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.)+ (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccos(1./x[j][i]))/(np.sqrt(x[j][i]**2.-1)))))

variance_fluctuations(A_4)
#### Creating the map
mean = 0
delta_kappa = np.random.normal(0,variance,A_4.shape)
xfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
yfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
Xfinal, Yfinal = np.meshgrid(xfinal,yfinal)
plt.contourf(Xfinal,Yfinal,delta_kappa,100)
plt.show()
The map looks like this, with the density of blobs increasing towards the right. However, the size of the blobs doesn't change, and the map looks virtually the same whether I use lambda_c = 40*pc or lambda_c = 400*pc.
I'm wondering if the np.random.normal function isn't really doing what I expect it to do? I feel like the pixel scale of the map and the way samples are drawn have no link to the size of the blobs. Maybe there is a better way to create the map using the variance; I would appreciate any insight.
I expect the map to look something like this, where the blob sizes change based on the input parameters for my variance:
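One tentative note before the answers: np.random.normal draws every pixel independently (and its second argument is the standard deviation, not the variance), so a 2D "variance" array only sets a per-pixel amplitude; independent draws carry no spatial correlation, and hence no blob size. A quick check:

import numpy as np

white = np.random.normal(0, 1, (300, 300))
# adjacent pixels of i.i.d. noise are uncorrelated, whatever the amplitude map:
r = np.corrcoef(white[:, :-1].ravel(), white[:, 1:].ravel())[0, 1]
print(r)  # ~0: the field has no intrinsic length scale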
This is quite a well visited problem in (surprise surprise) astronomy and cosmology.
You could use lenstools: https://lenstools.readthedocs.io/en/latest/examples/gaussian_random_field.html
You could also try here:
https://andrewwalker.github.io/statefultransitions/post/gaussian-fields
Not to mention:
https://github.com/bsciolla/gaussian-random-fields
I am not reproducing code here because all credit goes to the above authors. However, they did all just come straight out of a google search :/
Easiest of all is probably a python module FyeldGenerator, apparently designed for this exact purpose:
https://github.com/cphyc/FyeldGenerator
So (adapted from github example):
pip install FyeldGenerator
from FyeldGenerator import generate_field
from matplotlib import use
use('Agg')
import matplotlib.pyplot as plt
import numpy as np
plt.figure()
# Helper that generates a power-law power spectrum
def Pkgen(n):
    def Pk(k):
        return np.power(k, -n)
    return Pk

# Draw samples from a normal distribution
def distrib(shape):
    a = np.random.normal(loc=0, scale=1, size=shape)
    b = np.random.normal(loc=0, scale=1, size=shape)
    return a + 1j * b

shape = (512, 512)
field = generate_field(distrib, Pkgen(2), shape)
plt.imshow(field, cmap='jet')
plt.savefig('field.png', dpi=400)
plt.close()
This gives:
Looks pretty straightforward to me :)
PS: FoV implied a telescope observation of the gaussian random field :)
A completely different and much quicker way may be just to blur the delta_kappa array with a gaussian filter. Try adjusting the sigma parameter to alter the blob size.
from scipy.ndimage.filters import gaussian_filter
dk_gf = gaussian_filter(delta_kappa, sigma=20)
Xfinal, Yfinal = np.meshgrid(xfinal,yfinal)
plt.contourf(Xfinal,Yfinal,dk_gf,100, cmap='jet')
plt.show()
this is image with sigma=20
this is image with sigma=2.5
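To tie sigma to the physical blob size, a rough sketch, assuming the final 1000x1000 grid spanning FOV_arcsec_x from the question (treating lambda_c as a diameter, so a sigma of roughly half of it, is my assumption):

from scipy.ndimage import gaussian_filter

# parsecs per pixel on the final grid (names taken from the question code)
pixel_pc = (FOV_arcsec_x * np.pi / 180. / 3600. * Dd / pc) / 1000.
sigma_px = (lambda_c / pc) / pixel_pc / 2.  # ~ blob radius in pixels
dk_gf = gaussian_filter(delta_kappa, sigma=sigma_px)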
ThunderFlash, try this code to draw the map:
# function to produce blobs:
from scipy.stats import multivariate_normal

def blob(positions, mean=(0,0), var=1):
    cov = [[var,0],[0,var]]
    return multivariate_normal(mean, cov).pdf(positions)

"""
now prepare for blob generation.
note that I use a less dense grid to pick the blob centers (regulated by `step`);
this makes the blobs more pronounced and saves calculation time.
use this part instead of your code section below the comment #### Creating the map
"""
delta_kappa = np.random.normal(0,variance,A_4.shape)  # same
step = 10
dk2 = delta_kappa[::step,::step]  # taking every 10th element
x2, y2 = xfinal[::step], yfinal[::step]
field = np.dstack((Xfinal,Yfinal))
print(field.shape, dk2.shape, x2.shape, y2.shape)
# >> (1000, 1000, 2) (100, 100) (100,) (100,)

result = np.zeros(field.shape[:2])
for x in range(len(x2)):
    for y in range(len(y2)):
        res2 = blob(field, mean=(x2[x], y2[y]), var=10000)*dk2[x,y]
        result += res2
# the loop above took over 20 minutes on a Ryzen 2700X;
# it could presumably be accelerated by vectorization.

plt.contourf(Xfinal,Yfinal,result,100)
plt.show()
you may want to play with the var parameter in blob() to smooth the image, and with step to make it more compressed.
Here is the image that I got using your code (somehow the axes are flipped and the denser areas are at the top):

Bradley-Roth Adaptive Thresholding Algorithm - How do I get better performance?

I have the following code for image thresholding, using the Bradley-Roth image thresholding method.
from PIL import Image
import copy
import time
def bradley_threshold(image, threshold=75, windowsize=5):
    ws = windowsize
    image2 = copy.copy(image).convert('L')
    w, h = image.size
    l = image.convert('L').load()
    l2 = image2.load()
    threshold /= 100.0
    for y in xrange(h):
        for x in xrange(w):
            #find neighboring pixels
            neighbors = [(x+x2,y+y2) for x2 in xrange(-ws,ws) for y2 in xrange(-ws, ws) if x+x2>0 and x+x2<w and y+y2>0 and y+y2<h]
            #mean of all neighboring pixels
            mean = sum([l[a,b] for a,b in neighbors])/len(neighbors)
            if l[x, y] < threshold*mean:
                l2[x,y] = 0
            else:
                l2[x,y] = 255
    return image2

i = Image.open('test.jpg')
windowsize = 5
bradley_threshold(i, 75, windowsize).show()
This works fine when windowsize is small and the image is small. I've been using this image for testing:
I'm experiencing processing times of about 5 or 6 seconds when using a window size of 5, but if I bump the window size up to 20, so that the algorithm checks 20 pixels in each direction for the mean value, I get times upwards of one minute for that image.
If I use an image with a size like 2592x1936 with a window size of only 5, it takes nearly 10 minutes to complete.
So, how can I improve those times? Would a numpy array be faster? Is im.getpixel faster than loading the image into pixel access mode? Are there any other tips for speed boosts? Thanks in advance.
Referencing our comments, I wrote a MATLAB implementation of this algorithm here: Extract a page from a uniform background in an image, and it was quite fast on large images.
If you'd like a better explanation of the algorithm, please see my other answer here: Bradley Adaptive Thresholding -- Confused (questions). This may be a good place to start if you want a better understanding of the code I wrote.
Because MATLAB and NumPy are similar, this is a re-implementation of the Bradley-Roth thresholding algorithm, but in NumPy. I convert the PIL image into a NumPy array, do the processing on this image, then convert back to a PIL image. The function takes in three parameters: the grayscale image image, the size of the window s, and the threshold t. This threshold is different from what you have, as this follows the paper exactly. The threshold t is a percentage of the total summed area of each pixel window. If the summed area is less than this threshold, then the output should be a black pixel - else it's a white pixel. The defaults for s and t are the number of columns divided by 8 and rounded, and 15% respectively:
import numpy as np
from PIL import Image
def bradley_roth_numpy(image, s=None, t=None):
    # Convert image to numpy array (plain float instead of the
    # deprecated np.float alias)
    img = np.array(image).astype(float)

    # Default window size is round(cols/8)
    if s is None:
        s = np.round(img.shape[1]/8)

    # Default threshold is 15% of the total
    # area in the window
    if t is None:
        t = 15.0

    # Compute integral image
    intImage = np.cumsum(np.cumsum(img, axis=1), axis=0)

    # Define grid of points
    (rows,cols) = img.shape[:2]
    (X,Y) = np.meshgrid(np.arange(cols), np.arange(rows))

    # Make into 1D grid of coordinates for easier access
    X = X.ravel()
    Y = Y.ravel()

    # Ensure s is even so that we are able to index into the image
    # properly
    s = s + np.mod(s,2)

    # Access the four corners of each neighbourhood
    x1 = X - s/2
    x2 = X + s/2
    y1 = Y - s/2
    y2 = Y + s/2

    # Ensure no coordinates are out of bounds
    x1[x1 < 0] = 0
    x2[x2 >= cols] = cols-1
    y1[y1 < 0] = 0
    y2[y2 >= rows] = rows-1

    # Ensure coordinates are integer
    x1 = x1.astype(int)
    x2 = x2.astype(int)
    y1 = y1.astype(int)
    y2 = y2.astype(int)

    # Count how many pixels are in each neighbourhood
    count = (x2 - x1) * (y2 - y1)

    # Compute the row and column coordinates to access
    # each corner of the neighbourhood for the integral image
    f1_x = x2
    f1_y = y2
    f2_x = x2
    f2_y = y1 - 1
    f2_y[f2_y < 0] = 0
    f3_x = x1-1
    f3_x[f3_x < 0] = 0
    f3_y = y2
    f4_x = f3_x
    f4_y = f2_y

    # Compute areas of each window
    sums = intImage[f1_y, f1_x] - intImage[f2_y, f2_x] - intImage[f3_y, f3_x] + intImage[f4_y, f4_x]

    # Compute thresholded image and reshape into a 2D grid
    out = np.ones(rows*cols, dtype=bool)
    out[img.ravel()*count <= sums*(100.0 - t)/100.0] = False

    # Also convert back to uint8
    out = 255*np.reshape(out, (rows, cols)).astype(np.uint8)

    # Return PIL image back to user
    return Image.fromarray(out)

if __name__ == '__main__':
    img = Image.open('test.jpg').convert('L')
    out = bradley_roth_numpy(img)
    out.show()
    out.save('output.jpg')
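Spelled out in isolation, the per-pixel test on the out array above amounts to this (a sketch with illustrative names):

def brt_decide(pixel, window_sum, count, t=15.0):
    # white iff the pixel is not more than t% darker than its window mean;
    # written without division: pixel*count > window_sum*(100-t)/100
    return 255 if pixel * count > window_sum * (100.0 - t) / 100.0 else 0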
The image is read in and converted to grayscale if required. The output image will be displayed, and it will be saved in the same directory where you ran the script, as an image called output.jpg. If you want to override the settings, simply do:
out = bradley_roth_numpy(img, windowsize, threshold)
Play around with this to get good results. Using the default parameters and using IPython, I measured the average time of execution using timeit, and this is what I get for your image you uploaded in your post:
In [16]: %timeit bradley_roth_numpy(img)
100 loops, best of 3: 7.68 ms per loop
This means that running this function repeatedly 100 times on the image you uploaded, the best of 3 execution times gave on average 7.68 milliseconds per run.
I also get this image as a result when I threshold it:
Profiling your code in IPython with %prun shows:

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    50246    2.009    0.000    2.009    0.000  <ipython-input-78-b628a43d294b>:15(<listcomp>)
    50246    0.587    0.000    0.587    0.000  <ipython-input-78-b628a43d294b>:17(<listcomp>)
        1    0.170    0.170    2.829    2.829  <ipython-input-78-b628a43d294b>:5(bradley_threshold)
    50246    0.058    0.000    0.058    0.000  {built-in method sum}
    50257    0.004    0.000    0.004    0.000  {built-in method len}
i.e., almost all of the running time is due to Python loops (slow) and non-vectorized arithmetic (slow). So I would expect big improvements if you rewrite it using numpy arrays; alternatively, you could use cython if you can't work out how to vectorize your code.
OK, I am a bit late here. Let me share my thoughts on that anyway:
You could speed it up by using dynamic programming to compute the means, but it is much easier and faster to let scipy and numpy do all the dirty work. (Note that I use Python 3 for my code, so xrange is changed to range in your code.)
#!/usr/bin/env python3
import numpy as np
from scipy import ndimage
from PIL import Image
import copy
import time
def faster_bradley_threshold(image, threshold=75, window_r=5):
    percentage = threshold / 100.
    window_diam = 2*window_r + 1
    # convert image to numpy array of grayscale values
    img = np.array(image.convert('L')).astype(float)  # float for mean precision
    # matrix of local means with scipy
    means = ndimage.uniform_filter(img, window_diam)
    # result: 0 for entry less than percentage*mean, 255 otherwise
    height, width = img.shape[:2]
    result = np.zeros((height,width), np.uint8)  # initially all 0
    result[img >= percentage * means] = 255      # numpy magic :)
    # convert back to PIL image
    return Image.fromarray(result)

def bradley_threshold(image, threshold=75, windowsize=5):
    ws = windowsize
    image2 = copy.copy(image).convert('L')
    w, h = image.size
    l = image.convert('L').load()
    l2 = image2.load()
    threshold /= 100.0
    for y in range(h):
        for x in range(w):
            #find neighboring pixels
            neighbors = [(x+x2,y+y2) for x2 in range(-ws,ws) for y2 in range(-ws, ws) if x+x2>0 and x+x2<w and y+y2>0 and y+y2<h]
            #mean of all neighboring pixels
            mean = sum([l[a,b] for a,b in neighbors])/len(neighbors)
            if l[x, y] < threshold*mean:
                l2[x,y] = 0
            else:
                l2[x,y] = 255
    return image2

if __name__ == '__main__':
    img = Image.open('test.jpg')
    t0 = time.process_time()
    threshed0 = bradley_threshold(img)
    print('original approach:', round(time.process_time()-t0, 3), 's')
    threshed0.show()
    t0 = time.process_time()
    threshed1 = faster_bradley_threshold(img)
    print('w/ numpy & scipy :', round(time.process_time()-t0, 3), 's')
    threshed1.show()
That made it much faster on my machine:
$ python3 bradley.py
original approach: 3.736 s
w/ numpy & scipy : 0.003 s
PS: Note that the mean I used from scipy behaves slightly differently at the borders than the one from your code (for positions where the window for the mean calculation is not fully contained in the image anymore). However, I think that shouldn't be a problem.
Another minor difference is that the window from the for-loops was not exactly centered on the pixel, as the offset by xrange(-ws,ws) with ws=5 yields -5,-4,...,3,4 and results in an average offset of -0.5. This probably wasn't intended.
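For reference, a symmetric window in the loop version would look like the sketch below; range(-ws, ws+1) gives 2*ws+1 samples centred on the pixel, and the bounds test also keeps the x == 0 and y == 0 border pixels that the original condition skipped:

neighbors = [(x+dx, y+dy) for dx in range(-ws, ws+1) for dy in range(-ws, ws+1)
             if 0 <= x+dx < w and 0 <= y+dy < h]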
