I am using Python 3.8.10 on Linux Mint 20.3 Una. I am making a series of animations with a multitude (potentially thousands) of fish shapes, each of which is produced by specifying a 2D profile with points and is then filled in using the pyplot fill function.
What I would like to be able to do is to apply a unique blur to each of these individual filled regions based on a computed distance to mimic image depth. An added complication is that these filled regions frequently overlap.
In theory, this could be done by exporting SVG files and manually applying the blurs in Inkscape or some other package, but there are potentially thousands of fish and hundreds of frames, so a way to achieve this in code really is the only realistic way to accomplish it, if it is possible.
Here is the minimal code that produces two filled profiles that I would like to blur individually:
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
#define profile of object with points
x_profile = [0.5,0.485951301332915,0.423371700761206,0.358237605529776,0.281609306290982,0.23180095266422,0.152618567550257,0.053001860296735,-0.005746611462221,-0.060663545623872,-0.05683323438022,-0.257343937095579,-0.317369329156755,-0.345466399463283,-0.469348762061393,-0.492337251833031,-0.5,-0.439974607938825,-0.418263242861681,-0.415709156986512,-0.461686095651334,-0.492337415346851,-0.483397419850022,-0.466794594429313,-0.363346513092306,-0.342912313588113,-0.31864669912198,-0.289272544999412,-0.236909860226751,-0.210090037250083,-0.183269887245775,-0.146233189348514,-0.078544599457363,0.086206203027589,0.210088361233424,0.310982111424531,0.418261893872663,0.478287408569203,0.493612741389321]
y_profile = [-0.019156461632871,0.002554903444271,0.031928934931474,0.051085805348896,0.065134504015981,0.07024308455087,0.071518492350251,0.067688181106599,0.158365179012477,0.068965632828735,0.049808353626761,0.028096988549618,0.025542085105346,0.03192770857782,0.10217038434414,0.104725287788412,0.091954040843463,0.00255449465972,-0.00255449465972,-0.017879827479838,-0.067688181106599,-0.148148017942698,-0.158365179012477,-0.151979555540003,-0.061302557634125,-0.047254267751592,-0.040868235494567,-0.042143643293948,-0.080457792913345,-0.084288104156997,-0.079179523622108,-0.097059759886497,-0.111108049769031,-0.127710834311284,-0.126435426511903,-0.107278556094481,-0.076627072885143,-0.045975589675805,-0.031927299793271]
#this just makes a second object and offsets it down 0.5 units
n_objects = 2
n_points = len(y_profile)
x_points = np.zeros((n_objects, n_points))
y_points = np.zeros((n_objects, n_points))
for i in range(n_objects):
    for j in range(n_points):
        x_points[i,j] = x_profile[j]
        y_points[i,j] = y_profile[j] - i*0.5
#make plot
fig = plt.figure(frameon=False)
fig.set_size_inches(6.5, 6.5)
ax = plt.axes()
ax.set_facecolor((0,0,1.0))
ax.set_xlim(-1,+1)
ax.set_ylim(-1,+1)
ax.set_aspect('equal', adjustable='box')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
#create filled regions defined by copies of the profile points (I want to be able to apply a blur to these fills individually)
for i in range(n_objects):
    plt.fill(x_points[i,:], y_points[i,:], color = (0, 0, 0.5))
#tried the following, but does not work at all.
#handle = plt.fill(x_profile, y_profile, color = (0, 0, 0.5))
#blurred = gaussian_filter(handle, sigma=1)
#show plot (normally exporting PNG frames for animation)
plt.show()
which should yield this image:
[Image: Fish Profiles]
If this is not possible in Python, I'm open to suggestions as to how this could be implemented dynamically in some other way.
I've seen examples of SciPy Gaussian blur applied to regions of static images, but the blur that I want to achieve is specific to the filled "object" which isn't a neat rectangle. I note that when this image is exported as an SVG the individual filled objects appear as distinct entities in that file, but I don't see a way to assign a handle to it within Python and to apply a blur to it. I've tried variations of 'handle = plt.fill(x,y)' and 'gaussian_filter(handle, sigma=1)' but with no success.
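For reference, plt.fill returns a list of matplotlib.patches.Polygon artists rather than pixel data, which is presumably why passing its return value straight to gaussian_filter fails. The kind of per-object rasterise-then-blur approach I have in mind would look roughly like the sketch below (untested, my own illustration; it reuses the x_profile/y_profile lists above and assumes an Agg-based backend so the canvas exposes a pixel buffer):

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# render one filled object on its own figure so it can be blurred independently
fig, ax = plt.subplots(figsize=(6.5, 6.5))
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_aspect('equal', adjustable='box')
ax.set_axis_off()
polys = ax.fill(x_profile, y_profile, color=(0, 0, 0.5))
print(type(polys[0]))  # matplotlib.patches.Polygon -- an artist, not an image array

fig.canvas.draw()                            # rasterise the figure
rgba = np.asarray(fig.canvas.buffer_rgba())  # (H, W, 4) uint8 pixel buffer
# blur all four channels of this object's buffer; a per-object sigma could encode depth
blurred = np.dstack([gaussian_filter(rgba[..., c], sigma=3) for c in range(4)])

The blurred buffers would still need to be composited back together in depth order, but it shows where a handle to per-object pixels could come from.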
I think I was able to do what you are asking for using convolution, but it is not optimized for speed at all. Plus, it is kind of hard to tell how well it will translate to your bigger code.
Going off of what you posted, I converted the graphs to RGB arrays and convolved each channel separately with a from-scratch convolution function (not my own, 1). This code will output the first fish image and then, a few seconds later, the blurred fish image.
import matplotlib.pyplot as plt
import numpy as np
import cv2
import plotly.express as px
def Convolve(img, kernel):
    (imgX, imgY) = img.shape[:2]          # imgX = rows, imgY = columns
    (kernelX, kernelY) = kernel.shape[:2]
    #print(imgX, imgY, kernelX, kernelY)
    pad = (kernelX - 1) // 2
    img = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_REPLICATE)  # top, bottom, left, right
    # the above line prevents an error with convolution: operands could not be broadcast together with shapes (23,22) (23,23)
    output = np.zeros((imgX, imgY), dtype="float32")
    # shift kernel vertically and horizontally across the image; the padding prevents the kernel from going out of bounds
    for y in np.arange(pad, imgX + pad):        # rows
        for x in np.arange(pad, imgY + pad):    # columns
            # locate the region of interest centred on the current pixel
            roi = img[y - pad:y + pad + 1, x - pad:x + pad + 1]
            #print(roi)
            # perform the convolution
            k = (roi * kernel).sum()
            # populate the result into the previously created np.zeros array
            output[y - pad, x - pad] = k
    return output
# define profile of object with points
x_profile = [0.5, 0.485951301332915, 0.423371700761206, 0.358237605529776, 0.281609306290982, 0.23180095266422,
0.152618567550257, 0.053001860296735, -0.005746611462221, -0.060663545623872, -0.05683323438022,
-0.257343937095579, -0.317369329156755, -0.345466399463283, -0.469348762061393, -0.492337251833031, -0.5,
-0.439974607938825, -0.418263242861681, -0.415709156986512, -0.461686095651334, -0.492337415346851,
-0.483397419850022, -0.466794594429313, -0.363346513092306, -0.342912313588113, -0.31864669912198,
-0.289272544999412, -0.236909860226751, -0.210090037250083, -0.183269887245775, -0.146233189348514,
-0.078544599457363, 0.086206203027589, 0.210088361233424, 0.310982111424531, 0.418261893872663,
0.478287408569203, 0.493612741389321]
y_profile = [-0.019156461632871, 0.002554903444271, 0.031928934931474, 0.051085805348896, 0.065134504015981,
0.07024308455087, 0.071518492350251, 0.067688181106599, 0.158365179012477, 0.068965632828735,
0.049808353626761, 0.028096988549618, 0.025542085105346, 0.03192770857782, 0.10217038434414,
0.104725287788412, 0.091954040843463, 0.00255449465972, -0.00255449465972, -0.017879827479838,
-0.067688181106599, -0.148148017942698, -0.158365179012477, -0.151979555540003, -0.061302557634125,
-0.047254267751592, -0.040868235494567, -0.042143643293948, -0.080457792913345, -0.084288104156997,
-0.079179523622108, -0.097059759886497, -0.111108049769031, -0.127710834311284, -0.126435426511903,
-0.107278556094481, -0.076627072885143, -0.045975589675805, -0.031927299793271]
# make plot
fig = plt.figure(frameon=False)
fig.set_size_inches(6.5, 6.5)
ax = plt.axes()
ax.set_facecolor((0, 0, 1.0))
ax.set_xlim(-1, +1)
ax.set_ylim(-1, +1)
ax.set_aspect('equal', adjustable='box')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
# create filled region defined by profile points (I want to be able to apply a blur to this)
plt.fill(x_profile, y_profile, color=(0, 0, 0.5))
fig.canvas.draw()
data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
array = np.array(data)
R = array[:, :, 0]
G = array[:, :, 1]
B = array[:, :, 2]
fig = px.imshow(array, color_continuous_scale="gray")
fig.show()
_1DKern = cv2.getGaussianKernel(33, 2)  # first value is dimensions, second is sigma
_2DKern = np.outer(_1DKern, _1DKern.transpose())
convR = Convolve(R, _2DKern)
convG = Convolve(G, _2DKern)
convB = Convolve(B, _2DKern)
conv = np.stack([convR, convG, convB], 2)
fig = px.imshow(conv, color_continuous_scale="gray")
fig.show()
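As a side note (not part of the hand-rolled approach above), the same per-channel blur can be produced far faster with a library routine. A sketch, assuming the array built above and a sigma of 2 to match the kernel:

from scipy.ndimage import gaussian_filter

# per-channel Gaussian blur of the canvas array
conv_fast = np.dstack([gaussian_filter(array[:, :, c].astype("float32"), sigma=2)
                       for c in range(3)])

# or, with OpenCV, all three channels in one call
conv_cv = cv2.GaussianBlur(array, (33, 33), 2)

Either of these should be close to the output of the from-scratch Convolve, apart from edge handling.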
I'm trying to convert a continuous list of points (values between 0 and 1) into a black and white image representing the area under/over the list of points.
plt.plot(points)
plt.ylabel('True val')
plt.show()
print("Points shape-->", points.shape)
I can save the image produced by matplotlib, but I think that would be a nasty workaround.
In the end I would like to obtain an image with shape (224, 224) where the white zone represents the area under the line and the black zone represents the area over the line...
image_area = np.zeros((points.shape[0],points.shape[0],))
# ¿?
Any ideas or suggestions on how to approach this are welcome! Thanks, experts.
Here is a basic example of how you could do it. Since the slicing requires integers, you may have to scale your raw data first.
import numpy as np
import matplotlib.pyplot as plt
# your 2D image
image_data = np.zeros((224, 224))
# your points. Here I am just using a random list of points
points = np.random.choice(224, size=224)
# loop over each column in the image and set the values
# under "points" equal to 1
for col in range(len(image_data[0])):
    image_data[:points[col], col] = 1
# show the final image
plt.imshow(image_data, cmap='Greys')
plt.show()
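If the raw points are floats between 0 and 1, as in the question, they can be scaled to integer row indices first; a small sketch of that step (using random stand-in data):

# scale continuous values in [0, 1] to integer row indices before slicing
image_data = np.zeros((224, 224))
points = np.random.rand(224)                      # stand-in for the real 0-1 data
rows = (points * (image_data.shape[0] - 1)).astype(int)
for col in range(image_data.shape[1]):
    image_data[:rows[col], col] = 1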
Thank you Eric, here is the solution based on your proposal. Thank you very much!
def to_img(points):
    shape = points.shape[0]
    # your 2D image
    image_data = np.zeros((shape, shape))
    # your points. Here I am just using a random list of points
    # points = np.random.choice(224, size=224)
    def minmax_norm_img(data, xmax, xmin):
        return (data - xmin) / (xmax - xmin)
    points_max = np.max(points)
    points_min = np.min(points)
    points_norm = minmax_norm_img(points, points_max, points_min)
    # loop over each column in the image and set the values
    # over "points" equal to 1
    for col in range(len(image_data[0])):
        image_data[shape - int(points_norm[col] * shape):, col] = 1
    return image_data
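A quick usage check (with random stand-in data, since the real points array is not shown here):

points = np.random.rand(224)   # hypothetical 1-D input of length 224
img = to_img(points)
print(img.shape)               # (224, 224)
plt.imshow(img, cmap='Greys')
plt.show()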
My work requires applying a Local Binary Operator to images. For that, I have already converted the images to grayscale and then run a connected components analysis on the image as well.
Here is the Code:
Adding Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skimage.io import imread, imshow
from skimage.color import rgb2gray
from skimage.morphology import (erosion, dilation, closing, opening, area_closing, area_opening)
from skimage.measure import label, regionprops, regionprops_table
Rendering the image
plt.figure(figsize=(6,6))
painting = imread("E:/Project/for_annotation/Gupi Gain0032.jpg")
plt.imshow(painting);
plt.figure(figsize=(6,6))
Binarizing Image
gray_painting = rgb2gray(painting)
binarized = gray_painting<0.55
plt.imshow(binarized);
Declaring Kernel
square = np.array([[1,1,1],
[1,1,1],
[1,1,1]])
Dilation function
def multi_dil(im, num, element=square):
    for i in range(num):
        im = dilation(im, element)
    return im
Erosion function
def multi_ero(im, num, element=square):
    for i in range(num):
        im = erosion(im, element)
    return im
Functions Applied
plt.figure(figsize=(6,6))
multi_dilated = multi_dil(binarized, 7)
area_closed = area_closing(multi_dilated, 50000)
multi_eroded = multi_ero(area_closed, 7)
opened = opening(multi_eroded)
plt.imshow(opened);
Label function
plt.figure(figsize=(6,6))
label_im = label(opened)
regions = regionprops(label_im)
plt.imshow(label_im);
Extract features
properties = ['area','convex_area','bbox_area', 'extent', 'mean_intensity','solidity', 'eccentricity', 'orientation']
pd.DataFrame(regionprops_table(label_im, gray_painting,
                               properties=properties))
Filtering Regions
masks = []
bbox = []
list_of_index = []
for num, x in enumerate(regions):
    area = x.area
    convex_area = x.convex_area
    if (num != 0 and (area > 100) and (convex_area/area < 1.05)
            and (convex_area/area > 0.95)):
        masks.append(regions[num].convex_image)
        bbox.append(regions[num].bbox)
        list_of_index.append(num)
count = len(masks)
Extracting Images
fig, ax = plt.subplots(2, int(count/2), figsize=(15,8))
for axis, box, mask in zip(ax.flatten(), bbox, masks):
    red = painting[:,:,0][box[0]:box[2], box[1]:box[3]] * mask
    green = painting[:,:,1][box[0]:box[2], box[1]:box[3]] * mask
    blue = painting[:,:,2][box[0]:box[2], box[1]:box[3]] * mask
    image = np.dstack([red, green, blue])
    axis.imshow(image)
plt.tight_layout()
plt.figure(figsize=(6,6))
rgb_mask = np.zeros_like(label_im)
for x in list_of_index:
    rgb_mask += (label_im == x+1).astype(int)
red = painting[:,:,0] * rgb_mask
green = painting[:,:,1] * rgb_mask
blue = painting[:,:,2] * rgb_mask
image = np.dstack([red,green,blue])
plt.imshow(image);
I am getting an error.
ValueError: Number of columns must be a positive integer, not 0
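From some digging, that ValueError seems to come from plt.subplots(2, int(count/2)) when count ends up at 0 or 1, i.e. no region survives the filtering, so the requested number of columns is 0. A minimal guard (just a sketch, not a fix for the filtering itself):

# count // 2 must be at least 1 before building the subplot grid
if count < 2:
    print(f"only {count} region(s) passed the filter -- relax the area/convexity thresholds")
else:
    fig, ax = plt.subplots(2, count // 2, figsize=(15, 8))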
There is a possible approach which is not very far from what you attempted. Assume the background pixels are assigned the label 0, and the object pixels the value 1.
scan the image row by row;
when you meet a pixel 1, set a new label and perform a flood fill operation, replacing 1 by the new label.
Flood filling can be implemented very simply:
set the starting pixel to the new label;
recursively fill the eight neighbors, if they have a 1.
https://en.wikipedia.org/wiki/Flood_fill
The code of this version is pretty simple. But you will notice that it can easily overflow the stack because the number of pending fills can be as large as the image size.
def FloodFill(I, X, Y, Label):
    # I is a 2-D array of 0 (background) / 1 (object) pixels
    I[X, Y] = Label
    # recursively fill the eight neighbours that still hold a 1
    for Xn in (X - 1, X, X + 1):
        for Yn in (Y - 1, Y, Y + 1):
            if 0 <= Xn < I.shape[0] and 0 <= Yn < I.shape[1] and I[Xn, Yn] == 1:
                FloodFill(I, Xn, Yn, Label)

def CCL(I):
    Label = 1
    for X in range(I.shape[0]):     # scan the image row by row
        for Y in range(I.shape[1]):
            if I[X, Y] == 1:
                Label += 1
                FloodFill(I, X, Y, Label)
    return I
So I would recommend the scanline version, which is a little more involved.
https://en.wikipedia.org/wiki/Flood_fill#Scanline_fill
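If recursion depth is a concern but the scanline variant feels like too much work, an explicit stack gives the same result without deep recursion. A quick sketch of that variant:

def FloodFillIterative(I, X0, Y0, Label):
    # an explicit stack replaces the recursion, so large regions cannot overflow the call stack
    stack = [(X0, Y0)]
    while stack:
        X, Y = stack.pop()
        if 0 <= X < I.shape[0] and 0 <= Y < I.shape[1] and I[X, Y] == 1:
            I[X, Y] = Label
            for Xn in (X - 1, X, X + 1):
                for Yn in (Y - 1, Y, Y + 1):
                    stack.append((Xn, Yn))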
I want to use OCR to capture the bowling scores from the monitor at the lanes. I had a look at this sudoku solver, as I think it's pretty similar - numbers and grids, right? It has trouble finding the horizontal lines. Has anyone got any tips for pre-processing this image to make it easier to detect the lines (or numbers!)? Also, any tips on how to deal with the split (the orange ellipse around some of the 8's in the image)?
So far I have got the outline of the score area and cropped it.
import matplotlib
matplotlib.use('TkAgg')
from skimage import io
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
from skimage.color import rgb2gray
# import pytesseract
from matplotlib.path import Path
from qhd import *
def polygonArea(poly):
    """
    Return area of an unclosed polygon.

    :see: https://stackoverflow.com/a/451482
    :param poly: (n,2)-array
    """
    # we need a plain list for the following operations
    if isinstance(poly, np.ndarray):
        poly = poly.tolist()
    segments = zip(poly, poly[1:] + [poly[0]])
    return 0.5 * abs(sum(x0*y1 - x1*y0
                         for ((x0, y0), (x1, y1)) in segments))
filename = 'good.jpg'
image = io.imread(filename)
image = rgb2gray(image)
# Find contours at a constant value of 0.8
contours = measure.find_contours(image, 0.4)
# Display the image and plot all contours found
fig, ax = plt.subplots()
c = 0
biggest = None
biggest_size = 0
for n, contour in enumerate(contours):
    curr_size = polygonArea(contour)
    if curr_size > biggest_size:
        biggest = contour
        biggest_size = curr_size
biggest = qhull2D(biggest)
# Approximate that so we just get a rectangle.
biggest = measure.approximate_polygon(biggest, 500)
# vertices of the cropping polygon
yc = biggest[:,0]
xc = biggest[:,1]
xycrop = np.vstack((xc, yc)).T
# xy coordinates for each pixel in the image
nr, nc = image.shape
ygrid, xgrid = np.mgrid[:nr, :nc]
xypix = np.vstack((xgrid.ravel(), ygrid.ravel())).T
# construct a Path from the vertices
pth = Path(xycrop, closed=False)
# test which pixels fall within the path
mask = pth.contains_points(xypix)
# reshape to the same size as the image
mask = mask.reshape(image.shape)
# create a masked array
masked = np.ma.masked_array(image, ~mask)
# if you want to get rid of the blank space above and below the cropped
# region, use the min and max x, y values of the cropping polygon:
xmin, xmax = int(xc.min()), int(np.ceil(xc.max()))
ymin, ymax = int(yc.min()), int(np.ceil(yc.max()))
trimmed = masked[ymin:ymax, xmin:xmax]
plt.imshow(trimmed, cmap=plt.cm.gray), plt.title('trimmed')
plt.show()
https://imgur.com/LijB85I is an example of how the score is displayed.
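One direction I am considering (an untested sketch, assuming trimmed is the masked crop produced above) is to binarise the crop and strengthen the horizontal grid lines with a morphological closing before looking for lines or digits:

from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, rectangle

crop = np.ma.filled(trimmed, fill_value=1.0)       # fill the masked region with white
binary = crop < threshold_otsu(crop)               # dark pixels (lines, digits) become True
# a flat, wide footprint bridges small gaps along horizontal lines
horizontal = binary_closing(binary, rectangle(1, 25))
plt.imshow(horizontal, cmap=plt.cm.gray), plt.title('enhanced horizontal structure')
plt.show()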
Hi, I am trying to apply a Möbius transformation to an image using matplotlib. This is the Python code to do it.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from numpy import *
img = mpimg.imread('test.jpg') # load an image
zi = [766j, 512+766j, 256+192j]
wi = [738j, 512+496j, 256+173j]
r = ones((600,700,3),dtype=uint8)*255 # empty-white image
for i in range(img.shape[1]):
    for j in range(img.shape[0]):
        z = complex(i,j)
        qf = ((wi[0] * (-wi[1] * (zi[0]-zi[1]) * (z-zi[2]) + wi[2] * (z-zi[1]) * (zi[0]-zi[2])) - wi[1]*wi[2]*(z-zi[0]) * (zi[1]-zi[2])))
        qs = (wi[2]*(zi[0]-zi[1])*(z-zi[2])-wi[1]*(z-zi[1])*(zi[0]-zi[2])+wi[0]*(z-zi[0])*(zi[1]-zi[2]))
        w = qf/qs
        r[int(imag(w)),int(real(w)),:] = img[j,i,:]
plt.subplot(121)
plt.imshow(img,origin='lower',aspect='auto')
plt.subplot(122)
plt.imshow(r,origin='lower',aspect='auto')
plt.show()
If I run this code, I get the following result.
If you look at the right side, the size has changed. I want to know how to fit the result image in the box. What I did was hard-code the result image size and run the code. However, since the Möbius transformation expands and shrinks the image, sometimes I get a very small image and sometimes I get a very big image. Can anyone solve this problem? Thanks!
You can do the following to find the x limits and y limits of your transformed image:
plt.gca().set_aspect('equal')
i, j = np.where(np.all(r!=255, axis=2))
xlimits = j.min(), j.max()
ylimits = i.min(), i.max()
plt.xlim(xlimits)
plt.ylim(ylimits)
The set_aspect() call was added to show the image in its original aspect ratio. numpy.where() finds the row and column indices where the image is not white (255, 255, 255); the minimum and maximum of those indices are then used to set the new limits.
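Alternatively (a sketch of my own, assuming numpy is imported as np and img, zi, wi are as defined in the question), the output array could be sized before the pixel loop by evaluating the transform on the whole source grid at once:

# evaluate w for every source pixel, then allocate r just large enough to hold the result
jj, ii = np.mgrid[0:img.shape[0], 0:img.shape[1]]
z = ii + 1j * jj
qf = (wi[0] * (-wi[1]*(zi[0]-zi[1])*(z-zi[2]) + wi[2]*(z-zi[1])*(zi[0]-zi[2]))
      - wi[1]*wi[2]*(z-zi[0])*(zi[1]-zi[2]))
qs = (wi[2]*(zi[0]-zi[1])*(z-zi[2]) - wi[1]*(z-zi[1])*(zi[0]-zi[2])
      + wi[0]*(z-zi[0])*(zi[1]-zi[2]))
w = qf / qs
height = int(np.ceil(w.imag.max())) + 1
width = int(np.ceil(w.real.max())) + 1
r = np.ones((height, width, 3), dtype=np.uint8) * 255

If w.real or w.imag can go negative, an offset by their minima would also be needed before indexing into r.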