How to manipulate an image in Python to mimic macular degeneration

I'm trying to find a good package or algorithm to modify an image so that its center is pushed outwards, to mimic macular degeneration. The best method I found was the image_slicer package: split the image into 4 pieces, push the inner corners, and stitch the images back together. But the join method of the package is not working and the documentation is unclear. Does anyone have a package that can do this?
Also, I am trying to push the outside of an image in, to create tunnel vision.
(For both of these I still want to preserve the image; skewing is fine, but I want to avoid losing image content.)
Some code I wrote:
import image_slicer
#split image into 4 pieces
image_slicer.slice('piegraph.jpeg',4) #just a simple sample img
#code to resize corners
#I can figure this out later.
#stitch images back
tiles = ("pie_01_01.png","pie_01_02.png","pie_02_01.png","pie_02_02.png")
image_slicer.join(tiles)

You can use OpenCV and NumPy to do what you want.
If I understand correctly, what you need is a mapping that takes the pixels of the original image and moves them as a function of their distance from the center of the image.
You want all the pixels inside the "black hole" to be black, and all the others to be bunched together.
So if we take the original image to be:
The result you are looking for is something like:
The following code does this. The parameters that you need to play with are:
RBlackHole - The radius of your black hole
FACTOR - Controls the amount of "bunching": too small and all the pixels will also be mapped to black; too large and they will not be bunched.
import cv2
import numpy as np
import math
# Read img
img = cv2.imread('earth.jpg')
rows,cols,ch = img.shape
# Params
FACTOR = 75
RBlackHole = 10
# Create a 2d mapping between the image and a new warp
smallSize = min(rows,cols)
xMap = np.zeros((rows,cols), np.float32)
yMap = np.zeros_like(xMap)
for i in range(rows):
    for j in range(cols):
        # Calculate the distance of the current pixel from the center of the image
        r = math.sqrt((i-rows/2)*(i-rows/2) + (j-cols/2)*(j-cols/2))
        # If the pixel is inside the radius of the black hole,
        # map it to a location outside of the image.
        if r <= RBlackHole:
            xMap[i, j] = rows*cols
            yMap[i, j] = rows*cols
        else:
            # Map the pixel as a function of its distance from the center.
            # The further away it is, the more bunched it will be.
            xMap[i, j] = (r-RBlackHole)*(j - cols/2)/FACTOR + cols/2
            yMap[i, j] = (r-RBlackHole)*(i - rows/2)/FACTOR + rows/2

# Apply the remapping
dstImg = cv2.remap(img,xMap,yMap,cv2.INTER_CUBIC)

# Save output image
cv2.imwrite("blackHoleWorld.jpg", dstImg)


Using python to fill an image with one color in steps from 0% to 100%

I have a black image that I need to fill with a new color.
I want to generate new images starting from 1% to 100% (generating an
image for every 1% filled).
Examples for 4 fill ratios: heart image filled with 1%, 5%, 10% and 15%.
Research I did
I did a lot of research on the internet and the closest I came was this link:
Fill an image with color but keep the alpha (Color overlay in PIL)
However, as I don't have much experience with Python for image editing, I couldn't move forward or modify the code as needed.
Edit:
I was trying this code from the link:
from PIL import Image
import numpy as np
# Open image
im = Image.open('2746646.png')
# Make into Numpy array
n = np.array(im)
# Set first three channels to red
n[..., 0:3] = [ 255, 0, 0 ]
# Convert back to PIL Image and save
Image.fromarray(n).save('result.png')
But it only generates a single image (as if it were 100% filled); I need 100 images, with one more percent filled in each one.
Updated Answer
Now you have shared your actual starting image, it seems you don't really want to replace black pixels, but actually opaque pixels. If you split your image into its constituent RGBA channels and lay them out left-to-right R,G,B then A, you can see you want to fill where the alpha (rightmost) channel is white, rather than where the RGB channels are black:
That changes the code to this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('muscle.png').convert('RGBA')
# Make Numpy array
RGBA = np.array(im)
# Get RGB part
RGB = RGBA[..., :3]
# Get alpha channel as Numpy array
alpha = RGBA[..., 3]

# Find X,Y coordinates of all opaque pixels in image
blkY, blkX = np.where(alpha==255)

# Just take one entry per row, even if multiple opaque pixels in it
uniqueRows = np.unique(blkY)

# How many rows are there with opaque pixels in?
nUniqueRows = len(uniqueRows)

for percent in range(2,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    RGB[blkY[indices.ravel()], blkX[indices.ravel()], :3] = [0,255,0]
    res = Image.fromarray(RGBA).save(filename)
Original Answer
That was fun! This seems to work - though it's not that efficient. It is not a true "floodfill", see note at end.
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('heart.png').convert('RGB')
# Make Numpy array
na = np.array(im)
# Get greyscale version of image as Numpy array
grey = np.array(im.convert('L'))
# Find X,Y coordinates of all black pixels in image
blkY, blkX = np.where(grey==0)
# Just take one entry per row, even if multiple black pixels in it
uniqueRows = np.unique(blkY)
# How many rows are there with black pixels in?
nUniqueRows = len(uniqueRows)
for percent in range(1,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    # print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    na[blkY[indices.ravel()], blkX[indices.ravel()], :] = [0,255,0]
    res = Image.fromarray(na).save(filename)
Note that this isn't actually a true "flood fill" - it is more naïve than that - because it doesn't seem necessary for your image. If you add another shape, it will fill that too:
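If you do want the fill restricted to a single shape, a rough sketch of one way to do it, assuming SciPy is available (the seed point is a placeholder you would choose inside the shape you care about), is to label the black pixels first and keep only the connected component under the seed, then run the same row-by-row loop on that component:
from PIL import Image
import numpy as np
from scipy import ndimage

im = Image.open('heart.png').convert('RGB')
na = np.array(im)
grey = np.array(im.convert('L'))

# Label connected components of black pixels and pick the one under the seed
labels, n = ndimage.label(grey == 0)
seed = (na.shape[0] // 2, na.shape[1] // 2)   # placeholder: must lie on the shape
target = labels[seed]

blkY, blkX = np.where(labels == target)
# ...then proceed exactly as in the loop above, using these blkY, blkX.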

How to find the largest blank (white) square area in the doc and return its coordinates and area?

I need to find the largest empty area in the document and display its coordinates, center point and area, using python to put a QR Code there.
I think OpenCV and Numpy should be enough for this task.
What kind of threshold should I use? There are a lot of types of scans: gray, black-and-white, and with color. And how do I find the contours properly? How can this be implemented in the fastest way? An example using the first scan from Google is attached, where you can see that the code should find the largest empty square area.
@Mark Setchell Thanks! This code works perfectly for all docs with a white background, but when I use something with a color in the background it finds a completely different area. Also, to keep thin lines in the docs I used erosion after thresholding. I tried changing the thresholding and erosion parameters, but it is still not working properly.
Edited post, added color pictures.
Here's a possible approach:
#!/usr/bin/env python3
import cv2
import numpy as np

def largestSquare(im):
    # Make image square of 100x100 to simplify and speed up
    s = 100
    work = cv2.resize(im, (s,s), interpolation=cv2.INTER_NEAREST)
    # Make output accumulator - uint16 is ok because...
    # ... max value is 100x100, i.e. 10,000 which is less than 65,535
    # ... and you can make a PNG of it too
    p = np.zeros((s,s), np.uint16)
    # Find largest square
    for i in range(1, s):
        for j in range(1, s):
            if (work[i][j] > 0 ):
                p[i][j] = min(p[i][j-1], p[i-1][j], p[i-1][j-1]) + 1
            else:
                p[i][j] = 0
    # Save result - just for illustration purposes
    cv2.imwrite("result.png",p)
    # Work out what the actual answer is
    ind = np.unravel_index(np.argmax(p, axis=None), p.shape)
    print(f'Location: {ind}')
    print(f'Length of side: {p[ind]}')

# Load image and threshold
im = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)
_, thr = cv2.threshold(im,127,255,cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Get largest white square
largestSquare(thr)
Output
Location: (21, 77)
Length of side: 18
Notes:
I edited out your red annotation so it didn't interfere with my algorithm.
I did Otsu thresholding to get pure black and white - that may or may not be appropriate to your use case. It will depend on your scans and paper background etc.
I scaled the image down to 100x100 so it doesn't take all day to run. You will need to scale the results back up to the size of your original image but I assume you can do that easily enough.
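As a rough sketch of that last step, assuming largestSquare() is changed to return what it prints (return ind, int(p[ind])), the mapping back to the original resolution could look like this:
# Assumes largestSquare() now ends with: return ind, int(p[ind])
(row, col), side = largestSquare(thr)

# The working image was 100x100, so scale back to the original page size
scale_y = im.shape[0] / 100
scale_x = im.shape[1] / 100

# 'ind' is the bottom-right corner of the square in the working image,
# so the square extends up and to the left from that point
y1, x1 = int(row * scale_y), int(col * scale_x)
y0, x0 = int((row - side) * scale_y), int((col - side) * scale_x)
print(f'Largest empty square in the original image: ({x0},{y0}) to ({x1},{y1})')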
Keywords: Image processing, image, Python, OpenCV, largest white square, largest empty space.

Detect if an OCR text image is upside down

I have some hundreds of images (scanned documents), most of them are skewed. I wanted to de-skew them using Python.
Here is the code I used:
import numpy as np
import cv2
from skimage.transform import radon
filename = 'path_to_filename'
# Load file, converting to grayscale
img = cv2.imread(filename)
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = I.shape
# If the resolution is high, resize the image to reduce processing time.
if (w > 640):
    I = cv2.resize(I, (640, int((h / w) * 640)))
I = I - np.mean(I) # Demean; make the brightness extend above and below zero
# Do the radon transform
sinogram = radon(I)
# Find the RMS value of each row and find "busiest" rotation,
# where the transform is lined up perfectly with the alternating dark
# text and white lines
r = np.array([np.sqrt(np.mean(np.abs(line) ** 2)) for line in sinogram.transpose()])
rotation = np.argmax(r)
print('Rotation: {:.2f} degrees'.format(90 - rotation))
# Rotate and save with the original resolution
M = cv2.getRotationMatrix2D((w/2,h/2),90 - rotation,1)
dst = cv2.warpAffine(img,M,(w,h))
cv2.imwrite('rotated.jpg', dst)
This code works well with most of the documents, except with some angles: (180 and 0) and (90 and 270) are often detected as the same angle (i.e. it does not distinguish between 180 and 0, or between 90 and 270). So I get a lot of upside-down documents.
Here is an example:
The resulting image that I get is the same as the input image.
Is there any suggestion to detect if an image is upside down using OpenCV and Python?
PS: I tried to check the orientation using EXIF data, but it didn't lead to any solution.
EDIT:
It is possible to detect the orientation using Tesseract (pytesseract for Python), but it is only possible when the image contains a lot of characters.
For anyone who may need this:
import cv2
import pytesseract
print(pytesseract.image_to_osd(cv2.imread(file_name)))
If the document contains enough characters, it is possible for Tesseract to detect the orientation. However, when the image has few lines, the orientation angle suggested by Tesseract is usually wrong, so this cannot be a 100% reliable solution.
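For completeness, here is a rough sketch of acting on that OSD output. The exact fields and the direction of the suggested rotation can vary with your Tesseract version, so treat the parsing and the rotation direction as assumptions to verify on a sample image:
import re
import cv2
import pytesseract

img = cv2.imread(file_name)
osd = pytesseract.image_to_osd(img)

# image_to_osd returns a text block containing a line such as "Rotate: 180"
rotate = int(re.search(r'Rotate: (\d+)', osd).group(1))

# Apply the suggested correction (verify the direction on your own documents)
if rotate == 90:
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
elif rotate == 180:
    img = cv2.rotate(img, cv2.ROTATE_180)
elif rotate == 270:
    img = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)
cv2.imwrite('oriented.jpg', img)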
Python3/OpenCV4 script to align scanned documents.
Rotate the document and sum the rows. When the document has 0 and 180 degrees of rotation, there will be a lot of black pixels in the image:
Use a score-keeping method. Score each image for its likeness to a zebra pattern. The image with the best score has the correct rotation. The image you linked to was off by 0.5 degrees. I omitted some functions for readability; the full code can be found here (a hedged sketch of the omitted helpers follows the loop below).
# Rotate the image around in a circle
angle = 0
while angle <= 360:
    # Rotate the source image
    img = rotate(src, angle)
    # Crop the center 1/3rd of the image (roi is filled with text)
    h,w = img.shape
    buffer = min(h, w) - int(min(h,w)/1.15)
    roi = img[int(h/2-buffer):int(h/2+buffer), int(w/2-buffer):int(w/2+buffer)]
    # Create background to draw transform on
    bg = np.zeros((buffer*2, buffer*2), np.uint8)
    # Compute the sums of the rows
    row_sums = sum_rows(roi)
    # High score --> Zebra stripes
    score = np.count_nonzero(row_sums)
    scores.append(score)
    # Image has best rotation
    if score <= min(scores):
        # Save the rotated image
        print('found optimal rotation')
        best_rotation = img.copy()
    k = display_data(roi, row_sums, buffer)
    if k == 27: break
    # Increment angle and try again
    angle += .75
cv2.destroyAllWindows()
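The rotate and sum_rows helpers above are among the omitted functions. A minimal sketch of what they might look like (these are assumptions for readability, not the author's originals; see the linked full code):
import cv2
import numpy as np

def rotate(image, angle):
    # Rotate the image about its centre by 'angle' degrees, keeping its size
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1)
    return cv2.warpAffine(image, M, (w, h))

def sum_rows(roi):
    # Sum each row of the binary region of interest: rows crossing text are
    # non-zero, blank rows stay at zero, giving the "zebra stripe" signal
    return np.sum(roi, axis=1)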
How to tell if the document is upside down? Fill in the area from the top of the document to the first non-black pixel in the image. Measure the area in yellow. The image that has the smallest area will be the one that is right-side-up:
# Find the area from the top of page to top of image
_, bg = area_to_top_of_text(best_rotation.copy())
right_side_up = sum(sum(bg))
# Flip image and try again
best_rotation_flipped = rotate(best_rotation, 180)
_, bg = area_to_top_of_text(best_rotation_flipped.copy())
upside_down = sum(sum(bg))
# Check which area is larger
if right_side_up < upside_down: aligned_image = best_rotation
else: aligned_image = best_rotation_flipped
# Save aligned image
cv2.imwrite('/home/stephen/Desktop/best_rotation.png', 255-aligned_image)
cv2.destroyAllWindows()
Assuming you did run the angle-correction already on the image, you can try the following to find out if it is flipped:
1. Project the corrected image onto the y-axis, so that you get a 'peak' for each line. Important: there are actually almost always two sub-peaks!
2. Smooth this projection by convolving with a Gaussian in order to get rid of fine structure, noise, etc.
3. For each peak, check whether the stronger sub-peak is at the top or at the bottom.
4. Calculate the fraction of peaks that have sub-peaks on the bottom side. This is your scalar value that gives you the confidence that the image is oriented correctly.
The peak finding in step 3 is done by finding sections with above-average values. The sub-peaks are then found via argmax.
Here's a figure to illustrate the approach on a few lines of your example image:
Blue: Original projection
Orange: smoothed projection
Horizontal line: average of the smoothed projection for the whole image.
Here's some code that does this:
import cv2
import numpy as np
# load image, convert to grayscale, threshold it at 127 and invert.
page = cv2.imread('Page.jpg')
page = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
page = cv2.threshold(page, 127, 255, cv2.THRESH_BINARY_INV)[1]
# project the page to the side and smooth it with a gaussian
projection = np.sum(page, 1)
gaussian_filter = np.exp(-(np.arange(-3, 3, 0.1)**2))
gaussian_filter /= np.sum(gaussian_filter)
smooth = np.convolve(projection, gaussian_filter)
# find the pixel values where we expect lines to start and end
mask = smooth > np.average(smooth)
edges = np.convolve(mask, [1, -1])
line_starts = np.where(edges == 1)[0]
line_endings = np.where(edges == -1)[0]
# count lines with peaks on the lower side
lower_peaks = 0
for start, end in zip(line_starts, line_endings):
    line = smooth[start:end]
    if np.argmax(line) < len(line)/2:
        lower_peaks += 1
print(lower_peaks / len(line_starts))
This prints 0.125 for the given image, so it is not oriented correctly and must be flipped.
Note that this approach might break badly if there are images or anything not organized in lines in the image (maybe math or pictures). Another problem would be too few lines, resulting in bad statistics.
Also different fonts might result in different distributions. You can try this on a few images and see if the approach works. I don't have enough data.
You can use the Alyn module. To install it:
pip install alyn
Then use it to deskew images (taken from the homepage):
from alyn import Deskew
d = Deskew(
    input_file='path_to_file',
    display_image='preview the image on screen',
    output_file='path_for_deskewed image',
    r_angle='offset_angle_in_degrees_to_control_orientation')
d.run()
Note that Alyn is only for deskewing text.

How can I find cycles in a skeleton image with Python libraries?

I have many skeletonized images like this:
How can I detect a cycle (a loop) in the skeleton?
Are there "special" functions that do this, or should I implement it as a graph?
If the graph route is the only option, can the Python graph library NetworkX help me?
You can exploit the topology of the skeleton. A cycle encloses a hole, while a skeleton without cycles has no holes, so we can use scipy.ndimage to fill any holes and compare. This isn't the fastest method, but it's extremely easy to code.
import scipy.ndimage
import imageio.v2 as imageio   # scipy.misc.imread was removed from newer SciPy; imageio reads the file instead

# Read the image
img = imageio.imread("Skel.png")

# Retain only the skeleton
img[img!=255] = 0
img = img.astype(bool)

# Fill the holes
img2 = scipy.ndimage.binary_fill_holes(img)

# Compare the two, an image without cycles will have no holes
print("Cycles in image: ", ~(img == img2).all())

# As a test break the cycles
img3 = img.copy()
img3[0:200, 0:200] = 0
img4 = scipy.ndimage.binary_fill_holes(img3)

# Compare the two, an image without cycles will have no holes
print("Cycles in image: ", ~(img3 == img4).all())
I've used your "B" picture as an example. The first two images are the original and the filled version which detects a cycle. In the second version, I've broken the cycle and nothing gets filled, thus the two images are the same.
First, let's build an image of the letter B with PIL:
from PIL import Image, ImageDraw, ImageFont
import matplotlib.pyplot as plt

image = Image.new("RGBA", (600,150), (255,255,255))
draw = ImageDraw.Draw(image)
fontsize = 150
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Regular.ttf", fontsize)
txt = 'B'
draw.text((30, 5), txt, (0,0,0), font=font)
img = image.resize((188,45), Image.LANCZOS)  # Image.ANTIALIAS in older Pillow versions
print(type(img))
plt.imshow(img)
You may find a better way to do that, particularly regarding the path to the fonts. It would be better to load an image instead of generating it. Anyway, we now have something to work on:
Now, the real part:
import mahotas as mh
import numpy as np
import matplotlib.pyplot as plt

img = np.array(img)
im = img[:,0:50,0]
im = im < 128
skel = mh.thin(im)
noholes = mh.morph.close_holes(skel)
plt.subplot(311)
plt.imshow(im)
plt.subplot(312)
plt.imshow(skel)
plt.subplot(313)
cskel = np.logical_not(skel)
choles = np.logical_not(noholes)
holes = np.logical_and(cskel,noholes)
lab, n = mh.label(holes)
print('B has %s holes' % str(n))
plt.imshow(lab)
And we have in the console (ipython):
B has 2 holes
Converting your skeleton image to a graph representation is not trivial, and I don't know of any tools to do that for you.
One way to do it on the bitmap would be to use a flood fill, like the paint bucket in Photoshop. If you start a flood fill of the image, the entire background will get filled if there are no cycles. If the fill doesn't reach the entire background then you've found a cycle. Robustly finding all the cycles could require filling multiple times.
This is likely to be very slow to execute, but probably much faster to code than a technique where you trace the skeleton into graph data structure.
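A rough sketch of that idea, using background labelling instead of repeated flood fills (this assumes SciPy and a boolean skeleton array; it is not from the answers above):
import numpy as np
from scipy import ndimage

def has_cycle(skel):
    # skel: boolean array, True where the skeleton is.
    # Label the connected components of the background; any background
    # component that does not touch the border is enclosed by the skeleton,
    # which means there is a cycle.
    labels, n = ndimage.label(~skel)
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    border_labels = set(border.tolist()) - {0}
    return n > len(border_labels)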

Crop non symmetric area of an image with Python/PIL

Is there a way to cut out non-rectangular areas of an image with Python PIL?
e.g. in this picture I want to exclude all black areas as well as towers, rooftops and poles.
http://img153.imageshack.us/img153/5330/skybig.jpg
I guess the ImagePath module can do that, but furthermore, how can I read the data of e.g. an SVG file and convert it into a path?
Any help will be appreciated.
(My sub-question is presumably the easier task: how do I cut at least a circle out of an image?)
If I understood correctly, you want to make some areas within the image transparent, and these areas are arbitrarily shaped. The easiest way (that I can think of) is to create a mask and put it into the alpha channel of the image. Below is code that shows how to do this.
If your question was "How to create a polygon mask" I will redirect you to:
SciPy Create 2D Polygon Mask
and look the accepted answer.
br,
Juha
import numpy
from PIL import Image

# read image as RGB and add alpha (transparency)
im = Image.open("lena.png").convert("RGBA")

# convert to numpy (for convenience)
imArray = numpy.asarray(im)

# create mask (zeros + circle with ones)
center = (200,200)
radius = 100
mask = numpy.zeros((imArray.shape[0],imArray.shape[1]))
for i in range(imArray.shape[0]):
    for j in range(imArray.shape[1]):
        if (i-center[0])**2 + (j-center[1])**2 < radius**2:
            mask[i,j] = 1

# assemble new image (uint8: 0-255)
newImArray = numpy.empty(imArray.shape,dtype='uint8')

# colors (first three channels, RGB)
newImArray[:,:,:3] = imArray[:,:,:3]

# transparency (4th channel)
newImArray[:,:,3] = mask*255

# back to Image from numpy
newIm = Image.fromarray(newImArray, "RGBA")
newIm.save("lena3.png")
Edit
Actually, I could not resist... the polygon mask solution was so elegant (replace the above circle with this):
# create mask
from PIL import ImageDraw
polygon = [(100,100), (200,100), (150,150)]
# note: PIL's size argument is (width, height)
maskIm = Image.new('L', (imArray.shape[1], imArray.shape[0]), 0)
ImageDraw.Draw(maskIm).polygon(polygon, outline=1, fill=1)
mask = numpy.array(maskIm)
Edit2
Now that I think of it: if you have a black-and-white SVG, you could load your SVG directly as a mask (assuming white is your mask). I have no sample SVG images, so I cannot test this, and I am not sure if PIL can open SVG images.
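PIL itself cannot rasterise SVG, so one possible workaround (an assumption, not something I have tested, and it needs the cairosvg package) is to render the SVG to a PNG first and then threshold it into the mask used above:
import numpy
import cairosvg
from PIL import Image

# Render the SVG at the size of the target image, then load it with PIL
cairosvg.svg2png(url="mask.svg", write_to="mask.png",
                 output_width=imArray.shape[1], output_height=imArray.shape[0])
maskIm = Image.open("mask.png").convert("L")

# White (or near-white) pixels become the mask
mask = numpy.array(maskIm) > 128

# Then plug it into the alpha channel as before
newImArray[:,:,3] = mask * 255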
