Removing black edge artifacts from a transparent image - python

I have a set of transparent PNG images with black artifacts around the edges, like this:
I'm looking for a way to clean up the borders automatically. I tried simply masking out pixels under a certain RGB value, but the images themselves can also contain black pixels, and those then get filtered out. I'm using Python3 and opencv3/PIL.
My question is: How can I get rid of the black edges, while preserving black pixels that are not part of an edge?
EDIT: As usr2564301 pointed out below, very few (if any) of the edge pixels are pure black. I still need to remove them, so I'd want to use some threshold value and remove pixels that are neighbors to a transparent pixel and are either:
Darker than the threshold, or
Darker by at least the threshold than any neighboring non-transparent pixel.
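For reference, a rough NumPy/OpenCV sketch of that rule (the threshold value, the 3x3 neighborhood, and reading "any neighboring" as "the brightest non-transparent neighbor" are my assumptions):

import cv2
import numpy as np

def clean_edges(path, out_path, threshold=60):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)      # BGRA, keeps the alpha channel
    bgr, alpha = img[:, :, :3], img[:, :, 3]
    transparent = alpha == 0

    # Non-transparent pixels that touch a transparent pixel (3x3 neighborhood)
    kernel = np.ones((3, 3), np.uint8)
    near_edge = cv2.dilate(transparent.astype(np.uint8), kernel) > 0
    near_edge &= ~transparent

    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    # Brightest non-transparent pixel in each 3x3 neighborhood (grayscale dilation)
    brightest = cv2.dilate(np.where(transparent, 0, gray).astype(np.uint8), kernel).astype(np.int16)

    # Criterion 1: darker than the threshold
    # Criterion 2: darker by at least the threshold than the brightest non-transparent neighbor
    remove = near_edge & ((gray < threshold) | (brightest - gray >= threshold))
    alpha[remove] = 0

    cv2.imwrite(out_path, np.dstack([bgr, alpha]))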

Try taking the alpha channel and eroding it by a couple of pixels. I am illustrating the technique with ImageMagick because that's easier, but you can do the same thing with OpenCV:
convert pinkboythingwithcathead.png \( +clone -alpha extract -morphology erode disk:2 \) -compose copy-alpha -composite result.png
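A rough OpenCV/Python equivalent of the same idea (the file names and the 5x5 elliptical kernel are assumptions):

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)   # load BGRA, keeping the alpha channel
bgr, alpha = img[:, :, :3], img[:, :, 3]

# Erode the alpha channel by a couple of pixels so the dark fringe becomes transparent
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
alpha = cv2.erode(alpha, kernel)

cv2.imwrite('result.png', np.dstack([bgr, alpha]))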

You can antialias the edges of the alpha channel in ImageMagick as follows:
Input:
convert image.png -channel a -blur 0x2 -level 50x100% +channel result.png
Adjust the 2: use a smaller value if the black border is thinner and a larger value if it is broader.
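A rough OpenCV/Python equivalent (file names assumed): the Gaussian blur with sigma 2 plays the role of -blur 0x2, and the clip/stretch plays the role of -level 50x100%.

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_UNCHANGED)
bgr, alpha = img[:, :, :3], img[:, :, 3]

# Roughly -blur 0x2: Gaussian-blur the alpha channel with sigma 2
alpha = cv2.GaussianBlur(alpha, (0, 0), 2)

# Roughly -level 50x100%: everything below mid-gray goes to 0,
# the 50-100% range is stretched back to full opacity
alpha = np.clip((alpha.astype(np.float32) - 127.5) * 2.0, 0, 255).astype(np.uint8)

cv2.imwrite('result.png', np.dstack([bgr, alpha]))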

Related

How to convert white background pixels into black in an image using OpenCV, Python?

I have some cropped images and I have applied Otsu thresholding to them. Now I want to convert the white pixels that are in the background into black pixels.
How can I solve this problem?

How to remove white background of image and make it transparent?

I am trying to remove the white background of few images programmatically and make it transparent. The format of the image is simple, it has a white background and a single object mainly positioned in the middle.
I want to replicate the functionality of the https://remove.bg website which makes the background of the image transparent.
I have tried using sharp and ImageMagick to make the background transparent, but they make some of the pixels inside the main object transparent too, which I definitely don't want.
I was using the commands below in ImageMagick to convert the white background to transparent:
convert brownie.jpg -transparent white brownie.png
convert cakebig.jpg -fuzz 1% -transparent white cakebig.png
This didn't seem to work perfectly.
After running the commands, the images did become transparent, but a few of the pixels inside the main object were affected too.
Input Image
Output from ImageMagick (see how some of the pixels inside the main object became transparent)
Expected Output from https://remove.bg (See no effect on the main object)
It seems like an image processing problem, and OpenCV seems like the best tool for it. I don't know much about the library, so it's a little tricky. I was checking out some code and came across the GrabCut and graph cut algorithms, which can be used for image segmentation, but I am not totally sure about them. Please help me find a proper solution to this in OpenCV.
I will add a little more to Mark Setchell's excellent ImageMagick answer about using a fuzzy flood fill, by adding some antialiasing: blur the alpha channel a little and then zero the blurred values below mid-gray. This smooths the jagged edges left by the fuzzy flood fill. Note that too large a fuzz value will cause leakage into your ice cream, because the color of the ice cream is similar to that of the background, while too small a fuzz value will not adequately remove the background. So the fuzz tolerance for this image is tight.
Input:
For ImageMagick 6:
convert img.png -fuzz 2% -fill none -draw "matte 0,0 floodfill" -channel alpha -blur 0x2 -level 50x100% +channel result.png
For Imagemagick 7:
magick img.png -fuzz 2% -fill none -draw "alpha 0,0 floodfill" -channel alpha -blur 0x2 -level 50x100% +channel result.png
You will need to download the result to see that the background is transparent and the outline is smooth.
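A rough OpenCV/Python translation of the same idea (the per-channel tolerance of 5 only approximates -fuzz 2%, and the file names are placeholders):

import cv2
import numpy as np

img = cv2.imread('img.png')                      # 3-channel input
h, w = img.shape[:2]

# Fuzzy flood fill of the background from the top-left corner;
# loDiff/upDiff of 5 per channel approximates -fuzz 2%
mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(img, mask, (0, 0), (0, 0, 0), loDiff=(5, 5, 5), upDiff=(5, 5, 5),
              flags=4 | (255 << 8) | cv2.FLOODFILL_MASK_ONLY)

# Opaque everywhere the flood fill did not reach
alpha = 255 - mask[1:-1, 1:-1]

# Antialias the edge: blur the alpha, then zero everything below mid-gray
alpha = cv2.GaussianBlur(alpha, (0, 0), 2)
alpha = np.clip((alpha.astype(np.float32) - 127.5) * 2.0, 0, 255).astype(np.uint8)

cv2.imwrite('result.png', np.dstack([img, alpha]))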

Count shaded and unshaded rectangles in a grid image with noise

I am trying to count all the shaded and un-shaded rectangles in this grid using python. I tried contour detection in OpenCV and was not able to achieve this. I also tried the hough line transform and detected the lines in the image, but I am not able to figure out how to proceed further. Is there a better way of doing it? Can someone suggest a way to proceed?
As your image looks very clean, I would
threshold the image to select white regions: gray regions and black lines will be black
use findContours() to count white blobs
do another threshold to select black lines. Only black lines will be black, everything else white
XOR the two images: this way you should have the gray regions
use findContours() to count the gray blobs
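A rough OpenCV sketch of those steps (the threshold values 200 and 50 are guesses for a clean image like this):

import cv2

gray = cv2.imread('grid.png', cv2.IMREAD_GRAYSCALE)

# Threshold 1: only the white (unshaded) cells survive as white
_, white = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Threshold 2: everything except the black grid lines survives as white
_, not_black = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)

# XOR leaves just the gray (shaded) cells
shaded = cv2.bitwise_xor(white, not_black)

# [-2] keeps this compatible with both the OpenCV 3 and 4 return signatures
white_blobs = cv2.findContours(white, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
shaded_blobs = cv2.findContours(shaded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

print('unshaded:', len(white_blobs))
print('shaded:', len(shaded_blobs))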
EDIT:
The ellipse cuts some rectangles and this will affect your count. If you want to remove it, the thresholds are not enough (both the ellipse and the rectangle lines are black). A possible way to do it:
With Hough Lines you can detect the lines,
draw the vertical and horizontal lines into a new image (ignore diagonal lines, as they may be part of the ellipse)
with boolean operations (AND, OR, XOR) between the thresholded images and the lines image, you should be able to keep only the grid lines and remove the ellipse
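A hedged sketch of that Hough-lines filtering (all parameters are guesses and would need tuning):

import cv2
import numpy as np

gray = cv2.imread('grid.png', cv2.IMREAD_GRAYSCALE)
_, dark = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)   # lines + ellipse as white

segments = cv2.HoughLinesP(dark, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)

grid = np.zeros_like(dark)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        # keep only near-horizontal or near-vertical segments;
        # diagonal segments are likely part of the ellipse
        if angle < 5 or abs(angle - 90) < 5 or angle > 175:
            cv2.line(grid, (int(x1), int(y1)), (int(x2), int(y2)), 255, 2)

# 'grid' now holds only the rectangle lines; combine it with the thresholded
# images using cv2.bitwise_and / bitwise_or / bitwise_xor to drop the ellipse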

Get two images from an image which has two images pasted in a single document - Python/C++

I have a document - say the back and front of an ID. Something like this
The dotted lines are the borders of a poorly scanned ID, so the borders are not clear.
The objective is to retrieve image 1 and image 2 as two separate images. The scanned documents are black and white.
My questions:
1. Is this feasible?
2. Any ideas/code snippets on how to proceed would be appreciated.
Thanks in advance,
Mark Setchell asked that I show his example with my script, multicrop2, which does something similar to all his commands. See http://www.fmwconcepts.com/imagemagick/multicrop2/index.php
I noticed that he had a black border around his image, so I first need to remove that with an ImageMagick shave and pipe the result to my script, multicrop2.
convert vZiPW.jpg -shave 3x3 miff:- | multicrop2 -u 1 -f 1 -m save -d 10000 vZiPW_shaved.png multicrop_results.jpg
In the script, I use -u 1 to do a deskew to unrotate the extracted images. I use -f 1 (a fuzz value or tolerance of 1%) to allow for JPG compression changes to the background white color during a floodfill to make a mask. I also save the mask that my script first extracts to locate the two images. Since the images have colors close to white there may be small holes or specks. So my script will fill those holes using a connected components process. Thus it ignores any regions whose areas are smaller than 10000 pixels as indicated by the -d 10000. More importantly, it uses connected components to locate the bounding boxes of the two large regions from the mask and then crops and unrotates the corresponding regions from the input image.
Here is the raw mask after a floodfill operation and before the connected components processing to remove the small specks:
Here are the two extracted images after the deskew unrotation.
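A rough OpenCV sketch of that speck-removal step (file names and the binarization threshold are placeholders); running the same pass on the inverted mask would fill interior holes:

import cv2
import numpy as np

# Drop connected regions smaller than 10000 pixels from a binary mask,
# the same idea as the script's -d 10000 option
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 128, 255, cv2.THRESH_BINARY)

n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
cleaned = np.zeros_like(mask)
for i in range(1, n):                              # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 10000:
        cleaned[labels == i] = 255

cv2.imwrite('mask_cleaned.png', cleaned)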
As you haven't provided any images, I will provide some samples. I am using ImageMagick because you can just do it at the command line without needing to compile anything, but you can apply exactly the same techniques in OpenCV with Python.
So, here's a sample image.
Now, the scan may have noise and it may not have pure whites and blacks if not exposed correctly, so as a first step, you could normalise or auto-level the image, threshold it to pure black and white and apply a median filter to remove noise - especially if it is a JPEG.
convert scan.jpg -normalize -threshold 90% -median 9x9 -negate result.png
Maybe you want to close the holes now using some morphology:
convert scan.jpg -normalize -threshold 90% -median 9x9 -negate -morphology close disk:7 result.png
Now you could do some "Connected Component Analysis" to find the blobs:
convert scan.jpg -threshold 90% -median 9x9 -negate \
-morphology close disk:7 \
-define connected-components:verbose=1 \
-connected-components 8 -auto-level result.png
Sample Output
Objects (id: bounding-box centroid area mean-color):
0: 695x297+0+0 355.4,150.9 107426 srgb(0,0,0)
2: 276x188+352+54 487.8,148.7 50369 srgb(255,255,255)
1: 275x194+43+44 185.9,143.3 46695 srgb(255,255,255)
3: 78x47+56+56 96.4,72.9 1731 srgb(0,0,0)
4: 18x16+168+183 176.5,190.4 194 srgb(0,0,0)
That gives a "labelled" image, which we are not actually going to use, but identifies each blob it has found in a successively lighter shade.
Now look at the textual output above, and you can see there are 5 blobs - one per line. You can check the 4th field which is the area of the blob and the fifth field which is the colour - this will help distinguish between black and white blobs. Let's look at a couple of the blobs and draw them in on the original image:
convert scan.jpg -fill "rgba(255,0,0,0.5)" -draw "rectangle 352,54 628,242" result1.png
convert scan.jpg -fill "rgba(255,0,0,0.5)" -draw "rectangle 43,44 318,238" result2.png
Now we can chop the individual pages out:
convert scan.jpg -crop 276x188+352+54 doc1.png
convert scan.jpg -crop 275x194+43+44 doc2.png
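A rough OpenCV/Python version of the same pipeline (the threshold, median size, closing kernel and area cutoff are guesses that would need tuning for a real scan):

import cv2
import numpy as np

scan = cv2.imread('scan.jpg')
gray = cv2.cvtColor(scan, cv2.COLOR_BGR2GRAY)

# Mirror -normalize -threshold 90% -median 9x9 -negate -morphology close
gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
_, mask = cv2.threshold(gray, int(255 * 0.9), 255, cv2.THRESH_BINARY_INV)
mask = cv2.medianBlur(mask, 9)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Connected components: crop each blob that is big enough to be a page
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
doc = 1
for i in range(1, n):                              # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 10000:                               # ignore small specks
        cv2.imwrite(f'doc{doc}.png', scan[y:y + h, x:x + w])
        doc += 1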
You can try findContours.
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(image, contours, hierarchy,
                 cv::RETR_EXTERNAL,
                 cv::CHAIN_APPROX_SIMPLE,
                 cv::Point(0, 0));
Then use drawContours to see if the desired images are selected.
You can find the bounding box for a contour and need not worry about the bad scan.
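Since the question also asks about Python, a rough equivalent sketch (assuming you already have a binary image, here 'mask.png', with the two documents in white):

import cv2

scan = cv2.imread('scan.jpg')
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# [-2] keeps this compatible with both the OpenCV 3 and 4 return signatures
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)               # bounding box absorbs the ragged scan border
    if w * h > 10000:                              # skip small noise blobs
        cv2.imwrite(f'image{i}.png', scan[y:y + h, x:x + w])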
If this doesn't work straight away, try preprocessing your image: you could try morphological operations, thresholding, etc.
I hope this helps!

Use a color to create an alpha channel for a PNG?

I have several images that claim to have a transparent background but are actually white. I'd like to use Python Image Library/PIL to set that white background color to actually be transparent.
Since PNG uses an alpha channel, I'd love to create the alpha channel by finding contiguous areas of white from the edges of the image (so I don't get "holes" of transparency when the image contains white data).
Any tips on how to create the alpha channel this way?
I'd guess you'd want to run across the image in a spiral from the outside in, setting a pixel to transparent if it is white and a pixel further towards the edge is already white or transparent. Stop once you've done a whole loop without changing any pixels.
Shouldn't be too difficult to write such a loop.
Do some kind of flood fill, seeded from the white edge pixels.
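A rough PIL sketch of that flood-fill approach (the sentinel colour, the corner seeds and the tolerance of 10 are arbitrary choices):

from PIL import Image, ImageDraw

img = Image.open('input.png').convert('RGBA')
w, h = img.size

# Flood-fill the white background from each corner with a sentinel colour;
# interior white areas are never reached, so they keep their colour
sentinel = (255, 0, 255, 255)
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    if img.getpixel(seed)[:3] == (255, 255, 255):
        ImageDraw.floodfill(img, seed, sentinel, thresh=10)

# Turn every sentinel pixel fully transparent
pixels = img.load()
for y in range(h):
    for x in range(w):
        if pixels[x, y] == sentinel:
            pixels[x, y] = (255, 255, 255, 0)

img.save('result.png')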
