How to make a photocopy effect like in Photoshop with Python

I tried ImageFilter.FIND_EDGES from PIL and the charcoal effect from ImageMagick, but neither gives me what I want.
I want to make this effect:
Please help me find a way to do this.

In ImageMagick, something like this seems to be close: convert to grayscale, then apply a Laplacian of Gaussian (or perhaps a Difference of Gaussians), and then invert (negate) the colors so light becomes dark and dark becomes light. You can control the smoothness of the edges by changing the x3 value: larger values give broader edges and smaller values give finer ones.
convert flower_rose.jpg -colorspace gray -define convolve:scale=5! -morphology Convolve LoG:0x3 -negate result.png
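If you want to stay in Python, a rough Pillow sketch of the same idea (grayscale, a Difference of Gaussians as the edge detector, then invert) could look like the following; the radii and the scale factor are assumptions you would need to tune.

from PIL import Image, ImageChops, ImageFilter, ImageOps

# Rough Pillow sketch of the same idea: grayscale -> difference of
# Gaussians (an approximation of LoG) -> invert. Radii and scale are
# guesses to tune for your image.
img = Image.open("flower_rose.jpg").convert("L")

narrow = img.filter(ImageFilter.GaussianBlur(radius=1))
wide = img.filter(ImageFilter.GaussianBlur(radius=3))

# subtract() divides the difference by scale, so scale < 1 boosts contrast.
edges = ImageChops.subtract(narrow, wide, scale=0.2)

# Invert so edges become dark strokes on a light background, like a photocopy.
result = ImageOps.invert(edges)
result.save("result.png")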

Related

How to remove white background of image and make it transparent?

I am trying to remove the white background of a few images programmatically and make it transparent. The format of the images is simple: a white background with a single object positioned roughly in the middle.
I want to replicate the functionality of the https://remove.bg website which makes the background of the image transparent.
I have tried using sharp and ImageMagick to make the background transparent, but they make some of the pixels inside the main object transparent too, which I definitely don't want.
I was using the commands below in ImageMagick to convert the white background to transparent:
convert brownie.jpg -transparent white brownie.png
convert cakebig.jpg -fuzz 1% -transparent white cakebig.png
They didn't seem to work perfectly. After running the commands, the background did become transparent, but a few of the pixels inside the main object were affected too.
Input Image
Output from ImageMagick (see how some of the pixels inside the main object became transparent)
Expected output from https://remove.bg (see that the main object is unaffected)
It seems like an image processing problem, and OpenCV looks like the best tool for it. I don't know much about the library, so it's a little tricky. I was checking out some code and came across the GrabCut and graph cut algorithms, which can be used for image segmentation, but I am not totally sure about them. Please help with a proper solution to this in OpenCV.
I will add a little to Mark Setchell's excellent ImageMagick answer about using a fuzzy flood fill: some antialiasing, by blurring the alpha channel slightly and then zeroing blurred values below mid-gray. This smooths the jagged edges left by the fuzzy flood fill. Note that too large a fuzz value will cause leakage into the ice cream, because its color is similar to that of the background, while too small a fuzz value will not adequately remove the background. So the fuzz tolerance for this image is tight.
Input:
For ImageMagick 6:
convert img.png -fuzz 2% -fill none -draw "matte 0,0 floodfill" -channel alpha -blur 0x2 -level 50x100% +channel result.png
For Imagemagick 7:
magick img.png -fuzz 2% -fill none -draw "alpha 0,0 floodfill" -channel alpha -blur 0x2 -level 50x100% +channel result.png
You will need to download the result to see that the background is transparent and the outline is smooth.
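If you would rather do this in Python than at the ImageMagick command line, a rough OpenCV sketch of the same approach (fuzzy flood fill from the corner, then blur and re-level the alpha mask) might look like this; the tolerance and blur sigma are assumptions.

import cv2
import numpy as np

# OpenCV sketch of the same approach (not the original ImageMagick
# pipeline): fuzzy flood fill from the top-left corner to build an alpha
# mask, then blur the mask and re-level it to smooth the edge.
img = cv2.imread("img.png")
h, w = img.shape[:2]

# Flood-fill mask from (0, 0); a tolerance of ~5/255 per channel plays
# the role of -fuzz 2%.
mask = np.zeros((h + 2, w + 2), np.uint8)
flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
cv2.floodFill(img, mask, (0, 0), (0, 0, 0), (5, 5, 5), (5, 5, 5), flags)

alpha = 255 - mask[1:-1, 1:-1]      # opaque object, transparent background
alpha = cv2.GaussianBlur(alpha, (0, 0), 2)

# Equivalent of -level 50x100%: values below mid-gray go to 0, the rest
# is stretched back to the full range.
alpha = np.clip((alpha.astype(np.int16) - 128) * 2, 0, 255).astype(np.uint8)

result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
result[:, :, 3] = alpha
cv2.imwrite("result.png", result)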

How to repair obscured blocks in a QR code image

I have many RGB images that contain a printed sheet of paper with a QR code in an outdoor setting. Because of the bright sun interfering with image capture, about 20% of images are unreadable:
I'm using magick in R to handle the image manipulation, and then the Python package pyzbar (which wraps zbar) to do the detection.
I can use image_threshold to find all the pixels that are within the 95% quantile and force them to pure black, which fixes about half my images:
But some of them remain unreadable, like this one. I can see with my human eyes that I need to fill in some of the anchor points in the upper left, so I mocked this up in MS Paint:
With that manual manipulation, this image is now easily read. Is there any way to do this kind of repair automatically? I don't mind translating code from Python or the ImageMagick CLI, so R-only answers aren't necessary.
My general approach:
library(magick)
library(reticulate)
pyzbar <- import("pyzbar.pyzbar")
magick_to_numpy <- function(img) {
  round(255 * as.numeric(magick::image_data(img, "rgb")))
}

image_read("testfile.jpg") %>%
  image_threshold("black", "95%") %>%
  magick_to_numpy() %>%
  pyzbar$decode()
Usual result:
list()
Desired result:
[[1]]
Decoded(data=b'W TRR C6 T2', type='QRCODE',
rect=Rect(left=176, top=221, width=373, height=333),
polygon=[Point(x=176, y=226), Point(x=202, y=554),
Point(x=549, y=544), Point(x=524, y=221)])
You may be able to improve some of your images, but the one you provided has lost too much black to pure white, so gaps will appear that are too large to close up. The best way I know of at the ImageMagick command line would be to process the image, converted to grayscale, using -lat (local area thresholding) and perhaps some morphology open.
Input:
convert img.jpg -colorspace gray -negate -lat 50x50-1% -negate -morphology open square:7 result.png
Normally one would use a positive % term. But here you have lost too much data, and I want to include as much as possible that is not pure white before it produces too much black, so I push it to -1%. The -negate is needed since -lat works only on white objects on a black background, so I have to negate before and after. I then try to fill some of the black regions using a morphology open.
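If you would rather stay in Python/OpenCV than shell out to ImageMagick, cv2.adaptiveThreshold is a reasonable stand-in for -lat; the window size and offset below are assumptions that will need tuning per image.

import cv2

# Rough OpenCV stand-in for the ImageMagick pipeline above (assumed
# parameters): local adaptive thresholding, then a morphological open
# with a 7x7 square as in "-morphology open square:7".
gray = cv2.imread("img.jpg", cv2.IMREAD_GRAYSCALE)

# 51x51 neighbourhood; the small offset C plays the role of -lat's bias
# and controls how aggressively faint pixels are pushed to black.
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 51, 2)

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
result = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)

cv2.imwrite("result.png", result)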

Removing highlighted areas when converting pdf to image?

I'm trying to convert PDFs to PNG, which usually works great, but I occasionally get this result.
There are two parts that are 'highlighted', which I'm not sure about, since ImageMagick doesn't do this consistently.
Here's the code I'm working with:
from wand.image import Image
from wand.color import Color

with Image(filename=pdf, resolution=200) as src:
    src.background_color = Color('white')
    src.alpha_channel = 'remove'
    images = src.sequence
    Image(images[1]).save(filename='test.png')
I thought maybe there was a problem with transparency, so the first two lines in the with block are related to this question.
How can I get this image to just show up normally, like this image which looks correct? Thanks!
The issue you have is that your input has an alpha channel, so just removing the alpha channel or flattening it on white leaves that area gray, since the gray is in the underlying image.
The best way to fix that is using ImageMagick -lat function.
See http://www.imagemagick.org/script/command-line-options.php#lat
As I do not have your original, I can only process your resulting PNG file, which shows this behavior.
Input with transparency
Processing
convert input.png -background white -flatten -negate -lat 25x25+10% -negate result.png
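If you want to keep everything inside the Python script, a minimal sketch is to run that same command via subprocess after saving the page, assuming the ImageMagick 6 convert binary is on the PATH.

import subprocess

# Minimal sketch: after the page has been saved as test.png (as in the
# question's code), run the ImageMagick command from above. Assumes the
# ImageMagick 6 "convert" binary is on the PATH.
subprocess.run(
    [
        "convert", "test.png",
        "-background", "white", "-flatten",  # drop the alpha onto white
        "-negate",
        "-lat", "25x25+10%",                 # local area threshold
        "-negate",
        "result.png",
    ],
    check=True,
)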

Get two images from an image which has two images pasted in a single document - Python/C++

I have a document - say the back and front of an ID. Something like this
The dotted lines are those of a poorly scanned ID, so the borders are not clear.
The objective is to retrieve image 1 and image 2 as two separate images. The scanned documents are black and white.
My questions:
1. Is this feasible?
2. Any ideas/code snippets on how to proceed would be appreciated.
Thanks in advance,
Mark Setchell asked that I show his example with my script, multicrop2, which does something similar to all his commands. See http://www.fmwconcepts.com/imagemagick/multicrop2/index.php
I noticed that he had a black border around his image, so I need to remove that first with an ImageMagick -shave and pipe the result to my script, multicrop2.
convert vZiPW.jpg -shave 3x3 miff:- | multicrop2 -u 1 -f 1 -m save -d 10000 vZiPW_shaved.png multicrop_results.jpg
In the script, I use -u 1 to do a deskew to unrotate the extracted images. I use -f 1 (a fuzz value or tolerance of 1%) to allow for JPG compression changes to the background white color during a floodfill to make a mask. I also save the mask that my script first extracts to locate the two images. Since the images have colors close to white there may be small holes or specks. So my script will fill those holes using a connected components process. Thus it ignores any regions whose areas are smaller than 10000 pixels as indicated by the -d 10000. More importantly, it uses connected components to locate the bounding boxes of the two large regions from the mask and then crops and unrotates the corresponding regions from the input image.
Here is the raw mask after a floodfill operation and before the connected components processing to remove the small specks:
Here are the two extracted images after the deskew unrotation.
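The speck-removal stage described above can be approximated in Python with OpenCV's connected components; this is only a sketch of that one step (the mask filename is a placeholder), not the whole multicrop2 script.

import cv2
import numpy as np

# Sketch of the speck-removal stage only: keep connected regions of the
# mask whose area is at least 10000 px, mirroring the -d 10000 option.
# "raw_mask.png" is a placeholder filename.
mask = cv2.imread("raw_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
clean = np.zeros_like(mask)
for i in range(1, n):                      # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 10000:
        clean[labels == i] = 255

cv2.imwrite("clean_mask.png", clean)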
As you haven't provided any images, I will provide some samples. I am using ImageMagick because you can just do it at the command line without needing to compile anything, but you can apply exactly the same techniques in OpenCV with Python.
So, here's a sample image.
Now, the scan may have noise and it may not have pure whites and blacks if not exposed correctly, so as a first step, you could normalise or auto-level the image, threshold it to pure black and white and apply a median filter to remove noise - especially if it is a JPEG.
convert scan.jpg -normalize -threshold 90% -median 9x9 -negate result.png
Maybe you want to close the holes now using some morphology:
convert scan.jpg -normalize -threshold 90% -median 9x9 -negate -morphology close disk:7 result.png
Now you could do some "Connected Component Analysis" to find the blobs:
convert scan.jpg -threshold 90% -median 9x9 -negate \
-morphology close disk:7 \
-define connected-components:verbose=1 \
-connected-components 8 -auto-level result.png
Sample Output
Objects (id: bounding-box centroid area mean-color):
0: 695x297+0+0 355.4,150.9 107426 srgb(0,0,0)
2: 276x188+352+54 487.8,148.7 50369 srgb(255,255,255)
1: 275x194+43+44 185.9,143.3 46695 srgb(255,255,255)
3: 78x47+56+56 96.4,72.9 1731 srgb(0,0,0)
4: 18x16+168+183 176.5,190.4 194 srgb(0,0,0)
That gives a "labelled" image, which we are not actually going to use, but identifies each blob it has found in a successively lighter shade.
Now look at the textual output above, and you can see there are 5 blobs - one per line. You can check the 4th field which is the area of the blob and the fifth field which is the colour - this will help distinguish between black and white blobs. Let's look at a couple of the blobs and draw them in on the original image:
convert scan.jpg -fill "rgba(255,0,0,0.5)" -draw "rectangle 352,54 628,242" result1.png
convert scan.jpg -fill "rgba(255,0,0,0.5)" -draw "rectangle 43,44 318,238" result2.png
Now we can chop the individual pages out:
convert scan.jpg -crop 276x188+352+54 doc1.png
convert scan.jpg -crop 275x194+43+44 doc2.png
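As noted at the start, the same chain translates fairly directly to Python/OpenCV; here is a hedged sketch (the thresholds mirror the commands above and would need tuning on real scans).

import cv2

# Python/OpenCV translation of the command-line chain above (assumed
# parameters): normalize, threshold + negate, median filter, close,
# then connected components to crop each document's bounding box.
img = cv2.imread("scan.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
_, bw = cv2.threshold(norm, 229, 255, cv2.THRESH_BINARY_INV)  # ~ -threshold 90% -negate
bw = cv2.medianBlur(bw, 9)                                    # ~ -median 9x9
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)            # ~ -morphology close disk:7

n, _, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
for i in range(1, n):                     # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 10000:                      # keep only the two big pages
        cv2.imwrite(f"doc{i}.png", img[y:y + h, x:x + w])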
You can try findContours.
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(image, contours, hierarchy,
             CV_RETR_EXTERNAL,
             CV_CHAIN_APPROX_SIMPLE,
             Point(0, 0));
Then use drawContours to check whether the desired images are selected.
You can find the bounding box for each contour, so you need not worry about the bad scan.
If this doesn't work straight away, try preprocessing your image; you could try morphological operations, thresholding, etc.
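Since the question is tagged Python as well, roughly the same contour-based idea looks like this in Python/OpenCV (the filename and area cutoff are assumptions, and the return signature is OpenCV 4's).

import cv2

# Python sketch of the same contour-based approach (assumed filename and
# area cutoff; OpenCV 4 findContours return signature).
img = cv2.imread("scan.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 10000:                     # skip small noise blobs
        cv2.imwrite(f"crop{i}.png", img[y:y + h, x:x + w])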
I hope this helps!

How to remove a black grid programmatically?

My ideas are:
1.0. [unsolved, hard image detection] Break the image into squares and remove the borders; surely there are other techniques!
1.1. [unsolved] ImageMagick: crop (instructions here) and remove certain borders. This may take a lot of time to locate the grid, since it is an image-detection problem (comparing white/black here), or there may be some magic-wand-style filter.
1.2. [unsolved] Python: you probably need this: from PIL import Image.
Obviously, Gimp's eraser is the wrong way to solve this problem, since it's slow and error-prone. How would you remove the grid programmatically?
P.S. There is a casual discussion of this problem on Graphics.SE here, which contains more physical and mechanical hacks.
If all images consist of black lines over a gray grid, you could adjust the white threshold to remove the grid (e.g. with ImageMagick):
convert -white-threshold 80% with-grid.png without-grid.png
You will probably have to experiment with the exact threshold value. 80% worked for me with your sample image. This will make the lines pixelated. But perhaps resampling can reduce that to an acceptable amount, e.g. with:
convert -resize 200% -white-threshold 80% -resize 50% with-grid.png without-grid.png
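The same upscale/threshold/downscale trick can be sketched with Pillow if you prefer to stay in Python; the scale factor and the ~80% cutoff below are taken from the commands above and will need tuning.

from PIL import Image

# Pillow sketch of the resize -> white-threshold -> resize trick above.
# The downscale smooths the pixelated edges left by the hard threshold.
im = Image.open("with-grid.png").convert("L")
big = im.resize((im.width * 2, im.height * 2), Image.LANCZOS)

# Push everything lighter than ~80% gray (204/255) to pure white.
big = big.point(lambda p: 255 if p > 204 else p)

result = big.resize(im.size, Image.LANCZOS)
result.save("without-grid.png")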
In your image the grid is somewhat lighter than the drawing, so we can set a threshold, and filter the image such that all 'light' pixels are set to white. Using PIL it could look like this:
from PIL import Image

def filter(x):
    # 200 is our cutoff; try adjusting it to see the difference.
    if x > 200:
        return 255
    return x

im = Image.open('bird.png')
im = im.point(filter)
im.show()
Processing your uploaded image with this code gives:
Which in this case is a pretty good result. Provided your drawing is darker than the grid, you should be able to use this method without too many problems.
Feedback on the answers by emulbreh and fraxel
The Python version works along the same lines as the ImageMagick one, so let's consider ImageMagick. It does not work on a colored version like the one below, because of the different color channels. Let's investigate this a bit further.
$ convert -white-threshold 0% bird.png without.png
This picture shows the amount of noise in the original scanned picture.
Puzzle: removing the right-hand corner as an example
I inverted the colors with $ convert -negate whiteVersion.png blackVersion.png to make it easier to visualize. Now, with the black photo below, I wanted to remove the blue right corner, i.e. make it black; that means I want to set the B and G channels to 0 wherever they are at 100%.
$ convert -channel BG -threshold 100% bbird.png without.png
Now the only thing left is, of course, the red channel: I removed G and B, but the white areas still have red left. How can I remove just the right-hand corner? I need to specify an area and then apply the earlier operations.
How can I get this working with an arbitrary photo where you want to remove a certain color but leave other colors intact?
I don't know an easy way. The first problem is a color-detection problem: you specify some condition on the colors (R,G,B) as an inequality, and if the condition is true, you remove the pixel in just that part. Then you do this for each of the basic colors, i.e. for (R,G,B)=(100%,0,0), (0,100%,0) and (0,0,100%). Does a ready-made implementation for this exist? Probably, but it is much nicer to do it yourself -- puzzle set!
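As a starting point for the puzzle, here is a hedged NumPy sketch of that kind of per-channel condition; the thresholds and the region coordinates are made up for illustration.

import numpy as np
from PIL import Image

# Illustrative sketch only: blacken every pixel in a chosen region whose
# red channel is high while green and blue are low. Thresholds and the
# region coordinates are made-up values.
img = np.array(Image.open("bbird.png").convert("RGB"))

region = img[0:200, -200:]                  # e.g. the right-hand corner
r, g, b = region[..., 0], region[..., 1], region[..., 2]
mostly_red = (r > 128) & (g < 64) & (b < 64)
region[mostly_red] = (0, 0, 0)

Image.fromarray(img).save("without.png")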
Prerequisite knowledge
Tutorials here and here about ImageMagick.
In order to understand this topic, we need some basic color theory: in additive (RGB) color, white is a mixture of all the colors and black is the absence of color.
