Blur part of an Image and blend it with the Background - python

I need to blur faces to protect the privacy of people in street-view images, as Google does in Google Street View. The blur should not make the image aesthetically unpleasant. I read in the paper Large-scale Privacy Protection in Google Street View by Google (link) that they do the following to blur the detected faces:
We chose to apply a combination of noise and aggressive Gaussian blur that we alpha-blend smoothly with the background starting at the edge of the box.
Can someone explain how to perform this task? I understand Gaussian blur, but how do I blend it with the background?
Code would be helpful, but is not required.
My question is not how to blur part of an image; it is how to blend the blurred portion with the background so that the blur is not unpleasant. Please refer to the quote I provided from the paper.
I have large images, and a lot of them. An iterative process, as in the possible duplicate, would be too time-consuming.
EDIT
If someone ever wants to do something like this, I wrote a Python implementation. It isn't exactly what I was asking for, but it does the job.
Link: pyBlur

I'm reasonably sure the general idea is:
Create a shape for the area you want to blur (say a rectangle).
Extend your shape by X pixels outwards.
Apply a gradient on alpha from 0.0 .. 1.0 (or similar) over the extended area.
Apply blur to the extended area (ignoring alpha).
Now use an alpha-blend to apply the modified image to the original image.
Adding noise similar to that of the original image would make it even less obvious that it has been blurred (because the blur will of course also blur away the original noise).
I don't know the exact parameters for how much to grow the shape, what values to use for the alpha gradient, and so on, but that's what I understand from the quoted text.
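Here's a minimal sketch of those steps in Python with OpenCV and NumPy; the feather width, kernel size, and noise level are guesses on my part, not values from the paper:

import cv2
import numpy as np

def soft_blur_region(image, rect, feather=15, blur_ksize=31, noise_sigma=8):
    x, y, w, h = rect
    # Aggressively blur the whole image once, then add noise to the
    # blurred layer so the result doesn't look unnaturally smooth.
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0).astype(np.float32)
    noise = np.random.normal(0, noise_sigma, image.shape).astype(np.float32)
    blurred = np.clip(blurred + noise, 0, 255)
    # Hard mask for the box, feathered so alpha ramps from 1.0 inside
    # the box down to 0.0 at the edge of the extended area.
    mask = np.zeros(image.shape[:2], np.float32)
    mask[y:y+h, x:x+w] = 1.0
    alpha = cv2.GaussianBlur(mask, (2 * feather + 1, 2 * feather + 1), 0)[..., None]
    # Alpha-blend: blurred layer where alpha is 1, original where it is 0.
    out = alpha * blurred + (1.0 - alpha) * image.astype(np.float32)
    return out.astype(np.uint8)

Feathering the mask with a Gaussian is what produces the smooth 0.0 to 1.0 alpha gradient, so the blurred box fades into the untouched background instead of ending in a hard edge.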

Related

How do I segment a dirty 7-seg LCD with reflections?

The LCD display cannot be cleaned, the light conditions cannot be changed, and the background can be tricky, so I cannot segment by colour, search for rectangles, or use Otsu. MSER doesn't give a good result. I even tried to locate the display relative to the "DEXP" logo, but the logo turned out to be too small to do this with sufficient accuracy. Bilateral filtering or Gaussian blur helps, but not much. Even supposing I found the ROI, local thresholding gives results that are too noisy, and morphological transformations don't help. Is there a way to extract the digits for further OCR?

Dash blurred part of the image

Is it possible to mark the blurred part of an image with a dashed outline?
Right now I am using Python with OpenCV. I only know how to load images and detect whether an image is blurred.
My input is a blurred image:
I would like to get:
I do not have:
original/unblurred image.
The output can still contain blurred parts, as long as they are dashed.
Thanks a lot for help!
You could try computing the "variance of the Laplacian" over parts of the image to detect which regions have a low variation in greyscale (assumed blurry) and which have a high variation in greyscale (assumed sharp).
There is a nice tutorial on how to check if an image is blurry, it can be found here
There is also a post here that explains the theory behind it.
It isn't a complete solution, but it might be a way to start.
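If it helps you get started, here is a rough sketch of that block-wise idea (assuming OpenCV and NumPy; the block size and variance threshold are arbitrary and would need tuning):

import cv2
import numpy as np

def mark_blurry_blocks(image, block=64, threshold=100.0):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    out = image.copy()
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Low variance of the Laplacian -> few edges -> assumed blurry.
            score = cv2.Laplacian(gray[y:y+block, x:x+block], cv2.CV_64F).var()
            if score < threshold:
                cv2.rectangle(out, (x, y), (x + block, y + block), (0, 0, 255), 1)
    return out

Adjacent marked blocks could then be merged into a single region and traced with a dashed outline.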

Will "cutting and pasting" blurred mask of an image onto the original negatively impact results?

I am new to python and image processing. I have an image and a binary mask of an ROI (region of interest) of that image. I want to blur only the ROI of that image. Will cropping out my ROI, applying the blur to this cropped out part, and pasting it back on the original affect my results in any negative or undesired way?
image
mask of ROI
cropped out ROI
blurred ROI
final: blurred ROI pasted back onto the original (using answer from here)
My inexperienced brain says, "No, it won't." But reading better minds on this stackoverflow thread gives me pause.
I am hoping to confirm that my method will not affect pixels outside my ROI in any way. (I am hoping for some confirmation beyond just me "eyeballing" it.)
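For reference, here is the check I could run to go beyond eyeballing, as a minimal sketch assuming OpenCV and NumPy (file names are hypothetical): paste the blurred pixels back through the mask and count how many pixels outside the mask changed.

import cv2
import numpy as np

img = cv2.imread('image.jpg')
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # 255 inside ROI, 0 outside

blurred = cv2.GaussianBlur(img, (25, 25), 0)
mask3 = cv2.merge([mask, mask, mask]) > 0
out = np.where(mask3, blurred, img)  # blurred pixels only inside the ROI

# Any nonzero count here would mean pixels outside the ROI were altered.
print('pixels changed outside ROI:', np.count_nonzero((out != img) & ~mask3))

My understanding is that np.where only takes values from the blurred layer where the mask is set, so the printed count should be exactly zero.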
If someone is willing, I would also like to better understand the concerns related to edge pixels referenced in the post cited immediately above. (I would inquire on that thread, but I do not have enough points to do so.)
Lastly, I have a suspicion there is a better way to fade out my ROI to transparent (from inner edge of the ROI being opaque to the outer edge being transparent). If someone is willing to point out a better method, I am teachable and it would be appreciated.
(This is my first time posting on stackoverflow. I do it with some fear and trembling. Please kindly point out if I am not following proper protocol in some way. I would like to do things right and be welcomed back in the future. :) Thanks.)

How can I extract hand features from these images?

I have two different types of images (which I cannot post due to reputation, so I've linked them.):
Image 1 Image 2
I was trying to extract hand features from the images using OpenCV and Python, with code that looks something like this:
import cv2

# Read the image, smooth it, convert to greyscale, then apply inverted
# Otsu thresholding to separate the hand from the background.
image = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(image, (5, 5), 0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
retval, thresh1 = cv2.threshold(gray, 70, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow('image', thresh1)
cv2.waitKey(0)
The result of which looks like this:
Image 1 Image 2
The change in background in the second image is messing with the cv2.threshold() function, and it's not getting the skin parts right. Is there a way to do this correctly?
As a follow-up question, what is the best way to extract hand features? I tried a Haar cascade and didn't really get results. Should I train my own cascade? What other options do I have?
It's hard to say based on a sample size of two images, but I would try OpenCV's Integral Channel Features (ChnFtrs), which are like supercharged Haar features that can take cues from colour as well as any other image channels you care to create and provide.
In any case, you are going to have to train your own cascades. Separate cascades for front and profile shots of course.
Take out your thresholding by skin colour, because as you've already noticed, it may throw away some or all of the hands depending on the actual subject's skin colour and lighting. ChnFtrs will do the skin detection for you more robustly than a fixed threshold can. (Though for future reference, all humans are actually orange :))
You could eliminate some false positives by only detecting within a bounding box of where you expect the hands to be.
Try both RGB and YUV channels to see what works best. You could also throw in the results of edge detection (say, Canny, maximised across your 3 colour channels) for good measure. At the end, you could cull channels which are underused to save processing if necessary.
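As a rough illustration of the channel-maximised Canny idea (the colour space and thresholds here are arbitrary):

import cv2
import numpy as np

image = cv2.imread('image.jpg')
yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
# Run Canny per channel and take the per-pixel maximum, so an edge found
# in any single channel survives into the combined edge map.
edges = np.max([cv2.Canny(yuv[:, :, c], 50, 150) for c in range(3)], axis=0)
cv2.imshow('combined edges', edges)
cv2.waitKey(0)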
If you have much variation in hand pose, you may need to group similar poses and train a separate ChnFtrs cascade for each group. Individual cascades do not have a branching structure, so they do not cope well when the positive samples are disjoint in parameter space. This is, AFAIK, a bit of an unexplored area.
A correctly trained ChnFtrs cascade (or several) may give you a bounding box for the hands, which will help in extracting hand contours, but it can't exclude invalid contours within the same bounding box. Most other object detection routines will also have this problem.
Another option, which may be better/simpler than ChnFtrs, is LINEMOD (a current favourite of mine). It has the advantage that there's no complex training process, nor any training time needed.

finding contours of a plant root

I have an image which has roots like this:
I want to crop each root individually out.
I initially thought of heavy dilation followed by erosion and contour detection of the blob, but since the roots are thin, it does not work well.
I also directly applied Canny edge detection followed by contour detection, as in the image below. It finds around 62,000 contours, but I cannot use it to get the outline of each root.
I also thresholded the image in HSV space, followed by some median blurring, but it did not reduce the noise much, and further blurring only loses the root features.
Can anyone suggest a better approach to tackle this problem? Would a machine-learning-based approach work better? Thanks.
Use cv2.boundingRect(); it gets the rectangular area of your contour. You can also use a median filter (cv2.medianBlur()) to get rid of "salt and pepper" noise in your picture.
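A minimal sketch of that suggestion (assuming OpenCV 4.x, where cv2.findContours returns two values; the file name, threshold polarity, and minimum area are placeholders):

import cv2

img = cv2.imread('roots.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # suppress salt-and-pepper noise
# Invert if your roots are darker than the background, otherwise drop _INV.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 200:  # skip tiny noise contours
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('root boxes', img)
cv2.waitKey(0)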
First step: detect the big root, either by user selection or by heavy erosion.
Second step: starting from the centre coordinate of that big area, run a fill algorithm (like the Paint Bucket tool in Paint) on the original thresholded image.
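A rough sketch of that second step (assuming OpenCV and NumPy; the file name and seed point are hypothetical, with the seed taken from the centre found in step one):

import cv2
import numpy as np

thresh = cv2.imread('roots_thresholded.png', cv2.IMREAD_GRAYSCALE)
seed = (150, 200)  # (x, y) centre of the big root from step one
# OpenCV's flood fill needs a mask 2 pixels larger than the image.
mask = np.zeros((thresh.shape[0] + 2, thresh.shape[1] + 2), np.uint8)
cv2.floodFill(thresh, mask, seed, 255)
root_only = mask[1:-1, 1:-1] * 255  # pixels connected to the seed
cv2.imshow('filled root', root_only)
cv2.waitKey(0)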
