Bokeh-like Blur with Mask as Intensity of Blur Radius - python

I know how to apply a Gaussian blur with Pillow, but I can't work out how to vary the blur radius per pixel using a mask.
I am using the MiDaS package to produce depth maps from 2D images. What I want to do is blur the original image according to the depth map, as a pseudo depth of field.
Here is a visual demonstration of the result I'm after, whether with OpenCV or Pillow (I'm not sure which one can do what I need).
Note: I'm sorry if this is considered a junk question; I've sat on it for a month. I scoured the net for something like this, and all I found was Poor Man's Portrait Mode, which I could not get to work, and which would also recompute depth maps when I already have them from my script (I use them for the 3D image creation).
Edit:
I did come up with this, using Image.composite. Not sure why I didn't take note of it before. Though I have to say, the results aren't too great; I think I really do need to emulate some sort of shaped blur like bokeh.
from PIL import Image, ImageFilter

sharpen = 3
boxBlur = 5

oimg = Image.open('2.png').convert('RGB')
width, height = oimg.size
mimg = Image.open('2_depth.png').resize((width, height)).convert('L')

bimg = oimg.filter(ImageFilter.BoxBlur(int(boxBlur)))
bimg = bimg.filter(ImageFilter.BLUR)
for i in range(sharpen):
    bimg = bimg.filter(ImageFilter.SHARPEN)

rimg = Image.composite(oimg, bimg, mimg)
Basically: get your image and your mask, and make sure the mask actually matches the image (I had an issue where the two didn't line up even though they were the same size, simply because they had been saved differently).
Blur your image into a new variable however you like (Gaussian, etc.; Gaussian was too soft for me), and add whatever extra filtering you want.
Then composite the original and blurred images together, using the depth map as the mask for the composite.
Note: If someone knows how to achieve a different sort of blur that mimics bokeh, I'd like to know, and have adjusted the question title. I read about a discBlur but couldn't find anything for PIL/CV2.

I’ve got only a brute-force solution with iteration over pixels: Variable blur intensity.
My code works, but not as efficiently as I would like.
You can try it: open your image as the input and put your depth map in the variable blur_map.
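For reference, here is a minimal brute-force sketch of that idea (not the answerer's actual code): it reuses the file names from the question, maps each depth value to a box-blur radius, and averages a neighbourhood of that radius per pixel. It is slow, but it shows the mechanics; if bright means near in your depth map, invert blur_map first.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('2.png').convert('RGB'), dtype=np.float64)
depth = Image.open('2_depth.png').convert('L').resize((img.shape[1], img.shape[0]))
blur_map = np.asarray(depth, dtype=np.float64)
max_radius = 8                                         # largest blur radius used (assumption)
radii = np.rint(blur_map / 255.0 * max_radius).astype(int)
out = np.empty_like(img)
h, w = radii.shape
for y in range(h):
    for x in range(w):
        r = radii[y, x]
        if r == 0:
            out[y, x] = img[y, x]                      # keep sharp where the map is dark
            continue
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        out[y, x] = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
Image.fromarray(out.astype(np.uint8)).save('variable_blur.png')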

Related

Image Operations with Python

I hope you're all doing well!
I'm new to image manipulation, so I want to apologize right here for my simple question. I'm currently working on a problem that involves classifying an object called a jet into two known categories. This object is made of sub-objects. My idea is to use these sub-objects to transform each jet into a pixel image, and then apply convolutional neural networks to find the patterns.
Here is an example of the pixel images:
jet's constituents pixel distribution
To standardize all the images, I want to find the two most intense pixels and make sure the axis connecting them is in the vertical direction, as well as make sure that the most intense pixel is at the top. It also would be good to impose that one of the sides (left or right) of the image contains the majority of the intensity and to normalize the intensity of the whole image to 1.
My question is: as I'm new to this kind of processing, I don't know if there is a library in Python that can handle these operations. Are you aware of any?
PS: the picture was taken from here: https://arxiv.org/abs/1407.5675
You can look into the OpenCV library for Python:
https://docs.opencv.org/master/d6/d00/tutorial_py_root.html.
It supports a lot of image processing functions.
In your case, it would probably be easier to convert the image into a color space in which one axis stands for intensity (e.g. HSI, HSL, HSV) and then find the indices of the maximum values along that axis (this should give you the pixels with the highest intensity in the image).
Generally, in Python, we use the PIL library for basic manipulations of images and OpenCV for advanced ones.
But, if I understand your task correctly, you can just think of an image as a multidimensional array and use numpy to manipulate it.
For example, if your image is stored in a variable of type numpy.array called img, you can find the maximum value along the desired axis just by writing:
img.max(axis=0)
To normalize the image you can use (this assumes a float array, since in-place division doesn't work on integer arrays):
img /= img.max()
To find which part of the image is brighter, you can split the img array into the desired parts and compare their means:
left = img[:, :img.shape[1] // 2, :]
right = img[:, img.shape[1] // 2:, :]
left_mean = left.mean()
right_mean = right.mean()
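Building on those snippets, here is a small sketch for locating the two most intense pixels and the tilt of the axis connecting them (this assumes img is a 2D intensity array; the rotation sign convention may need checking against whichever library you rotate with):
import numpy as np

flat = np.argsort(img, axis=None)                  # pixel indices sorted by intensity
y1, x1 = np.unravel_index(flat[-1], img.shape)     # brightest pixel
y2, x2 = np.unravel_index(flat[-2], img.shape)     # second brightest pixel
angle = np.degrees(np.arctan2(x2 - x1, y2 - y1))   # tilt of the connecting axis
# rotating the image by this angle (e.g. scipy.ndimage.rotate or cv2.warpAffine)
# should bring the connecting axis to the vertical direction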

Extracting objects with image-difference

I'm working on object detection in Python with OpenCV.
I have two pictures:
1. The reference picture, with no object in it.
2. The picture with the object.
3. The difference of the two images is:
The problem is, the pattern of the reference image now shows up on my objects. I want to remove this pattern and I don't know how to do it. For further image processing I need the correct outline of the objects.
Maybe you know how to fix it, or have better ideas to extract the object.
I would be glad for your help.
Edit: 4. A black object:
As @Mark Setchell commented, the difference of the two images shows which pixels contain the object; you shouldn't try to use it as the output. Instead, find the pixels with a significant difference, and then read those pixels directly from the input image.
Here, I'm using Otsu thresholding to find what "significant difference" is. There are many other ways to do this. I then use the inverse of the mask to blank out pixels in the input image.
import PyDIP as dip
bg = dip.ImageReadTIFF('background.tif')
bg = bg.TensorElement(1) # The image has 3 channels, let's use just the green one
fg = dip.ImageReadTIFF('object.tif')
fg = fg.TensorElement(1)
mask = dip.Abs(bg - fg) # Difference between the two images
mask, t = dip.Threshold(mask, 'otsu') # Find significant differences only
mask = dip.Closing(mask, 7) # Smooth the outline a bit
fg[~mask] = 0 # Blank out pixels not in the mask
I'm using PyDIP above, not OpenCV, because I don't have OpenCV installed. You can easily do the same with OpenCV.
An alternative to smoothing the binary mask as I did there, is to smooth the mask image before thresholding, for example with dip.Gauss(mask,[2]), a Gaussian smoothing.
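For completeness, a rough OpenCV sketch of the same steps (the file names and green-channel choice mirror the PyDIP code above; the 7x7 closing kernel is an assumption):
import cv2

bg_img = cv2.imread('background.tif')
fg_img = cv2.imread('object.tif')
bg = bg_img[:, :, 1]                                    # use only the green channel
fg = fg_img[:, :, 1]
diff = cv2.absdiff(bg, fg)                              # difference between the two images
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # smooth the outline a bit
result = fg_img.copy()
result[mask == 0] = 0                                   # blank out pixels not in the mask
cv2.imwrite('object_only.png', result)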
Edit: The black object.
What happens with this image is that its illumination has changed significantly, or you have some automatic exposure setting in your camera. Make sure you turn all of that off so that every image is exposed exactly the same, and use the raw images straight from the camera, not images that have gone through some automatic enhancement procedure or, if you can avoid it, JPEG compression.
I computed the median of the background image divided by the object image (fg in the code above, but for this new image), which came up to 1.073. That means that the background image is 7% brighter than the object image. I then multiplied fg by this value before computing the absolute difference:
mask = dip.Abs(fg * dip.Median(bg/fg)[0][0] - bg)
This helped a bit, but it showed that the changes in contrast are not consistent across the image.
Next, you can change the threshold selection method. Otsu assumes a bimodal histogram, and works well if you have a significant number of pixels in each group (foreground and background). Here we'll have fewer pixels belonging to the object, because only some of the object pixels have a different color from the background. The 'triangle' method is suitable in this case:
mask, t = dip.Threshold(mask, 'triangle')
This will lead to a mask that contains only some of the object pixels. You'll have to add some additional knowledge about your object (i.e. it is a rotated square) to find the full object. There are also some isolated background pixels that are being picked up by the threshold, those are easy to eliminate using a bit of blurring before the threshold or a small opening after.
Getting the exact outline of the object in this case will be impossible with your current setup. I would suggest you improve your setup by either:
making the background more uniform in illumination,
using color (so that there are fewer possible objects that match the background color so exactly as in this case),
using infrared imaging (maybe the background could have different properties from all the objects to be detected in infrared?),
using back-illumination (this is the best way if your aim is to measure the objects).

apply a filter on images, to make them appear like they are taken from further away

I want to apply a filter on images to make them appear like they were taken from further away than they really are.
For reference:
Left image below is from ~1m away from the plant.
Right image is from 10m away.
Which filter or combination of filters should I use to get the right image from the left one? I suppose I can use some sort of blurring and pixelation.
I wanted to ask here to see if there is a standard way to do this in image processing that gives realistic results.
I need to implement this in Python 3, and I know how to implement a blur with OpenCV.
I'd do a cubic downsample, then a nearest neighbor upsample, with a little blur for polish:
import cv2

img = cv2.imread(impath, -1)
h, w = img.shape[:2]  # shape is (rows, cols), i.e. (height, width)
down = cv2.resize(img, (int(w/3), int(h/3)), interpolation=cv2.INTER_CUBIC)
up = cv2.resize(down, (w, h), interpolation=cv2.INTER_NEAREST)
up = cv2.GaussianBlur(up, (5, 5), 2.4, sigmaY=2.4)  # pass sigmaY by keyword (4th positional arg is dst)
cv2.imshow('', up)
cv2.imshow('in', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

how can I get a black and white image for the following picture?

I want to convert the picture into a black and white image where the seeds are represented in white and the background in black. I would like to do it with Python and OpenCV. Please help me out.
I got a good result for the above picture using the code given below. Now I have another picture for which thresholding doesn't seem to work. How can I tackle this problem? The output I got is shown in the following picture.
Also, there are some dents in the seeds, which the program takes as part of the seed boundary, which gives poor results, as in the picture below. How can I make the program ignore the dents? Is masking the seeds a good option in this case?
I converted the image from BGR color space to HSV color space.
Then I extracted the hue channel:
Then I performed a threshold on it:
Note:
Whenever you face difficulty in certain areas try working in a different color space, the HSV color space being most prominent.
UPDATE:
Here is the code:
import cv2
import numpy as np

filename = 'seed.jpg'
img = cv2.imread(filename)                      #---Reading image file---
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  #---Converting BGR image to HSV---
hue, saturation, value = cv2.split(hsv_img)     #---Splitting HSV image into 3 channels---
blur = cv2.GaussianBlur(hue, (3, 3), 0)         #---Blur to smooth the edges---
ret, th = cv2.threshold(blur, 38, 255, 0)       #---Binary threshold---
cv2.imshow('th.jpg', th)
cv2.waitKey(0)
Now you can perform contour operations to highlight your regions of interest also. Try it out!! :)
ANOTHER UPDATE:
I then kept the contours with an area above a certain threshold to get this:
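Roughly, that contour step can be sketched like this, reusing th and img from the code above (OpenCV 4 return signature; the area threshold of 100 is just a placeholder):
contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
large = [c for c in contours if cv2.contourArea(c) > 100]   # keep only large contours
out = img.copy()
cv2.drawContours(out, large, -1, (0, 255, 0), 2)
cv2.imshow('contours', out)
cv2.waitKey(0)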
There are countless ways for image segmentation.
The simplest one is a global threshold operation. If you want to know more about other methods you should read some books, which I recommend anyway before you do any further image processing. It doesn't make much sense to start image processing if you don't know the most basic tools.
Just to show you how this could be achieved:
I converted the image from RGB to HSB. I then applied separate global thresholds to the hue and brightness channels to get the best segmentation result for both images.
Both binary images were then combined using a pixelwise AND operation. I did this because both channels gave sub-optimal results, but their overlap was pretty good.
I also applied some morphological operators to clean up the results.
Of course you can just invert the image to get the desired black background...
Thresholds and the channels used of course depend on the image you have and what you want to achieve. This is a very case-specific process that can be dynamically adapted only to a limited extent.
This could be followed by labeling or whatever else you need:
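A minimal OpenCV sketch of that pipeline, assuming the seed image from above; the threshold values, polarities and kernel size are placeholders you would tune per image:
import cv2

img = cv2.imread('seed.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)
_, hue_bin = cv2.threshold(hue, 38, 255, cv2.THRESH_BINARY)       # hue channel threshold
_, val_bin = cv2.threshold(val, 120, 255, cv2.THRESH_BINARY_INV)  # brightness channel threshold
combined = cv2.bitwise_and(hue_bin, val_bin)                      # pixelwise AND of both masks
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
combined = cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel)     # remove small specks
combined = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, kernel)    # close small holes
cv2.imwrite('seeds_binary.png', combined)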

image matching in opencv python

I've been working on a project to recognize a flag shown to the camera, using OpenCV in Python.
I've already tried SURF, color histogram matching, and template matching, but none of these three always returns the correct answer. What I want to know now is: what would be the best solution to this problem?
Example of the template images:
Here is an example of a flag shown to the camera.
What should I use if these are the kinds of images I want to recognize?
Update: code using matchTemplate:
import cv2
import operator

flags = ["Cambodia.jpg", "Laos.jpg", "Malaysia.jpg", "Myanmar.jpg", "Philippines.jpg",
         "Singapore.jpg", "Thailand.jpg", "Vietnam.jpg", "Indonesia.jpg", "Brunei.jpg"]
while True:
    methods = 'cv2.TM_CCOEFF_NORMED'
    list_of_pics = []
    for flag in flags:
        template = cv2.imread(flag, 0)
        img = cv2.imread('philippines2.jpg', 0)
        # generate Gaussian pyramid for the template
        G = template.copy()
        gpA = [G]
        for i in range(6):
            G = cv2.pyrDown(G)
            gpA.append(G)
        n = 0
        for x in gpA:
            w, h = x.shape[::-1]
            method = eval(methods)
            # apply template matching
            res = cv2.matchTemplate(img, x, method)
            matchVal = res[0][0]
            picDict = {"matchVal": matchVal, "name": flag}
            list_of_pics.append(picDict)
            n = n + 1
    newlist = sorted(list_of_pics, key=operator.itemgetter('matchVal'), reverse=True)
    # print(newlist)
    matched_image = newlist[0]['name']
    print(matched_image)
    k = cv2.waitKey(10)
    if k == 27:
        break
cv2.destroyAllWindows()
I don't think that you can get good results from SURF/SIFT because:
SURF/SIFT need keypoints to detect the object, but in your case you have to detect flags, and most flags are largely uniform and do not provide many keypoints.
In your webcam frame, you have several things besides the flag. Those other things also contribute keypoints.
Solution: I still think that you should use matchTemplate() from OpenCV, which you have already tried; the problem in your version is that you didn't account for the fact that matchTemplate() is not scale and orientation invariant. So the solution is to use a Gaussian pyramid to create different sizes (half, one fourth, double, etc.) of your sample flags. After getting the same flag in 2-5 different sizes, perform matchTemplate() between every size of the flag and the webcam frame.
Strategy:
Receive the webcam frame
Load the image of a flag.
Using Gaussian pyramid, create smaller and bigger images of that flag (you don't need to store them.)
Perform matchTemplate() between the webcam frame and each size of flag.
Result: whichever flag image gives the maximum correlation value is the flag present in your webcam frame.
REMEMBER: matchTemplate is not scale and orientation invariant, so if the flag in the webcam frame is rotated, or larger/smaller than any of your pyramid sizes, you won't get good results.
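A hedged sketch of that strategy (the function and variable names are illustrative, not from the question's code): match one grayscale flag template against the frame at several pyramid scales and keep the best correlation score.
import cv2

def best_match_score(frame_gray, template_gray, levels=4):
    best = -1.0
    scaled = template_gray.copy()
    for _ in range(levels):
        th, tw = scaled.shape[:2]
        if th <= frame_gray.shape[0] and tw <= frame_gray.shape[1]:
            res = cv2.matchTemplate(frame_gray, scaled, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, _ = cv2.minMaxLoc(res)   # best correlation at this scale
            best = max(best, max_val)
        scaled = cv2.pyrDown(scaled)                # next, smaller scale (cv2.pyrUp for larger ones)
    return best
Compute this score for every flag template against the current webcam frame and report the flag with the highest score.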
SURF cannot be applied to images that have no corners (where the gradient mostly goes in one direction, like in a striped flag). A color histogram of the whole object may not work since both of your examples have similar colors. However, if you apply histograms to different parts of the image it will work better.
What you need to do is split your training image into, say, 4 quadrants and create 4 color histograms. The testing stage then integrates these 4 back-projected histograms and checks for the right spatial order of responses. A color histogram is quite robust to rotation, scaling and perspective; it does change with illumination, so you need liberal matching thresholds. The spatial resolution from the 4 quadrants helps to ameliorate this.
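A rough sketch of the quadrant idea (HSV hue/saturation histograms per quadrant compared by correlation; the bin counts and the comparison metric are assumptions, and real use would add the back-projection step described above):
import cv2

def quadrant_histograms(img_bgr):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    quads = [hsv[:h//2, :w//2], hsv[:h//2, w//2:],
             hsv[h//2:, :w//2], hsv[h//2:, w//2:]]
    hists = []
    for q in quads:
        hist = cv2.calcHist([q], [0, 1], None, [16, 16], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    return hists

def similarity(hists_a, hists_b):
    # average correlation over the four quadrants, keeping the spatial order
    return sum(cv2.compareHist(a, b, cv2.HISTCMP_CORREL)
               for a, b in zip(hists_a, hists_b)) / 4.0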
For the future I recommend studying methods in more detail to understand their applicability rather than trying them randomly.
