How do I go about changing the saturation of an image using PIL or Pillow? Preferably I'd like to be able to use the solution together with the django-imagekit package. The reason I need to change the saturation is to create an effect where a black-and-white image turns to color when the user hovers over it.
You probably want ImageEnhance.Color.
import PIL.Image
import PIL.ImageEnhance

img = PIL.Image.open('bus.png')
converter = PIL.ImageEnhance.Color(img)
img2 = converter.enhance(0.5)
This gives an image with half the "color" of the original. This isn't exactly the same thing as half the saturation (because half or double the saturation would usually underflow or overflow), but it's probably what you actually want most of the time. As the docs say, it works like the "color" knob on a TV.
Here's an example of the same image at 0.5, 1.0, and 2.0 color:
If you want a greyscale image, simply convert it to the L (Luminance) mode:
greyscale = rgba_image.convert('L')
Applying that to my ninja:
If you need intermediate steps, you need to convert the RGB values to HLS or HSV, adjust the saturation, and then convert back to RGB again. You could use colorsys for that, or adapt this numpy solution; I would expect the latter to perform better.
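For illustration, here's a minimal colorsys-based sketch (slow, since it loops over every pixel in Python; a vectorised numpy version would do the same HSV round trip on whole arrays):

import colorsys
from PIL import Image

def scale_saturation(img, factor):
    # Scale saturation by `factor` in HSV space, clamping to [0, 1].
    img = img.convert('RGB')
    pixels = []
    for r, g, b in img.getdata():
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        s = min(max(s * factor, 0.0), 1.0)
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        pixels.append((int(r2 * 255), int(g2 * 255), int(b2 * 255)))
    out = Image.new('RGB', img.size)
    out.putdata(pixels)
    return out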
If you're using django-imagekit, you can just use the bundled Adjust processor:
from imagekit.processors import Adjust
Adjust(color=0.5)
Under the hood, this will do exactly what @abarnert recommended.
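For example (a sketch; the model and field names are illustrative, assuming a reasonably recent django-imagekit):

from django.db import models
from imagekit.models import ImageSpecField
from imagekit.processors import Adjust

class Photo(models.Model):
    original = models.ImageField(upload_to='photos')
    # Fully desaturated rendition to show before the user hovers.
    bw = ImageSpecField(source='original',
                        processors=[Adjust(color=0)],
                        format='JPEG')

Swapping the two renditions on :hover is then just front-end CSS/JS.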
I'm working on object detection in Python with OpenCV. I have two pictures:

1. The reference picture, with no object in it.
2. A picture with the object.

3. The result of differencing the two images is:

The problem is that the pattern of the reference image is now on my objects. I want to remove this pattern and I don't know how to do it. For further image processing I need the correct outline of the objects.
Maybe you know how to fix it, or have better ideas for extracting the objects.
I would be glad for your help.
Edit: 4. A black object:
As @Mark Setchell commented, the difference of the two images shows which pixels contain the object; you shouldn't try to use it as the output. Instead, find the pixels with a significant difference, and then read those pixels directly from the input image.
Here, I'm using Otsu thresholding to find what "significant difference" is. There are many other ways to do this. I then use the inverse of the mask to blank out pixels in the input image.
import PyDIP as dip
bg = dip.ImageReadTIFF('background.tif')
bg = bg.TensorElement(1) # The image has 3 channels, let's use just the green one
fg = dip.ImageReadTIFF('object.tif')
fg = fg.TensorElement(1)
mask = dip.Abs(bg - fg) # Difference between the two images
mask, t = dip.Threshold(mask, 'otsu') # Find significant differences only
mask = dip.Closing(mask, 7) # Smooth the outline a bit
fg[~mask] = 0 # Blank out pixels not in the mask
I'm using PyDIP above, not OpenCV, because I don't have OpenCV installed. You can easily do the same with OpenCV.
An alternative to smoothing the binary mask as I did there is to smooth the mask image before thresholding, for example with dip.Gauss(mask, [2]), a Gaussian filter.
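For reference, a rough OpenCV equivalent of the pipeline above might look like this (an untested sketch; the file names are assumed to match the TIFFs used earlier):

import cv2

bg = cv2.imread('background.tif')[:, :, 1]   # green channel (index 1 in BGR)
fg = cv2.imread('object.tif')[:, :, 1]

diff = cv2.absdiff(bg, fg)                   # difference between the two images
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # smooth the outline

out = fg.copy()
out[mask == 0] = 0                           # blank out pixels not in the mask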
Edit: The black object.
What happens with this image is that its illumination has changed significantly, or your camera has some automatic exposure setting. Make sure you turn all of that off so that every image is exposed exactly the same, and use the raw images straight off the camera for this, not images that have gone through some automatic enhancement procedure or even JPEG compression, if you can avoid it.
I computed the median of the background image divided by the object image (fg in the code above, but for this new image), which came up to 1.073. That means that the background image is 7% brighter than the object image. I then multiplied fg by this value before computing the absolute difference:
mask = dip.Abs(fg * dip.Median(bg/fg)[0][0] - bg)
This helped a bit, but it showed that the changes in contrast are not consistent across the image.
Next, you can change the threshold selection method. Otsu assumes a bimodal histogram, and works well if you have a significant number of pixels in each group (foreground and background). Here we'll have fewer pixels belonging to the object, because only some of the object pixels have a different color from the background. The 'triangle' method is suitable in this case:
mask, t = dip.Threshold(mask, 'triangle')
This will lead to a mask that contains only some of the object pixels. You'll have to add some additional knowledge about your object (i.e. it is a rotated square) to find the full object. There are also some isolated background pixels that are being picked up by the threshold, those are easy to eliminate using a bit of blurring before the threshold or a small opening after.
Getting the exact outline of the object in this case will be impossible with your current setup. I would suggest you improve your setup in one of the following ways:
making the background more uniform in illumination,
using color (so that there are fewer possible objects that match the background color so exactly as in this case),
using infrared imaging (maybe the background could have different properties from all the objects to be detected in infrared?),
using back-illumination (this is the best way if your aim is to measure the objects).
I want to convert the picture into a black-and-white image, accurately, where the seeds are represented by white and the background by black. I would like to have it as Python OpenCV code. Please help me out.
I got a good result for the above picture using the code given below. Now I have another picture for which thresholding doesn't seem to work. How can I tackle this problem? The output I got is in the following picture.
Also, there are some dents in the seeds, which the program takes as the boundary of the seed; that is not a good result, as in the picture below. How can I make the program ignore the dents? Is masking the seeds a good option in this case?
I converted the image from BGR color space to HSV color space.
Then I extracted the hue channel:
Then I performed threshold on it:
Note:
Whenever you face difficulty in certain areas try working in a different color space, the HSV color space being most prominent.
UPDATE:
Here is the code:
import cv2
import numpy as np
filename = 'seed.jpg'
img = cv2.imread(filename) #---Reading image file---
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) #---Converting BGR image to HSV---
hue, saturation, value = cv2.split(hsv_img) #---Splitting HSV image to 3 channels---
blur = cv2.GaussianBlur(hue, (3,3), 0) #---Blur to smooth the edges---
ret, th = cv2.threshold(blur, 38, 255, 0) #---Binary threshold---
cv2.imshow('th.jpg', th)
cv2.waitKey(0) #---Needed for the window to actually appear---
Now you can perform contour operations to highlight your regions of interest also. Try it out!! :)
ANOTHER UPDATE:
I kept only the contours whose area was above a certain threshold to get this:
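The filtering step looks roughly like this (a sketch building on the th image from the code above; the minimum area of 500 is an arbitrary placeholder, and note that cv2.findContours returns three values in OpenCV 3.x instead of two):

contours, hierarchy = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
big = [c for c in contours if cv2.contourArea(c) > 500] #---Keep large contours only---
result = cv2.cvtColor(th, cv2.COLOR_GRAY2BGR)
cv2.drawContours(result, big, -1, (0, 255, 0), 2) #---Draw them in green---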
There are countless ways for image segmentation.
The simplest one is a global threshold operation. If you want to know more about other methods, you should read some books, which I recommend anyway before you do any further image processing. It doesn't make much sense to start image processing if you don't know the most basic tools.
Just to show you how this could be achieved:
I converted the image from RGB to HSB. I then applied separate global thresholds to the hue and brightness channels to get the best segmentation result for both images.
Both binary images were then combined using a pixelwise AND operation. I did this because both channels gave sub-optimal results, but their overlap was pretty good.
I also applied some morphological operators to clean up the results.
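A minimal OpenCV sketch of that pipeline, with placeholder threshold values that you would tune per image (OpenCV's HSV is the same model as HSB):

import cv2

img = cv2.imread('seeds.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)

# Separate global thresholds on hue and brightness (38 and 90 are placeholders).
_, hue_mask = cv2.threshold(hue, 38, 255, cv2.THRESH_BINARY)
_, val_mask = cv2.threshold(val, 90, 255, cv2.THRESH_BINARY)

# Combine the two sub-optimal masks with a pixelwise AND...
mask = cv2.bitwise_and(hue_mask, val_mask)

# ...and clean up the result with a morphological opening.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)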
Of course you can just invert the image to get the desired black background...
Thresholds and the channels used of course depend on the image you have and on what you want to achieve. This is a very case-specific process that can only be adapted dynamically to a limited extent.
This could be followed by labelling or whatever else you need:
I have an RGB image from which I want to extract the intensity plane.
I have tried HSL and took the L (lightness) channel, but it's not the same as intensity; I also tried RGB2GRAY, which is close, but still not the actual intensity.
So is there any special code to get the intensity of the image, or a formula for calculating intensity?
Try to use BGR2GRAY (and so on: BGR2HSL, etc.) instead of RGB2GRAY; OpenCV usually uses BGR channel order, not RGB.
The default channel order in OpenCV is BGR, not RGB. So you can get the intensity of your image using OpenCV like below:
intensity_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2HSV)
intensity_image[:,:,2] is the value (V) channel of your original image.
Hope this helps.
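If by intensity you mean the I channel of the classic HSI model, that is simply the per-pixel mean of the three channels, so you can also compute it directly (a sketch, file name assumed; note that this differs from HSV's V, which is the per-pixel maximum):

import cv2
import numpy as np

img = cv2.imread('input.jpg')
# HSI intensity: I = (R + G + B) / 3, i.e. the mean over the channel axis.
intensity = np.mean(img.astype(np.float32), axis=2).astype(np.uint8)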
I am trying to convert a color image to pure BW. I looked around for some code to do this and settled with
from PIL import Image

im = Image.open("mat.jpg")
gray = im.convert('L')
bw = gray.point(lambda x: 0 if x < 128 else 255, '1')
bw.save("result_bw.jpg")
However, the result still has grays!
So, I tried to do it myself:
floskel = Image.open("result_bw.jpg")
flopix = floskel.load()

for i in range(0, floskel.size[0]):
    for j in range(0, floskel.size[1]):
        print(flopix[i, j])
        if flopix[i, j] > 100:
            flopix[i, j] = 255
        else:
            flopix[i, j] = 0
But, STILL, there are grays in the image.
Am I doing something wrong?
As sebdelsol mentioned, it's much better to use im.convert('1') directly on the colour source image. The standard PIL "dither" is Floyd-Steinberg error diffusion, which is generally pretty good (depending on the image), but there are a variety of other options, e.g. random dither and ordered dither, although you'd have to code them yourself, so they'd be quite a bit slower.
The conversion algorithm you use in the code in the OP is just simple thresholding, which generally loses a lot of detail, although it's easy to write. I guess in this case you were just trying to confirm your theory about grey pixels being present in the final image. But as sebdelsol said, it just looks like there are grey pixels, due to "noise": regions containing a lot of black and white pixels mixed together, which you should be able to verify if you zoom into the image.
FWIW, if you do want to do your own pixel-by-pixel processing of whole images it's more efficient to get a list of pixels using im.getdata() and put them back into an image with im.putdata(), rather than doing that flopix[i,j] stuff. Of course, if you don't need to know coordinates, algorithms that use im.point() are usually pretty quick.
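For instance, the threshold from the OP rewritten with getdata/putdata would be something like this sketch:

from PIL import Image

im = Image.open('mat.jpg').convert('L')
# Same simple threshold, but in one pass over a flat pixel sequence
# instead of indexing pixel-by-pixel.
im.putdata([255 if p >= 128 else 0 for p in im.getdata()])
im.save('result_bw.png')  # saving as PNG; see the note below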
Finally, JPEG isn't really suitable for B&W images, it was designed for images with (mostly) continuous tone. Try saving as PNG; the resulting files will probably be a lot smaller than the equivalent JPEGs. It's possible to reduce JPEG file size by saving with low quality settings, but the results generally don't look very good.
You'd rather use convert to produce a mode '1' image. It would be faster and better, since it uses dithering by default.
bw = im.convert('1')
The greys you see probably appear in the parts of the image with noise near the 128 level, which produces high-frequency B&W patterns that look grey.
I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling.
That's easy, just use the undocumented colors argument:
result = image.convert('P', palette=Image.ADAPTIVE, colors=5)
I'm using Image.ADAPTIVE to avoid dithering.
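To do the sampling afterwards, you can read the five colours back out of the palette, something like this (file name assumed):

from PIL import Image

image = Image.open('photo.jpg')
result = image.convert('P', palette=Image.ADAPTIVE, colors=5)

palette = result.getpalette()[:5 * 3]  # flat [r, g, b, r, g, b, ...] list
colors = [tuple(palette[i:i + 3]) for i in range(0, len(palette), 3)]
print(colors)  # the five palette entries as (r, g, b) tuples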
I assume you want to do something more sophisticated than posterize. "Sampling" as you say, will take some finesse, as the 5 most common colors in the image are likely to be similar to one another. Maybe take a look at the 5 most separated peaks in a histogram.
The short answer is to use the Image.quantize method. For more info, see: How do I convert any image to a 4-color paletted image using the Python Imaging Library?
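A minimal example of the quantize approach (file name assumed):

from PIL import Image

image = Image.open('photo.jpg')
result = image.quantize(colors=5)  # returns a mode 'P' image with a 5-entry palette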