I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling.
That's easy, just use the undocumented colors argument:
result = image.convert('P', palette=Image.ADAPTIVE, colors=5)
I'm using Image.ADAPTIVE to avoid dithering.
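To then read the 5 colours back out for sampling, the palette of the converted image can be inspected; a minimal sketch (the solid-red test image is just a stand-in for a real photo):

```python
from PIL import Image

# Tiny solid-red test image; in practice this is your loaded photo.
image = Image.new('RGB', (4, 4), (255, 0, 0))

# Reduce to an adaptive palette of at most 5 colours.
result = image.convert('P', palette=Image.ADAPTIVE, colors=5)

# The palette is a flat [r, g, b, r, g, b, ...] list; regroup into tuples.
flat = result.getpalette()[:5 * 3]
colors = list(zip(flat[0::3], flat[1::3], flat[2::3]))
print(colors)
```

On a real photograph the list will hold 5 distinct triples rather than the single red of this toy input.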
I assume you want to do something more sophisticated than posterize. "Sampling" as you say, will take some finesse, as the 5 most common colors in the image are likely to be similar to one another. Maybe take a look at the 5 most separated peaks in a histogram.
The short answer is to use the Image.quantize method. For more info, see: How do I convert any image to a 4-color paletted image using the Python Imaging Library?
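For reference, a minimal quantize sketch (the solid-colour test image is only a placeholder):

```python
from PIL import Image

im = Image.new('RGB', (8, 8), (10, 200, 30))

# quantize() returns a palettised ('P') image with at most `colors`
# palette entries; median cut is the default method.
quantized = im.quantize(colors=4)
print(quantized.mode)  # 'P'
```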
Related
I have images like these:
And I want to change its colour palette so they look like this one:
What I do is reduce the number of colours in the image to a specific subset of 17. It needs to be this specific set of 17 because I have to apply this process to around 700 images and I need consistency.
When I use GIMP to achieve this goal, what I usually do is create a colour palette based on an image and then Image > Mode > Indexed and choose the colour palette.
My plan was to do something similar using Pillow, but I haven't been able to find a successful way to do it.
Any suggestions? Should I try a different approach?
Thanks!
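One possible Pillow route, mirroring GIMP's Image > Mode > Indexed with a fixed palette, is image.quantize(palette=...); a hedged sketch in which the 17 grey swatches are placeholders for the real 17 colours:

```python
from PIL import Image

# Placeholder 17-colour palette (greys); substitute your fixed 17 RGB triples.
swatch = [(i * 15, i * 15, i * 15) for i in range(17)]

# Build a 'P'-mode image that carries the palette (one pixel per colour).
palette_image = Image.new('P', (17, 1))
flat = [channel for rgb in swatch for channel in rgb]
palette_image.putpalette(flat + [0] * (768 - len(flat)))

# Map any source image onto that fixed palette, GIMP-style.
source = Image.new('RGB', (32, 32), (100, 100, 100))
indexed = source.quantize(palette=palette_image)
print(indexed.mode)  # 'P'
```

Running the same quantize call over all ~700 images with the same palette_image should give the required consistency.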
I am trying to convert a color image to pure BW. I looked around for some code to do this and settled with
im = Image.open("mat.jpg")
gray = im.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save("result_bw.jpg")
However, the result still has grays!
So, I tried to do it myself:
floskel = Image.open("result_bw.jpg")
flopix = floskel.load()
for i in range(0, floskel.size[0]):
    for j in range(0, floskel.size[1]):
        print flopix[i,j]
        if flopix[i,j] > 100:
            flopix[i,j] = 255
        else:
            flopix[i,j] = 0
But, STILL, there are grays in the image.
Am I doing something wrong?
As sebdelsol mentioned, it's much better to use im.convert('1') directly on the colour source image. The standard PIL "dither" is Floyd-Steinberg error diffusion, which is generally pretty good (depending on the image), but there are a variety of other options, e.g. random dither and ordered dither, although you'd have to code them yourself, so they'd be quite a bit slower.
The conversion algorithm(s) you use in the code in the OP is just simple thresholding, which generally loses a lot of detail, although it's easy to write. But I guess in this case you were just trying to confirm your theory about grey pixels being present in the final image. But as sebdelsol said, it just looks like there are grey pixels due to the "noise", i.e. regions containing a lot of black and white pixels mixed together, which you should be able to verify if you zoom into the image.
FWIW, if you do want to do your own pixel-by-pixel processing of whole images it's more efficient to get a list of pixels using im.getdata() and put them back into an image with im.putdata(), rather than doing that flopix[i,j] stuff. Of course, if you don't need to know coordinates, algorithms that use im.point() are usually pretty quick.
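A small sketch of that getdata()/putdata() pattern (threshold value borrowed from the question):

```python
from PIL import Image

# Uniform grey test image; in practice this is your converted 'L' image.
gray = Image.new('L', (4, 4), 90)

# Threshold every pixel in one pass instead of indexing pixel-by-pixel.
pixels = gray.getdata()
bw_data = [0 if p < 128 else 255 for p in pixels]

bw = Image.new('L', gray.size)
bw.putdata(bw_data)
print(bw.getpixel((0, 0)))  # 0
```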
Finally, JPEG isn't really suitable for B&W images, it was designed for images with (mostly) continuous tone. Try saving as PNG; the resulting files will probably be a lot smaller than the equivalent JPEGs. It's possible to reduce JPEG file size by saving with low quality settings, but the results generally don't look very good.
You'd rather use convert to produce a mode '1' image. It would be faster and better since it uses dithering by default.
bw = im.convert('1')
The greys you see appear probably in the parts of the image with noise near the 128 level, that produces high frequency B&W that looks grey.
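To make the dithering choice explicit, convert('1') accepts a dither argument; a small sketch (Image.NONE is the classic spelling, newer Pillow also offers Image.Dither.NONE):

```python
from PIL import Image

im = Image.new('L', (8, 8), 128)

# Default conversion applies Floyd-Steinberg error diffusion.
dithered = im.convert('1')

# Plain thresholding instead of dithering.
thresholded = im.convert('1', dither=Image.NONE)
print(thresholded.mode)  # '1'
```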
My ideas are:
1.0. [unsolved, hard image-detection] Breaking image into squares and removing borders, surely other techniques!
1.1. [unsolved] ImageMagick: crop (instructions here), remove certain borders -- this may take a lot of time to locate the grid, an image-detection problem (comparing white/black here) -- or there may be some magic-wand-style filter.
1.2. [unsolved] Python: you probably need this: from PIL import Image.
Obviously, GIMP's eraser is the wrong way to solve this problem since it's slow and error-prone. How would you remove the grid programmatically?
P.S. Casual discussion about this problem on Graphics.SE here, which contains more physical and mechanical hacks.
If all images consist of black lines over a gray grid, you could adjust the white threshold to remove the grid (e.g. with ImageMagick):
convert -white-threshold 80% with-grid.png without-grid.png
You will probably have to experiment with the exact threshold value. 80% worked for me with your sample image. This will make the lines pixelated. But perhaps resampling can reduce that to an acceptable amount, e.g. with:
convert -resize 200% -white-threshold 80% -resize 50% with-grid.png without-grid.png
In your image the grid is somewhat lighter than the drawing, so we can set a threshold, and filter the image such that all 'light' pixels are set to white. Using PIL it could look like this:
from PIL import Image

def filter(x):
    # 200 is our cutoff; try adjusting it to see the difference.
    if x > 200:
        return 255
    return x
im = Image.open('bird.png')
im = im.point(filter)
im.show()
Processing your uploaded image with this code gives:
Which in this case is a pretty good result. Provided your drawing is darker than the grid, you should be able to use this method without too many problems.
Feedback to the answers: emulbreh and fraxel
The Python version utilizes ImageMagick, so let's consider ImageMagick. It does not work with a colored version like the one below, due to different color-channel profiles. Let's investigate this a bit further.
$ convert -white-threshold 0% bird.png without.png
This picture shows the amount of noise in the original scanned picture.
Puzzle: removing the right-hand corner as an example
I inverted the colors with $ convert -negate whiteVersion.png blackVersion.png to make it easier to visualize. Now, with the black photo below, I wanted to remove the blue right corner, i.e. make it black -- meaning I want to set the B and G channels to 0 at 100% channel value.
$ convert -channel BG -threshold 100% bbird.png without.png
Now the only thing left is of course the red channel: I removed G and B, but white still has red left. Now how can I remove just the right-hand corner? I need to specify the area and then do the earlier operations.
How can I get this working with arbitrary photo where you want to remove certain color but leave some colors intact?
I don't know an easy way. The first problem is color detection -- you specify some condition on the colors (R,G,B) with an inequality. If the condition is true, you remove the color in just that part. Now you do this for all basic colors, i.e. when (R,G,B)=(100%,0,0), (R,G,B)=(0,100%,0) and (R,G,B)=(0,0,100%). Does a ready-made implementation exist for this? Probably, but it is much nicer to do it yourself -- puzzle set!
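One way to express such a color condition programmatically, sketched here with numpy instead of ImageMagick (the blue-dominance cutoff of 50 and the right-half region are purely illustrative assumptions):

```python
import numpy as np
from PIL import Image

# Uniform bluish test image standing in for the scanned photo.
im = Image.new('RGB', (8, 8), (30, 40, 200))
arr = np.array(im)

# Condition: pixels whose blue channel dominates both others by 50.
r = arr[..., 0].astype(int)
g = arr[..., 1].astype(int)
b = arr[..., 2].astype(int)
mask = (b > r + 50) & (b > g + 50)

# Restrict the edit to the right-hand half of the image only.
region = np.zeros(mask.shape, dtype=bool)
region[:, arr.shape[1] // 2:] = True

# Set the matching pixels in that area to black, leave the rest intact.
arr[mask & region] = (0, 0, 0)
out = Image.fromarray(arr)
```

The same mask-and-region idea generalises to any (R,G,B) inequality and any area of an arbitrary photo.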
Prerequisite knowledge
Tutorials here and here about ImageMagick.
In order to understand this topic, we need some basic color theory: white is a mixture of all colors, and black is the absence of color.
I want to get the most prominent color of an image, and the language can be in either python or ruby.
Is this easily done?
I don't know if this is what you mean, but maybe it will be helpful:
require 'rubygems'
require 'RMagick'
include Magick
image = Image.read("stack.png")[0]
hash = image.color_histogram
color, number = hash.max{|a,b| a[1] <=> b[1]}
puts color.to_color
This worked like a charm for a very simple image (only 5 colors), but it should work for more complex images too (I have not tested that; the returned hash will be quite big in that case, so you might want to use quantize on your image before using color_histogram).
Some resources :
color_histogram
quantize
I hope this was useful to you. :)
OK, let me introduce a library for Ruby.
Using Camellia, http://camellia.sourceforge.net/examples.html, you can label the area with the most prominent color.
Not sure if this is what you mean, but the Python PIL has im.histogram() and im.getcolors() functions. http://effbot.org/imagingbook/image.htm
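A small sketch of the getcolors() route for the most prominent colour (the solid-red test image is a stand-in for a real photo):

```python
from PIL import Image

im = Image.new('RGB', (10, 10), (255, 0, 0))

# getcolors() returns [(count, colour), ...]; it returns None if the image
# has more colours than maxcolors, so raise the cap for photographs.
counts = im.getcolors(maxcolors=256 * 256 * 256)

# Tuples sort by count first, so max() yields the most frequent colour.
count, color = max(counts)
print(color)  # (255, 0, 0)
```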
I need to segment an image into regions. I'm using PIL, but I found no module to segment an image in PIL. I need these segmented regions as a list or dictionary.
Actually, I'm trying to compare images for similarity in a content-aware fashion; for that I need to segment the image. I tried the segwin tool, but it draws another image (which is not required and is also time-consuming).
Thanks in advance
The easiest way to segment an image into regions is to create another image called a labelmap. "Region 1" is represented by all the 1-valued pixels within the labelmap, and so on. If you need the pixels of "region 3", you just binarize the labelmap with a threshold equal to 3 and multiply the result with the original image.
Like Oliver, I advise WrapITK.
For this task I prefer numpy and scipy. In terms of image processing these two have all you need. For array math I recommend numexpr. Take a look at http://docs.scipy.org/doc/scipy/reference/ndimage.html
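For example, scipy.ndimage.label builds exactly the kind of labelmap described above; a minimal sketch:

```python
import numpy as np
from scipy import ndimage

# Toy binary image with two separate blobs.
img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1
img[5:7, 5:7] = 1

# label() assigns 0 to background and a distinct integer to each
# connected region, returning the labelmap and the region count.
labelmap, num_regions = ndimage.label(img)
print(num_regions)  # 2

# Pixels of region 2 as a boolean mask:
region2 = labelmap == 2
```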
Take a look at the PIL Handbook, you can use the "crop" function to get a subregion of the image.
You might want to try the python bindings for ITK, a segmentation tool in C++.