I want to get the most prominent color of an image, and the language can be either Python or Ruby.
Is this easily done?
I don't know if this is what you mean, but maybe it will be helpful:
require 'rubygems'
require 'RMagick'
include Magick

# Read the first frame of the image
image = Image.read("stack.png")[0]

# Map each color in the image to its pixel count
hash = image.color_histogram

# Pick the entry with the highest count
color, number = hash.max { |a, b| a[1] <=> b[1] }
puts color.to_color
This worked like a charm for a very simple image (only 5 colors), but it should work for more complex images too (I have not tested that; the returned hash will be quite big in that case, so you might want to quantize your image before calling color_histogram).
Some resources:
color_histogram
quantize
I hope this was useful to you. :)
OK, let me introduce a library for Ruby. Using Camellia (http://camellia.sourceforge.net/examples.html), you can label the area with the most prominent color.
Not sure if this is what you mean, but Python's PIL has im.histogram() and im.getcolors() functions: http://effbot.org/imagingbook/image.htm
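For example, a minimal sketch using getcolors() (the filename and the maxcolors limit are my own assumptions):

from PIL import Image

im = Image.open("stack.png").convert("RGB")

# getcolors() returns a list of (count, color) pairs, or None if the
# image has more distinct colors than the maxcolors limit allows
colors = im.getcolors(maxcolors=im.size[0] * im.size[1])

# The pair with the highest count is the most prominent color
count, color = max(colors)
print(color)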
I've been following this tutorial, but the problem is that my squares have illustrations on them, which causes OpenCV to pick up on those as well. At least, I think that's what the problem is.
Original image:
I am aware this might work better with a black background but this is all I have to work with for now.
This is the result of my attempt:
Windows, Python 2, OpenCV 3.3-dev
Try processing the S channel in HSV space, like this:
1. Convert to HSV.
2. Separate the S channel.
3. Threshold the S channel.
4. Apply some post-processing (such as morphological operations).
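A minimal OpenCV sketch of those steps (the filename, the Otsu thresholding, and the kernel size are my assumptions to adapt):

import cv2

# 1. Read the image and convert from BGR to HSV
img = cv2.imread("squares.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 2. Separate the S (saturation) channel
s = hsv[:, :, 1]

# 3. Threshold the S channel; Otsu picks the threshold automatically
_, mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Post-process with a morphological close to fill small holes
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("mask.png", mask)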
So, I have a PNG image file like the following example, and I need it to be converted into PGM format.
I'm using Ubuntu and Python, so either terminal or Python tools would suit just fine. And there sure are plenty of ways to do this: the ImageMagick convert command, the pngtopam package, the Python PIL library, etc.
But the point is, the quality of the image is essential in my case, and all of those failed to keep it, always ending up with:
Needless to say, this is totally not what I want to see. The interesting thing is that when I tried to convert the same image to PGM manually using GIMP, it turned out quite well, looking exactly the way I'd like it to, i.e. the same as the PNG one.
So that means it is possible to get a fine-quality PGM image after all, and I'd really appreciate it if someone could tell me how to do that with terminal/Python tools. I guess there should be some ImageMagick option that does the trick; it's just that I'm not aware of any.
You lost the antialiasing, which is conveyed via the alpha channel. To preserve it, use:
convert in.png -flatten out.pgm
Without -flatten, convert simply deletes the alpha channel; with -flatten it composites the input image against the background color, which is white by default.
Here are the results, magnified 10x so you can see what's going on:
Not flattened:
Flattened:
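If you'd rather stay in Python, here is a sketch of the same compositing with PIL (the filenames are placeholders):

from PIL import Image

im = Image.open("in.png").convert("RGBA")

# Paste onto a white background using the alpha channel as the mask,
# which composites instead of discarding the antialiasing
background = Image.new("RGB", im.size, (255, 255, 255))
background.paste(im, mask=im.split()[3])

# PIL writes mode "L" (grayscale) images as PGM
background.convert("L").save("out.pgm")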
I have images such as the one below from which I need to count the prominent white spots. Unfortunately my object counting algorithm is becoming confused due to those "fuzzy" white areas. It can sometimes see hundreds of objects there.
So what I'm wondering is whether there's some way to perhaps exaggerate the white spots and suppress the "fuzzy" areas either using filters in GIMP or Python libraries.
Thank you!
Increase the contrast in GIMP.
You probably want an adaptive threshold.
The modules that I know have this in Python are scikit-image and OpenCV.
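For example, a rough OpenCV sketch (the filename, block size, and offset are assumptions to tune):

import cv2

# Load the image as grayscale
img = cv2.imread("spots.png", cv2.IMREAD_GRAYSCALE)

# Each pixel is compared against the mean of its local neighborhood;
# the negative offset keeps only pixels clearly brighter than their
# surroundings, which suppresses the broad fuzzy areas
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 51, -10)

cv2.imwrite("mask.png", mask)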
I ended up using G'MIC's bilateral filtering; it was the perfect tool for the job.
I am new to Python, and I'm trying to use PIL to perform a parsing task I need for an Arduino project. This question pertains to the Image.convert() method and its options for color palettes, dithering, etc.
I've got some hardware capable of displaying images with only 16 colors at a time (but they can be specified as RGB triplets). So I'd like to automate the task of taking an arbitrary true-color PNG image, choosing an "optimum" 16-color palette to represent it, and converting the image to a palettized one containing ONLY those 16 colors.
I want to use dithering. The problem is, the image.convert() method seems to be acting a bit funky. Its arguments aren't completely documented (PIL documentation for Image.convert()), so I don't know if it's my fault or if the method is buggy.
A simple version of my code follows:
import Image

# Size must be a (width, height) tuple
MyImageTrueColor = Image.new('RGB', (100, 100))  # or whatever dimensions...
# I paste some images from several other PNG files in using MyImageTrueColor.paste()
MyImageDithered = MyImageTrueColor.convert(mode='P',
                                           colors=16,
                                           dither=1)
Based on some searches I did (e.g.: How to reduce color palette with PIL) I would think this method should do what I want, but no luck. It dithers the image, but yields an image with more than 16 colors.
Just to make sure, I removed the "dither" argument. Same output.
I re-added the "dither=1" argument and threw in the Image.ADAPTIVE argument (as shown in the link above) just to see what happened. This resulted in an image that contained 16 colors, but NO dithering.
Am I missing something here? Is PIL buggy? The solution I came up with was to perform two steps, but that seems sloppy and unnecessary. I want to figure out how to do this right. :-) For completeness, here's the version of my code that yields the correct result, but does it in a sloppy way: the first step results in a dithered image with more than 16 colors, and the second results in an image containing only 16 colors.
MyImage_intermediate = MyImageTrueColor.convert(mode='P',
                                                colors=16)
MyImageDithered = MyImage_intermediate.convert(mode='P',
                                               colors=16,
                                               dither=1,
                                               palette=Image.ADAPTIVE)
Thanks!
Well, you're not calling things properly, so it shouldn't be working… but even if you were calling things right, I'm not sure it would work.
First, the "official" free version of the PIL Handbook is both incomplete and out of date; the draft version at http://effbot.org/imagingbook/image.htm is less incomplete and less out of date.
im.convert(“P”, **options) ⇒ image
Same, but provides better control when converting an “RGB” image to an
8-bit palette image. Available options are:
dither=. Controls dithering. The default is FLOYDSTEINBERG, which
distributes errors to neighboring pixels. To disable dithering, use
NONE.
palette=. Controls palette generation. The default is WEB, which is
the standard 216-color “web palette”. To use an optimized palette, use
ADAPTIVE.
colors=. Controls the number of colors used for the palette when
palette is ADAPTIVE. Defaults to the maximum value, 256 colors.
So, first, you can't use colors without ADAPTIVE, for an obvious reason: the only other choice is WEB, which only handles a fixed 216-color palette.
And second, you can't pass 1 to dither. That might work if it happened to be the value of FLOYDSTEINBERG, but that's 3. So, you're passing an undocumented value; who knows what that will do? Especially since, looking through all of the constants that sound like possible names for dithering algorithms, none of them have the value 1.
So, you could try changing it to dither=Image.FLOYDSTEINBERG (along with palette=Image.ADAPTIVE) and see if that makes a difference.
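That is, something like:

MyImageDithered = MyImageTrueColor.convert(mode='P',
                                           dither=Image.FLOYDSTEINBERG,
                                           palette=Image.ADAPTIVE,
                                           colors=16)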
But, looking at the code, it looks like this isn't going to do any good:
if mode == "P" and palette == ADAPTIVE:
    im = self.im.quantize(colors)
    return self._new(im)
This happens before we get to the dithering code. So it's exactly the same as calling the (now deprecated/private) method quantize.
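In other words, with palette=Image.ADAPTIVE that call reduces to the following (using the question's variable names):

# The ADAPTIVE branch returns before the dithering code ever runs,
# so this is all that actually happens
MyImage_intermediate = MyImageTrueColor.quantize(16)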
Multiple threads suggest that the high-level convert function was only intended to expose "dither to web palette" or "map to nearest N colors". That seems to have changed slightly with 1.1.6 and beyond, but the documentation and implementation are both still incomplete. At http://comments.gmane.org/gmane.comp.python.image/2947 one of the devs recommends reading the PIL/Image.py source.
So, it looks like that's what you need to do. Whatever Image.convert does in Image.WEB mode, you want to do that—but with the palette that would be generated by Image.quantize(colors), not the web palette.
Of course most of the guts of that happens in the C code (under self.im.quantize, self.im.convert, etc.), but you may be able to do something like this pseudocode:
dummy = img.convert(mode='P', palette=Image.ADAPTIVE, colors=16)
intermediate = img.copy()
intermediate.setpalette(dummy.getpalette())
dithered = intermediate._new(intermediate.im.convert('P', Image.FLOYDSTEINBERG))
Then again, you may not. You may need to look at the C headers or even source to find out. Or maybe ask on the PIL mailing list.
PS, if you're not familiar with PIL's guts, img.im is the C imaging object underneath the PIL Image object img. From my past experience, this isn't clear the first 3 times you skim through PIL code, and then suddenly everything makes a lot more sense.
I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling.
That's easy, just use the undocumented colors argument:
result = image.convert('P', palette=Image.ADAPTIVE, colors=5)
I'm using Image.ADAPTIVE to avoid dithering.
I assume you want to do something more sophisticated than posterize. "Sampling," as you say, will take some finesse, as the 5 most common colors in the image are likely to be similar to one another. Maybe take a look at the 5 most separated peaks in a histogram.
The short answer is to use the Image.quantize method. For more info, see: How do I convert any image to a 4-color paletted image using the Python Imaging Library?
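For instance, a minimal sketch (the filename is a placeholder):

from PIL import Image

im = Image.open("photo.png").convert("RGB")

# quantize() builds an adaptive palette of the requested size and
# maps every pixel onto it
reduced = im.quantize(5)

# Convert back to RGB to read the five sampled colors directly
print(reduced.convert("RGB").getcolors())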