How to separate each white blob from my png images? - python

I am given a PNG image which is strictly two colours: black and white. More specifically, it has a black background and some white marks (which we call blobs). Each image has about 30 to 50 such blobs.
Our task is to generate those 30 to 50 separate images from the given image, each containing one blob.
For example, we have a given image:
We need to convert it into images like:
And so on with all the blobs. Please guide me on how to do it, I am comfortable with all standard image processing libraries in python.

I think you are looking for scipy.ndimage.label (formerly scipy.ndimage.measurements.label):
from scipy.ndimage import label

# label() returns the labelled array and the number of blobs found
lb, num_features = label(my_bw_image)
msks = []
for li in range(1, num_features + 1):
    msks.append(lb == li)   # boolean mask for blob number li
You should have all your masks in the msks list.
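If you then want to write each blob out as its own black-and-white image file, here is a minimal sketch building on the same labelling (the blob_XX.png output names are just placeholders):
import numpy as np
from PIL import Image
from scipy.ndimage import label

# my_bw_image: 2-D array, 0 for the black background, non-zero for the white blobs
lb, num_features = label(my_bw_image)
for li in range(1, num_features + 1):
    blob = (lb == li).astype(np.uint8) * 255          # white blob on a black background
    Image.fromarray(blob).save(f"blob_{li:02d}.png")  # placeholder output filename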

Related

How do I increase the resolution of my GIF file?

I am trying to create a GIF file. The file is created successfully, but it looks pixelated, so I would like to know how to increase its resolution.
Here is the code:
import PIL
from PIL import Image
import numpy as np

image_frames = []
days = np.arange(0, 12)
for i in days:
    new_frame = PIL.Image.open(
        r"C:\Users\Harsh Kotecha\PycharmProjects\pythonProject1\totalprecipplot" + "//" + str(i) + ".jpg"
    )
    image_frames.append(new_frame)
image_frames[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=image_frames[1:],
    save_all=True,
    duration=800,
    loop=0,
    quality=100,
)
Here is the GIF file:
Here are the original images:
image1
image2
image3
Updated Answer
Now that you have provided some images, I had a go at disabling the dithering:
#!/usr/bin/env python3
from PIL import Image

# User editable values
method = Image.FASTOCTREE
colors = 250

# Load images precip-01.jpg through precip-12.jpg and quantize each with dithering disabled
imgs = []
for i in range(1, 13):
    filename = f'precip-{i:02d}.jpg'
    print(f'Loading: {filename}')
    try:
        im = Image.open(filename)
        pImage = im.quantize(colors=colors, method=method, dither=0)
        imgs.append(pImage)
    except OSError:
        print(f'ERROR: Unable to open {filename}')

imgs[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=imgs[1:],
    save_all=True,
    duration=800,
    loop=0
)
Original Answer
Your original images are JPEGs, which means they likely have many thousands of colours [2]. When you make an animated GIF (or even a static GIF), each frame can only have 256 colours in its palette.
This can create several problems:
each frame gets a new, distinct palette stored with it, thereby increasing the size of the GIF (each palette is 0.75kB)
colours get dithered in an attempt to make the image look as close as possible to the original colours
different colours can get chosen for frames that are nearly identical which means colours flicker between distinct shades on successive frames - can cause "twinkling" like stars
If you want to learn about GIFs, you can learn 3,872 times as much as I will ever know by reading Anthony Thyssen's excellent notes here, here and here.
Your image is suffering from the first problem because it has 12 "per frame" local colour tables as well as a global colour table [3]. It is also suffering from the second problem - dithering.
To avoid the dithering, you probably want to do some of the following:
load all images and append them all together into a 12x1 monster image, and find the best palette for all the colours. As all your images are very similar, I think that you'll get away with generating a palette just from the first image without needing to montage all 12 - that'll be quicker
now palettise each image, with dithering disabled and using the single common palette
save your animated sequence of the palettised images, pushing in the single common palette from the first step above (see the sketch after this list)
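One way to sketch those steps with Pillow, reusing the method and colour count from the code above (the precip-NN.jpg filenames are assumptions carried over from that code):
from PIL import Image

# Build a single common palette from the first frame
first = Image.open('precip-01.jpg')
common = first.quantize(colors=250, method=Image.FASTOCTREE, dither=0)

# Palettise every remaining frame against that common palette, dithering disabled
frames = [common]
for i in range(2, 13):
    im = Image.open(f'precip-{i:02d}.jpg')
    frames.append(im.quantize(palette=common, dither=0))

# Save the animation; every frame now shares one colour table
frames[0].save('precipitation.gif', save_all=True, append_images=frames[1:],
               duration=800, loop=0)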
[2]: You can count the number of colours in an image with ImageMagick, using:
magick YOURIMAGE -format %k info:
[3]: You can see the colour tables in a GIF with gifsicle using:
gifsicle -I YOURIMAGE.GIF

Image.open() gives a plain white image

I am trying to edit this image:
However, when I run
im = Image.open(filename)
im.show()
it outputs a completely plain white image of the same size. Why is Image.open() not working? How can I fix this? Is there another library I can use to get non-255 pixel values (the correct pixel array)?
Thanks,
Vinny
Image.open actually seems to work fine, as does getpixel, putpixel and save, so you can still load, edit and save the image.
The problem seems to be that the temp file the image is saved in for show is just plain white, so the image viewer shows just a white image. Your original image is 16 bit grayscale, but the temp image is saved as an 8 bit grayscale.
My current theory is that there might actually be a bug in show where a 16 bit grayscale image is just "converted" to 8 bit grayscale by capping all pixel values to 255, resulting in an all-white temp image since all the pixel values in the original are above 30,000.
If you set a pixel to a value below 255 before calling show, that pixel shows correctly. Thus, assuming you want to enhance the contrast in the picture, you can open the picture, map the values to a range from 0 to 255 (e.g. using numpy), and then use show.
from PIL import Image
import numpy as np

# Load the 16-bit grayscale PNG as a NumPy array
arr = np.array(Image.open("Rt5Ov.png"))
# Rescale the values to the 0-255 range to restore contrast
arr = (arr - arr.min()) * 255 // (arr.max() - arr.min())
img = Image.fromarray(arr.astype("uint8"))
img.show()
But as said before, since save seems to work as it should, you could also keep the 16 bit grayscale depth and just save the edited image instead of using show.
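For instance, a tiny sketch of that save-based route (the edited pixel is just an arbitrary example):
from PIL import Image

im = Image.open("Rt5Ov.png")        # loads fine, even though show() renders it white
value = im.getpixel((0, 0))         # reading pixel values works
im.putpixel((0, 0), value // 2)     # so does editing them
im.save("edited.png")               # saving keeps the 16-bit data, unlike show()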
You can use the OpenCV library for loading images.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('image file', cv2.IMREAD_UNCHANGED)  # IMREAD_UNCHANGED keeps the original bit depth
plt.imshow(img, cmap='gray')
plt.show()

Extract text of a certain color ignoring the rest

I have an image and need the text from that image. I only need to extract the time, which is in yellow, and ignore the background text.
I am using Tesseract (via pytesseract) with Python.
I tried converting RGB to grey but am still getting garbage results. It is reading data from the background.
from PIL import Image
import pytesseract

image_file = Image.open('timeline_with_background_text.png')
image_file = image_file.convert('L')  # convert image to grayscale
image_file.save('question.png')
text = pytesseract.image_to_string(image_file, lang='eng', config='--psm 6')
print(text)
From the image, I just need to extract the time that is displayed in yellow, like "34:53".
You may be able to do this with the ImageMagick library in Python.
If your yellow text will always be the exact same yellow, perhaps you could do something like this.
First, get the hex value of the yellow colour you want to keep (let's say it's #ffff00).
Then, use ImageMagick to fill any colour EXCEPT that #ffff00 colour with black. That should leave you with an image that only shows your time.
convert original.png -fill black +opaque '#ffff00' onlyTime.png
https://www.imagemagick.org/script/command-line-options.php#opaque
In case the yellow colour is not always exactly the same, you can try to play around with the -fuzz option.
https://www.imagemagick.org/script/command-line-options.php#fuzz
Using the image you provided, I tried the following:
.\convert.exe C:\YLD2g.png -fill black -fuzz 20% +opaque '#c0861e' c:\onlyTime2.png
and the result was:
onlyTime2
That should be good enough for Tesseract.
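If you would rather stay in Python, here is a small sketch of the same idea with PIL and NumPy; the target yellow (taken from the #c0861e value used in the command above) and the fuzz tolerance are assumptions you would tune for your image:
from PIL import Image
import numpy as np
import pytesseract

img = np.array(Image.open('timeline_with_background_text.png').convert('RGB'))

# Keep only pixels close to the target yellow; black out everything else
target = np.array([192, 134, 30])    # RGB of #c0861e, an assumed value for the yellow text
tolerance = 60                       # "fuzz" - how far a pixel may deviate per channel
mask = np.all(np.abs(img.astype(int) - target) < tolerance, axis=-1)
out = np.zeros_like(img)
out[mask] = [255, 255, 255]          # show the kept text as white for OCR

text = pytesseract.image_to_string(Image.fromarray(out), config='--psm 6')
print(text)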

How to convert a 1 channel image into a 3 channel with PIL?

I have an image that has one channel. I would like to duplicate this one channel so that I get a new image with the same channel repeated three times. Basically, making a quasi RGB image.
I see some info on how to do this with OpenCV, but not in PIL. It looks easy in NumPy, but again, PIL is different. I don't want to get into the habit of jumping from library to library all the time.
Here's one way without looking too hard at the docs.
Fake image:
from PIL import Image

im = Image.new('P', (16, 4), 127)
Get the (pixel) size of the single-band image; create a new 3-band image of the same size; use zip to create pixel tuples from the original; put that into the new image.
w, h = im.size
ima = Image.new('RGB', (w,h))
data = zip(im.getdata(), im.getdata(), im.getdata())
ima.putdata(list(data))
Or even possibly
new = im.convert(mode='RGB')
Just use:
image = Image.open(image_info.path).convert("RGB")
This can convert both 1-channel and 4-channel images to 3-channel.
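If you already have the single band open as a PIL image, another option worth trying is Image.merge, which stacks the same band three times. A small sketch, assuming im is a mode "L" image and 'single_band.png' is a placeholder filename:
from PIL import Image

im = Image.open('single_band.png').convert('L')   # placeholder input
rgb = Image.merge('RGB', (im, im, im))            # same grayscale band copied into R, G and B
rgb.save('three_band.png')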

Find Image components using python/PIL

Is there a function in PIL/Pillow that, for a grayscale image, will separate the image into sub-images containing the components that make up the original image? For example, a PNG grayscale image with a set of blocks in it. Here, the images always have high contrast against the background.
I don't want to use OpenCV; I just need some general blob detection, and was hoping Pillow/PIL might have something that does that already.
Yes, it is possible. You can use edge detection algorithms in PIL.
Sample code:
from PIL import Image, ImageFilter
image = Image.open('/tmp/sample.png').convert('RGB')
image = image.filter(ImageFilter.FIND_EDGES)
image.save('/tmp/output.png')
sample.png :
output.png:
Not using PIL, but worth a look I think:
I start with a list of image files that I've imported as a list of numpy arrays, and I create a list of boolean versions where the threshold is > 0
from skimage.measure import label, regionprops
import numpy as np
bool_array_list = []
for image in image_files:
    bool_array = np.copy(image)
    bool_array[np.where(bool_array > 0)] = 1
    bool_array_list.append(bool_array)

img_region_list = []
Then I use label to identify the different areas, using 8-directional connectivity, and regionprops gives me a bunch of metrics, such as size and location.
for item in bool_array_list:
    tmp_region_list = regionprops(label(item, connectivity=2))
    img_region_list.append(tmp_region_list)
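From there, each regionprops entry carries a bbox you can use to crop the individual components out as sub-images. A small sketch, assuming the arrays in image_files line up with img_region_list as above:
# Crop each detected component out of its source image using the region's bounding box
sub_images = []
for image, regions in zip(image_files, img_region_list):
    for region in regions:
        minr, minc, maxr, maxc = region.bbox   # bounding box of this component
        sub_images.append(image[minr:maxr, minc:maxc])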
