I have two images: an image with text and an image that serves as the dirty background.
Clean Image
Dirty Background Image
How can I overlay the clean image onto the dirty background image using Python? Please assume that the clean image is smaller than the dirty background image.
There's a library called Pillow (a fork of PIL) that can do this for you. You can play around with the placement a little, but I think it looks good.
from PIL import Image

# Open your two images
cleantxt = Image.open('cleantext.jpg')
dirtybackground = Image.open('dirtybackground.jpg')
# Convert the image to RGBA
cleantxt = cleantxt.convert('RGBA')
# Return a sequence object of every pixel in the text
data = cleantxt.getdata()
new_data = []
# Turn every pixel that looks lighter than gray into a transparent pixel
# This turns everything except the text transparent
for item in data:
    if item[0] >= 123 and item[1] >= 123 and item[2] >= 123:
        new_data.append((255, 255, 255, 0))
    else:
        new_data.append(item)
# Replace the old pixel data of the clean text with the transparent pixel data
cleantxt.putdata(new_data)
# Resize the clean text to fit on the dirty background (which is 850 x 555 pixels)
cleantxt.thumbnail((555, 555), Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is its replacement
# Save the clean text if we want to use it for later
cleantxt.save("cleartext.png", "PNG")
# Overlay the clean text on top of the dirty background
## (0, 0) is the pixel where you place the top left pixel of the clean text
## The second cleantxt is used as a mask
## If you pass in a transparency, the alpha channel is used as a mask
dirtybackground.paste(cleantxt, (0,0), cleantxt)
# Show it!
dirtybackground.show()
# Save it!
dirtybackground.save("dirtytext.png", "PNG")
Here's the output image:
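If you'd rather centre the text than pin it to the top-left corner, here is a minimal sketch of the offset arithmetic (reusing the cleantxt and dirtybackground objects from above):
bg_w, bg_h = dirtybackground.size
txt_w, txt_h = cleantxt.size
# Centre the smaller image on the larger one
offset = ((bg_w - txt_w) // 2, (bg_h - txt_h) // 2)
dirtybackground.paste(cleantxt, offset, cleantxt)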
I have been trying to write code to extract cracks from an image using thresholding. However, I wanted to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below are the original image, the threshold image, and the code used to extract it.
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read image
img = cv2.imread('Original.png')
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Image processing: smoothing by averaging
blur = cv2.blur(gray, (3, 3))
ret, th1 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)
inverted = np.invert(th1)
plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; matplotlib expects RGB
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(inverted, cmap='gray')
plt.title('Threshold'), plt.xticks([]), plt.yticks([])
plt.show()
Method 1
Assuming the circle in your images stays in one spot throughout your image set, you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this¹ by also expanding the circle inwards by one pixel to account for some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)  # the mask must be single-channel
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
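If you'd rather not use an image editor at all, here is a hypothetical sketch that draws the disc mask with OpenCV instead; the centre and radius are placeholder values you would measure from your own image:
import cv2
import numpy as np

# Placeholder centre and radius -- measure these from your own image
mask = np.zeros(inverted.shape[:2], dtype=np.uint8)
cv2.circle(mask, (430, 430), 400, 255, thickness=-1)  # filled white disc
masked = cv2.bitwise_and(inverted, inverted, mask=mask)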
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) contains no fully black pixels. It will, however, leave the text in the bottom left white.
# Make all the non-black pixels white
_, mask = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
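Applying it is then the same bitwise_and as in Method 1 (assuming inverted from the question's code is still in scope):
masked = cv2.bitwise_and(inverted, inverted, mask=mask)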
¹ The original is not the same size as your inverted image, so the mask I made won't actually fit; you'll have to make it yourself.
I would like to achieve something similar to this:
I currently have the image on the red background, but I am unsure how to draw a translucent rectangle like the one in the image above to put the text on, in order to make it pop out more. I'm pretty sure it can be achieved using OpenCV, but I am fairly new to Python and it seems very confusing. (I can't seem to do it properly and it's starting to annoy me.) Here is my current image (ignore the white outline):
Here is one way to achieve the same results in Python/OpenCV.
Read the input
Crop the desired region to darken
Create the same sized black image
Blend the two images (crop at 75%, black at 25%)
Draw text on the blended image
Copy the text image back to the same location in the input
Save results
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("chimichanga.jpg")
# define undercolor region in the input image
x,y,w,h = 66,688,998,382
# define text coordinates in the input image
xx,yy = 250,800
# compute text coordinates in undercolor region
xu = xx - x
yu = yy - y
# crop undercolor region of input
sub = img[y:y+h, x:x+w]
# create black image same size
black = np.zeros_like(sub)
# blend the two
blend = cv2.addWeighted(sub, 0.75, black, 0.25, 0)
# draw text on blended image
text = cv2.putText(blend, "CHIMICHANGA", (xu,yu), cv2.FONT_HERSHEY_SIMPLEX, 2, (255,255,255), thickness=2, lineType=cv2.LINE_8, bottomLeftOrigin=False)
# copy text filled region onto input
result = img.copy()
result[y:y+h, x:x+w] = text
# write result to disk
cv2.imwrite("chimichanga_result.jpg", result)
# display results
cv2.imshow("BLEND", blend)
cv2.imshow("TEXT", text)
cv2.imshow("RESULT", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
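If you want the box darker or lighter, adjust the two weights in cv2.addWeighted (the 75%/25% split from the steps above). For example, a hypothetical 50/50 blend for a darker under-colour:
# Darker under-colour: shift more weight onto the black image
darker = cv2.addWeighted(sub, 0.5, black, 0.5, 0)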
Result:
I'm trying to remove the background from product images and save them as transparent PNGs. I've got to a point where I can't figure out how and why I get a white line around the products, like a fuzziness (see second image); I don't know the real word for the effect. Also, I'm losing the Nike swoosh, which is white too :(
from PIL import Image
img = Image.open('test.jpg')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] > 247 and item[1] > 247 and item[2] > 247:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save("test.png", "PNG")
Any ideas how I can fix this so I get clean selections and edges?
Take a copy of your image and use PIL/Pillow's ImageDraw.floodfill() to flood-fill from the top-left corner with a reasonable tolerance - that way you will only fill up to the edges of the shirt and avoid the Nike logo.
Then take the background outline, make it white and everything else black, and try applying some morphology (from scikit-image, maybe) to dilate the white a little so it hides the jaggies.
Finally, put the resulting new layer into the image with putalpha().
I am really pushed for time, but here are the bones of it. Just missing the copy of the original image at the start and the putalpha() of the new alpha layer back at the end...
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt
im = Image.open('shirt.jpg')
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=10)
# DEBUG
im.show()
Experiment with the threshold (thresh) here. If you make it 50, it works much more cleanly and may be good enough to stop.
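For instance, a quick hypothetical loop to compare a few tolerances side by side:
from PIL import Image, ImageDraw

# Try a few flood-fill tolerances and eyeball the results
for t in (10, 30, 50):
    trial = Image.open('shirt.jpg')
    ImageDraw.floodfill(trial, xy=(0, 0), value=(255, 0, 255), thresh=t)
    trial.show()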
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255, 0, 255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with structuring element
closed = skimage.morphology.binary_closing(bgMask, strel)  # keyword `selem=` was renamed `footprint=` in scikit-image 0.19+, so pass it positionally
# DEBUG
Image.fromarray((closed*255).astype(np.uint8)).show()
If you are unfamiliar with morphology, Anthony Thyssen has some excellent notes worth reading here.
By the way, you could also use potrace to smooth the outline somewhat.
I had a bit more time today, so here is a more complete version. You can experiment with the morphology disk sizes and flood-fill thresholds according to your images until you find something tailored to your needs:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt and make a clean copy before we dink with it too much
im = Image.open('shirt.jpg')
orig = im.copy()
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=50)
# DEBUG
im.show()
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255, 0, 255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with structuring element to remove blobs
newalpha = skimage.morphology.binary_closing(bgMask, strel)  # `selem=` became `footprint=` in scikit-image 0.19+, so pass it positionally
# Perform a morphological dilation to expand mask right to edges of shirt
newalpha = skimage.morphology.binary_dilation(newalpha, strel)
# Make a PIL representation of newalpha, converting from True/False to 0/255
newalphaPIL = (newalpha*255).astype(np.uint8)
newalphaPIL = Image.fromarray(255-newalphaPIL, mode='L')
# DEBUG
newalphaPIL.show()
# Put new, cleaned up image into alpha layer of original image
orig.putalpha(newalphaPIL)
orig.save('result.png')
As regards using potrace to smooth the outline, you would save newalphaPIL as a PGM-format image because that is what potrace likes as input. So that would be:
newalphaPIL.save('newalpha.pgm')
Now you can play around, oops I meant "experiment carefully" with potrace to smooth the alpha outline. The basic command is:
potrace -b pgm newalpha.pgm -o smoothalpha.pgm
You can then re-load the image smoothalpha.pgm back into your Python and use it on the last line in the putalpha() call. Here is an animation of the difference between the original unsmoothed alpha and the smoothed one:
Look carefully at the edges to see the difference. You may want to experiment with resizing the alpha either to twice the size or half the size before smoothing to see what effect that has.
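For completeness, a minimal sketch of that re-load step (assuming orig is still the clean copy from the script above; the resize may or may not be needed depending on the size potrace renders at):
from PIL import Image

# Re-load the potrace-smoothed alpha and apply it to the original
smooth = Image.open('smoothalpha.pgm').convert('L')
smooth = smooth.resize(orig.size)  # potrace may render at a different size
orig.putalpha(smooth)
orig.save('result_smooth.png')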
I tried hard to convert a PNG to a bitmap smoothly but failed every time.
Now I think I might have found the reason: it's the alpha channel ('feather' in Photoshop).
Input image:
Output I've expected:
Current output:
I want to convert it to an 8-bit bitmap, colour every invisible (alpha) pixel purple (#FF00FF), and set those pixels to index zero (the very first palette entry).
But apparently, the background area and the invisible area around the actual image have different colours.
I want all of them coloured the same as the background.
What should I do?
I tried these three approaches:
# Attempt 1
image = Image.open(file).convert('RGB')

# Attempt 2
image = Image.open(file)
image = image.convert('P')
pp = image.getpalette()
pp[0] = 255
pp[1] = 0
pp[2] = 255
image.putpalette(pp)

# Attempt 3
image = Image.open('feather.png')
result = image.quantize(colors=256, method=2)
The third approach looks better, but it becomes the same as the others when I save it as a bitmap.
I just want to get this over with now; I have wasted too much time on this.
Even if I remove the background from the output file, it still looks awkward.
Your question is a little misleading, as you stated:
I want to convert it to 8bit Bitmap and colour every invisible(alpha) pixels to purple(#FF00FF) and set them to dot zero. (very first palette)
But in the description you gave an input image with no alpha channel. Luckily, I have seen your previous question, Convert PNG to 8 bit bitmap, so I obtained the image containing alpha (the one you mentioned in the description but didn't post).
Here is the image with alpha:
Now we have to obtain the .bmp equivalent of this image, in P mode.
from PIL import Image

image = Image.open(r"Image_loc")
# Create a magenta background the same size as the input
new_img = Image.new("RGB", (image.size[0], image.size[1]), (255, 0, 255))
# Composite the image over the magenta background, using the image's own
# alpha channel as the mask, then quantize down to a 256-colour palette
cmp_img = Image.composite(image, new_img, image).quantize(colors=256, method=2)
cmp_img.save("Destination_path.bmp")
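The question also asks for the magenta pixels to end up at palette index zero; quantize() makes no such guarantee, so here is a quick hypothetical check of where magenta actually landed (the path is a placeholder):
from PIL import Image

bmp = Image.open("Destination_path.bmp")
idx = bmp.getpixel((0, 0))               # palette index of a known background pixel
pal = bmp.getpalette()
print(idx, pal[3 * idx: 3 * idx + 3])    # expect [255, 0, 255] for magenta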
Output image:
I have two images, one overlay and one background.
I want to create a new image by editing the overlay image, manipulating it to show only the pixels that are blue in the background image.
I don't want to add the background; it is only for selecting the pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is the background image.
Image 2 is the overlay image.
Image 3 is the check I want to perform (finding which pixels are blue in the background and making the remaining pixels transparent).
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG = np.where(npimage[:,:,0] == npimage[:,:,1], 1, 0)
RequalsB = np.where(npimage[:,:,0] == npimage[:,:,2], 1, 0)
grey = (RequalsG * RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image
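For instance, either of these one-liners (trivial sketches on the same npimage array) would overwrite that channel wholesale:
npimage[:, :, 3] = 255   # fully opaque everywhere
npimage[:, :, 3] = 0     # fully transparent everywhere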