I tried to get the difference of two images. Here are my two images.
However, I only got a blank image like this
I am using the OpenCV package in Python. The code I use is:
import matplotlib.pyplot as plt

# image1 and image2 were loaded earlier, e.g. with cv2.imread
image3 = image1 - image2
plt.imshow(image3)
plt.show()
The backgrounds of the two images are not the same. I don't understand why the difference of the two images is just blank. Can someone help me with this?
Thank you in advance.
This works fine for me in Python/OpenCV using cv2.absdiff(). Note that plain subtraction of uint8 numpy arrays wraps around on underflow, so image1 - image2 can produce misleading values; cv2.absdiff() avoids that. I suggest you use cv2.imshow() to view your results and cv2.imwrite() to save them.
import cv2
import numpy as np

# read the two input images
image1 = cv2.imread('image1.png')
image2 = cv2.imread('image2.png')

# absolute per-pixel difference
diff = cv2.absdiff(image1, image2)

# report the range of difference values
print(np.amin(diff), np.amax(diff))

# save and display the result
cv2.imwrite('diff.png', diff)
cv2.imshow('diff', diff)
cv2.waitKey(0)
Result:
Min and Max Values In Diff:
0 91
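Since the maximum difference here is only 91, the diff will look dark on screen. If you want the changes to stand out, one option is to normalize it for display; a small sketch reusing diff from above:

import cv2

# rescale the diff so its largest value maps to 255, purely for viewing
vis = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX)
cv2.imshow('diff scaled', vis)
cv2.waitKey(0)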
I'm currently working on implementing in Python the algorithm presented in https://arxiv.org/abs/1611.03270. In that paper there is a part where we create epipolar lines and want to take the part of the image between those lines. Creating the lines is fairly easy and can be done, for instance, with the approach presented here: https://docs.opencv.org/3.4/da/de9/tutorial_py_epipolar_geometry.html. I tried to find a solution that would get me the part of the image between those lines (with some set width), but I couldn't find any. I know that I could manually check pixel values to see whether they lie under/above the lines, but maybe there is a more elegant solution to this problem? Do you have any ideas, or have you run into a similar problem in the past?
You can do it like this:
import numpy as np
import cv2

# let's say this is our image
np.random.seed(42)
img = np.random.randint(0, high=256, size=(400, 400), dtype=np.uint8)
cv2.imshow('random image', img)

# we can create a mask from the epipolar points and AND it with the original image
mask = np.zeros((400, 400), dtype=np.uint8)
pts = np.array([[20, 20], [100, 350], [165, 240], [30, 30]], np.int32)
cv2.fillPoly(mask, [pts], 255)
cv2.imshow('mask', mask)

filt_img = cv2.bitwise_and(img, mask)
cv2.imshow('filtered image', filt_img)
cv2.waitKey(0)
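If the lines come as (a, b, c) coefficients, e.g. from cv2.computeCorrespondEpilines, the mask can also be built directly from the point-to-line distance instead of hand-picked polygon corners. A minimal sketch, assuming a single line and a band of set width (band_around_line and half_width are my own names):

import numpy as np
import cv2

def band_around_line(img, line, half_width):
    # keep only pixels within half_width of the line a*x + b*y + c = 0
    a, b, c = line
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # perpendicular distance of every pixel to the line
    dist = np.abs(a * xs + b * ys + c) / np.hypot(a, b)
    mask = (dist <= half_width).astype(np.uint8) * 255
    return cv2.bitwise_and(img, img, mask=mask)

For the region between two epipolar lines, the same idea works: build one half-plane mask per line from the sign of a*x + b*y + c and AND the two masks together.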
Hi, I am using the code below to merge more than one image using the PIL module:
from PIL import Image
img0 = Image.open("w1.png")
img0 = img0.convert('RGBA')
img1 = Image.open("body.png")
img1 = img1.convert('RGBA')
img0.paste(img1, (0,0), img1)
img0.save('0.png')
img2 = Image.open("0.png")
img2 = img2.convert('RGBA')
img3 = Image.open("d1.png")
img3 = img3.convert('RGBA')
img2.paste(img3, (0,0), img3)
img2.show()
I would like to know if there is a way I can merge more than two images. I have 8 images that I need to merge.
Thank you for any suggestions.
Just... keep paste-ing new images on?
All Image.paste does is merge the parameter image onto the subject; you can keep doing that, and it will keep merging the new images.
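For example, with 8 layers that is just a loop; a minimal sketch (the file names beyond those in the question are hypothetical):

from PIL import Image

# hypothetical file names standing in for your 8 layers
layers = ["w1.png", "body.png", "d1.png", "d2.png",
          "d3.png", "d4.png", "d5.png", "d6.png"]

merged = Image.open(layers[0]).convert('RGBA')
for name in layers[1:]:
    layer = Image.open(name).convert('RGBA')
    # passing the layer as the third argument uses its alpha channel as the mask
    merged.paste(layer, (0, 0), layer)
merged.save('merged.png')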
If you want to merge images, I suggest a tool called stegsolve.
Here is the download URL:
https://github.com/eugenekolo/sec-tools/tree/master/stego/stegsolve/stegsolve
I have a small project here that I have been stuck on for weeks.
I have a display of 3840x2400 monochrome pixels. Nevertheless, it is driven as 1280(RGB)x2400, where each RGB subpixel maps to one monochrome pixel.
Therefore, in order to display a real 3840x2400 image, one has to map 3 consecutive pixels of the monochrome image to one pseudo-RGB pixel. This yields a 1280x2400 image, where each RGB subpixel corresponds to one real monochrome pixel.
I am trying to do this in Python 3.9 with numpy and PIL. The code is below:
from PIL import Image
import numpy as np

def TransTo1224(SourcePngFileName, DestPngFileName):
    # translate a PNG file from 3840x2400 to 1280x2400 (RGB)
    print('~~~~~~~')
    print(SourcePngFileName)
    imgSrc = Image.open(SourcePngFileName)
    dataSrc = np.array(imgSrc)
    dataDest = dataSrc.reshape(2400, 1280, 3)
    imgDest = Image.fromarray(dataDest, 'RGB')
    imgDest.save(DestPngFileName)

TransTo1224("./source/test1.png", "./output/test1.png")
I get an error:
dataDest = dataSrc.reshape(2400,1280,3)
ValueError: cannot reshape array of size 27648000 into shape (2400,1280,3)
I don't understand my mistake. If someone can help me, thank you in advance.
Try this:
dataDest = dataSrc.reshape(2400, 1280, 3, -1)
or
dataDest = dataSrc.reshape(2400, 1280, 3, 3)
Using dataDest = dataSrc.reshape(2400, 1280, 3) won't work, because the array has 27648000 elements, three times the 2400 * 1280 * 3 = 9216000 that shape needs.
OK, I solved my problem. It did indeed come from my input image: the code works with some images but not the one I wanted to remap. I also hadn't understood where the multiple of 3 came from in
3840 x 2400 x 3 = 27648000.
Well, my problem came from the mode of the image, which was RGB, so the array had three colour channels.
It was enough for me to convert the image to mode "L" (luminance) before doing the reshape:
from PIL import Image
import numpy as np

def TransTo1224(SourcePngFileName, DestPngFileName):
    # translate a PNG file from 3840x2400 to 1280x2400 (RGB)
    print('~~~~~~~')
    print(SourcePngFileName)
    imgSrc = Image.open(SourcePngFileName)
    imgSrc = imgSrc.convert('L')  # <----- convert to single-channel luminance
    dataSrc = np.array(imgSrc)
    dataDest = dataSrc.reshape(2400, 1280, 3)
    imgDest = Image.fromarray(dataDest, 'RGB')
    imgDest.save(DestPngFileName)

TransTo1224("./source/test1.png", "./output/test1.png")
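A quick way to catch this kind of mismatch up front is to print the image mode and array shape before reshaping; a small sketch:

from PIL import Image
import numpy as np

img = Image.open('./source/test1.png')
data = np.array(img)
print(img.mode, data.shape, data.size)
# mode 'RGB' gives (2400, 3840, 3) -> 27648000 values,
# while mode 'L' gives (2400, 3840) -> 9216000 = 2400 * 1280 * 3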
Thank you all for helping me
I am trying to produce this final image
from this original image
I tried erosions, dilations and brightness/contrast adjustments, but I am not able to get a result similar to this. Specifically, I cannot get a single grey pixel to generate several pixels around it, as is the case in the above image.
Any advice?
I am not quite sure I understand what you want. But here is one attempt to stretch the dynamic range of intensities using Python/OpenCV/Skimage.
Input:
import cv2
import numpy as np
import skimage.exposure
# load image
img = cv2.imread('white_spots.png')
# stretch dynamic range
result = skimage.exposure.rescale_intensity(img, in_range=(127.5,255), out_range=(0,255))
# save result
cv2.imwrite('white_spots_stretched.png', result)
# Display result
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
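If you also need each bright pixel to spread into the pixels around it, which is my guess at what the target image shows, a dilation pass could be combined with the stretch; a minimal sketch:

import cv2
import numpy as np

img = cv2.imread('white_spots.png')
# a 3x3 kernel grows each bright pixel into its 8 neighbours;
# more iterations (or a bigger kernel) spread it further
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=1)
cv2.imwrite('white_spots_dilated.png', dilated)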
I have 3 to 4 images and I am trying to combine them all into one image (put into one window) and then show them through the cv2.imshow() function. But every solution I have found to this problem assumes images of exactly the same dimensions, which is not my case: my images all have different dimensions. Kindly help me out with how to solve this problem. I have four images with different dimensions and want output like this:
||||||||||||||||||||||||||||||||||
||  Image1  ||  Image2  ||
||||||||||||||||||||||||||||||||||
||  Image3  ||  Image4  ||
||||||||||||||||||||||||||||||||||
Currently, I have code like this for two images, which works only on equally sized images:
import cv2
import numpy as np

im = cv2.imread('1.png')
img = cv2.imread('2.jpg')
both = np.hstack((im, img))
cv2.imshow('imgc', both)
cv2.waitKey(10000)
Use OpenCV's cv2.resize() function to resize the images and then do the combining task.
Always resize to a reference dimension such as 1000 x 800 (you can change it):
import cv2
import numpy as np

list_of_img_paths = [path2, path3, path4]
im = cv2.imread(path1)
imstack = cv2.resize(im, (1000, 800))
for path in list_of_img_paths:
    im = cv2.imread(path)
    im = cv2.resize(im, (1000, 800))
    # hstack joins the images horizontally
    imstack = np.hstack((imstack, im))
cv2.imshow('stack', imstack)
cv2.waitKey(0)
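To get the 2x2 layout shown in the question, you can vstack two hstacked rows after resizing everything to the same size; a sketch (the third and fourth file names are hypothetical):

import cv2
import numpy as np

size = (1000, 800)  # common (width, height) for all four images
paths = ['1.png', '2.jpg', '3.png', '4.png']  # the last two names are hypothetical
imgs = [cv2.resize(cv2.imread(p), size) for p in paths]

# two hstacked rows, vstacked into the final grid
top = np.hstack(imgs[:2])
bottom = np.hstack(imgs[2:])
grid = np.vstack((top, bottom))
cv2.imshow('grid', grid)
cv2.waitKey(0)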