When I read an image using OpenCV's imread function, its height and width come out swapped from what they should be. My original image has dimensions 610 by 406, but after reading it with cv2.imread its dimensions are 406 by 610. Also, if I rotate my original image before passing it to the function, nothing changes: the image that is read still has the original dimensions.
Please see example code and images for clarification:
So, below I have provided the input images: one is the original and the second one is rotated (I rotated it in Windows by right-clicking and selecting 'Rotate right'). The output I get for both images is the same. It seems to me that rotating the image did not actually change its shape. I think so because when I tried to put the rotated image here, the preview also showed the un-rotated version, so I had to take a screen capture of it and paste that instead.
This is the code:
import cv2

image = cv2.imread("C:/img_8075.jpg")
print("image shape: ", image.shape)
cv2.imshow("image", image)
cv2.waitKey(0)
image2 = cv2.imread("C:/img_8075_Rotated.jpg")
print("image shape: ", image2.shape)
cv2.imshow("image", image2)
cv2.waitKey(0)
The result I get for both images is the same:
image shape: (406, 610, 3)
image shape: (406, 610, 3)
I am unable to paste the input/output pictures here, since the site says you need '10 reputation' and I have only just joined.
Any suggestions would be helpful. Thanks!
I believe you are just getting the conventions mixed up. OpenCV Mat structures are accessed as (row, column).
So a 1920x1080 image will be 1080 rows by 1920 columns, i.e. (1080, 1920).
Mat.rows corresponds to the image's height, and Mat.cols corresponds to the image's width.
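A quick way to confirm the convention, as a minimal sketch (the file name is the asker's; cv2.rotate needs a reasonably recent OpenCV):

import cv2

img = cv2.imread("C:/img_8075.jpg")
rows, cols, channels = img.shape    # numpy shape is (rows, cols, channels)
print(rows, cols)                   # 406 610 for a 610-wide, 406-high image
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
print(rotated.shape)                # (610, 406, 3): rotating really does swap the two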
I am trying to edit this image:
However, when I run
from PIL import Image

im = Image.open(filename)
im.show()
it outputs a completely plain white image of the same size. Why is Image.open() not working? How can I fix this? Is there another library I can use to get non-255 pixel values (the correct pixel array)?
Thanks,
Vinny
Image.open actually seems to work fine, as do getpixel, putpixel and save, so you can still load, edit and save the image.
The problem seems to be that the temp file the image is saved in for show is just plain white, so the image viewer shows just a white image. Your original image is 16 bit grayscale, but the temp image is saved as an 8 bit grayscale.
My current theory is that there might actually be a bug in show where a 16 bit grayscale image is just "converted" to 8 bit grayscale by capping all pixel values at 255, resulting in an all-white temp image, since all the pixel values in the original are above 30,000.
If you set a pixel to a value below 255 before calling show, that pixel shows correctly. Thus, assuming you want to enhance the contrast in the picture, you can open the picture, map the values to a range from 0 to 255 (e.g. using numpy), and then use show.
from PIL import Image
import numpy as np

arr = np.array(Image.open("Rt5Ov.png"))
# stretch the 16 bit value range linearly onto 0..255
arr = (arr - arr.min()) * 255 // (arr.max() - arr.min())
img = Image.fromarray(arr.astype("uint8"))
img.show()
But as said before, since save seems to work as it should, you could also keep the 16 bit grayscale depth and just save the edited image instead of using show.
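For example, a minimal sketch (reusing the Rt5Ov.png file name from above, with a hypothetical output name):

from PIL import Image

im = Image.open("Rt5Ov.png")  # the 16 bit grayscale original
im.putpixel((0, 0), 0)        # example edit: darken one pixel
im.save("edited.png")         # save keeps the full bit depth, unlike show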
You can also use the OpenCV library for loading images, e.g. together with matplotlib for display:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('image file')
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # cv2 loads BGR, matplotlib expects RGB
plt.show()
I'm trying to convert EPS images to JPEG using Pillow, but the results are of low quality. I'm trying to use the resize method, but it gets completely ignored: I set the size of the JPEG image to (3600, 4700), but the resulting image has size (360, 470). My code is:
from PIL import Image

eps_image = Image.open('img.eps')
height = eps_image.height * 10
width = eps_image.width * 10
new_size = (height, width)
print(new_size)  # prints (3600, 4700)
eps_image.resize(new_size, Image.ANTIALIAS)
eps_image.save(
    'img.jpeg',
    format='JPEG',
    dpi=(9000, 9000),
    quality=95)
UPD: Vasu Deo.S noticed one of my errors, and thanks to him the JPG image has become bigger, but the quality is still low. I've tried different DPI values, sizes and resample values for the resize function, but the result does not change much. How can I make it better?
The problem is that PIL is a raster image processor, as opposed to a vector image processor. It "rasterises" vector images (such as your EPS file and SVG files) onto a grid when it opens them because it can only deal with rasters.
If that grid doesn't have enough resolution, you can never regain it. Normally, it rasterises at 100dpi, so if you want to make bigger images, you need to rasterise onto a larger grid before you even get started.
Compare:
from PIL import Image
eps_image = Image.open('image.eps')
eps_image.save('a.jpg')
The result is 540x720:
And this:
from PIL import Image
eps_image = Image.open('image.eps')
# Rasterise onto 4x higher resolution grid
eps_image.load(scale=4)
eps_image.save('a.jpg')
The result is 2160x2880:
You now have enough quality to resize however you like.
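For example, a sketch combining the two steps (Image.LANCZOS assumes a Pillow version that has it):

from PIL import Image

eps_image = Image.open('image.eps')
eps_image.load(scale=4)                              # rasterise onto the larger grid first
big = eps_image.resize((3600, 4700), Image.LANCZOS)  # then resize to the target size
big.save('b.jpg', quality=95)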
Note that you don't need to write any Python to do this at all - ImageMagick will do it all for you. It is included in most Linux distros and is available for macOS and Windows and you just use it in Terminal. The equivalent command is like this:
magick -density 400 input.eps -resize 800x600 -quality 95 output.jpg
It's because eps_image.resize(new_size, Image.ANTIALIAS) returns a resized copy of the image; therefore you have to store it in a separate variable. Just change:
eps_image.resize(new_size, Image.ANTIALIAS)
to
eps_image = eps_image.resize(new_size, Image.ANTIALIAS)
UPDATE:
These may not solve the problem completely, but they should still help.

You are trying to save your output image as .jpeg, which is a lossy compression format, so information is lost during the compression/transformation (for the most part). Change the output file extension to a lossless compression format like .png so that data is not compromised during compression. Also change quality=95 to quality=100 in Image.save().

You are also using Image.ANTIALIAS for resampling the image, which is not that good when upscaling (it has been replaced by Image.LANCZOS in newer versions; the old name still exists for backward compatibility). Try using Image.BICUBIC instead, which produces quite favorable results (for the most part) when upscaling.
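Putting both suggestions together, a minimal sketch (not a guaranteed fix; 'img.eps' is the asker's file name):

from PIL import Image

eps_image = Image.open('img.eps')
new_size = (eps_image.width * 10, eps_image.height * 10)  # (width, height), as resize expects
eps_image = eps_image.resize(new_size, Image.BICUBIC)     # BICUBIC for upscaling
eps_image.save('img.png', format='PNG')                   # lossless output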
EDIT: Sorry, the first version of the code was wrong; I tried to remove useless information and made a mistake. The problem stays the same, but now it's the code I actually used.
I think my problem is probably very basic, but I can't find a solution. I basically just wanted to play around with PIL: convert an image to an array and back, then save the image. It should look the same, right? In my case the new image is just gibberish; it seems to have some structure, but it is not a picture of a plane like it should be:
from PIL import Image
import numpy as np

def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array, 'RGB')
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp').convert('L')
w, h = im.size
array_image_save(np.array(list(im.getdata())).reshape((w, h)))
Not entirely sure what you are trying to achieve but if you just want to transform the image to a numpy array and back, the following works:
from PIL import Image
import numpy as np

def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array)
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp')
array_image_save(np.array(im))
You can just pass a PIL image to np.array and it takes care of the proper shaping. The reason you get distorted data is that you convert the PIL image to greyscale (.convert('L')) but then try to save it as RGB; the reshape((w, h)) also swaps the dimensions, since numpy expects (height, width).
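If the greyscale conversion is what you actually want, a minimal sketch that keeps the mode consistent on the way back:

from PIL import Image
import numpy as np

im = Image.open('plane.bmp').convert('L')
arr = np.array(im)                                # shape (height, width), dtype uint8
Image.fromarray(arr, 'L').save('plane_grey.bmp')  # 'plane_grey.bmp' is a hypothetical output name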
I have some simple code to try out OpenCV image blending with the addWeighted() function. It gives me this error:
Sizes of input arguments do not match
The following is my code
import cv2

img1 = cv2.imread('/home/jianyepa/Downloads/gtr1.jpg')
img2 = cv2.imread('/home/jianyepa/Downloads/r1.png')
dst = cv2.addWeighted(img1, 0.7, img2, 0.3, 0)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have checked the size and channels of both images with img.shape; both images show (720, 1280, 3). I have no idea why this error is coming up.
Please assist. Thank you.
List of possible problems:
Either the size and number of channels of the images do not match
Or the images might be of different file types.
In your case, it is not the first: both images have the same size and the same number of channels.
But the problem lies with the different image file types: .png files have an extra channel, the alpha channel, which is not present in .jpg files. This would have caused your problem.
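One way around it, as a minimal sketch (file paths are the asker's): load both images explicitly as color, which drops any alpha channel, and resize one to match the other before blending.

import cv2

img1 = cv2.imread('/home/jianyepa/Downloads/gtr1.jpg', cv2.IMREAD_COLOR)
img2 = cv2.imread('/home/jianyepa/Downloads/r1.png', cv2.IMREAD_COLOR)  # IMREAD_COLOR strips alpha
img2 = cv2.resize(img2, (img1.shape[1], img1.shape[0]))                 # cv2.resize takes (width, height)
dst = cv2.addWeighted(img1, 0.7, img2, 0.3, 0)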
I have been trying to draw a rectangle on a black image using cv2.rectangle. Here is my code (it is just a sample; in the actual code the values x2, y2, w2, h2 change in a loop):
heir = np.zeros((np.shape(image1)[0], np.shape(image1)[1]), np.uint8)
cv2.rectangle(heir, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
It is giving the following output:
Why is the image like that? Why aren't the boundaries just a line of width 5?
I have tried, but I am not able to figure it out.
Can't post this in a comment, but it's a negative answer: the same operations work for me on Windows / Python 2.7.8 / OpenCV 3.1.
import numpy as np
import cv2

heir = np.zeros((100, 200), np.uint8)
x2 = 10
y2 = 20
w2 = 30
h2 = 40
cv2.rectangle(heir, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
Because you are loading the image to be tagged (drawn with rectangles) in grayscale, the rectangle/bounding-box colors are converted to grayscale when you draw them.
To fix the issue, open the image in "color" format. Since you didn't include that part of the code, here is the proposed solution:
tag_img = cv2.imread(MYIMAGE,1)
Pay attention to the second parameter here, which is 1 and means "load the image as color". Read more about reading images here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html
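For instance, a quick check of the difference (with a hypothetical photo.jpg):

import cv2

grey = cv2.imread("photo.jpg", 0)  # 0 = grayscale, shape (rows, cols)
col = cv2.imread("photo.jpg", 1)   # 1 = color, shape (rows, cols, 3)
print(grey.shape, col.shape)
# drawing on the single-channel image can only produce shades of grey;
# draw on the color one if you want the (255, 255, 0) rectangle to show as cyan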