I am reading an image using OpenCV, and when it shows the image it zooms it in. The image is 1080 x 1257. I also tried another image with dimensions 5961 x 3059, and it zooms in even more. If I use img=cv2.imread("hello.jpg",50)
it shows the first image at its original dimensions but in grayscale, and the second image is still not at its original dimensions. So how do I display images at their original dimensions?
Please help me with this, as I am an absolute beginner with OpenCV.
Here is the second image I was talking about.
Here is the output for the image:
import cv2

img = cv2.imread("rainbow.png", 0)  # 0 = load as grayscale
cv2.imshow('israinbow',img)
cv2.waitKey(5000)
cv2.destroyAllWindows()
I am trying to run this code for 3D face reconstruction from GitHub. The image result is a combination of three images (the original image, the reconstructed face, and the reconstructed face with landmarks). I am failing to save or display only the reconstructed face image.
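Assuming the three panels are concatenated horizontally with equal widths (an assumption; the repository may lay them out differently, or vertically), the middle panel can be sliced out of the combined array with NumPy before saving or displaying it:

```python
import numpy as np

# Stand-in for the combined result (original | reconstruction | landmarks);
# assumes three equal-width panels side by side.
combined = np.zeros((256, 768, 3), dtype=np.uint8)

w = combined.shape[1] // 3
reconstructed = combined[:, w:2 * w]   # middle third only
print(reconstructed.shape)
```

The slice could then be passed to cv2.imwrite or cv2.imshow on its own; adjust the slice indices if the panels are stacked in a different order.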
Today I tried a Python filtering code that is supposed to add noise to a grayscale medical image of a skull (as part of a de-noising workflow). The problem is I keep getting colored pixels: the noise is added as if the image were a color image, not grayscale. So please help me make the code work in grayscale mode. Extra details:
the code:
(link to the filter code)
original image :
the de-noised image after applying noise filter :
You can see the problem clearly: when I zoom into the picture I can see the colored pixels, while it is supposed to be in grayscale.
colored pixels in the filtered image :
partial zoom in
full zoom in
So please, does anybody know how to edit that code so that it adds the noise in grayscale?
Your input image is a 3-channel JPEG. Make it greyscale (1 channel) before applying noise; then the noise won't be able to treat the channels differently, because there will only be one.
img.transform_colorspace('gray')
I am implementing Gabor kernels. When I display the kernels while running the code (before saving them), they give me a picture like this.
But after saving the kernels as JPEG images using cv2.imwrite, I get something like this.
Any explanations? And how do I save the kernels so they look like the first image?
There could be different causes, so I have two suggestions:
If you display the first picture with plt.imshow(), export it with plt.savefig(). This should just work.
If you still want to export the image with cv2.imwrite(), make sure that the picture is correctly rescaled first. (Mind that if you have only one channel, you will get a grayscale picture.)
If we call the original picture org_img:
import cv2
import numpy as np

img = org_img
# Rescale to the full 0-255 range so imwrite doesn't clip the values.
min_val, max_val = img.min(), img.max()
img = 255.0 * (img - min_val) / (max_val - min_val)
img = img.astype(np.uint8)
cv2.imwrite("picture.png", img)  # note: filename first, then the image
I'm trying to load a set of images from a .mat file that contains 100 images. As visible in the screenshots below, when I load the image array I expect some sort of 'O'-like shape to form, due to the positions of the 1s, but instead I get a seemingly nonsensical image. Using the 'L' mode to create the image via PIL leads to a completely black image.
I am using scipy.io's loadmat() to open the .mat file and the PIL Image object to create the image. Any help is appreciated!
Problem code and resulting array
This problem was caused by each value in the array being a 1 or a 0; in an 8-bit image both are nearly black. Multiplying the array by 255 makes the actual image visible.
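A minimal sketch of the fix, where the small 0/1 array stands in for one image taken from the .mat file:

```python
import numpy as np
from PIL import Image

# Stand-in for one binary image loaded via scipy.io.loadmat.
arr = np.zeros((10, 10), dtype=np.uint8)
arr[2:8, 3:7] = 1

# 0 and 1 are both near-black in 0-255 range; scale so 1 -> 255.
img = Image.fromarray(arr * 255, mode="L")
print(img.size, img.getpixel((4, 4)))
```
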
I'm working on image processing with some images I collected myself. Dlib's dlib.load_rgb_image('image_path') method swaps the rows and columns on some images while OpenCV's cv2.imread('image_path') method does not.
I don't want to go in and rotate some of these offending images myself manually because I'm creating an app.
Check out the results below
img = dlib.load_rgb_image("myimg.jpg")
print(img.shape)
--------------------
OUTPUT: (1944, 2592, 3)
(the resultant image is rotated 90 degrees clockwise)
while OpenCV's method returns the correct shape:
img = cv2.imread("myimg.jpg")
print(img.shape)
--------------------
OUTPUT: (2592, 1944, 3)
Does anyone have any idea why this is happening?
I am also attaching the image details of one of the offending photos:
Thanks to @Mark Setchell for pointing me in the right direction.
The EXIF data is the key, here.
dlib.load_rgb_image() does not take the EXIF orientation metadata into account, so some images are read incorrectly. To remedy this, the EXIF orientation tag of an image needs to be checked, and the corresponding rotation applied.
Here are a few good answers:
Rotating an image with orientation specified in EXIF using Python without PIL including the thumbnail
Apparently, since OpenCV 3.1, imread handles EXIF orientation correctly.
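With Pillow, the whole check-the-tag-and-rotate step is available as ImageOps.exif_transpose, which applies the rotation/flip encoded in the EXIF Orientation tag and leaves images without one unchanged. A minimal sketch (the synthetic array stands in for a photo; a real one would come from Image.open("myimg.jpg")):

```python
import numpy as np
from PIL import Image, ImageOps

# Synthetic stand-in for a loaded photo (no EXIF data here).
img = Image.fromarray(np.zeros((100, 200, 3), dtype=np.uint8))

# Applies the EXIF Orientation transform if present, matching what
# cv2.imread does automatically since OpenCV 3.1.
fixed = ImageOps.exif_transpose(img)
print(fixed.size)
```

The corrected image can then be converted back to a NumPy array for dlib with np.asarray(fixed).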