I'm reading in an image with OpenCV and trying to do something with it in numpy (rotate it 90°). Viewing the result with imshow from matplotlib, it all seems to be working just fine: the image is rotated. However, I can't use OpenCV's drawing methods on the new image. In the following code (I'm running this in a SageMath Cloud worksheet):
%python
import cv2
import matplotlib.pyplot as plt
import numpy as np
import os, sys
image = np.array( cv2.imread('imagename.png') )
plt.imshow(image,cmap='gray')
image = np.array(np.rot90(image,3) ) # put it right side up
plt.imshow(image,cmap='gray')
cv2.rectangle(image,(0,0),(100,100),(255,0,0),2)
plt.imshow(image,cmap='gray')
I get the following error on the cv2.rectangle() command:
TypeError: Layout of the output array img is incompatible with cv::Mat (step[ndims-1] != elemsize or step[1] != elemsize*nchannels)
The error goes away if I use np.array(np.rot90(image,4) ) instead (i.e. rotate it 360°). So it appears that the change in dimensions is messing things up. Does OpenCV store the dimensions somewhere internally that I need to update, or something?
EDIT: Adding image = image.copy() after rot90() solved the problem. See rayryeng's answer below.
This is apparently a bug in the Python OpenCV wrapper. If you look at this question: np.rot90() corrupts an opencv image, a rotation that doesn't end up back at the original dimensions apparently leaves the image in a state OpenCV can't use, and the OP in that post hits the same error you are seeing. FWIW, I ran into the same behaviour myself and have no idea why.
A way around this is to make a copy of the image after you rotate it, and then show the image. I can't really explain why, but it seems to work. Also, make sure you call plt.show() at the end of your code to show the image:
import cv2
import matplotlib.pyplot as plt
import numpy as np
import os, sys
image = np.array( cv2.imread('imagename.png') )
plt.imshow(image,cmap='gray')
image = np.array(np.rot90(image,3) ) # put it right side up
image = image.copy() # Change
plt.imshow(image,cmap='gray')
cv2.rectangle(image,(0,0),(100,100),(255,0,0),2)
plt.imshow(image,cmap='gray')
plt.show() # Show image
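For what it's worth, the copy() most likely works because np.rot90 returns a non-contiguous view of the original data, while OpenCV's drawing functions expect the contiguous row layout the error message mentions. A minimal sketch of checking this, using np.ascontiguousarray as an alternative to .copy() (same placeholder filename as above):
import cv2
import numpy as np
image = cv2.imread('imagename.png')
rotated = np.rot90(image, 3)
print(rotated.flags['C_CONTIGUOUS'])     # False: rot90 returns a strided view
rotated = np.ascontiguousarray(rotated)  # make the memory layout cv::Mat-compatible
cv2.rectangle(rotated, (0, 0), (100, 100), (255, 0, 0), 2)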
I faced the same problem with numpy 1.11.2 and opencv 3.3.0. Not sure why, but this did the job for me.
Before using cv2.rectangle, add the line below:
image1 = image1.transpose((1,0)).astype(np.uint8).copy()
Converting the data type worked for my problem.
The image was of type np.int64 before the conversion.
image = image.astype(np.int32) # convert data type
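For reference, a minimal sketch of the same idea with a stand-in array (the zeros array is only a placeholder; uint8 is shown here as the most common OpenCV image dtype, while the answer above used int32):
import cv2
import numpy as np
image = np.zeros((200, 200, 3), dtype=np.int64)   # stand-in for an array that ended up as int64
print(image.dtype)                                # int64: not a dtype OpenCV drawing functions accept
image = image.astype(np.uint8).copy()             # uint8 is the usual 8-bit image dtype for OpenCV
cv2.rectangle(image, (0, 0), (100, 100), (255, 0, 0), 2)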
I am a newbie to Python. I am trying to write a program for image dehazing using the dark channel prior (DCP). I need to display the image first and then apply the dehazing method, but I am unable to load or view the image: it fails with "Image data cannot be converted to float". I get the following error when I run the code below.
import cv2
import math
import numpy as np
import matplotlib.pyplot as plt
def DarkChannel(im, sz):
    b, g, r = cv2.split(im)                                   # split into colour channels
    dc = cv2.min(cv2.min(r, g), b)                            # per-pixel minimum across channels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (sz, sz))
    dark = cv2.erode(dc, kernel)                              # minimum filter over a sz x sz window
    return dark
img = cv2.imread("C:/Users/User/Documents/sypder/img/bird.jpg", 1)
plt.imshow(img)
It seems your file path is wrong since your sample code worked perfectly for me. If you are struggling with file paths you can pass it as a raw string.
img = cv2.imread(r"C:\Users\User\Documents\sypder\img\bird.jpg", 1)
If you fix the path like this, it should work. (I copied the file name from the file's Properties dialog while doing it.)
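As a quick sanity check (a minimal sketch using the path from the question): cv2.imread returns None instead of raising an error when it cannot read the file, and that None is what later makes plt.imshow complain that the image data cannot be converted to float.
import cv2
path = r"C:\Users\User\Documents\sypder\img\bird.jpg"
img = cv2.imread(path, 1)
if img is None:
    raise FileNotFoundError("cv2.imread could not read " + path + "; check the path and file name")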
Try reading this post:
https://www.pythonfixing.com/2021/10/fixed-typeerror-image-data-can-not.html
It's a Jupyter-based article, but you can adapt it accordingly.
I'm a newbie too who came across your question, but this might help.
Good luck!
I am trying to transpose an image using OpenCV and Python, but when I set a destination array for it, nothing is written to it, so when I look at the output image I only see a black screen. Why does that happen?
Here's my code:
import cv2
import numpy as np
image = cv2.imread("input.png")  # load the source image (filename assumed; not shown in the original post)
a = np.zeros(image.shape).astype(image.dtype)
cv2.transpose(image,a)
cv2.imwrite("a.png",a)
cv2.imshow("hh",a)
cv2.waitKey(0)
cv2.destroyAllWindows()
Documentation: https://docs.opencv.org/master/d2/de8/group__core__array.html#ga46630ed6c0ea6254a35f447289bd7404
OpenCV is sensitive to output matrices it can't completely modify (resize). It can resize a cv::Mat, but it can't resize a NumPy array you pass in, so your preallocated destination is left untouched.
Simply use a = cv2.transpose(image) and work with the array it returns.
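Put together with the snippet from the question (a minimal sketch; the input filename is an assumption, since the original post does not show how image is loaded):
import cv2
image = cv2.imread("input.png")   # assumed input file
a = cv2.transpose(image)          # use the returned array; rows and columns are swapped
cv2.imwrite("a.png", a)
cv2.imshow("hh", a)
cv2.waitKey(0)
cv2.destroyAllWindows()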
I am using the following code:
import cv2
import numpy as np
import pyautogui
import sys
img = pyautogui.screenshot()
cv2.imshow('image',img)
When I run this, it tells me
mat is not a numpy array, neither a scalar
I have tried different functions from OpenCV and they all seem to return the same error. What do I need to do in order to take a screenshot and then work with it in OpenCV?
After some digging, I realised that the pyautogui function uses Pillow, which returns the image in a format that must be adapted before OpenCV can work with it.
I added the following code so that it worked:
open_cv_image = np.array(img)
# Convert RGB to BGR
open_cv_image = open_cv_image[:, :, ::-1].copy()
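Put together, a minimal end-to-end sketch; cv2.cvtColor with COLOR_RGB2BGR is an equivalent way to do the channel swap:
import cv2
import numpy as np
import pyautogui
screenshot = pyautogui.screenshot()                             # PIL Image in RGB channel order
frame = cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2BGR)   # convert to OpenCV's BGR layout
cv2.imshow('image', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()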
For a program I'm writing, I need to convert an RGB image to grayscale and read it as a NumPy array using PIL.
But when I run the following code, it converts the image not to grayscale, but to a strange color distortion, a bit like the output of a thermal camera (shown at the link below).
Any idea what the problem might be?
Thank you!
http://www.loadthegame.com/wp-content/uploads/2014/09/thermal-camera.png
from PIL import Image
from numpy import *
from pylab import *
im = array(Image.open('happygoat.jpg').convert("L"))
inverted = Image.fromarray(im)
imshow(inverted)
show()
matplotlib's imshow is aimed at scientific representation of data, not just image data. By default it is configured to use a high-contrast color palette.
You can force it to display the data in grayscale by passing the following option:
import matplotlib.cm
imshow(inverted, cmap=matplotlib.cm.Greys_r)
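A grayscale colormap can also be passed by name, so the matplotlib.cm import is not strictly needed; a minimal sketch using the same file as the question:
from PIL import Image
from numpy import array
from pylab import imshow, show
im = array(Image.open('happygoat.jpg').convert("L"))
imshow(im, cmap='gray')   # 'gray' is a built-in grayscale colormap name
show()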
Add this code to view/display an image:
from PIL import Image
from numpy import *
from pylab import *
im = array(Image.open('happygoat.jpg').convert("L"))
inverted = Image.fromarray(im)
inverted  # in a Jupyter/IPython notebook, evaluating the Image object displays it inline
I am looking for a way to rescale the matrix obtained by reading in a PNG file with the matplotlib routine imread,
e.g.
from pylab import imread, imshow, gray, mean
from matplotlib.pyplot import show
a = imread('spiral.png')  # imread generates an RGB array
imshow(a)
show()
but actually I want to manually specify the dimensions of a, say 200x200 entries, so I need some command (which I assume exists but cannot find myself) to interpolate the matrix.
Thanks for any useful comments : )
Cheers
You could try using the PIL (Image) module instead, together with numpy. Open and resize the image using Image, then convert it to an array using numpy, and display it using pylab.
import pylab as pl
import numpy as np
from PIL import Image
path = r'\path\to\image\file.jpg'
img = Image.open(path)
img = img.resize((200, 200))   # resize returns a new image; it does not modify img in place
a = np.asarray(img)
pl.imshow(a)
pl.show()
Hope this helps.
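Since OpenCV is already used elsewhere in this thread, a minimal alternative sketch is to resize the array returned by matplotlib's imread directly with cv2.resize (the filename is the one from the question):
import cv2
from pylab import imread, imshow, show
a = imread('spiral.png')              # float RGB(A) array as read by matplotlib
a_small = cv2.resize(a, (200, 200))   # dsize is (width, height)
imshow(a_small)
show()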