I have been trying to draw a rectangle on a black image using cv2.rectangle. Here is my code (it is just a sample; in the actual code the values x2, y2, w2, h2 change in a loop):
heir = np.zeros((np.shape(image1)[0], np.shape(image1)[1]), np.uint8)
cv2.rectangle(heir, (x2, y2), (x2+w2, y2+h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
It is giving the following output:
Why does the image look like that? Why are the boundaries not just a line of width 5?
I have tried to figure it out, but I can't.
Can't post this in a comment, but it's a negative answer: the same operations work for me on Windows / Python 2.7.8 / OpenCV 3.1:
import numpy as np
import cv2
heir = np.zeros((100, 200), np.uint8)
x2=10
y2=20
w2=30
h2=40
cv2.rectangle(heir,(x2,y2),(x2+w2,y2+h2),(255,255,0),5)
cv2.imshow("img",heir);
cv2.waitKey()
You are loading the image to be tagged (the one you draw rectangles on) in grayscale; that's why the colors of the rectangles/bounding boxes you add are being converted to grayscale.
To fix the issue, open the image in color. Since you didn't include that part of the code, here is the proposed solution:
tag_img = cv2.imread(MYIMAGE,1)
Pay attention to the second parameter here: "1" (equivalently cv2.IMREAD_COLOR) means load the image as color. Read more about reading images here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html
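As a rough sketch of the suggested fix (the file name and box coordinates here are just placeholders; in the real code they come from the loop):
import cv2

tag_img = cv2.imread("my_image.png", cv2.IMREAD_COLOR)  # hypothetical path; flag 1 == cv2.IMREAD_COLOR

x2, y2, w2, h2 = 10, 20, 30, 40                          # sample values
cv2.rectangle(tag_img, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)  # cyan in BGR, thickness 5

cv2.imshow("img", tag_img)
cv2.waitKey()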
It seems like cv2.imread() or Image.fromarray() is changing the original image's colors to a bluish tint. What I am trying to accomplish is to crop the original PNG image and keep the same colors, but the colors change. I'm not sure how to revert to the original colors. Please help! Thank you.
import cv2
from PIL import Image

# start cropping logic
img = cv2.imread("image.png")
crop = img[1280:, 2250:2730]
cropped_rendered_image = Image.fromarray(crop)
cropped_rendered_image.save("newImageName.png")
I tried this and other fixes, but no luck yet.
https://stackoverflow.com/a/50720612/13206968
There is no "changing" going on. It's simply a matter of channel order.
OpenCV natively uses BGR order (in numpy arrays)
PIL natively uses RGB order
Numpy doesn't care
When you call cv.imread(), you're getting BGR data in a numpy array.
When you repackage that into a PIL Image, you are giving it BGR order data, but you're telling it that it's RGB, so PIL takes your word for it... and misinterprets the data.
You can try telling PIL that it's BGR;24 data. See https://pillow.readthedocs.io/en/stable/handbook/concepts.html
Or you can use cv.cvtColor() with the cv.COLOR_BGR2RGB flag (because you have BGR and you want RGB). For the opposite direction, there is the cv.COLOR_RGB2BGR flag.
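For the cropping snippet above, a minimal sketch of the cvtColor route (using the same hypothetical file names as in the question):
import cv2
from PIL import Image

img = cv2.imread("image.png")                     # BGR-ordered numpy array
crop = img[1280:, 2250:2730]                      # same crop as in the question
crop_rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)  # reorder channels so PIL reads them as RGB
Image.fromarray(crop_rgb).save("newImageName.png")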
Using cv2, I am able to find the contours of text in an image. I would like to remove said text and replace it with the average pixel of the surrounding area.
However, the contours are just a bit smaller than I would like, resulting in a blurred edge where one can barely tell what the original text was:
I once chanced upon a cv2 tutorial with a stylized "j" as the sample image. It showed how to "expand" a contour in a manner similar to adding a positive sample next to every pre-existing positive sample in a mask.
If such a method does not already exist in cv2, how may I do this manually?
The function I sought was dilation, as detailed here:
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html
import cv2
import numpy as np

img = cv2.imread('j.png',0)                       # 0 = load as grayscale
kernel = np.ones((5,5),np.uint8)                  # 5x5 structuring element
dilation = cv2.dilate(img,kernel,iterations = 1)  # white regions grow outwards
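Applied to the text-removal problem above, one approach (a sketch only, assuming the text is darker than the background; 'text.png' and the 5x5 kernel are placeholders to tune) is to draw the found contours filled into a mask and dilate that mask rather than the image itself:
import cv2
import numpy as np

img = cv2.imread('text.png', 0)   # hypothetical grayscale input
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

# findContours returns a different number of values across OpenCV versions;
# [-2] picks out the contour list in all of them
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

mask = np.zeros(img.shape, np.uint8)
cv2.drawContours(mask, contours, -1, 255, -1)   # thickness -1 fills each contour
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)   # expand the text mask outwards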
When I read an image using OpenCV's imread function, I find its height and width swapped from what they should be. My original image has dimensions 610 by 406, but when read with cv2.imread its dimensions are 406 by 610. Also, if I rotate my original image before passing it to the function, nothing changes: the image read still has the original dimensions.
Please see example code and images for clarification:
Below I have provided the input images: one is the original and the second one is rotated (I rotated it using the Windows rotate command, by right-clicking and selecting 'rotate right'). The output I get for both images is the same. It seems to me that rotating the image did not actually change its shape; when I tried to post the rotated image here, the preview still showed the un-rotated version, so I had to take a screen capture of it and paste that instead.
This is the code:
import cv2
import numpy as np
import sys
import os
image = cv2.imread("C:/img_8075.jpg")
print "image shape: ",image.shape
cv2.imshow("image",image)
cv2.waitKey(0)
image2 = cv2.imread("C:/img_8075_Rotated.jpg")
print "image shape: ",image2.shape
cv2.imshow("image",image2)
cv2.waitKey(0)
The result I get is:
image shape: (406, 610, 3)
image shape: (406, 610, 3)
for both images.
I am unable to paste the input/output pictures here since it says you need 10 reputation and I have just joined.
Any suggestions would be helpful. Thanks!
I believe you are just getting the conventions mixed up. OpenCV Mat structures are accessed as (ROW, COLUMN).
So a 1920x1080 image will be 1080 ROWS by 1920 COLUMNS, i.e. (1080, 1920).
Commonly, Mat.rows represents the image's height, and Mat.cols represents the image's width.
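A quick way to confirm the convention (just a synthetic array, no file needed):
import numpy as np

img = np.zeros((1080, 1920, 3), np.uint8)  # (rows, columns, channels)
print(img.shape)      # (1080, 1920, 3)
print(img.shape[0])   # 1080 -> rows, i.e. the height
print(img.shape[1])   # 1920 -> columns, i.e. the width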
I followed the tutorial at this page but nothing seems to happen when the line cv2.drawContours(im,contours,-1,(0,255,0),3) is executed. I was expecting to see star.jpg with a green outline, as shown in the tutorial. Here is my code:
import numpy as np
import cv2
im = cv2.imread('C:\Temp\ip\star.jpg')
print im.shape #check if the image is loaded correctly
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im,contours,-1,(0,255,0),3)
pass
There are no error messages. star.jpg is the star from the above mentioned webpage.
I am using opencv version 2.4.8 and Python 2.7.
Is drawContours supposed to show an image on my screen? If so, what did I do wrong? If not, how do I show the image?
Thanks
Edit:
Adding the following lines will show the image:
cv2.imshow("window title", im)
cv2.waitKey()
waitKey() is needed; otherwise the window will just show a gray background. According to this post, that's because waitKey() tells it to start handling the WM_PAINT event.
I had the same issue. I believe the cause is that the underlying image is 1-channel rather than 3-channel. Therefore, you need to set the color so that it is nonzero in the first element (e.g. (255,0,0)).
I too had the same problem. The thing is, the contour does show, but it is too dark for our eyes to see (on a single-channel image OpenCV only uses the first value of the colour tuple, so (0,255,0) is drawn as 0, i.e. black).
Solution:
Change the colour from (0,255,0) (for some weird reason, I had given exactly the same colour!) to (128,255,0) (or some brighter colour).
You have to do something to the effect of:
cv2.drawContours(im,contours,-1,(255,255,0),3)
cv2.imshow("Keypoints", im)
cv2.waitKey(0)
I guess your original image is grayscale. Since the image is grayscale instead of BGR, the contour does not show up: it is drawn in a shade that you can barely distinguish from the dark background. Here's a simple solution (convert the image to BGR first):
im=cv2.cvtColor(im,cv2.COLOR_GRAY2BGR)
cv2.drawContours(im,contours,-1,(0,255,0),3)
I have been hitting my head against the wall for a while with this, so maybe someone out there can help.
I'm using PIL to open a PNG with transparent background and some random black scribbles, and trying to put it on top of another PNG (with no transparency), then save it to a third file.
It comes out all black at the end, which is irritating, because I didn't tell it to be black.
I've tested this with multiple proposed fixes from other posts. The image opens in RGBA format, and it's still messed up.
Also, this program is supposed to deal with all sorts of file formats, which is why I'm using PIL. Ironic that the first format I tried is all screwy.
Any help would be appreciated. Here's the code:
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff)) # xoff and yoff are 0 in my tests
img.save(outfile)
I think what you want to use is the mask argument of paste.
See the docs (scroll down to paste).
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff), mask=layer)
# the transparency layer will be used as the mask
img.save(outfile)
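When an RGBA image is passed as the mask, paste uses its alpha channel, so fully transparent pixels in layer leave the corresponding pixels of img untouched while the opaque scribbles are copied over.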