OpenCV Python: Normalize image

I'm new to OpenCV. I want to do some preprocessing related to normalization: I want to normalize my image to a certain size. The following code gives me a black image. Can someone point out what exactly I am doing wrong? The image I am inputting is a black/white image.
import cv2 as cv
import numpy as np
img = cv.imread(path)
normalizedImg = np.zeros((800, 800))
cv.normalize(img, normalizedImg, 0, 255, cv.NORM_MINMAX)
cv.imshow('dst_rt', self.normalizedImg)
cv.waitKey(0)
cv.destroyAllWindows()

As one can see in the documentation at http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#cv2.normalize, the signature lists a "→ dst", which means the result of the normalize function is returned as an output value. The function doesn't change the input parameter dst in-place, so you have to assign the return value.
(The self. in the cv.imshow('dst_rt', self.normalizedImg) line is a typo.)
import cv2 as cv
import numpy as np
path = r"C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg"
img = cv.imread(path)
normalizedImg = np.zeros((800, 800))
normalizedImg = cv.normalize(img, normalizedImg, 0, 255, cv.NORM_MINMAX)
cv.imshow('dst_rt', normalizedImg)
cv.waitKey(0)
cv.destroyAllWindows()
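As a side note, the Python bindings also accept None for dst, which skips the pre-allocated array entirely; a minimal sketch, assuming the same path variable as above:
import cv2 as cv
img = cv.imread(path)
normalizedImg = cv.normalize(img, None, 0, 255, cv.NORM_MINMAX)
cv.imshow('dst_rt', normalizedImg)
cv.waitKey(0)
cv.destroyAllWindows()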

It's giving you a black image because you are probably using different sizes for img and normalizedImg.
import cv2 as cv
img = cv.imread(path)
img = cv.resize(img, (800, 800))
cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)
cv.imshow('dst_rt', img)
cv.waitKey(0)
cv.destroyAllWindows()
Update: In NumPy there are more intuitive ways to do this:
a = np.random.rand(3,2)
# Normalised [0,1]
b = (a - np.min(a))/np.ptp(a)
# Normalised [0,255] as integer: don't forget the parentheses before astype(int)
c = (255*(a - np.min(a))/np.ptp(a)).astype(int)
# Normalised [-1,1]
d = 2.*(a - np.min(a))/np.ptp(a)-1
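Applied to the original question, a minimal sketch of the same idea; the filename is hypothetical, and the guard covers a constant image, where np.ptp is 0:
import cv2 as cv
import numpy as np
img = cv.imread('input.jpg', cv.IMREAD_GRAYSCALE)  # hypothetical filename
span = np.ptp(img)
# stretch to [0, 255]; skip the division for a constant image
normalized = ((img - img.min()) * (255.0 / span)).astype(np.uint8) if span else img
cv.imshow('normalized', normalized)
cv.waitKey(0)
cv.destroyAllWindows()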

When you call cv.imshow() you use self.normalizedImg instead of normalizedImg.
The self. prefix identifies class members, and its use in the code you've posted is not appropriate; the snippet shouldn't even run as written. I assume the code has been extracted from a class definition, but you must be consistent in naming variables: self.normalizedImg is not the same variable as normalizedImg.
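For illustration, a minimal sketch of the kind of class context where self.normalizedImg would be appropriate; the class name and structure are hypothetical:
import cv2 as cv

class Preprocessor:
    def __init__(self, path):
        img = cv.imread(path)
        # stored as an attribute, so other methods can refer to self.normalizedImg
        self.normalizedImg = cv.normalize(img, None, 0, 255, cv.NORM_MINMAX)

    def show(self):
        cv.imshow('dst_rt', self.normalizedImg)
        cv.waitKey(0)
        cv.destroyAllWindows()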

Related

How to increase brightness of a piece of an RGB image without overflow?

I have an image as shown below, and I want to increase the brightness of the lightning section.
My input image:
Here's my code:
import cv2 as cv
import numpy as np
src = cv.imread('./img.jpg')
hsv_src = cv.cvtColor(src, cv.COLOR_BGR2HSV)
v = hsv_src[:,:,2]
value = 50
hsv_src[:,:,2] = np.where((255 - v) < value, 255, v + value)  # saturate at 255 where v + value would overflow
out = cv.cvtColor(hsv_src,cv.COLOR_HSV2BGR)
cv.imshow('output',out)
cv.waitKey(0)
but I eventually got this:
I just want to increase the brightness of the lightning, but what I'm seeing is the brightness of the entire image increasing. I'm honestly confused and don't know what to do.
Instead of adding a constant, multiply by a factor. See the example:
import cv2 as cv
import numpy as np
src = cv.imread('zHSbF.jpg')
hsv_src = cv.cvtColor(src, cv.COLOR_BGR2HSV)
v = hsv_src[:,:,2]
k = 1.5
hsv_src[:,:,2] = np.clip(np.uint16(hsv_src[:,:,2]) * k, 0, 255)  # promote before scaling, then clip to the valid uint8 range (cast back on assignment)
out = cv.cvtColor(hsv_src,cv.COLOR_HSV2BGR)
cv.imwrite('out8.png', out)
cv.imshow('output',out)
cv.waitKey(0)
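Multiplying scales bright pixels more than dark ones, which already emphasizes the lightning. If you want to restrict the change to the bright region only, here is a hedged sketch using a threshold mask; the threshold of 180 is an assumption you would tune for your image:
import cv2 as cv
import numpy as np
src = cv.imread('zHSbF.jpg')
hsv = cv.cvtColor(src, cv.COLOR_BGR2HSV)
v = hsv[:, :, 2].astype(np.uint16)  # widen so the scaling can't wrap around
mask = v > 180  # hypothetical threshold: only already-bright pixels
v[mask] = np.clip(v[mask] * 1.5, 0, 255)
hsv[:, :, 2] = v.astype(np.uint8)
out = cv.cvtColor(hsv, cv.COLOR_HSV2BGR)
cv.imshow('output', out)
cv.waitKey(0)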

Python: save binary line masks with assigned line width without using CV2?

I have a start coordinate (79,143) and an end coordinate (200,100), plus an image width of 500 and height of 500, and I want to use them to save a binary mask like the picture.
I can use skimage to save it, but the line width there seems fixed, and I don't want to use cv2, so is there any other solution to save the mask with a custom line width?
Meanwhile, I tried a cv2 program, but it does not work:
import cv2
import numpy as np
from PIL import Image  # needed for Image.fromarray below
from matplotlib import pyplot as plt
img = np.zeros((1080,1080,3), np.uint8)
for i in range(3):
    im = np.squeeze(img[:,:,i])
    print(im)
    imgg = cv2.line(im, (0,0), (511,511), 255, 5)
    masks = Image.fromarray(imgg.astype(np.uint8))
    masks.save("masks" + str(i) + ".png")
I want to save 3 identical masks, but it gives this error:
Layout of the output array img is incompatible with cv::Mat (step[ndims-1] != elemsize or step1 != elemsize*nchannels)
Any idea how to solve it?
Many thanks!
The OpenCV line-drawing function does have a thickness parameter. You can specify it like this:
import cv2
import numpy as np
# Set up an empty image
im = np.zeros((500, 500), dtype=np.uint8)
# Draw with thickness
im = cv2.line(im, (79,143), (200, 100), color=(255, 255, 255), thickness=10)
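To produce and save the three masks from the question, a minimal sketch; the error in the question comes from drawing into a non-contiguous channel slice, so allocating each mask directly (or taking a .copy() of the slice) avoids it. The 500x500 size and the coordinates are taken from the question:
import cv2
import numpy as np
from PIL import Image
for i in range(3):
    # a freshly allocated array is contiguous, so cv2.line can draw into it
    im = np.zeros((500, 500), dtype=np.uint8)
    im = cv2.line(im, (79, 143), (200, 100), 255, 10)
    Image.fromarray(im).save("masks" + str(i) + ".png")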

OpenCV .imshow() is not displaying a properly sized image

I am trying to read an image using OpenCV, do some transformations (resize and offsets), then, as a last step, crop the image. In my final line, crop_img = offset_image[0:1080, 0:1920].copy(), I expect a cropped 1920x1080 image to be created. The printout of crop_img's shape shows that it is correct. But when I do an .imshow(), it displays the full-sized original image.
import numpy as np
import cv2 as cv
import copy
original = cv.imread("IMG_0015_guides.jpg", cv.IMREAD_UNCHANGED)
img_resize = cv.resize(original, (0,0), fx=.9, fy=.9)
rows,cols,_ = img_resize.shape
M = np.float32([[1,0,100],[0,1,50]])
offset_image = cv.warpAffine(img_resize,M,(cols,rows))
crop_img = offset_image[0:1080, 0:1920].copy()
print('img_resize {}'.format(img_resize.shape))
print('offset_image {}'.format(offset_image.shape))
print('cropped {}'.format(crop_img.shape))
cv.imshow('image',crop_img)
cv.waitKey(0)
cv.destroyAllWindows()
>>> img_resize (3110, 4666, 3)
>>> offset_image (3110, 4666, 3)
>>> cropped (1080, 1920, 3)
I'm totally baffled. Why is it not showing me the cropped 1920x1080 image?
Working with massive images can get confusing when visualizing with OpenCV's imshow.
I ran your code, and it seems to be doing what you expect it to do. I suggest resizing your image again, for visualization purposes only. The following code ran successfully on this image.
import numpy as np
import cv2 as cv
import copy
original = cv.imread("4k-image-tiger-jumping.jpg", cv.IMREAD_UNCHANGED)
# resize original for visualization purposes only
print('original {}'.format(original.shape))
original_resized = cv.resize(original, (0,0), fx=.1, fy=.1)
cv.imshow('original_resize',original_resized)
img_resize = cv.resize(original, (0,0), fx=.9, fy=.9)
rows,cols,_ = img_resize.shape
M = np.float32([[1,0,100],[0,1,50]])
offset_image = cv.warpAffine(img_resize,M,(cols,rows))
crop_img = offset_image[0:1080, 0:1920].copy()
print('img_resize {}'.format(img_resize.shape))
print('offset_image {}'.format(offset_image.shape))
print('cropped {}'.format(crop_img.shape))
# resize cropped for visualization purposes only
vis_r,vis_c,_ = original_resized.shape
cropped_resized = cv.resize(crop_img, (vis_c, vis_r))
cv.imshow('cropped_resized',cropped_resized)
# cv.imshow('image',crop_img)
cv.waitKey(0)
cv.destroyAllWindows()
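As an alternative to resizing the pixels for display, OpenCV windows can be made resizable, so the full-resolution crop is shown scaled to fit; a minimal sketch, assuming crop_img from the code above and a hypothetical on-screen size:
import cv2 as cv
cv.namedWindow('image', cv.WINDOW_NORMAL)  # user-resizable window
cv.resizeWindow('image', 960, 540)  # hypothetical display size
cv.imshow('image', crop_img)
cv.waitKey(0)
cv.destroyAllWindows()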

Trouble with Canny Edge Detector - Returning black image

I'm trying to run the canny edge detector on this image:
With this code:
def edges(img):
    from skimage import feature
    img = Image.open(img)
    img.convert('L')
    array = np.array(img)
    out = feature.canny(array, sigma=1)
    return Image.fromarray(out, 'L')

edges('Q_3.jpg').save('Q_3_edges.jpg')
But I'm just getting a black image back. Any ideas what I could be doing wrong? I tried sigma of 1 and of 3.
I had the same situation, and this helped for me. Before using the Canny filter, just convert the elements of your image array to float32:
array = np.array(img)
array = array.astype('float32')
out = feature.canny(array, sigma=1)
Your images need to be in the correct range for the relevant dtype, as discussed in the user manual here: http://scikit-image.org/docs/stable/user_guide/data_types.html
This should be automatically handled if you use the scikit-image image I/O functions:
from skimage import io
img = io.imread('Q_3.jpg')
So the issue was with the canny function returning an array of type boolean.
Oddly, setting the Image.fromarray mode to '1' didn't help. Instead, this was the only way I could get it working: converting the output array to grayscale:
def edges(img):
    from skimage import feature
    img = Image.open(img)
    img.convert('L')
    array = np.array(img)
    out = np.uint8(feature.canny(array, sigma=1) * 255)
    return Image.fromarray(out, mode='L')
The problem happens when the image is loaded as float (i.e. in the range 0-1). The loader does that for some types of images. You can check the type of the loaded image by:
print(img.dtype)
If the output is something like float64 (i.e. not uint8), then your image is in the range 0-1.
Canny expects an image in the range 0-255. Therefore, the solution is as easy as:
from skimage import io, img_as_ubyte
img = io.imread("an_image.jpg")
img = img_as_ubyte(img)
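For completeness, a hedged end-to-end sketch combining this with canny and saving the result; the filenames are placeholders, and as_gray (recent scikit-image versions) collapses the image to the single channel canny requires:
from skimage import io, feature, img_as_ubyte
img = img_as_ubyte(io.imread("an_image.jpg", as_gray=True))
edges = feature.canny(img, sigma=1)
io.imsave("edges.png", img_as_ubyte(edges))  # boolean mask saved as 0/255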
Hope this helps.
The problem happens when the image is saved. You can save the image with another library, such as matplotlib:
import numpy as np
import matplotlib.pyplot as plt
from skimage import feature
from skimage import io
def edges(img):
    img = io.imread(img)
    array = np.array(img)
    out = feature.canny(array, sigma=1)
    return out
plt.imsave("canny.jpg", edges("input.jpg"), cmap="Greys")

PIL: check the pixel it's on with the eval function

Is there any way, using the eval function in PIL, to run through all pixels while checking what each value is? The program runs through an image to see if each pixel is a certain RGB value, and if it is, it turns that pixel into transparency. The eval function in PIL seems like it would do the job, but can my function that converts the pixels check the value of the pixel it's on? Thanks in advance.
Updated: Ahh, I see what you want to do. Here is an example using only PIL. It converts all white pixels to red with 50% alpha:
import Image
img = Image.open('stack.png').convert('RGBA')
width, _ = img.size
for i, px in enumerate(img.getdata()):
    if px[:3] == (255, 255, 255):
        y = i // width  # integer division keeps this correct on Python 3 too
        x = i % width
        img.putpixel((x, y), (255, 0, 0, 127))
img.save('stack-red.png')
Original answer: Yes, the Image.eval() function lets you pass in a function which evaluates each pixel and lets you determine a new pixel value:
import Image
img1 = Image.open('foo.png')
# replace dark pixels with black
img2 = Image.eval(img1, lambda px: 0 if px <= 64 else px)
No, eval will not pass an RGB tuple to a function; it maps a function over each band. You could, however, process each band using eval and then use an ImageChops operation to logically combine the bands and get a mask that is specific to a pixel tuple.
By the way, this could be done much more cleanly and efficiently in NumPy if you are so inclined.
import Image
import ImageChops
im_and = ImageChops.lighter  # pixel-wise max: with 0-for-match masks this acts as a logical AND
im = Image.open('test.png')
R, G, B, A = im.split()
color_matches = []
for level, band in zip((255, 255, 255), (R, G, B)):
    # 0 where the band matches the target level, 255 elsewhere
    b = Image.eval(band, lambda px: 255 - (255 * (px == level)))
    color_matches.append(b)
r, g, b = color_matches
mask = im_and(r, im_and(g, b))  # 0 (transparent) only where all three bands match
im.putalpha(mask)
im.save('test2.png')
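And a minimal sketch of the NumPy version hinted at above, using the modern Pillow import; the filenames mirror the PIL example, and matching white pixels are made fully transparent:
import numpy as np
from PIL import Image
im = Image.open('test.png').convert('RGBA')
a = np.array(im)
# boolean mask of pixels whose RGB is pure white
white = (a[:, :, :3] == 255).all(axis=2)
a[white, 3] = 0  # zero the alpha channel for matching pixels
Image.fromarray(a).save('test2.png')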
