How can I do noise addition better in Python?

I'm trying to add random noise, drawn from a uniform distribution between the minimum pixel
value and 0.1 times the maximum pixel value, to each pixel of each channel of the original image.
Here's my code so far:
[in]:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Read image with cv2
image = cv2.imread('example_image.jpg' , 1)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Display image
plt.imshow(image_rgb)
# R,G,B channel separation
R, G, B = cv2.split(image_rgb)
# Creating Noise
noise_R = np.random.uniform(R.min(),R.max()*0.1, R.size)
noise_R.shape = (256,256)
noise_G = np.random.uniform(B.min(),B.max()*0.1, G.size)
noise_G.shape = (256,256)
noise_B = np.random.uniform(G.min(), G.max()*0.1, B.size)
noise_B.shape = (256,256)
# Adding noise to each channel separately
R = R + noise_R
G = G + noise_G
B = B + noise_B
rgb_noise = R + G + B
noisy_image = image + rgb_noise
[out]:
ValueError: operands could not be broadcast together with shapes (256,256,3) (256,256)
I'm getting a ValueError because the array shapes of rgb_noise and image are not compatible. I've tried changing the shape of rgb_noise to match image's, but then I get a size error. How can I fix it? Is there a better method?

Your solution is a bit verbose and could be made more compact.
However, the reason you do not get white-ish noise is that you compute your red channel differently from the other two.
Change this:
noise_R = np.random.uniform(R_min,R_max*0.3, image_G.size)
to this:
noise_R = np.random.uniform(R_min,R_max*0.1, image_R.size)
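For reference, here is a more compact, fully vectorized sketch of the whole operation (an illustration, not the asker's exact code; it assumes a 3-channel uint8 image and applies the same [min, 0.1 * max] rule to every channel at once):
import cv2
import numpy as np
image = cv2.imread('example_image.jpg', 1)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float64)
# Per-channel bounds: low = channel minimum, high = 0.1 times channel maximum
lo = image_rgb.min(axis=(0, 1))
hi = image_rgb.max(axis=(0, 1)) * 0.1
# np.random.uniform broadcasts the (3,) bounds over the (h, w, 3) output shape
noise = np.random.uniform(lo, hi, image_rgb.shape)
# Add, clip back to the valid range, and restore uint8
noisy = np.clip(image_rgb + noise, 0, 255).astype(np.uint8)
This also avoids the broadcasting error entirely, because the noise is generated with exactly the same shape as the image.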

You can keep it simple and add the noise using plain NumPy arrays.
import numpy as np
import matplotlib.pyplot as plt
import cv2
Note that plt.imshow() only displays inline in Jupyter notebooks.
Use cv2.imshow() in other environments.
1) Have your Image
img = cv2.imread('path').astype(np.uint8)
2) Make a random noise
h, w, c = img.shape
noise = np.random.randint(0, 256, (h, w, c), dtype=np.uint8)
3) Blend them
image_with_noise = cv2.addWeighted(img,0.5,noise,0.5,0)
You can adjust the alpha and beta weights to control how strong the noise is.
There you have a noisy image!
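If you want additive noise rather than a 50/50 blend, cv2.add performs saturating uint8 addition instead (a one-line sketch using the arrays above):
image_with_noise = cv2.add(img, noise)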

Related

How to merge two RGBA images

I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels.
Example: (image in the original post)
What I've tried
I tried to do this using OpenCV, but I'm getting some strange pixels on the output image.
Images used: (the two input images are attached in the original post)
import cv2
import numpy as np
import matplotlib.pyplot as plt
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
mask1 = image1[:,:,3]
mask2 = image2[:,:,3]
mask2_inv = cv2.bitwise_not(mask2)
mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA)
mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA)
# output = image2*mask2_bgra + image1
output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra))
output[:,:,3] = cv2.bitwise_or(mask1, mask2)
plt.figure(figsize=(12,12))
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
Output :
What I figured out is that I'm getting those weird pixels because I used the cv2.bitwise_and function (which, by the way, works perfectly with binary alpha channels). I tried several different approaches without success.
Question
Is there a way to do this while keeping the output as an 8-bit image?
I was able to obtain the expected result in 2 stages.
# Read both images preserving the alpha channel
hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED)
hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED)
# store the alpha channels only
m1 = hh1[:,:,3]
m2 = hh2[:,:,3]
# invert the alpha channel and obtain 3-channel mask of float data type
m1i = cv2.bitwise_not(m1)
alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0
m2i = cv2.bitwise_not(m2)
alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0
# Perform blending and limit pixel values to 0-255 (convert to 8-bit)
b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i)
Note: in the above, we are using only the inverse alpha channel of the memo image.
But I guess this is not the expected result. So, moving on ...
# Finding common ground between both the inverted alpha channels
mul = cv2.multiply(alpha1i,alpha2i)
# converting to 8-bit
mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# again create 3-channel mask of float data type
alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0
# perform blending using previous output and multiplied result
final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha)
Sorry for the weird variable names. I would suggest analyzing the result of each line. I hope this is the expected output.
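For comparison, the standard Porter-Duff "over" composite can also be written directly in NumPy (a sketch, not the answer above; it assumes both inputs are same-sized 8-bit BGRA arrays with straight, non-premultiplied alpha):
import cv2
import numpy as np
def over_composite(fg, bg):
    # Composite fg over bg, weighting each image by its alpha channel
    fa = fg[..., 3:4] / 255.0                  # foreground alpha, shape (h, w, 1)
    ba = bg[..., 3:4] / 255.0                  # background alpha
    out_a = fa + ba * (1 - fa)                 # resulting alpha
    safe_a = np.where(out_a == 0, 1, out_a)    # avoid division by zero
    out_rgb = (fg[..., :3] * fa + bg[..., :3] * ba * (1 - fa)) / safe_a
    return np.dstack([out_rgb, out_a * 255]).astype(np.uint8)
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
merged = over_composite(image2, image1)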
You could use the PIL library to achieve this:
from PIL import Image
def merge_images(im1, im2):
    bg = Image.open(im1).convert("RGBA")
    fg = Image.open(im2).convert("RGBA")
    x, y = ((bg.width - fg.width) // 2, (bg.height - fg.height) // 2)
    bg.paste(fg, (x, y), fg)
    # convert to 8 bits (palette mode)
    return bg.convert("P")
We can test it using the provided images:
result_image = merge_images("image1.png", "image2.png")
result_image.save("image3.png")
Here's the result:

Python: Convert image from RGB to YDbDr color space

I'm trying to convert an image from the RGB color space to the YDbDr color space according to the formulas:
Y = 0.299R + 0.587G + 0.114B
Db = -0.45R - 0.883G +1.333B
Dr = -1.333R + 1.116G + 0.217B
With the following code I'm trying to show only the Y channel, which should be a grayscale image, but I keep getting an image that is entirely blue:
import numpy as np
from PIL import Image
import cv2
import matplotlib.pyplot as plt
img = cv2.imread("./pics/Slike_modela/Test/Proba/1_Color.png")
new_img = []
for row in img:
    new_row = []
    for pixel in row:
        Y = 0.299*pixel[2] + 0.587*pixel[1] + 0.114*pixel[0]
        Db = -0.45*pixel[2] - 0.883*pixel[1] + 1.333*pixel[0]
        Dr = -1.333*pixel[2] + 1.116*pixel[1] + 0.217*pixel[0]
        new_pixel = [Y, Db, Dr]
        new_row.append(new_pixel)
    new_img.append(new_row)
new_img_arr = np.array(new_img)
new_img_arr_y = new_img_arr.copy()
new_img_arr_y[:,:,1] = 0
new_img_arr_y[:,:,2] = 0
print (new_img_arr_y)
cv2.imshow("y image", new_img_arr_y)
key = cv2.waitKey(0)
When printing the result array I see the correct numbers according to the formula, and the correct array shape.
What is my mistake? How do I get the Y channel as a grayscale image?
When processing images with Python, you really, really should try to avoid:
- treating images as lists and appending millions and millions of pixels, each of which creates a whole new object and takes space to administer
- processing images with for loops, which are very slow
The better way to deal with both of these is to use Numpy or other vectorised code libraries or techniques. That is why OpenCV, wand, and scikit-image open and handle images as Numpy arrays.
So, you basically want to do a dot product of the colour channels with a set of 3 weights:
import cv2
import numpy as np
# Load image
im = cv2.imread('paddington.png', cv2.IMREAD_COLOR)
# Calculate Y using Numpy "dot()"
Y = np.dot(im[...,:3], [0.114, 0.587, 0.299]).astype(np.uint8)
That's it.
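If you also need the Db and Dr channels, the same vectorised idea extends to a single matrix multiply (a sketch using the coefficients from the question; note that Db and Dr can be negative, so keep them as floats rather than uint8):
# YDbDr transform matrix: rows = (Y, Db, Dr), columns = (R, G, B)
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.450, -0.883,  1.333],
              [-1.333,  1.116,  0.217]])
rgb = im[..., ::-1].astype(np.float64)   # OpenCV loads BGR, so reverse to RGB
ydbdr = rgb @ M.T                        # shape (h, w, 3): Y, Db, Dr per pixel
Y = ydbdr[..., 0].astype(np.uint8)       # Y alone is a valid 0-255 grayscale image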

Convert IR image to RGB with Python

The code below is intended to take an infrared image (B&W) and convert it to RGB. It does so successfully, but with significant noise. I have included a few lines for noise reduction but they don't seem to help. I've included the starting/resulting photos below. Any advice/corrections are welcome and thank you in advance!
from skimage import io
import numpy as np
import glob, os
from tkinter import Tk
from tkinter.filedialog import askdirectory
import cv2
path = askdirectory(title='Select PNG Folder') # shows dialog box and return the path
outpath = askdirectory(title='Select SAVE Folder')
# wavelength in microns
MWIR = 4.5
R = .642
G = .532
B = .44
vector = [R, G, B]
vectorsum = np.sum(vector)
vector = vector / vectorsum #normalize
vector = vector*255 / MWIR #changing this value changes the outcome significantly so I
#have been messing with it in the hopes of fixing it but no luck so far.
vector = np.power(vector, 4)
for file in os.listdir(path):
    if file.endswith(".png"):
        imIn = io.imread(os.path.join(path, file))
        imOut = imIn * vector
        ret, thresh = cv2.threshold(imOut, 64, 255, cv2.THRESH_BINARY)
        kernel = np.ones((5, 5), np.uint8)
        erode = cv2.erode(thresh, kernel, iterations=1)
        result = cv2.bitwise_or(imOut, erode)
        io.imsave(os.path.join(outpath, file) + '_RGB.png', imOut.astype(np.uint8))
Your noise looks like completely random values, so I suspect you have an error in your conversion from float to uint8. But instead of rolling everything yourself, why don't you just use:
imOut = cv2.cvtColor(imIn,cv2.COLOR_GRAY2BGR)
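If you do keep the manual per-channel weighting, the likely culprit is the uint8 cast: casting floats above 255 can wrap around, which looks exactly like random noise. Clipping first avoids that (a one-line sketch reusing the names from the question, with numpy imported as np):
imOut = np.clip(imIn * vector, 0, 255).astype(np.uint8)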
Here is one way to do that in Python/OpenCV.
Your issue is likely that your channel values are exceeding the 8-bit range.
Sorry, I do not understand the relationship between your R,G,B weights and your MWIR. Dividing by MWIR will do nothing if your weights are properly normalized.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('car.jpg')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# make color channels
red = gray.copy()
green = gray.copy()
blue = gray.copy()
# set weights
R = .642
G = .532
B = .44
MWIR = 4.5
# get sum of weights and normalize them by the sum
R = R**4
G = G**4
B = B**4
sum = R + G + B
R = R/sum
G = G/sum
B = B/sum
print(R,G,B)
# combine channels with weights
red = (R*red)
green = (G*green)
blue = (B*blue)
result = cv2.merge([red,green,blue])
# scale by ratio of 255/max to stretch to the full dynamic range
max=np.amax(result)
result = ((255/max)*result).clip(0,255).astype(np.uint8)
# write result to disk
cv2.imwrite("car_colored.png", result)
# display it
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result
If the noise is coming from the sensor itself, like a grainy noise, you'll need to look into denoising algorithms. scikit-image and opencv provide some denoising algorithms you can try. Maybe take a look at this and this.
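For example, OpenCV's non-local means denoiser is one option (a sketch; 'noisy_RGB.png' is a hypothetical file name, and the strength parameters h/hColor need tuning per image):
import cv2
img = cv2.imread('noisy_RGB.png')  # hypothetical input file
denoised = cv2.fastNlMeansDenoisingColored(img, None, h=10, hColor=10,
                                           templateWindowSize=7, searchWindowSize=21)
cv2.imwrite('denoised.png', denoised)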
I recently learned about matplotlib.cm, which handles colormaps. I've been using those to artificially color IR images, and made a brief example using the same black & white car image used above. Basically, I create a colormap .csv file locally, then refer to it for RGB weights. You may have to pick and choose which colormap you prefer, but that's up to personal preference.
Input image:
Python:
import os
import numpy as np
import cv2
from matplotlib import cm
# Multiple colormap options are available- I've hardcoded viridis for this example.
colormaps = ["viridis", "plasma", "inferno", "magma", "cividis"]
def CreateColormap():
    if not os.path.exists("viridis_colormap.csv"):
        # Get 256 entries from "viridis" or any other Matplotlib colormap
        colormap = cm.get_cmap("viridis", 256)
        # Make a Numpy array of the 256 RGB values
        # Each line corresponds to an RGB colour for a greyscale level
        np.savetxt("viridis_colormap.csv", (colormap.colors[...,0:3]*255).astype(np.uint8), fmt='%d', delimiter=',')

def RecolorInfraredImageToRGB(ir_image):
    # Load RGB lookup table from CSV file
    lookup_table = np.loadtxt("viridis_colormap.csv", dtype=np.uint8, delimiter=",")
    # Make output image, same height and width as IR image, but 3-channel RGB
    result = np.zeros((*ir_image.shape, 3), dtype=np.uint8)
    # Take entries from RGB LUT according to greyscale values in image
    np.take(lookup_table, ir_image, axis=0, out=result)
    return result

if __name__ == "__main__":
    CreateColormap()
    img = cv2.imread("bwcar.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    recolored = RecolorInfraredImageToRGB(gray)
    cv2.imwrite("car_recolored.png", recolored)
    cv2.imshow("Viridis recolor", recolored)
    cv2.waitKey(0)
Output:

How to represent a binary image as a graph with the axes being the height and width dimensions and the data being the pixels

I am trying to use Python along with OpenCV, numpy and matplotlib to do some computer vision for a robot that will use a railing to navigate. I am currently extremely stuck and have run out of places to look. My current code is:
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('railings.jpg')
railing_image = np.copy(image)
resized_image = cv2.resize(railing_image,(881,565))
gray = cv2.cvtColor(resized_image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 85, 255)
cv2.imshow('test',canny)
image_array = np.array(canny)
ncols, nrows = image_array.shape
count = 0
scan = np.array
for x in range(0, image_array.shape[1]):
    for y in range(0, image_array.shape[0]):
        if image_array[y, x] == 0:
            count += 1
    scan = [scan, count]
print(scan)
plt.plot([0, count])
plt.axis([0, nrows, 0, ncols])
plt.show()
cv2.waitKey(0)
I am using a Canny edge image, which is stored in an array of 1s and 0s; the image I need represented is:
The final result should look something like the following image.
I've tried using a histogram function but I've only managed to get that to output essentially a count of the number of times a 1 or 0 appears.
If anyone could help me, or point me in the right direction toward producing a graph that represents the image pixels against the height and width dimensions, I would appreciate it.
Thank you
I'm not sure how general this is, but you could just use numpy argmax to get the location of the maximum (like this) in your case. You should avoid loops, as they will be very slow; it is better to use numpy functions. I've imported your image and used the cutoff criterion that 200 or more in the yellow channel is railing:
import cv2
import numpy as np
import matplotlib.pyplot as plt
#This loads the canny image you uploaded
image = cv2.imread('uojHJ.jpg')
#Trim off the top taskbar
trimimage = image[100:, :,0]
#Use argmax with 200 cutoff colour in one channel
maxindex = np.argmax(trimimage[:,:]>200, axis=0)
#Plot graph
plt.plot(trimimage.shape[0] - maxindex)
plt.show()
The result looks as follows:

Convert a black and white image to array of numbers?

As the image above suggests, how can I convert the image on the left into an array that represents the darkness of the image, between 0 for white and values closer to 1 for darker colours, as shown in the image, using Python 3?
Update:
I have tried to work abit more on this. There are good answers below too.
import tensorflow as tf
import numpy
from IPython.display import Image  # assuming notebook-style inline display
# Load image
filename = tf.constant("one.png")
image_file = tf.read_file(filename)
# Show image
Image("one.png")
#convert method
def convertRgbToWeight(rgbArray):
    arrayWithPixelWeight = []
    for i in range(int(rgbArray.size / rgbArray[0].size)):
        for j in range(int(rgbArray[0].size / 3)):
            lum = 255 - ((rgbArray[i][j][0] + rgbArray[i][j][1] + rgbArray[i][j][2]) / 3)  # Reversed luminosity
            arrayWithPixelWeight.append(lum / 255)  # Map values from range 0-255 to 0-1
    return arrayWithPixelWeight
# Convert image to numbers and print them
image_decoded_png = tf.image.decode_png(image_file,channels=3)
image_as_float32 = tf.cast(image_decoded_png, tf.float32)
numpy.set_printoptions(threshold=numpy.nan)
sess = tf.Session()
squeezedArray = sess.run(image_as_float32)
convertedList = convertRgbToWeight(squeezedArray)
print(convertedList) # This will give me an array of numbers.
I would recommend reading in images with opencv. The biggest advantage of opencv is that it supports multiple image formats and automatically transforms the image into a numpy array. For example:
import cv2
import numpy as np
img_path = '/YOUR/PATH/IMAGE.png'
img = cv2.imread(img_path, 0) # read image as grayscale. Set second parameter to 1 if rgb is required
Now img is a numpy array with values between 0 and 255. By default 0 equals black and 255 equals white. To change this you can use the opencv built-in function bitwise_not:
img_reverted= cv2.bitwise_not(img)
We can now scale the array with:
new_img = img_reverted / 255.0  # now all values range from 0 to 1, where white equals 0.0 and black equals 1.0
Load the image and then just invert and divide by 255.
Here is the image ('Untitled.png') that I used for this example: https://ufile.io/h8ncw
import numpy as np
import cv2
import matplotlib.pyplot as plt
my_img = cv2.imread('Untitled.png')
inverted_img = (255.0 - my_img)
final = inverted_img / 255.0
# Visualize the result
plt.imshow(final)
plt.show()
print(final.shape)
(661, 667, 3)
Results (final object represented as image):
You can use the PIL package to manage images. Here's an example of how it can be done.
from PIL import Image
image = Image.open('sample.png')
width, height = image.size
pixels = image.load()
# Check if has alpha, to avoid "too many values to unpack" error
has_alpha = len(pixels[0,0]) == 4
# Create empty 2D list
fill = 1
array = [[fill for x in range(width)] for y in range(height)]
for y in range(height):
    for x in range(width):
        if has_alpha:
            r, g, b, a = pixels[x, y]
        else:
            r, g, b = pixels[x, y]
        lum = 255 - ((r + g + b) / 3)  # Reversed luminosity
        array[y][x] = lum / 255  # Map values from range 0-255 to 0-1
I think it works, but please note that the only test I did was whether the values are in the desired range:
# Test max and min values
h, l = 0, 1
for row in array:
    h = max([max(row), h])
    l = min([min(row), l])
print(h, l)
You have to load the image from the path and then transform it to a numpy array.
The values of the image will be between 0 and 255. The next step is to standardize the numpy array.
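A minimal sketch of those three steps (assuming a grayscale image at 'sample.png'; the file name is only an example):
import numpy as np
from PIL import Image
img = np.asarray(Image.open('sample.png').convert('L'), dtype=np.float64)  # values 0-255
darkness = (255 - img) / 255.0  # standardize: 0.0 = white, 1.0 = black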
Hope it helps.
