Recovering an image from Gaussian noise given a random seed - python

I've been trying to add reproducible Gaussian noise by fixing a random seed, saving the image, reading it back, regenerating the Gaussian noise, and 'subtracting' it to recover the original image. Here's the (pseudo-)code for what I've tried so far:
import cv2
import numpy as np

SEED = 1234
np.random.seed(SEED)
img = cv2.imread(path, -1)  # shape (32, 32, 3)
noise = np.random.normal(loc=0, scale=20, size=(32, 32, 3)).astype(np.uint8)
temp = img + noise
# ignore the noise if the value exceeds 255 or is below 0
temp = np.where(temp < 255, temp, img)
temp = np.where(temp > 0, temp, img)
cv2.imwrite(some_path_and_file_name, temp)
Then, I read the image file with the Gaussian noise in the same way. When I 'ignore' the noise, I keep track of the matrix indices where the 'ignoring' happened, and use that information to recover the original data:
import cv2
import numpy as np

img = cv2.imread(path_to_noise_img, -1)
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=(32, 32, 3)).astype(np.uint8)
temp = img - noise
# flag is the matrix holding the indices where the 'ignoring' happened
recovered_img = np.where(flag == 0, temp, img)
cv2.imwrite(some_path_and_file_name, recovered_img)
However, when I open the two images, they are different. I have checked that the Gaussian noise is the same every time, so it feels like some sort of irreversible conversion is happening when I read or write the image file.
I am having trouble debugging this since I am new to Python.
Any help would be appreciated. Thanks!
Edit
I tried saving the image and loading it again using OpenCV calls, without modifying anything, and compared the values. From what I can see, the values being read back are different. What should I fix in my code to prevent this from happening?
Solution
The code below worked like a charm (thanks to all the comments and the answer).
import cv2
import numpy as np

SEED = 1234

np.random.seed(SEED)
original = cv2.imread("C:/Users/user/project/1.png", cv2.IMREAD_UNCHANGED)
n = np.random.normal(loc=0, scale=20, size=original.shape).astype(np.int32)
n = original.astype(np.int32) + n
# record where the noise was ignored because the sum left the uint8 range
floor = np.where(n < 255, 0, 1)
t = np.where(n < 255, n, original)
ceil = np.where(t > 0, 0, 1)
arr = np.where(t > 0, t, original)
flag = floor + ceil
cv2.imwrite("C:/Users/user/project/1-1.png", arr.astype(np.uint8))

np.random.seed(SEED)
recover = cv2.imread("C:/Users/user/project/1-1.png", cv2.IMREAD_UNCHANGED).astype(np.int32)
noise = np.random.normal(loc=0, scale=20, size=recover.shape).astype(np.int32)
# flag (kept in memory from above) marks the pixels where the noise was ignored
ans = np.where(flag == 0, recover - noise, recover)
assert (ans.astype(np.uint8) == original.astype(np.uint8)).all()

Multiple issues...
Compression
JPEG is a lossy compression format, which means you will certainly not get back the exact values you had before compression.
PNG is lossless, so it gives you back exactly the values you wrote.
Integer math
Adding two uint8 values results in a uint8 again.
That means your arithmetic always stays in the range 0..255: uint8 values can never be <0 or >255.
Your np.where checks are therefore useless, because after the addition the values are still uint8 and can never fall outside that range.
Further, whenever you add or subtract values, if the result exceeds the representable range, that has to be handled somehow. NumPy simply wraps the values around, as is usual with integer math. Another option is to saturate, i.e. clip; OpenCV functions tend to do that. You can produce either behavior with either library, with some care.
If there is any saturating math in your code, you will definitely not be able to subtract the noise and recover the original image. If there is merely wrapping math in your code, you can recover the original image by subtracting the noise.
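For example, a quick sketch of the two behaviors side by side:
import numpy as np
import cv2 as cv
a = np.array([250], dtype=np.uint8)
b = np.array([10], dtype=np.uint8)
print(a + b)  # [4] -- numpy wraps: (250 + 10) % 256
print(cv.add(a, b))  # [[255]] -- OpenCV saturates: min(250 + 10, 255)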
Debugging
You are discarding a lot of information by reducing the comparison to "are they bit-exact equal or not?"
Use a small image, 3x3 pixels or so, and look at the actual values by printing the numpy arrays.
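For instance, a tiny single-channel sketch makes the casting and wrapping visible at a glance:
import numpy as np
np.random.seed(1234)
img = np.random.randint(0, 256, size=(3, 3), dtype=np.uint8)
noise = np.random.normal(loc=0, scale=20, size=(3, 3)).astype(np.uint8)
print(noise)  # negative samples have wrapped to large uint8 values
print(img + noise)  # sums above 255 wrap around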
Demo
# script 1: add the noise and save the result
import numpy as np
import cv2 as cv

img = cv.imread("image.png", cv.IMREAD_UNCHANGED)
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=img.shape).astype(np.uint8)
img_with_noise = img + noise  # this will wrap around
cv.imwrite("image_with_noise.png", img_with_noise)

# script 2: regenerate the noise and subtract it
import numpy as np
import cv2 as cv

img = cv.imread("image.png", cv.IMREAD_UNCHANGED)
img_with_noise = cv.imread("image_with_noise.png", cv.IMREAD_UNCHANGED)
# generate the noise the exact same way
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=img.shape).astype(np.uint8)
img_recovered = img_with_noise - noise  # wrapping around again, backwards
cv.imwrite("image_recovered.png", img_recovered)
# images should be equal in all values of all pixels
assert (img_recovered == img).all()

Related

How can I reproduce an image out of randomly shuffled pixels?

Hi, I am using this Python code to generate a shuffled-pixel image (my input and output images are shown above). Is there any way to reverse the process? For example, I give the code's output photo to the program and it reproduces the original photo again.
I am trying to generate a static-style image and reverse it back into the original image, and I am open to any other ideas for replacing this code:
from PIL import Image
import numpy as np
orig = Image.open('lena.jpg')
orig_px = orig.getdata()
orig_px = np.reshape(orig_px, (orig.height * orig.width, 3))
np.random.shuffle(orig_px)
orig_px = np.reshape(orig_px, (orig.height, orig.width, 3))
res = Image.fromarray(orig_px.astype('uint8'))
res.save('out.jpg')
Firstly, bear in mind that JPEG is lossy - so you will never get back what you write with JPEG - it changes your data! So, use PNG if you want to read back losslessly exactly what you started with.
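To see this for yourself, here's a quick round-trip sketch (any RGB test image will do in place of 'lena.png'):
import numpy as np
from PIL import Image
orig = np.array(Image.open('lena.png').convert('RGB'))
Image.fromarray(orig).save('roundtrip.jpg')  # lossy
Image.fromarray(orig).save('roundtrip.png')  # lossless
print((np.array(Image.open('roundtrip.jpg')) == orig).all())  # almost certainly False
print((np.array(Image.open('roundtrip.png')) == orig).all())  # True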
You can do what you ask like this:
#!/usr/bin/env python3
import numpy as np
from PIL import Image

def shuffleImage(im, seed=42):
    # Get pixels and put in Numpy array for easy shuffling
    pix = np.array(im.getdata())
    # Generate an array of shuffled indices
    # Seed random number generation to ensure same result
    np.random.seed(seed)
    indices = np.random.permutation(len(pix))
    # Shuffle the pixels and recreate image
    shuffled = pix[indices].astype(np.uint8)
    return Image.fromarray(shuffled.reshape(im.height, im.width, 3))

def unshuffleImage(im, seed=42):
    # Get shuffled pixels in Numpy array
    shuffled = np.array(im.getdata())
    nPix = len(shuffled)
    # Generate unshuffler: the inverse of the shuffling permutation
    np.random.seed(seed)
    indices = np.random.permutation(nPix)
    unshuffler = np.zeros(nPix, np.uint32)
    unshuffler[indices] = np.arange(nPix)
    unshuffledPix = shuffled[unshuffler].astype(np.uint8)
    return Image.fromarray(unshuffledPix.reshape(im.height, im.width, 3))

# Load image and ensure RGB, i.e. not palette image
orig = Image.open('lena.png').convert('RGB')
result = shuffleImage(orig)
result.save('shuffled.png')
unshuffled = unshuffleImage(result)
unshuffled.save('unshuffled.png')
Which turns Lena into this:
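As a side note on the design: the unshuffler array above scatters np.arange(nPix) into the shuffled positions to build the inverse permutation. np.argsort(indices) computes the same inverse, because sorting a permutation's values recovers the positions that undo it:
unshuffler = np.argsort(indices)  # equivalent inverse permutation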
It's impossible to do that reliably as far as I know. Theoretically you could brute force it by shuffling the pixels over and over and feeding the result into Amazon Rekognition, but you would end up with a huge AWS bill and probably only something that is approximately the original picture.

skeletonization (thinning) of small images not giving expected results - python

I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect the problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
from skimage.morphology import skeletonize

# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above produces branches also towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in this case the ring around the object becomes a square on the image boundary.
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
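For example, the two double loops in the question reduce to one-liners (a sketch reusing the question's variable names):
binaryArr = (arr == 255).astype(np.float64)  # replaces the first double loop
bSkeleton = np.where(backgroundSkeleton, 255.0, 0.0)  # replaces the second double loop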
It seems scikit-image is a much better choice than cv2 here. Since the package defines functions for binary images, if you are playing with BW images, then try its ready-to-use skeletonize function.
Note: if the processing loses image details, don't upsample the input right away; try the other functions first. You can again use the skimage morphology functions to enhance details, in which case your code will work on a bigger area of the images too. You could look here.

How to read a hdr image quickly in the RGBE format in Python?

I would like to know how to read an HDR image (.hdr), obtaining the pixel values in the RGBE format, quickly and efficiently in Python.
These are some things I tried:
import imageio
img = imageio.imread(hdr_path, format="HDR-FI")
alternatively:
import cv2
img = cv2.imread(hdr_path, flags=cv2.IMREAD_ANYDEPTH)
This reads the image, but gives the values in an RGB format.
How do you obtain the fourth channel, the "E" channel, for every pixel, without altering the RGB values?
I would prefer a solution involving only imageio, as I am restricted to using only that module.
If you prefer the RGBE representation over the float representation, you can convert between the two:
import numpy as np

def float_to_rgbe(image, *, channel_axis=-1):
    # ensure channel-last
    image = np.moveaxis(image, channel_axis, -1)
    max_float = np.max(image, axis=-1)
    scale, exponent = np.frexp(max_float)
    scale *= 256.0 / max_float
    image_rgbe = np.empty((*image.shape[:-1], 4))
    # broadcast the per-pixel scale across the three color channels
    image_rgbe[..., :3] = image * scale[..., np.newaxis]
    image_rgbe[..., -1] = exponent + 128
    # zero out pixels that are too dark to represent
    image_rgbe[scale < 1e-32, :] = 0
    # restore original axis order
    image_rgbe = np.moveaxis(image_rgbe, -1, channel_axis)
    return image_rgbe
(Note: this is based on the RGBE reference implementation (found here) and can be further optimized if it actually is the bottleneck.)
In your comment, you mention "If i parse the numpy array manually and split the channels into an E channel, it takes too much time...", but it is hard to tell why that is the case without seeing the code. The above is O(height*width), which seems reasonable for a pixel-level image processing method.
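For example, combined with the imageio call from the question (a sketch; "scene.hdr" is a placeholder path):
import imageio
img = imageio.imread("scene.hdr", format="HDR-FI")  # float RGB data
rgbe = float_to_rgbe(img)  # (..., 4) array: R, G, B mantissas plus the shared exponent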

contrast enhancement: how to linearly stretch the grey levels of an image?

(Screenshots: the image values, the original image, the expected output, and the output I actually get.)
I'm trying to stretch the grey levels from 0-100 to 50-200 in Python, but the output image is not right.
I drew the straight line representing the linear relationship between the two ranges, and in the assignment inside the loop I'm using its equation to compute the output.
What's wrong with my code?
This is my first question, so sorry for any mistakes.
def Contrast_enhancement(img):
    newimg = img
    height = img.shape[0]
    width = img.shape[1]
    for i in range(height):
        for j in range(width):
            if(img[i][j] * 255 >= 0 and img[i][j] * 255 <= 100):
                newimg[i][j] = (((3/2) * (img[i][j] * 255)) + 50)/255
    return newimg
import numpy as np
import copy

def Contrast_enhancement(img):
    newimg = np.array(copy.deepcopy(img))  # a real copy of img; otherwise any change to img changes newimg too
    temp_img = np.array(copy.deepcopy(img)) * 3/2 + 50/255
    # img holds float values in [0, 1], so compare against 100/255, not 100
    newimg = np.where(newimg <= 100/255, temp_img, newimg)
    return newimg
or shorter:
import numpy as np
import copy

def Contrast_enhancement(img):
    newimg = np.array(copy.deepcopy(img))  # a real copy of img; otherwise any change to img changes newimg too
    newimg = np.where(newimg <= 100/255, newimg * 3/2 + 50/255, newimg)
    return newimg
The copy part should solve your problem, and the numpy part is just to speed things up. np.where returns temp_img where newimg is <= 100/255 and newimg where it is not.
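A toy example of that selection behavior:
import numpy as np
a = np.array([0.1, 0.3, 0.9])
print(np.where(a <= 100/255, a * 3/2 + 50/255, a))  # first two values stretched, last unchanged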
There are two answers to your question:
The first is strictly technical (the one @DonQuiKong tries to answer), referring to how to do the stretching you describe more simply or correctly.
The other is implicit and tries to answer your actual problem of image stretching.
I am focusing on the second case here. Judging from the image sample you provided, you are not taking the correct approach. Let's assume the samples you provided indeed have all intensity values between 0-100 (from screen capturing on my pc they don't, but that's screen dependent to a degree). Your method seems correct and should work, with minor bugs.
1) A minor bug, for example, is that:
newimg = img
does not do what you think it does. It creates an alias of the original variable. Use:
newimg = img.copy()
instead.
2) If an image with different intensity boundaries comes along, your code breaks. It will silently ignore some pixels, which is presumably not what you want.
3) The stretching you want can be applied to the whole image in that case using something like:
newimg -= np.min(newimg)
newimg /= np.max(newimg)
which stretches your intensities to the full [0, 1] range (for a float image like yours).
4) Judging from your sample images, you also need a more radical stretching (one that sacrifices a bit of image information to increase contrast). Instead of the above you can use a lower limit:
newimg -= np.min(newimg)
newimg /= (np.max(newimg) * 0.5)
This effectively "burns" some pixels, but in your case the result looks closer to your desired one. Alternatively, you can apply a non-linear mapping (a logarithmic one, for example) of old intensities to new ones, and you won't get any "burned" pixels.
A sample with value 0.5:
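A minimal sketch of such a non-linear mapping (assuming a float image in [0, 1]):
import numpy as np
def log_stretch(img):
    # maps [0, 1] -> [0, 1] monotonically, lifting dark values without clipping any pixel
    return np.log1p(img) / np.log(2.0)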

Convert np.array of type float64 to type uint8 scaling values

I have a np.array data which represents a grayscale image.
I need to use SimpleBlobDetector(), which unfortunately only accepts 8-bit images, so I need to convert this image, obviously with some quality loss.
I've already tried:
import numpy as np
import cv2
[...]
data = data / data.max()  # normalizes data to the range 0 - 1
data = 255 * data  # now scale to 0 - 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)
But cv2.imshow is not showing the image as expected; it has strange distortion...
In the end, I only need to convert an np.float64 to np.uint8, scaling all the values and truncating the rest, e.g. 65535 becomes 255, 65534 becomes 254 and so on... Any help?
Thanks.
A better way to normalize your image is to take each value and divide by the largest value representable by the data type. This ensures that values spanning a small dynamic range remain small and aren't inadvertently normalized so that they become gray. For example, if your image had a dynamic range of [0-2], the code right now would scale that to have intensities of [0, 128, 255]. You want these to remain small after converting to np.uint8.
Therefore, divide every value by the largest value possible for the image type, not the actual image itself. You would then scale this by 255 to produce the normalized result. Use numpy.iinfo and provide it the type (dtype) of the image, and you will obtain a structure of information for that type. You would then access the max field from this structure to determine the maximum value.
So with the above, do the following modifications to your code:
import numpy as np
import cv2
[...]
info = np.iinfo(data.dtype)  # get information about the incoming image type (np.iinfo requires an integer dtype; use np.finfo for floats)
data = data.astype(np.float64) / info.max # normalize the data to 0 - 1
data = 255 * data # Now scale by 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)
Note that I've additionally converted the image into np.float64, in case the incoming data type is not floating point, and to maintain precision when doing the division.
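For instance, for the 16-bit case from the question, np.iinfo reports the right maximum, so 65535 maps to 255 and 65534 to 254 as desired:
import numpy as np
data = np.array([65535, 65534, 0], dtype=np.uint16)
info = np.iinfo(data.dtype)
print(info.max)  # 65535
print((255 * (data.astype(np.float64) / info.max)).astype(np.uint8))  # [255 254   0]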
Considering that you are using OpenCV, the best way to convert between data types is to use the normalize function.
img_n = cv2.normalize(src=img, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
However, if you don't want to use OpenCV, you can do this in numpy
def convert(img, target_type_min, target_type_max, target_type):
    imin = img.min()
    imax = img.max()

    a = (target_type_max - target_type_min) / (imax - imin)
    b = target_type_max - a * imax
    new_img = (a * img + b).astype(target_type)
    return new_img
And then use it like this:
imgu8 = convert(img16u, 0, 255, np.uint8)
This is based on an answer that I found on the Cross Validated board, in the comments under this solution: https://stats.stackexchange.com/a/70808/277040
You can use skimage.img_as_ubyte(yourdata); it will convert your numpy array to the range 0->255:
from skimage import img_as_ubyte
img = img_as_ubyte(data)
cv2.imshow("Window", img)
