Image file to vector of pixels with CImg? - Python

I have this in python:
from PIL import Image
import numpy as np
import random
img = Image.open('img.jpg')
#turn img to list of rgb tuples and scramble
pixels = list(img.getdata())
pixels.reverse()
random.shuffle(pixels)
#make new image using scrambled pixels
img2 = Image.new(img.mode, img.size)
img2.putdata(pixels)
img2.save('newimg.png')
I figured I should be working in C++ to keep what I learned last semester fresh in my head, and to prepare for the class I have next semester, which also revolves around C++. So, I found CImg and got a bit overwhelmed by the documentation. What would be CImg's equivalent of line 8, random.shuffle(pixels)?
My end goal is to be able to scramble an image using a known pattern, then use that pattern to unscramble it later. I don't know if this is possible, though. To me it's a bit like asking the following:
given:
srand(x);
int rand_num = rand() % 10;
and rand_num = 7, find x.
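In fact, if the seed (the x above) is known, the scrambling "pattern" does not need to be inverted from its output at all; it can simply be regenerated. A quick illustration of the idea in Python:
import random
random.seed(42)
a = [random.randint(0, 9) for _ in range(5)]
random.seed(42)               # reseeding replays the exact same sequence
b = [random.randint(0, 9) for _ in range(5)]
print(a == b)                 # True: the "pattern" is fully reproducible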

As far as I know, CImg provides iterators to loop over every pixel. Provided that your compiler supports C++11, you could therefore use std::shuffle to shuffle the contents of your image (see the example below). One caveat: CImg stores images plane by plane, so shuffling the begin()..end() range mixes individual channel values rather than moving whole RGB pixels.
// Requires #include "CImg.h", <algorithm>, <chrono> and <random>.
CImg<float> img("lena.jpg"); // Load image from file.
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
std::shuffle(img.begin(), img.end(), std::default_random_engine(seed));


How can I reproduce an image out of randomly shuffled pixels?

Hi, I am using this Python code to generate a shuffled-pixel image. Is there any way to make this process reversible? For example, I give this code's output photo to the program and it reproduces the original photo again.
I am trying to generate a static-looking image and reverse it back into the original image, and I am open to any other ideas for replacing this code:
from PIL import Image
import numpy as np
orig = Image.open('lena.jpg')
orig_px = orig.getdata()
orig_px = np.reshape(orig_px, (orig.height * orig.width, 3))
np.random.shuffle(orig_px)
orig_px = np.reshape(orig_px, (orig.height, orig.width, 3))
res = Image.fromarray(orig_px.astype('uint8'))
res.save('out.jpg')
Firstly, bear in mind that JPEG is lossy - so you will never get back what you write with JPEG - it changes your data! So, use PNG if you want to read back losslessly exactly what you started with.
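To see the difference concretely, here is a quick round-trip check (a minimal sketch of my own, using Pillow and NumPy):
import numpy as np
from PIL import Image
# Random noise is the worst case for JPEG, so the loss is easy to demonstrate.
data = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
im = Image.fromarray(data)
im.save('roundtrip.png')
im.save('roundtrip.jpg')
print((np.array(Image.open('roundtrip.png')) == data).all()) # True:  PNG is lossless
print((np.array(Image.open('roundtrip.jpg')) == data).all()) # False: JPEG changed the data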
You can do what you ask like this:
#!/usr/bin/env python3
import numpy as np
from PIL import Image

def shuffleImage(im, seed=42):
    # Get pixels and put in Numpy array for easy shuffling
    pix = np.array(im.getdata())
    # Generate an array of shuffled indices
    # Seed random number generation to ensure same result
    np.random.seed(seed)
    indices = np.random.permutation(len(pix))
    # Shuffle the pixels and recreate image
    shuffled = pix[indices].astype(np.uint8)
    # Note: Numpy image arrays are indexed (rows, cols), i.e. (height, width)
    return Image.fromarray(shuffled.reshape(im.height, im.width, 3))

def unshuffleImage(im, seed=42):
    # Get shuffled pixels in Numpy array
    shuffled = np.array(im.getdata())
    nPix = len(shuffled)
    # Generate unshuffler, i.e. the inverse permutation
    np.random.seed(seed)
    indices = np.random.permutation(nPix)
    unshuffler = np.zeros(nPix, np.uint32)
    unshuffler[indices] = np.arange(nPix)
    unshuffledPix = shuffled[unshuffler].astype(np.uint8)
    return Image.fromarray(unshuffledPix.reshape(im.height, im.width, 3))

# Load image and ensure RGB, i.e. not palette image
orig = Image.open('lena.png').convert('RGB')
result = shuffleImage(orig)
result.save('shuffled.png')
unshuffled = unshuffleImage(result)
unshuffled.save('unshuffled.png')
Which turns Lena into random-looking noise, and back again.
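The heart of unshuffleImage is building the inverse permutation with unshuffler[indices] = np.arange(nPix). Here is that idea checked in isolation (a standalone sketch of my own):
import numpy as np
np.random.seed(0)
idx = np.random.permutation(5)  # a random but reproducible shuffle order
inv = np.zeros(5, np.uint32)
inv[idx] = np.arange(5)         # the inverse permutation: it undoes idx
a = np.array([10, 11, 12, 13, 14])
print(a[idx])                   # shuffled
print(a[idx][inv])              # [10 11 12 13 14] - original order restored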
It's impossible to do that reliably as far as I know. Theoretically you could brute force it by shuffling the pixels over and over and feeding the result into Amazon Rekognition, but you would end up with a huge AWS bill and probably only something that is approximately the original picture.

How can I remove the effects of Vignetting from an image in Python?

I’m a level 2 student currently undertaking the research-led investigation labs. I am studying sunspots, and as part of the task I am coding a Python module that will (amongst other things) remove the effect of vignetting upon the image – that is, the gradual dimming of light towards the edges of the photo due to effects of the camera. I would like to try and remove this using Python.
My approach to this problem so far has been to take a photo of a background that should be uniform, correct the effect that Vignetting has caused, and divide the array of the image we actually care about by this ‘test’ image (please see the code for more detailed doc strings). However, this isn’t quite working and I’m not sure why. I would guess the problem lies in line 18 – I am simply ‘combining’ the image arrays incorrectly – but I am unsure of how to progress to solve this problem.
As you can probably guess, I’ve given this problem a fair old stab and am currently at a loss as to what to do next – so any advice, help, guidance or hints would be very much appreciated!
Here is my attempt so far:
from __future__ import division
import numpy
from PIL import Image
import matplotlib.pyplot as pyplot
IMAGE_1 = Image.open("Vignette test.jpg") #Import a grayscale test image of a uniform background.
ARRAY_1 = numpy.array(IMAGE_1) #Convert into array and slice into just 2 dimensions.
GRAYSCALE_ARRAY_1 = ARRAY_1[:,:,0]
MAX_PIXEL_1 = numpy.amax(GRAYSCALE_ARRAY_1) #Standardise grayscale array by setting max pixel value to 1.
STANDARDISED_ARRAY_1 = GRAYSCALE_ARRAY_1 / MAX_PIXEL_1
IMAGE_2 = Image.open("IMG_1982.jpg") #Import the image we wish to remove the vignette effect from.
ARRAY_2 = numpy.array(IMAGE_2) #Convert into array and slice into just 2 dimensions.
GRAYSCALE_ARRAY_2 = ARRAY_2[:,:,0]
MAX_PIXEL_2 = numpy.amax(GRAYSCALE_ARRAY_2) #Standardise grayscale array by setting max pixel value to 1.
STANDARDISED_ARRAY_2 = GRAYSCALE_ARRAY_2 / MAX_PIXEL_2
CORRECTED_ARRAY = STANDARDISED_ARRAY_2 / STANDARDISED_ARRAY_1 #Divide the two arrays to remove vignetting.
MAX_PIXEL_3 = numpy.amax(CORRECTED_ARRAY) #Standardise corrected array by setting max pixel value to 1.
STANDARDISED_ARRAY_3 = CORRECTED_ARRAY / MAX_PIXEL_3
ARRAY_3 = STANDARDISED_ARRAY_3 * MAX_PIXEL_2 #Ensure that the max pixel value does not exceed 255.
IMAGE_3 = Image.fromarray(ARRAY_3.astype(numpy.uint8)) #Convert back into an 8-bit image (fromarray cannot handle float64 data).
pyplot.figure(figsize=(10,12))
pyplot.subplot(211)
IMGPLOT = pyplot.imshow(IMAGE_2) #Represent original image graphically with colour bar
IMGPLOT.set_cmap('gray')
pyplot.colorbar()
pyplot.subplot(212)
IMGPLOT = pyplot.imshow(IMAGE_3) #Represent corrected image graphically with colour bar
IMGPLOT.set_cmap('gray')
pyplot.colorbar()
pyplot.show()
Here's what is generated by pyplot (the original images are too large to attach). The top image is the original 'image of interest', and the bottom one should be the 'corrected' image.
It turns out that the 'test' image and the image that we care about must have the same exposure time in order for this to work; nothing is necessarily wrong with the code itself.
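For reference, here is a minimal sketch of the same flat-field idea, assuming both frames are 2D float arrays of the same shape taken with comparable exposure (the function and its names are illustrative, not from the original post):
import numpy as np

def flat_field_correct(image, flat, eps=1e-6):
    gain = flat / flat.mean()                 # normalize so the average gain is 1.0
    corrected = image / np.maximum(gain, eps) # avoid dividing by near-zero corners
    return np.clip(corrected, 0, 255)         # keep the result in 8-bit range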

Create set of random JPGs

Here's the scenario: I want to create a set of random, small JPEGs - anywhere between 50 bytes and 8 KB in size - and the actual visual content of the JPEG is irrelevant as long as they're valid. I need to generate a thousand or so, and they all have to be unique - even if they're only different by a single pixel. Can I just write a JPEG header/footer and some random bytes in there? I'm not able to use existing photos or sets of photos from the web.
The second issue is that the set of images has to be different for each run of the program.
I'd prefer to do this in python, as the wrapping scripts are in Python.
I've looked for Python code to generate JPEGs from scratch and didn't find anything, so pointers to libraries are just as good.
If the images can be just random noise, you could generate an array using numpy.random and save it using PIL's Image.save.
This example might be expanded, including ways to avoid a (very unlikely) repetition of patterns:
import numpy
from PIL import Image

for n in range(10):
    a = numpy.random.rand(30, 30, 3) * 255
    im_out = Image.fromarray(a.astype('uint8')).convert('RGB')
    im_out.save('out%03d.jpg' % n)
These conditions must be met in order to get JPEG images:
The array needs to be shaped (m, n, 3): three colors, R, G and B;
Each element (each color of each pixel) has to be a uint8 (an unsigned 8-bit integer), ranging from 0 to 255.
Additionally, something other than pure randomness might be used to generate the images, in case you don't want pure noise.
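For instance, here is a minimal sketch (my own illustration, not from the original answer) that produces smooth random gradients instead of noise:
import numpy
from PIL import Image

for n in range(10):
    y, x = numpy.mgrid[0:30, 0:30]
    wx, wy = numpy.random.rand(2) * 8  # random direction/frequency per image
    a = (numpy.sin(wx * x / 30 + wy * y / 30) * 0.5 + 0.5) * 255
    rgb = numpy.dstack([numpy.roll(a, s, axis=1) for s in (0, 5, 10)])
    Image.fromarray(rgb.astype('uint8')).save('grad%03d.jpg' % n)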
If you do not care about the content of a file, you can create a valid JPEG using Pillow (PIL.Image.new [0]) this way:
from PIL import Image
width = height = 128
valid_solid_color_jpeg = Image.new(mode='RGB', size=(width, height), color='red')
valid_solid_color_jpeg.save('red_image.jpg')
[0] https://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.new
EDIT: I thought the OP wanted to generate valid images and did not care about their content (that's why I suggested solid-color images). Here's a function that generates valid images with random pixels and, as a bonus, writes a random string onto the generated image. The only dependency is Pillow; everything else is pure Python.
import random
import uuid
from PIL import Image, ImageDraw

def generate_random_image(width=128, height=128):
    rand_pixels = [random.randint(0, 255) for _ in range(width * height * 3)]
    rand_pixels_as_bytes = bytes(rand_pixels)
    text_and_filename = str(uuid.uuid4())
    random_image = Image.frombytes('RGB', (width, height), rand_pixels_as_bytes)
    draw_image = ImageDraw.Draw(random_image)
    draw_image.text(xy=(0, 0), text=text_and_filename, fill=(255, 255, 255))
    random_image.save("{file_name}.jpg".format(file_name=text_and_filename))

# Generate 42 random images:
for _ in range(42):
    generate_random_image()
If you are looking for a way to do this without numpy, this worked for me
(Python 3.6+ for the bytes constructor; you still need Pillow):
import random as r
from PIL import Image

# Mode '1' is 1-bit black and white: 8 pixels are packed per byte, so a
# 200x200 image needs (200 // 8) * 200 = 5000 bytes of data.
dat = bytes(r.randint(0, 255) for _ in range(5000))
i = Image.frombytes('1', (200, 200), dat)
i.save('random_bw.jpg')

Flipping an image vertically, relationship between the original picture and the new one [Python]

I am trying to flip a picture on its vertical axis; I am doing this in Python, using the Media module.
I tried to find the relationship between the original and the flipped image. Since I can't use negative coordinates in Python, I decided to use the middle of the picture as the reference.
So I split the picture in half, and this is the plan: I create a new blank picture and copy each (x, y) pixel to the corresponding (-x, y) if the original pixel is after the middle; if it is before the middle, I copy the pixel at (-x, y) to (x, y).
I coded it in Python, and this is what I have:
import media

pic = media.load_picture(media.choose_file())
height = media.get_height(pic)
width = media.get_width(pic)
mid_width = width // 2   # middle of the picture, used as the reference
new_pic = media.create_picture(width, height)
for pixel in pic:
    x_org = media.get_x(pixel)
    y_org = media.get_y(pixel)
    colour = media.get_color(pixel)
    new_pixel_0 = media.get_pixel(new_pic, x_org + mid_width, y_org) #replace with suggested answer below
    media.set_color(new_pixel_0, colour)
media.show(new_pic)
This is not what I wanted, and I am confused. I tried to find the relationship between the original pixel location and its transform, (x, y) -> (-x, y), but I think that is wrong. If anyone could help me with this method, I would be grateful.
At the end of the day, I want a picture like this:
http://www.misterteacher.com/alphabetgeometry/transformations.html#Flip
Why not just use the Python Imaging Library? Flipping an image horizontally is a one-liner, and much faster to boot.
from PIL import Image
img = Image.open("AFLAC.jpg").transpose(Image.FLIP_LEFT_RIGHT)
Your arithmetic is incorrect. Try this instead:
new_pixel_0 = media.get_pixel(new_pic, width - x_org, y_org)
There is no need to treat the two halves of the image separately.
This essentially negates the x-coordinate, as your first diagram illustrates, but then slides (translates) the flipped image by width pixels to the right to put it back in the range (0, width). (If the module uses 0-based coordinates, you may need width - 1 - x_org to stay in range; that detail depends on the Media module.)
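To see the mapping concretely, here is a small standalone sketch of my own, independent of the media module:
width = 5
row = ['A', 'B', 'C', 'D', 'E']
# Mirror across the vertical axis: x -> (width - 1) - x for 0-based indices.
flipped = [row[(width - 1) - x] for x in range(width)]
print(flipped) # ['E', 'D', 'C', 'B', 'A']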
Here is a simple function to flip an image using scipy and numpy:
import numpy as np
from scipy.misc import imread # removed in newer SciPy; imageio.v2.imread is a near drop-in replacement
import matplotlib.pyplot as plt

def flip_image(file_name):
    img = imread(file_name)
    flipped_img = np.ndarray(img.shape, dtype='uint8')
    flipped_img[:, :, 0] = np.fliplr(img[:, :, 0])
    flipped_img[:, :, 1] = np.fliplr(img[:, :, 1])
    flipped_img[:, :, 2] = np.fliplr(img[:, :, 2])
    plt.imshow(flipped_img)
    return flipped_img
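Note, incidentally, that np.fliplr flips along the second (column) axis, so a single np.fliplr(img) call handles an (h, w, 3) array with all three channels at once; the per-channel loop above is for illustration.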

Finding number of colored shapes from picture using Python

My problem has to do with recognising colours from pictures. Doing microbiology I need to count the number of cell nuclei present on a picture taken with a microscope camera. I've used GIMP to tag the nuclei with dots of red colour. Now I'd need to make a script in python, which, given an image, would tell me how many red dots are present. There is no red in the picture except in the dots.
I've thought of a rather complicated solution which is probably not the best one: take a picture and start iterating through the pixels, checking each one's colour. If it is red, check all 8 nearest pixels, recursively check each red one's neighbours again until no more neighbouring red pixels are found; then increment the nuclei count by one and mark the traversed pixels so they won't be iterated over again, and continue the iteration from where it stopped. This seems kind of heavy, so I thought I'd ask; maybe someone has already dealt with a similar problem more elegantly.
Regards,
Sander
Count nuclei
The code is adapted from the Python Image Tutorial. Input image with nuclei from the tutorial:
#!/usr/bin/env python
import scipy.misc # scipy.misc.imread was removed in newer SciPy; imageio.v2.imread is a near drop-in replacement
from scipy import ndimage
# read image into numpy array
# $ wget http://pythonvision.org/media/files/images/dna.jpeg
dna = scipy.misc.imread('dna.jpeg') # gray-scale image
# smooth the image (to remove small objects); set the threshold
dnaf = ndimage.gaussian_filter(dna, 16)
T = 25 # set threshold by hand to avoid installing `mahotas` or
       # `scipy.stsci.image` dependencies that have threshold() functions
# find connected components
labeled, nr_objects = ndimage.label(dnaf > T) # `dna[:,:,0]>T` for red-dot case
print("Number of objects is %d" % nr_objects)
# show labeled image
####scipy.misc.imsave('labeled_dna.png', labeled)
####scipy.misc.imshow(labeled) # black&white image
import matplotlib.pyplot as plt
plt.imsave('labeled_dna.png', labeled)
plt.imshow(labeled)
plt.show()
Output
Number of objects is 17
I would do it like this:
use OpenCV (Python bindings),
take only the R component of the RGB image,
binary-threshold the R component, so that it leaves only the reddest pixels,
use some object/feature detection to detect the dots, e.g. ExtractSURF.
Comments:
it will not be the fastest, and it will not always be accurate.
But it will be fun to do - as CV is always fun - and ready in 10 lines of code. Just a loose thought.
As for the more production-ready suggestions:
actually I think that your idea is very good, and it can be parallelized if given some thought (see the flood-fill sketch below);
use blob detection in OpenCV (cvBlobsLib).
But the most elegant solution would simply be to count the tagged nuclei in GIMP, as Ocaso Protal has suggested above: accurate and fastest. Everything else will be prone to mistakes and much, much slower, so mine are just loose ideas, more fun than anything.
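For reference, the approach described in the question is a connected-component flood fill. Here is a minimal sketch of my own over a 2D boolean mask (the helper and its name are illustrative, not from the original post); a queue-based fill avoids the deep recursion that the recursive neighbour-checking idea would hit on large blobs:
from collections import deque

def count_blobs(mask):
    # Count 8-connected regions of True cells in a 2D list of booleans.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            count += 1  # found a new blob; flood-fill all of it
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count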
A simple Numpy / Scipy solution would be something like:
import numpy, scipy.misc
a = scipy.misc.imread("rgb.jpg") # Reads the RGB image into a (height, width, 3) numpy array:
                                 # a[:,:,0] is red, a[:,:,1] is green, a[:,:,2] is blue
num_red = numpy.sum((a[:,:,0] == 255) * (a[:,:,1] == 0) * (a[:,:,2] == 0)) # Counts the number of pure red pixels
You could also use PIL to read the image.
EDIT: In light of the comments, scipy.ndimage.measurements.label would be useful, and it also returns a value num_features which gives you the count:
import numpy, scipy.misc
from scipy import ndimage
a = scipy.misc.imread("rgb.jpg")
b = ((a[:,:,0] == 255) * (a[:,:,1] == 0) * (a[:,:,2] == 0)) * 1
labeled_array, num_features = ndimage.measurements.label(b.astype('int8'))
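One caveat (my note, not from the original answer): if the tagged image was saved with lossy compression, the dots will no longer be exactly (255, 0, 0), so exact matches may miss them. A sketch with a tolerance band, using imageio since scipy.misc.imread is gone from newer SciPy (the thresholds are illustrative):
import numpy as np
import imageio.v2 as imageio
from scipy import ndimage

a = imageio.imread("rgb.png") # hypothetical tagged image; PNG keeps the reds exact
# Allow some tolerance around pure red in case of lossy compression.
red_mask = (a[:, :, 0] > 200) & (a[:, :, 1] < 60) & (a[:, :, 2] < 60)
labeled_array, num_features = ndimage.label(red_mask)
print("Number of red dots: %d" % num_features)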
