I want to convert an image in the form of a NumPy array to a PySide QPixmap, so I can display it (EDIT: in my PySide UI). I already found this tool: qimage2ndarray, but it only works with PyQt4. I tried to change it to get it working with PySide, but I would have to change the C part of the tool, and I have no experience with C. How can I do this, or are there any alternatives?
One alternative is to just use the PIL library.
>>> import numpy as np
>>> from PIL import Image  # plain "import Image" with the old standalone PIL
>>> im = Image.fromarray(np.random.randint(0,256,size=(100,100,3)).astype(np.uint8))
>>> im.show()
You can look at the QImage constructor at http://www.pyside.org/docs/pyside/PySide/QtGui/QImage.html.
It looks like you should be able to use a numpy array directly in the constructor:
class PySide.QtGui.QImage(data, width, height, format)
where the format argument is one of these: http://www.pyside.org/docs/pyside/PySide/QtGui/QImage.html#PySide.QtGui.PySide.QtGui.QImage.Format.
So, for example you could do something like:
>>> a = np.random.randint(0,256,size=(100,100,3)).astype(np.uint32)
>>> b = (255 << 24 | a[:,:,0] << 16 | a[:,:,1] << 8 | a[:,:,2]).flatten() # pack RGB values
>>> im = PySide.QtGui.QImage(b, 100, 100, PySide.QtGui.QImage.Format_RGB32)
I don't have PySide installed so I haven't tested this. Chances are it won't work as is, but it might guide you in the right direction.
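For what it's worth, here is a self-contained version of the same idea as a hedged sketch (equally untested; it assumes the documented QImage(data, width, height, format) overload accepts a NumPy buffer, and that the packed array is kept alive while the image exists):

import sys
import numpy as np
from PySide import QtGui

app = QtGui.QApplication(sys.argv)

a = np.random.randint(0, 256, size=(100, 100, 3)).astype(np.uint32)
# Pack each pixel as 0xAARRGGBB; keep `b` alive as long as the QImage exists,
# because QImage does not copy the buffer.
b = (255 << 24 | a[:, :, 0] << 16 | a[:, :, 1] << 8 | a[:, :, 2]).flatten()

img = QtGui.QImage(b, 100, 100, QtGui.QImage.Format_RGB32)
pixmap = QtGui.QPixmap.fromImage(img)

label = QtGui.QLabel()
label.setPixmap(pixmap)
label.show()
sys.exit(app.exec_())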
If you create the data yourself, using numpy for example, I think the fastest method is to directly access a QImage. You can create a ndarray from the buffer object QImage.bits(), do some work using the numpy methods and create a QPixmap from QImage when you are done. You can also read or modify existing QImages that way.
import numpy as np
from PySide.QtGui import QImage
img = QImage(30, 30, QImage.Format_RGB32)
imgarr = np.ndarray(shape=(30,30), dtype=np.uint32, buffer=img.bits())
# qt write, numpy read
img.setPixel(0, 0, 5)
print "%x" % imgarr[0,0]
# numpy write, qt read
imgarr[0,1] = 0xff000006
print "%x" % img.pixel(1,0)
Be sure that the array does not outlive the image object. If you want, you can use a more sophisticated dtype, like a record array for individual access to the alpha, red, green and blue bytes (beware of endianness though).
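As a rough illustration of that record-array idea (my own sketch, not tested everywhere; it assumes a little-endian machine, where each Format_RGB32 pixel is laid out in memory as the bytes B, G, R, A):

import numpy as np
from PySide.QtGui import QImage, QPixmap

img = QImage(30, 30, QImage.Format_RGB32)
# One record per pixel: blue, green, red, alpha byte order on little-endian.
bgra = np.dtype([('b', np.uint8), ('g', np.uint8), ('r', np.uint8), ('a', np.uint8)])
arr = np.ndarray(shape=(30, 30), dtype=bgra, buffer=img.bits())
arr['r'] = 255      # make every pixel fully red
arr['g'] = 0
arr['b'] = 0
arr['a'] = 255      # Format_RGB32 expects the top byte to be 0xff
pixmap = QPixmap.fromImage(img)   # copy into a pixmap once the array work is done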
In case there is no efficient way to calculate the pixel values using numpy, you can also use scipy.weave to inline some C/C++ code that operates on the array img.bits() points to.
If you already have an image in ARGB format, creating the QImage from data as suggested before is probably easier.
In addition to @user545424's answer about using PIL, if you didn't want to depend on PIL, you could manually construct your QImage directly from your np array:
width = 100
height = 100
data = np.random.randint(0,256,size=(width,height,3)).astype(np.uint8)
img = QtGui.QImage(width, height, QtGui.QImage.Format_RGB32)
for x in xrange(width):
    for y in xrange(height):
        img.setPixel(x, y, QtGui.QColor(*data[x][y]).rgb())
pix = QtGui.QPixmap.fromImage(img)
I'm sure that, using PIL, there is a way to read the actual image data into a QImage, but I will let @user545424 address that part, since it's from their answer. PIL comes with the ImageQt module, which is convenient for directly converting an Image -> QPixmap, but unfortunately that's a PyQt4 QPixmap, which doesn't help you.
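If you do want to route through PIL without ImageQt, one hedged possibility (untested; it assumes the QImage overload that takes a bytes-per-line argument and Format_RGB888 support in your Qt version) is to hand the raw RGB bytes to QImage yourself:

from PIL import Image
from PySide import QtGui

pil_img = Image.open('some_image.png').convert('RGB')   # 'some_image.png' is just a placeholder
w, h = pil_img.size
data = pil_img.tobytes('raw', 'RGB')                     # tostring() on very old PIL versions
qimg = QtGui.QImage(data, w, h, w * 3, QtGui.QImage.Format_RGB888)
pix = QtGui.QPixmap.fromImage(qimg)                      # keep `data` alive while qimg is in use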
If the answer of user545424 does not work as expected (e.g. you see artifacts in the image), then I would suggest you change the format parameter to
PySide.QtGui.QImage.Format_ARGB32
a = np.random.randint(0,256,size=(100,100,3)).astype(np.uint32)
b = (255 << 24 | a[:,:,0] << 16 | a[:,:,1] << 8 | a[:,:,2]).flatten() # pack RGB values
im = PySide.QtGui.QImage(b, 100, 100, PySide.QtGui.QImage.Format_ARGB32)
I'm trying to solve this CTF challenge Image Magic:
https://ctflearn.com/challenge/89
My code:
from PIL import Image
import numpy as np
out=np.asarray(Image.open("out copy.jpg"))
original_pixels=np.zeros([92,304], dtype=tuple)
newcol=-1
for out_col in range(0,27968):
    newrow=out_col%92
    if out_col%92==0:
        newcol+=1
    original_pixels[newrow,newcol]=out[0,out_col]
original=Image.fromarray(original_pixels,"RGB")
original.save("original.png")
The logic and everything should be correct, as it matches a solution I found on the Internet, but for some reason my code is producing gibberish:
The solution I found ( https://medium.com/nerd-for-tech/write-up-ctflearn-image-magic-2caf5d651913 ):
from PIL import Image
img = Image.open('out copy.jpg')
new_img = Image.new('RGB', (400, 400))
n_x = n_y = 0
for i in range(0, img.size[0]):
    r, g, b = img.getpixel((i, 0))
    n_x = (i % 92)
    if i % 92 == 0:
        n_y = n_y + 1
    new_img.putpixel((n_x, n_y), (r, g, b))
new_img.save('answer.jpg')
Which produces the correct image:
As you can see, the logic is basically the same, the only difference is that I used np arrays and the Image.fromarray method to shift the pixels and create the original image. The solution just created a blank image and used the putpixel method to fill the pixels in.
I think the problem is with the fromarray function, but I can't figure out why or how, and existing questions/answers are not helping. This seems like a simple fix, but I'm not really familiar with this stuff. Any suggestions?
The main reason this does not work is that you are creating your original_pixels array as an array of Python objects. I'm frankly not sure what exactly this does in the end, but it is needlessly complicated. If you create your array as a (width, height, 3) array of bytes instead, it works:
original_pixels=np.zeros([92, 304, 3], dtype=np.uint8)
Now, as an "exercise for the reader": Have a look at the reshape method of numpy’s ndarray objects. Could that be put to good use here?
I am using skimage to do some image manipulations via their numpy manipulations. I am able to do the math on my pixels and then show the result using
def image_manip():
    # do manipulations
    return final_image
viewer = ImageViewer(image_manip())
viewer.show()
In parallel, in a different application, I'm able to show an image in QT using:
self.pixmap = QtGui.QPixmap('ImagePath.jpg')
So ideally, I'd like to combine the two into something like this:
def image_manip():
    # do manipulations
    return final_image
self.pixmap = QtGui.QPixmap(image_manip())
Obviously this doesn't work. I get an error TypeError: QPixmap(): argument 1 has unexpected type 'numpy.ndarray'
My guess is that viewer = ImageViewer(image_manip()) and viewer.show() have some magic that allows them to read the skimage/numpy objects directly. In my use case, I don't want to save a file out of skimage (I want to keep it in memory), so I would imagine it needs to be 'baked out' into a common format that Qt can read.
How do I go about doing this?
You can convert a uint8 numpy array (an RGB image with shape (M, N, 3)) to a QPixmap as follows:
from skimage import img_as_ubyte
from PySide.QtGui import QImage, QPixmap   # or the PyQt4 equivalents

arr = img_as_ubyte(arr)
img = QImage(arr.data, arr.shape[1], arr.shape[0],
             arr.strides[0], QImage.Format_RGB888)
pixmap = QPixmap.fromImage(img)
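Two caveats worth noting: QImage does not copy the buffer, so the array must stay alive (and be C-contiguous) as long as the QImage is used, whereas QPixmap.fromImage makes its own copy. Wired into the question's code, that might look roughly like this (a sketch; image_manip() and self.pixmap are the asker's names):

import numpy as np
from skimage import img_as_ubyte
from PySide.QtGui import QImage, QPixmap   # or the PyQt4 equivalents

arr = np.ascontiguousarray(img_as_ubyte(image_manip()))   # (M, N, 3) uint8 RGB
img = QImage(arr.data, arr.shape[1], arr.shape[0],
             arr.strides[0], QImage.Format_RGB888)
self.pixmap = QPixmap.fromImage(img)                      # the pixmap owns its own copy of the data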
I create an image and fill the pixels:
img = Image.new( 'RGB', (2000,2000), "black") # create a new black image
pixels = img.load() # create the pixel map
for i in range(img.size[0]): # for every pixel:
    for j in range(img.size[1]):
        #do some stuff that requires i and j as parameter
Can this be done more elegantly (and maybe faster, since theoretically the loops are parallelizable)?
Note: I will first answer the question, then propose what is, in my opinion, a better alternative.
Answering the question
It is hard to give advice without knowing what changes you intend to apply and whether the loading of the image as a PIL image is part of the question or a given.
More elegant in Python-speak typically means using list comprehensions
For parallelization, you would look at something like the multiprocessing module or joblib
Depending on your method of creating / loading in images, the list_of_pixels = list(img.getdata()) and img.putdata(new_list_of_pixels) functions may be of interest to you.
An example of what this might look like:
from PIL import Image
from multiprocessing import Pool
img = Image.new( 'RGB', (2000,2000), "black")
# a function that fixes the green component of a pixel to the value 50
def update_pixel(p):
    return (p[0], 50, p[2])
list_of_pixels = list(img.getdata())
pool = Pool(4)
new_list_of_pixels = pool.map(update_pixel, list_of_pixels)
pool.close()
pool.join()
img.putdata(new_list_of_pixels)
However, I don't think that is a good idea... When you see loops (and list comprehensions) over thousands of elements in Python and you have performance on your mind, you can be sure there is a library that will make this faster.
Better Alternative
First, a quick pointer to the Channel Operations module.
Since you don't specify the kind of pixel operation you intend to do, and you clearly already know about the PIL library, I'll assume you're aware of that module and that it doesn't do what you want.
Then, any moderately complex matrix manipulation in Python will benefit from pulling in Pandas, Numpy or Scipy...
Pure numpy example:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
img[:,:, 1] = 50
#show
plt.imshow(img)
Since you are just working with a standard numpy.ndarray, you can use any of the available functionalities, such as np.vectorize, apply, map etc. To show a similar solution as above with the update_pixel function:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
def update_pixel(p):
    return (p[0], 50, p[2])
green_img = np.apply_along_axis(update_pixel, 2, img)
#show
plt.imshow(green_img)
One more example, this time calculating the image content directly from the indexes, instead of from existing image pixel content (no need to create an empty image first):
import numpy as np
import matplotlib.pyplot as plt
def calc_pixel(x,y):
    return np.array([100-x, x+y, 100-y])
img = np.frompyfunc(calc_pixel, 2, 1).outer(np.arange(100), np.arange(100))
plt.imshow(np.array(img.tolist()))
#note: I don't know any other way to convert a 2D array of arrays to a 3D array...
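A hedged alternative to the object-array detour, using plain broadcasting for the same calc_pixel formula (my own variant, certainly not the only way):

import numpy as np
import matplotlib.pyplot as plt

# Index grids: x varies along the first axis, y along the second,
# matching the outer(...) call above.
x, y = np.meshgrid(np.arange(100), np.arange(100), indexing='ij')
img = np.stack([100 - x, x + y, 100 - y], axis=-1).astype(np.uint8)
plt.imshow(img)
plt.show()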
And, lo and behold, scipy has methods to read and write images, and in between you can just use numpy to manipulate them as "classic" multi-dimensional arrays. (scipy.misc.imread depends on PIL, by the way.)
More example code.
I am working on a sensor-based Python application built on a PyQt4 GUI. The sensor is generating 16-bit measurements... 256 16-bit "pixels" per "line". A square "image" is acquired by obtaining 256 lines, resulting in a (256, 256) Numpy array of 16-bit numbers. I simply want to display this as a grayscale image.
The sensor loop is running in a QThread and emits a QImage signal. The signal connects to a slot that renders the data in the main GUI by packing it into a 32-bit RGB image. Of course, to pack the 16-bit grayscale pixels into a 32-bit RGB image, I am forced to scale the 16-bit pixels down to 8 bits, and a substantial amount of dynamic range is lost.
A MWE is provided that shows my current strategy (this is obviously not my larger threaded sensor-based application... it simply extracts the salient portions). Please note that I am a Python beginner and I'm doing my best to keep up...
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Grayscale to RGB32 QPixmap tests
"""
import sys
import numpy as np
from PyQt4 import QtGui, QtCore
class PixmapTest(QtGui.QWidget):

    def __init__(self):
        super(PixmapTest, self).__init__()
        self.initUI()

    def initUI(self):
        imglayout = QtGui.QHBoxLayout(self)

        img_16bit = np.random.randint(0,65535,size=(256,256)).astype(np.uint32)
        img_16bit_to_8bit = (img_16bit / 65535.0 * 255).astype(np.uint32)
        packed_img_array = (255 << 24 | (img_16bit_to_8bit) << 16 | (img_16bit_to_8bit) << 8 | (img_16bit_to_8bit)).flatten()

        img = QtGui.QImage(packed_img_array, 256, 256, QtGui.QImage.Format_RGB32)
        pixmap = QtGui.QPixmap(img.scaledToWidth(img.width()*2))

        imglabel = QtGui.QLabel(self)
        imglabel.setPixmap(pixmap)
        imglayout.addWidget(imglabel)

        self.setLayout(imglayout)
        self.move(300, 200)
        self.setWindowTitle('QPixmap Test')
        self.show()


def main():
    app = QtGui.QApplication(sys.argv)
    form = PixmapTest()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
Specifically, my questions are:
Is there a better way? The solution has to remain "lightweight" (i.e., PyQt4 QImage/QPixmap). I can't use Matplotlib or anything heavyweight as it is too slow. The closer to native Python/Numpy the better. I realize this is ultimately a limitation of the QImage class, but I was hoping there was a clever solution I'm just not seeing that lets me keep the current signal/slot "wiring" I have.
Through experimentation, I've found that I have to declare all arrays that ultimately get processed to end up in the QImage as np.uint32 (though np.int32 seems to work as well). It doesn't work if I simply declare the penultimate array as uint32/int32. I don't understand why.
I've played around with altering luminosity with Y' = 0.2126 * R + 0.7152 * G + 0.0722 * B and other similar conversions. Probably "polishing a turd" here, but I thought I'd include this because other answers on SX seem to indicate this is important. Notwithstanding the loss of dynamic range, it seems to work to simply assign the same value to R, G, B as in my MWE.
As requested in a comment below, here is a histogram of some sample data from the sensor to illustrate the dynamic range:
Here I use some function data for demo:
y, x = np.mgrid[-10:10:256j, -10:10:256j]
data = ((np.sin(y**2 + x**2) + 2) * 1000).astype(np.uint16)
img_8bit = (data / 256.0).astype(np.uint8) # use the high 8bit
img_8bit = ((data - data.min()) / (data.ptp() / 255.0)).astype(np.uint8) # map the data range to 0 - 255
img = QtGui.QImage(img_8bit.repeat(4), 256, 256, QtGui.QImage.Format_RGB32)
When using the high 8 bits, it looks like:
When mapping the min & max values to (0, 255), it looks like:
To convert the 8-bit image to 32-bit, you can just call img_8bit.repeat(4); this repeats every byte 4 times, so the memory can be viewed as a uint32 buffer. Since you create the QImage with Format_RGB32 rather than Format_ARGB32, the most significant byte is not used.
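To make the repeat(4) trick concrete, here is a tiny stand-alone illustration (my own sketch, not from the original answer):

import numpy as np

gray = np.array([0x12, 0x34], dtype=np.uint8)   # two 8-bit gray values
packed = gray.repeat(4)                         # bytes: 12 12 12 12 34 34 34 34
pixels = packed.view(np.uint32)                 # -> [0x12121212, 0x34343434]
print([hex(p) for p in pixels])                 # R == G == B == gray; the top byte is ignored by Format_RGB32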
Here's the scenario: I want to create a set of random, small JPEGs - anywhere between 50 bytes and 8k in size - the actual visual content of the JPEGs is irrelevant, as long as they're valid. I need to generate a thousand or so, and they all have to be unique - even if they're only different by a single pixel. Can I just write a JPEG header/footer and some random bytes in there? I'm not able to use existing photos or sets of photos from the web.
The second issue is that the set of images has to be different for each run of the program.
I'd prefer to do this in python, as the wrapping scripts are in Python.
I've looked for Python code to generate JPEGs from scratch and didn't find anything, so pointers to libraries are just as good.
If the images can be just random noise, you could generate arrays using numpy.random and save them using PIL's Image.save.
This example might be expanded, including ways to avoid a (very unlikely) repetition of patterns:
import numpy
from PIL import Image
for n in range(10):
    a = numpy.random.rand(30,30,3) * 255
    im_out = Image.fromarray(a.astype('uint8')).convert('RGB')
    im_out.save('out%000d.jpg' % n)
These conditions must be met in order to get jpeg images:
The array needs to be shaped (m, n, 3) - three colors, R G and B;
Each element (each color of each pixel) has to be a byte integer (uint, or unsigned integer with 8 bits), ranging from 0 to 255.
Additionally, some way other than pure randomness might be used to generate the images, in case you don't want pure noise.
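For instance, one hypothetical non-noise variant (my own sketch, not from the answer above) shifts a smooth gradient by a random offset:

import numpy as np
from PIL import Image

offset = np.random.randint(0, 256)   # vary per generated file; combine with other variation if you need thousands of unique images
x = np.arange(64)
a = ((x[None, :] + x[:, None] + offset) % 256).astype(np.uint8)   # diagonal gradient
img = Image.fromarray(np.dstack([a, a[::-1, :], a.T]), 'RGB')
img.save('gradient.jpg')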
If you do not care about the content of the file, you can create a valid JPEG using Pillow (PIL.Image.new [0]) this way:
from PIL import Image
width = height = 128
valid_solid_color_jpeg = Image.new(mode='RGB', size=(width, height), color='red')
valid_solid_color_jpeg.save('red_image.jpg')
[0] https://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.new
// EDIT: I thought the OP wanted to generate valid images and did not care about their content (that's why I suggested solid-color images). Here's a function that generates valid images with random pixels and, as a bonus, writes a random string onto the generated image. The only dependency is Pillow; everything else is pure Python.
import random
import uuid
from PIL import Image, ImageDraw
def generate_random_image(width=128, height=128):
    rand_pixels = [random.randint(0, 255) for _ in range(width * height * 3)]
    rand_pixels_as_bytes = bytes(rand_pixels)
    text_and_filename = str(uuid.uuid4())

    random_image = Image.frombytes('RGB', (width, height), rand_pixels_as_bytes)

    draw_image = ImageDraw.Draw(random_image)
    draw_image.text(xy=(0, 0), text=text_and_filename, fill=(255, 255, 255))

    random_image.save("{file_name}.jpg".format(file_name=text_and_filename))

# Generate 42 random images:
for _ in range(42):
    generate_random_image()
If you are looking for a way to do this without numpy, this worked for me
(Python 3.6 for bytes; you still need Pillow):
import random as r
from PIL import Image
dat = bytes([r.randint(1,3) for x in range(4500000)])
i = Image.frombytes('1', (200,200), dat)