I am working on a sensor-based Python application built on a PyQt4 GUI. The sensor generates 16-bit measurements, 256 16-bit "pixels" per "line". A square "image" is acquired by obtaining 256 lines, resulting in a (256, 256) NumPy array of 16-bit numbers. I simply want to display this as a grayscale image. The sensor loop runs in a QThread and emits a QImage signal. The signal connects to a slot that renders the data in the main GUI by packing it into a 32-bit RGB image. Of course, to pack the 16-bit grayscale pixels into a 32-bit RGB image, I am forced to scale the 16-bit pixels down to 8 bits, and a substantial amount of dynamic range is lost. An MWE is provided below that shows my current strategy (it is obviously not my larger threaded sensor-based application; it simply extracts the salient portions). Please note that I am a Python beginner and I'm doing my best to keep up...
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Grayscale to RGB32 QPixmap tests
"""
import sys
import numpy as np
from PyQt4 import QtGui, QtCore
class PixmapTest(QtGui.QWidget):

    def __init__(self):
        super(PixmapTest, self).__init__()
        self.initUI()

    def initUI(self):
        imglayout = QtGui.QHBoxLayout(self)

        img_16bit = np.random.randint(0, 65535, size=(256, 256)).astype(np.uint32)
        img_16bit_to_8bit = (img_16bit / 65535.0 * 255).astype(np.uint32)
        packed_img_array = (255 << 24 | img_16bit_to_8bit << 16 |
                            img_16bit_to_8bit << 8 | img_16bit_to_8bit).flatten()

        img = QtGui.QImage(packed_img_array, 256, 256, QtGui.QImage.Format_RGB32)
        pixmap = QtGui.QPixmap(img.scaledToWidth(img.width() * 2))

        imglabel = QtGui.QLabel(self)
        imglabel.setPixmap(pixmap)
        imglayout.addWidget(imglabel)

        self.setLayout(imglayout)
        self.move(300, 200)
        self.setWindowTitle('QPixmap Test')
        self.show()


def main():
    app = QtGui.QApplication(sys.argv)
    form = PixmapTest()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
Specifically, my questions are:
Is there a better way? The solution has to remain "lightweight" (i.e., PyQt4 QImage/QPixmap). I can't use Matplotlib or anything heavyweight as it is too slow. The closer to native Python/Numpy the better. I realize this is ultimately a limitation of the QImage class, but I was hoping there was a clever solution I'm just not seeing that lets me keep the current signal/slot "wiring" I have.
Through experimentation, I've found that I have to declare all of the arrays that ultimately feed into the QImage as np.uint32 (though np.int32 seems to work as well). It doesn't work if I only declare the penultimate array as uint32/int32. I don't understand why.
I've played around with altering luminosity with Y' = 0.2126 * R + 0.7152 * G + 0.0722 * B and other similar conversions. Probably "polishing a turd" here, but I thought I'd include this because other answers on Stack Exchange seem to indicate it is important. Notwithstanding the loss of dynamic range, it seems to work to simply assign the same value to R, G, and B, as in my MWE.
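In NumPy terms, that conversion is roughly the following sketch (with rgb as an (N, M, 3) array; luma is just my placeholder name):
luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]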
As requested in a comment below, here is a histogram of some sample data from the sensor to illustrate the dynamic range:
Here I use some synthetic function data for the demo:
y, x = np.mgrid[-10:10:256j, -10:10:256j]
data = ((np.sin(y**2 + x**2) + 2) * 1000).astype(np.uint16)

img_8bit = (data / 256.0).astype(np.uint8)  # option 1: use the high 8 bits
img_8bit = ((data - data.min()) / (data.ptp() / 255.0)).astype(np.uint8)  # option 2: map the data range to 0-255

img = QtGui.QImage(img_8bit.repeat(4), 256, 256, QtGui.QImage.Format_RGB32)
When using the high 8 bits, it looks like this:
When mapping the min and max values to (0, 255), it looks like this:
To convert the 8-bit image to 32-bit, you can just call img_8bit.repeat(4); this repeats every byte 4 times, so the memory can be viewed as a uint32 buffer. Since you create the QImage with Format_RGB32 rather than Format_ARGB32, the most significant byte is not used.
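Tying that back to the question's MWE, here is a minimal helper sketch (my naming, untested; it assumes the data is not constant, so ptp() is non-zero, and converts straight to a QPixmap, which copies the pixels, so the temporary buffer need not be kept alive):

def gray16_to_pixmap(img_16bit):
    # Map the full 16-bit data range to 0-255 (option 2 above)
    img_8bit = ((img_16bit - img_16bit.min()) /
                (img_16bit.ptp() / 255.0)).astype(np.uint8)
    buf = img_8bit.repeat(4)  # one gray byte per channel, viewable as uint32
    img = QtGui.QImage(buf, 256, 256, QtGui.QImage.Format_RGB32)
    return QtGui.QPixmap.fromImage(img)  # fromImage copies, so buf may be freed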
Related
I am using skimage to do some image manipulations via their NumPy representations. I am able to do the math on my pixels and then show the result using
def image_manip():
    # do manipulations
    return final_image

viewer = ImageViewer(image_manip())
viewer.show()
In parallel, in a different application, I'm able to show an image in Qt using:
self.pixmap = QtGui.QPixmap('ImagePath.jpg')
So ideally, I'd like to combine the two into something like this:
def image_manip():
    # do manipulations
    return final_image

self.pixmap = QtGui.QPixmap(image_manip())
Obviously this doesn't work. I get an error TypeError: QPixmap(): argument 1 has unexpected type 'numpy.ndarray'
My guess is that viewer = ImageViewer(image_manip()) and viewer.show() has some magic to allow it to read the skimage/numpy objects directly. In my use case, I don't want to save a file out of skimage (I want to just keep it in memory), so I would imagine it needs to be 'baked out' so Qt can read it as a common format.
How do I go about doing this?
You can convert a uint8 numpy array (an (M, N, 3) RGB image) to a QPixmap as follows:
from skimage import img_as_ubyte
from PyQt4.QtGui import QImage, QPixmap  # or the equivalent PySide imports

arr = img_as_ubyte(arr)
img = QImage(arr.data, arr.shape[1], arr.shape[0],
             arr.strides[0], QImage.Format_RGB888)
pixmap = QPixmap.fromImage(img)
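One caveat worth adding (my note): QImage expects a contiguous buffer, so if the manipulations leave the array non-contiguous it may need np.ascontiguousarray first. A sketch wiring this into the question's image_manip():

import numpy as np

arr = np.ascontiguousarray(image_manip())  # ensure a contiguous buffer for QImage
arr = img_as_ubyte(arr)
img = QImage(arr.data, arr.shape[1], arr.shape[0],
             arr.strides[0], QImage.Format_RGB888)
self.pixmap = QPixmap.fromImage(img)  # ready for QLabel.setPixmap(...)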
Can anyone tell me how to change an image's color using PyGI (or PyGTK)?
I need a method or property like "ImageColour" in CEGUI, which changes the non-alpha channels of an image. For example:
I have one picture, just a white circle. I need to use this circle in different places of the interface, with different colors. And I don't want to create duplicates of this circle because, for example, I need 256 different colors.
Example pictures:
This is the picture with the white circle that I have
This is the circle with the color I want to see
Here is the code I'm using to change the color:
image = gtk.Image()
image.set_from_file("images/button.png")
pix_buffer = image.get_pixbuf()
pix_buffer.fill(0xA32432FF)
image.set_from_pixbuf(pix_buffer)
That doesn't work correctly: it fills the whole image with a square of red color.
Another idea was modify_fg/modify_base, but only modify_bg has any effect, and it changes only the background (it doesn't change the white color).
I've been playing with this for the last few days, and it is not entirely easy to treat a pixbuf as an immediate representation of the pixels. One of the reasons is that GdkPixbuf determines a 'rowstride', which causes 'jumps' in the addressing of the image.
Until I can investigate more, the simplest solution I've found is to convert the pixbuf to a PIL.Image, do the operations there, and convert back to a pixbuf. These are the two functions that do the conversion:
# imports used by the converters
from PIL import Image
from gi.repository import GdkPixbuf, GLib


def pixbuf2image(self, pxb):
    """ Convert GdkPixbuf.Pixbuf to PIL image """
    data = pxb.get_pixels()
    w = pxb.get_width()
    h = pxb.get_height()
    stride = pxb.get_rowstride()
    mode = "RGB"
    if pxb.get_has_alpha():
        mode = "RGBA"
    img = Image.frombytes(mode, (w, h), data, "raw", mode, stride)
    return img

def image2pixbuf(self, img):
    """ Convert PIL or Pillow image to GdkPixbuf.Pixbuf """
    data = img.tobytes()
    w, h = img.size
    data = GLib.Bytes.new(data)
    pxb = GdkPixbuf.Pixbuf.new_from_bytes(data, GdkPixbuf.Colorspace.RGB,
                                          False, 8, w, h, w * 3)
    return pxb
Luckily, new_from_bytes takes into account the rowstride automatically, and saves the continuous bytes in data in the correct manner in memory.
In PIL (Pillow for Python3) you can do many operations on the image, including pixel-by-pixel access. Do note that pixbuf is always using RGB(A) components, so you have to be careful with the conversions and operations!
In any case, the latter function shows how to convert a memory (bytes) array to a GdkPixbuf, if you want to construct the image directly.
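As a sketch of the pixel operations the question is after (my own helper, untested), here is a tint that multiplies the white circle by a target color while preserving alpha. Note it inlines an RGBA variant of image2pixbuf above, since that function assumes RGB:

def tint_pixbuf(self, pxb, rgb=(163, 36, 50)):
    """Multiply a (white) pixbuf by a target RGB color, preserving alpha."""
    img = self.pixbuf2image(pxb).convert("RGBA")
    r, g, b, a = img.split()
    # Scale each channel toward the target color (white -> target, black -> black)
    r = r.point(lambda v: v * rgb[0] // 255)
    g = g.point(lambda v: v * rgb[1] // 255)
    b = b.point(lambda v: v * rgb[2] // 255)
    tinted = Image.merge("RGBA", (r, g, b, a))
    data = GLib.Bytes.new(tinted.tobytes())
    w, h = tinted.size
    # RGBA variant of image2pixbuf: has_alpha=True, rowstride w * 4
    return GdkPixbuf.Pixbuf.new_from_bytes(data, GdkPixbuf.Colorspace.RGB,
                                           True, 8, w, h, w * 4)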
I find myself using ScatterLayout a lot (Scatter inside of Scatter and so on...). One feature that escapes me is applying some kind of alpha mask to a Scatter canvas. The idea would be to perform simple shape knockouts that correspond to a background image or other simple depth tricks.
I feel like this is doable with some basic OpenGL or maybe the kivy.graphics.stencil_instructions. I am not particularly comfortable with heavy OpenGL commands (I don't know how to debug them), but wrapping some simple commands into a Widget Class is just fine.
So here is what I am getting from other sources, but I want to take this beyond primitive solids (like Rectangle):
Updated with a texture based direction (code is untested!)
with self.canvas:
    # Hopefully I can build my "transparency mask" manually with a Texture
    texture = Texture.create(size=(64, 64))
    size = 64 * 64 * 3
    buf = [int(x * 255 / size) for x in range(size)]
    buf = b''.join(map(chr, buf))
    texture.blit_buffer(buf, colorfmt='rgb', bufferfmt='ubyte')

    StencilPush()
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
    # use mask
    StencilUse()
    """
    Will we get an image-based mask on all drawing commands placed here?
    """
    StencilPop()
MultiTexture Canvas is exactly the answer. Try using your own custom PNGs and you will see; make them as complex as you like and it works beautifully. I am going to include the important pieces of code that make it work and consider the question answered, although I may yet reopen it if I run into rendering-order problems (since I am not using a static PNG in my application).
The very simple shader definition you need to include as a docstring (it will be autoloaded; see the link above if I have stripped this too far):
fs_multitexture = '''
$HEADER$

// New uniform that will receive texture at index 1
uniform sampler2D texture1;

void main(void) {
    // multiply current color with both textures (0 and 1).
    // currently, both use exactly the same texture coordinates.
    gl_FragColor = frag_color * \
        texture2D(texture0, tex_coord0) * \
        texture2D(texture1, tex_coord0);
}
'''
And a bare minimum Widget class that demonstrates the texture filtering process that creates this simple magic:
class MultitextureWidget(Widget):

    def __init__(self, **kwargs):
        self.canvas = RenderContext()
        self.canvas.shader.fs = fs_multitexture
        with self.canvas:
            Color(1, 1, 1)
            BindTexture(source='mtexture2.png', index=1)
            Rectangle(size=(150, 150), source='mtexture1.png', pos=(500, 200))
        self.canvas['texture1'] = 1
        super(MultitextureWidget, self).__init__(**kwargs)
        Clock.schedule_interval(self.update_glsl, 0)

    def update_glsl(self, *largs):
        self.canvas['projection_mat'] = Window.render_context['projection_mat']
        self.canvas['modelview_mat'] = Window.render_context['modelview_mat']
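A minimal harness to run it (my own sketch; it assumes mtexture1.png and mtexture2.png exist next to the script, and collects the imports the class needs):

from kivy.app import App
from kivy.clock import Clock
from kivy.core.window import Window
from kivy.graphics import BindTexture, Color, Rectangle, RenderContext
from kivy.uix.widget import Widget


class MultitextureApp(App):
    def build(self):
        return MultitextureWidget()


if __name__ == '__main__':
    MultitextureApp().run()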
Here's the scenario, I want to create a set of random, small jpg's - anywhere between 50 bytes and 8k in size - the actual visual content of the jpeg is irrelevant as long as they're valid. I need to generate a thousand or so, and they all have to be unique - even if they're only different by a single pixel. Can I just write a jpeg header/footer and some random bytes in there? I'm not able to use existing photos or sets of photos from the web.
The second issue is that the set of images has to be different for each run of the program.
I'd prefer to do this in python, as the wrapping scripts are in Python.
I've looked for python code to generate jpg's from scratch, and didn't find anything, so pointers to libraries are just as good.
If the images can be just random noise, you could generate arrays using numpy.random and save them using PIL's Image.save.
This example might be expanded, including ways to avoid a (very unlikely) repetition of patterns:
import numpy
from PIL import Image

for n in range(10):
    a = numpy.random.rand(30, 30, 3) * 255
    im_out = Image.fromarray(a.astype('uint8')).convert('RGB')
    im_out.save('out%03d.jpg' % n)  # zero-padded filenames: out000.jpg, out001.jpg, ...
These conditions must be met in order to get JPEG images:

- The array needs to be shaped (m, n, 3): three colors, R, G, and B;
- Each element (each color of each pixel) has to be a byte integer (uint8, an unsigned 8-bit integer), ranging from 0 to 255.

Additionally, something other than pure randomness might be used to generate the images in case you don't want pure noise, as in the sketch below.
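For instance, here is a sketch (my variation on the loop above) that overlays per-run random noise on a smooth gradient, so the output is not pure noise but each run still produces unique files:

import numpy
from PIL import Image

x = numpy.linspace(0, 255, 200)
gradient = numpy.tile(x, (200, 1))  # smooth horizontal ramp

for n in range(10):
    noise = numpy.random.rand(200, 200) * 64  # fresh randomness keeps files unique
    a = numpy.clip(gradient + noise, 0, 255).astype('uint8')
    im_out = Image.fromarray(numpy.dstack([a, a, a]))  # gray ramp as RGB
    im_out.save('grad%03d.jpg' % n)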
If you do not care about the content of the file, you can create a valid JPEG using Pillow (PIL.Image.new [0]) this way:
from PIL import Image
width = height = 128
valid_solid_color_jpeg = Image.new(mode='RGB', size=(width, height), color='red')
valid_solid_color_jpeg.save('red_image.jpg')
[0] https://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.new
EDIT: I thought the OP wanted to generate valid images and didn't care about their content (that's why I suggested solid-color images). Here's a function that generates valid images with random pixels and, as a bonus, writes a random string onto the generated image. The only dependency is Pillow; everything else is pure Python.
import random
import uuid

from PIL import Image, ImageDraw


def generate_random_image(width=128, height=128):
    rand_pixels = [random.randint(0, 255) for _ in range(width * height * 3)]
    rand_pixels_as_bytes = bytes(rand_pixels)
    text_and_filename = str(uuid.uuid4())

    random_image = Image.frombytes('RGB', (width, height), rand_pixels_as_bytes)

    draw_image = ImageDraw.Draw(random_image)
    draw_image.text(xy=(0, 0), text=text_and_filename, fill=(255, 255, 255))

    random_image.save("{file_name}.jpg".format(file_name=text_and_filename))


# Generate 42 random images:
for _ in range(42):
    generate_random_image()
If you are looking for a way to do this without numpy, this worked for me (Python 3.6 for bytes; you still need Pillow):
import random as r
from PIL import Image
dat = bytes([r.randint(1,3) for x in range(4500000)])
i = Image.frombytes('1', (200,200), dat)
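To actually write it out as a JPEG, the step the question needs (my addition; Pillow converts mode '1' internally when saving):

i.save('random_bw.jpg')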
I want to convert an image stored in a NumPy array to a PySide QPixmap so I can display it (EDIT: in my PySide UI). I already found this tool: qimage2ndarray, but it only works for PyQt4. I tried to change it to get it working with PySide, but I would have to change the C part of the tool, and I have no experience with C. How can I do this, or are there any alternatives?
One alternative is to just use PIL library.
>>> import numpy as np
>>> import Image
>>> im = Image.fromarray(np.random.randint(0,256,size=(100,100,3)).astype(np.uint8))
>>> im.show()
You can look at the QImage constructor at http://www.pyside.org/docs/pyside/PySide/QtGui/QImage.html.
It looks like you should be able to use a numpy array directly in the constructor:
class PySide.QtGui.QImage(data, width, height, format)
where the format argument is one of these: http://www.pyside.org/docs/pyside/PySide/QtGui/QImage.html#PySide.QtGui.PySide.QtGui.QImage.Format.
So, for example you could do something like:
>>> a = np.random.randint(0,256,size=(100,100,3)).astype(np.uint32)
>>> b = (255 << 24 | a[:,:,0] << 16 | a[:,:,1] << 8 | a[:,:,2]).flatten() # pack RGB values
>>> im = PySide.QtGui.QImage(b, 100, 100, PySide.QtGui.QImage.Format_RGB32)
I don't have PySide installed so I haven't tested this. Chances are it won't work as is, but it might guide you in the right direction.
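One pitfall worth noting here (my addition, consistent with the buffer-lifetime warning in the next answer): the QImage constructor does not copy the buffer, so b must stay alive at least until the image is converted to a pixmap, which does copy:

pixmap = PySide.QtGui.QPixmap.fromImage(im)  # copies the pixels; b may be freed after this
label = PySide.QtGui.QLabel()
label.setPixmap(pixmap)
label.show()  # assumes a running QApplication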
If you create the data yourself, using numpy for example, I think the fastest method is to directly access a QImage. You can create a ndarray from the buffer object QImage.bits(), do some work using the numpy methods and create a QPixmap from QImage when you are done. You can also read or modify existing QImages that way.
import numpy as np
from PySide.QtGui import QImage
img = QImage(30, 30, QImage.Format_RGB32)
imgarr = np.ndarray(shape=(30,30), dtype=np.uint32, buffer=img.bits())
# qt write, numpy read
img.setPixel(0, 0, 5)
print "%x" % imgarr[0,0]
# numpy write, qt read
imgarr[0,1] = 0xff000006
print "%x" % img.pixel(1,0)
Be sure that the array does not outlive the image object. If you want, you can use a more sophisticated dtype, like a record array, for individual access to the alpha, red, green and blue bytes (beware of endianness though).
In case there is no efficient way to calculate the pixel values using numpy, you can also use scipy.weave to inline some C/C++ code that operates on the array img.bits() points to.
If you already have an image in ARGB format, creating the QImage from data as suggested before is probably easier.
In addition to @user545424's answer about using PIL, if you didn't want to depend on PIL, you could manually construct your QImage directly from your np array:
width = 100
height = 100
data = np.random.randint(0, 256, size=(width, height, 3)).astype(np.uint8)

img = QtGui.QImage(width, height, QtGui.QImage.Format_RGB32)
for x in xrange(width):
    for y in xrange(height):
        img.setPixel(x, y, QtGui.QColor(*data[x][y]).rgb())

pix = QtGui.QPixmap.fromImage(img)
I'm sure, using PIL, there is a way to read the actual image data into a QImage, but I will let @user545424 address that part since it's from his answer. PIL comes with the ImageQt module, which is convenient for directly converting an Image -> QPixmap, but unfortunately that's a PyQt4 QPixmap, which doesn't help you.
If the answer of @user545424 does not work as expected (you see artifacts in the image), then I would suggest you change the format parameter to PySide.QtGui.QImage.Format_ARGB32:
a = np.random.randint(0,256,size=(100,100,3)).astype(np.uint32)
b = (255 << 24 | a[:,:,0] << 16 | a[:,:,1] << 8 | a[:,:,2]).flatten() # pack RGB values
im = PySide.QtGui.QImage(b, 100, 100, PySide.QtGui.QImage.Format_ARGB32)