PyQt: How to use QPixmap.loadFromData?

This link describes a way to set a QPixmap without using QImage.
How can I reproduce this in Python?
What I have is a buffer which represents an image.
What I need is to call something like
label.setPixmap(QPixmap.loadFromData(buffer))
to display the image.
But before that, a format header has to be inserted; according to the link, for a 300x300 grayscale image the header is something like "P6 300 300 255". I have no idea how to do that.
Here is the whole script:
import numpy as np
import cv2
import sys
from PySide.QtGui import QApplication, QLabel, QPixmap

if QApplication.instance() is not None:
    a = QApplication.instance()
else:
    a = QApplication(sys.argv)

lbl = QLabel()
header = bytearray(b"P6 20 20 255")
cvImg = np.zeros((20, 20), dtype=np.uint8)
cv2.circle(cvImg, (9, 9), 9, 49, -1)
buff = cvImg.data
ppm = header + buff
pixmap = QPixmap()
lbl.setPixmap(pixmap.loadFromData(ppm, 40, format="PPM"))
sys.exit(a.exec_())
It says:
TypeError: 'PySide.QtGui.QPixmap.loadFromData' called with wrong argument types:
PySide.QtGui.QPixmap.loadFromData(bytearray, int, str)
Supported signatures:
PySide.QtGui.QPixmap.loadFromData(PySide.QtCore.QByteArray, str = None, PySide.QtCore.Qt.ImageConversionFlags = Qt.AutoColor)
PySide.QtGui.QPixmap.loadFromData(PySide.QtCore.uchar, unsigned int, str = None, PySide.QtCore.Qt.ImageConversionFlags = Qt.AutoColor)
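For reference, here is a minimal sketch (not from the original post) of how the call could look, based on the signatures in the error above. It relies on two points: loadFromData() returns a bool and fills the pixmap in place, so the pixmap object itself is what gets passed to setPixmap(); and it assumes Qt's image plugins accept a binary PGM header ("P5 <width> <height> <maxval>") for 8-bit grayscale data, with the format name "PGM" passed instead of "PPM".

import sys
import numpy as np
import cv2
from PySide.QtCore import QByteArray
from PySide.QtGui import QApplication, QLabel, QPixmap

app = QApplication.instance() or QApplication(sys.argv)

cvImg = np.zeros((20, 20), dtype=np.uint8)
cv2.circle(cvImg, (9, 9), 9, 49, -1)

# "P5 <width> <height> <maxval>\n" followed by the raw pixel bytes
data = QByteArray(b"P5 20 20 255\n" + cvImg.tobytes())

pixmap = QPixmap()
if pixmap.loadFromData(data, "PGM"):  # returns True on success
    lbl = QLabel()
    lbl.setPixmap(pixmap)
    lbl.show()

sys.exit(app.exec_())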

Related

How to display images in PySimpleGUI from an API URL

I want to read an image from an API, but I am getting an error: TypeError: 'module' object is not callable. I am trying to make a random meme generator.
import PySimpleGUI as sg
from PIL import Image
import requests, json

cutURL = 'https://meme-api-python.herokuapp.com/gimme'
imageURL = json.loads(requests.get(cutURL).content)["url"]
img = Image(requests.get(imageURL).content)
img_box = sg.Image(img)
window = sg.Window('', [[img_box]])
while True:
    event, values = window.read()
    if event is None:
        break
window.close()
Here is the response of the API:
postLink "https://redd.it/yyjl2e"
subreddit "dankmemes"
title "Everything's fixed"
url "https://i.redd.it/put9bi0vjp0a1.jpg"
I tried using the PySimpleGUI module. Is there an alternative way to make a random meme generator?
PIL.Image is a module; you cannot call it as Image(...). You probably need Image.open(...). At the same time, tkinter/PySimpleGUI cannot handle JPG images, so conversion to a PNG image is required.
from io import BytesIO
import PySimpleGUI as sg
from PIL import Image
import requests, json

def image_to_data(im):
    """
    Image object to bytes object.
    : Parameters
        im - Image object
    : Return
        bytes object.
    """
    with BytesIO() as output:
        im.save(output, format="PNG")
        data = output.getvalue()
    return data

cutURL = 'https://meme-api-python.herokuapp.com/gimme'
imageURL = json.loads(requests.get(cutURL).content)["url"]
data = requests.get(imageURL).content
stream = BytesIO(data)
img = Image.open(stream)
img_box = sg.Image(image_to_data(img))
window = sg.Window('', [[img_box]], finalize=True)

# Check if the size of the window is greater than the screen
w1, h1 = window.size
w2, h2 = sg.Window.get_screen_size()
if w1 > w2 or h1 > h2:
    window.move(0, 0)

while True:
    event, values = window.read()
    if event is None:
        break

window.close()
You need to use Image.open(...) - Image is a module, not a class. You can find a tutorial in the official PIL documentation.
You may need to put the response content in a BytesIO object before you can use Image.open on it. BytesIO is a file-like object that exists only in memory. Most functions like Image.open that expect a file-like object will also accept BytesIO and StringIO (the text equivalent) objects.
Example:
from io import BytesIO

def get_image(url):
    data = BytesIO(requests.get(url).content)
    return Image.open(data)
I would do it with tk; it's simple and fast:
import tkinter as tk
from tkinter import Label
from io import BytesIO
import requests
from PIL import Image, ImageTk

def window():
    root = tk.Tk()
    panel = Label(root)
    panel.pack()
    img = None

    def updata():
        # the API returns JSON, so fetch the meme's "url" field first
        meme = requests.get("https://meme-api-python.herokuapp.com/gimme").json()
        response = requests.get(meme["url"])
        img = Image.open(BytesIO(response.content))
        img = img.resize((640, 480), Image.ANTIALIAS)  # custom resolution
        img = ImageTk.PhotoImage(img)
        panel.config(image=img)
        panel.image = img
        root.update_idletasks()
        root.after(30, updata)

    updata()
    root.mainloop()

Reading QAudioProbe buffer

The Qt documentation (https://doc.qt.io/qtforpython-5/PySide2/QtMultimedia/QAudioBuffer.html) says that we should read the buffer from QAudioProbe like this:
// With a 16bit sample buffer:
quint16 *data = buffer->data<quint16>(); // May cause deep copy
This is C++, but I need to write this in Python.
I am not sure how to use the Qt quint16 data type or even how to import it.
Here is my full code:
#!/bin/python3
from PySide2.QtMultimedia import QMediaPlayer, QMediaContent, QAudioProbe, QAudioBuffer
from PySide2.QtCore import QUrl, QCoreApplication, QObject, Signal, Slot
import sys

def main():
    app = QCoreApplication()
    player = QMediaPlayer()
    url = QUrl.fromLocalFile("/home/ubuntu/sound.wav")
    content = QMediaContent(url)
    player.setMedia(content)
    player.setVolume(50)
    probe = QAudioProbe()
    probe.setSource(player)
    probe.audioBufferProbed.connect(processProbe)
    player.play()

def processProbe(probe):
    print(probe.data())

if __name__ == "__main__":
    main()
Output:
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
...
I ran into the same issue with a fresh PySide2 5.13.2 environment, and running print(probe.data().toBytes()) returned chunks of size 0 which I knew couldn't be the case because other built-in functionality was accessing the data.
I hate this hack as much as anyone else, but if you want to test things it is possible to access the buffer contents this way (please do not use this in production code):
1. Find out about the datatype, endianness, etc. of your buffer via format(), and infer the proper C type that you'll need (e.g. signed int 16).
2. Extract the printed address from the VoidPtr printout and convert it to an integer.
3. Create a numpy array by reading at the given address, with the given type, and by the given number of frames.
Code:
First of all, somewhere in your app, you'll be connecting your QAudioProbe to your source via setSource, and then the audioBufferProbed signal to a method e.g.:
self.audio_probe.audioBufferProbed.connect(self.on_audio_probed)
Then, the following on_audio_probed functionality will fetch the numpy array and print its norm, which should increase in presence of sound:
import numpy as np
import ctypes

def get_buffer_info(buf):
    """Inspect a QAudioBuffer and return the matching numpy/ctypes types
    plus size and format information."""
    num_bytes = buf.byteCount()
    num_frames = buf.frameCount()
    #
    fmt = buf.format()
    sample_type = fmt.sampleType()  # float, int, uint
    bytes_per_frame = fmt.bytesPerFrame()
    sample_rate = fmt.sampleRate()
    #
    if sample_type == fmt.Float and bytes_per_frame == 4:
        dtype = np.float32
        ctype = ctypes.c_float
    elif sample_type == fmt.SignedInt and bytes_per_frame == 2:
        dtype = np.int16
        ctype = ctypes.c_int16
    elif sample_type == fmt.UnsignedInt and bytes_per_frame == 2:
        dtype = np.uint16
        ctype = ctypes.c_uint16
    #
    return dtype, ctype, num_bytes, num_frames, bytes_per_frame, sample_rate

def on_audio_probed(audio_buffer):
    """Read the probed buffer directly from its memory address and print
    the norm of the resulting numpy array."""
    cdata = audio_buffer.constData()
    (dtype, ctype, num_bytes, num_frames,
     bytes_per_frame, sample_rate) = get_buffer_info(audio_buffer)
    # parse the address out of the VoidPtr repr and read num_frames samples
    pointer_addr_str = str(cdata).split("Address ")[1].split(", Size")[0]
    pointer_addr = int(pointer_addr_str, 16)
    arr = np.array((ctype * num_frames).from_address(pointer_addr))
    print(np.linalg.norm(arr))  # should increase in presence of sound
I just tested it with a QAudioRecorder using 16-bit unsigned wavs, and it worked "fine" (the audio looked and sounded good). Again, this is basically meme code, so anything beyond showing your fancy audio-buffered app to your cousins will be extremely risky; do not use it in serious code. But in any case, let me know if any other workarounds worked for you, or if this also worked in a different context! Hopefully, if the devs see that people are actually using this approach, they'll fix the issue much sooner :)
Cheers!
Andres

Cannot get tiff image resolution

I'm trying to read 16 bit .tif microscope images from
https://data.broadinstitute.org/bbbc/BBBC006/
and analyze them using
https://github.com/sakoho81/pyimagequalityranking/tree/master/pyimq
however I got an error in the part of the code that loads the tif image.
It uses the PIL tiffimageplugin:
https://pillow.readthedocs.io/en/3.0.0/_modules/PIL/TiffImagePlugin.html
and when it tries to get the resolution tag, it gives me a KeyError.
Any ideas why? Advice? Fixes?
Thanks!
import os
import numpy
import scipy.ndimage.interpolation as itp
import argparse
from PIL import Image
from PIL.TiffImagePlugin import X_RESOLUTION, Y_RESOLUTION
from matplotlib import pyplot as plt
from math import log10, ceil, floor

def get_image_from_imagej_tiff(cls, path):
    """
    A class method for opening a ImageJ tiff file. Using this method
    will enable the use of correct pixel size during analysis.
    :param path: Path to an image
    :return: An object of the MyImage class
    """
    assert os.path.isfile(path)
    assert path.endswith(('.tif', '.tiff'))

    print(path)  # my own little debug thingamajig

    image = Image.open(path)
    xresolution = image.tag.tags[X_RESOLUTION][0][0]  # line that errors out
    yresolution = image.tag.tags[Y_RESOLUTION][0][0]

    #data = utils.rescale_to_min_max(numpy.array(image), 0, 255)
    if data.shape[0] == 1:
        data = data[0]

    return cls(images=data, spacing=[1.0/xresolution, 1.0/yresolution])
Terminal input:
pyimq.main --mode=directory --mode=analyze --mode=plot --working-directory=/home/myufa/predxion/BBBC/a_1_s1 --normalize-power --result=fstd --imagej
Output:
Mode option is ['directory', 'analyze', 'plot']
/home/myufa/predxion/BBBC/a_1_s1/z0_a_1_s1_w1.tif
Traceback (most recent call last):
File "/home/myufa/.local/bin/pyimq.main", line 11, in <module>
load_entry_point('PyImageQualityRanking==0.1', 'console_scripts', 'pyimq.main')()
File "/home/myufa/anaconda3/lib/python3.7/site-packages/PyImageQualityRanking-0.1-py3.7.egg/pyimq/bin/main.py", line 148, in main
File "/home/myufa/anaconda3/lib/python3.7/site-packages/PyImageQualityRanking-0.1-py3.7.egg/pyimq/myimage.py", line 81, in get_image_from_imagej_tiff
KeyError: 282
Edit: Here's what I got when I tried some suggestions/indexed the tag, which makes even less sense
I guess the TIFF in question isn't following the normal image conventions. The [XY]Resolution tags, numbers 282 and 283, are mandatory in a whole bunch of specifications, but nonetheless may not be present in all applications' output. I have some TIFFs (DNG format) that wouldn't load with PIL (Pillow) at all; that prompted me to write a script to dump the primary tag structure:
# TIFF structure program
import struct
import PIL.TiffTags

class DE:
    def __init__(self, tiff):
        self.tiff = tiff
        (self.tag, self.type, self.count, self.valueoroffset) = struct.unpack(
            tiff.byteorder+b'HHI4s', self.tiff.file.read(12))

    # TODO: support reading the value
    def getstring(self):
        offset = struct.unpack(self.tiff.byteorder+b'I', self.valueoroffset)[0]
        self.tiff.file.seek(offset)
        return self.tiff.file.read(self.count)

class IFD:
    def __init__(self, tiff):
        self.tiff = tiff
        self.offset = tiff.file.tell()
        (self.len,) = struct.unpack(self.tiff.byteorder+b'H', self.tiff.file.read(2))

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        if index >= self.len or index < 0:
            raise IndexError()
        self.tiff.file.seek(self.offset+2+12*index)
        return DE(self.tiff)

    def nextoffset(self):
        self.tiff.file.seek(self.offset+2+12*self.len)
        (offset,) = struct.unpack(self.tiff.byteorder+b'I', self.tiff.file.read(4))
        return (offset if offset != 0 else None)

class TIFF:
    def __init__(self, file):
        self.file = file
        header = self.file.read(8)
        self.byteorder = {b'II': b'<', b'MM': b'>'}[header[:2]]
        (magic, self.ifdoffset) = struct.unpack(self.byteorder+b'HI', header[2:])
        assert magic == 42

    def __iter__(self):
        offset = self.ifdoffset
        while offset:
            self.file.seek(offset)
            ifd = IFD(self)
            yield ifd
            offset = ifd.nextoffset()

def main():
    tifffile = open('c:/users/yann/pictures/img.tiff', 'rb')
    tiff = TIFF(tifffile)
    for ifd in tiff:
        print(f'IFD at {ifd.offset}, {ifd.len} entries')
        for entry in ifd:
            print(f'  tag={entry.tag} {PIL.TiffTags.lookup(entry.tag).name}')

if __name__ == '__main__':
    main()
A quicker way, since you at least have the image object, might be:
import pprint, PIL.TiffTags
pprint.pprint(list(map(PIL.TiffTags.lookup, img.tag)))
One of these might give you a clue what the actual contents of the TIFF are. Since PIL could load it, it probably has pixel counts but not physical resolution.
Figured out a quick fix, writing
image.tag[X_RESOLUTION]
before
xresolution = image.tag.tags[X_RESOLUTION][0][0]
made the info available in the tag.tags dictionary for some reason. Can anyone chime in and explain why this might be? I would love to learn and make sure I didn't mess it up.
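For what it's worth, here is a hedged sketch (not from the thread) of reading the tags defensively, so a missing 282/283 entry falls back to a default instead of raising a KeyError. The exact value layout (a tuple of (numerator, denominator) rationals, as the indexing in the question suggests) can differ between PIL/Pillow versions, so treat this as an assumption to verify:

from PIL import Image
from PIL.TiffImagePlugin import X_RESOLUTION, Y_RESOLUTION

def get_resolution(path, default=1.0):
    """Return (xres, yres), or (default, default) if the TIFF has no
    XResolution/YResolution entries (tags 282/283)."""
    image = Image.open(path)
    tag = image.tag  # indexing this also seems to populate tag.tags, as noted above
    if X_RESOLUTION in tag and Y_RESOLUTION in tag:
        xnum, xden = tag[X_RESOLUTION][0]  # first rational: (numerator, denominator)
        ynum, yden = tag[Y_RESOLUTION][0]
        return float(xnum) / xden, float(ynum) / yden
    return default, default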

Python: ValueError: could not convert string to float in real-time data

I want to build an ECG. The filter is built on UDOO, and I want to plot the signal in Python. However, I keep getting this error while I run my code:
ValueError: could not convert string to float.
import serial
import sys
import time
from pyqtgraph.Qt import QtGui, QtCore
import numpy as np
import pyqtgraph as pg

# constants
BAUDE_RATE = 9600
ARDUINO_MAX_INT = 2 ** 10
ARDUINO_MAX_VOLTAGE = 3.3
WINDOW_SIZE = 30
MAX_DATA_SIZE = 1024

# declare the Window
app = QtGui.QApplication([])
win = pg.GraphicsWindow(title="Arduino Analog Plotter")
win.resize(1000, 600)

# initialize plots
raw_plot = win.addPlot(title="Raw Pin Data")
raw_curve = raw_plot.plot(pen='y')
raw_plot.addLegend()
raw_plot.showGrid(True, True)
raw_plot.setYRange(0, 1200)
raw_plot.setXRange(0, 1024)

# disable auto size of the x-y axis
raw_plot.enableAutoRange('xy', False)
raw_data = np.zeros(1024)

# open serial
ser = serial.Serial('COM10', 115200, timeout=1)
line = pg.InfiniteLine(pos=1024, angle=0, pen=(24, 215, 248))
raw_plot.addItem(line)
ser.flushInput()

def gettemp(ser):
    ser.write('t')
    ser.flush()
    return ser.readline().strip('\r').strip('\n').split(' ').pop(7)

def update():
    global raw_data
    # open serial port
    raw_capture = []
    for x in range(WINDOW_SIZE):
        sensoroutput = gettemp()
        r = sensoroutput
        ser.readline().strip('\r').strip('\n').split(' ').pop(7)
        raw_capture.append(float(r).pop(7))
    raw_data = np.concatenate([raw_data, raw_capture])
    # remove first bin to make room for new bin
    if len(raw_data) > MAX_DATA_SIZE:
        raw_data = raw_data[WINDOW_SIZE:]
    # plot data
    raw_curve.setData(raw_data)

def savecounter():
    ser.close()

import atexit
atexit.register(savecounter)

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(0)

## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
    import sys
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
Does anybody know how to fix this?
Your sensoroutput seems to be a string.
You cannot convert it directly with float():
raw_capture.append(float(r).pop(7))
Can you post the output of sensoroutput?
I'm just taking a wild stab here, but usually if you have both \r and \n for line endings the \r comes first and the \n comes second. The way you're stripping off those characters, the \r will remain because you try to strip it first, before the \n has been removed; float() will fail on the non-numeric character in the string. Try this instead, it will remove both end-of-line characters at the same time:
ser.readline().strip('\r\n').split(' ').pop(7)
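As a hedged illustration only (not part of the answer above), a helper along these lines would avoid the problem entirely: split() with no argument discards surrounding whitespace including \r\n, and the decode covers the case where pyserial returns bytes under Python 3. Like the original gettemp, it assumes the 8th space-separated field is the numeric sample:

def read_sample(ser):
    line = ser.readline()
    if isinstance(line, bytes):  # pyserial returns bytes on Python 3
        line = line.decode('ascii', errors='ignore')
    fields = line.split()  # no-argument split also drops the trailing \r\n
    return float(fields[7])  # 8th field, as in the original code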

Cannot convert PIL thumbnails to PyQt4 icons

I have a problem when converting some QImages to thumbnails using PIL,
to be used in a list widget; check the image below.
Here is how the image should look:
Please note that I use a horizontal flow and the item text is an empty string.
One more thing: this only happens when I put more than one image.
for i in listOfImages:
    picture = Image.open(i)
    picture.thumbnail((50, 50), Image.ANTIALIAS)
    qimage = QtGui.QImage(ImageQt.ImageQt(picture))
    icon = QtGui.QIcon(QtGui.QPixmap.fromImage(qimage))
    item = QtGui.QListWidgetItem(str(path))
    item.setIcon(icon)
    self.listWidget.addItem(item)
Any idea what is going on, and why the images are being pixelated? Any better solutions?
EDIT: using
pix = QtGui.QPixmap(path)
pix = pix.scaled(50, 50, QtCore.Qt.KeepAspectRatio)
icon = QtGui.QIcon(pix)
is very problematic (it needs 10 seconds to run), while the code above needs 1 second.
Thanks.
from io import BytesIO
qimage = QtGui.QImage()
fp = BytesIO()
picture.save(fp, "BMP")
qimage.loadFromData(fp.getvalue(), "BMP")
icon ...
I had tried ImageQt, but the performance is not good.
I referenced http://doloopwhile.hatenablog.com/entry/20100305/1267782841
Because I use Python 3.3, cStringIO is replaced by BytesIO.
I've not used PIL with PyQt. Have you tried using a QImageReader?
item = QListWidgetItem(image_path)
imageReader = QImageReader()
imageReader.setFileName(image_path)
size = imageReader.size()
size.scale(50, 50, Qt.KeepAspectRatio)
imageReader.setScaledSize(size)
image = imageReader.read()
pix = QPixmap.fromImage(image)
icon = QIcon(pix)
item.setIcon(icon)
self.listWidget.addItem(item)
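For context, a hedged sketch of the same idea dropped into the question's loop; listOfImages and self.listWidget are the names from the question, and QImageReader's size()/setScaledSize() let Qt scale during decoding, which is presumably why this is faster than loading the full pixmap and scaling it afterwards:

from PyQt4.QtCore import Qt
from PyQt4.QtGui import QIcon, QImageReader, QListWidgetItem, QPixmap

for image_path in listOfImages:
    reader = QImageReader(image_path)
    size = reader.size()                  # usually read from the header, no full decode
    size.scale(50, 50, Qt.KeepAspectRatio)
    reader.setScaledSize(size)            # decode directly at thumbnail size
    icon = QIcon(QPixmap.fromImage(reader.read()))
    item = QListWidgetItem("")            # empty item text, as in the question
    item.setIcon(icon)
    self.listWidget.addItem(item)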
