I'm trying to show a splash screen as soon as possible while my app is importing by doing something like this
import wx, wx.lib.agw.advancedsplash as AS, sys, os, matplotlib
matplotlib.use('WXAgg')
from threading import Thread
#some function definitions
class application(wx.Frame):
    # the UI code here

class mod(Thread):
    # blah blah

if __name__ == "__main__":
    app = wx.App(redirect=True, filename="logfile.txt")
    image = wx.Image("splash.png")
    image.ConvertAlphaToMask()
    bitmap = wx.BitmapFromImage(image)
    splash = AS.AdvancedSplash(None, bitmap=bitmap, timeout=4000,
                               agwStyle=AS.AS_TIMEOUT | AS.AS_CENTER_ON_SCREEN)
    import time
    import telnetlib
    import ownmodule
    import matplotlib.pyplot as plt
    from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
    from matplotlib.backends.backend_wx import NavigationToolbar2Wx as NavigationToolbar
    from matplotlib.figure import Figure
    frame = application(None, -1, "AdvancedSplash Test")
    app.MainLoop()
but all I get is the gray outline of the image I chose; then, once the UI loads, the actual image fills that gray shape for an instant and disappears when the timeout expires. It's as if the splash screen is shown but the image isn't drawn until the main frame is.
What should happen is that the image appears immediately, not just its shape. Does anybody know what is happening and how to solve it?
It would also be great not to rely on the timeout but to destroy the splash screen right before the main UI appears.
Since MainLoop is not running yet, there is no way for events to be delivered to the window, including the splash screen's EVT_PAINT event that draws the bitmap.
The wx.Yield function runs a temporary event loop, so adding a call to it after creating the splash screen will let it paint itself, and that may be sufficient for your needs. One downside is that if another paint event is needed while the rest of your startup code is running, it won't be handled unless you call wx.Yield again; and depending on what other kinds of events are triggered during those yields, you may have to worry about reentrancy problems.
Another approach would be to go ahead and jump into the MainLoop and then run the rest of your startup code in a CallAfter, or another thread, or whatever makes sense for your application.
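A minimal sketch combining both ideas, reusing the names from your snippet (the slow imports are represented by a placeholder inside do_startup, and a plain wx.Frame stands in for your application class):

import wx
import wx.lib.agw.advancedsplash as AS

if __name__ == "__main__":
    app = wx.App(redirect=True, filename="logfile.txt")
    image = wx.Image("splash.png")
    image.ConvertAlphaToMask()
    bitmap = wx.BitmapFromImage(image)  # wx.Bitmap(image) in newer wxPython
    # AS_NOTIMEOUT: the splash is destroyed explicitly instead of timing out
    splash = AS.AdvancedSplash(None, bitmap=bitmap,
                               agwStyle=AS.AS_NOTIMEOUT | AS.AS_CENTER_ON_SCREEN)
    wx.Yield()  # run a temporary event loop so the splash gets its EVT_PAINT now

    def do_startup():
        # the slow imports and the rest of the initialization go here
        import matplotlib.pyplot as plt
        frame = wx.Frame(None, -1, "AdvancedSplash Test")  # your application frame
        splash.Destroy()  # remove the splash right before the main UI appears
        frame.Show()

    wx.CallAfter(do_startup)  # runs as soon as MainLoop starts processing events
    app.MainLoop()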
I have the following piece of code
import skimage.color
import skimage.io
import skimage.viewer
import skimage.filters
fname = "/Users/harryhat/Desktop/Code/Experimental/Frames/frame00055.png"
# read image
image = skimage.io.imread(fname, as_gray=True)
# display the image
viewer = skimage.viewer.ImageViewer(image)
viewer.show()
However, when I run the code, firstly the command doesn't stop running, and secondly, when I interrupt it I have to restart the kernel to be able to type in the console again. I was wondering why this is the case, and whether there are other ways to do this. Any help would be much appreciated.
The skimage viewer is a Qt application. To run these in a notebook, you need to enable the Qt event loop integration by typing %gui qt in its own cell at the start of the notebook.
Just by the way, the scikit-image viewer is going to be deprecated. I recommend trying out https://napari.org as an alternative. (But the advice above still applies!)
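For example, a notebook session might look like this (just a sketch; the path is the one from your snippet):

# cell 1: enable the Qt event loop integration, once, at the start of the notebook
%gui qt

# cell 2: load and display the image
import skimage.io
import skimage.viewer

fname = "/Users/harryhat/Desktop/Code/Experimental/Frames/frame00055.png"
image = skimage.io.imread(fname, as_gray=True)
viewer = skimage.viewer.ImageViewer(image)
viewer.show()  # the viewer window is now handled by the integrated Qt event loop

# or, trying napari instead of the soon-to-be-deprecated viewer:
# import napari
# napari.view_image(image)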
Considering a very basic HelloWorld PyQt5 application like:
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel

app = QApplication(sys.argv)
window = QWidget()
window.setWindowTitle('PyQt5 app')
window.setGeometry(100, 100, 280, 80)
window.move(60, 15)
helloMsg = QLabel('<h1>Hello World!</h1>', parent=window)
helloMsg.move(60, 15)
window.show()
sys.exit(app.exec_())
It constructs a QApplication and a parent-less QWidget that becomes the main window, adds a QLabel to it, and shows it.
My question is: how does the QApplication know about the main window?
There is nothing in this code connecting the two.
Perhaps it is a naive question but just looking at this, it seems like magic.
How is the main window's paint event added to the application's event queue without saying so in the source code? How does the QApplication instance know what is going to be added further down in the source code?
tl;dr
There's no "magic" involved: sub-modules can access their "main" modules, and each module of Qt can know if a QApplication instance is running.
Long version
I think that's an interesting question, especially for those who are not that into low-level programming. For instance, I've always taken the QApplication as some sort of "Cartesian" assumption: «it exists».
As a premise, I'm not going to give you a very technical and low-level explanation: I don't have enough skills to do so (and I really welcome any other answer or edit to this), but I'm assuming that's not what you're looking for.
[Almost] technically speaking, you have to remember that Qt - and PyQt along with it - is an environment (the exact term is framework). As such, each one of its sub-elements (classes, and eventually instances of them) "knows" about that environment.
QApplication (and its base classes QGuiApplication and QCoreApplication) is a class that is internally accessible from any "sub" Qt module.
It's something like the built-in types (str, int, bool, etc.) that are accessible to any module. For example, os.path is a Python module that you can import on its own, but it knows what the main os module is, and each function of os.path actually uses some part of that main module.
Like most frameworks, Qt has what is called an event loop, which is usually run as soon as you call Q[*]Application.exec(). An event loop is something that generally blocks itself waiting for something to happen (an event) and eventually reacts to it.
Whenever a Qt class needs it, it internally calls the Q[*]Application.instance() method to ensure that an instance of the application exists, meaning that an event loop is active and running. Qt widgets, for example, need that to show the interface and interact with it: Qt tells the operating system that a new window has been created and therefore has to be drawn on the screen; the OS answers "ok, let's show it" by sending Qt an event requesting the drawing; Qt "sends" that event to the window, which finally draws itself and tells Qt how it's being painted; finally, Qt "tells" the OS what is going to be shown. At the same time, that window might need to know whether some keyboard or mouse event has been sent to it, and react in some way.
You can see this in the Qt sources: whenever a new QWidget is created, it ensures that a QApplication exists by calling QCoreApplication.instance().
The same happens for other Qt objects that require an application event loop running. This is the case of QTimer (that doesn't require a graphical interface, but has to interface with the system for correct timing) and QPixmap (which needs to know about the graphical environment to correctly show its image), but in some specific cases it also depends on the platform (for example, creation of a QIcon on MacOS requires a running event loop, while that's not necessary on Linux and Windows).
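You can see the same mechanism from Python too; here is a small sketch (PyQt5 names):

import sys
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtWidgets import QApplication, QWidget

print(QCoreApplication.instance())  # None: no application object exists yet

app = QApplication(sys.argv)
print(QCoreApplication.instance() is app)  # True: any module can now reach it

# this is (roughly) what QWidget checks internally; without an application,
# constructing a widget fails with "QWidget: Must construct a QApplication ..."
window = QWidget()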
So, finally, that's what (roughly) happens when you run your code:
# create an application instance; at this point the loop is not "running"
# (but that might be enough to let know most classes about the current system
# environment, such as available desktop geometries or cursor position)
app = QApplication(sys.argv)
# create a widget; an application exists and the widget can "begin" to create its
# interface using the information provided by it, like the system default font
# (it's actually a bit more complicated due to cross-platform issues, but let's
# ignore those things now)
window = QWidget()
window.setWindowTitle('PyQt5 app')
window.setGeometry(100, 100, 280, 80)
window.move(60, 15)
helloMsg = QLabel('<h1>Hello World!</h1>', parent=window)
helloMsg.move(60, 15)
# "ask Qt to prepare" the window that is going to be shown; at this point the
# widget's window is not shown yet even if it's "flagged as shown" to Qt, meaning
# that "window.isVisible()" will return True even if it's not actually visible yet
window.show()
# start the event loop by running app.exec(); sys.exit will just "wait" for the
# application to return its value as soon as it actually exits, while in the
# meantime the "exec" function will run its loop almost as a "while True" cycle
# would do; at this point the loop will start telling the OS that a new window
# has to be mapped and wait from the system to tell what to do: it will probably
# "answer" that it's ok to show that window, then Qt will tell back the widget
# that it can go on by "polishing" (use the current style and app info to finally
# "fix" its size) and begin drawing itself, then Qt will give back those drawing
# information allowing the OS to actually "paint" it on the screen; then it will
# be probably waiting for some user (keyboard/mouse) interaction, but the event
# loop might also tell the OS that the window is willing to close itself (as a
# consequence of a QTimer calling "widget.close", for instance) which could
# possibly end with ending the whole event loop, which is the case of
# https://doc.qt.io/qt-5/qguiapplication.html#quitOnLastWindowClosed-prop
# which would also cause the application to, finally, return "0" to sys.exit()
sys.exit(app.exec_())
I'm writing a GUI using Python and PyQt to read data packets from a UDP socket, process them with OpenCV, and finally show them as real-time images using Qt. I created a UDP socket outside the while loop and use the sock.recvfrom method to read data packets inside it. Within the same while loop, I process the data, put it into OpenCV format, and use OpenCV's imshow() to show the real-time video for experimenting. Everything works smoothly, but when I try to show the video through a QLabel using QImage and QPixmap, things get bizarre. If the OpenCV imshow() call is present, the code works fine, with the QPixmap additionally shown in the QLabel on top of the OpenCV cv2.imshow() window. However, if I take out the OpenCV imshow(), the UI freezes and nothing is shown, leading to "python not responding". I haven't yet come up with a good reason why this is happening, and I also tried keeping/changing the cv2.waitKey() time without success. Any help would be appreciated.
import socket
import cv2
from PyQt4 import QtCore, QtGui, uic
while True:
    data, addr = self.sock.recvfrom(10240)
    # after some processing on data to get im_data ...
    self.im_data_color_resized = cv2.resize(im_data, (0, 0), interpolation=True)

    # using OpenCV to show the video (the entire code works with cv2.imshow but not without it)
    cv2.imshow('Real Time Image', self.im_data_color_resized)
    cv2.waitKey(10)

    # using QLabel to show the video
    qtimage = cv2.cvtColor(self.im_data_color_resized, cv2.COLOR_BGR2RGB)
    height, width, bpc = qtimage.shape
    bpl = bpc * width
    qimage = QtGui.QImage(qtimage.data, width, height, bpl, QtGui.QImage.Format_RGB888)
    self.imageViewer_label.setPixmap(QtGui.QPixmap.fromImage(qimage))
You need to flush the event queue so that your GUI can be updated. Add QtGui.QApplication.processEvents() after the setPixmap call.
It works with cv2.waitKey() because that call internally already processes the paint events, which allows the Qt GUI to be refreshed. But I recommend not relying on this hack; explicitly process the Qt events with processEvents.
You may also want to put this processing loop in its own thread to leave the GUI/Main thread responsive.
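For instance, assuming the rest of the loop stays as in your snippet, the end of the loop body would become:

    self.imageViewer_label.setPixmap(QtGui.QPixmap.fromImage(qimage))
    QtGui.QApplication.processEvents()  # let Qt handle the pending paint events so the label redraws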
I am running some code using PyQt4, and I would like to plot a figure using its data. But when I try to do that, it will report
QPixmap: Must construct a QGuiApplication before a QPixmap
Below is the code:
from PyQt4 import QtCore
import sys
import matplotlib.pyplot as plt
import numpy as np
def run():
    # some code omitted here since it is not relevant to this question
    return data1  # data1 is a list with 30 elements
app = QtCore.QCoreApplication(sys.argv)
client.finished.connect(app.quit)
QtCore.QTimer().singleShot(0,lambda:client.timed_range_stream(5000))
app.exec_()
fig = plt.figure()
ax1 = fig.add_subplot(111)
data2 = run()
datalen = np.linspace(0,10,len(data2))
ax1.plot(datalen,data2,lw = 2)
plt.show()
Since matplotlib is using PyQt4 as its backend, I am confused about why this error happens. It should create a QGuiApplication automatically. I mean, whether or not I use PyQt4 beforehand, the code below app.exec_() should create a QGuiApplication automatically. Please point out if I am wrong.
Really appreciate your help! Please give me some advice.
The complaint from PyQt is that you are not running a GUI event loop. app.exec_() does start an event loop, but what kind depends on what app is; in your case it's a QCoreApplication object. How do you expect it to start a GUI event loop? It's like buying a saucepan and expecting it to cook pizza.
matplotlib can use PyQt as a backend, but you can also use it in console-only applications. Hence PyQt cannot tell whether you want a GUI or a console app.
QCoreApplication is used when you are writing a console-based application: fewer events and processes to manage. If you want to show a window, even a simple one, it takes much more work, and the beast that handles that extra work is QGuiApplication.
Now to the Qt version. You are using PyQt4, but the complaint says you need to create a QGuiApplication. However, there is no QGuiApplication or any reference to it in Qt4/PyQt4. This leads me to believe that your matplotlib copy might be using PyQt5, or a PyQt5 dependency comes in from some obscure source; I'm not sure. Check the details of the PyQt version being used.
If you are using PyQt4 add from PyQt4 import QtGui in the beginning.
Then change the app = QtCore.QCoreApplication(...) to app = QtGui.QApplication(...).
In case of PyQt5 add from PyQt5 import QtGui, QtWidgets in the beginning.
Then change the app = QtCore.QCoreApplication(...) to app = QtWidgets.QApplication(...).
That'll solve your problem.
PS: Remember, you cannot mix PyQt4 and PyQt5.
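For reference, with PyQt4 the top of the script from the question would look roughly like this (only the application object changes; everything from client setup through app.exec_() and the plotting stays the same):

from PyQt4 import QtCore, QtGui
import sys
import matplotlib.pyplot as plt
import numpy as np

# a GUI-capable application instead of QtCore.QCoreApplication
app = QtGui.QApplication(sys.argv)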
I wanted to write a program in Python for Windows that acts as a clicker: depending on which key the user presses, a click is made at a known location on the screen. It is meant for automated option selection from a list on a webpage. I have the clicking part working, but I want to be able to make several clicks during execution, as if there were a quiz with multiple lists one after another.
One option is to make a while loop with getch() from msvcrt. The thing is that after a click outside the cmd window, it is no longer the selected window; the window containing the destination point is. Therefore, the script stops being active and the user cannot choose another option. A workaround is to click the cmd window to return focus to it and be able to make more clicks. To solve this, it would be necessary to create a service or, according to #Sanju, a thread.
The other option is to use a keylogger such as PyHook, which seems like the way to go. However, the problem is that the window I want to use it in, a webpage in Flash or another animation engine, causes an error that some users have run into with this keylogger, for example in Skype, and which is described here. In my case it also happens with this webpage, either when the click is made on the window itself or when the key is pressed with the window selected.
My base code is presented below; click(...) would normally take the coordinates as arguments, but they are omitted for simplicity. In this case, 0 ends the program and there are three options, chosen with the numbers 1-3.
import msvcrt, win32api, win32con

def click(x, y):
    win32api.SetCursorPos((x, y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)

key = 0
while key != b'0':
    key = msvcrt.getch()
    if key == b'1':
        click(...)
    elif key == b'2':
        click(...)
    elif key == b'3':
        click(...)
The attempts below try to implement #Sanju's suggestion, first with the whole while loop inside the thread and then with a queue; neither works as expected...
import threading, msvcrt, win32api, win32con

def MyThread():
    key = 0
    while key != b'0':
        key = msvcrt.getch()
        if key == b'1':
            ...

def click(x, y):
    ...

threading.Thread(target=MyThread, args=[]).start()
And the second attempt, using the queue:
import queue, threading, msvcrt, win32api, win32con

def MyThread(key):
    while key.get() != b'0':
        key.put(msvcrt.getch())
        if key.get() == b'1':
            ...

def click(x, y):
    ...

key = queue.Queue()
key.put(0)
threading.Thread(target=MyThread, args=[key]).start()
The other attempt uses PyHook, but it's still facing the aforementioned issue.
import pyHook, pythoncom, win32api, win32con

def OnKeyboardEvent(event):
    if event.Key == 'Numpad1':
        ...

def click(x, y):
    ...

hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
pythoncom.PumpMessages()
All you need here is to move your click part to a thread and share the user input using a shareable object such as a queue. It may sound like overkill, but that's the way to keep your tasks in the background.
And by the way, there are many GUI application frameworks available in Python, like tkinter and wxPython, which can ease your objective.
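A minimal sketch of that layout: the main loop keeps reading keys from the console, while a background thread takes them from a queue and performs the clicks (the coordinates in targets are made-up placeholders; put your real screen positions there):

import msvcrt, queue, threading
import win32api, win32con

def click(x, y):
    win32api.SetCursorPos((x, y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)

# hypothetical key -> coordinate mapping, to be replaced with the real positions
targets = {b'1': (100, 200), b'2': (100, 250), b'3': (100, 300)}

def worker(q):
    while True:
        key = q.get()        # blocks until the main loop hands over a key
        if key == b'0':      # sentinel: stop the thread
            break
        if key in targets:
            click(*targets[key])

q = queue.Queue()
threading.Thread(target=worker, args=(q,), daemon=True).start()

key = None
while key != b'0':
    key = msvcrt.getch()     # keep reading keys in the main (console) thread
    q.put(key)               # pass every key, including the final b'0', to the worker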