How to add functions to Qt buttons from .ui file? - python

I have just started using and discovering PyQt5 and Qt. I installed it with
pip install pyqt5. Then I opened the designer.exe that comes with pyqt-tools and made my design.
After that I converted it from .ui to .py using pyuic design.ui > dialog.py.
When I run app.py it runs fine and the ui is working fine. But now my problem is: How do I add functions that get called when a button is pressed?
Here is my app.py code:
import sys
from PyQt5.QtWidgets import QDialog, QApplication
from dialog import Ui_Form
class AppWindow(QDialog):
    def __init__(self):
        super().__init__()
        self.ui = Ui_Form()
        self.ui.setupUi(self)
        self.show()

app = QApplication(sys.argv)
w = AppWindow()
w.show()
sys.exit(app.exec_())

Your approach is actually correct (at least, one of the correct ones): the pyuic generated files should never be modified unless you really (REALLY) know what you're doing and why. Also, their behavior should never be mimicked, as there's no reason to do that.
As a (I'd say, mandatory) reference, you can read more about this topic on the official PyQt guidelines about using Designer.
Let me do an important premise about this topic.
What you're doing is known as the single inheritance approach, meaning that you're creating a subclass that only inherits from the QWidget subclass you're using (QDialog, in this case) and you're building the ui on top of it using what's referred to as the form class. The form class is a standard Python object type that is responsible for creating all child widgets on top of the main subclass instance when its setupUi function is called.
The result is that the subclassed instance will always be referred to as self within its scope, while all its children are available through the self.ui object.
A similar (and more common) approach is the multiple inheritance approach. In this way you inherit from both the QWidget subclass (QDialog) and the ui object. The UI result will be the same, but the widgets will be more directly available using self.objectName instead of self.ui.objectName.
Whether you use one or the other is a matter of choice; just remember that with the multiple inheritance approach setupUi will potentially overwrite any previously set instance attribute, and if you later create a new attribute with the same name as an existing object name, that object will no longer be (directly) accessible.
Finally, let me give you another suggestion: while not completely wrong, the link posted in the other answer not only gives a very misleading suggestion, but doesn't even explain why the previous post it refers to was wrong. The bottom line is that pyuic generated files should never be modified (nor should you try to mimic their behavior), for lots of reasons: not only will you need to merge your already modified code with the newly generated one whenever you change your ui (hoping you haven't overwritten it by running pyuic again), but it also leads to misconceptions about the whole object structure.
So, in your case (with the single inheritance approach), all objects created within Designer are actually accessible using self.ui.objectName, and that objectName is what is shown in the object inspector of Designer.
Let's assume that you created a UI with a single push button and you want to print something when that button is pressed. If you didn't do too many tests and modifications to your ui, that button's object name is probably "pushButton".
Then, your code will probably look like this:
import sys
from PyQt5.QtWidgets import QDialog, QApplication
from dialog import Ui_Form
class AppWindow(QDialog):
    def __init__(self):
        super().__init__()
        self.ui = Ui_Form()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.buttonClicked)

    def buttonClicked(self):
        print('button clicked!')

app = QApplication(sys.argv)
w = AppWindow()
w.show()
sys.exit(app.exec_())
Note that Qt object names are not unique. You could theoretically set the same object name for more than one QObject (or QWidget), but there's no use in that. Consider it from the Python perspective: each variable should have its own name; if you overwrite an already existing variable name, the previous one becomes inaccessible (if not garbage collected altogether).
Designer is smart enough (not that it requires much...) to prevent creating objects sharing the same object name, and that's why if you add another button, it will be named pushButton_2, pushButton_3, etc. PyQt takes advantage of that by creating unique instance attributes for all object names, whether you're using the pyuic file or uic.loadUi().
For the sake of completeness, here's how a multiple inheritance approach would look, while behaving as the example above:
import sys
from PyQt5.QtWidgets import QDialog, QApplication
from dialog import Ui_Form
class AppWindow(QDialog, Ui_Form):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.pushButton.clicked.connect(self.buttonClicked)

    def buttonClicked(self):
        print('button clicked!')

app = QApplication(sys.argv)
w = AppWindow()
w.show()
sys.exit(app.exec_())
In light of what was explained above, consider the attribute name issue: if you create an attribute named self.pushButton before calling self.setupUi, after that call self.pushButton will be a reference to the push button and the previously set value will be lost; if you create a new attribute named self.pushButton after calling self.setupUi, you will no longer be able to access the QPushButton instance.
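To make that concrete, here is a minimal sketch of the collision (assuming the button's object name is pushButton, as in the examples above):
class AppWindow(QDialog, Ui_Form):
    def __init__(self):
        super().__init__()
        # hypothetical attribute set *before* setupUi: it gets overwritten by
        # the QPushButton that setupUi creates from the .ui file
        self.pushButton = 'some value'
        self.setupUi(self)
        print(type(self.pushButton))  # <class 'PyQt5.QtWidgets.QPushButton'>

        # setting it again *after* setupUi shadows the widget reference:
        # the QPushButton still exists as a child widget, but it is no longer
        # (directly) reachable through self.pushButton
        self.pushButton = 'some value'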
Now, while for normal usage they usually behave in the same way, there are small but still important differences between using the pyuic generated files, or uic.loadUiType(), and the more direct uic.loadUi, and unfortunately those differences are not officially documented (and, frankly, I don't remember all of them).
An example has been previously reported by eyllanesc in the question Size of verticalLayout is different in Qt Designer and PyQt program.
Consider that those are very specific exceptions (which might even be addressed in future releases of PyQt, especially now that Qt6 is on the way) and they normally don't create major issues for standard programming.
Finally (at last!), while Designer is usually smart, it is not quite perfect. For example, "technically unimportant" objects such as QAction separators are considered "anonymous" and Designer doesn't even apply their object names: if you add a separator to a menu, there's no way to directly access it, even if you actively name it.

I would recommend that you don't program in that converted Python script, simply because if you want to change your layout and convert it again, you will overwrite everything you programmed there.
Import this converted ui into a new Python script instead. There are tons of introductions like this one (google them):
https://nitratine.net/blog/post/how-to-import-a-pyqt5-ui-file-in-a-python-gui/
Then you should read about signals and slots (you can also add them in Qt Designer) or add the functions in your Python script:
https://www.tutorialspoint.com/pyqt/pyqt_signals_and_slots.htm
It's the easiest way to learn about them when following the tutorials...

Related

QApplication and main window connection

Considering a very basic HelloWorld PyQt5 application like:
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel

app = QApplication(sys.argv)
window = QWidget()
window.setWindowTitle('PyQt5 app')
window.setGeometry(100, 100, 280, 80)
window.move(60, 15)
helloMsg = QLabel('<h1>Hello World!</h1>', parent=window)
helloMsg.move(60, 15)
window.show()
sys.exit(app.exec_())
It constructs a QApplication, a parent-less QWidget becoming the main window, adds a QLabel and shows it.
My question is: how does the QApplication know about the main window?
There is nothing in this code connecting the two.
Perhaps it is a naive question but just looking at this, it seems like magic.
How is the main window's paint event added to the application's event queue without saying so in the source code? How does the QApplication instance know what is going to be added further down in the source code?
tl;dr
There's no "magic" involved: sub-modules can access their "main" modules, and each module of Qt can know if a QApplication instance is running.
Long version
I think that's an interesting question, especially for those who are not that into low level programming. For instance, I've always given the QApplication as some sort of a "cartesian" assumption: «it exists».
As a premise, I'm not going to give you a very technical and low-level explanation: I don't have enough skills to do so (and I really welcome any other answer or edit to this), but I'm assuming that's not what you're looking for.
[Almost] technically speaking, you have to remember that Qt - and PyQt along with it - is an environment (the exact term is framework). As such, each one of its sub elements (classes, and eventually instances of them) "knows" about that environment.
QApplication (and its base classes QGuiApplication and QCoreApplication) is a class that is internally accessible from any "sub" Qt module.
It's something like the builtin types (str, int, bool, etc.) that are accessible to any module. For example, the os.path is a python module that you can import as standalone, but it knows what the main os module is, and each function of os.path actually uses some part of that main module.
Like most frameworks, Qt has what is called an event loop, which usually starts running as soon as you call Q[*]Application.exec(). An event loop is something that generally blocks, waiting for something to happen (an event), and eventually reacts to it.
Whenever a Qt class needs it, it internally calls the Q[*]Application.instance() method to ensure that an instance of the application is running, meaning that an event loop is active and running. For example Qt widgets need that to be able to show the interface and interact with it: tell the operating system that a new window has been created, therefore it has to be drawn on the screen, so the OS will say "ok, let's show it" by sending Qt an event requesting the drawing, then Qt will "send" that event to that window that will finally draw itself by telling Qt how it's being painted; finally Qt will "tell" the OS what's going to be shown. At the same time, that window might need to know if some keyboard or mouse event has been sent to it and react in some way.
You can see this in the Qt sources: whenever a new QWidget is created, it ensures that a QApplication exists by calling QCoreApplication.instance().
The same happens for other Qt objects that require an application event loop running. This is the case of QTimer (that doesn't require a graphical interface, but has to interface with the system for correct timing) and QPixmap (which needs to know about the graphical environment to correctly show its image), but in some specific cases it also depends on the platform (for example, creation of a QIcon on MacOS requires a running event loop, while that's not necessary on Linux and Windows).
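To illustrate that check from the Python side, here is a minimal sketch (not part of the question's code) showing how to reuse an already running application instead of creating a second one:
import sys
from PyQt5.QtWidgets import QApplication

# QApplication.instance() returns the running application object, or None if
# no application has been created yet in this process
app = QApplication.instance()
if app is None:
    app = QApplication(sys.argv)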
So, finally, that's what (roughly) happens when you run your code:
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel

# create an application instance; at this point the loop is not "running"
# (but that might be enough to let know most classes about the current system
# environment, such as available desktop geometries or cursor position)
app = QApplication(sys.argv)
# create a widget; an application exists and the widget can "begin" to create its
# interface using the information provided by it, like the system default font
# (it's actually a bit more complicated due to cross-platform issues, but let's
# ignore those things now)
window = QWidget()
window.setWindowTitle('PyQt5 app')
window.setGeometry(100, 100, 280, 80)
window.move(60, 15)
helloMsg = QLabel('<h1>Hello World!</h1>', parent=window)
helloMsg.move(60, 15)
# "ask Qt to prepare" the window that is going to be shown; at this point the
# widget's window is not shown yet even if it's "flagged as shown" to Qt, meaning
# that "window.isVisible()" will return True even if it's not actually visible yet
window.show()
# start the event loop by running app.exec(); sys.exit will just "wait" for the
# application to return its value as soon as it actually exits, while in the
# meantime the "exec" function will run its loop almost as a "while True" cycle
# would do; at this point the loop will start telling the OS that a new window
# has to be mapped and wait from the system to tell what to do: it will probably
# "answer" that it's ok to show that window, then Qt will tell back the widget
# that it can go on by "polishing" (use the current style and app info to finally
# "fix" its size) and begin drawing itself, then Qt will give back those drawing
# information allowing the OS to actually "paint" it on the screen; then it will
# be probably waiting for some user (keyboard/mouse) interaction, but the event
# loop might also tell the OS that the window is willing to close itself (as a
# consequence of a QTimer calling "widget.close", for instance) which could
# possibly end with ending the whole event loop, which is the case of
# https://doc.qt.io/qt-5/qguiapplication.html#quitOnLastWindowClosed-prop
# which would also cause the application to, finally, return "0" to sys.exit()
sys.exit(app.exec_())

PyQt5: create multiple views with designer and connect them in one app

I have completed a simple app in PyQt5, where I designed the UI in Qt Designer, converted it to Python code with pyuic5, and ran it via the Python interpreter.
Now I would like to add another UI view, although I am not familiar with PyQt5, and most of the tutorials I found only mention one view.
If I was using Visual Studio, for example, I could create a new form and use show and hide methods to display them when I press a button, but I am not sure how to do the same with PyQt5.
The code converted by pyuic5 also includes the if __name__ == "__main__" block, which creates the instance and runs the app; so is it enough to just take everything above it to get the UI data only? And how do I create a view from that, so I can show and hide it as needed? Thanks.
EDIT:
Got a bit further, since I found a different way to load UI files. It seems that PyQt has a method that is able to load a UI file directly, instead of converting it to Python code. This means that I can create a class that is a subclass of the type of window that I am using (for example: QMainWindow, QWidget and so on), and I can access that object as if it were a form in Visual Studio.
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QMainWindow
from PyQt5.uic import loadUi

class UIObject(QMainWindow):
    def __init__(self):
        super(UIObject, self).__init__()
        loadUi('mainapp.ui', self)
        self.mybutton1.clicked.connect(self.printhello)

    def printhello(self):
        print("hello")

app = QApplication(sys.argv)
mainview = UIObject()
mainview.show()
sys.exit(app.exec_())
This will load the UI file and show it on screen; I assume that I can use the same construct to load multiple ui files and show or hide them as I do in Visual Studio? It seems straightforward, but not knowing much about Qt or PyQt, I am not sure why this way of handling ui files is not more commonly advertised in tutorials; I found it by chance while reading the docs.
Found the solution, mixing up various answers and posts from different forums.
You create a first class as a QMainWindow; in its __init__ you use loadUi to load the Qt Designer file. Then you create a second class, which is the one that holds your second form/view, and in its __init__ you pass the parent view as a parameter (your first class, or whatever other you may need); this way you can hide the main view and show the second view when clicking a button. When you close the secondary view, the previous view will show up again.
You can add as many different windows as you want; the trick is to always pass the parent to each of them and remember to show/hide them accordingly. Much more complex than Visual Studio forms, but it is doable.
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QDialog
from PyQt5.uic import loadUi

class FirstForm(QMainWindow):
    def __init__(self):
        super(FirstForm, self).__init__()
        loadUi('firstform.ui', self)
        self.button1.clicked.connect(self.openOtherForm)

    def openOtherForm(self):
        self.hide()
        otherview = SecondForm(self)
        otherview.show()

class SecondForm(QDialog):
    def __init__(self, parent=None):
        super(SecondForm, self).__init__(parent)
        loadUi('secondform.ui', self)
        self.button2.clicked.connect(self.goBackToOtherForm)

    def goBackToOtherForm(self):
        self.parent().show()
        self.close()

app = QApplication(sys.argv)
main = FirstForm()
main.show()
sys.exit(app.exec_())

Getting unittest PySide and Maya on commandline to work

I have a Maya environment with a PySide window which gets generated on the fly with whatever is in that Maya scene. I'm trying to now take that to command-line and make unittests out of it.
I have everything working, minus one problem.
Most PyQt/PySide unittest documentation states that you should create a QApplication like this:
app = QApplication(sys.argv)
win = SomeWindow()
sys.exit(app.exec_())
This doesn't work because there's already a QApplication instance, built from Maya.
RuntimeError: A QApplication instance already exists.
Excluding these steps though yields this error and the tests fail:
QWidget: Cannot create a QWidget when no GUI is being used
I know that there's a QApplication instance in the scene, because this command yields a QApplication instance:
QApplication.instance()
So how do I associate the GUI that I want to create with that instance? You can't exec_() Maya's running QApplication so I'm not sure how to get my GUI to see the QApplication.
Found the solution. The issue was that I wasn't holding a global reference to the same instance across all of my tests (multiple Maya files were being created/destroyed over and over).
So somewhere at the top of the file, you just write
APP = None
Then, in each test, import another file that keeps a link to that QApplication instance as a singleton and set APP equal to it.
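A rough sketch of that pattern (the get_app helper name is made up here, and inside Maya the import would come from PySide/PySide2 rather than PyQt5):
import sys
from PyQt5.QtWidgets import QApplication  # in Maya, use the PySide/PySide2 equivalent

APP = None

def get_app():
    # Reuse the QApplication that the host (Maya) already created, or build
    # one when the tests run standalone on the command line.
    global APP
    if APP is None:
        APP = QApplication.instance() or QApplication(sys.argv)
    return APP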

Communicate between two windows without violating class encapsulation

I have created a little PyQt5 project. Here is a screenshot of the application while running:
When the user clicks on the QPushButton from the main window, the dialog window appears and the user writes something in the QLineEdit. Then, when the user clicks on the QPushButton of the dialog window, the dialog window sends a signal to the main window and is deleted. The signal contains the text typed by the user.
Here are the descriptions of my two classes which are very simple:
The MainWindow class.
The DialogWindow class (I want to make my own Dialog Class without using the pre existing Dialog windows).
My main script
I have several questions:
Is it the right way of using signals in order to communicate between windows? I do not think that I violate the class encapsulation. However I do not like to connect the signal on the child class by writing:
self.mySignal.connect(parent.updatelabelAnswer)
In this line I use the attribute parent - is it okay? It seems to me that it is not a good way to use signals.
My second question is:
Am I right to call self.deleteLater() in the on_pushButton_clicked slot of DialogWindow? It seems not, as I have checked with the python interactive shell and the object myDialogWindow is still accessible.
Generally, the parent should always be the one performing the signal connecting. Having the child widget make connections on the parent is problematic because it places limitations on the parent and causes side effects, and it completely breaks in cases where parent ownership is transferred for the child widget.
In your example, there are two options I would consider "correct". If the dialog was meant to be at least somewhat persistent, and not meant to be run modally, then it should define a signal that the parent class connects to. The dialog should not delete itself, that should be the responsibility of the parent class after the signal is received.
MainWindow
def on_pushbutton_clicked(self):
    if not self.dlg:
        self.dlg = DialogWindow(self)
        self.dlg.mySignal.connect(self.on_mySignal)
        self.dlg.show()

def on_mySignal(self, value):
    self.dlg.mySignal.disconnect()
    self.dlg.close()
    self.dlg.deleteLater()
    self.dlg = None
    self.updateLabelAnswer(value)
Your dialog seems to be a temporary dialog that exists just to gather input and should probably be run modally. In that case, you don't even have to define any signals. Just create the class and provide an API to get the value of the text box.
DialogWindow
class DialogWindow(...):
    ...
    def on_pushbutton_clicked(self):
        self.accept()

    def getValue(self):
        return self.lineEdit.text()
In MainWindow
def on_pushbutton_clicked(self):
    dlg = DialogWindow(self)
    if dlg.exec_():
        value = dlg.getValue()
Okay so I guess I should post an answer instead of writing bloated comments :P
About the deletion I will quote the Qt documentation:
As with QWidget::close(), done() deletes the dialog if the Qt::WA_DeleteOnClose flag is set. If the dialog is the application's main widget, the application terminates. If the dialog is the last window closed, the QApplication::lastWindowClosed() signal is emitted.
However if you want to handle the closing (and deletion) of the dialog window from your other widget that opens it, slots and signals should be used. Simply connect a button or whatever from your main widget and its clicked() signal to the done() slot of your dialog and you are good to go.
At this point I would also like to point out that deleting a dialog may not be necessary. Based on the memory footprint of the dialog (how much memory is used to create and run it), you may wish to consider creating the dialog at the beginning and leaving it in memory until the main application is closed. In addition, you can use hide() and show() to display it on the screen. This is generally good practice for things that are small enough, since deleting and then re-creating a window takes more time than simply hiding and showing it.
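As a small sketch of both suggestions (the button and attribute names below are made up for the illustration):
# inside the main widget, assuming self.dialog has already been created:
# a button closes the dialog through its done() slot...
self.closeDialogButton.clicked.connect(lambda: self.dialog.done(0))
# ...and the same instance is shown/hidden again instead of being deleted
self.showDialogButton.clicked.connect(self.dialog.show)
self.hideDialogButton.clicked.connect(self.dialog.hide)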
Now, about the signals and slots: these have pretty straightforward semantics. As I've posted in the comments and in my other answer, in order to connect a slot to a signal you need to have them both present in the same scope. If that's not the case, pass one (or both) to a place where the situation is fixed. In your case you have to have a common place for both. If both are top-level widgets, you have to make the connections inside your main(). I would rather add the dialog as an extension of your MainWindow class (as a class member) and do the instantiation plus the connections there - for example in the constructor of your MainWindow:
class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.setupUi(self)
        self.dialog = DialogWindow(self)
        # Connect mainwindow's signals to dialog's slots
        # Connect dialog's signals to mainwindow's slots
        # And even connect dialog's signals to dialog's slots

PyQt4 Regarding Class Documentation

I'm not sure if this question has been asked before or not, but I have searched for it a bunch, perhaps I'm using the wrong keywords. I've also googled for it, but I think I'd like a more specific answer.
My question relates to the following line seen in pretty much every PyQt4 class documentation
The parent argument, if not None, causes self to be owned by Qt instead of PyQt.
Constructs a dialog with parent parent.
A dialog is always a top-level widget, but if it has a parent, its default location is centered on top of the parent. It will also share the parent's taskbar entry.
I want to understand exactly what this means. From what I understand it has to do with how the object lives in memory and how it interacts with Python's garbage collection.
To be a little bit more concrete, say I have a QMainWindow and I pass it as the parent to a QDialog, i.e. somewhere I have a function like this:
def ShowFooDialog():
    dlg = FooDialog()
    if dlg.exec_():
        --doSomeStuff--
what would be the difference between that and the following
def ShowFooDialog():
    dlg = FooDialog(parent=MyMainWindow)
    if dlg.exec_():
        --doSomeStuff--
Thanks! If this is something that could have been found out by better reading of the PyQt4 docs I apologize in advance!
