How to reimplement triggered() signal in pyqt for a QAction? - python

I'm using Python with PyQt for my interface, and Yapsy to add plugins. Yapsy finds all my plugins and adds them all to a menu in my main window. Each plugin is activated by a triggered signal. This signal for a QAction has no parameters, so I need to know which plugin emitted the signal.
This is the relevant code:
pluginMenu = self.menuBar().addMenu("P&lugins")
# Create the plugin manager
self.manager = PluginManager(categories_filter={"Formatters": Formatter})
self.manager.setPluginPlaces(["plugins"])
# Load plugins
self.manager.locatePlugins()
self.manager.loadPlugins()
# A do-nothing formatter by default
self.formatters = {}
for plugin in self.manager.getPluginsOfCategory("Formatters"):
    # plugin.plugin_object is an instance of the plugin
    print(plugin.plugin_object.name)
    # Create an action for each plugin, connected to the default triggered() signal
    newAction = self.createAction(plugin.plugin_object.name, slot=self.updatePreview)
    self.addActions(pluginMenu, (newAction, None))
    self.formatters[plugin.plugin_object.name] = (plugin.plugin_object, newAction)

def updatePreview(self):
    # Here I need to know which plugin emitted the signal
    pass
I thought of connecting the signal to another signal that has some parameters, but I don't know how to do it.

I don't know what Yapsy is, but there is the QObject.sender method:
QObject QObject.sender (self)
Returns a pointer to the object that sent the signal, if called in a
slot activated by a signal; otherwise it returns 0. The pointer is
valid only during the execution of the slot that calls this function
from this object's thread context.
The pointer returned by this function becomes invalid if the sender is
destroyed, or if the slot is disconnected from the sender's signal.
Warning: This function violates the object-oriented principle of
modularity. However, getting access to the sender might be useful when
many signals are connected to a single slot.
Warning: As mentioned above, the return value of this function is not
valid when the slot is called via a Qt.DirectConnection from a thread
different from this object's thread. Do not use this function in this
type of scenario.
Some more tips here: http://blog.odnous.net/2011/06/frequently-overlooked-and-practical.html

The correct way to do this is with a QSignalMapper.
Example code:
signalmap = QSignalMapper(self)
signalmap.mapped[QString].connect(self.handler)
...
signalmap.setMapping(action, name)
action.triggered[()].connect(signalmap.map)
This will re-emit the triggered signal with a string "name" parameter. It's also possible to re-emit signals with an int, QWidget or QObject parameter.
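Another common way to get the same effect, in more recent PyQt code, is to bind the name into the slot with functools.partial, e.g. `action.triggered.connect(partial(self.updatePreview, name))`. Here is a minimal Qt-free sketch of that idea; the `Action` class below is only a stand-in for QAction, not a real Qt type:

```python
from functools import partial

class Action:
    """Stand-in for QAction: stores connected callbacks and can 'emit'."""
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def trigger(self):
        for slot in self._slots:
            slot()

triggered_names = []

def update_preview(name):
    # In the real app this would look up self.formatters[name]
    triggered_names.append(name)

actions = {}
for name in ("HTML", "Markdown"):
    action = Action()
    # partial bakes the plugin name into the slot, so the
    # parameterless triggered() signal still identifies its sender
    action.connect(partial(update_preview, name))
    actions[name] = action

actions["Markdown"].trigger()
print(triggered_names)  # ['Markdown']
```

The design choice is the same as with QSignalMapper: the identifying parameter is attached at connect time, so the slot does not need to ask who sent the signal.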


Python waiting until condition is met

I've created a GUI that asks the user for a username/password. The class that creates the GUI calls another class that creates a web browser and tries to log in to a website using a method of that class. If the login is successful, an attribute of the GUI object becomes True.
My main file is the next one :
from AskUserPassword import AskGUI
from MainInterfaceGUI import MainGUI

Ask = AskGUI()
Ask = Ask.show()

MainInterface = MainGUI()
if Ask.LoginSuccesful == True:
    Ask.close()
    MainInterface.show()
If the login is successful, I would like to hide the User/Password GUI and show the main GUI. The code above clearly doesn't work.
How can I make Python wait until a condition of this type is met?
Instead of constantly checking for the condition to be met, why not supply what you want to do upon login as a callback to your AskGUI object, and then have the AskGUI object call the callback when login is attempted? Something like:
def on_login(ask_gui):
    if ask_gui.LoginSuccesful:
        ask_gui.close()
        MainInterface.show()

Ask = AskGUI()
Ask.login_callback = on_login
Then, in AskGUI, when the login button is clicked and you check the credentials, you do:
def on_login_click(self):
    ...
    # Check login credentials.
    self.LoginSuccessful = True
    # Try to get self.login_callback; default to None if it doesn't exist.
    callback_function = getattr(self, 'login_callback', None)
    if callback_function is not None:
        callback_function(self)
Re.
I prefer to have all the structure in the main file. This is a reduced example, but if I start triggering from a method inside a class that is also inside another class... it's going to be hard to understand.
I recommend this way because all the code that handles something happening upon login is contained in the class that does the logging in, while the code that decides which UI elements to display (on_login()) lives in the class that handles that.
You don't need anything in the background that keeps checking to see if Ask.LoginSuccessful has changed.
When you use a decent IDE, it's pretty easy to keep track of where each function is defined.
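The whole pattern can be sketched end to end without any GUI toolkit; the class below is a minimal stand-in for the asker's AskGUI, not the real one:

```python
class AskGUI:
    """Minimal stand-in for the login window."""
    def __init__(self):
        self.LoginSuccessful = False
        self.closed = False

    def close(self):
        self.closed = True

    def on_login_click(self):
        # ... credential check would go here ...
        self.LoginSuccessful = True
        # Fire the callback if one was registered
        callback = getattr(self, 'login_callback', None)
        if callback is not None:
            callback(self)

shown = []

def on_login(ask_gui):
    if ask_gui.LoginSuccessful:
        ask_gui.close()
        shown.append('main')  # stands in for MainInterface.show()

ask = AskGUI()
ask.login_callback = on_login
ask.on_login_click()
print(ask.closed, shown)  # True ['main']
```

Nothing polls in a loop: the login window itself drives the transition the moment the login attempt finishes.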

How to design python code for telegram (telebot lib)?

I am developing a telegram bot, and I have many handlers for responses from the user.
Since there are many handlers, and many different dialogs are also possible, I moved some of the handlers into separate classes (dialogs):
import telebot

bot = telebot.TeleBot(tg_api_key)

@bot.message_handler(commands=['buy_something'])
def buy(message):
    from dialogs.buy_something import Buy_Dialog
    w = Buy_Dialog(bot, message)
and:
@bot.message_handler(commands=['sell_something'])
def sell(message):
    from dialogs.sell_something import Sell_Dialog
    w = Sell_Dialog(bot, message)
Inside the dialog classes I can send questions to the user and get answers from them by using:
self.m = self.bot.send_message(message.chat.id, "Some question")
self.bot.register_next_step_handler(self.m, self.enter_your_name)
But now I need to get from user callback from button click:
@bot.callback_query_handler(func=lambda call: True)
def button_click_yes_or_no(self, call):
But I can only catch these from main.py, not from inside the dialog.
How should I redesign the code to keep the logic clear while still being able to catch the button callback?
Maybe the function can't see your variable; try making your variable global at the beginning of the code:
global your_variable, another_variable
You cannot catch callbacks from inside the class instance.
But you can follow the same logic as for commands: create one decorated general function that catches all calls and passes each one on to the proper class instance based on the call's content (call.data or call.message.chat.id).
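That routing idea can be sketched without telebot; the Dialog class and the handler arguments below are simplified stand-ins for the real dialog classes and the call object:

```python
class Dialog:
    """Stand-in for Buy_Dialog / Sell_Dialog."""
    def __init__(self, chat_id):
        self.chat_id = chat_id
        self.clicks = []

    def on_button_click(self, data):
        # Dialog-specific handling of the button callback
        self.clicks.append(data)

# chat_id -> currently active dialog instance
active_dialogs = {}

def start_dialog(chat_id):
    active_dialogs[chat_id] = Dialog(chat_id)

def button_click_handler(chat_id, data):
    """The single @bot.callback_query_handler would do exactly this:
    look up the dialog for call.message.chat.id and delegate call.data."""
    dialog = active_dialogs.get(chat_id)
    if dialog is not None:
        dialog.on_button_click(data)

start_dialog(42)
button_click_handler(42, 'yes')
print(active_dialogs[42].clicks)  # ['yes']
```

The one decorated handler stays in main.py, but all dialog-specific logic stays in the dialog classes.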

Gtk gobject.timeout_add() callback get data from another thread

I am not skilled in Gtk programming and I am trying to modify some existing code so that the gobject.timeout_add() callback function can use data from another thread.
The existing code creates a GUI for displaying a DOT graph by periodically reading a file but I want the DOT data to be read from a string that can be obtained from a subscriber defined as follows:
import rospy
from std_msgs.msg import String

rospy.Subscriber('/dot_string', String, self.update_dot_graph)

def update_dot_graph(self, msg):
    self.dot_string = msg.data
The rospy.Subscriber runs in its own thread, so it is unclear to me how to get access to self.dot_string when the Gtk.main() function appears to block any other code from running.
The existing GUI code defines the following callback for gobject.timeout_add():
def update(self):
    if self.openfilename is not None:
        current_mtime = os.stat(self.openfilename).st_mtime
        if current_mtime != self.last_mtime:
            self.last_mtime = current_mtime
            self.reload()
    return True
Here you can see that the DOT code is read from a file if it has changed since the last read and then the GUI window is reloaded. I would like instead to use the data in self.dot_string from the other thread to be used in this update function. Alternatively, I would like to trigger the data update and reload() directly in the update_dot_graph() callback that is attached to the other thread.
Please let me know if more details are needed for this to make sense.
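One common approach is to guard the shared string with a lock: the subscriber thread writes it, and the periodic GUI callback (registered with gobject.timeout_add) reads it, compares it to the last value it saw, and calls reload() only on change. A sketch of that idea, without Gtk or rospy (the class and names below are illustrative, not from the asker's code):

```python
import threading

class DotGraphHolder:
    def __init__(self):
        self._lock = threading.Lock()
        self._dot_string = None
        self._last_seen = None
        self.reload_count = 0

    def update_dot_graph(self, data):
        # Called from the subscriber thread
        with self._lock:
            self._dot_string = data

    def update(self):
        # Called periodically on the GUI thread, e.g. via gobject.timeout_add
        with self._lock:
            current = self._dot_string
        if current is not None and current != self._last_seen:
            self._last_seen = current
            self.reload_count += 1  # stands in for self.reload()
        return True  # keep the timeout source alive

holder = DotGraphHolder()
t = threading.Thread(target=holder.update_dot_graph,
                     args=('digraph { a -> b }',))
t.start()
t.join()
holder.update()  # sees new data, "reloads"
holder.update()  # unchanged, no reload
print(holder.reload_count)  # 1
```

The alternative mentioned in the question, triggering the reload directly from update_dot_graph(), should not touch Gtk widgets from the subscriber thread; the usual way is to hand the work back to the main loop (e.g. with gobject.idle_add) instead.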

post_save.disconnect does not work at all

I'm trying to make Django not send a signal in one case. When adding a new instance of the model Delivery (right after creating a Job) as an attribute of the model Job, I don't want the signal to be sent, because the signal alerts the admin that the Job has been edited.
Unfortunately I can't make it work.
@receiver(post_save, sender=Job)  # When Job is created or edited
def alert_admin(sender, instance, created, **kwargs):
    if created:
        email.AdminNotifications.new_order(instance)
    else:
        email.AdminNotifications.edited_order(instance)

@receiver(post_save, sender=Job)  # When a Job is created, create a Delivery object as an attribute of the Job
def create_delivery(sender, instance, created, **kwargs):
    if created:
        delivery, created_delivery = Delivery.objects.get_or_create(job=instance)
        instance.delivery = delivery
        delivery.save()

        post_save.disconnect(alert_admin)
        instance.save()  # I DON'T WANT TO SEND THE SIGNAL IN THIS CASE
        post_save.connect(alert_admin)
Where is the problem? I did this, but I still receive two alerts: New Order and Edited Order.
The problem is that you are listening to the same signal twice.
@receiver(post_save, sender=Job)  # When Job is created or edited
def alert_admin(sender, instance, created, **kwargs):
    ###

@receiver(post_save, sender=Job)
def create_delivery(sender, instance, created, **kwargs):
    ###
You are assuming that create_delivery will be called first, but that does not seem to happen; alert_admin appears to be called first. So whatever signal disabling you do in create_delivery just goes to waste.
Django does not provide any guarantees or controls over the order in which signals are fired (what's the order of post_save receiver in django?)
You can add a simple flag to your instance to tell the signal processor that this signal does not need further processing.
if hasattr(instance, 'signal_processed'):
    return
else:
    # do whatever processing
    instance.signal_processed = True
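A Django-free sketch of that flag guard (the tiny dispatcher below only simulates post_save firing its receivers in an unfavorable order; it is not Django's implementation):

```python
alerts = []

def alert_admin(instance, created):
    if getattr(instance, 'signal_processed', False):
        return  # already handled; skip the duplicate alert
    alerts.append('new' if created else 'edited')

def create_delivery(instance, created):
    if created:
        instance.delivery = 'delivery-1'
        instance.signal_processed = True  # set the flag before the extra save()
        send_post_save(instance, created=False)  # the instance.save() inside

def send_post_save(instance, created):
    # Receiver order is not guaranteed in Django; here alert_admin runs first,
    # the worst case for the disconnect/connect approach
    for receiver in (alert_admin, create_delivery):
        receiver(instance, created)

class Job:
    pass

job = Job()
send_post_save(job, created=True)  # the original Job.save()
print(alerts)  # ['new'] -- the nested save did not alert again
```

Because the guard lives on the instance rather than in connect/disconnect calls, it works regardless of which receiver the signal dispatcher happens to run first.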

What is the use of 'self.update' in Roger Stuckey's wxPython Multiprocessing Example code

I was reading Roger Stuckey's wxPython Multiprocessing code to try to make a similar program myself. Full code can be found here.
The code runs fine without any modification. However, I found a parameter self.update being passed around between the GUI class MyFrame and the processing class TaskServerMP. I have searched throughout the entire code snippet and couldn't figure out what it does -- it never seems to be initialized or used anywhere.
In the class MyFrame:
def OnStart(self, event):
    ...
    self.taskserver.processTasks(self.update)
    ...

def OnStop(self, event):
    ...
    self.taskserver.processStop(self.update)
    ...

def update(self, output):
    """
    Get and print the results from one completed task.
    """
    self.output_tc.AppendText('%s [%d] calculate(%d) = %.2f\n'...
    ...
    # Give the user an opportunity to interact
    wx.YieldIfNeeded()
In the class TaskServerMP:
def run(self):
    ...
    self.processTasks(self.update)
    ...

def processTasks(self, resfunc=None):
    ...

def processStop(self, resfunc=None):
    ...

def update(self, output):
    """
    Get and print the results from one completed task.
    """
    sys.stdout.write('%s [%d] calculate(%d) = %.2f' % ....
So I thought it was a dependency-injection practice and nothing more. I then removed it from the code, and the strangest thing happened -- the program doesn't work anymore! The GUI displayed and I was able to start the processing, but the GUI just hung, and later Windows reported that the program was not responding. I ended up killing all the pythonw.exe processes manually from the Windows Task Manager.
Then I started to wonder whether it has anything to do with the signatures of the functions processTasks and processStop in the class TaskServerMP. But I really have no idea how to associate the parameter self.update with the optional argument resfunc.
I don't think there is anything wrong with Roger's code. But it bothers me that I cannot twist the source around to test my understanding of the code.
I use Python 2.7 on Windows 7.
MyFrame.update is a method. You can see its definition on line 365.
So self.update is a bound method, meaning it can be called as if it were a regular function.
You can see that processTasks takes a resfunc parameter; then, at line 165, if it got a function or method as that resfunc parameter, it calls it.
The idea here is that processTasks leaves it up to the caller to decide how to print out progress updates as each task completes. One class might do it by writing them to stdout. A different class might instead update a GUI progress bar.
This is a pretty typical way to pass callbacks around in Python code.
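A stripped-down sketch of that resfunc pattern (no wx or multiprocessing here; the names mirror the example, but the bodies are simplified stand-ins):

```python
results_printed = []

def process_tasks(tasks, resfunc=None):
    """Run each task; let the caller decide how to report results."""
    for n in tasks:
        output = n * n
        if resfunc is not None:
            resfunc(output)

class MyFrame:
    def __init__(self):
        self.lines = []

    def update(self, output):
        # In the real code: append to the wx text control, then wx.YieldIfNeeded()
        self.lines.append('calculate = %d' % output)

frame = MyFrame()
# Passing the bound method, exactly like self.taskserver.processTasks(self.update)
process_tasks([2, 3], resfunc=frame.update)
print(frame.lines)  # ['calculate = 4', 'calculate = 9']
```

Swapping in a different resfunc (say, one that writes to stdout) changes how progress is reported without touching process_tasks at all, which is the point of the injection.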
So, why does the program hang if you take out the self.update? Well, look what's inside it, at line 372:
# Give the user an opportunity to interact
wx.YieldIfNeeded()
In wx, as in most GUI frameworks, the main thread is running an "event loop", something which has to process each event (a mouse move, a keypress, whatever) as it comes in, and then wait for the next one. You write your code as a bunch of event handlers—when someone clicks this button, run that function; etc. Your event handlers have to return quickly. Otherwise, the event loop doesn't get to pick up and dispatch the next event, so your GUI isn't responding. In wx, the Yield family of functions make life easier. As long as you Yield often enough, you don't have to return quickly. But you still have to do one or the other—either return early, or Yield—or the GUI will hang.
Here's a very simple example showing how to use bound methods:
class Foo(object):
    def __init__(self, name):
        self.name = name

    def print_name(self):
        print(self.name)

    def give_me_a_printer_function(self):
        return self.print_name

spam = Foo('Spam')
my_function1 = spam.print_name
my_function2 = spam.give_me_a_printer_function()
my_function1()
my_function2()
This will print Spam twice.
Functions and methods are first class values in Python—you can pass them around just like you can pass around numbers, strings, lists, and class instances. You can even print them out (although you'll get something ugly like <bound method Foo.print_name of <__main__.Foo object at 0x104629190>>).
