Gtk gobject.timeout_add() callback get data from another thread - python

I am not skilled in Gtk programming and I am trying to modify some existing code so that the gobject.timeout_add() callback function can use data from another thread.
The existing code creates a GUI for displaying a DOT graph by periodically reading a file but I want the DOT data to be read from a string that can be obtained from a subscriber defined as follows:
import rospy
from std_msgs.msg import String
rospy.Subscriber('/dot_string', String, self.update_dot_graph)
def update_dot_graph(self, msg):
    self.dot_string = msg.data
The rospy.Subscriber runs in its own thread and so it is unclear to me how to get access to self.dot_string when the Gtk.main() function appears to block any other code from running.
The existing GUI code defines the following callback for gobject.timeout_add():
def update(self):
    if self.openfilename is not None:
        current_mtime = os.stat(self.openfilename).st_mtime
        if current_mtime != self.last_mtime:
            self.last_mtime = current_mtime
            self.reload()
    return True
Here you can see that the DOT code is re-read from the file whenever it has changed since the last read, and the GUI window is then reloaded. Instead, I would like this update function to use the data in self.dot_string coming from the other thread. Alternatively, I would like to trigger the data update and reload() directly from the update_dot_graph() callback that runs in the other thread.
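For concreteness, here is a rough, untested sketch of what I imagine; the set_dotcode() call is only a stand-in for whatever reload() actually does with the DOT text, and the locking is my guess at how to share the string safely between the rospy thread and the GTK main loop:

import threading

import rospy
from std_msgs.msg import String


class DotUpdater(object):
    def __init__(self, widget):
        self.widget = widget              # the existing GUI object
        self.dot_string = None
        self.last_dot_string = None
        self.lock = threading.Lock()      # guards dot_string between threads
        rospy.Subscriber('/dot_string', String, self.update_dot_graph)

    def update_dot_graph(self, msg):
        # Runs in the rospy thread: only store the data, never touch GTK here.
        with self.lock:
            self.dot_string = msg.data

    def update(self):
        # Runs in the GTK main loop via gobject.timeout_add(); GUI calls are safe here.
        with self.lock:
            dot = self.dot_string
        if dot is not None and dot != self.last_dot_string:
            self.last_dot_string = dot
            self.widget.set_dotcode(dot)  # stand-in for the existing reload() path
        return True                       # keep the timeout running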
Please let me know if more details are needed for this to make sense.

Related

How to write to a file the messages a ROS node is subscribed to?

I know about ROS publishing and subscribing nodes. This question is specifically about nodes that are subscribed to a message.
Now according to the documentation a simple subscriber would be like
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def callback(data):
    rospy.loginfo(rospy.get_caller_id() + "I heard %s", data.data)
    # write to the text file here

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber("chatter", String, callback)
    # Open a text file here?
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    listener()
Here the callback simply logs the message (data).
However, what if I want to save this message to a file?
The standard way to do this would be to open a text file after defining the node subscription (where the comment is in the code above), with something like f = open('path/to/csv_file', 'w'), and then write to the text file in the callback function (also marked with a comment in the code above).
However, if I do this, the opened file is not guaranteed to be closed anywhere, right? Is there a chance that, if I Ctrl-C this node while it is running, some data is not written to the file because it was not closed properly?
In a few words: what is the correct way to code a ROS node that is subscribed to a message and writes that message to a file?
(Note: I don't think I can use with to handle files here, or at least I have not figured out how.)
ROS callbacks are called for as long as the blocking call rospy.spin() runs, and that function returns when the process is shut down. This means you can use the with statement to ensure your file is closed properly afterwards:
with open('path/to/csv_file', 'w') as f:
    rospy.spin()
and in your callback you can then write the message to the file:
def callback(data):
    f.write(data.data)
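Putting those pieces together, a minimal end-to-end sketch could look like the following; the file path is made up, and functools.partial is used here so the callback receives the open file handle without relying on a global:

#!/usr/bin/env python
import functools

import rospy
from std_msgs.msg import String


def callback(f, data):
    # f is the already-open file handle, data is the incoming String message
    f.write(data.data + '\n')


def listener():
    rospy.init_node('listener', anonymous=True)
    # The with-block guarantees the file is flushed and closed even on Ctrl-C,
    # because rospy.spin() returns once the node is shut down.
    with open('/tmp/chatter_log.txt', 'w') as f:
        rospy.Subscriber('chatter', String, functools.partial(callback, f))
        rospy.spin()


if __name__ == '__main__':
    listener()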

Pythoncom - Passing same COM object to multiple threads

Hello :) I'm a complete beginner when it comes to COM objects; any help is appreciated!
I'm working on a Python program that is supposed to read incoming MS-Word documents in a client/server fashion, i.e. the client sends a request (one or multiple MS-Word documents) and the server reads specific content from those requests using pythoncom and win32com.
Because I want to minimize waiting time for the client (the client needs a status message from the server), I do not want to open an MS-Word instance for every request. Hence, I intend to have a pool of running MS-Word instances from which the server can pick and choose. This, in turn, means I have to reuse those instances from the pool in different threads, and this is what causes trouble right now. After I read Using win32com with multithreading, my dummy code for the server looks like this:
import pythoncom, win32com.client, threading, psutil, os, queue, time, datetime

appPool = {'WINWORD.EXE': queue.Queue()}

def initAppPool():
    global appPool
    wordApp = win32com.client.DispatchEx('Word.Application')
    appPool["WINWORD.EXE"].put(wordApp)  # For testing purpose I only use one MS-Word instance currently

def run_in_thread(appid, path):
    # open doc, read do some stuff, close it and reattach MS-Word instance to pool
    pythoncom.CoInitialize()
    wordApp = win32com.client.Dispatch(pythoncom.CoGetInterfaceAndReleaseStream(appid, pythoncom.IID_IDispatch))
    doc = wordApp.Documents.Open(path)
    time.sleep(3)  # read out some content ...
    doc.Close()
    appPool["WINWORD.EXE"].put(wordApp)

if __name__ == '__main__':
    initAppPool()
    pathOfFile2BeRead1 = r'C:\Temp\file4.docx'
    pathOfFile2BeRead2 = r'C:\Temp\file5.doc'

    # treat first request
    wordApp = appPool["WINWORD.EXE"].get(True, 10)
    pythoncom.CoInitialize()
    wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)
    readDocjob1 = threading.Thread(target=run_in_thread, args=(wordApp_id, pathOfFile2BeRead1), daemon=True)
    readDocjob1.start()

    # wait here until readDocjob1 is done
    wait = True
    while wait:
        try:
            wordApp = appPool["WINWORD.EXE"].get(True, 1)
            wait = False
        except queue.Empty:
            print(f"[{datetime.datetime.now()}] error: appPool empty")
        except BaseException as err:
            print(f"[{datetime.datetime.now()}] error: {err}")
So far everything works as expected, but when I start a second request similar to the first one:
wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)  # (x)
readDocjob2 = threading.Thread(target=run_in_thread, args=(wordApp_id, pathOfFile2BeRead2), daemon=True)
readDocjob2.start()
I receive the following error message for the line marked (x): "The application called an interface that was marshaled for a different thread".
I thought that was exactly why I have to use pythoncom.CoGetInterfaceAndReleaseStream, to pass the same COM object between threads? And besides that, why does it work the first time but not the second time?
I searched for different solutions on Stack Overflow which use CoMarshalInterface instead of CoMarshalInterThreadInterfaceInStream, but they all gave me the same error. I'm really confused right now.
EDIT:
After fixing the error as mentioned in the comments, I ran into a mysterious behavior.
When the second job is executed:
wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)
readDocjob2 = threading.Thread(target=run_in_thread,args=(wordApp_id,pathOfFile2BeRead2), daemon=True)
readDocjob2.start()
The function run_in_thread terminates immediately without executing any line; it seems as if pythoncom.CoInitialize() is not working properly.
The script finishes without any error messages though.
def run_in_thread(instance, appid, path):
    # open doc, read do some stuff, close it and reattach MS-Word instance to pool
    pythoncom.CoInitialize()
    wordApp = win32com.client.Dispatch(pythoncom.CoGetInterfaceAndReleaseStream(appid, pythoncom.IID_IDispatch))
    doc = wordApp.Documents.Open(path)
    time.sleep(3)  # read out some content ...
    doc.Close()
    instance.flag = True
What happens is that you put back into the appPool a COM reference that you got from CoGetInterfaceAndReleaseStream. But that reference was created specifically for the new thread, and you then call CoMarshalInterThreadInterfaceInStream on this new reference.
That's what is wrong.
You must always use the original COM reference, from the thread that created it, to be able to call CoMarshalInterThreadInterfaceInStream repeatedly.
So, to solve the problem, you must change how your appPool works: use some kind of "in use" flag, but don't touch the original COM reference, as sketched below.
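A rough sketch of such a pool follows; the class and method names are made up, and the key point is simply that the original reference created in the main thread is kept there and re-marshalled for every job, while the worker thread only ever touches the reference it unmarshals itself:

import threading
import pythoncom
import win32com.client


class WordPool:
    """Keeps the original COM references; marshals a fresh stream per job."""

    def __init__(self, size=1):
        self._lock = threading.Lock()
        # Each entry keeps the ORIGINAL reference plus an "in use" flag.
        self._instances = [{'app': win32com.client.DispatchEx('Word.Application'),
                            'in_use': False} for _ in range(size)]

    def acquire(self):
        # Call this from the thread that created the instances (e.g. the main thread).
        with self._lock:
            for entry in self._instances:
                if not entry['in_use']:
                    entry['in_use'] = True
                    # Marshal the original reference for the worker thread.
                    stream = pythoncom.CoMarshalInterThreadInterfaceInStream(
                        pythoncom.IID_IDispatch, entry['app'])
                    return entry, stream
        return None, None

    def release(self, entry):
        with self._lock:
            entry['in_use'] = False


def run_in_thread(pool, entry, stream, path):
    pythoncom.CoInitialize()
    try:
        word = win32com.client.Dispatch(
            pythoncom.CoGetInterfaceAndReleaseStream(stream, pythoncom.IID_IDispatch))
        doc = word.Documents.Open(path)
        # ... read whatever is needed ...
        doc.Close()
    finally:
        pool.release(entry)       # only the flag is touched, never the original reference
        pythoncom.CoUninitialize()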

tda-api assigning streaming endpoints to variables

I am new to Python APIs. I cannot get the script to return a value. Could anyone point me in the right direction, please? I cannot get the lambda function to work properly. I am trying to save the streamed data into variables to use with a set of operations.
from tda.auth import easy_client
from tda.client import Client
from tda.streaming import StreamClient
import asyncio
import json
import config
import pathlib
import math
import pandas as pd

client = easy_client(
    api_key=config.API_KEY,
    redirect_uri=config.REDIRECT_URI,
    token_path=config.TOKEN_PATH)

stream_client = StreamClient(client, account_id=config.ACCOUNT_ID)

async def read_stream():
    login = asyncio.create_task(stream_client.login())
    await login
    service = asyncio.create_task(stream_client.quality_of_service(StreamClient.QOSLevel.EXPRESS))
    await service

    book_snapshots = {}

    def my_nasdaq_book_handler(msg):
        book_snapshots.update(msg)

    stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
    stream = stream_client.nasdaq_book_subs(['GOOG', 'AAPL', 'FB'])
    await stream

    while True:
        await stream_client.handle_message()
        print(book_snapshots)

asyncio.run(read_stream())
Callbacks
This (wrong) assumption
stream_client.add_nasdaq_book_handler() contains all the trade data.
shows difficulties in understanding the callback concept. Typically, the naming pattern add ... handler indicates that this concept is being used. There is also the comment in the boilerplate code from the Streaming Client docs
# Always add handlers before subscribing because many streams start sending
# data immediately after success, and messages with no handlers are dropped.
that consistently talks about subscribing; this word, too, is a strong indicator.
The basic principle of a callback is that instead of pulling information from a service (and being blocked until it is available), you enable that service to push the information to you when it becomes available. You typically do this by first registering one (or more) interests with the service and then waiting for things to come.
In the section Handling Messages they give an example of a function (to be provided by you) as follows:
def sample_handler(msg):
    print(json.dumps(msg, indent=4))
which takes a msg argument and dumps it to the console in JSON format. The lambda in your example does exactly the same.
Lambdas
it's not possible to return a value from a lambda function because it is anonymous
This is not correct. If lambda functions could not return values, they would not play such an important role. See 4.7.6. Lambda Expressions in the Python 3 docs.
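For instance, a lambda returns the value of its single expression, just like a regular function's return statement would:

double = lambda x: 2 * x   # equivalent to: def double(x): return 2 * x
print(double(21))          # prints 42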
The problem in your case is that neither function does anything with the data beyond printing it to the console. Now you need to get into these functions and tell them what to do.
Control
Actually, your program runs within this loop
while True:
    await stream_client.handle_message()
Each stream_client.handle_message() call eventually results in a call to the function you registered via stream_client.add_nasdaq_book_handler. So that's the point: your script defines what to do when messages arrive before it starts waiting.
For example, your function could just collect the arriving messages:
book_snapshots = []

def my_nasdaq_book_handler(msg):
    book_snapshots.append(msg)
A global object book_snapshots is used in the implementation. You may expand or change this function at will (of course, translating the information into JSON format will help you access it in a structured way). This line registers your function:
stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
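If the goal is to end up with variables you can compute with, the handler could, for example, keep the latest entry per symbol. Note that the 'content' and 'key' field names below are an assumption about the message layout and may need adjusting to what you actually receive:

latest_books = {}  # symbol -> most recent NASDAQ book entry

def my_nasdaq_book_handler(msg):
    # Assumed layout: msg is a dict whose 'content' list holds one entry per
    # symbol, with the symbol name stored under 'key'.
    for entry in msg.get('content', []):
        latest_books[entry.get('key')] = entry

stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)

Inside the while True loop you can then read latest_books (or any other variables the handler fills) after each handle_message() call.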

Robot Framework: Do something after the execution ends using a library as a listener

I am trying to gather the files: output.xml, report.html and log.html when the whole execution ends.
I can use a listener for this purpose, but I want to avoid writing a line like:
robot --listener "SomeListener.py" YourTestSuite.robot
Then, I looked at the documentation and I found that I can use a Test Library as a Listener and import it in my Test Suite, for example:
class SomeLibrary(object):
    ROBOT_LISTENER_API_VERSION = 2  # or 3
    ROBOT_LIBRARY_SCOPE = "GLOBAL"  # or TEST SUITE

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self

    def start_suite(self, data, result):
        pass

    def close(self):
        pass
My problem is that the close method is called when the library goes out of scope. So I cannot gather the report files, because they don't exist at that moment, and I receive an error. I also tried with the method:
def report_file(self, path):
    pass
But nothing happens. I guess that could be because a library used as a listener cannot use these methods from the listener API, or because the file is still not created at that point.
Any idea how I can gather those files using a library as a listener?
I am open to ideas or suggestions.
Thanks.
The listener specification actually defines methods that are triggered when the result files are generated:
startSuite
endSuite
startTest
endTest
startKeyword
endKeyword
logMessage
message
outputFile
logFile
reportFile
debugFile
Have a look at the last four, which should be helpful. For more details, look at the listener API specification, section 4.3.3 Listener interface methods.
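For example, here is a minimal sketch of a listener that copies each result file to a folder as soon as Robot Framework reports it as ready. The destination folder name is made up, and it is written so it can be registered with --listener or hooked in via ROBOT_LIBRARY_LISTENER as in the question (though, as the question observed, some of these events may not reach a library listener):

import os
import shutil


class ResultFileCollector(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, destination='collected_results'):
        self.ROBOT_LIBRARY_LISTENER = self   # also usable as a library listener
        self.destination = destination

    def _collect(self, path):
        # Copy a finished result file to the destination folder.
        if not os.path.isdir(self.destination):
            os.makedirs(self.destination)
        shutil.copy(path, self.destination)

    # Each of these receives the path of the corresponding file once it exists.
    def output_file(self, path):
        self._collect(path)

    def log_file(self, path):
        self._collect(path)

    def report_file(self, path):
        self._collect(path)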

Python 3 (Bot) script stops working

I'm trying to connect to a TeamSpeak server using the QueryServer to make a bot. I've taken advice from this thread, however I still need help.
This is The TeamSpeak API that I'm using.
Before the edits, this was the summary of what actually happened in my script (1 connection):
It connects.
It checks for channel ID (and it's own client ID)
It joins the channel and starts reading everything
If someone says a specific command, it executes the command and then it disconnects.
How can I make it so it doesn't disconnect? How can I make the script stay in a "waiting" state so it can keep reading after the command is executed?
I am using Python 3.4.1.
I tried learning threading, but either I'm dumb or it doesn't work the way I thought it would. There's another "bug": once it is waiting for events, if I don't trigger anything with a command, it disconnects after 60 seconds.
# Libraries
import ts3
import threading
import datetime
from random import choice, sample

# Data needed #
USER = "thisisafakename"
PASS = "something"
HOST = "111.111.111.111"
PORT = 10011
SID = 1

class BotPrincipal:
    def __init__(self, manejador=False):
        self.ts3conn = ts3.query.TS3Connection(HOST, PORT)
        self.ts3conn.login(client_login_name=USER, client_login_password=PASS)
        self.ts3conn.use(sid=SID)
        channelToJoin = Bot.GettingChannelID("TestingBot")
        try:  # Login with a client that is ok
            self.ts3conn.clientupdate(client_nickname="The Reader Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])
            self.suscribirEvento("textchannel", ChannelToJoin)
            self.ts3conn.on_event = self.manejadorDeEventos
            self.ts3conn.recv_in_thread()
        except ts3.query.TS3QueryError:  # Name already exists, 2nd client connects with this info
            self.ts3conn.clientupdate(client_nickname="The Writer Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])

    def __del__(self):
        self.ts3conn.close()

    def GettingMyData(self):
        respuesta = self.ts3conn.whoami()
        return respuesta.parsed[0]

    def GettingChannelID(self, nombre):
        respuesta = self.ts3conn.channelfind(pattern=ts3.escape.TS3Escape.unescape(nombre))
        return respuesta.parsed[0]["cid"]

    def MoveUserToChannel(self, idCanal, idUsuario, passCanal=None):
        self.ts3conn.clientmove(cid=idCanal, clid=idUsuario, cpw=passCanal)

    def suscribirEvento(self, tipoEvento, idCanal):
        self.ts3conn.servernotifyregister(event=tipoEvento, id_=idCanal)

    def SendTextToChannel(self, idCanal, mensajito="Error"):
        self.ts3conn.sendtextmessage(targetmode=2, target=idCanal, msg=mensajito)  # This works
        print("test")  # PROBLEM HERE This doesn't work. Why? The line above did work

    def manejadorDeEventos(sender, event):
        message = event.parsed[0]['msg']
        if "test" in message:  # This works
            Bot.SendTextToChannel(ChannelToJoin, "This is a test")  # This works

if __name__ == "__main__":
    Bot = BotPrincipal()
    threadprincipal = threading.Thread(target=Bot.__init__)
    threadprincipal.start()
Prior to using two bots, I tested launching SendTextToChannel when it connects, and it works perfectly, letting me do anything I want after it sends the text to the channel. The bug that makes the entire Python code stop only happens when it is triggered from manejadorDeEventos.
Edit 1 - Experimenting with threading.
I messed it up big time with threading, ending up with two clients connecting at the same time. Somehow I think one of them is reading the events and the other one is answering. The script doesn't close itself anymore, and that's a win, but having a clone connection doesn't look good.
Edit 2 - Updated code and actual state of the problem.
I managed to make the double connection work more or less "fine", but it disconnects if nothing happens in the room for 60 seconds. I tried using threading.Timer but I'm unable to make it work. The question's code has been updated accordingly.
I would like an answer that helps me do both the reading from the channel and the answering to it without needing to connect a second bot (as it currently does...). And I would give extra points if the answer also helps me understand an easy way to send a query to the server every 50 seconds so it doesn't disconnect.
From looking at the source, recv_in_thread doesn't create a thread that loops around receiving messages until quit time, it creates a thread that receives a single message and then exits:
def recv_in_thread(self):
    """
    Calls :meth:`recv` in a thread. This is useful,
    if you used ``servernotifyregister`` and you expect to receive events.
    """
    thread = threading.Thread(target=self.recv, args=(True,))
    thread.start()
    return None
That implies that you have to repeatedly call recv_in_thread, not just call it once.
I'm not sure exactly where to do so from reading the docs, but presumably it's at the end of whatever callback gets triggered by a received event; I think that's your manejadorDeEventos method? (Or maybe it's something related to the servernotifyregister method? I'm not sure what servernotifyregister is for and what on_event is for…)
That manejadorDeEventos brings up two side points:
You've declared manejadorDeEventos wrong. Every method has to take self as its first parameter. When you pass a bound method, like self.manejadorDeEventos, that bound self object is going to be passed as the first argument, before any arguments that the caller passes. (There are exceptions to this for classmethods and staticmethods, but those don't apply here.) Also, within that method, you should almost certainly be accessing self, not a global variable Bot that happens to be the same object as self.
If manejadorDeEventos is actually the callback for recv_in_thread, you've got a race condition here: if the first message comes in before your main thread finishes the on_event assignment, recv_in_thread won't be able to call your event handler. (This is exactly the kind of bug that often shows up one time in a million, making it a huge pain to debug when you discover it months after deploying or publishing your code.) So, reverse those two lines.
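To make those two points concrete, here is a sketch of how the handler could look once it takes self and re-arms the single-shot receiver at the end; self.ChannelToJoin is an assumption that the channel id gets stored as an attribute in __init__:

class BotPrincipal:
    # ... __init__ and the other methods stay as they are ...

    def manejadorDeEventos(self, event):
        # Bound method: 'self' comes first, and we use it instead of the global Bot.
        message = event.parsed[0]['msg']
        if "test" in message:
            # assumes __init__ stored the channel id as self.ChannelToJoin
            self.SendTextToChannel(self.ChannelToJoin, "This is a test")
        # Re-arm the receiver so the next event is picked up as well.
        self.ts3conn.recv_in_thread()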
One last thing: a brief glimpse at this library's code is a bit worrisome. It doesn't look like it's written by someone who really knows what they're doing. The method I copied above only has 3 lines of code, but it includes a useless return None and a leaked Thread that can never be joined, not to mention that the whole design of making you call this method (and spawn a new thread) after each event received is weird, and even more so given that it's not really explained. If this is the standard client library for a service you have to use, then you really don't have much choice in the matter, but if it's not, I'd consider looking for a different library.
