I've written a program that uses tkinter to create a GUI. In the GUI I have a button that starts a routine which connects to a socket and reads in messages containing signal information. I needed this to run constantly in the background, because I had other functionality that needed to stay accessible in the GUI, but the GUI would lock up while it ran.
So I wrote code that runs that button's callback in a new thread.
# Run everything after connect in a separate thread, so the GUI is not locked
def _start_connect_thread(self, event):
    HOST = self.ip_e.get()
    PORT = int(self.port_e.get())
    global connect_thread
    connect_thread = threading.Thread(target=self.connect, kwargs={'host': HOST, 'port': PORT})
    connect_thread.daemon = True
    connect_thread.start()

# Connect TaskTCS and StreamingDataService to AIMS
def connect(self, host=None, port=None):
    print("Connecting sensor tasking program to AIMS...")
    self.tt = TaskTCS(host, port)
    print("Connecting streaming data program to AIMS...")
    self.sd = StreamingData(host, port)
    # Run Streaming Data Service, which will pull all streaming data from sensor
    self.sd.run()
With this code, my GUI is free to perform other tasks. Most importantly, I can press a button that plots the data coming in from the sensor. When I press the plot button, a flag is toggled in the sd class, which then uses the information coming from the sensor to plot it with matplotlib. Inside the sd class is a function running in a while loop, unpacking information from the sensor and checking whether the flag is toggled so it knows when to plot.
Is this not thread safe?
The reason I ask is this program works perfectly fine on the machine I'm working on. However, when I try to run this with anaconda3 python, I get these errors.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
QObject::setParent: Cannot set parent, new parent is in a different thread
QObject::setParent: Cannot set parent, new parent is in a different thread
I'm not sure if these errors are from anaconda, or if it's from non-thread-safe coding.
When I tried to run this program on a machine that had Python 2.6, I got this error when clicking the connect button:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.6/threading.py", line 484, in run
    self._target(*self._args, **self._kwargs)
  File "WaterfallGUI.py", line 282, in connect
    HOST = self.ip_e.get()
  File "/usr/lib64/python2.6/lib-tk/Tkinter.py", line 2372, in get
    return self.tk.call(self._w, 'get')
TclError: out of stack space (infinite loop?)
So can a program somehow have no issues with threads on one machine but have them on another?
Note: In an attempt to solve the second error, I moved the .get() calls into the _start_connect_thread function, before the thread is actually started. Previously, I was calling them inside connect. Because I was calling tkinter getters from a different thread, could that have been the issue in that case? If so, why wouldn't it cause an error on my machine with Python 2.7? This was the old code:
def _start_connect_thread(self, event):
    global connect_thread
    connect_thread = threading.Thread(target=self.connect)
    connect_thread.daemon = True
    connect_thread.start()

def connect(self):
    HOST = self.ip_e.get()
    PORT = int(self.port_e.get())
    ...
I don't believe I'm calling anything tkinter GUI related outside of the main loop in the rest of my code. I see stuff about queues but I can't tell if I need to implement that in my code.
A program can work on one machine and not on another, but "thread safety" these days means that the program provably does not invoke any "undefined behaviors" of the language that it's written in or the libraries that it uses.
If there's some machine where the program does not "work," and some other machine where it does work, then that pretty much proves that the program is not "thread safe."
The reason I ask is...when I try to run this with anaconda3 python, I get these errors...
Oh. Python.
That's not really a well specified language. It's more like a family of similar languages. You're not necessarily just running the program on a different machine, you're also porting it to what may be a subtly different language.
No, this is not possible. Either code is thread-safe or it isn't. Thread-safety is a property of the algorithms/code, not the target machine. As indicated in the comments, this is far more likely due to an environment setup difference than something about the machine.
That being said, I'm not convinced that this is exactly a thread-safety issue at all. I'm admittedly not terribly familiar with this particular GUI framework, so I could be wrong here, but based on references like this, it seems like you're trying to "directly" update the GUI from another thread, which isn't permitted. (This is actually a very common restriction; WPF, for example, has the exact same rule).
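For what it's worth, the usual pattern here is the queue the asker mentions: the worker thread only puts data onto a queue.Queue, and the Tk main loop polls that queue with after(), so every widget update happens on the main thread. Below is a minimal Python 3 sketch of that idea (not the asker's code; the widget and function names are made up):

import queue
import threading
import tkinter as tk

root = tk.Tk()
data_q = queue.Queue()

def worker():
    # Stand-in for the socket-reading loop; it never touches tkinter directly.
    for i in range(100):
        data_q.put(i)

def poll_queue():
    # Runs on the main thread via the tk event loop.
    try:
        while True:
            item = data_q.get_nowait()
            root.title("last sample: %s" % item)   # any widget update is safe here
    except queue.Empty:
        pass
    root.after(100, poll_queue)                    # re-schedule every 100 ms

threading.Thread(target=worker, daemon=True).start()
poll_queue()
root.mainloop()

On Python 2 the same idea applies with the Queue and Tkinter module names.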
Related
When declaring a class attribute as a multiprocessing.Process instance, the attribute isn't accessible to the class.
Background
I'm working on a free web development desktop application in Python for people new to coding. It downloads all the tools necessary to begin web development and sets up the system in a single install. It will set up and manage MongoDB and NodeJS instances, push and pull projects to and from a GitHub repository, build the application, and export a package that can be uploaded to a server, all from a single GUI. I'm currently having some issues managing the NodeJS instances. The first issue I ran into is piping multiple commands into the CLI, as Node doesn't play well without user intervention, but I figured out a workaround by writing out the commands in at most two lines.
Current Issue
My issue now is shutting down the NodeJS server. The GUI is built using customTkinter, and to avoid locking up the UI I have to start Node using threading.Thread, which doesn't provide a method to stop the thread. So I tried setting up subprocess.run and Popen in a while loop so I could pass a termination flag and break the process, but that just continued to spawn NodeJS servers until all system resources were consumed. My next attempt used threading.Thread to wrap multiprocessing.Process, which in turn wraps subprocess.run, since multiprocessing.Process has a built-in terminate method. (I tried subprocess.Popen, but that doesn't work when wrapped in multiprocessing.Process, as it returns a pickling error.) I stored the resulting multiprocessing.Process in a class attribute called NPM; however, when I call self.NPM.terminate(), the program raises an attribute error stating that the attribute doesn't exist.
Code
from subprocess import run
from multiprocessing import Process
from threading import Thread
...
self.startbtn = ctkButton(command=Thread(target=lambda: self.NPMStart(self.siteDir)))

def NPMStart(self, siteDir):
    self.stopbtn = ctk.ctkButton(command=self.NPMStop)
    self.NPM = Process(target=run(['powershell', 'npm', 'run', 'dev'], cwd=siteDir))
    self.admin.start()

def NPMStop(self):
    self.startbtn = ctkButton(command=Thread(target=lambda: self.NPMStart(self.siteDir)))
    self.NPM.terminate()
Closing Notes
I have no idea what I'm doing wrong here as from everything I've read this SHOULD work. Any explanation as to what I'm doing that is preventing the class from accessing the self.NPM attribute outside the NPMStart method would be greatly appreciated.
If you want to see the full code I currently have, feel free to check out my Github repository:
https://github.com/ToneseekerMusical/PPIM
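(For context on the layering described above, here is a minimal sketch of the simpler shape this usually takes: keep one handle to the child process on the instance so it can be terminated later. The class and attribute names are made up and this is not the repository's code; whether terminate() also reaches Node's own child processes depends on the platform.)

import subprocess
import threading

class NodeRunner:
    """Illustrative only: keep a single handle to the child so it can be stopped."""

    def __init__(self, site_dir):
        self.site_dir = site_dir
        self.proc = None

    def start(self):
        # The thread only launches and waits on the child; the handle lives on self.
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        self.proc = subprocess.Popen(['powershell', 'npm', 'run', 'dev'],
                                     cwd=self.site_dir)
        self.proc.wait()

    def stop(self):
        if self.proc is not None and self.proc.poll() is None:
            self.proc.terminate()   # may not reach Node's own children on every platform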
I'm writing an application that uses the Python Twisted API (namely WebSocketClientProtocol, WebSocketClientFactory, ReconnectingClientFactory). I want to wrap the client factory in a reader with the following interface:
class Reader:
    def start(self):
        pass

    def stop(self):
        pass
The start function will be used to open the connection (i.e. connect to the ws API and start reading data), while stop will close that connection.
My issue is that if I call reactor.run() inside start, the connection starts and everything is OK, but my code never gets past that line (it looks like a blocking call to me) and I cannot execute subsequent lines (including .stop in my tests).
I have tried variants such as reactor.callFromThread(reactor.run) and reactor.callFromThread(reactor.stop), and even explicitly calling Thread(target=...), but none seem to work (they usually don't build the protocol or open a connection at all).
Any help or guidelines on how to implement Reader.start and Reader.stop are welcome.
If you put reactor.run inside Reader.start then Reader will be a difficult component to use alongside other code. Your difficulties are just the first symptom of this.
Calling reactor.run and reactor.stop are the job of code responsible for managing the lifetime of your application. Put those calls somewhere separate from your WebSocket application code. For example:
r = Reader()
r.start()
reactor.run()
Or better yet, implement a twist(d) plugin and let twist(d) manage the reactor for you.
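As a rough sketch of that separation (assuming autobahn's connectWS helper; the URL and timings are placeholders): start only schedules the connection and stop drops it, while the reactor itself is run and stopped by the caller.

from autobahn.twisted.websocket import WebSocketClientFactory, connectWS
from twisted.internet import reactor

class Reader:
    def __init__(self, url):
        self.factory = WebSocketClientFactory(url)
        self.connector = None

    def start(self):
        # Non-blocking: just asks the reactor to open the connection.
        self.connector = connectWS(self.factory)

    def stop(self):
        if self.connector is not None:
            self.connector.disconnect()
            self.connector = None

if __name__ == '__main__':
    r = Reader("ws://127.0.0.1:9000")   # placeholder endpoint
    r.start()
    reactor.callLater(10, r.stop)       # e.g. read for ten seconds, then stop
    reactor.callLater(11, reactor.stop)
    reactor.run()                       # lifetime is managed here, outside Reader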
I want to trace a thread by logging all the symbols it calls, and I have found two possible methods.
1. The lldb settings list shows, under the 'target.process.thread' variables:
trace-thread -- If true, this thread will single-step and log execution.
This means lldb will log execution, but I can't find where the log goes.
2. The lldb Python class SBThread has an event bit, eBroadcastBitSelectedFrameChanged. I think it should fire a callback when the thread's frame changes, but why does SBThread have no broadcaster?
1) This setting was put in mostly to help diagnose problems with lldb's stepping algorithms. Since it causes all execution to go by instruction single step, it's going to make your program execute very slowly, so it hasn't been used for anything other than that purpose (and I haven't used it for that purpose in a good while, so it might have bit-rotted.) The output is supposed to go to the debugger's stdout.
2) eBroadcastBitSelectedFrameChanged is only sent when the user changes the selected frame with command line commands. It's meant to allow a GUI like Xcode that also allows command line interaction to keep the GUI sync'ed with user commands in the console. There isn't a GetBroadcaster for threads, because threads come and go and you generally want to listen to ALL the threads, not just a particular one. To do that, call SBThread.GetBroadcasterClassName and then sign your listener up for events by class name (StartListeningForEventClass).
If you have a need to listen to a particular thread, file an enhancement request to the bug tracker at http://lldb.llvm.org.
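A rough sketch of that class-name sign-up (assuming an existing lldb.SBDebugger called debugger, and listening only for the selected-frame bit):

import lldb

listener = lldb.SBListener("all-thread-events")

# Sign up for events from every thread via the broadcaster class name,
# rather than from one particular SBThread.
listener.StartListeningForEventClass(
    debugger,                                   # an existing lldb.SBDebugger
    lldb.SBThread.GetBroadcasterClassName(),
    lldb.SBThread.eBroadcastBitSelectedFrameChanged)

event = lldb.SBEvent()
while listener.WaitForEvent(1, event):          # 1-second timeout per wait
    print(lldb.SBThread.GetThreadFromEvent(event))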
I need to be able to take a screenshot (of a vnc session, if putting this in the title and tags wasn't clear enough) within a python script under OSX. The remote system is already running a vnc server which I am using for other purposes, and will eventually cover the full range of common desktop operating systems, so I would prefer to keep using vnc as opposed to some other solution.
I do not have a vnc window open on my test server, as it runs headless. I have tried using vncdotool, but I'd prefer not to have to shell out, and trying to mimic the control flow causes problems because Twisted does not allow you to restart the reactor, but if you leave it running it blocks the main thread, and there seem to be problems trying to run the reactor in a separate Thread or Process...
Does anyone have any ideas?
Building upon what tangentStorm suggested (using selenium to take the screenshot), try doing this. Open up src/Selenium2Library/keywords/_screenshot.py and look at lines 24-30.
background leaking when the page layout is somehow broken.
"""
path, link = self._get_screenshot_paths(filename)
self._current_browser().save_screenshot(path)
# Image is shown on its own row and thus prev row is closed on purpose
self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Delete the line self._current_browser().save_screenshot(path) and add the following directly in its place:
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
    self._current_browser().get_screenshot_as_file(path)
else:
    self._current_browser().save_screenshot(path)
So in all it should look like:
background leaking when the page layout is somehow broken.
"""
path, link = self._get_screenshot_paths(filename)
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
    self._current_browser().get_screenshot_as_file(path)
else:
    self._current_browser().save_screenshot(path)
# Image is shown on its own row and thus prev row is closed on purpose
self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Then try using selenium to take the screenshot.
Reference: Fix
After reading your comments, it seems what you actually want to do is take screenshots of remote web browsers running your flash game.
... And you're using selenium to test those remote web browsers.
... Why don't you just have selenium take the screenshots for you?
http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/TakesScreenshot.html
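In Python that boils down to something like the following sketch (the hub URL, page, and output path are placeholders; desired_capabilities reflects the Selenium 2-era API):

from selenium import webdriver

# Connect to a remote browser already under test.
driver = webdriver.Remote(
    command_executor='http://remote-host:4444/wd/hub',
    desired_capabilities={'browserName': 'firefox'})
try:
    driver.get('http://example.com/your-flash-game')
    driver.get_screenshot_as_file('/tmp/game.png')   # writes a PNG locally
finally:
    driver.quit()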
I don't know of any library that does this in python for OSX.
However, there are at least three other ways to get the screenshot:
Use the java.awt.Robot class from jython. (Except twisted probably won't run on jython.)
Port Apple's ScreenSnapshot example to Cython and compile it into a python module. (Of course you can do the same thing in C, but Cython makes it much more fun.)
If you can move your server to win32, or just run win32 on your mac via parallels, then you can use the python imaging library's ImageGrab module.
However, I think shelling out to the OS is still the easiest answer. Instead of trying to get it all to run in a single process, just have two processes running: your main twisted process, and some other server that uses threads or whatever.
Then just pass messages back and forth when you want to take a screenshot. You can do this with a simple socket connection (just write another handler in your twisted server, and have the screenshot server connect as a client)...
If it were me, I'd probably use an AMQP server like RabbitMQ to handle the message-passing, but that may be overkill for what you're doing.
Depending on your code, you might be able to use deferToThread to run the call to screencapture and return the filepath or a pil.Image instance (or whatever you need).
Using the example at http://twistedmatrix.com/documents/current/core/howto/gendefer.html#auto5 it might look something like...
from subprocess import call
import tempfile
from twisted.internet import reactor, threads
import Image  ## pip install pil

## Blocking code that takes the screenshot and saves it to file
def take_screenshot():
    tmp_file_path = tempfile.mktemp(suffix='.png')
    # os.system('screencapture %s' % tmp_file_path)
    retcode = call(['screencapture', tmp_file_path])
    if retcode == 0:  # screencapture exits with 0 on success
        img = Image.open(tmp_file_path)
        return img
    else:
        return None

## Callback fired by the deferToThread
def do_something_with_screenshot(img):
    print img.filename, img.format, img.size, img.mode
    reactor.stop()  ## just here for this example

def run():
    # get our Deferred, which will be called with the screenshot image
    d = threads.deferToThread(take_screenshot)
    # add our callback to print it out
    d.addCallback(do_something_with_screenshot)

if __name__ == '__main__':
    run()
    reactor.run()
Perhaps you can convince the robotframework or Selenium to send a CaptureScreen SenseTalk command to Eggplant Drive.
The Taking a Screenshot post in the TestPlant forums mentions this command.
I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?
Specifically, I'd like to enable the following behavior:
"foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
every few minutes the same script is launched again, but with different command-line parameters
when launched, the script should see if any other instances are running.
if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?
Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.
I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.
More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.
This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.
But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers.
The Alex Martelli approach of setting up a communications channel is the appropriate one. I would use a multiprocessing.connection.Listener to create a listener, with the address family of your choice. Documentation at:
http://docs.python.org/library/multiprocessing.html#multiprocessing-listeners-clients
Rather than using AF_INET (sockets) you may elect to use AF_UNIX for Linux and AF_PIPE for Windows. Hopefully a small "if" wouldn't hurt.
Edit: I guess an example wouldn't hurt. It is a basic one, though.
#!/usr/bin/env python

from multiprocessing.connection import Listener, Client
import socket
from array import array
from sys import argv

def myloop(address):
    try:
        listener = Listener(*address)
        conn = listener.accept()
        serve(conn)
    except socket.error, e:
        conn = Client(*address)
        conn.send('this is a client')
        conn.send('close')

def serve(conn):
    while True:
        msg = conn.recv()
        if msg.upper() == 'CLOSE':
            break
        print msg
    conn.close()

if __name__ == '__main__':
    address = ('/tmp/testipc', 'AF_UNIX')
    myloop(address)
This works on OS X, so it needs testing with both Linux and (after substituting the right address) Windows. A lot of caveats exist from a security point of view, the main one being that conn.recv unpickles its data, so you are almost always better off with recv_bytes.
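As a rough Python 3 illustration of the cross-platform "if" and of recv_bytes (channel names are made up; the exact exception raised for a taken channel varies by platform, and a stale AF_UNIX socket file left by a crash would still need cleanup):

import sys
from multiprocessing.connection import Listener, Client

if sys.platform == 'win32':
    ADDRESS, FAMILY = r'\\.\pipe\my_script_ipc', 'AF_PIPE'
else:
    ADDRESS, FAMILY = '/tmp/my_script_ipc', 'AF_UNIX'

def main(args):
    try:
        # Only the first instance can create the listener.
        listener = Listener(ADDRESS, FAMILY)
    except OSError:
        # Another instance already owns the channel: hand over our args and quit.
        with Client(ADDRESS, FAMILY) as conn:
            conn.send_bytes(' '.join(args).encode())
        return
    while True:
        with listener.accept() as conn:
            print('received:', conn.recv_bytes().decode())

if __name__ == '__main__':
    main(sys.argv[1:])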
The general approach is to have the script, on startup, set up a communication channel in a way that's guaranteed to be exclusive (other attempts to set up the same channel fail in a predictable way) so that further instances of the script can detect the first one's running and talk to it.
Your requirements for cross-platform functionality strongly point towards using a socket as the communication channel in question: you can designate a "well known port" that's reserved for your script, say 12345, and open a socket on that port listening to localhost only (127.0.0.1). If the attempt to open that socket fails, because the port in question is "taken", then you can connect to that port number instead, and that will let you communicate with the existing script.
If you're not familiar with socket programming, there's a good HOWTO doc here. You can also look at the relevant chapter in Python in a Nutshell (I'm biased about that one, of course;-).
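For illustration only, a bare-bones sketch of that idea (the port number and error handling are arbitrary): the first instance wins the bind on localhost, later instances connect and hand over their arguments.

import socket
import sys

PORT = 12345  # an arbitrary "well known port" reserved for this script

def claim_or_forward(args):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        srv.bind(('127.0.0.1', PORT))     # only one process can own this port
        srv.listen(5)
        return srv                        # we are instance #1: keep serving on it
    except OSError:
        # Port taken: another instance is running. Send it our args and exit.
        with socket.create_connection(('127.0.0.1', PORT)) as conn:
            conn.sendall(' '.join(args).encode())
        return None

if __name__ == '__main__':
    server_socket = claim_or_forward(sys.argv[1:])
    if server_socket is None:
        sys.exit(0)
    # ... accept() loop goes here, spinning up one thread per unit of work ...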
Perhaps try using sockets for communication?
Sounds like your best bet is sticking with a pid file, but have it not only contain the process id; have it also include the port number that the prior instance is listening on. So when starting up, check for the pid file and, if present, see if a process with that id is running. If so, send your data to it and quit; otherwise overwrite the pid file with the current process's info.
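A hedged sketch of that pid-file idea (the file path and JSON layout are made up; the Windows liveness check is left out because os.kill(pid, 0) only works as a harmless probe on POSIX):

import json
import os

PIDFILE = '/tmp/foo.pid'   # illustrative location

def existing_instance():
    """Return {'pid': ..., 'port': ...} for a live prior instance, else None."""
    try:
        with open(PIDFILE) as f:
            info = json.load(f)
    except (IOError, ValueError):
        return None                        # no pid file, or it is corrupt
    if os.name == 'posix':
        try:
            os.kill(info['pid'], 0)        # signal 0 only probes for existence
        except OSError:
            return None                    # stale pid file: process is gone
    # On Windows, trying to connect to info['port'] is a simpler liveness test.
    return info

def become_primary(port):
    with open(PIDFILE, 'w') as f:
        json.dump({'pid': os.getpid(), 'port': port}, f)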