I am running Python 2.7 on Ubuntu in Eclipse.
I am trying to call subprocess.Popen from a thread other than the main thread.
When I run this code from Eclipse:
#lsbt.py
import subprocess
import threading
import time

class someThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        p = subprocess.Popen(["ls", "/usr"], stdout=subprocess.PIPE)
        out = p.communicate()
        print "Done" + out[0]

def main():
    test = someThread()
    test.daemon = True
    test.start()
    while True:
        time.sleep(3600)
The whole python program seems to exit at the subprocess.Popen() line.
Here is what eclipse says the call stack looks like:
<terminated>lsbt_1 lsbt.py [Python Run]
<terminated>lsbt.py
lsbt.py
<terminated, exit value: 137>lsbt.py
All debugging seems to stop as well and nothing is printed to the console.
When I run the subprocess code from the main thread in Eclipse, it seems to work well.
It does not seem to matter what command subprocess.Popen runs; the only thing that seems to matter is that it is not being run from the main thread.
When I run the python code from the terminal, it works.
Could it be a problem with Eclipse?
#aabarnert commented: IIRC, errno 137 on linux is ENOTTY
One way to do it is to set:
daemon = False
I'm not sure why this works for Eclipse, but it does.
From Python Documentation:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property
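A minimal sketch of the fix, mirroring the question's thread class (the key change is leaving the daemon flag at its default of False so the interpreter waits for the thread to finish):

```python
import subprocess
import threading

class someThread(threading.Thread):
    def run(self):
        p = subprocess.Popen(["ls", "/usr"], stdout=subprocess.PIPE)
        out = p.communicate()
        print(out[0])

test = someThread()
# Non-daemon (the default): the interpreter will not tear this thread
# down at main-thread exit; it waits for run() to return instead.
test.daemon = False
test.start()
test.join()
```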
Related
I have a script that is supposed to run 24/7 unless interrupted. This script is script A.
I want script A to call Script B, and have script A exit while B is running. Is this possible?
This is what I thought would work
#script_A.py
while True:
    # do some stuff
    # do even more stuff
    if true:
        os.system("python script_B.py")
        sys.exit(0)
#script_B.py
time.sleep(some_time)
# do something
os.system("python script_A.py")
sys.exit(0)
But it seems as if A doesn't actually exit until B has finished executing (which is not what I want to happen).
Is there another way to do this?
What you are describing sounds a lot like a function call:
def doScriptB():
    # do some stuff
    # do some more stuff

def doScriptA():
    while True:
        # do some stuff
        if Your Condition:
            doScriptB()
            return

while True:
    doScriptA()
If this is insufficient for you, then you have to detach the process from your Python process. This normally involves spawning the process in the background, which in bash is done by appending an ampersand to the command:
yes 'This is a background process' &
And detaching said process from the current shell, which, in a simple C program, is done by forking the process twice. I don't know offhand how to do this in Python, but I would bet that there is a module for it.
This way, when the calling Python process exits, it won't terminate the spawned child, since the child is now independent.
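In Python, a common shortcut for this (assuming a POSIX system) is to start the child in its own session with os.setsid rather than double-forking by hand. A sketch, where the child command is a stand-in that merely reports whether it became a session leader:

```python
import os
import subprocess
import sys

# Start the child in its own session via os.setsid (POSIX only). It is
# then detached from this process's terminal and process group, so it
# keeps running after the caller exits. The real use would pass
# ["python", "script_B.py"] and simply not wait on the child.
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getsid(0) == os.getpid())"],
    stdout=subprocess.PIPE,
    preexec_fn=os.setsid,
)
out, _ = child.communicate()  # only so this sketch can show the result
print(out.strip())
```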
It seems you want to detach a system call into another process.
script_A.py
import subprocess
import sys

while True:
    # do some stuff
    # do even more stuff
    if true:
        # spawn script_B without waiting for it to finish
        pid = subprocess.Popen([sys.executable, "script_B.py"])
        sys.exit(0)
Anyway, it does not seem like good practice at all. Why not have script A watch the process list and stop if it finds script B running? This is another example of how you could do it.
import subprocess
import sys
import psutil

while True:
    # This section queries the currently running processes
    for proc in psutil.process_iter():
        pinfo = proc.as_dict(attrs=['pid', 'name'])
        if pinfo['name'] == "script_B.py":
            sys.exit(0)
    # do some stuff
    # do even more stuff
    if true:
        # spawn script_B without waiting for it to finish
        pid = subprocess.Popen([sys.executable, "script_B.py"])
        sys.exit(0)
Let's say that I have this simple line in python:
os.system("sudo apt-get update")
Of course, apt-get will take some time until it's finished. How can I check in Python whether the command has finished or not yet?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
    print "It still working.."
else:
    print "It finished"
Now the problem is, it never prints "It finished", even when it really finishes.
As the documentation states it:
This is implemented by calling the Standard C function system(), and
has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the bash command has finished, so you'll know that it is finished when os.system returns. If you'd like to do other stuff while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block.
process = Popen(['ls', '-l'])
# Later
# poll() returns the command's return code, or None if it hasn't finished.
if process.poll() is None:
    print "Still running"
else:
    print "Has finished"
Check the link above for more things you can do with Popen
For a more general approach at running code concurrently, you can run that in another thread or process. Here's example code:
import os
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
thread.start()  # start() runs the target in a new thread; calling run() would block
# Later
if thread.is_alive():
    print "Still running"
else:
    print "Has finished"
Another option would be to use the concurrent.futures module.
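For example, a minimal sketch using concurrent.futures (in the standard library since Python 3.2; available on 2.7 as the `futures` backport):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_command(argv):
    # subprocess.call blocks this worker thread, not the caller
    return subprocess.call(argv)

with ThreadPoolExecutor(max_workers=1) as pool:
    # sys.executable -c "pass" is an illustrative stand-in command
    future = pool.submit(run_command, [sys.executable, "-c", "pass"])
    # future.done() is a non-blocking "has it finished?" check
    still_running = not future.done()
    return_code = future.result()  # blocks until the command finishes
print(return_code)
```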
os.system will actually wait for the command to finish and return its exit status (in a platform-dependent format).
os.system is blocking; it calls the command, waits for its completion, and returns its return code.
So, it'll be finished once os.system returns.
If your code isn't working, I think it could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).
I am trying to automate the installation of a specific program using Sikuli and scripts on Windows 7. I needed to start the program installer and then use Sikuli to step through the rest of the installation. I did this using Python 2.7.
This code works as expected by creating a thread, calling the subprocess, and then continuing the main process:
import subprocess
from threading import Thread

class Installer(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        subprocess.Popen(["msiexec", "/i", "c:\path\to\installer.msi"], shell=True)

i = Installer()
i.run()
print "Will show up while installer is running."
print "Other things happen"
i.join()
This code does not operate as desired. It will start the installer but then hang:
import subprocess
from threading import Thread

class Installer(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        subprocess.call("msiexec /i c:\path\to\installer.msi")

i = Installer()
i.run()
print "Will not show up while installer is running."
print "Other things happen"
i.join()
I understand that subprocess.call will wait for the process to terminate. Why does that prevent the main thread from continuing on? Should the main continue execution immediately after the process call?
Why is there such a difference in behaviors?
I have only just recently started using threads.
You're calling i.run(), but what you should be calling is i.start(). start() invokes run() in a separate thread, but calling run() directly will execute it in the main thread.
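The difference is easy to observe by recording which thread actually executes the target; a small illustrative sketch:

```python
import threading

executed_in = []

def report():
    # record the name of the thread executing this body
    executed_in.append(threading.current_thread().name)

t = threading.Thread(target=report, name="worker-1")
t.run()     # executes report() inline, in the calling (main) thread

t2 = threading.Thread(target=report, name="worker-2")
t2.start()  # executes report() in the new "worker-2" thread
t2.join()

print(executed_in)
```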
First.
You need to add the command-line parameters to your install command to make it a silent install:
http://msdn.microsoft.com/en-us/library/aa372024%28v=vs.85%29.aspx
The subprocess is probably hung waiting for an install process that will never end, because it is waiting for user input.
Second.
If that doesn't work, you should be using Popen and communicate:
How to use subprocess popen Python
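A hedged sketch of that Popen/communicate pattern; the command here is an illustrative stand-in for the real msiexec invocation:

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "print('installing...')"],  # stand-in command
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()  # waits for exit; returns captured output
print(proc.returncode)
print(out.strip())
```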
Third.
If that still doesn't work, your installer is hanging somewhere, and you should debug the underlying process there.
I have some Python code that creates a daemon thread. The parent thread ends almost immediately, but the daemon thread keeps printing sleep.
import threading
import time

def int_sleep():
    for _ in range(1, 600):
        time.sleep(1)
        print("sleep")

def main():
    thread = threading.Thread(target=int_sleep)
    thread.daemon = True
    thread.start()
    time.sleep(2)
    print("main thread end...")

thread = threading.Thread(target=main)
thread.start()
sys.version:
'3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC v.1600 64 bit (AMD64)]'
Prints:
sleep
main thread end...
sleep
sleep
sleep
Why doesn't the Python daemon thread exit when parent thread exits?
If you specify thread.daemon = True for your Python thread, then the program will halt immediately when only daemon threads are left. Commands sent to stdout are then lost.
Add this to a file called main.py
import threading
import time

def int_sleep():
    for _ in range(1, 600):
        time.sleep(1)
        print("sleep")

def main():
    thread = threading.Thread(target=int_sleep)
    thread.daemon = True
    thread.start()
    time.sleep(2)
    print("main thread end...")

thread = threading.Thread(target=main)
thread.daemon = True
thread.start()
Run it like this:
el@apollo:~/code/python/run01$ python --version
Python 2.7.6
el@apollo:~$ python main.py
el@apollo:~$
See, it prints nothing: you created the thread, set it to be a daemon, and started it; then the program ended.
Extra notes: If you paste this code into a Python interpreter, all the print statements will appear on the terminal, because the interactive session keeps the main thread alive, so the daemon never loses its connection to stdout.
Read more: http://docs.python.org/2/library/threading.html
I can only reproduce the behavior described by OP (unending output of 'sleep') if done from the python shell. If run from a file it works as expected (a few lines of 'sleep' and a single line of 'main thread end ...' )
Similarly, the second program exits immediately if run as a file, BUT also prints unending 'sleep' statements when run from the python shell.
My conclusion: the thread that is the Python shell continues to run even after "main" finishes, which prevents the daemon(s) from being terminated when run from the Python shell.
Could this be considered a bug (i.e. that Python's behavior differs depending on how the script is run), or is it expected? I defer to more experienced Pythonistas...
BTW - tested with Python 3.2.3
For completeness check out this article.
https://joeshaw.org/2009/02/24/605/
The monitoring was done inside a daemon thread. The Python docs say
only:
A thread can be flagged as a “daemon thread”. The significance
of this flag is that the entire Python program exits when only
daemon threads are left.
Which sounds pretty good, right? This thread is just occasionally
grabbing some data, and we don’t need to do anything special when the
program shuts down. Yeah, I remember when I used to believe in things
too.
Despite a global interpreter lock that prevents Python from being
truly concurrent anyway, there is a very real possibility that the
daemon threads can still execute after the Python runtime has started
its own tear-down process. One step of this process appears to be to
set the values inside globals() to None, meaning that any module
resolution results in an AttributeError attempting to dereference
NoneType. Other variations on this cause TypeError to be thrown.
I'm not sure whether that's a bug that's been fixed or a bug still in existence or behaviour as per design. But if you see weirdness keep this in the back of your head.
So an alternative is to loop in the child thread on an exit flag, which you can set in the main thread when you're done. Then wait in the main thread for the child thread to die, and clean up.
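A minimal sketch of that exit-flag pattern using threading.Event:

```python
import threading

stop = threading.Event()

def worker():
    # Poll the flag instead of relying on daemon-thread teardown;
    # wait() doubles as the sleep between iterations of the real work.
    while not stop.is_set():
        stop.wait(0.1)

t = threading.Thread(target=worker)
t.start()
# ... main program does its work ...
stop.set()  # signal the worker to exit
t.join()    # wait for it before cleaning up
```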
I am trying to write a python program to test a server written in C. The python program launches the compiled server using the subprocess module:
pid = subprocess.Popen(args.server_file_path).pid
This works fine, however if the python program terminates unexpectedly due to an error, the spawned process is left running. I need a way to ensure that if the python program exits unexpectedly, the server process is killed as well.
Some more details:
Linux or OSX operating systems only
Server code can not be modified in any way
I would atexit.register a function to terminate the process:
import atexit
process = subprocess.Popen(args.server_file_path)
atexit.register(process.terminate)
pid = process.pid
Or maybe:
import atexit
process = subprocess.Popen(args.server_file_path)

@atexit.register
def kill_process():
    try:
        process.terminate()
    except OSError:
        # Ignore the error. The OSError doesn't seem to be documented(?);
        # as such, it *might* be better to process.poll() and check for
        # `None` (meaning the process is still running), but that
        # introduces a race condition. I'm not sure which is better;
        # hopefully someone who knows more about this than I do can
        # comment.
        pass

pid = process.pid
Note that this doesn't help you if you do something nasty that causes Python to die non-gracefully (e.g. via os._exit, a segmentation fault, or a bus error).