I'm on Ubuntu 16.04.6 LTS with Python 2.7.12. I'm not an expert in Python, but I have to maintain some code. Here is a snippet:
from threading import Thread
...
class Shell(cmd.Cmd):
    ...
    def do_start(self, line):
        threads = []
        # Pass the callable and its arguments separately; writing
        # target=traffic(...) would call traffic synchronously instead.
        t = Thread(target=traffic, args=(line, arg1, arg2, arg3))
        threads.append(t)
        t.start()
        t.join()
...
if __name__ == '__main__':
    global config
    global args
    args = parse_args()
    config = configparser.ConfigParser()
    config.read(args.FILE)
    s = Shell()
...
So it starts a small command-line shell where I can execute some commands. It does work; however, it blocks the CLI while the thread runs, so I googled and thought that adding t.setDaemon(True) would help. I tried it before t.start() and after, and it had no effect. Is it not supported in this version, or am I doing something wrong?
Thanks.
The t.join() makes the main thread wait for the one just created, so the CLI is blocked.
If you want to run your CLI without blocking the terminal, you need to run the work in the background.
If you run on Linux you can simply use the & sign.
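Alternatively, since the blocking comes from t.join(), here is a minimal sketch of a non-blocking do_start, assuming a traffic worker like the one in the question (the stand-in below does nothing): drop the immediate join() so the prompt returns, and keep the thread references in case you want to wait on them later.

from threading import Thread
import cmd

def traffic(line):
    pass  # stand-in for the real worker

class Shell(cmd.Cmd):
    def __init__(self):
        cmd.Cmd.__init__(self)
        self.threads = []

    def do_start(self, line):
        t = Thread(target=traffic, args=(line,))
        t.daemon = True           # optional: dies with the main program
        t.start()
        self.threads.append(t)    # no join() here, so the prompt returns

    def do_wait(self, line):
        for t in self.threads:    # block until all started workers finish
            t.join()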
EDIT: I found the issue. It was a problem with PyCharm. I ran the .py outside of PyCharm and it worked as expected. In PyCharm I enabled "Emulate terminal in output console" and it now also works there...
Expectations:
APScheduler spawns a thread that checks a website for something.
If the something was found (or multiple of it), the thread spawns (multiple) processes to download it/them.
After five seconds the next check thread spawns, while the earlier downloads may continue in the background.
Problem:
The spawned processes never disappear, which breaks other parts of the code (not included), because I need to check whether the processes are done, etc.
If I use a simple time.sleep(5) instead (see code), it works as expected.
No, I cannot set max_instances to 1, because that would stop the scheduled job from running whenever there is an active download process.
Code:
import datetime
import multiprocessing
import time
from apscheduler.schedulers.background import BackgroundScheduler

class DownloadThread(multiprocessing.Process):
    def __init__(self):
        super().__init__()
        print("Process started")

def main():
    print(multiprocessing.active_children())
    # prints: [<DownloadThread name='DownloadThread-1' pid=3188 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-3' pid=12228 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-2' pid=13544 parent=7088 started daemon>,
    #          ...
    #         ]
    new_process = DownloadThread()
    new_process.daemon = True
    new_process.start()
    new_process.join()

if __name__ == '__main__':
    sched = BackgroundScheduler()
    sched.add_job(main, 'interval', args=(), seconds=5, max_instances=999,
                  next_run_time=datetime.datetime.now())
    sched.start()
    while True:
        # main()        # works: processes despawn
        # time.sleep(5)
        input()
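For what it's worth, the multiprocessing docs note that active_children() itself joins any children that have already finished, so a finished process should drop out of that list once it is reaped. A minimal sketch demonstrating that, separate from the code above (quick_job is a placeholder):

import multiprocessing
import time

def quick_job():
    pass  # stand-in for a short download

if __name__ == '__main__':
    for _ in range(3):
        p = multiprocessing.Process(target=quick_job)
        p.daemon = True
        p.start()

    time.sleep(1)                             # let the children finish
    print(multiprocessing.active_children())  # reaps finished ones; expected: []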
I am testing Python threading with the following script:
import threading

class FirstThread(threading.Thread):
    def run(self):
        while True:
            print 'first'

class SecondThread(threading.Thread):
    def run(self):
        while True:
            print 'second'

FirstThread().start()
SecondThread().start()
This is running in Python 2.7 on Kubuntu 11.10. Ctrl+C will not kill it. I also tried adding a handler for system signals, but that did not help:
import signal
import sys

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
To kill the process, I send the program to the background with Ctrl+Z (which isn't being ignored) and then kill it by PID. Why is Ctrl+C being ignored so persistently? How can I resolve this?
Ctrl+C terminates the main thread, but because your threads aren't in daemon mode, they keep running, and that keeps the process alive. We can make them daemons:
f = FirstThread()
f.daemon = True
f.start()
s = SecondThread()
s.daemon = True
s.start()
But then there's another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:
import time
while True:
    time.sleep(1)
Now it will keep printing 'first' and 'second' until you hit Ctrl+C.
Edit: as commenters have pointed out, the daemon threads may not get a chance to clean up things like temporary files. If you need that, then catch the KeyboardInterrupt on the main thread and have it co-ordinate cleanup and shutdown. But in many cases, letting daemon threads die suddenly is probably good enough.
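A minimal sketch of that coordination, assuming workers that poll a shared stop flag (the Event, worker function, and temporary file here are illustrative, not from the original code):

import threading
import tempfile

stop = threading.Event()

def worker():
    tmp = tempfile.NamedTemporaryFile()
    try:
        while not stop.is_set():
            stop.wait(1)  # stand-in for a unit of work
    finally:
        tmp.close()       # cleanup still runs on shutdown

t = threading.Thread(target=worker)
t.start()
try:
    while t.is_alive():
        t.join(1)
except KeyboardInterrupt:
    stop.set()  # ask the worker to finish...
    t.join()    # ...and wait for its cleanup to complete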
KeyboardInterrupt and signals are only seen by the process's main thread. Have a look at Ctrl-c i.e. KeyboardInterrupt to kill threads in python.
I think it's best to call join() on your threads when you expect them to die. I've taken the liberty of changing your loops so they can end (you can add whatever cleanup is required there as well). The die variable is checked on each pass, and when it's True the loop finishes and the program exits.
import threading
import time

class MyThread(threading.Thread):
    die = False

    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        while not self.die:
            time.sleep(1)
            print(self.name)

    def join(self):
        self.die = True  # signal run() to leave its loop
        super(MyThread, self).join()  # works on Python 2 and 3

if __name__ == '__main__':
    f = MyThread('first')
    f.start()
    s = MyThread('second')
    s.start()
    try:
        while True:
            time.sleep(2)
    except KeyboardInterrupt:
        f.join()
        s.join()
An improved version of @Thomas K's answer:
Define a helper function is_any_thread_alive(), based on this gist, which lets main() terminate automatically.
Example code:
import threading
import time

def job1():
    ...

def job2():
    ...

def is_any_thread_alive(threads):
    return True in [t.is_alive() for t in threads]

if __name__ == "__main__":
    ...
    t1 = threading.Thread(target=job1, daemon=True)
    t2 = threading.Thread(target=job2, daemon=True)
    t1.start()
    t2.start()

    while is_any_thread_alive([t1, t2]):
        time.sleep(0)  # yield to the workers; a small positive interval would spin less
One simple 'gotcha' to beware of: are you sure CAPS LOCK isn't on?
I was running a Python script in the Thonny IDE on a Pi4. With CAPS LOCK on, Ctrl+Shift+C is passed to the keyboard buffer, not Ctrl+C.
I am trying to create a module init and a module mydaemon in Python 2.7 under Debian 7.
The module init checks requirements such as db connections, etc. Then mydaemon runs in a thread and uses the database to do things and write a logfile.
The problem: when the thread is set up as a daemon, the logging and the function call fail.
But if the thread is not a daemon, it works fine...
Where am I wrong, or what would be a better approach?
init.py
import mydaemon, threading

print 'start'
t = threading.Thread(target=mydaemon.start, args=())
t.daemon = True  # error here
t.start()
mydaemon.py
import logging

def start():
    work()
    return

def work():
    logging.basicConfig(filename='mylog.log', level=logging.DEBUG)
    logging.info('foo log')
    print 'foo console'
    return
My colleague found another method using the external python-daemon module:
http://www.gavinj.net/2012/06/building-python-daemon-process.html
The tutorial has some errors, but read the comments ;-)
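For reference, the basic shape of that library's API looks roughly like this (a minimal sketch; run_my_daemon is a placeholder for the actual work):

import daemon  # the python-daemon package

def run_my_daemon():
    pass  # long-running work goes here

with daemon.DaemonContext():
    run_my_daemon()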
Making it a daemon means the background thread dies as soon as the main app closes. Your code 'works' as is; simply add a pause to init.py to see this behavior:
...
t.start()
import time
time.sleep(1)
This is discussed in more detail at http://pymotw.com/2/threading/#daemon-vs-non-daemon-threads.
The simple way to fix this is to join the thread.
import mydaemon, threading

print 'start'
t = threading.Thread(target=mydaemon.start, args=())
t.daemon = True  # error here
t.start()
t.join()
I'm using Ubuntu 12.04 Server x64, Python 2.7.3, futures==2.1.5, eventlet==0.14.0
Did anybody hit the same problem?
import eventlet
import futures
import random

# eventlet.monkey_patch()  # Uncomment this line and it will hang!

def test():
    if random.choice((True, False)):
        raise Exception()
    print "OK this time"

def done(f):
    print "done callback!"

def main():
    with futures.ProcessPoolExecutor() as executor:
        fu = []
        for i in xrange(6):
            f = executor.submit(test)
            f.add_done_callback(done)
            fu.append(f)
        futures.wait(fu, return_when=futures.ALL_COMPLETED)

if __name__ == '__main__':
    main()
This code, if you uncomment the line, will hang. I can only stop it by pressing Ctrl+C. In this case the following KeyboardInterrupt traceback is printed: https://gist.github.com/max-lobur/8526120#file-traceback
This works fine with ThreadPoolExecutor.
Any feedback is appreciated
I think the KeyboardInterrupt reaches the child process because under Ubuntu, and most Linux systems, new processes are created with fork: the child starts as a copy of the current process, with the same stdin, stdout, and stderr. So the KeyboardInterrupt can reach the child process. Maybe someone who knows more can add their thoughts.
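That matches how SIGINT normally works on Linux: Ctrl+C goes to the whole foreground process group, which includes forked children. A minimal sketch (plain multiprocessing, not the futures/eventlet code above) of a child that opts out of SIGINT so only the parent handles Ctrl+C:

import multiprocessing
import signal
import time

def worker():
    signal.signal(signal.SIGINT, signal.SIG_IGN)  # ignore Ctrl+C in the child
    time.sleep(30)  # stand-in for real work

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    try:
        p.join()
    except KeyboardInterrupt:
        p.terminate()  # the parent decides how to stop the child
        p.join()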
I have a wrapper script that runs many other test scripts. Inside one of the test scripts I start a subprocess using the Popen class. The problem is that I don't know how to terminate that child process, return to the main process, and continue with the next test script. My wrapper stops at the test script that spawns the child process and never continues. Can you give a hint? Thanks.
P.S. kill(), terminate(), or any other function I considered useful doesn't put me back in the main process. I want to terminate the subprocess and continue with the main process.
Keep a reference to the child process in the main script, and call terminate() on that reference:
from subprocess import Popen

class TestApp(object):
    app = None

    def start(self):
        self.app = Popen(['your command'])

    def stop(self):
        self.app.terminate()
In the main script:
app1 = TestApp()
app1.start()
app2 = TestApp()
app2.start()
# do something here
app1.stop()
app2.stop()
# do more here
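One caveat worth adding: terminate() only asks the child to exit (SIGTERM on POSIX); until the parent calls wait(), a finished child can linger as a zombie. A small variant of the stop() method above that also reaps the child (wait() is a standard Popen method):

def stop(self):
    self.app.terminate()  # send SIGTERM to the child
    self.app.wait()       # reap it so no zombie is left behind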