Python end parent process from thread

My aim is to create a new process which, in a loop, takes a number as input from the user and displays its square. If the user doesn't enter a number for 10 seconds, that process should end (not the main process). I'm using threading.Timer with multiprocessing.
Tried Code
from threading import Timer
import multiprocessing as mp
import time
import os, sys  # needed for os.fdopen/os.dup and sys.stdin below

def foo(stdin):
    while True:
        t = Timer(10, exit, ())
        t.start()
        print 'Enter a No. in 10 Secs: ',
        num = int(stdin.readline())
        t.cancel()
        print 'Square of', num, 'is', num*num

if __name__ == '__main__':
    newstdin = os.fdopen(os.dup(sys.stdin.fileno()))
    proc1 = mp.Process(target = foo, args = (newstdin, ))
    proc1.start()
    proc1.join()
    print "Exit"
    newstdin.close()
But it's not working. Instead of exit I tried sys.exit; I also made proc1 global and tried proc1.terminate, but still no solution.

Replacing exit with os._exit makes it work as expected:

t = Timer(10, os._exit, [0])

exit and sys.exit only raise SystemExit in the Timer's own thread, which ends that thread but not the process; os._exit terminates the whole process immediately, without cleanup. os._exit takes exactly one argument, hence the 0 in a list. (In threading.Timer, arguments to the function are passed as a list.)
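For reference, a minimal self-contained sketch of the fixed version, rewritten in Python 3 syntax and assuming a Unix-style fork start method so the duplicated stdin is inherited by the child process:

import multiprocessing as mp
import os
import sys
from threading import Timer

def foo(stdin):
    while True:
        t = Timer(10, os._exit, [0])  # hard-exit this process after 10 s of silence
        t.start()
        print('Enter a No. in 10 Secs: ', end='', flush=True)
        num = int(stdin.readline())
        t.cancel()                    # input arrived in time; disarm the timer
        print('Square of', num, 'is', num * num)

if __name__ == '__main__':
    newstdin = os.fdopen(os.dup(sys.stdin.fileno()))  # duplicate stdin for the child
    proc1 = mp.Process(target=foo, args=(newstdin,))
    proc1.start()
    proc1.join()
    print('Exit')
    newstdin.close()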

Related

How do you share data between functions in threads with python

I have a counting function that I would like to start and restart while getting the live variables to use in another function. My problem is that while using threading, it seems like even global variables don't work to pass values around. What I want the code to do is have a counter that is triggered as needed, or maybe free-running; I'm not sure yet. I want to be able to reset the counter and get its value.
Right now the counter starts and runs fine, but the print_stuff function keeps telling me that there is no attribute countval.
The count thread gets started at startup, but I don't necessarily want it to start immediately; I would like to trigger it as needed. But I can't call count_thread.start() twice or it will throw an error, so I'm starting the thread at startup and then calling the function again to restart it as needed. Maybe there is a more elegant way of doing that?
import threading
import time

def count():
    global countval
    for countval in range(3):
        print('looping')
        time.sleep(1)

def print_stuff():
    global countval
    e = input("press enter to start")
    count()
    while True:
        if countval == 3:
            print("time out")

count_thread = threading.Thread(target=count)
print_thread = threading.Thread(target=print_stuff)
print_thread.start()
count_thread.start()
print_stuff is getting to the if statement before the count function is able to create the variable. Just start them in the opposite order. Either that, or create a global countval = 0 to start things off. (See the sketch below.)
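A minimal sketch of that second option (everything else as in the question): initialize the global before any thread reads it, so print_stuff never sees a missing attribute.

import threading
import time

countval = 0  # define the global up front; both threads can now read it safely

def count():
    global countval
    for countval in range(3):
        print('looping')
        time.sleep(1)

def print_stuff():
    input("press enter to start")
    count()
    print("final countval:", countval)  # 2, the last value range(3) produced

threading.Thread(target=print_stuff).start()

Note that with range(3) the counter's last value is 2, so a check for countval == 3 will never fire.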
To solve the no-attribute problem you can use a Queue, and if you want to stop your counting thread you can set a global flag, or pass in a function (a lambda, an inner function, etc.) that checks it.
Here is one way to do that:
import threading
import time
from queue import Queue
from typing import Callable

def count(q, stop_counting):
    # type: (Queue, Callable[[], bool]) -> None
    for countval in range(3):
        if stop_counting():
            print('stopped')
            break
        print(f'looping {countval}')
        q.put(countval)
        time.sleep(1)

def print_stuff(q):
    # type: (Queue) -> None
    while True:
        countval = q.get()
        print(f'countval gotten: {countval}')
        if countval == 3:
            print("time out")

def main():
    flag_stop_counting = False
    q = Queue()

    def stop_counting():
        return flag_stop_counting

    count_thread = threading.Thread(target=count, args=(q, stop_counting,))
    print_thread = threading.Thread(target=print_stuff, args=(q,))
    print_thread.start()
    count_thread.start()

    time.sleep(1.25)
    flag_stop_counting = True

if __name__ == '__main__':
    main()
In this code:
- the counter checks whether it should stop counting
- the counter puts each value it produces into q
- print_stuff gets the value from q (note: q.get() blocks until the counter puts a value into q)
To check that the program works, we change the value of flag_stop_counting after 1.25 seconds.
But if you want your counter to run only a single for loop, it's probably better not to make it a thread, and to just run it whenever you want.
Hope it was helpful.

Interrupting Infinite Loop in Threading

I am learning about threading using Python 3.8.2. I have one function with an infinite loop, and then two other functions that use the threading.Timer class.
def x():
    while True:
        dosomething()

def f1():
    dosomething2()
    threading.Timer(60,f1).start()

def f2():
    dosomething3()
    threading.Timer(100,f2).start()
Then I start three threads:
t1 = threading.Thread(target=x)
t2 = threading.Thread(target=f1)
t3 = threading.Thread(target=f2)
When it comes time to execute f1 or f2, I don't want x() to be executing at the same time (they might be using the same resource); I want x() to pause, let f1 or f2 finish, and then resume its infinite loop. How can I do this?
I've looked at join(), but it seems to me it would wait forever for f1() and f2(), because each one creates a new thread every time and so never terminates.
Here's a possible solution. I added the functions dosomething(), dosomething2(), and dosomething3() to the code to have a working example. I've also changed the timers on the threads to 6 and 10 seconds instead of 60 and 100, so we don't have to wait that long to see them work.
dosomething()
  - prints 'dosomething is running' every second if no other function is running
dosomething2()
  - sets dosomething2.status = 'run'
  - prints 'dosomething2 is running 1st second'
  - waits one second
  - prints 'dosomething2 is running 2nd second'
  - sets dosomething2.status = 'sleep'
dosomething3()
  - sets dosomething3.status = 'run'
  - prints 'dosomething3 is running 1st second'
  - waits one second
  - prints 'dosomething3 is running 2nd second'
  - waits one second
  - prints 'dosomething3 is running 3rd second'
  - sets dosomething3.status = 'sleep'
The first and last lines in dosomething2() and dosomething3() set flags that let our x() function know to run only when both functions are in the 'sleep' state.
You could use global variables instead of dosomething2.status and dosomething3.status but some people recommend not using them.
Code
import time
import threading

def dosomething():
    print('dosomething is running')
    time.sleep(1)

def dosomething2():
    dosomething2.status = 'run'
    print('\tdosomething2 is running 1st second')
    time.sleep(1)
    print('\tdosomething2 is running 2nd second')
    dosomething2.status = 'sleep'

def dosomething3():
    dosomething3.status = 'run'
    print('\tdosomething3 is running 1st second')
    time.sleep(1)
    print('\tdosomething3 is running 2nd second')
    time.sleep(1)
    print('\tdosomething3 is running 3rd second')
    dosomething3.status = 'sleep'

dosomething2.status = ''
dosomething3.status = ''

def x():
    while True:
        if dosomething2.status == 'sleep' and dosomething3.status == 'sleep':
            dosomething()

def f1():
    dosomething2()
    threading.Timer(6,f1).start()

def f2():
    dosomething3()
    threading.Timer(10,f2).start()

t1 = threading.Thread(target=x)
t2 = threading.Thread(target=f1)
t3 = threading.Thread(target=f2)
t1.start()
t2.start()
t3.start()
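As an aside (not part of the answer above), a common alternative is a threading.Lock, so that x() and the timer callbacks genuinely exclude each other instead of busy-polling status strings. Here is a minimal sketch with placeholder bodies, shown for f1 only:

import threading
import time

resource_lock = threading.Lock()  # guards the shared resource

def dosomething():
    print('dosomething is running')
    time.sleep(1)

def dosomething2():
    print('\tdosomething2 is running')
    time.sleep(2)

def x():
    while True:
        with resource_lock:   # pauses here whenever f1 holds the lock
            dosomething()

def f1():
    with resource_lock:       # x() cannot enter dosomething() while we hold it
        dosomething2()
    t = threading.Timer(6, f1)
    t.daemon = True           # let the program exit when main finishes
    t.start()

threading.Thread(target=x, daemon=True).start()
threading.Thread(target=f1, daemon=True).start()
time.sleep(20)  # let the demo run for a while, then exit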

How to timeout a long running program using rxpython?

Say I have a long-running Python function that looks something like this:
import random
import time
from rx import Observable

def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    time.sleep(y)
    print('end')
    return x
I want to be able to set a timeout of 1000ms.
So I'm doing something like this: creating an observable and mapping it through the above intense calculation.
a = Observable.repeat(1).map(lambda x: intns(x))
Now, for each value emitted, if it takes more than 1000 ms I want to end the observable as soon as the 1000 ms are reached, using on_error or on_completed:
a.timeout(1000).subscribe(lambda x: print(x), lambda x: print(x))
The above statement does time out and calls on_error, but it goes on to finish the intense calculation and only then returns to the next statements. Is there a better way of doing this?
The last statement prints the following
8       # no of seconds to sleep
begin   # begins sleeping, trying to emit the first value
Timeout # operation times out, and calls on_error
end     # thread waits till the function ends
The idea is that if a particular call times out, I want to be able to continue with my program and ignore the result.
I was wondering: if the intns function were run on a separate thread, I guess the main thread would continue execution after the timeout, but I still want to stop computing the intns function on that thread, or kill it somehow.
The following is a class that can be used as a context manager via with timeout():. If the block under it runs for longer than the specified time, a TimeoutError is raised. (Note that signal.SIGALRM is Unix-only and must be set from the main thread.)
import signal

class timeout:
    # Default value is 1 second (1000ms)
    def __init__(self, seconds=1, error_message='Timeout'):
        self.seconds = seconds
        self.error_message = error_message
    def handle_timeout(self, signum, frame):
        raise TimeoutError(self.error_message)
    def __enter__(self):
        signal.signal(signal.SIGALRM, self.handle_timeout)
        signal.alarm(self.seconds)
    def __exit__(self, type, value, traceback):
        signal.alarm(0)

# example usage
with timeout():
    # infinite while loop so timeout is reached
    while True:
        pass
If I'm understanding your function, here's what your implementation would look like:
def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    with timeout():
        time.sleep(y)
        print('end')
        return x
You can do this partially using threading.
Although there's no specific way to kill a thread in Python, you can implement a method to flag the thread to end.
This won't work if the thread is waiting on other resources (in your case, you simulated long-running code with a random wait).
See also
Is there any way to kill a Thread in Python?
This way it works:
import random
import time
import threading
import os

def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    time.sleep(y)
    print('end')
    return x

thr = threading.Thread(target=intns, args=(10,), kwargs={})
thr.start()
st = time.perf_counter()  # time.clock() was removed in Python 3.8
while thr.is_alive():
    if time.perf_counter() - st > 9:
        os._exit(0)
Here's an example with a timeout flag:

import random
import time
import threading

_timeout = 0

def intns(loops=1):
    print('begin')
    for i in range(loops):
        y = random.randint(5,10)
        time.sleep(y)
        if _timeout == 1:
            print('timedout end')
            return
        print('keep processing')
    return

# this will timeout
timeout_seconds = 10
loops = 10
# this will complete
#timeout_seconds = 30.0
#loops = 1

thr = threading.Thread(target=intns, args=(loops,), kwargs={})
thr.start()
st = time.perf_counter()  # time.clock() was removed in Python 3.8
while thr.is_alive():
    if time.perf_counter() - st > timeout_seconds:
        _timeout = 1
thr.join()
if _timeout == 0:
    print("completed")
else:
    print("timed-out")
You can also poll in a while loop, using time.sleep() between checks of a clock such as time.perf_counter().
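For completeness, a minimal sketch of the same idea with the standard library's concurrent.futures (my suggestion, not from the answers above): future.result(timeout=...) lets the main thread move on after the timeout, though the worker thread still runs to completion in the background, since Python threads cannot be killed.

import concurrent.futures
import time

def intns(x):
    time.sleep(5)  # stand-in for the intense calculation
    return x

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(intns, 1)
    try:
        print(future.result(timeout=1))   # wait at most 1 second for a result
    except concurrent.futures.TimeoutError:
        print('Timed out, continuing without the result')
    # note: leaving the with-block still waits for the worker to finish,
    # because the executor's shutdown joins its threads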

Python 3.4 Multiprocessing, code does not progress past loop (containing Queue)

I have written a bit of code to test out multiprocessing before I implement it in other code. The main code sends a number to the other program (running on another core); that program increments the number and returns it, and the main code increments it and sends it again. All works well, but the loop in the main program, which is of the form while time < timeout, is never exited. It seems simple enough to me, but it never exits the loop. I wondered if it was hanging when no value is returned (.get()), but I have tried 'try:', and making the timeout very short and the loop huge. Any suggestions as to what is going on?
The code is running on Windows7, and will eventually run on a Raspberry Pi 2.
Main program
import multiprocessing as mp
import multi_processing_slave as MPS
from time import perf_counter as TimeIs

if __name__ == "__main__":
    print("Hello World")
    mp.set_start_method("spawn")
    q = mp.Queue()
    r = mp.Queue()
    p = mp.Process(target = MPS.foo, args = (q, r))
    p.start()
    ThisVar = 0
    Timer = TimeIs() + 2
    while TimeIs() < Timer - 1: pass
    print("time remaining is", Timer - TimeIs())
    while TimeIs() < Timer:
        #try:
        r.put(ThisVar)
        #except: pass
        #try:
        ThisVar = int(q.get()) + 1
        #except:
        #r.put(ThisVar)
        print("master ThisVar", ThisVar, "and time remaining is", round(Timer - TimeIs(), 4))
    #p.join()
    #p.close()
    p.terminate()
    print("at end, ThisVar is", ThisVar, "and", Timer - TimeIs(), "seconds remaining")
Slave program, named multi_processing_slave.py:
def foo(q, r):
    for i in range(100):
        ThisVar2 = r.get() + 1
        q.put(ThisVar2)
        print("foo value", ThisVar2)
    print("foo has finished")
    return
After the slave process exits, your master process does one more r.put() and then keeps waiting for q.get() to return. You can solve the problem by providing a timeout value (in seconds) to q.get():
ThisVar = int(q.get(timeout=1)) + 1
Note that timing out on q.get() will raise the Empty exception.
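For example, a minimal sketch of catching that timeout (the names are those of the question's main loop):

from queue import Empty  # multiprocessing.Queue raises queue.Empty on timeout

while TimeIs() < Timer:
    r.put(ThisVar)
    try:
        ThisVar = int(q.get(timeout=1)) + 1
    except Empty:
        break  # the slave has finished; stop looping instead of blocking forever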
How you could find this problem yourself:
Add debugging print statements that show the progress and state of your program.
Learn to use a debugger.

python multithreading wait till all threads finished

This may have been asked in a similar context, but I was unable to find an answer after about 20 minutes of searching, so I will ask.
I have written a Python script (let's say scriptA.py) and a script (let's say scriptB.py).
In scriptB I want to call scriptA multiple times with different arguments; each run takes about an hour (it's a huge script, does lots of stuff, don't worry about it), and I want to be able to run scriptA with all the different arguments simultaneously, but I need to wait till ALL of them are done before continuing. My code:
import subprocess
#setup
do_setup()
#run scriptA
subprocess.call(scriptA + argumentsA)
subprocess.call(scriptA + argumentsB)
subprocess.call(scriptA + argumentsC)
#finish
do_finish()
I want to run all the subprocess.call() calls at the same time, and then wait till they are all done; how should I do this?
I tried to use threading like the example here:
from threading import Thread
import subprocess

def call_script(args):
    subprocess.call(args)

#run scriptA
t1 = Thread(target=call_script, args=(scriptA + argumentsA))
t2 = Thread(target=call_script, args=(scriptA + argumentsB))
t3 = Thread(target=call_script, args=(scriptA + argumentsC))
t1.start()
t2.start()
t3.start()
But I do not think this is right.
How do I know they have all finished running before going to my do_finish()?
Put the threads in a list and then use the join method:
threads = []

t = Thread(...)
threads.append(t)

...repeat as often as necessary...

# Start all threads
for x in threads:
    x.start()

# Wait for all of them to finish
for x in threads:
    x.join()
You need to call the join method of each Thread object at the end of the script:
t1 = Thread(target=call_script, args=(scriptA + argumentsA))
t2 = Thread(target=call_script, args=(scriptA + argumentsB))
t3 = Thread(target=call_script, args=(scriptA + argumentsC))
t1.start()
t2.start()
t3.start()
t1.join()
t2.join()
t3.join()
Thus the main thread will wait till t1, t2 and t3 finish execution.
Since Python 3.2 there is a newer approach to reach the same result, which I personally prefer to the traditional thread creation/start/join: the concurrent.futures package: https://docs.python.org/3/library/concurrent.futures.html
Using a ThreadPoolExecutor the code would be:
from concurrent.futures import ThreadPoolExecutor
import time

def call_script(ordinal, arg):
    print('Thread', ordinal, 'argument:', arg)
    time.sleep(2)
    print('Thread', ordinal, 'Finished')

args = ['argumentsA', 'argumentsB', 'argumentsC']

with ThreadPoolExecutor(max_workers=2) as executor:
    ordinal = 1
    for arg in args:
        executor.submit(call_script, ordinal, arg)
        ordinal += 1
print('All tasks have been finished')
The output of the previous code is something like:
Thread 1 argument: argumentsA
Thread 2 argument: argumentsB
Thread 1 Finished
Thread 2 Finished
Thread 3 argument: argumentsC
Thread 3 Finished
All tasks have been finished
One of the advantages is that you can control the throughput by setting the maximum number of concurrent workers.
To use multiprocessing instead, you can use ProcessPoolExecutor.
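A minimal sketch of that swap, reusing call_script and args from the example above; the __main__ guard is needed so worker processes can import the module safely:

from concurrent.futures import ProcessPoolExecutor

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as executor:
        for ordinal, arg in enumerate(args, start=1):
            executor.submit(call_script, ordinal, arg)  # runs in a separate process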
I prefer using a list comprehension based on an input list:

inputs = [scriptA + argumentsA, scriptA + argumentsB, ...]
threads = [Thread(target=call_script, args=(i,)) for i in inputs]
[t.start() for t in threads]
[t.join() for t in threads]
You can have a class like the one below, to which you can add any number of functions or console scripts you want to execute in parallel, then start the execution and wait for all jobs to complete.
from multiprocessing import Process

class ProcessParallel(object):
    """
    To process the functions in parallel
    """
    def __init__(self, *jobs):
        """
        """
        self.jobs = jobs
        self.processes = []

    def fork_processes(self):
        """
        Creates the process objects for the given function delegates
        """
        for job in self.jobs:
            proc = Process(target=job)
            self.processes.append(proc)

    def start_all(self):
        """
        Starts all the function processes together.
        """
        for proc in self.processes:
            proc.start()

    def join_all(self):
        """
        Waits until all the functions have executed.
        """
        for proc in self.processes:
            proc.join()

def two_sum(a=2, b=2):
    return a + b

def multiply(a=2, b=2):
    return a * b

#How to run:
if __name__ == '__main__':
    #note: two_sum, multiply can be replaced with any python console scripts
    #you want to run in parallel..
    procs = ProcessParallel(two_sum, multiply)
    #Add all the processes to the list
    procs.fork_processes()
    #start process execution
    procs.start_all()
    #wait until all the processes have finished
    procs.join_all()
I just came across the same problem, where I needed to wait for all the threads created using a for loop. I tried out the following piece of code. It may not be the perfect solution, but I thought it would be a simple one to test:
for t in threading.enumerate():
    try:
        t.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            continue
        else:
            raise
From the threading module documentation
There is a “main thread” object; this corresponds to the initial
thread of control in the Python program. It is not a daemon thread.
There is the possibility that “dummy thread objects” are created.
These are thread objects corresponding to “alien threads”, which are
threads of control started outside the threading module, such as
directly from C code. Dummy thread objects have limited functionality;
they are always considered alive and daemonic, and cannot be join()ed.
They are never deleted, since it is impossible to detect the
termination of alien threads.
So, to catch those two cases when you are not interested in keeping a list of the threads you create:
import threading as thrd

def alter_data(data, index):
    data[index] *= 2

data = [0, 2, 6, 20]

for i, value in enumerate(data):
    thrd.Thread(target=alter_data, args=[data, i]).start()

for thread in thrd.enumerate():
    if thread.daemon:
        continue
    try:
        thread.join()
    except RuntimeError as err:
        if 'cannot join current thread' in err.args[0]:
            # catches the main thread
            continue
        else:
            raise
Whereupon:
>>> print(data)
[0, 4, 12, 40]
Maybe something like:

for t in threading.enumerate():
    if t.daemon:
        t.join()
Using only join can result in a false-positive interaction with the thread. As said in the docs:
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in
seconds (or fractions thereof). As join() always returns None, you
must call isAlive() after join() to decide whether a timeout happened
– if the thread is still alive, the join() call timed out.
and an illustrative piece of code:
threads = []
for name in some_data:
    new = threading.Thread(
        target=self.some_func,
        args=(name,)
    )
    threads.append(new)
    new.start()

over_threads = iter(threads)
curr_th = next(over_threads)
while True:
    curr_th.join()
    if curr_th.is_alive():
        continue
    try:
        curr_th = next(over_threads)
    except StopIteration:
        break
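A shorter sketch of that same docs guidance, with a stand-in worker function of my own: pass a timeout to join() and then check is_alive() to detect whether the timeout expired.

import threading
import time

def worker():
    time.sleep(10)  # stand-in for long-running work

t = threading.Thread(target=worker)
t.start()
t.join(timeout=2.0)      # wait at most 2 seconds
if t.is_alive():         # join() returned because of the timeout
    print('join timed out; the thread is still running')
else:
    print('thread finished within the timeout')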
