Check if current thread is main thread, in Python

This has been answered for Android, Objective C and C++ before, but apparently not for Python. How do I reliably determine whether the current thread is the main thread? I can think of a few approaches, none of which really satisfy me, considering it could be as easy as comparing to threading.MainThread if it existed.
Check the thread name
The main thread is instantiated in threading.py like this:
Thread.__init__(self, name="MainThread")
so one could do
if threading.current_thread().name == 'MainThread'
but is this name fixed? Other code I have seen checks whether MainThread is contained anywhere in the thread's name.
Store the starting thread
I could store a reference to the starting thread the moment the program starts up, i.e. while there are no other threads yet. This would be absolutely reliable, but isn't it way too cumbersome for such a simple query?
Is there a more concise way of doing this?

The problem with threading.current_thread().name == 'MainThread' is that one can always do:
threading.current_thread().name = 'MyName'
assert threading.current_thread().name == 'MainThread' # will fail
Perhaps the following is more solid:
threading.current_thread().__class__.__name__ == '_MainThread'
Having said that, one may still cunningly do:
threading.current_thread().__class__.__name__ = 'Grrrr'
assert threading.current_thread().__class__.__name__ == '_MainThread' # will fail
But this option still seems better; "after all, we're all consenting adults here."
UPDATE:
Python 3.4 introduced threading.main_thread() which is much better than the above:
assert threading.current_thread() is threading.main_thread()
UPDATE 2:
For Python < 3.4, perhaps the best option is:
isinstance(threading.current_thread(), threading._MainThread)
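For completeness, the two can be combined into a small version-portable helper; a sketch (the is_main_thread name is mine, not stdlib):
import threading

def is_main_thread():
    # Python 3.4+: compare against threading.main_thread() directly.
    if hasattr(threading, 'main_thread'):
        return threading.current_thread() is threading.main_thread()
    # Older versions: fall back to the private _MainThread class.
    return isinstance(threading.current_thread(), threading._MainThread)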

The answers here are old and/or bad, so here's a current solution:
if threading.current_thread() is threading.main_thread():
    ...
This method has been available since Python 3.4.

If, like me, accessing protected attributes gives you the heebie-jeebies, you may want an alternative to using threading._MainThread, as suggested. In that case, you may exploit the fact that only the main thread can handle signals, so the following can do the job:
import signal

def is_main_thread():
    try:
        # Back up the current signal handler
        back_up = signal.signal(signal.SIGINT, signal.SIG_DFL)
    except ValueError:
        # Only the main thread can set signal handlers
        return False
    # Restore the signal handler
    signal.signal(signal.SIGINT, back_up)
    return True
Updated to address a potential issue pointed out by @user4815162342.

Related

Equivalent of thread.interrupt_main() in Python 3

In Python 2 there is a function thread.interrupt_main(), which raises a KeyboardInterrupt exception in the main thread when called from a subthread.
This is also available through _thread.interrupt_main() in Python 3, but it's a low-level "support module", mostly for use within other standard modules.
What is the modern way of doing this in Python 3, presumably through the threading module, if there is one?
Well, raising an exception manually is kinda low-level, so if you think you have to do that, just use _thread.interrupt_main(), since that's the equivalent you asked for (the threading module itself doesn't provide it).
It could be that there is a more elegant way to achieve your ultimate goal, though. Maybe setting and checking a flag would already be enough, or using a threading.Event like @RFmyD already suggested, or using message passing over a queue.Queue. It depends on your specific setup.
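For reference, a minimal sketch of the _thread.interrupt_main() route (the watchdog and its two-second delay are purely illustrative):
import _thread
import threading
import time

def watchdog():
    time.sleep(2)             # pretend we detected a fatal condition
    _thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=watchdog, daemon=True).start()
try:
    while True:
        time.sleep(0.1)       # the main thread's normal work
except KeyboardInterrupt:
    print("interrupted by the watchdog thread")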
If you need a way for a thread to stop execution of the whole program, this is how I did it with a threading.Event:
import logging
import sys
import threading
from time import sleep

def start():
    """
    This runs in the main thread and starts a sub thread.
    """
    stop_event = threading.Event()
    check_stop_thread = threading.Thread(
        target=check_stop_signal, args=(stop_event,), daemon=True
    )
    check_stop_thread.start()
    # If check_stop_thread sets the stop event, sys.exit() is executed here in the main thread.
    # Since the sub thread is a daemon, it will be terminated as well.
    stop_event.wait()
    logging.debug("Threading stop event set, calling sys.exit()...")
    sys.exit()

def check_stop_signal(stop_event):
    """
    Checks continuously (every 0.1 s) whether a "stop" flag has been set in the database.
    Needs to run in its own thread.
    """
    while True:
        if io.check_stop():  # io here is the author's database-access module
            logging.info("Program was aborted by user.")
            logging.debug("Setting threading stop event...")
            stop_event.set()
            break
        sleep(0.1)
You might want to look into the threading.Event class.

How can I reproduce the race conditions in this python code reliably?

Context
I recently posted a timer class for review on Code Review. I'd had a gut feeling there were concurrency bugs as I'd once seen 1 unit test fail, but was unable to reproduce the failure. Hence my post to code review.
I got some great feedback highlighting various race conditions in the code. (I thought) I understood the problem and the solution, but before making any fixes, I wanted to expose the bugs with a unit test. When I tried, I realised it was difficult. Various stack exchange answers suggested I'd have to control the execution of threads to expose the bug(s) and any contrived timing would not necessarily be portable to a different machine. This seemed like a lot of accidental complexity beyond the problem I was trying to solve.
Instead I tried using the best static analysis (SA) tool for Python, PyLint, to see if it'd pick out any of the bugs, but it couldn't. Why could a human find the bugs through code review (essentially SA), but an SA tool could not?
Afraid of trying to get Valgrind working with Python (which sounded like yak-shaving), I decided to have a bash at fixing the bugs without reproducing them first. Now I'm in a pickle.
Here's the code now.
from threading import Timer, Lock
from time import time

class NotRunningError(Exception): pass
class AlreadyRunningError(Exception): pass

class KitchenTimer(object):
    '''
    Loosely models a clockwork kitchen timer with the following differences:
        You can start the timer with arbitrary duration (e.g. 1.2 seconds).
        The timer calls back a given function when time's up.
        Querying the time remaining has 0.1 second accuracy.
    '''

    PRECISION_NUM_DECIMAL_PLACES = 1
    RUNNING = "RUNNING"
    STOPPED = "STOPPED"
    TIMEUP = "TIMEUP"

    def __init__(self):
        self._stateLock = Lock()
        with self._stateLock:
            self._state = self.STOPPED
            self._timeRemaining = 0

    def start(self, duration=1, whenTimeup=None):
        '''
        Starts the timer to count down from the given duration and call whenTimeup when time's up.
        '''
        with self._stateLock:
            if self.isRunning():
                raise AlreadyRunningError
            else:
                self._state = self.RUNNING
                self.duration = duration
                self._userWhenTimeup = whenTimeup
                self._startTime = time()
                self._timer = Timer(duration, self._whenTimeup)
                self._timer.start()

    def stop(self):
        '''
        Stops the timer, preventing whenTimeup callback.
        '''
        with self._stateLock:
            if self.isRunning():
                self._timer.cancel()
                self._state = self.STOPPED
                self._timeRemaining = self.duration - self._elapsedTime()
            else:
                raise NotRunningError()

    def isRunning(self):
        return self._state == self.RUNNING

    def isStopped(self):
        return self._state == self.STOPPED

    def isTimeup(self):
        return self._state == self.TIMEUP

    @property
    def timeRemaining(self):
        if self.isRunning():
            self._timeRemaining = self.duration - self._elapsedTime()
        return round(self._timeRemaining, self.PRECISION_NUM_DECIMAL_PLACES)

    def _whenTimeup(self):
        with self._stateLock:
            self._state = self.TIMEUP
            self._timeRemaining = 0
            if callable(self._userWhenTimeup):
                self._userWhenTimeup()

    def _elapsedTime(self):
        return time() - self._startTime
Question
In the context of this code example, how can I expose the race conditions, fix them, and prove they're fixed?
Extra points
Extra points for a testing framework suitable for other implementations and problems, rather than one specific to this code.
Takeaway
My takeaway is that the technical solution for reproducing the identified race conditions is to control the synchronism of two threads, ensuring they execute in the order that will expose the bug. The important point here is that they are already identified race conditions. The best way I've found to identify race conditions is to put your code up for code review and encourage more expert people to analyse it.
Traditionally, forcing race conditions in multithreaded code is done with semaphores, so you can force a thread to wait until another thread has achieved some edge condition before continuing.
For example, your object has some code to check that start is not called if the object is already running. You could force this condition to make sure it behaves as expected by doing something like this:
starting a KitchenTimer
having the timer block on a semaphore while in the running state
starting the same timer in another thread
catching AlreadyRunningError
To do some of this you may need to extend the KitchenTimer class. Formal unit tests will often use mock objects which are defined to block at critical times. Mock objects are a bigger topic than I can address here, but googling "python mock object" will turn up a lot of documentation and many implementations to choose from.
Here's a way that you could force your code to throw AlreadyRunningError:
import threading
class TestKitchenTimer(KitchenTimer):
_runningLock = threading.Condition()
def start(self, duration=1, whenTimeUp=None):
KitchenTimer.start(self, duration, whenTimeUp)
with self._runningLock:
print "waiting on _runningLock"
self._runningLock.wait()
def resume(self):
with self._runningLock:
self._runningLock.notify()
timer = TestKitchenTimer()
# Start the timer in a subthread. This thread will block as soon as
# it is started.
thread_1 = threading.Thread(target = timer.start, args = (10, None))
thread_1.start()
# Attempt to start the timer in a second thread, causing it to throw
# an AlreadyRunningError.
try:
thread_2 = threading.Thread(target = timer.start, args = (10, None))
thread_2.start()
except AlreadyRunningError:
print "AlreadyRunningError"
timer.resume()
timer.stop()
Reading through the code, identify some of the boundary conditions you want to test, then think about where you would need to pause the timer to force that condition to arise, and add Conditions, Semaphores, Events, etc. to make it happen. e.g. what happens if, just as the timer runs the whenTimeUp callback, another thread tries to stop it? You can force that condition by making the timer wait as soon as it's entered _whenTimeUp:
import threading
from time import sleep

class TestKitchenTimer(KitchenTimer):
    _runningLock = threading.Condition()

    def _whenTimeup(self):
        with self._runningLock:
            self._runningLock.wait()
        KitchenTimer._whenTimeup(self)

    def resume(self):
        with self._runningLock:
            self._runningLock.notify()

def TimeupCallback():
    print "TimeupCallback was called"

timer = TestKitchenTimer()

# The timer thread will block when the timer expires, but before the callback
# is invoked.
thread_1 = threading.Thread(target=timer.start, args=(1, TimeupCallback))
thread_1.start()
sleep(2)

# The timer is now blocked. In the parent thread, we stop it.
timer.stop()
print "timer is stopped: %r" % timer.isStopped()

# Now allow the countdown thread to resume.
timer.resume()
Subclassing the class you want to test isn't an awesome way to instrument it for testing: you'll have to override basically all of the methods in order to test race conditions in each one, and at that point there's a good argument to be made that you're not really testing the original code. Instead, you may find it cleaner to put the semaphores right in the KitchenTimer object but initialized to None by default, and have your methods check if testRunningLock is not None: before acquiring or waiting on the lock. Then you can force races on the actual code that you're submitting.
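A minimal sketch of that optional-hook idea (HookedTimer and testRunningLock are illustrative names, not part of the original class):
from threading import Lock

class HookedTimer(object):
    def __init__(self, testRunningLock=None):
        self._stateLock = Lock()
        self._testRunningLock = testRunningLock  # None in production, a Condition in tests

    def _whenTimeup(self):
        if self._testRunningLock is not None:
            with self._testRunningLock:
                self._testRunningLock.wait()  # the test dictates the interleaving
        with self._stateLock:
            pass  # the real state change from KitchenTimer goes here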
Some reading on Python mock frameworks that may be helpful. In fact, I'm not sure that mocks would be helpful in testing this code: it's almost entirely self-contained and doesn't rely on many external objects. But mock tutorials sometimes touch on issues like these. I haven't used any of these, but the documentation on them looks like a good place to get started:
Getting Started with Mock
Using Fudge
Python Mock Testing Techniques and Tools
The most common solution to testing thread (un)safe code is to start a lot of threads and hope for the best. The problem I, and I can imagine others, have with this is that it relies on chance and it makes tests 'heavy'.
As I ran into this a while ago, I wanted to go for precision instead of brute force. The result is a piece of test code that causes race conditions by letting the threads race neck and neck.
Sample racy code
spam = []

def set_spam():
    spam[:] = foo()
    use(spam)
If set_spam is called from several threads, a race condition exists between modification and use of spam. Let's try to reproduce it consistently.
How to cause race-conditions
import threading

class TriggeredThread(threading.Thread):
    def __init__(self, sequence=None, *args, **kwargs):
        self.sequence = sequence
        self.lock = threading.Condition()
        self.event = threading.Event()
        threading.Thread.__init__(self, *args, **kwargs)

    def __enter__(self):
        self.lock.acquire()
        while not self.event.is_set():
            self.lock.wait()
        self.event.clear()

    def __exit__(self, *args):
        self.lock.release()
        if self.sequence:
            next(self.sequence).trigger()

    def trigger(self):
        with self.lock:
            self.event.set()
            self.lock.notify()
Then to demonstrate the use of this thread:
import itertools

spam = []     # Use a list to share values across threads.
results = []  # Register the results.

def set_spam():
    thread = threading.current_thread()
    with thread:  # Acquires the lock.
        # Set 'spam' to the thread name.
        spam[:] = [thread.name]
    # The thread 'releases' the lock upon exiting the context.
    # The next thread is triggered and this thread waits for a trigger.
    with thread:
        # Since each thread overwrites the content of the 'spam'
        # list, this should only result in True for the last thread.
        results.append(spam == [thread.name])

threads = [
    TriggeredThread(name='a', target=set_spam),
    TriggeredThread(name='b', target=set_spam),
    TriggeredThread(name='c', target=set_spam)]

# Create a shifted sequence of threads and share it among the threads.
thread_sequence = itertools.cycle(threads[1:] + threads[:1])
for thread in threads:
    thread.sequence = thread_sequence

# Start each thread.
[thread.start() for thread in threads]

# Trigger the first thread.
# That thread will trigger the next thread, and so on.
threads[0].trigger()

# Wait for each thread to finish.
[thread.join() for thread in threads]

# The last thread 'has won the race', overwriting the value of 'spam',
# so the racy version yields [False, False, True]. If set_spam were
# thread-safe, every result would be True instead.
assert results == [False, False, True], "race condition triggered"
# For a thread-safe set_spam, this would be the assertion to use:
# assert results == [True, True, True], "code is thread-safe"
I think I explained enough about this construction so you can implement it for your own situation. I think this fits the 'extra points' section quite nicely:
Extra points for a testing framework suitable for other implementations and problems, rather than one specific to this code.
Solving race-conditions
Shared variables
Each threading issue is solved in its own specific way. In the example above I caused a race condition by sharing a value across threads. Similar problems can occur when using global variables, such as a module attribute. The key to solving such issues may be to use thread-local storage:
import threading

# The thread-local storage is a global.
# This may seem weird at first, but it isn't actually shared among threads.
data = threading.local()
data.spam = []  # This list only exists in this thread.

results = []    # Results *are* shared though.

def set_spam():
    thread = threading.current_thread()
    # 'get' or set the 'spam' list. This actually creates a new list.
    # If the list was shared among threads this would cause a race condition.
    data.spam = getattr(data, 'spam', [])
    with thread:
        data.spam[:] = [thread.name]
    with thread:
        results.append(data.spam == [thread.name])

# Start the threads as in the example above.

assert all(results)  # All results should be True.
Concurrent reads/writes
A common threading issue is the problem of multiple threads reading and/or writing to a data holder concurrently. This problem is solved by implementing a read-write lock. The actual implementation of a read-write lock may differ: you may choose a read-first lock, a write-first lock, or pick at random.
I'm sure there are examples out there describing such locking techniques. I may write an example later as this is quite a long answer already. ;-)
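In the meantime, here is a minimal read-preferring sketch of the idea (ReadWriteLock is a name introduced here, not a stdlib class):
import threading

class ReadWriteLock(object):
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()  # guards the _readers counter
        self._writer_lock = threading.Lock()   # held for the duration of any write

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()    # first reader blocks writers

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()    # last reader unblocks writers

    def acquire_write(self):
        self._writer_lock.acquire()

    def release_write(self):
        self._writer_lock.release()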
Notes
Have a look at the threading module documentation and experiment with it a bit. As each threading issue is different, different solutions apply.
While on the subject of threading, have a look at the Python GIL (Global Interpreter Lock). It is important to note that threading may not actually be the best approach for optimizing performance (but that is not your goal here). I found this presentation pretty good: https://www.youtube.com/watch?v=zEaosS1U5qY
You can test it by using a lot of threads:
import sys, random, thread
from time import time, sleep

def timeup():
    sys.stdout.write("Timer:: Up %f\n" % time())

def trdfunc(kt, tid):
    while True:
        sleep(1)
        if not kt.isRunning():
            # May still raise AlreadyRunningError if another thread wins
            # the race between this test and the start call.
            kt.start(1, timeup)
            sys.stdout.write("[%d]: started\n" % tid)
        else:
            if random.random() < 0.1:
                kt.stop()
                sys.stdout.write("[%d]: stopped\n" % tid)
        sys.stdout.write("[%d] remains %f\n" % (tid, kt.timeRemaining))

kt = KitchenTimer()
kt.start(1, timeup)

for i in range(1, 100):
    thread.start_new_thread(trdfunc, (kt, i))
trdfunc(kt, 0)
A couple of problems I see:

When a thread sees the timer as not running and tries to start it, the code generally raises an exception due to a context switch between the test and the start. I think raising an exception is too much; alternatively, you could have an atomic testAndStart function.

A similar problem occurs with stop. You could implement a testAndStop function.
Even this code from the timeRemaining function:

if self.isRunning():
    self._timeRemaining = self.duration - self._elapsedTime()

needs some sort of atomicity; perhaps you need to grab a lock before testing isRunning.
If you plan to share this class between threads, you need to address these issues.
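A minimal sketch of the suggested testAndStart, assuming the KitchenTimer class quoted above (with its time and Timer imports) is in scope; the subclass and method are illustrative, not existing API:
class AtomicKitchenTimer(KitchenTimer):
    def testAndStart(self, duration=1, whenTimeup=None):
        # Test-and-set under the lock; report failure instead of raising.
        with self._stateLock:
            if self._state == self.RUNNING:
                return False  # another thread won the race
            self._state = self.RUNNING
            self.duration = duration
            self._userWhenTimeup = whenTimeup
            self._startTime = time()
            self._timer = Timer(duration, self._whenTimeup)
            self._timer.start()
            return True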
In general, this is not a viable solution. You can reproduce this race condition by using a debugger (set breakpoints at some locations in the code; when one of the breakpoints is hit, freeze the thread and run the code until it hits another breakpoint, then freeze this thread and unfreeze the first one; you can interleave thread execution in any way using this technique).
The problem is: the more threads and code you have, the more ways their side effects can interleave. Actually, the number grows exponentially. There is no viable solution to test for this in general; it is possible only in some simple cases.
The solutions to this problem are well known. Write code that is aware of its side effects, and control side effects with synchronisation primitives like locks, semaphores, or queues, or use immutable data if possible.
Maybe a more practical way is to use runtime checks to force the correct call order. For example (pseudocode):
class RacyObject:
    def __init__(self):
        self.__cnt = 0
        ...

    def isReadyAndLocked(self):
        acquire_object_lock
        if self.__cnt % 2 != 0:
            # another thread is ready to start the job
            return False
        if self.__is_ready:
            self.__cnt += 1
            return True
        # job is in progress or isn't ready yet
        return False
        release_object_lock

    def doJobAndRelease(self):
        acquire_object_lock
        if self.__cnt % 2 != 1:
            raise RaceConditionDetected("Incorrect order")
        self.__cnt += 1
        do_job()
        release_object_lock
This code will throw an exception if you don't check isReadyAndLocked before calling doJobAndRelease. This can be tested easily using only one thread.

obj = RacyObject()
...

# correct usage
if obj.isReadyAndLocked():
    obj.doJobAndRelease()

Timeout on a function

Let us say we have a Python function magical_attack(energy) which may or may not last more than a second; it could even be an infinite loop. How would I run it, but terminate it if it takes over a second, and tell the rest of the program? I am looking for a sleek module to do this. Example:
import timeout

try:
    timeout.run(magical_attack(5), 1)
except timeout.timeouterror:
    blow_up_in_face(wizard)
Note: It is impossible to modify the function. It comes from the outside during runtime.
The simplest way to do this is to run the background code in a thread:
import threading

t = threading.Thread(target=magical_attack, args=(5,))
t.start()
t.join(1)
if t.isAlive():  # still alive after the one-second join, i.e. it timed out
    blow_up_in_face(wizard)
However, note that this will not cancel the magical_attack function; it could still keep spinning along in the background for as long as it wants even though you no longer care about the results.
Canceling threads safely is inherently hard to do, and different on each platform, so Python doesn't attempt to provide a way to do it. If you need that, there are three alternatives:
If you can edit the code of magical_attack to check a flag every so often, you can cancel it cooperatively by just setting that flag.
You can use a child process instead of a thread, which you can then kill safely.
You can use ctypes, pywin32, PyObjC, etc. to access platform-specific routines to kill the thread. But you have to really know what you're doing to make sure you do it safely, and don't confuse Python in doing it.
As Chris Pak pointed out, the futures module in Python 3.2+ makes this even easier. For example, you can kick off thousands of jobs without having thousands of threads; you can apply timeouts to a whole group of jobs as if they were a single job; etc. Plus, you can switch from threads to processes with a trivial one-liner change. Unfortunately, Python 2.7 does not have this module, but there is a quasi-official backport that you can install and use just as easily.
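A quick sketch of that futures approach (magical_attack and blow_up_in_face are the asker's functions, assumed importable here):
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = executor.submit(magical_attack, 5)
try:
    result = future.result(timeout=1)    # wait at most one second
except concurrent.futures.TimeoutError:
    blow_up_in_face(wizard)              # the worker thread itself keeps running
executor.shutdown(wait=False)            # don't block on the runaway worker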
Abamert beat me to it with the answer I was preparing, except for this detail:
If, and only if, the outside function is executed through the Python interpreter, even though you can't change it (for example, from a compiled module), you might be able to use the technique described in this other question to kill the thread that calls that function using an exception.
Is there any way to kill a Thread in Python?
Of course, if you did have control over the function you were calling, the StoppableThread class from that answer works well for this:
import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self):
        super(StoppableThread, self).__init__()
        self._stop = threading.Event()

    def stop(self):
        self._stop.set()

    def stopped(self):
        return self._stop.isSet()

class Magical_Attack(StoppableThread):
    def __init__(self, enval):
        self._energy = enval
        super(Magical_Attack, self).__init__()

    def run(self):
        while not self.stopped():
            print self._energy

if __name__ == "__main__":
    a = Magical_Attack(5)
    a.start()
    a.join(5.0)
    a.stop()

right way to run some code with timeout in Python

I looked online and found some SO discussions and ActiveState recipes for running some code with a timeout. It looks like there are a few common approaches:
Use a thread that runs the code, and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (the recipes use the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM, but this approach does not work on Windows!
Use a subprocess with a timeout, but this is too heavy: what if I want to start an interruptible task often? I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but the actual way to solve the actual problem: I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print 'starting....'
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print '{0}: {1}'.format(timeout, res.get(timeout))
    except multiprocessing.TimeoutError:
        print '{0}: timed out'.format(timeout)
print 'end'
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
I found this in the eventlet library:
http://eventlet.net/doc/modules/timeout.html

from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
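A minimal sketch of that trace-function idea (run_with_trace_timeout and CodeTimeout are names introduced here, not a library API):
import sys
import time

class CodeTimeout(Exception):
    pass

def run_with_trace_timeout(func, seconds):
    # Abort pure-Python code once the deadline passes; C extensions
    # and blocking I/O are not interrupted by the trace machinery.
    deadline = time.time() + seconds
    def tracer(frame, event, arg):
        if time.time() > deadline:
            raise CodeTimeout("time budget exceeded")
        return tracer
    old_trace = sys.gettrace()
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(old_trace)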
Another way is to use faulthandler:

import time
import faulthandler

faulthandler.enable()
try:
    # Pass exit=True to actually abort the process instead of just dumping a traceback.
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()

N.B.: The faulthandler module has been part of the stdlib since Python 3.3.
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter whether it's a thread or a subprocess. A command pattern with undo would be useful here.
So it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
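A minimal sketch of that join-with-timeout pattern (the worker here is purely illustrative):
import threading
import time

def background_work():
    while True:
        time.sleep(0.5)  # work that is safe to abandon at any point

t = threading.Thread(target=background_work)
t.daemon = True          # dies together with the main thread
t.start()
t.join(2.0)              # give it two seconds of wall time
# Main ends here; the daemon thread is torn down with the process.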
I've solved it this way:
For me it worked great (on Windows and not heavy at all); I hope it will be useful for someone.
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.setDaemon(True)
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost not used
            if not self.working:  # the watchdog flips 'working' to False when time is up
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # Thread function that just sleeps for the specified time; on wake-up
        # it checks whether the long function is still working, and if so
        # flips the secured variable 'working' to False.
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # secured state change
        while True:
            self.lock_state.acquire()
            try:
                self.working = state
                break
            finally:
                self.lock_state.release()

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that just sleeps in parallel to the "long work" and, on wake-up (after the timeout), changes the secured state variable; the long function checks the secured variable during its work.
I'm pretty new to Python programming, so if that solution has fundamental errors, like resource, timing, or deadlock problems, please respond.
Solving it with the 'with' construct, merging the solution from Timeout function if it takes too long to finish with this thread, works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:
    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        try:
            self.thread.start()
            self.thread.join(self.seconds)
        except Exception, te:
            raise te

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print "keep printing ...", time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT, e:
    print " attention !! exceeded timeout, giving up ... %s " % e

Is this Python code thread safe?

import time
import threading

class test(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.doSkip = False
        self.count = 0

    def run(self):
        while self.count < 9:
            self.work()

    def skip(self):
        self.doSkip = True

    def work(self):
        self.count += 1
        time.sleep(1)
        if self.doSkip:
            print "skipped"
            self.doSkip = False
            return
        print self.count

t = test()
t.start()
while t.count < 9:
    time.sleep(2)
    t.skip()
Thread-safe in which way? I don't see any part you might want to protect here.
skip may reset the doSkip at any time, so there's not much point in locking it. You don't have any resources that are accessed at the same time - so IMHO nothing can be corrupted / unsafe in this code.
The only part that might run differently depending on locking / counting is how many "skip"s do you expect on every call to .skip(). If you want to ensure that every skip results in a skipped call to .work(), you should change doSkip into a counter that is protected by a lock on both increment and compare/decrement. Currently one thread might turn doSkip on after the check, but before the doSkip reset. It doesn't matter in this example, but in some real situation (with more code) it might make a difference.
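A minimal sketch of that counter idea (SkipCounter is a hypothetical name, not from the question):
import threading

class SkipCounter(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = 0

    def skip(self):
        with self._lock:
            self._pending += 1  # record every skip request

    def should_skip(self):
        with self._lock:
            if self._pending > 0:
                self._pending -= 1  # consume exactly one pending skip
                return True
            return False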
Whenever the test of a mutex boolean (e.g. if self.doSkip:) is separate from the set of the mutex boolean, you will probably have threading problems.
The rule is that your thread will get swapped out at the most inconvenient time. That is, after the test and before the set. Moving them closer together reduces the window for screw-ups but does not eliminate them. You almost always need a specially created mechanism from the language or kernel to fully close that window.
The threading library has Semaphores that can be used to synchronize threads and/or create critical sections of code.
Apparently there isn't any critical resource, so I'd say it's thread-safe.
But as usual you can't predict in which order the two threads will be blocked/run by the scheduler.
This is and will remain thread-safe as long as you don't share data between threads.
If another thread needs to read/write data to your thread class, then this won't be thread-safe unless you protect the data with some synchronization mechanism (like locks).
To elaborate on DanM's answer, conceivably this could happen:
Thread 1: t.skip()
Thread 2: if self.doSkip: print 'skipped'
Thread 1: t.skip()
Thread 2: self.doSkip = False
etc.
In other words, while you might expect to see one "skipped" for every call to t.skip(), this sequence of events would violate that.
However, because of your sleep() calls, I think this sequence of events is actually impossible.
(unless your computer is running really slowly)
