I'm writing code for a game, with the AI running as threads. They are stored in a list with the relevant information so that when their character 'dies', they can be efficiently removed from play by simply removing them from the list of active AIs.
The threads controlling them were initially started from within another thread, while a tkinter window controlling the player runs in the main thread - however, the AI threads caused the main thread to slow down. After searching for a way to decrease the priority of the thread creating the AIs, I found that I would have to use multiprocessing.Process instead of threading.Thread and then alter its niceness - but when I try this, the AI threads don't work. The functions running this are defined within class AI():
def start():
    '''
    Creates a process to oversee all AI functions
    '''
    global activeAI, AIsentinel, done
    AIsentinel = multiprocessing.Process(target=AI.tick, args=(activeAI, done))
    AIsentinel.start()
and
def tick(activeAI, done):
    '''
    Calls all active AI functions as threads
    '''
    AIthreads = []
    for i in activeAI:
        # i[0] is the AI function, i[1] is the character class instance (user defined)
        # and the other list items are its parameters.
        AIthreads.append(threading.Thread(name=i[1].name, target=lambda: i[0](i[1], i[2], i[3], i[4], i[5])))
        AIthreads[-1].start()
    while not done:
        for x in AIthreads:
            if not x.is_alive():
                x.join()
                AIthreads.remove(x)
                for i in activeAI:
                    if i[1].name == x.name:
                        AIthreads.append(threading.Thread(name=i[1].name, target=lambda: i[0](i[1], i[2], i[3], i[4], i[5])))
                        AIthreads[-1].start()
The outputs of these threads should display in stdout, but when running the program, nothing appears - I assume it's because the threads don't start, but I can't tell if it's just because their output isn't displaying. I'd like a way to get this solution working, but I suspect this solution is far too ugly and not worth fixing, or simply cannot be fixed.
In the event this is as horrible as I fear, I'm also open to new approaches to the problem entirely.
I'm running this on a Windows 10 machine. Thanks in advance.
Edit:
The actual print statement is within another thread - when an AI performs an action, it adds a string describing its action to a queue, which is printed out by yet another thread (as if I didn't have enough of those). As an example:
battle_log.append('{0} dealt {1} damage to {2}!'.format(self.name,damage[0],target.name))
Which is read out by:
def battlereport():
    '''
    Displays battle updates.
    '''
    global battle_log, done
    print('\n')
    while not done:
        for i in battle_log:
            print(i)
            battle_log.remove(i)
I'm trying to create a chess game using tkinter. I don't have much experience in Python programming, but I find the philosophy of tkinter kind of weird: if my assumptions are correct, using tkinter means setting it as the base of the project, and everything has to work around it. What I mean by that is that using any code that is not 'wrapped' in the tkinter framework is a pain to deal with (you have to use the event system, you have to use the after method if you want to perform an action after starting the main loop, etc.)
I have a rather different view on that, and in my chess project I simply consider the tkinter display as a part of my rendering system, and the event system provided by tkinter as a part of my input parser system. That being said, I want to be able to easily change the renderer or the input parser, which means that I could want to detect input from the terminal (for instance by writing D2 D3) instead of moving the objects on the screen. I could also want to print the chessboard on the terminal instead of having a GUI.
More to the point, because tkinter blocks the thread through the mainloop method instead of looping in another thread, I have to put my Tk object in a different thread, so that I can run the rest of my program in parallel. And I'm having a tough time doing it, because my Tk variable contained by my thread needs to be accessed by my program, to update it for instance.
After quite a bit of research, I found that queues in python were synchronized, which means that if I put my Tk object in a queue, I could access it without any problem from the main thread. I tried to see if the following code was working :
import threading, queue

class VariableContainer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.queue = queue.Queue()

    def run(self):
        self.queue.put("test")

container = VariableContainer()
container.start()
print(container.queue.get(False))
And it does! The output is test.
However, if I replace my test string by a Tk object, like below :
import threading, queue
import tkinter

class VariableContainer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.queue = queue.Queue()

    def run(self):
        root = tkinter.Tk()
        self.queue.put(root)
        root.mainloop()  # whether I call the mainloop or not doesn't change anything

container = VariableContainer()
container.start()
print(container.queue.get(False))
then the print throws an error, stating that the queue is empty.
(Note that the code above is not the code of my program; it is just an example, since posting sample code from my project might be less clear.)
Why?
The answer to the trivial question you actually asked: you have a race condition because you call Queue.get(block=False). Tk taking a lot longer to initialize, the main thread almost always wins and finds the queue still empty.
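The immediate fix for that race is to let get() block, ideally with a timeout as a safety net. A sketch using the same VariableContainer shape, where a plain string and a sleep stand in for the slow Tk() construction (this fixes only the race, not the deeper design issue):

```python
import threading, queue, time

class VariableContainer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.queue = queue.Queue()

    def run(self):
        time.sleep(0.5)          # stand-in for the slow Tk() construction
        self.queue.put("root")   # a plain string stands in for the Tk object

container = VariableContainer()
container.start()
root = container.queue.get(timeout=5)  # blocks until run() has put something
print(root)
```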
The real question is “How do I isolate my logic from the structure of my interface library?”. (While I understand the desire for a simple branch point between “read from the keyboard” and “wait for a mouse event”, it is considered more composable, in the face of large numbers of event types, sources, and handlers, to have one event dispatcher provided by the implementation. It can sometimes be driven one event at a time, but that composes less well with other logic than one might think.)
The usual answer to that is to make your logic a state machine rather than an algorithm. Mechanically, this means replacing local variables with attributes on an object and dividing the code into methods on its class (e.g., one call per “read from keyboard” in a monolithic implementation). Sometimes language features like coroutines can be used to make this transformation mostly transparent/automatic, but they’re not always a good fit. For example:
def algorithm(n):
    tot = 0
    for i in range(n):
        s = int(input("#%s:" % i))
        tot += (i + 1) * (n - i) * s
    print(tot)

class FSM(object):
    def __init__(self, n):
        self.n = n
        self.i = self.tot = 0

    def send(self, s):
        self.tot += (self.i + 1) * (self.n - self.i) * s
        self.i += 1

    def count(self):
        return self.i

    def done(self):
        return self.i >= self.n

    def get(self):
        return self.tot

def coroutine(n):  # old-style, not "async def"
    tot = 0
    for i in range(n):
        s = (yield)
        tot += (i + 1) * (n - i) * s
    yield tot
Having done this, it’s trivial to layer the traditional stream-driven I/O back on top, or to connect it to an event-driven system (be it a GUI or asyncio). For example:
def fsmClient(n):
    fsm = FSM(n)
    while not fsm.done():
        fsm.send(int(input("#%s:" % fsm.count())))
    return fsm.get()

def coClient(n):
    co = coroutine(n)
    co.send(None)  # prime the coroutine: run to the first yield
    i = 0
    while True:
        ret = co.send(int(input("#%s:" % i)))
        i += 1
        if ret is not None:
            co.close()
            return ret
These clients can work with any state machine/coroutine using the same interface; it should be obvious how to instead supply values from a Tkinter input box or so.
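To make the decoupling concrete, here is the same FSM driven by a pre-recorded list of events instead of input(); each list element plays the role of one GUI callback delivering a value (say, from a Tkinter Entry handler). The FSM class is repeated so the sketch is self-contained:

```python
class FSM(object):
    def __init__(self, n):
        self.n = n
        self.i = self.tot = 0

    def send(self, s):
        self.tot += (self.i + 1) * (self.n - self.i) * s
        self.i += 1

    def done(self):
        return self.i >= self.n

    def get(self):
        return self.tot

def eventClient(fsm, events):
    # each event plays the role of one "user supplied a number" callback
    for s in events:
        if fsm.done():
            break
        fsm.send(s)
    return fsm.get()

print(eventClient(FSM(3), [1, 1, 1]))  # → 10
```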
I have the following problem:
I am running a large process with threading which involves importing a pair of files, calculating, and giving back a score. This is all in a tkinter Python app.
For single pairs it works fine:
def run_single():
    # importing, calculating and scoring
    couple = [path1, path2]
    th_score = threading.Thread(target=scoring_function)
    th_score.start()
    ...
The problem is when I want to do batch imports: importing two directories, making pairs of the same file references, and operating on each pair in a loop. I want to use threading for each individual iteration and start the next pair of files only when all the threads from the previous scoring process are done. I tried:
def run_multiple():
    for pair in couples:
        couple = pair.copy()
        th_score = threading.Thread(target=scoring_function)
        th_score.start()
This was wrong because the threads don't synchronize: some threads run over others that are already running, and the for loop continues even when the execution of scoring_function for the current iteration is not finished (yeah, I know that's threading's purpose).
I tried using conditions and events, even using them as global variables so the scoring_function could regulate them, but all they do is freezing the program.
Any suggestion on what I should use?
From my experience the best way of doing threading is to use the threading.Thread class of python along with this little trick:
import threading

couples = [1, 2, 3]  # probably a list you created with some elements

class run_in_thread(threading.Thread):
    def run(self):
        for pair in couples:
            couple = pair.copy()

wanted_threads = 3
for a in range(wanted_threads):
    instance = run_in_thread()
    instance.start()
In this case, you start three threads.
You override the built-in run() method of threading.Thread, which by default is executed in a new thread when you call start().
I hope this helps.
For more documentation on threading, see: threading
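If the goal is to start the next pair only once the previous scoring run has finished, the simplest tool is Thread.join(). A minimal sketch, assuming scoring_function accepts the pair as an argument (the real function's signature may differ; the body here is a stand-in):

```python
import threading

results = []

def scoring_function(pair):
    # stand-in for the real import/calculate/score work (hypothetical)
    results.append(sum(pair))

def run_multiple(couples):
    for pair in couples:
        th_score = threading.Thread(target=scoring_function, args=(pair,))
        th_score.start()
        th_score.join()  # wait here: the next pair starts only when this one is done
    return results

print(run_multiple([(1, 2), (3, 4)]))  # → [3, 7]
```

If each pair spawns several threads, start them all, then join them all before moving to the next pair.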
I am using the multiprocessing module in python. Here is a sample of the code I am using:
import multiprocessing as mp

def function(fun_var1, fun_var2):
    b = fun_var1 + fun_var2
    # and more computationally intensive stuff happens here
    return b
    # my program freezes after the return command

class Worker(mp.Process):
    def __init__(self, queue_obj, func_var1, func_var2):
        mp.Process.__init__(self)
        self.queue_obj = queue_obj
        self.func_var1 = func_var1
        self.func_var2 = func_var2

    def run(self):
        self.var = function(self.func_var1, self.func_var2)
        self.queue_obj.put(self.var)

if __name__ == '__main__':
    mp.freeze_support()
    queue_list = []
    processes = []
    result = []
    for i in range(2):
        queue_list.append(mp.Queue())
        processes.append(Worker(queue_list[i], var1, var2))
        processes[i].start()
    for i in range(2):
        processes[i].join()
        result.append(queue_list[i].get())
During runtime of the program, two instances of the Worker class are generated, which work simultaneously. One instance finishes after about 2 minutes and the other takes about 7 minutes. The first instance returns its results fine. However, the second instance freezes the program when the function() that is called within the run() method returns its value. No error is thrown; the program just does not continue to execute. The console also indicates that it is busy, but does not display the >>> prompt.

I am completely clueless why this behavior occurs. The same code works fine for slightly different inputs in the two Worker instances. The only difference I can make out is that the workloads are more equal when it executes correctly. Could the time difference cause trouble? Does anyone have experience with this kind of behavior?

Also note that if I run a serial setup of the program in which function() is just called twice by the main program, the code executes flawlessly. Could there be some timeout involved in the Worker instance that makes it impossible for function() to return its value? The return value of function() is actually a fairly small list: it contains about 100 float values.
Any suggestions are welcomed!
This is a bit of an educated guess without actually seeing what's going on in worker, but is it possible that your child has put items into the Queue that haven't been consumed? The documentation has a warning about this:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.

This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.

Note that a queue created using a manager does not have this issue. See Programming guidelines.
It might be worth trying to create your queue using mp.Manager().Queue() and seeing if the issue goes away.
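Alternatively, the deadlock goes away if the parent drains the queues before joining. A runnable sketch of that ordering, with a stand-in work function since the real one isn't shown:

```python
import multiprocessing as mp

def work(q, n):
    # child process: compute and put a result (~100 floats, like the question's)
    q.put([float(i) for i in range(n)])

def run_two(n=100):
    queues = [mp.Queue() for _ in range(2)]
    procs = [mp.Process(target=work, args=(q, n)) for q in queues]
    for p in procs:
        p.start()
    # drain each queue *before* joining: get() blocks until the child has put,
    # and the child cannot exit until its queue's buffer is flushed to the pipe
    results = [q.get() for q in queues]
    for p in procs:
        p.join()
    return results

if __name__ == '__main__':
    print(len(run_two()))  # → 2
```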
I'm making a text-based FarmVille clone using objects, but I need to be able to control the growth rate. I need some sort of counter that will run in the background of my program and determine how grown a crop is.
for example:
class Grow(object):
    def growth(self, crop):
        self.grown = 0
        while self.grown < 5:
            <every x number of seconds add one to self.grown>
I need something like time.sleep() but something that does not stop the program from running.
Thanks =D
If you only need to know how much the crop would have grown since you last checked, you can build this into your Crop objects:
from datetime import datetime

class Crop:
    RATE = 1  # rate of growth, units per second

    def __init__(self, ..., grown=0):  # allow starting growth to be set
        ...
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self):
        """Set current growth based on time since last update."""
        now = datetime.now()
        self.grown += Crop.RATE * (now - self.last_update).total_seconds()
        self.last_update = now
Alternatively, you could define this functionality in a separate Growable class and have all objects that grow (e.g. Crop, Animal) inherit the grow method from that superclass.
class Growable:
    def __init__(self, grown=0):
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self, rate):
        """Set current growth based on time since last update and rate."""
        now = datetime.now()
        self.grown += rate * (now - self.last_update).total_seconds()
        self.last_update = now

class Crop(Growable):
    RATE = 1

    def __init__(self, ..., grown=0):
        super().__init__(grown)
        ...

    def grow(self):
        super().grow(Crop.RATE)
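A quick way to sanity-check this pattern without waiting in real time is to back-date last_update. The Wheat class and its rate below are made up for illustration:

```python
from datetime import datetime, timedelta

class Growable:
    def __init__(self, grown=0):
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self, rate):
        now = datetime.now()
        self.grown += rate * (now - self.last_update).total_seconds()
        self.last_update = now

class Wheat(Growable):
    RATE = 2  # hypothetical: 2 growth units per second

    def grow(self):
        super().grow(Wheat.RATE)

w = Wheat()
w.last_update -= timedelta(seconds=3)  # pretend 3 seconds have passed
w.grow()
print(w.grown)  # roughly 6.0
```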
There are different ways to do this, which depend on how you want to structure your app. Every game is basically running some kind of loop; the question is what kind of loop you're using.
For a simple "console mode" game, the loop is just a loop around input(). While you're waiting for the user to type his input, nothing else can happen. And that's the problem you're trying to solve.
One way to get around this is to fake it. You may not be able to run any code while you're waiting for the user's input, but you can figure out all the code you would have run, and do the same thing it would have done. If the crop is supposed to grow every 1.0 seconds up to 5 times, and it's been 3.7 seconds since the crop was planted, it's now grown 3 times. jonrsharpe's answer shows a great way to structure this.
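That catch-up arithmetic is just integer division of the elapsed time by the growth interval, capped at the maximum; a small sketch (names are illustrative):

```python
def times_grown(elapsed, interval=1.0, max_times=5):
    # number of complete growth intervals since planting, capped at max_times
    return min(int(elapsed // interval), max_times)

print(times_grown(3.7))  # → 3: grown 3 times after 3.7 seconds
```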
This same idea works for graphical games that are driven by a frame-rate loop, like a traditional arcade game, but even simpler. Each frame, you check for input, update all of your objects, do any output, then sleep until it's time for the next frame. Because frames come at a fixed rate, you can just do things like this:
def grow(self, rate):
    self.grown += rate / FRAMES_PER_SECOND
A different solution is to use background threads. While your main thread can't run any code while it's waiting around for user input, any other threads keep running. So, you can spin off a background thread for the crop. You can use your original growth method, with the time.sleep(1.0) and everything, but instead of calling self.growth(crop), call threading.Thread(target=self.growth, args=[crop]).start(). That's about as simple as it gets—but that simplicity comes at a cost. If you have a thread for each of 80x25=2000 plots of land, you'll be using all your CPU time in the scheduler and all your memory for thread stacks. So, this option only works if you have only a few dozen independently-active objects. The other problem with threads is that you have to synchronize any objects that are used on multiple threads, or you end up with race conditions, and that can be complicated to get right.
A solution to the "too many threads" problem (but not the synchronization problem) is to use a Timer. The one built into the stdlib isn't really usable (because it creates a thread for each timer), but you can find third-party implementations that are, like timer2. So, instead of sleeping for a second and then doing the rest of your code, move the rest of your code into a function, and create a Timer that calls that function after a second:
def growth(self, crop):
    self.grown = 0
    def grow_callback():
        with self.lock:
            if self.grown < 5:
                self.grown += 1
                Timer(1.0, grow_callback).start()
    Timer(1.0, grow_callback).start()
Now you can call self.growth(crop) normally. But notice how the flow of control has been turned inside-out by having to move everything after the sleep (which was in the middle of a loop) into a separate function.
Finally, instead of a loop around input or sleeping until the next frame, you can use a full event loop: wait until something happens, where that "something" can be user input, or a timer expiring, or anything else. This is how most GUI apps and network servers work, and it's also used in many games. Scheduling a timer event in an event loop program looks just like scheduling a threaded timer, but without the locks. For example, with Tkinter, it looks like this:
def growth(self, crop):
    self.grown = 0
    def grow_callback():
        if self.grown < 5:
            self.grown += 1
            self.after(1000, grow_callback)
    self.after(1000, grow_callback)
One final option is to break your program up into two parts: an engine and an interface. Put them in two separate threads (or child processes, or even entirely independent programs) that communicate over queues (or pipes or sockets), and then you can write each one the way that's most natural. This also means you can replace the interface with a Tkinter GUI, a pygame full-screen graphics interface, or even a web app without rewriting any of your logic in the engine.
In particular, you can write the interface as a loop around input that just checks the input queue for any changes that happened while it was waiting, and then posts any commands on the output queue for the engine. Then write the engine as an event loop that treats new commands on the input queue as events, or a frame-rate loop that checks the queue every frame, or whatever else makes the most sense.
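A minimal sketch of that engine/interface split, with the engine as an event loop reading commands from one queue and posting results on another (the command names are made up):

```python
import threading, queue

def engine(cmd_q, out_q):
    # event loop: each command on the input queue is treated as an event
    while True:
        cmd = cmd_q.get()
        if cmd == "quit":
            out_q.put("bye")
            break
        out_q.put("did " + cmd)

cmd_q, out_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=engine, args=(cmd_q, out_q))
t.start()
cmd_q.put("plant")   # the interface posts commands...
cmd_q.put("quit")
t.join()
# ...and reads results back from out_q
```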
When multiple threads access the same function, do we need to implement a lock mechanism explicitly, or not?
I have a program using threads.
There are two threads, t1 and t2: t1 runs add1() and t2 runs subtract1(). Both threads concurrently access the same function, myfunction(caller, num).
1. I have defined a simple lock mechanism in the program below using the variable functionLock. Is this reliable, or does it need to be modified?
import time, threading

functionLock = ''  # blank means lock is open

def myfunction(caller, num):
    global functionLock
    while functionLock != '':  # check and wait until the lock is open
        print "locked by " + str(functionLock)
        time.sleep(1)
    functionLock = caller  # apply lock
    total = 0
    if caller == 'add1':
        total += num
        print "1. addition finish with Total:" + str(total)
        time.sleep(2)
        total += num
        print "2. addition finish with Total:" + str(total)
        time.sleep(2)
        total += num
        print "3. addition finish with Total:" + str(total)
    else:
        time.sleep(1)
        total -= num
        print "\nSubtraction finish with Total:" + str(total)
    print '\n For ' + caller + '() Total: ' + str(total)
    functionLock = ''  # release the lock

def add1(arg1, arg2):
    print '\n START add'
    myfunction('add1', 10)
    print '\n END add'

def subtract1():
    print '\n START Sub'
    myfunction('sub1', 100)
    print '\n END Sub'

def main():
    t1 = threading.Thread(target=add1, args=('arg1', 'arg2'))
    t2 = threading.Thread(target=subtract1)
    t1.start()
    t2.start()

if __name__ == "__main__":
    main()
The output is as follows:
START add
START Sub
1. addition finish with Total:10
locked by add1
locked by add1
2. addition finish with Total:20
locked by add1
locked by add1
3. addition finish with Total:30
locked by add1
For add1() Total: 30
END add
Subtraction finish with Total:-100
For sub1() Total: -100
END Sub
2. Is it OK if we do not use locks?
Even if I do not use the lock mechanism defined in the above program, the result is the same from both threads t1 and t2. Does this mean that Python implements locks automatically when multiple threads access the same function?
The output of the program without using the lock, functionLock, is:
START add
START Sub
1. addition finish with Total:10
Subtraction finish with Total:-100
For sub1() Total: -100
END Sub
2. addition finish with Total:20
3. addition finish with Total:30
For add1() Total: 30
END add
Thanks!
In addition to the other comments on this thread about busy-waiting on a variable, I would like to point out that not using any kind of atomic swap may cause concurrency bugs. Even though your test execution does not make them come up, given enough repetitions with different timings, the following sequence of events may occur:
Thread #1 executes while functionLock != '' and gets False. Then Thread #1 is interrupted (preempted so something else can be executed), and Thread #2 executes the same line, with while functionLock != '' also getting False. At this point, both threads have entered the critical section, which is clearly not what you wanted. In particular, on any line where threads modify total, the result may not be what you expected, since both threads can be in that section at the same time. See the following example:
total is 10. For the sake of simplicity, assume num is always 1. Thread #1 executes total += num, which is composed of three operations: (i) loading the value of total, (ii) adding num to it, and (iii) storing the result in total. If after (i) Thread #1 gets preempted and Thread #2 then executes total -= num, total is set to 9. Then Thread #1 resumes. However, it had already loaded total = 10, so it adds 1 and stores 11 into the total variable. This effectively turned the decrement operation by Thread #2 into a no-op.
Notice that in the Wikipedia article linked by @ron-klein, the code uses an xchg operation, which atomically swaps a register with a variable. This is vital for the correctness of the lock. In conclusion, if you want to steer clear of incredibly hard-to-debug concurrency bugs, never implement your own locks as an alternative to atomic operations.
[edit] I just noticed that total is in fact a local variable in your code, so this could never happen. However, I believe you are not aware that this is why your code works perfectly, since you state "Does this mean that python implements locks automatically when multiple threads access the same function", which is not true. Try adding global total to the beginning of myfunction and executing the threads several times, and you should see errors in the output. [/edit]
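For contrast, here is a minimal sketch of a similar add/subtract workload on a genuinely shared total, guarded by a real threading.Lock, whose acquire is atomic, so the interleaving described above cannot occur:

```python
import threading

total = 0
lock = threading.Lock()

def add(num, reps):
    global total
    for _ in range(reps):
        with lock:       # acquired atomically; released even on exception
            total += num

def sub(num, reps):
    global total
    for _ in range(reps):
        with lock:
            total -= num

t1 = threading.Thread(target=add, args=(1, 100000))
t2 = threading.Thread(target=sub, args=(1, 100000))
t1.start(); t2.start()
t1.join(); t2.join()
print(total)  # always 0 with the lock in place
```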
Although I don't know much Python, I would say this is like in any other language:
As long as there are no variables involved that have been declared outside of the function and can therefore be shared between threads, there shouldn't be a need for locks. And this doesn't seem to be the case with your function.
Output to console might be garbled, though.
You need to lock when the code you are writing is critical-section code, i.e. when the snippet modifies state shared between threads; if it does not, then you don't need to worry about locking.
Whether methods should be locked or not is a design choice; ideally, you should lock as close as possible to the shared state accessed by the threads.
In your code you implement your own spin-lock. While this is possible, I don't think it's recommended in Python, since it might lead to a performance issue.
I used a well-known search engine (starts with G), querying "python lock". One of the first results is this one: Thread Synchronization Mechanisms in Python. It looks like a good article to start with.
For the code itself: you should lock whenever the operation(s) executed on a shared resource are not atomic. Currently it looks like there's no such resource in your code.