How to write a while loop that only runs when a certain condition is met - Python

I want a while loop that only runs when a certain condition is met. For example, I need to loop on the condition A if listposts != 0 and listposts != listView: to check whether there is a new record or not. If it finds a new record it should run function B once and stop until the condition is met again.
I'm new to programming and I tried this code, but it still loops endlessly:
while True:
    if listposts != 0 and listposts != listView:
        Condition = True
        while Condition == True:
            functionB()
            Condition = False
What I want to achieve is that the loop stops after one pass and waits until the condition is met before looping again.

From the behavior you expect, you need three things:
a condition-test (as function) returning either True or False
a loop that calls the condition-test regularly
a conditional call of function B() when condition is met (or condition-test function returns True)
# list_posts can change and is used as a parameter
# listView is a global variable (could also be defined as a parameter)
# returns either True or False
def condition_applies(list_posts):
    return list_posts != 0 and list_posts != listView

# Note: method names are by convention lower-case
def B():
    print("B was called")

# don't wait ... just loop and test. Until when?
# until running becomes False
running = True
while running:
    if condition_applies(listposts):
        print("condition met, calling function B ..")
        B()
    # define an exit-condition here to stop at some time:
    running = True  # placeholder: set to False when the loop should end
Warning: as written, this is an endless loop!
So at some point in time you need to set running = False.
Otherwise the loop will continue infinitely, checking whether the condition applies.
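To make the exit condition concrete, here is a hedged, self-contained sketch of the same polling loop. fetch_posts, list_view, and the three-attempt stop rule are made up for the demo; your real data source and stop rule will differ:

```python
import time

list_view = [1, 2, 3]      # hypothetical snapshot of records already seen

def fetch_posts():
    # hypothetical stand-in for however listposts gets refreshed
    return [1, 2, 3, 4]

def condition_applies(list_posts):
    return list_posts != 0 and list_posts != list_view

def B(new_posts):
    print("B was called with", new_posts)

attempts = 0
running = True
while running:
    posts = fetch_posts()
    if condition_applies(posts):
        B(posts)
        list_view = posts  # remember what we saw, so B runs once per change
    attempts += 1
    if attempts >= 3:      # exit condition so the demo is not an endless loop
        running = False
    time.sleep(0.01)       # back off briefly so polling does not burn CPU
```

Because list_view is updated after B runs, the condition turns False again on the next pass, which is exactly the "run once, then wait" behavior asked for.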

To me it seems that you have a producer/consumer-like situation.
IMHO your loop is ok. The principle applied here is called polling. Polling keeps looking for new items by constantly asking.
Another way of implementing this in a more CPU-optimized way (using less CPU) requires synchronization. A synchronization object such as a mutex or semaphore will be signaled when a new element is available for processing. The processing thread can then be stopped (e.g. WaitForSingleObject() on Windows), freeing the CPU for other things to do. When being signaled, Windows finds out that the thread should wake up and lets it run again.
Queue.get() and Queue.put() are methods with such synchronization built in.
Unfortunately, I see many developers using Sleep() instead, which is not the right tool for the job.
Here's a producer/consumer example:
from threading import Thread
from time import sleep
import random
import queue

q = queue.Queue(10)  # shared resource to work on; synchronizes itself
producer_stopped = False
consumer_stopped = False

def producer():
    while not producer_stopped:
        try:
            item = random.randint(1, 10)
            q.put(item, timeout=1.0)
            print(f'Producing {str(item)}. Size: {str(q.qsize())}')
        except queue.Full:
            print("Consumer is too slow. Consider using more consumers.")

def consumer():
    while not consumer_stopped:
        try:
            item = q.get(timeout=1.0)
            print(f'Consuming {str(item)}. Size: {str(q.qsize())}')
        except queue.Empty:
            if not consumer_stopped:
                print("Producer is too slow. Consider using more producers.")

if __name__ == '__main__':
    producer_stopped = False
    p = Thread(target=producer)
    p.start()
    consumer_stopped = False
    c = Thread(target=consumer)
    c.start()
    sleep(2)  # run demo for 2 seconds. This is not for synchronization!
    producer_stopped = True
    p.join()
    consumer_stopped = True
    c.join()
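As a variation on the example above, the two module-level booleans can be replaced with a single threading.Event, which is the standard thread-safe way to signal shutdown. This is a hedged sketch with made-up timings (0.2 s demo run, 0.1 s put/get timeouts), not the original answer's code:

```python
import queue
import random
import time
from threading import Event, Thread

q = queue.Queue(10)
stop = Event()                  # one shared, thread-safe stop flag
consumed = []

def producer():
    while not stop.is_set():
        try:
            q.put(random.randint(1, 10), timeout=0.1)
        except queue.Full:
            pass                # consumer is behind; just retry

def consumer():
    # keep draining until told to stop AND the queue is empty
    while not stop.is_set() or not q.empty():
        try:
            consumed.append(q.get(timeout=0.1))
        except queue.Empty:
            pass                # producer is behind; just retry

p = Thread(target=producer)
c = Thread(target=consumer)
p.start()
c.start()
time.sleep(0.2)                 # run the demo briefly
stop.set()                      # one call winds down both threads
p.join()
c.join()
print(f"consumed {len(consumed)} items")
```

The Event avoids the subtle pitfall of forgetting to reset one of two separate flags, and stop.set() takes effect on both threads at once.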

Related

Multiprocessing Running Slower than a Single Process

I'm attempting to use multiprocessing to run many simulations across multiple processes; however, the code I have written only uses 1 of the processes as far as I can tell.
Updated
I've gotten all the processes to work (I think) thanks to @PaulBecotte; however, the multiprocessing version runs significantly slower than its non-multiprocessing counterpart.
For instance, not including the function and class declarations/implementations and imports, I have:
def monty_hall_sim(num_trial, player_type='AlwaysSwitchPlayer'):
    if player_type == 'NeverSwitchPlayer':
        player = NeverSwitchPlayer('Never Switch Player')
    else:
        player = AlwaysSwitchPlayer('Always Switch Player')
    return (MontyHallGame().play_game(player) for trial in xrange(num_trial))

def do_work(in_queue, out_queue):
    while True:
        try:
            f, args = in_queue.get()
            ret = f(*args)
            for result in ret:
                out_queue.put(result)
        except:
            break

def main():
    logging.getLogger().setLevel(logging.ERROR)
    always_switch_input_queue = multiprocessing.Queue()
    always_switch_output_queue = multiprocessing.Queue()
    total_sims = 20
    num_processes = 5
    process_sims = total_sims/num_processes
    with Timer(timer_name='Always Switch Timer'):
        for i in xrange(num_processes):
            always_switch_input_queue.put((monty_hall_sim, (process_sims, 'AlwaysSwitchPlayer')))
        procs = [multiprocessing.Process(target=do_work, args=(always_switch_input_queue, always_switch_output_queue)) for i in range(num_processes)]
        for proc in procs:
            proc.start()
        always_switch_res = []
        while len(always_switch_res) != total_sims:
            always_switch_res.append(always_switch_output_queue.get())
        always_switch_success = float(always_switch_res.count(True))/float(len(always_switch_res))
        print '\tLength of Always Switch Result List: {alw_sw_len}'.format(alw_sw_len=len(always_switch_res))
        print '\tThe success average of switching doors was: {alw_sw_prob}'.format(alw_sw_prob=always_switch_success)
which yields:
Time Elapsed: 1.32399988174 seconds
Length: 20
The success average: 0.6
However, I am attempting to use this for total_sims = 10,000,000 over num_processes = 5, and doing so has taken significantly longer than using 1 process (1 process returned in ~3 minutes). The non-multiprocessing counterpart I'm comparing it to is:
def main():
    logging.getLogger().setLevel(logging.ERROR)
    with Timer(timer_name='Always Switch Monty Hall Timer'):
        always_switch_res = [MontyHallGame().play_game(AlwaysSwitchPlayer('Monty Hall')) for x in xrange(10000000)]
        always_switch_success = float(always_switch_res.count(True))/float(len(always_switch_res))
        print '\n\tThe success average of not switching doors was: {not_switching}' \
              '\n\tThe success average of switching doors was: {switching}'.format(not_switching=never_switch_success,
                                                                                  switching=always_switch_success)
You could try importing "process" under some if statements.
EDIT- you changed some stuff, let me try and explain a bit better.
Each message you put into the input queue will cause the monty_hall_sim function to get called and send num_trial messages to the output queue.
So your original implementation was right- to get 20 output messages, send in 5 input messages.
However, your function is slightly wrong.
for trial in xrange(num_trial):
    res = MontyHallGame().play_game(player)
    yield res
This will turn the function into a generator that will provide a new value on each next() call- great! The problem is here
while True:
    try:
        f, args = in_queue.get(timeout=1)
        ret = f(*args)
        out_queue.put(ret.next())
    except:
        break
Here, on each pass through the loop you create a NEW generator with a NEW message. The old one is thrown away. So here, each input message only adds a single output message to the queue before you throw it away and get another one. The correct way to write this is-
while True:
    try:
        f, args = in_queue.get(timeout=1)
        ret = f(*args)
        for result in ret:
            out_queue.put(result)
    except:
        break
Doing it this way will continue to yield output messages from the generator until it finishes (after yielding 4 messages in this case)
I was able to get my code to run significantly faster by changing monty_hall_sim's return to a list comprehension, having do_work add the lists to the output queue, and then extend the results list of main with the lists returned by the output queue. Made it run in ~13 seconds.

Time-intensive collection processing in Python

The code has been vastly simplified, but should serve to illustrate my question.
S = ('A1RT', 'BDF7', 'CP09')
for s in S:
    if is_valid(s):  # very slow!
        process(s)
I have a collection of strings obtained from a site-scrape. (Strings will be retrieved from site-scrapes periodically.) Each of these strings needs to be validated, over the network, against a third party. The validation process can be slow at times, which is problematic. Due to the iterative nature of the above code, it may take some time before the last string is validated and processed.
Is there a proper way to parallelize the above logic in Python? To be frank, I'm not very familiar with concurrency / parallel-processing concepts, but it would seem as though they may be useful in this circumstance. Thoughts?
The concurrent.futures module is a great way to start work on "embarrassingly parallel" problems, and can very easily be switched between using either multiple processes or multiple threads within a single process.
In your case, it sounds like the "hard work" is being done on other machines over the network, and your main program will spend most of its time waiting for them to deliver results. If so, threads should work fine. Here's a complete, executable toy example:
import concurrent.futures as cf

def is_valid(s):
    import random
    import time
    time.sleep(random.random() * 10)
    return random.choice([False, True])

NUM_WORKERS = 10  # number of threads you want to run
strings = list("abcdefghijklmnopqrstuvwxyz")

with cf.ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
    # map a future object to the string passed to is_valid
    futures = {executor.submit(is_valid, s): s for s in strings}
    # `as_completed()` returns results in the order threads
    # complete work, _not_ necessarily in the order the work
    # was passed out
    for future in cf.as_completed(futures):
        result = future.result()
        print(futures[future], result)
And here's sample output from one run:
g False
i True
j True
b True
f True
e True
k False
h True
c True
l False
m False
a False
s False
v True
q True
p True
d True
n False
t False
z True
o True
y False
r False
w False
u True
x False
concurrent.futures handles all the headaches of starting threads, parceling out work for them to do, and noticing when threads deliver results.
As written above, up to 10 (NUM_WORKERS) is_valid() invocations can be active simultaneously. as_completed() returns a future object as soon as its result is ready to retrieve, and the executor automatically hands the thread that computed the result another string for is_valid() to chew on.
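Since both executor classes expose the same interface, the thread/process switch mentioned at the start of this answer is a one-line change. A minimal sketch with a trivial, made-up stand-in for is_valid() (the real one does network I/O):

```python
import concurrent.futures as cf

def is_valid(s):
    # trivial CPU-only stand-in for the real network validation
    return len(s) % 2 == 0

strings = ['A1RT', 'BDF7', 'CP09', 'XYZ']

# Swap ThreadPoolExecutor for ProcessPoolExecutor on this line and nothing
# else changes (for processes, keep is_valid at module level so it pickles).
with cf.ThreadPoolExecutor(max_workers=4) as executor:
    # executor.map keeps results in input order, unlike as_completed()
    results = dict(zip(strings, executor.map(is_valid, strings)))

print(results)
```

executor.map is the simpler tool when you want results paired with inputs in order; as_completed() is the right tool when you want each result as soon as it lands.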

Is there a variation of the while loop that will only run the clause once until a change occurs?

Sorry about the title, this is a bit of a tough question to phrase. I'm using Python. Basically, I want the program to check a variable indefinitely. If the variable goes above 100 for example, I want code block A to run only once, and then I want the program to do nothing until the variable goes back below 100, then run code block B, and wait again until the variable goes back above 100, and then run block A again, and repeat.
The current setup I've written is as follows:
while on == True:
    if value_ind >= 100:
        open_time = time()
    else:
        close_time = time()
        calculate_time_open(open_time, close_time)
The obvious problem here is that whichever branch of the if/else is true runs repeatedly, creating multiple entries in my lists for a single event. So, how would I make each code block run only once and then wait for a change, instead of repeating constantly while waiting for one? Thanks in advance.
You can use a state machine: your program is in one of two states, "waiting for a high value" or "waiting for a low value", and behaves appropriately:
THRESHOLD = 100
waiting_for_high_value = True  # False means: waiting for a low value

while True:  # infinite loop (or "while on", if "on" is a changing variable)
    if waiting_for_high_value:
        if value_ind >= THRESHOLD:
            open_time = time()
            waiting_for_high_value = False
    else:  # waiting for a low value:
        if value_ind < THRESHOLD:
            close_time = time()
            calculate_time_open(open_time, close_time)
            waiting_for_high_value = True
Now, you do need to update your test value value_ind somewhere during the loop. This is best done through a local variable (and not by changing a global variable as an invisible side effect).
PS: The answer above can be generalized to any number of states, and is convenient for adding some code that must be done continuously while waiting. In your particular case, though, you toggle between two states, and maybe there is not much to do while waiting for a change, so Stefan Pochmann's answer might be appropriate too (unless it forces you to duplicate code in the two "wait" loops).
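To see the state machine act on concrete data, here is a hedged sketch that drives the same two states with a canned list of readings instead of a live value_ind; the readings, and using the list index in place of time(), are made up for the demo:

```python
THRESHOLD = 100
readings = [50, 120, 130, 90, 80, 110, 95]  # hypothetical sensor values

waiting_for_high = True
events = []
for t, value in enumerate(readings):        # t stands in for time()
    if waiting_for_high:
        if value >= THRESHOLD:
            open_time = t                   # block A: record when we opened
            waiting_for_high = False
    else:
        if value < THRESHOLD:
            # block B: record how long we stayed open
            events.append(('open_for', t - open_time))
            waiting_for_high = True

print(events)
```

Each threshold crossing fires exactly once: the first high reading (120) opens, the first low one after it (90) closes, and the intermediate readings on the same side of the threshold do nothing.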
How about this?
while True:
    # wait until the variable goes over 100, then do block A once
    while value_ind <= 100:
        pass
    <block A here>
    # wait until the variable goes below 100, then do block B once
    while value_ind >= 100:
        pass
    <block B here>
This solves your repetition issue. It might be better to actually wait rather than constantly check the variable, though; that depends on what you're actually doing.
Added: Here it is with the actual blocks A and B from your code, using not, which maybe makes it nicer, and with parentheses in one condition, which maybe highlights it better. (And with pass not on an extra line... I think that's ok here.)
while True:
    while not value_ind > 100: pass
    open_time = time()
    while not (value_ind < 100): pass
    close_time = time()
    calculate_time_open(open_time, close_time)

Endless loop in python class member function. Threads?

I need to implement something like this
def turnOn(self):
    self.isTurnedOn = True
    while self.isTurnedOn:
        updateThread = threading.Thread(target=self.updateNeighborsList, args=())
        updateThread.daemon = True
        updateThread.start()
        time.sleep(1)

def updateNeighborsList(self):
    self.neighbors = []
    for candidate in points:
        distance = math.sqrt((candidate.X-self.X)**2 + (candidate.Y-self.Y)**2)
        if distance <= maxDistance and candidate != self and candidate.isTurnedOn:
            self.neighbors.append(candidate)
    print self.neighbors
    print points
These are class member functions; updateNeighborsList should be called every second while self.isTurnedOn is True.
When I create a class object and call the turnOn function, none of the following statements execute; control gets stuck in that while loop. But I need many objects of this class.
What is the correct way to do this kind of thing?
I think you'd be better off creating a single Thread when turnOn is called, and have the looping happen inside that thread:
def turnOn(self):
    self.isTurnedOn = True
    self.updateThread = threading.Thread(target=self.updateNeighborsList, args=())
    self.updateThread.daemon = True
    self.updateThread.start()

def updateNeighborsList(self):
    while self.isTurnedOn:
        self.neighbors = []
        for candidate in points:
            distance = math.sqrt((candidate.X-self.X)**2 + (candidate.Y-self.Y)**2)
            if distance <= maxDistance and candidate != self and candidate.isTurnedOn:
                self.neighbors.append(candidate)
        print self.neighbors
        print points
        time.sleep(1)
Note, though, that doing mathematical calculations inside of a thread will not improve performance at all using CPython, because of the Global Interpreter Lock. In order to utilize multiple cores in parallel, you'll need to use the multiprocessing module instead. However, if you're just trying to prevent your main thread from blocking, feel free to stick with threads. Just know that only one thread will ever actually be running at a time.
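Here is a hedged Python 3 sketch of the suggested single-thread-per-object design, using a threading.Event in place of the bare isTurnedOn flag so the loop can be woken immediately on shutdown. The Point class, the points list, and the distances are simplified stand-ins for the original code:

```python
import math
import threading
import time

points = []          # shared list of all Point objects
MAX_DISTANCE = 10.0

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.neighbors = []
        self._stop = threading.Event()
        self._thread = None

    def turn_on(self):
        # start ONE background thread that loops internally, instead of
        # spawning a fresh thread every second from the caller
        self._thread = threading.Thread(target=self._update_loop, daemon=True)
        self._thread.start()

    def turn_off(self):
        self._stop.set()
        self._thread.join()

    def _update_loop(self):
        while True:
            self.neighbors = [
                p for p in points
                if p is not self
                and math.hypot(p.x - self.x, p.y - self.y) <= MAX_DISTANCE
            ]
            if self._stop.wait(1.0):  # sleeps 1 s, but wakes at once on stop
                break

points.extend([Point(0, 0), Point(3, 4), Point(100, 100)])
points[0].turn_on()
time.sleep(0.1)
points[0].turn_off()
print(points[0].neighbors)
```

Event.wait(timeout) replaces time.sleep(1) so that turn_off() returns promptly instead of waiting out the remainder of the current sleep, and each Point owns its own thread, so many objects can run side by side.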

Set a timer for running a process, pause the timer under certain conditions

I've got this program:
import multiprocessing
import time

def timer(sleepTime):
    time.sleep(sleepTime)
    fooProcess.terminate()
    fooProcess.join()  # line said to "cleanup", not sure if it is required, refer to goo.gl/Qes6KX

def foo():
    i = 0
    while 1:
        print i
        time.sleep(1)
        i += 1
        if i == 4:
            pass  # pause timerProcess for X seconds

fooProcess = multiprocessing.Process(target=foo, name="Foo", args=())
timer()
fooProcess.start()
And as you can see in the comment, under certain conditions (in this example i has to be 4) the timer has to stop for a certain X time, while foo() keeps working.
Now, how do I implement this?
N.B.: this code is just an example, the point is that I want to pause a process under certain conditions for a certain amount of time.
I think you're going about this wrong for game design. Games always (no exceptions come to mind) use a primary event loop controlled in software.
Each time through the loop you check the time and fire off all the necessary events based on how much time has elapsed. At the end of the loop you sleep only as long as necessary before you got the next timer or event or refresh or ai check or other state change.
This gives you the best performance regarding lag, consistency, predictability, and other timing features that matter in games.
Roughly:
1. Get a timestamp at start time (time.time(), I presume).
2. Sleep with Event.wait(timeout=...).
3. Wake up on the Event or on a timeout.
4. If woken by the Event: get a timestamp, subtract the initial one, and subtract the result from the timer; wait until foo() stops; repeat Event.wait() with the remaining time from this step as the timeout.
5. If woken by the timeout: exit.
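A hedged sketch of those steps using threading.Event; the worker, its tick counts, and the 0.2 s extension are invented for the demo, and the real foo() and pause length would come from the original program:

```python
import threading
import time

extend = threading.Event()   # the worker sets this to ask for more time
EXTENSION = 0.2              # hypothetical pause/extension length in seconds

def run_timer(total):
    deadline = time.monotonic() + total
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return                            # woken by timeout: exit
        if extend.wait(timeout=remaining):    # woken by the Event?
            extend.clear()
            deadline += EXTENSION             # push the deadline back

ticks = []
def worker():
    for i in range(4):
        ticks.append(i)
        if i == 1:
            extend.set()     # condition met: pause/extend the timer
        time.sleep(0.05)

t = threading.Thread(target=worker)
start = time.monotonic()
t.start()
run_timer(0.3)
elapsed = time.monotonic() - start
t.join()
print(f"timer ran {elapsed:.2f}s, worker ticked {len(ticks)} times")
```

Event.wait(timeout=...) is what lets the timer sleep efficiently yet react the instant the worker signals, which is the whole point of steps 2-4 above.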
Here is an example of how I understand what your program should do:
import threading, time, datetime

ACTIVE = True

def main():
    while ACTIVE:
        print "im working"
        time.sleep(.3)

def run(thread, timeout):
    global ACTIVE
    thread.start()
    time.sleep(timeout)
    ACTIVE = False
    thread.join()

proc = threading.Thread(target=main)
print datetime.datetime.now()
run(proc, 2)  # run for 2 seconds
print datetime.datetime.now()
In main() it does a periodic task, here printing something. In the run() method you can say how long main should do the task.
This code produces the following output:
2014-05-25 17:10:54.390000
im working
im working
im working
im working
im working
im working
im working
2014-05-25 17:10:56.495000
Please correct me if I've understood you wrong.
I would use multiprocessing.Pipe for signaling, combined with select for timing:
#!/usr/bin/env python
import multiprocessing
import select
import time

def timer(sleeptime, pipe):
    start = time.time()
    while time.time() < start + sleeptime:
        n = select.select([pipe], [], [], 1)  # sleep in 1s intervals
        for conn in n[0]:
            val = conn.recv()
            print 'got', val
            start += float(val)

def foo(pipe):
    i = 0
    while True:
        print i
        i += 1
        time.sleep(1)
        if i % 7 == 0:
            pipe.send(5)

if __name__ == '__main__':
    mainpipe, foopipe = multiprocessing.Pipe()
    fooProcess = multiprocessing.Process(target=foo, name="Foo", args=(foopipe,))
    fooProcess.start()
    timer(10, mainpipe)
    fooProcess.terminate()
    # since we terminated, mainpipe and foopipe are corrupt
    del mainpipe, foopipe
    # ...
    print 'Done'
I'm assuming that you want some condition in the foo process to extend the timer. In the sample I have set up, every time foo hits a multiple of 7 it extends the timer by 5 seconds while the timer initially counts down 10 seconds. At the end of the timer we terminate the process - foo won't finish nicely at all, and the pipes will get corrupted, but you can be certain that it'll die. Otherwise you can send a signal back along mainpipe that foo can listen for and exit nicely while you join.

Categories

Resources