How is the wait time calculated in the printer simulation Python program?

I am currently learning data structures and algorithms.
I found this code on Interactive Python:
from pythonds.basic.queue import Queue
import random

class Printer:
    def __init__(self, ppm):
        self.pagerate = ppm
        self.currentTask = None
        self.timeRemaining = 0

    def tick(self):
        if self.currentTask != None:
            self.timeRemaining = self.timeRemaining - 1
            if self.timeRemaining <= 0:
                self.currentTask = None

    def busy(self):
        if self.currentTask != None:
            return True
        else:
            return False

    def startNext(self, newtask):
        self.currentTask = newtask
        self.timeRemaining = newtask.getPages() * 60 / self.pagerate

class Task:
    def __init__(self, time):
        self.timestamp = time
        self.pages = random.randrange(1, 21)

    def getStamp(self):
        return self.timestamp

    def getPages(self):
        return self.pages

    def waitTime(self, currenttime):
        return currenttime - self.timestamp

def simulation(numSeconds, pagesPerMinute):
    labprinter = Printer(pagesPerMinute)
    printQueue = Queue()
    waitingtimes = []

    for currentSecond in range(numSeconds):
        if newPrintTask():
            task = Task(currentSecond)
            printQueue.enqueue(task)

        if (not labprinter.busy()) and (not printQueue.isEmpty()):
            nexttask = printQueue.dequeue()
            waitingtimes.append(nexttask.waitTime(currentSecond))
            labprinter.startNext(nexttask)

        labprinter.tick()

    averageWait = sum(waitingtimes) / len(waitingtimes)
    print("Average Wait %6.2f secs %3d tasks remaining." % (averageWait, printQueue.size()))

def newPrintTask():
    num = random.randrange(1, 181)
    if num == 180:
        return True
    else:
        return False

for i in range(10):
    simulation(3600, 5)
Please can someone explain how waitingtimes.append(nexttask.waitTime(currentSecond)) computes the wait time for the current second?
Won't it be zero for that particular second?
Also, as per the simulation, a new task shows up on average every 180 seconds, but it is enqueued and dequeued in the same second.
So is the print queue always empty at any particular time, or is it?
Please help...

Every second, there is a random chance that a task is added to the queue. Only if the printer is available (not labprinter.busy() is true) is a task then taken from the queue to be sent to the printer.
Once a task has been added to the printer, it'll take that printer a certain number of ticks ('seconds') to handle the random number of pages assigned to the task. No new task can be sent to it in the meantime! Each loop iteration labprinter.tick() is called, which decrements self.timeRemaining (calculated from the task size and the printer page rate). Only when that number reaches 0 is the task cleared and is the printer no longer busy (ready to take another task).
So the queue could be filling up while the printer is busy. Tasks that spend several rounds of the loop in the queue will have had waiting time accumulate.
You could write down the ticks; let's say the printer can handle 20 pages per minute, so it takes 3 seconds per page:
0. Nothing happens.
1. A task of size 10 is created. The printer is free, so it takes the task. 10 pages take 30 seconds.
2 - 5. No new tasks are created; the printer prints the 1st page.
6 - 9. One new task is created at tick 8 and added to the queue. The printer prints a 2nd page.
10 - 30. More tasks could be created; the printer prints the rest of the pages.
31. The printer is free, so the task created at tick 8 can now be handled. That task waited 31 - 8 == 23 seconds.
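To see the same arithmetic in code, here is a tiny deterministic rework of the simulation loop; the arrival times and page counts are made up to match the walkthrough above:

arrivals = {1: 10, 8: 2}               # second -> pages, chosen to match the example
queue, busy_until, waits = [], 0, []
for t in range(40):
    if t in arrivals:
        queue.append((t, arrivals[t]))     # enqueue the task with its timestamp
    if t >= busy_until and queue:
        stamp, pages = queue.pop(0)
        waits.append(t - stamp)            # the same computation as waitTime(currentSecond)
        busy_until = t + pages * 60 // 20  # 20 ppm -> 3 seconds per page
print(waits)                               # [0, 23]: the second task waited 23 seconds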

Related

Python Async Limit Concurrent coroutines per second

My use case is the following:
I'm using Python 3.8.
I have an async function analyse_doc that is a wrapper for an HTTP request to a web service.
I have approx 1000 docs to analyse as fast as possible. The service allows for 15 transactions per second (and not 15 concurrent requests at any second). So the first second I can send 15, then the 2nd second I can send 15 again, and so on. If I try to hit the service more than 15 times per second I get a 429 error, or sometimes a 503/504 error (server is busy…).
My question is: is it possible to implement something in Python that effectively sends 15 requests per second asynchronously, then waits 1 second, then does it again until the queue is empty? Also, some tasks might fail. Those failing tasks might need a rerun at some point.
So far my code is the following (unbounded parallelism… not even a semaphore), but it handles retry.
tasks = {asyncio.create_task(analyse_doc(doc)): doc for doc in documents}
pending = set(tasks)
# Handle retry
while pending:
    # backoff in case of 429
    time.sleep(1)
    # concurrent call, return_when all completed
    finished, pending = await asyncio.wait(
        pending, return_when=asyncio.ALL_COMPLETED
    )
    # check if a task has an exception and register it for a new run
    for task in finished:
        doc = tasks[task]
        if task.exception():
            new_task = asyncio.create_task(analyse_doc(doc))
            tasks[new_task] = doc
            pending.add(new_task)
You could try adding another sleep task into the mix to drive the request generation. Something like this:
import asyncio
import random

ONE_SECOND = 1
CONCURRENT_TASK_LIMIT = 2
TASKS_TO_CREATE = 10

loop = asyncio.new_event_loop()

work_todo = []
work_in_progress = []

# just creates arbitrary work to do
def create_tasks():
    for i in range(TASKS_TO_CREATE):
        work_todo.append(worker_task(i))
    # muddle this up to see how drain works
    random.shuffle(work_todo)

# represents the actual work
async def worker_task(index):
    print(f"i am worker {index} and i am starting")
    await asyncio.sleep(index)
    print(f"i am worker {index} and i am done")

# gets the next 'concurrent' workload segment (if there is one)
def get_next_tasks():
    todo = []
    i = 0
    while i < CONCURRENT_TASK_LIMIT and len(work_todo) > 0:
        todo.append(work_todo.pop())
        i += 1
    return todo

# drains down any outstanding tasks and closes the loop
async def are_we_done_yet():
    print('draining')
    await asyncio.gather(*work_in_progress)
    loop.stop()
    # closes out the program
    print('done')

# puts work on the queue every tick (1 second)
async def work():
    next_tasks = get_next_tasks()
    if len(next_tasks) > 0:
        print(f'found {len(next_tasks)} tasks to do')
        for task in next_tasks:
            # schedules the work, puts it in the in-progress pile
            work_in_progress.append(loop.create_task(task))
        # this is the 'tick' or speed work gets scheduled on
        await asyncio.sleep(ONE_SECOND)
        # every 'tick' we add this task onto the loop again unless there isn't any more to do...
        loop.create_task(work())
    else:
        # ... if there isn't any to do we just enter drain mode
        await are_we_done_yet()

# bootstrap the process
create_tasks()
loop.create_task(work())
loop.run_forever()
Updated version with a simulated exception
import asyncio
import random

ONE_SECOND = 1
CONCURRENT_TASK_LIMIT = 2
TASKS_TO_CREATE = 10

loop = asyncio.new_event_loop()

work_todo = []
work_in_progress = []

# just creates arbitrary work to do
def create_tasks():
    for i in range(TASKS_TO_CREATE):
        work_todo.append(worker_task(i))
    # muddle this up to see how drain works
    random.shuffle(work_todo)

# represents the actual work
async def worker_task(index):
    try:
        print(f"i am worker {index} and i am starting")
        await asyncio.sleep(index)
        if index % 9 == 0:
            print('simulating error')
            raise NotImplementedError("some error happened")
        print(f"i am worker {index} and i am done")
    except:
        # put this work back on the pile (fudge the index so it doesn't throw this time)
        work_todo.append(worker_task(index + 1))

# gets the next 'concurrent' workload segment (if there is one)
def get_next_tasks():
    todo = []
    i = 0
    while i < CONCURRENT_TASK_LIMIT and len(work_todo) > 0:
        todo.append(work_todo.pop())
        i += 1
    return todo

# drains down any outstanding tasks and closes the loop
async def are_we_done_yet():
    print('draining')
    await asyncio.gather(*work_in_progress)
    if len(work_todo) > 0:
        loop.create_task(work())
        print('found some retries')
    else:
        loop.stop()
        # closes out the program
        print('done')

# puts work on the queue every tick (1 second)
async def work():
    next_tasks = get_next_tasks()
    if len(next_tasks) > 0:
        print(f'found {len(next_tasks)} tasks to do')
        for task in next_tasks:
            # schedules the work, puts it in the in-progress pile
            work_in_progress.append(loop.create_task(task))
        # this is the 'tick' or speed work gets scheduled on
        await asyncio.sleep(ONE_SECOND)
        # every 'tick' we add this task onto the loop again unless there isn't any more to do...
        loop.create_task(work())
    else:
        # ... if there isn't any to do we just enter drain mode
        await are_we_done_yet()

# bootstrap the process
create_tasks()
loop.create_task(work())
loop.run_forever()
This just simulates something going wrong and re-queues the failed task. If the error happens after the main work method has finished, the task won't get re-queued by work() itself, which is why the are-we-done-yet method checks for retries and reruns them. This isn't particularly optimal, as it waits for the drain before checking everything else, but it gives you an idea of an implementation.
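For a variant that tracks the original 15-transactions-per-second requirement more directly, here is a compact sketch of the same tick idea using asyncio.run. analyse_doc is the function named in the question; its body below is a simulated stand-in, and retries are left to the approach above:

import asyncio

RATE_LIMIT = 15  # transactions the service allows per second

async def analyse_doc(doc):
    await asyncio.sleep(2)  # stand-in for the real HTTP request
    return doc

async def run_all(docs):
    in_flight = []
    for i in range(0, len(docs), RATE_LIMIT):
        # start at most RATE_LIMIT requests, then wait out the second
        for doc in docs[i:i + RATE_LIMIT]:
            in_flight.append(asyncio.create_task(analyse_doc(doc)))
        await asyncio.sleep(1)
    return await asyncio.gather(*in_flight)

results = asyncio.run(run_all(list(range(40))))
print(len(results))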

Python CPU Scheduler Simulator

So I have FCFS and SJF CPU scheduling algorithms in my simulator; however, I'm struggling to implement the shortest remaining time first algorithm.
This is what I have so far.
def srtf(submit_times, burst_times):
    """First Come First Serve Algorithm returns the time metrics"""
    cpu_clock = 0
    job = 0
    response_times = []
    turn_around_times = []
    wait_times = []
    total_jobs = []
    remaining_burst_times = []
    for stuff in range(len(submit_times)):
        total_jobs.append(tuple((submit_times[stuff], burst_times[stuff])))
        remaining_burst_times.append(burst_times[stuff])
    while job < len(submit_times):
        if cpu_clock < int(submit_times[job]):
            cpu_clock = int(submit_times[job])
        ready_queue = []
        for the_job in total_jobs:
            job_time = int(the_job[0])
            if job_time <= cpu_clock:
                ready_queue.append(the_job)
        short_job = ready_queue_bubble(ready_queue)
        submit, burst = short_job[0], short_job[1]
        next_event = cpu_clock + int(burst)
        response_time = cpu_clock - int(submit)
        response_times.append(response_time)
        remaining_burst_times[job] = next_event - cpu_clock
        # cpu_clock = next_event
        if remaining_burst_times[job] == 0:
            turn_around_time = next_event - int(submit)
            wait_time = turn_around_time - int(burst)
            turn_around_times.append(turn_around_time)
            wait_times.append(wait_time)
        else:
            pass
        job += 1
        total_jobs.remove(short_job)
        remaining_burst_times.remove(short_job[1])
    return response_times, turn_around_times, wait_times
Basically, the function takes in a list of submit times and a list of burst times and returns lists of the response, turnaround and wait times. I have been trying to edit the remnants of my shortest job first (with a ready queue), to no avail.
Can anyone point me in the right direction?
It's not a very simple simulation due to preemption. Designing simulations is all about representing 1) the state of the world and 2) events that act on the world.
State of the world here is:
Processes. These have their own internal state.
Submit time (immutable)
Burst time (immutable)
Remaining time (mutable)
Completion time (mutable)
Wall clock time.
Next process to be submitted.
Running process.
Run start time (of the currently running process).
Waiting runnable processes (i.e. past submit with remaining > 0).
There are only two kinds of events.
A process's submit time occurs.
The running process completes.
When there are no more processes waiting to be submitted, and no process is running, the simulation is over. You can get the statistics you need from the processes.
The algorithm initializes the state, then executes a standard event loop:
processes = list of Process built from parameters, sorted by submit time
wall_clock = 0
next_submit = 0   # index in list of processes
running = None    # index of running process
run_start = None  # start of current run
waiting = []
while True:
    event = GetNextEvent()
    if event is None:
        break
    wall_clock = event.time
    if event.kind == 'submit':
        # Update state for new process submission.
    else:  # event.kind is 'completion'
        # Update state for running process completion.
An important detail is that if completion and submit events happen at the same time, process the completion first. The other way 'round makes update logic complicated; a running process with zero time remaining is a special case.
The "update state" methods adjust all the elements of the state according to the srtf algorithm. Roughly like this...
def UpdateStateForProcessCompletion():
    # End the run of the running process.
    processes[running].remaining = 0
    processes[running].completion_time = wall_clock
    # Schedule a new one, if any are waiting.
    running = PopShortestTimeRemainingProcess(waiting)
    run_start = wall_clock if running is not None else None
A new submit is more complex.
def UpdateStateForProcessSubmit():
    new_process = next_submit
    next_submit += 1
    new_time_remaining = processes[new_process].remaining
    # Maybe preempt the running process.
    if running is not None:
        # Get the running process's updated remaining time.
        running_time_remaining = processes[running].remaining - (wall_clock - run_start)
        # We only need to look at the new and running processes.
        # Waiting ones can't win because they already lost to the running one.
        if new_time_remaining < running_time_remaining:
            # Preempt.
            processes[running].remaining = running_time_remaining
            waiting.append(running)
            running = new_process
            run_start = wall_clock
        else:
            # New process waits. Nothing else changes.
            waiting.append(new_process)
    else:
        # Nothing's running. Run the newly submitted process.
        running = new_process
        run_start = wall_clock
The only thing left is getting the next event. You need only inspect processes[next_submit].submit and run_start + processes[running].remaining. Choose the smallest. The event has that time and the respective type. Of course you need to deal with the cases where next_submit and/or running are None.
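A sketch of what GetNextEvent could look like with the state above (Event is a hypothetical named tuple; sorting 'completion' before 'submit' on ties gives the completion-first rule mentioned earlier):

from collections import namedtuple

Event = namedtuple('Event', ['time', 'kind'])

def GetNextEvent():
    candidates = []
    if next_submit is not None and next_submit < len(processes):
        candidates.append(Event(processes[next_submit].submit, 'submit'))
    if running is not None:
        candidates.append(Event(run_start + processes[running].remaining, 'completion'))
    if not candidates:
        return None  # nothing left to submit or run: the simulation is over
    # min over (time, kind); 'completion' < 'submit', so completions win ties
    return min(candidates, key=lambda e: (e.time, e.kind))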
I may not have everything perfect here, but it's pretty close.
Addition
Hope you're done with your homework by this time. This is fun to code up. I ran it on this example, and the trace matches well. Cheers
import heapq as pq

class Process(object):
    """A description of a process in the system."""
    def __init__(self, id, submit, burst):
        self.id = id
        self.submit = submit
        self.burst = burst
        self.remaining = burst
        self.completion = None
        self.first_run = None

    @property
    def response(self):
        return None if self.first_run is None else self.first_run - self.submit

    @property
    def turnaround(self):
        return None if self.completion is None else self.completion - self.submit

    @property
    def wait(self):
        return None if self.turnaround is None else self.turnaround - self.burst

    def __repr__(self):
        return f'P{self.id} @ {self.submit} for {self.burst} ({-self.remaining or self.completion})'

def srtf(submits, bursts):
    # Make a list of processes in submit time order.
    processes = [Process(i + 1, submits[i], bursts[i]) for i in range(len(submits))]
    processes_by_submit_asc = sorted(processes, key=lambda x: x.submit)
    process_iter = iter(processes_by_submit_asc)

    # The state of the simulation:
    wall_clock = 0                          # Wall clock time.
    next_submit = next(process_iter, None)  # Next process to be submitted.
    running = None                          # Running process.
    run_start = None                        # Time the running process started running.
    waiting = []                            # Heap of waiting processes. Pop gets min remaining.

    def run(process):
        """Switch the running process to the given one, which may be None."""
        nonlocal running, run_start
        running = process
        if running is None:
            run_start = None
            return
        running.first_run = running.first_run or wall_clock
        run_start = wall_clock

    while next_submit or running:
        print(f'Wall clock: {wall_clock}')
        print(f'Running: {running} since {run_start}')
        print(f'Waiting: {waiting}')
        # Handle completion first, if there is one.
        if running and (next_submit is None or run_start + running.remaining <= next_submit.submit):
            print('Complete')
            wall_clock = run_start + running.remaining
            running.remaining = 0
            running.completion = wall_clock
            run(pq.heappop(waiting)[1] if waiting else None)
            continue
        # Handle a new submit, if there is one.
        if next_submit and (running is None or next_submit.submit < run_start + running.remaining):
            print(f'Submit: {next_submit}')
            new_process = next_submit
            next_submit = next(process_iter, None)
            wall_clock = new_process.submit
            new_time_remaining = new_process.remaining
            if running:
                # Maybe preempt the running process. Otherwise new process waits.
                running_time_remaining = running.remaining - (wall_clock - run_start)
                if new_time_remaining < running_time_remaining:
                    print('Preempt!')
                    running.remaining = running_time_remaining
                    pq.heappush(waiting, (running_time_remaining, running))
                    run(new_process)
                else:
                    pq.heappush(waiting, (new_time_remaining, new_process))
            else:
                run(new_process)

    for p in processes:
        print(f'{p} {p.response} {p.turnaround} {p.wait}')
    return ([p.response for p in processes],
            [p.turnaround for p in processes],
            [p.wait for p in processes])

submits = [6, 3, 4, 1, 2, 5]
bursts = [1, 3, 6, 5, 2, 1]
print(srtf(submits, bursts))

Python - Exiting while loop externally

I am writing a web server that will log temperatures. The user clicks "collect data" on the web interface, which then triggers a Flask function to run a "collect temperature" function that just collects temperature data indefinitely. I then want the user to be able to hit a "stop data collection" button that stops the collect temperature function's while loop.
The problem (my understanding at least) boils down to something like the following code:
class myClass:
    counterOn = 0
    num = 0

    def __init__(self):
        self.num = 0

    def setCounterOn(self, value):
        self.counterOn = value

    def printCounterOn(self):
        print self.counterOn

    def count(self):
        while True:
            if self.counterOn == 1:
                self.num += 1
                print self.num
            time.sleep(1)
then the server file:
myCounter = myClass.myClass()
myCounter.setCounterOn(1)
myCounter.count()
time.sleep(5)
myCounter.setCounterOn(0)
Ideally, I would like the server file to create a counter object, then turn the counter function on and off externally. As it functions now, it is stuck in the while loop. I tried threading, only to discover you can't pause or stop a thread. Am I looking at this completely wrong, or is it as simple as a try/except?
Edit:
The external file idea is great. I was having some trouble parsing the text file consistently across my functions and wound up stumbling across ConfigParser to read .ini files. I think I'm going to go that way, since eventually I want a PID controller controlling the temperature, and it will be great to be able to store configurations externally.
I implemented just a while loop that looped forever and only recorded if it saw the config file configured to collect. The problem was that, in my Flask file, I would run
@app.route('/startCollection', methods=['POST'])
def startCollectData():
    print "collectPressed"
    config.read('config.ini')
    config.set('main', 'counterOn', '1')
    with open('config.ini', 'w') as f:
        config.write(f)
    C.count()
    return "collect data pressed"

@app.route('/stopCollection', methods=['POST'])
def stopCollectData():
    print "stop hit"
    config.read('config.ini')
    config.set('main', 'counterOn', '0')
    with open('config.ini', 'w') as f:
        config.write(f)
    C.count()
    return "stop pressed"

def count(self):
    while True:
        self.config.read('config.ini')
        print self.num
        time.sleep(1)
        if self.config.get('main', 'counterOn') == '1':
            self.num += 1
From my observation, startCollectData was getting stuck on count(). It would never return, so when I then tried to stop data collection, the Flask script wasn't free to interpret the stop command.
So I moved on to the mutex. That is exactly the functionality I thought would come out of the box with threads. It seems to be working fine, other than that there is usually a really long delay the 2nd time I stop collection.
@app.route('/')
def main():
    print "MYLOG - asdf"
    cls.start()
    cls.pause()
    return render_template('index.html')

@app.route('/startCollection', methods=['POST'])
def startCollectData():
    print "collectPressed"
    cls.unpause()
    return "collect data pressed"

@app.route('/stopCollection', methods=['POST'])
def stopCollectData():
    print "stop hit"
    cls.pause()
    return "collect data pressed"
This results in the following output if I click start, stop, start, then stop:
collectPressed
1
10.240.0.75 - - [22/Apr/2016 15:58:42] "POST /startCollection HTTP/1.1" 200 -
2
3
4
5
6
7
8
9
stop hit
10.240.0.207 - - [22/Apr/2016 15:58:51] "POST /stopCollection HTTP/1.1" 200 -
collectPressed
10
10.240.0.166 - - [22/Apr/2016 15:58:57] "POST /startCollection HTTP/1.1" 200 -
11
12
13
14
15
16
stop hit
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
10.240.0.75 - - [22/Apr/2016 15:59:24] "POST /stopCollection HTTP/1.1" 200 -
So I hit stop, then it collects for 20 more seconds, and then it finally stops. My collection points are going to be 5 minutes apart, so it's not a big deal, but I'm just curious.
import threading
import time

class myThread(threading.Thread):
    num = 0

    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter
        self.mutex = threading.Lock()
        self.paused = False

    def pause(self):
        if not self.paused:
            self.mutex.acquire()
            self.paused = True

    def unpause(self):
        self.mutex.release()
        self.paused = False

    def run(self):
        print "starting" + self.name
        while True:
            self.mutex.acquire()
            self.num += 1
            print self.num
            time.sleep(1)
            self.mutex.release()
Anyways, thanks for the help. I've been stuck on how to handle this for about 4 months, and it's great to finally make some progress on it!
Edit 2
Actually, I just ran it again and it took 100 seconds for it to actually stop counting. That's not going to cut it. Any idea what's going on?
I would try using threads again. The fact of the matter is that you have a computation that needs to run while another instruction sequence (namely the GUI logic) also needs to execute.
I would approach the problem with mutexes (a standard concurrency control technique), which can supply pause/unpause functionality:
import time
import threading

class myClass(threading.Thread):
    num = 0

    def __init__(self):
        super(myClass, self).__init__()
        self.num = 0
        self.mutex = threading.Lock()
        self.paused = False

    def pause(self):
        if not self.paused:
            self.mutex.acquire()
            self.paused = True

    def unpause(self):
        self.mutex.release()
        self.paused = False

    def run(self):
        while True:
            self.mutex.acquire()
            self.num += 1
            print self.num
            time.sleep(1)
            self.mutex.release()

cls = myClass()
cls.start()
time.sleep(10)
cls.pause()
time.sleep(2)
cls.unpause()
time.sleep(2)
And this should output: (or something similar)
1
2
3
4
5
6
7
8
9
10
(wait)
11
12
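On the long pause delay reported in Edit 2: a plausible cause is that a Lock is not a fair queue. run() releases the mutex and then immediately tries to re-acquire it, so a pause() call can lose that race for many one-second cycles in a row. A variant of the same class built on threading.Event avoids the race entirely; a sketch:

import time
import threading

class myClass(threading.Thread):
    def __init__(self):
        super(myClass, self).__init__()
        self.num = 0
        self.running = threading.Event()  # set = counting, cleared = paused

    def pause(self):
        self.running.clear()  # takes effect within one loop iteration

    def unpause(self):
        self.running.set()

    def run(self):
        while True:
            self.running.wait()  # parks here immediately while paused
            self.num += 1
            print self.num
            time.sleep(1)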

Python multiple Event Timer

I have a water pump with pressure sensors: one on the input (low) and one on the output (high). My problem is my low-pressure sensor. Sometimes the low pressure is just at the cut-off point, causing the motor to start and stop quickly - this is not desirable. The system is running on a home-made PLS.
I'm a beginner at programming (3 months), but the system is working for the most part. I need help creating a timer between low-pressure alarm events. I am thinking that the system can have 3 events within 30 seconds, but if any one event occurs less than 5 seconds after the previous one, the system should shut down.
So if less than 5 seconds pass between the first event and the second event, the motor shuts down for good. The same goes for the second to third and the third to fourth events. On the fourth event, if less than 30 seconds have passed between the first event and the fourth, the system also shuts down for good. Keep in mind that this is part of a much larger loop. Here is the code I was able to create:
def Systemofftimer():
    EventCounter = 0
    OneTimeLoopVarable = 0
    while True:
        if is_low_pressure_alarm_on() and OneTimeLoopVarable == 0:
            Timer = time.time()
            EventCounter = EventCounter + 1
            OneTimeLoopVarable = 1
        if EventCounter == 2 and time.time() - Timer >= 10:
            EventCounter = EventCounter + 1
            stop_motor()
        if EventCounter == 3 and time.time() - Timer >= 20:
            EventCounter = EventCounter + 1
            stop_motor()
        if EventCounter == 4 and time.time() - Timer >= 30:
            EventCounter = EventCounter + 1
            stop_motor()
        else:
            start_motor()
I would actually use a different approach for this: simply make your threshold for turning on larger than your threshold for turning off. For example:
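A minimal sketch of that hysteresis idea; the threshold values here are made up, and start_motor()/stop_motor() are the functions from the code above:

ON_THRESHOLD = 50   # pressure must recover above this to start the motor
OFF_THRESHOLD = 40  # pressure must fall below this to stop the motor

def control_motor(pressure, motor_running):
    # The gap between the thresholds absorbs sensor noise near the
    # cut-off point, so the motor can't rapidly start and stop.
    if motor_running and pressure < OFF_THRESHOLD:
        stop_motor()
        return False
    if not motor_running and pressure > ON_THRESHOLD:
        start_motor()
        return True
    return motor_running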
That way you don't need to deal with the timing of it and can still eliminate the jittery nature around your state transition. You can also tune this to account for how noisy your sensors are.
Edit:
Below I've mocked up the piece of your system you're asking about. It's probably way more than you were initially looking for, but I wanted to make sure it all worked properly before I posted, so you're welcome to use it in whole or in part. As for the timer you asked about, it's based on Hans Then's post in this thread. To trigger an alarm, you just call TriggerAlarmEvent() on the PumpSystem class. It will log that an alarm was triggered and then check the two conditions you mentioned in your question (the 5-second and 30-second errors). Each element of self.alarms contains the number of alarms that happened in a particular second, and each second the timer fires to remove the oldest second from the list and append a fresh one. If you run the program, you can trigger alarms by pressing the spacebar and see how the list is updated. The MockUp class is just meant to test and demonstrate how this works; I imagine you'll remove it if you decide to plug some portion of this into what you're working on. Anyway, here's the code.
from threading import Timer, Thread, Event

class PumpSystem():
    def __init__(self):
        self.alarms = [0 for x in range(30)]
        self.Start()
        return None

    def SetUpdateFlag(self, flag):
        self.update_flag = flag
        return True

    def Start(self):
        self.stop_flag = Event()
        self.thread = ClockTimer(self.OnTimerExpired, self.stop_flag)
        self.thread.start()
        return True

    def Stop(self):
        self.stop_flag.set()
        return True

    def TriggerAlarmEvent(self):
        self.alarms[-1] += 1
        self.CheckConditions()
        self.update_flag.set()
        return True

    def OnTimerExpired(self):
        self.UpdateRunningAverage()

    def CheckConditions(self):
        # Check if another error has triggered in the past 5 seconds
        if sum(self.alarms[-5:]) > 1:
            print('5 second error')
        # Check if more than 3 errors have triggered in the past 30 seconds
        if sum(self.alarms) > 3:
            print('30 second error')
        return True

    def UpdateRunningAverage(self):
        self.alarms.append(0)
        self.alarms.pop(0)
        self.update_flag.set()
        return True

class ClockTimer(Thread):
    def __init__(self, callback, event):
        Thread.__init__(self)
        self.callback = callback
        self.stopped = event
        return None

    def SetInterval(self, time_in_seconds):
        self.delay_period = time_in_seconds
        return True

    def run(self):
        while not self.stopped.wait(1.0):
            self.callback()
        return True
## START MOCKUP CODE ##
import tkinter as tk

class MockUp():
    def __init__(self):
        self.pump_system = PumpSystem()
        self.update_flag = Event()
        self.pump_system.SetUpdateFlag(self.update_flag)
        self.StartSensor()
        return None

    def StartSensor(self):
        self.root = tk.Tk()
        self.root.protocol("WM_DELETE_WINDOW", self.Exit)
        self.alarms = tk.StringVar()
        w = tk.Label(self.root, textvariable=self.alarms, width=100, height=15)
        self.alarms.set(self.pump_system.alarms)
        w.pack()
        self.root.after('idle', self.ManageUpdate)
        self.root.bind_all('<Key>', self.ManageKeypress)
        self.root.mainloop()
        return True

    def ManageUpdate(self):
        if self.update_flag.isSet():
            self.alarms.set(self.pump_system.alarms)
            self.update_flag.clear()
        self.root.after(1, self.ManageUpdate)
        return True

    def ManageKeypress(self, event):
        if event.keysym == 'Escape':
            self.Exit()
        if event.keysym == 'space':
            self.pump_system.TriggerAlarmEvent()
        return True

    def Exit(self):
        self.pump_system.Stop()
        self.root.destroy()

mockup = MockUp()
This may look like a lot, but half of it is the mockup class, which you can probably just ignore. Let me know if there's anything you're confused about, and I'd be happy to explain what's happening.

Is it possible to execute function every x seconds in python, when it is performing pool.map?

I am running pool.map on a big data array, and I want to print a report in the console every minute.
Is it possible? As I understand it, Python is a synchronous language and can't do this the way Node.js can.
Perhaps it can be done by threading... or how?
finished = 0

def make_job():
    sleep(1)
    global finished
    finished += 1

# I want to call this function every minute
def display_status():
    print 'finished: ' + str(finished)

def main():
    data = [...]
    pool = ThreadPool(45)
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
You can use a permanent threaded timer, like those from this question: Python threading.timer - repeat function every 'n' seconds
from threading import Timer, Event

class perpetualTimer(object):
    # give it a cycle time (t) and a callback (hFunction)
    def __init__(self, t, hFunction):
        self.t = t
        self.stop = Event()
        self.hFunction = hFunction
        self.thread = Timer(self.t, self.handle_function)

    def handle_function(self):
        self.hFunction()
        self.thread = Timer(self.t, self.handle_function)
        if not self.stop.is_set():
            self.thread.start()

    def start(self):
        self.stop.clear()
        self.thread.start()

    def cancel(self):
        self.stop.set()
        self.thread.cancel()
Basically this is just a wrapper for a Timer object that creates a new Timer object every time your desired function is called. Don't expect millisecond accuracy (or even close) from this, but for your purposes it should be ideal.
Using this, your example would become:
finished = 0

def make_job():
    sleep(1)
    global finished
    finished += 1

def display_status():
    print 'finished: ' + str(finished)

def main():
    data = [...]
    pool = ThreadPool(45)
    # set up the monitor to run the function every minute
    monitor = perpetualTimer(60, display_status)
    monitor.start()
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
    monitor.cancel()
EDIT:
A cleaner solution may be (thanks to comments below):
from threading import Event, Thread

class RepeatTimer(Thread):
    def __init__(self, t, callback, event):
        Thread.__init__(self)
        self.stop = event
        self.wait_time = t
        self.callback = callback
        self.daemon = True

    def run(self):
        while not self.stop.wait(self.wait_time):
            self.callback()
Then in your code:
def main():
    data = [...]
    pool = ThreadPool(45)
    stop_flag = Event()
    RepeatTimer(60, display_status, stop_flag).start()
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
    stop_flag.set()
One way to do this is to use the main thread as the monitoring one. Something like below should work:
def main():
    data = [...]
    results = []
    step = 0
    pool = ThreadPool(16)
    pool.map_async(make_job, data, callback=results.extend)
    pool.close()
    while True:
        if results:
            break
        step += 1
        sleep(1)
        if step % 60 == 0:
            print "status update" + ...
I've used .map_async() instead of .map(), as the latter is synchronous. Also, you will probably need to replace results.extend with something more efficient. And finally, due to the GIL, the speed improvement may be much smaller than expected.
BTW, it is a little bit funny that you wrote that Python is synchronous in a question that asks about ThreadPool ;).
Consider using the time module. The time.time() function returns the current UNIX time.
For example, calling time.time() right now returns 1410384038.967499. One second later, it will return 1410384039.967499.
The way I would do this would be to use a while loop in place of results = pool(...), and on every iteration run a check like this:
last_time = time.time()
while (...):
    new_time = time.time()
    if new_time > last_time + 60:
        print "status update" + ...
        last_time = new_time
    (your computation here)
That will check whether (at least) a minute has elapsed since your last status update, so it should print a status update approximately every sixty seconds.
Sorry that this is an incomplete answer, but I hope this helps or gives you some useful ideas.
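For reference, here is a runnable combination of the two ideas above (map_async plus a once-a-second poll); the workload below is a stand-in:

import time
from multiprocessing.pool import ThreadPool

finished = 0

def make_job(item):
    global finished
    time.sleep(1)      # stand-in for the real work
    finished += 1

def main():
    data = range(300)  # stand-in for the real data array
    pool = ThreadPool(45)
    result = pool.map_async(make_job, data)
    last_report = time.time()
    while not result.ready():
        time.sleep(1)
        if time.time() - last_report >= 60:
            print('finished: %d' % finished)
            last_report = time.time()
    pool.close()
    pool.join()

main()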
