I was trying to make a recursive realtime simulation to see if it was possible within the SimPy framework.
The function was meant to record results into a log dictionary...
The Jupyter kernel stopped working and shut down when I ran this code. Why?
def simStudy(env, AVERAGE_PROGRAMME_LENGTH):
    i = 0
    while i < int(3000/TYPING_RATE):
        ans = input("Target achieved?")
        log[env.now] = int(bool(ans))
        print(log)
        AVERAGE_PROGRAMME_LENGTH -= TYPING_RATE
        yield env.timeout(1)

env = sim.rt.RealtimeEnvironment(factor = 10, strict = True)

def startSim():
    try:
        env.process(simStudy(env, AVERAGE_PROGRAMME_LENGTH))
        env.run()
    except RuntimeError as e:
        startSim()
        print(str(e)[-8:-3])

startSim()
I do not use recursion very often in simulation. Most of my processes have a queue, and if an entity needs the same process again, I just put it back into the queue and let the queue processor process it multiple times.
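As a rough sketch of that queue-based pattern (not part of the onion examples below; it assumes a simpy.Store used as the queue and a generic entity dict):

import simpy

def queue_processor(env, queue):
    """worker that re-queues entities that still need more passes"""
    while True:
        entity = yield queue.get()
        yield env.timeout(1)                  # do one pass of work
        entity['passes_left'] -= 1
        if entity['passes_left'] > 0:
            # the entity needs the same process again, so put it back in the queue
            yield queue.put(entity)

env = simpy.Environment()
queue = simpy.Store(env)
queue.items = [{'id': i, 'passes_left': 3} for i in range(2)]
env.process(queue_processor(env, queue))
env.run(20)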
The example is peeling the layers of an onion, one layer at a time.
Here it is with recursion:
"""
demonstrates how recurssion can be used in simpy
by simulating the pealing of a onion
programmer: Michael R. Gibbs
"""
import simpy
import random
class Onion():
"""
Simple onion object
id: unique oning id
layers: number of layers to peal
"""
# class var for generating ids
id = 0
def __init__(self):
"""
initialize onion with unique id and random number of layers
"""
Onion.id += 1
self.id = Onion.id
self.layers = random.randint(5,9)
def get_onions(env, res):
"""
sim startup
generate a bunch of onions to be pealed and
start pealing them
"""
for i in range(5):
onion = Onion()
env.process(peal_onion_layer(env,res,onion,0))
# note the lack of of a yeild so all onions get processed in parallel
def peal_onion_layer(env,res,onion,pealed):
"""
gets a pealer resource and peals one onion layer
and release pealer.
will need to get the pealer resource again if another layer
needs to be pealed
"""
with res.request() as req: # Generate a request event
yield req # Wait for access
yield env.timeout(1) # peal
# release resource
pealed += 1
print(f"{env.now} onion: {onion.id} pealed layer {pealed}, {onion.layers - pealed} layers to go")
if onion.layers <= pealed:
#finish, stop recursion
print(f"{env.now} onion: {onion.id} is pealed, {pealed} layers")
else:
# still have layers to peal, recurse
yield env.process(peal_onion_layer(env,res,onion,pealed))
# do any post recurse acions here
print(f"{env.now} onion: {onion.id} exiting layer {pealed}")
# start up recursion
env = simpy.Environment()
res = simpy.Resource(env, capacity=2)
get_onions(env,res)
env.run()
Here is the same sim without recursion
"""
demonstrates how queue can be used instead of recursion
by simulating the pealing of a onion
programmer: Michael R. Gibbs
"""
import simpy
import random
class Onion():
"""
Simple onion object
id: unique oning id
layers: number of layers to peal
"""
# class var for generating ids
id = 0
def __init__(self):
"""
initialize onion with unique id and random number of layers
"""
Onion.id += 1
self.id = Onion.id
self.layers = random.randint(5,9)
def get_onions(env, res):
"""
sim startup
generate a bunch of onions to be pealed and
start pealing them
"""
for i in range(5):
onion = Onion()
env.process(peal_onion_layer(env,res,onion,0))
# note the lack of of a yeild so all onions get processed in parallel
def peal_onion_layer(env,res,onion,pealed):
"""
gets a pealer resource and peals one onion layer
and release pealer.
will need to get the pealer resource again if another layer
needs to be pealed
"""
with res.request() as req: # Generate a request event
yield req # Wait for access
yield env.timeout(1) # peal
# release resource
pealed += 1
print(f"{env.now} onion: {onion.id} pealed layer {pealed}, {onion.layers - pealed} layers to go")
if onion.layers <= pealed:
#finish, stop recursion
print(f"{env.now} onion: {onion.id} is pealed, {pealed} layers")
else:
# still have layers to peal, recurse
env.process(peal_onion_layer(env,res,onion,pealed))
# start up recursion
env = simpy.Environment()
res = simpy.Resource(env, capacity=2)
get_onions(env,res)
env.run()
I am currently working on a model which simulates a delivery process throughout an operating day in simpy. The deliveries are conducted by riders which are delivering one package at a time. The packages need to be delivered within a certain time window. In addition, the system deals with fluctuating demand throughout the day. Therefore, I would like to adapt the number of employees available to the system matching the fluctuating demand on an hourly basis.
I modelled the riders as a resource with a certain capacity. Is there any possibility to adjust the capacity of the resource during a simulation run or are there other ways to model the riders with my system?
I already looked for possible solutions within the SimPy documentation, examples, and other posts, but have not been successful yet. I would be very grateful for any advice or possible solutions! Thank you in advance.
Use a Store instead of a Resource. A Resource has a fixed number of resources, while a Store works a bit more like a queue backed by a list, with an optional max capacity. To reduce the number of riders in the store, just stop putting the rider object back into the store.
Here is an example where I wrapped a store with a class to manage the number of riders.
"""
Simple demo of a pool of riders delivering packages where the
number of riders can change over time
Idel riders are keep in a store that is wrapped in a class
to manage the number of riders
Programmer Michael R. Gibbs
"""
import simpy
import random
class Rider():
"""
quick class to track the riders that deliver packanges
"""
# tranks the next id to be assiend to a rider
next_id = 1
def __init__(self):
self.id = Rider.next_id
Rider.next_id += 1
class Pack():
"""
quick class to track the packanges
"""
# tranks the next id to be assiend to a pack
next_id = 1
def __init__(self):
self.id = Pack.next_id
Pack.next_id += 1
class RiderPool():
"""
Pool of riders where the number of riders can be changed
"""
def __init__(self, env, start_riders=10):
self.env = env
# tracks the number of ridres we need
self.target_cnt = start_riders
# tracks the number of riders we have
self.curr_cnt = start_riders
# the store idel riders
self.riders = simpy.Store(env)
# stores do not start with objects like resource pools do.
# need to add riders yourself as part of set up
self.riders.items = [Rider() for _ in range(start_riders)]
def add_rider(self):
"""
Add a rider to the pool
"""
self.target_cnt += 1
if self.curr_cnt < self.target_cnt:
# need to add a rider to the pool to get to the target
rider = Rider()
self.riders.put(rider)
self.curr_cnt += 1
print(f'{env.now:0.2f} rider {rider.id} added')
else:
# already have enough riders,
# must have tried to reduce the rider pool while all riders were busy
# In effect we are cancelling a previous remove rider call
print(f'{env.now:0.2f} keeping rider scheduled to be removed instead of adding')
def remove_rider(self):
"""
Remove a ridder from the pool
If all the riders are busy, the actual removal of a rider
will happen when a that rider finishes it current task and is
tried to be put/returned back into the pool
"""
self.target_cnt -= 1
if self.curr_cnt > self.target_cnt:
if len(self.riders.items) > 0:
# we have a idel rider that we can remove now
rider = yield self.riders.get()
self.curr_cnt -= 1
print(f'{env.now:0.2f} rider {rider.id} removed from store')
else:
# wait for a rider th be put back to the pool
pass
def get(self):
"""
Get a rider from the pool
returns a get request that can be yield to, not a rider
"""
rider_req = self.riders.get()
return rider_req
def put(self, rider):
"""
put a rider pack into the pool
"""
if self.curr_cnt <= self.target_cnt:
# still need the rider
self.riders.put(rider)
else:
# have tool many riders, do not add back to pool
self.curr_cnt -= 1
print(f'{env.now:0.2f} rider {rider.id} removed on return to pool')
def gen_packs(env, riders):
"""
generates the arrival of packages to be delivered by riders
"""
while True:
yield env.timeout(random.randint(1,4))
pack = Pack()
env.process(ship_pack(env, pack, riders))
def ship_pack(env, pack, riders):
"""
The process of a rider delivering a packages
"""
print(f'{env.now:0.2f} pack {pack.id} getting rider')
rider = yield riders.get()
print(f'{env.now:0.2f} pack {pack.id} has rider {rider.id}')
# trip time
yield env.timeout(random.randint(5,22))
riders.put(rider)
print(f'{env.now:0.2f} pack {pack.id} delivered')
def rider_sched(env, riders):
"""
Changes the number of riders in rider pool over time
"""
yield env.timeout(30)
# time to remove a few riders
print(f'{env.now:0.2f} -- reducing riders')
print(f'{env.now:0.2f} -- request queue len {len(riders.riders.get_queue)}')
print(f'{env.now:0.2f} -- rider store len {len(riders.riders.items)}')
for _ in range(5):
env.process(riders.remove_rider())
yield env.timeout(60)
# time to add back some riders
print(f'{env.now:0.2f} -- adding riders ')
for _ in range(2):
riders.add_rider()
# run the model
env = simpy.Environment()
riders = RiderPool(env, 10)
env.process(gen_packs(env, riders))
env.process(rider_sched(env, riders))
env.run(100)
print(f'{env.now:0.2f} -- end rider count {riders.target_cnt}')
I have an endless loop problem in a SimPy simulation.
My scenario is:
After finishing triage, a patient waits to get an empty bed.
However, if there is no empty bed, the patient should wait until an empty bed is available.
When a patient uses a bed and leaves the hospital, the bed turns into a dirty bed.
The dirty bed turns into an empty bed when the cleaner cleans it.
Bed cleaning work starts after a clean request is received.
I used "Store" for managing beds
I think I have an infinite loop when there is no empty bed and no dirty bed.
I think...
Add a queue (After triage)
From the queue, assign a bed based on the FIFO.
If we don't have an empty or a dirty bed, we should wait until a dirty bed becomes available.
But, I don't know how to implement this idea.
Please help me to solve this problem.
import simpy
import random

class Pre_Define:
    warmup_period = 1440
    sim_duration = 14400
    number_of_runs = 3

class Patients:
    def __init__(self, p_id):
        self.id = p_id
        self.bed_name = ""
        self.admission_decision = ""

    def admin_decision(self):
        admin_decision_prob = random.uniform(0, 1)
        if admin_decision_prob <= 0.7:
            self.admission_decision = "DIS"

class Model:
    def __init__(self, run_number):
        self.env = simpy.Environment()
        self.pt_counter = 0
        self.tg = simpy.Resource(self.env, capacity = 4)
        self.physician = simpy.Resource(self.env, capacity = 4)
        self.bed_clean = simpy.Store(self.env, capacity = 77)
        self.bed_dirty = simpy.Store(self.env, capacity = 77)
        self.IU_bed = simpy.Resource(self.env, capacity = 50)
        self.bed_cleaner = simpy.Resource(self.env, capacity = 2)
        self.run_number = run_number

    def generate_beds(self):
        for i in range(77):
            yield self.env.timeout(0)
            yield self.bed_clean.put(f'bed{i}')
        print(self.bed_clean.items)

    def generate_pt_arrivals(self):
        while True:
            self.pt_counter += 1
            pt = Patients(self.pt_counter)
            yield self.env.timeout(1/7)
            self.env.process(self.Process(pt))

    def Process(self, Patients):
        with self.tg.request() as req:
            yield req
            triage_service_time = random.expovariate(1.0/18)
            yield self.env.timeout(triage_service_time)

        if self.bed_clean.items != []:
            get_empty_bed_name = yield self.bed_clean.get()
            Patients.bed_name = get_empty_bed_name
        elif self.bed_dirty.items != []:
            get_dirty_bed_name = yield self.bed_dirty.get()
            with self.bed_cleaner.request() as req:
                yield req
                yield self.env.timeout(50)
        else:
            print("NO BED, Should Wait!!")
            no_bed = True
            while no_bed:
                #print("Waiting dirty bed")
                if self.bed_dirty.items != []:
                    get_dirty_bed_name = yield self.bed_dirty.get()
                    print("Find dirty bed!")
                    with self.bed_cleaner.request() as req:
                        yield req
                        yield self.env.timeout(30)
                    Patients.bed_name = get_dirty_bed_name
                    no_bed = False

        with self.physician.request() as req:
            yield req
            yield self.env.timeout(10)
            Patients.admin_decision()

        if Patients.admission_decision == "DIS":
            with self.IU_bed.request() as req:
                yield req
                yield self.env.timeout(600)
            get_dirty_bed_name = Patients.bed_name
            yield self.bed_dirty.put(get_dirty_bed_name)
        else:
            get_dirty_bed_name = Patients.bed_name
            yield self.bed_dirty.put(get_dirty_bed_name)

    def run(self):
        self.env.process(self.generate_pt_arrivals())
        self.env.process(self.generate_beds())
        self.env.run(until = Pre_Define.warmup_period + Pre_Define.sim_duration)

for run in range(Pre_Define.number_of_runs):
    run_model = Model(run)
    run_model.run()
    print()
So you have the right idea: use two Stores to track dirty and clean beds. The trick is to request both a clean and a dirty bed at the same time, and discard the request you do not use.
The big change I made was to request both a clean bed and a dirty bed and use env.any_of() to get the first filled request. Note that both requests could get filled, so I need to either cancel the request I do not use or, if it was filled, return the unused bed to its queue. The other thing I did was to make a separate process for the cleaners, which means that both the patients and the cleaners compete for dirty beds.
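Boiled down to a runnable minimum, the request-both-and-discard pattern looks like this (a toy version of what triage() in the full model below does; the bed objects here are just strings):

import simpy

def patient(env, clean_q, dirty_q):
    # request both a clean and a dirty bed at the same time
    clean_req = clean_q.get()
    dirty_req = dirty_q.get()
    fired = yield env.any_of([clean_req, dirty_req])
    if clean_req in fired:
        bed = fired[clean_req]
        # deal with the other request: give it back if it filled, else cancel it
        if dirty_req in fired:
            dirty_q.put(fired[dirty_req])
        else:
            dirty_req.cancel()
    else:
        bed = fired[dirty_req]
        clean_req.cancel()
    print(f'{env.now} got {bed} bed')

env = simpy.Environment()
clean_q = simpy.Store(env)
dirty_q = simpy.Store(env)
clean_q.items = ['clean']
dirty_q.items = ['dirty']
env.process(patient(env, clean_q, dirty_q))
env.run()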
"""
Quick sim model of beds in a hosptial triage
beds have two states, clean and dirty
When a patient arrives they are assigned to a empty bed
If no beds are avail then the patient wait for first empty bed
If there is both a clean bed and a empty bed, the clean bed will be assigned
When the patient leaves triage the bed state is dirty.
empty dirty beads are queued to be clean, after cleaning bed state is clean
After being triaged, a patient is either admited or discharged.
If the patient is admitted, then they wait for a IU bed befor the triage bed is empty
Programmer: Michael R. Gibbs
"""
import simpy
import random
TRIAGE_TIME = 30
BED_CLEAN_TIME = 20
class Patient():
"""
has id to track patient progress
and bed when assigned to a bed
"""
next_id = 1
#classmethod
def get_next_id(cls):
id = cls.next_id
cls.next_id += 1
return id
def __init__(self):
self.id = self.get_next_id()
self.bed = None
class Bed():
"""
has id to track patient progress
and bed when assigned to a bed
"""
next_id = 1
#classmethod
def get_next_id(cls):
id = cls.next_id
cls.next_id += 1
return id
def __init__(self):
self.id = self.get_next_id()
self.bed_state = 'Clean'
def clean_beds(env, dirty_bed_q, clean_bed_q):
"""
Sim process for cleaning dirty beds from
the dirty queue, and putting the clean
beds into clean queue
A instance shoud be started for each cleaner
"""
while True:
bed = yield dirty_bed_q.get()
print(f'{env.now:.2f} bed {bed.id} is being cleaned')
# clean
yield env.timeout(BED_CLEAN_TIME)
bed.bed_state = "Clean"
clean_bed_q.put(bed)
print(f'{env.now:.2f} bed {bed.id} is clean')
def triage(env, pat, clean_bed_q, dirty_bed_q, ui_bed_q):
"""
models the patients life cycle in triage
stepss are:
get bed (clean perfered)
get triaged
leave
if addmited leaving is blocked until a ui bed is found
"""
# get bed
clean_req = clean_bed_q.get()
dirty_req = dirty_bed_q.get()
print(f'{env.now:.2f} patient {pat.id} has arrived')
fired = yield env.any_of([clean_req, dirty_req])
# see if we got a clean or dirty or both
if clean_req in fired:
# got a clean bead
pat.bed = fired[clean_req]
# need to deal with the dirty req
if dirty_req in fired:
# got two beds but dirty back
dirty_bed_q.put(fired[dirty_req])
else:
# stil need to cancel req
dirty_req.cancel()
else:
# we have a dirty bed
pat.bed = fired[dirty_req]
# need to deal with the dirty req
if clean_req in fired:
# got two beds but dirty back
clean_bed_q.put(fired[clean_req])
else:
# stil need to cancel req
clean_req.cancel()
print(f'{env.now:.2f} {pat.bed.bed_state} bed {pat.bed.id} is occupied and dirty with patient {pat.id}')
pat.bed.bed_state = 'Dirty'
# triage
yield env.timeout(TRIAGE_TIME)
admit = (random.uniform(0,1) < 0.7)
if admit:
# need to get a IU bed before
# patient gives up triage bed
print(f'{env.now:.2f} patient {pat.id} is being admitted')
ui_req = ui_bed_q.request()
yield ui_req
print(f'{env.now:.2f} patient {pat.id} got iu bed')
# patient leaves triage
# give up dirty bed
dirty_bed_q.put(pat.bed)
print(f'{env.now:.2f} bed {pat.bed.id} is empty and patient {pat.id} has left')
def gen_pats(env, clean_bed_q, dirty_bed_q, iu_bed_q):
"""
creates the arriving patients
"""
while True:
yield env.timeout(random.uniform(1,8))
pat = Patient()
# not each patient gets their own process
# so if one patient blocks while waiting for a iu bed, it does
# not block the other patients
env.process(triage(env, pat, clean_bed_q, dirty_bed_q, iu_bed_q))
def model():
env = simpy.Environment()
# make queues
clean_bed_q = simpy.Store(env)
dirty_bed_q = simpy.Store(env)
iu_bed_q = simpy.Resource(env,capacity=5)
clean_bed_q.items = [Bed() for _ in range(10)]
# start generating patients
env.process(gen_pats(env, clean_bed_q, dirty_bed_q, iu_bed_q))
# star up cleaners
for _ in range(2):
env.process(clean_beds(env, dirty_bed_q, clean_bed_q))
env.run(until=100)
print('simulation finish')
model()
I am working on a Python simulation using SimPy but I have hit somewhat of a brick wall. The simulation has an arrival process and a shipping process. The goal is to ship orders as they arrive and not leave orders waiting. This code gives me a process that ships more orders than arrive.
class LC:
    def __init__(self,env):
        self.dispatch=simpy.Container(env,capacity=100000, init=0)
        self.costs=simpy.Container(env, capacity=100000, init=0)
        self.shipment_count=simpy.Container(env,capacity=10000, init=0)
        self.orders=simpy.Container(env,capacity=10000,init=0)

def shipping(env, ship):
    while True:
        #yield env.timeout(i)
        print('Shipment was sent at %d' % (env.now))
        time,weight=get_weight_n_time() ####other function that assigns weight and time of shipment
        if weight <=35:
            cost=get_cost(weight,0)###other function that calculates costs
        else:
            cost=get_cost(weight,1)
        yield env.timeout(time)
        print ('Shipment of %d kgs where delivered after %d days with cost of $ %d' %(weight,time,cost))
        yield ship.dispatch.put(weight)
        yield ship.orders.get(1)
        yield ship.costs.put(cost)
        yield ship.shipment_count.put(1)

def arrival (env,ship):
    while True:
        print('Order arrived at %d'%(env.now))
        yield ship.orders.put(1)
        yield env.timeout(1)

def shipment_gen(env,ship):
    env.process(arrival(env,ship))
    num_trucks=5
    for i in range(num_trucks):
        env.process(shipping(env,ship))
        yield env.timeout(0)

env = simpy.Environment()
ship=LC(env)
env.process(shipment_gen(env,ship))
env.run(until=100)
When I add a variable to determine whether or not to make a shipment based on the number of orders, the shipping process does not occur at all.
class LC:
    def __init__(self,env):
        self.dispatch=simpy.Container(env,capacity=100000, init=0)
        self.costs=simpy.Container(env, capacity=100000, init=0)
        self.shipment_count=simpy.Container(env,capacity=10000, init=0)
        self.orders=simpy.Container(env,capacity=10000,init=0)
        self.order=0

def shipping(env, ship):
    while True:
        #yield env.timeout(i)
        print('Shipment was sent at %d' % (env.now))
        time,weight=get_weight_n_time()
        if weight <=35:
            cost=get_cost(weight,0)
        else:
            cost=get_cost(weight,1)
        yield env.timeout(time)
        print ('Shipment of %d kgs where delivered after %d days with cost of $ %d' %(weight,time,cost))
        yield ship.dispatch.put(weight)
        yield ship.orders.get(1)
        yield ship.costs.put(cost)
        yield ship.shipment_count.put(1)

def arrival (env,ship):
    while True:
        print('Order arrived at %d'%(env.now))
        yield ship.orders.put(1)
        yield env.timeout(1)
        ship.order+=1

def shipment_gen(env,ship):
    env.process(arrival(env,ship))
    # if ship.orders.level >0:
    #     num_trucks=get_number_trucks(ship)
    # else:
    #     num_trucks=get_number_trucks(ship)
    num_trucks=5
    for i in range(num_trucks):
        if ship.order>0:
            env.process(shipping(env,ship))
            yield env.timeout(0)
            ship.order-=1
        else:
            print('-')

env = simpy.Environment()
ship=LC(env)
env.process(shipment_gen(env,ship))
env.run(until=100)
You do not need the extra ship.order attribute in the class. The container already has the information about the number of orders (ship.orders.level). So delete ship.order and remove the ship.order check from the loop in shipment_gen, and your code will execute normally. SimPy interleaves its processes for you, so if you add an extra variable to the ship class that duplicates what a container already tracks, the two can get out of sync and lead to errors.
Do you want to move your
    yield ship.orders.get(1)
line to the top of the function, so your truck waits for an order to arrive before it does anything else?
Also, shipment_gen does its if check at time 0, but your order counter is not incremented until time 1,
so at time 0 the counter is still 0 and the if will always take the else branch.
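Putting those suggestions together, shipping and shipment_gen might look something like this (an untested sketch; get_weight_n_time and get_cost are the helper functions from the question, and the cost/weight bookkeeping is unchanged):

def shipping(env, ship):
    while True:
        # wait for an order to exist before doing anything else
        yield ship.orders.get(1)
        print('Shipment was sent at %d' % (env.now))
        time, weight = get_weight_n_time()
        if weight <= 35:
            cost = get_cost(weight, 0)
        else:
            cost = get_cost(weight, 1)
        yield env.timeout(time)
        print('Shipment of %d kgs were delivered after %d days with cost of $ %d' % (weight, time, cost))
        yield ship.dispatch.put(weight)
        yield ship.costs.put(cost)
        yield ship.shipment_count.put(1)

def shipment_gen(env, ship):
    env.process(arrival(env, ship))
    num_trucks = 5
    # no order-counter check here; each truck simply blocks on ship.orders.get(1)
    for i in range(num_trucks):
        env.process(shipping(env, ship))
    yield env.timeout(0)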
I'm using SimPy for discrete event simulation and am having problems with what I can best describe as optional resources. The context is that I am simulating missions which will be undertaken and will demand some assets (resources) to undertake that mission. Unlike most SimPy implementations, the mission needs to start at the allocated time or else it fails, and it may accept fewer resources in order to start.
For example, a mission requires four vehicles at time = t. At time t only three vehicles are available, so the mission commences with three vehicles but has a lesser outcome. If only two or fewer vehicles are available, the mission does not go ahead and is considered failed.
Sorry for the lack of code in this example, I'm struggling to get my head around it. Any help would be appreciated.
I used containers to track resources, but take a look and let me know if this is what you want
"""
Demos checking the availability of resources befor taking resouces
Each agent has a first choice and second choice for seizing resources
if neither choice is avalale will not wait and skip getting resources
programmer: Michael R. Gibbs
"""
import simpy
import random
def eval(choice, containers):
"""
steps through each requirement and checks if there is enough resouces is available
returns True if there is enough, else returns False
"""
for k,v in choice.items():
if containers[k].level < v:
# not enough
return False
return True
def mission(env, id, firstChoice, secondChoice, startDelay, missingTime, containers):
"""
Sims agent checking if first or second choice of resouces is available
If so, will seize the resources, hold for a time, and then release the resouces
"""
choice = None # which choice to use
# wait for mission start
yield env.timeout(startDelay)
print()
print(f'{env.now} agent {id} is starting a mission')
x = [(k, v.level) for k, v in containers.items()]
print(f'available: {x}')
print(f'first choice {firstChoice}')
print(f'second choice {secondChoice}')
# evaluate the choices and see if either will work
if eval(firstChoice, containers):
choice = firstChoice
print(f'{env.now} agent {id} is going with first Choice')
else:
if eval(secondChoice,containers):
choice = secondChoice
print(f'{env.now} agent {id} is going with second Choice')
else:
print(f'{env.now} agent {id} is abort because not enough resources')
if choice is not None:
# found a choice that works
# seize the resouces, we checked that they are availale, so there should not be any queue time
for k,v in choice.items():
containers[k].get(v)
yield env.timeout(missingTime)
# return resouces
for k,v in choice.items():
containers[k].put(v)
print(f'{env.now} agent {id} has finished the mission ')
def build_choice():
"""
quick helper to generate random first and second choice requirements
(may not use all three resouces)
returns a dictionary of the requirements where key=type, value=how much
"""
choice = {}
while len(choice) == 0:
for k in ['A','B','C']:
v = random.randint(0,3)
if v > 0:
choice[k] = v
return choice
def gen_missions(env, containers):
"""
Generate a series of agents to seize the resources
"""
i = 0
while True:
i += 1 # id generator
firstChoice = build_choice()
secondChoice = build_choice()
env.process(mission(env,i,firstChoice,secondChoice,random.randint(1,4),random.randint(2,8),containers))
# put some time between agents
yield env.timeout(1)
env = simpy.Environment()
# load the resouces
containers = {}
containers['A'] = simpy.Container(env, init=10, capacity=10)
containers['B'] = simpy.Container(env, init=10, capacity=10)
containers['C'] = simpy.Container(env, init=10, capacity=10)
# start generating agents
env.process(gen_missions(env,containers))
env.run(100)
I've seen several posts about this, so I know it is fairly straightforward to do, but I seem to be coming up short. I'm not sure if I need to create a worker pool, or use the Queue class. Basically, I want to be able to create several processes that each act autonomously (which is why they inherit from the Agent superclass).
At random ticks of my main loop I want to update each Agent. I'm using time.sleep with different values in the main loop and the Agent's run loop to simulate different processor speeds.
Here is my Agent superclass:
# Generic class to handle mpc of each agent
class Agent(mpc.Process):

    # initialize agent parameters
    def __init__(self,):
        # init mpc
        mpc.Process.__init__(self)
        self.exit = mpc.Event()

    # an agent's main loop...generally should be overridden
    def run(self):
        while not self.exit.is_set():
            pass
        print "You exited!"

    # safely shutdown an agent
    def shutdown(self):
        print "Shutdown initiated"
        self.exit.set()

    # safely communicate values to this agent
    def communicate(self, value):
        print value
A specific agent's subclass (simulating an HVAC system):
class HVAC(Agent):
    def __init__(self, dt=70, dh=50.0):
        super(Agent, self).__init__()
        self.exit = mpc.Event()

        self.__pref_heating = True
        self.__pref_cooling = True
        self.__desired_temperature = dt
        self.__desired_humidity = dh
        self.__meas_temperature = 0
        self.__meas_humidity = 0.0
        self.__hvac_status = ""  # heating, cooling, off

        self.start()

    def run(self):  # handle AC or heater on
        while not self.exit.is_set():
            ctemp = self.measureTemp()
            chum = self.measureHumidity()
            if (ctemp < self.__desired_temperature):
                self.__hvac_status = 'heating'
                self.__meas_temperature += 1
            elif (ctemp > self.__desired_temperature):
                self.__hvac_status = 'cooling'
                self.__meas_temperature += 1
            else:
                self.__hvac_status = 'off'
            print self.__hvac_status, self.__meas_temperature
            time.sleep(0.5)
        print "HVAC EXITED"

    def measureTemp(self):
        return self.__meas_temperature

    def measureHumidity(self):
        return self.__meas_humidity

    def communicate(self, updates):
        self.__meas_temperature = updates['temp']
        self.__meas_humidity = updates['humidity']
        print "Measured [%d] [%f]" % (self.__meas_temperature, self.__meas_humidity)
And my main loop:
if __name__ == "__main__":
    print "Initializing subsystems"
    agents = {}
    agents['HVAC'] = HVAC()

    # Run simulation
    timestep = 0
    while timestep < args.timesteps:
        print "Timestep %d" % timestep
        if timestep % 10 == 0:
            curr_temp = random.randrange(68, 72)
            curr_humidity = random.uniform(40.0, 60.0)
            agents['HVAC'].communicate({'temp': curr_temp, 'humidity': curr_humidity})
        time.sleep(1)
        timestep += 1

    agents['HVAC'].shutdown()
    print "HVAC process state: %d" % agents['HVAC'].is_alive()
So the issue is that, whenever I run agents['HVAC'].communicate(x) within the main loop, I can see the value being passed into the HVAC subclass in its run loop (so it prints the received value correctly). However, the value is never successfully stored.
So typical output looks like this:
Initializing subsystems
Timestep 0
Measured [68] [56.948675]
heating 1
heating 2
Timestep 1
heating 3
heating 4
Timestep 2
heating 5
heating 6
When in reality, as soon as Measured [68] appears, the internal stored value should be updated to output 68 (not heating 1, heating 2, etc.). So effectively, the HVAC's self.__meas_temperature is not being properly updated.
Edit: After a bit of research, I realized that I didn't necessarily understand what is happening behind the scenes. Each subprocess operates on its own copy of memory, completely isolated from the parent, so passing the value in this way isn't going to work. My new issue is that I'm not necessarily sure how to share a global value with multiple processes.
I was looking at the Queue or JoinableQueue packages, but I'm not necessarily sure how to pass a Queue into the type of superclass setup that I have (especially with the mpc.Process.__init__(self) call).
A side concern would be if I can have multiple agents reading values out of the queue without pulling it out of the queue? For instance, if I wanted to share a temperature value with multiple agents, would a Queue work for this?
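For concreteness, I imagine passing a Queue into the superclass setup would look something like this (an untested sketch, just to show the kind of wiring I mean):

import time
import multiprocessing as mpc

class Agent(mpc.Process):
    def __init__(self, inbox):
        mpc.Process.__init__(self)
        self.exit = mpc.Event()
        # the queue is created in the parent and handed in, so both sides see it
        self.inbox = inbox

    def run(self):
        while not self.exit.is_set():
            if not self.inbox.empty():
                print(self.inbox.get())
            time.sleep(0.1)

if __name__ == "__main__":
    inbox = mpc.Queue()
    agent = Agent(inbox)
    agent.start()
    inbox.put({'temp': 68, 'humidity': 50.0})
    time.sleep(1)
    agent.exit.set()
    agent.join()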
Here's a suggested solution assuming that you want the following:
a centralized manager / main process which controls lifetimes of the workers
worker processes to do something self-contained and then report results to the manager and other processes
Before I show it though, for the record I want to say that in general, unless you are CPU bound, multiprocessing is not really the right fit, mainly because of the added complexity, and you'd probably be better off using a different high-level asynchronous framework. Also, you should use Python 3, it's so much better!
That said, multiprocessing.Manager makes this pretty easy to do with multiprocessing. I've done this in Python 3; I expect it would "just work" in Python 2 as well, but I haven't checked.
from ctypes import c_bool
from multiprocessing import Manager, Process, Array, Value
from pprint import pprint
from time import sleep, time

class Agent(Process):
    def __init__(self, name, shared_dictionary, delay=0.5):
        """My take on your Agent.

        Key difference is that I've commonized the run-loop and used
        a shared value to signal when to stop, to demonstrate it.
        """
        super(Agent, self).__init__()
        self.name = name
        # This is going to be how we communicate between processes.
        self.shared_dictionary = shared_dictionary
        # Create a silo for us to use.
        shared_dictionary[name] = []
        self.should_stop = Value(c_bool, False)
        # Primarily for testing purposes, and for simulating
        # slower agents.
        self.delay = delay

    def get_next_results(self):
        # In the real world I'd use abc.ABCMeta as the metaclass to do
        # this properly.
        raise RuntimeError('Subclasses must implement this')

    def run(self):
        ii = 0
        while not self.should_stop.value:
            ii += 1
            # debugging / monitoring
            print('%s %s run loop execution %d' % (
                type(self).__name__, self.name, ii))
            next_results = self.get_next_results()
            # Add the results, along with a timestamp.
            self.shared_dictionary[self.name] += [(time(), next_results)]
            sleep(self.delay)

    def stop(self):
        self.should_stop.value = True
        print('%s %s stopped' % (type(self).__name__, self.name))

class HVACAgent(Agent):
    def get_next_results(self):
        # This is where you do your work, but for the sake of
        # the example just return a constant dictionary.
        return {'temperature': 5, 'pressure': 7, 'humidity': 9}

class DumbReadingAgent(Agent):
    """A dumb agent to demonstrate workers reading other worker values."""

    def get_next_results(self):
        # get hvac 1 results:
        hvac1_results = self.shared_dictionary.get('hvac 1')
        if hvac1_results is None:
            return None
        return hvac1_results[-1][1]['temperature']

# Script starts.
results = {}

# The "with" ensures we terminate the manager at the end.
with Manager() as manager:
    # the manager is a subprocess in its own right. We can ask
    # it to manage a dictionary (or other python types) for us
    # to be shared among the other children.
    shared_info = manager.dict()

    hvac_agent1 = HVACAgent('hvac 1', shared_info)
    hvac_agent2 = HVACAgent('hvac 2', shared_info, delay=0.1)
    dumb_agent = DumbReadingAgent('dumb hvac1 reader', shared_info)
    agents = (hvac_agent1, hvac_agent2, dumb_agent)

    list(map(lambda a: a.start(), agents))
    sleep(1)
    list(map(lambda a: a.stop(), agents))
    list(map(lambda a: a.join(), agents))

    # Not quite sure what happens to the shared dictionary after
    # the manager dies, so for safety make a local copy.
    results = dict(shared_info)

pprint(results)