This is a bank simulation with 20 serving lines and a single queue: customers arrive at exponentially distributed intervals and are served for a time that follows a normal distribution with mean 40 and standard deviation 20.
Things were working just fine till I decided to exclude the negative values given by the normal distribution using this method:
def getNormal(self):
    normal = normalvariate(40, 20)
    if normal >= 1:
        return normal
    else:
        getNormal(self)
Am I screwing up the recursive call? I don't get why it wouldn't work. I have changed the getNormal() method to:
def getNormal(self):
    normal = normalvariate(40, 20)
    while normal <= 1:
        normal = normalvariate(40, 20)
    return normal
But I'm curious why the previous recursive version breaks.
This is the complete source code, in case you're interested.
""" bank21: One counter with impatient customers """
from SimPy.SimulationTrace import *
from random import *
## Model components ------------------------
class Source(Process):
""" Source generates customers randomly """
def generate(self,number):
for i in range(number):
c = Customer(name = "Customer%02d"%(i,))
activate(c,c.visit(tiempoDeUso=15.0))
validateTime=now()
if validateTime<=600:
interval = getLambda(self)
t = expovariate(interval)
yield hold,self,t #esta es la rata de generaciĆ³n
else:
detenerGeneracion=999
yield hold,self,detenerGeneracion
class Customer(Process):
""" Customer arrives, is served and leaves """
def visit(self,tiempoDeUso=0):
arrive = now() # arrival time
print "%8.3f %s: Here I am "%(now(),self.name)
yield (request,self,counter),(hold,self,maxWaitTime)
wait = now()-arrive # waiting time
if self.acquired(counter):
print "%8.3f %s: Waited %6.3f"%(now(),self.name,wait)
tiempoDeUso=getNormal(self)
yield hold,self,tiempoDeUso
yield release,self,counter
print "%8.3f %s: Completed"%(now(),self.name)
else:
print "%8.3f %s: Waited %6.3f. I am off"%(now(),self.name,wait)
## Experiment data -------------------------
maxTime = 60*10.5 # minutes
maxWaitTime = 12.0 # minutes. maximum time to wait
## Model ----------------------------------
def model():
global counter
#seed(98989)
counter = Resource(name="Las maquinas",capacity=20)
initialize()
source = Source('Source')
firstArrival= expovariate(20.0/60.0) #chequear el expovariate
activate(source,
source.generate(number=99999),at=firstArrival)
simulate(until=maxTime)
def getNormal(self):
normal = normalvariate(40,20)
if (normal>=1):
return normal
else:
getNormal(self)
def getLambda (self):
actualTime=now()
if (actualTime <=60):
return 20.0/60.0
if (actualTime>60)and (actualTime<=120):
return 25.0/60.0
if (actualTime>120)and (actualTime<=180):
return 40.0/60.0
if (actualTime>180)and (actualTime<=240):
return 30.0/60.0
if (actualTime>240)and (actualTime<=300):
return 35.0/60.0
if (actualTime>300)and (actualTime<=360):
return 42.0/60.0
if (actualTime>360)and (actualTime<=420):
return 50.0/60.0
if (actualTime>420)and (actualTime<=480):
return 55.0/60.0
if (actualTime>480)and (actualTime<=540):
return 45.0/60.0
if (actualTime>540)and (actualTime<=600):
return 10.0/60.0
## Experiment ----------------------------------
model()
I think you want
return getNormal(self)
instead of
getNormal(self)
If the function exits without hitting a return statement, then it returns the special value None, which is a NoneType object - that's why Python complains about a 'NoneType.' The abs() function wants a number, and it doesn't know what to do with a None.
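A minimal sketch of that behaviour (check is a hypothetical function, just for illustration): only the branch with a return produces a value; the other branch falls off the end and the call evaluates to None.

def check(x):
    if x >= 1:
        return x
    # no return on this path, so the call evaluates to None

print(check(5))   # 5
print(check(0))   # None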
Also, you could avoid recursion (and the cost of creating a new stack frame) by using
def getNormal(self):
    normal = 0
    while normal < 1:
        normal = normalvariate(40, 20)
    return normal
I am not entirely sure, but I think you need to change your method to the following:
def getNormal(self):
    normal = normalvariate(40, 20)
    if normal >= 1:
        return normal
    else:
        return getNormal(self)
You need to have:
return getNormal(self)
instead of
getNormal(self)
Really though, there's no need for recursion:
def getNormal(self):
    normal = 0
    while normal < 1:
        normal = normalvariate(40, 20)
    return normal
I would like to write a class with the following interface.
class Automaton:
    """ A simple automaton class """
    def iterate(self, something):
        """ yield something and expects some result in return """
        print("Yielding", something)
        result = yield something
        print("Got \"" + result + "\" in return")
        return result

    def start(self, somefunction):
        """ start the iteration process """
        yield from somefunction(self.iterate)
        raise StopIteration("I'm done!")

def first(iterate):
    while iterate("what do I do?") != "over":
        continue

def second(iterate):
    value = yield from iterate("what do I do?")
    while value != "over":
        value = yield from iterate("what do I do?")

# A simple driving process
automaton = Automaton()
#generator = automaton.start(first)    # This one hangs
generator = automaton.start(second)    # This one runs smoothly
next_yield = generator.__next__()
for step in range(4):
    next_yield = generator.send("Continue...({})".format(step))
try:
    end = generator.send("over")
except StopIteration as excp:
    print(excp)
The idea is that Automaton will regularly yield values to the caller which will in turn send results/commands back to the Automaton.
The catch is that the decision process somefunction will be some user-defined function I have no control over, which means I can't really expect it to call the iterate method with a yield from in front. Worse, the user might want to plug some third-party function he has no control over into this Automaton class, meaning the user might not be able to rewrite his somefunction to include yield from in front of the iterate calls.
To be clear: I completely understand why using the first function hangs the automaton. I am just wondering if there is a way to alter the definition of iterate or start that would make the first function work.
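One possible direction, not part of the original question: run somefunction in a worker thread and make iterate a plain blocking call, shuttling values through a pair of queues, so user code needs no yield from. A sketch under that assumption (ThreadedAutomaton and _DONE are illustrative names; note it uses return instead of raise StopIteration, which is also what PEP 479 requires in modern Python):

import threading
import queue

_DONE = object()   # sentinel: somefunction has returned

class ThreadedAutomaton:
    def __init__(self):
        self.to_caller = queue.Queue()    # values yielded outward
        self.from_caller = queue.Queue()  # results sent back in

    def iterate(self, something):
        # Plain function: blocks until the caller answers.
        self.to_caller.put(something)
        return self.from_caller.get()

    def start(self, somefunction):
        def run():
            somefunction(self.iterate)
            self.to_caller.put(_DONE)
        threading.Thread(target=run, daemon=True).start()
        while True:
            item = self.to_caller.get()
            if item is _DONE:
                return "I'm done!"        # becomes StopIteration.value
            self.from_caller.put((yield item))

automaton = ThreadedAutomaton()
generator = automaton.start(first)        # `first` now works unchanged
next_yield = generator.__next__()
for step in range(4):
    next_yield = generator.send("Continue...({})".format(step))
try:
    generator.send("over")
except StopIteration as excp:
    print(excp.value)                     # I'm done!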
I have a program that takes input from the user and displays multiple variations of the input using the Population() function. The store_fit function adds these variations to a list and then clears it, so the list holds only one variation at a time.
I want to be able to get the variation from the list and use it to update my text. However, my program only updates the text after the Population function is completed. How could I run the Population function and update my text simultaneously?
code:
fit = []
...
def store_fit(fittest):  # fittest is each variation from Population
    fit.clear()
    fit.append(fittest)
...
pg.init()
...
done = False
while not done:
    ...
    if event.key == pg.K_RETURN:
        print(text)
        target = text
        Population(1000)  # 1000 variations
        store_fit(value)
        # I want this to run at the same time as Population
        fittest = fit[0]
        ...
        top_sentence = font.render(("test: " + fittest), 1, pg.Color('lightskyblue3'))
        screen.blit(top_sentence, (400, 400))
I recommend making Population a generator function. See The Python yield keyword explained:
def Populate(text, c):
    for i in range(c):
        # compute variation
        # [...]
        yield variation
Create an iterator and use next() to retrieve the next variation in the loop, so you can print every single variation:
populate_iter = Populate(text, 1000)
final_variation = None
while not done:
    next_variation = next(populate_iter, None)
    if next_variation:
        final_variation = next_variation
        # print current variation
        # [...]
    else:
        done = True
Edit according to the comment:
In order to keep my question simple, I didn't mention that Population, was a class [...]
Of course Populate can be a class, too. In this case you have to implement the object.__iter__(self) method, e.g.:
class Populate:
    def __init__(self, text, c):
        self.text = text
        self.c = c

    def __iter__(self):
        for i in range(self.c):
            # compute variation
            # [...]
            yield variation
Create an iterator with iter(), e.g.:
populate_iter = iter(Populate(text, 1000))
final_variation = None
while not done:
    next_variation = next(populate_iter, None)
    if next_variation:
        final_variation = next_variation
        # print current variation
        # [...]
    else:
        done = True
I have a list of ~300K URLs for an API I need to get data from.
The API limit is 100 calls per second.
I have made a class for the asynchronous calls, but it is working too fast and I am hitting an error on the API.
How do I slow down the asynchronous calls so that I make at most 100 per second?
import grequests

lst = ['url.com', 'url2.com']

class Test:
    def __init__(self):
        self.urls = lst

    def exception(self, request, exception):
        print("Problem: {}: {}".format(request.url, exception))

    def async(self):
        return grequests.map((grequests.get(u) for u in self.urls),
                             exception_handler=self.exception, size=5)

    def collate_responses(self, results):
        return [x.text for x in results]

test = Test()

# here we collect the results returned by the async function
results = test.async()
response_text = test.collate_responses(results)
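For reference, one simple way to throttle this (an assumption of mine, not taken from the answers below) is to send the URLs in batches of 100 and sleep out whatever is left of each second; throttled_map is a hypothetical helper built on the grequests.map call used above:

import time
import grequests

def throttled_map(urls, per_second=100):
    # Issue requests in batches of `per_second`, then sleep the rest
    # of the current second before starting the next batch.
    results = []
    for i in range(0, len(urls), per_second):
        batch = urls[i:i + per_second]
        started = time.monotonic()
        results.extend(grequests.map(grequests.get(u) for u in batch))
        elapsed = time.monotonic() - started
        if elapsed < 1.0:
            time.sleep(1.0 - elapsed)
    return results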
The first step I took was to create an object that can distribute a maximum of n coins every t ms.
import time

class CoinsDistribution:
    """Object that distributes a maximum of maxCoins every timeLimit ms"""
    def __init__(self, maxCoins, timeLimit):
        self.maxCoins = maxCoins
        self.timeLimit = timeLimit
        self.coin = maxCoins
        self.time = time.perf_counter()

    def getCoin(self):
        if self.coin <= 0 and not self.restock():
            return False
        self.coin -= 1
        return True

    def restock(self):
        t = time.perf_counter()
        if (t - self.time) * 1000 < self.timeLimit:
            return False
        self.coin = self.maxCoins
        self.time = t
        return True
Now we need a way of forcing functions to be called only if they can get a coin.
To do that, we can write a decorator that we would use like this:
@limitCalls(callLimit=1, timeLimit=1000)
def uniqFunctionRequestingServer1():
    return 'response from s1'
But sometimes multiple functions call the same server, so we would want them to get coins from the same CoinsDistribution object.
Therefore, another way to use the decorator is to supply the CoinsDistribution object:
server_2_limit = CoinsDistribution(3, 1000)

@limitCalls(server_2_limit)
def sendRequestToServer2():
    return 'it worked !!'

@limitCalls(server_2_limit)
def sendAnOtherRequestToServer2():
    return 'it worked too !!'
We now have to create the decorator. It can take either a CoinsDistribution object or enough data to create a new one.
import functools

def limitCalls(obj=None, *, callLimit=100, timeLimit=1000):
    if obj is None:
        obj = CoinsDistribution(callLimit, timeLimit)

    def limit_decorator(func):
        @functools.wraps(func)
        def limit_wrapper(*args, **kwargs):
            if obj.getCoin():
                return func(*args, **kwargs)
            return 'limit reached, please wait'
        return limit_wrapper
    return limit_decorator
And it's done! Now you can limit the number of calls to any API that you use, and you can build a dictionary to keep track of your CoinsDistribution objects if you have to manage a lot of them (for different API endpoints or different APIs).
Note: here I have chosen to return an error message if no coins are available. You should adapt this behaviour to your needs.
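A sketch of the dictionary idea mentioned above (get_limiter and the key scheme are illustrative names, building on the limitCalls and CoinsDistribution defined earlier):

limiters = {}

def get_limiter(endpoint, callLimit=100, timeLimit=1000):
    # One shared CoinsDistribution per endpoint, created on first use.
    if endpoint not in limiters:
        limiters[endpoint] = CoinsDistribution(callLimit, timeLimit)
    return limiters[endpoint]

@limitCalls(get_limiter('server2'))
def anotherServer2Call():
    return 'also limited by the server2 pool'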
You can just keep track of how much time has passed and decide if you want to do more requests or not.
This will print 100 numbers per second, for example:
from datetime import datetime
import time

start = datetime.now()
time.sleep(1)
counter = 0
while True:
    end = datetime.now()
    s = (end - start).seconds
    if counter >= 100:
        if s <= 1:
            time.sleep(1)  # You can keep track of the time and sleep less, actually
        start = datetime.now()
        counter = 0
    print(counter)
    counter += 1
This other question on SO shows exactly how to do this. By the way, what you need is usually called throttling.
I have this class called DecayingSet, which is a deque with expiration:
from collections import deque
from time import time   # add() and clean() below call time()

class DecayingSet:
    def __init__(self, timeout):  # timeout in seconds
        self.timeout = timeout
        self.d = deque()
        self.present = set()

    def add(self, thing):
        # Return True if `thing` not already in set,
        # else return False.
        result = thing not in self.present
        if result:
            self.present.add(thing)
            self.d.append((time(), thing))
        self.clean()
        return result

    def clean(self):
        # forget stuff added >= `timeout` seconds ago
        now = time()
        d = self.d
        while d and now - d[0][0] >= self.timeout:
            _, thing = d.popleft()
            self.present.remove(thing)
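For example, re-adding the same value within the timeout is rejected (a small usage sketch of the class above):

s = DecayingSet(5.0)                 # 5-second expiry
print(s.add("http://example.com"))   # True: first time seen
print(s.add("http://example.com"))   # False: still in the set
# after >= 5 seconds, clean() forgets it and add() returns True again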
I'm trying to use it inside a running script that connects to a streaming API.
The streaming API is returning urls that I am trying to put inside the deque, to limit them from entering the next step of the program.
class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status, include_entities=True):
        longUrl = status.entities['urls'][0]['expanded_url']
        limit = DecayingSet(86400)
        l = limit.add(longUrl)
        print l
        if l == False:
            pass
        else:
            r = requests.get("http://api.some.url/show?url=%s" % longUrl)
When I use this class in an interpreter, everything is good.
But when the script is running and I repeatedly send in the same url, l returns True every time, indicating that the url is not inside the set when it is supposed to be. What gives?
Copying my comment ;-) I think the indentation is screwed up, but it looks like you're creating a brand new limit object every time on_status() is called. Then of course it would always return True: you'd always be starting with an empty limit.
Regardless, change this:
l = limit.add(longUrl)
print l
if l == False:
    pass
else:
    r = requests.get("http://api.some.url/show?url=%s" % longUrl)
to this:
if limit.add(longUrl):
    r = requests.get("http://api.some.url/show?url=%s" % longUrl)
Much easier to follow. It's usually the case that when you're comparing something to a literal True or False, the code can be made more readable.
Edit
I just saw in the interpreter that the var assignment is the culprit.
How would I use the same obj?
You could, for example, create the limit object at the module level. Cut and paste ;-)
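Putting that suggestion together, the listener might look like this (a sketch; DecayingSet and the request call are from the question, only the object's placement changes):

limit = DecayingSet(86400)   # created once, at module level

class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status, include_entities=True):
        longUrl = status.entities['urls'][0]['expanded_url']
        # state now persists across on_status() calls
        if limit.add(longUrl):
            r = requests.get("http://api.some.url/show?url=%s" % longUrl)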
I have a Python script that gets data from a USB weather station; it puts the data into MySQL whenever data is received from the station.
I have a MySQL class with an insert function. What I want is for the function to check whether it has been run in the last 5 minutes and, if it has, quit.
Could not find any code on the internet that does this.
Maybe I need to have a sub-process, but I am not familiar with that at all.
Does anyone have an example that I can use?
Use this timeout decorator.
import signal

class TimeoutError(Exception):
    def __init__(self, value="Timed Out"):
        self.value = value
    def __str__(self):
        return repr(self.value)

def timeout(seconds_before_timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimeoutError()
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(seconds_before_timeout)
            try:
                result = f(*args, **kwargs)
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result
        new_f.func_name = f.func_name
        return new_f
    return decorate
Usage:
import time

@timeout(5)
def mytest():
    print "Start"
    for i in range(1, 10):
        time.sleep(1)
        print "%d seconds have passed" % i

if __name__ == '__main__':
    mytest()
Probably the most straight-forward approach (you can put this into a decorator if you like, but that's just cosmetics I think):
import time
import datetime

class MySQLWrapper:
    def __init__(self, min_period_seconds):
        self.min_period = datetime.timedelta(seconds=min_period_seconds)
        self.last_calltime = datetime.datetime.now() - self.min_period

    def insert(self, item):
        now = datetime.datetime.now()
        if now - self.last_calltime < self.min_period:
            print "not insert"
        else:
            self.last_calltime = now
            print "insert", item

m = MySQLWrapper(5)
m.insert(1)      # insert 1
m.insert(2)      # not insert
time.sleep(5)
m.insert(3)      # insert 3
As a side note: have you noticed RRDTool during your web search for related stuff? It apparently does what you want to achieve, i.e.:
- a database that stores the most recent values at arbitrary resolution/update frequency,
- extrapolation/interpolation of values if updates are too frequent or missing,
- graphs generated from the data.
An approach could be to store all the data you can get in your MySQL database and forward a subset to such an RRDTool database to generate a nice time-series visualization of it, depending on what you need.
import time

def timeout(f, k, n):
    last_time = [time.time()]
    count = [0]
    def inner(*args, **kwargs):
        distance = time.time() - last_time[0]
        if distance > k:
            last_time[0] = time.time()
            count[0] = 0
            return f(*args, **kwargs)
        elif distance < k and (count[0] + 1) == n:
            return False
        else:
            count[0] += 1
            return f(*args, **kwargs)
    return inner

timed = timeout(lambda x, y: x + y, 300, 1)
print timed(2, 4)
The first argument is the function you want to run, the second is the time interval, and the third is the number of times it's allowed to run in that interval.
Each time the function runs, save a file with the current time. When the function runs again, check the time stored in the file and make sure it is old enough.
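A sketch of that file-based idea (the marker path and helper names are assumptions): the insert function would bail out early when ran_recently() is true and call record_run() after a successful insert.

import os
import time

MARKER = "/tmp/last_insert.marker"   # hypothetical path

def ran_recently(min_period_seconds=300):
    # True if the marker file was touched less than min_period ago.
    try:
        return time.time() - os.path.getmtime(MARKER) < min_period_seconds
    except OSError:                  # no marker yet: never ran
        return False

def record_run():
    with open(MARKER, "w"):          # touching the file updates its mtime
        pass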
Just derive a new class and override the insert function. In the overriding function, check the last insert time and call the parent's insert method if it has been more than five minutes, and of course update the most recent insert time.
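A minimal sketch of that subclassing idea, assuming the existing class is called MySQL and exposes insert() (both names are assumptions based on the question):

import datetime

class ThrottledMySQL(MySQL):         # MySQL is the asker's existing class
    def __init__(self, *args, **kwargs):
        MySQL.__init__(self, *args, **kwargs)
        self.last_insert = None

    def insert(self, *args, **kwargs):
        now = datetime.datetime.now()
        if (self.last_insert is not None and
                (now - self.last_insert).total_seconds() < 5 * 60):
            return                   # ran within the last five minutes: skip
        self.last_insert = now
        return MySQL.insert(self, *args, **kwargs)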