Increase the number of steps Python turtle performs each second?

My class has been learning Python's turtle module recently (which I gather is built on tkinter), and I was wondering whether there is a way to adjust the rate at which tkinter/turtle executes its code, because it doesn't seem (from my limited understanding) to be limited by my computer's computational abilities. I say that because in Task Manager (I'm on Windows, if that affects anything), the Python shell only uses a small percentage of the CPU (~2%), and likewise for the GPU, RAM, disk, etc. Additionally, increasing its operational priority neither affects how much of my CPU is used nor increases the rate at which it executes its code.
Note that I'm not referring to the speed at which the turtle executes each action as set by turtle.speed(); I already have that at 0, so each action is effectively instantaneous. My problem instead lies with what seems to be the time taken between steps, which appears to be capped at about 80 actions per second (more on this later).
For example, the following code draws an approximation of a parabola, given some precision. The higher the precision, the better the approximation but the longer it takes to draw, as it's taking more, smaller steps.
import math
import turtle as t

precision = 0.1

t.penup()
t.goto(-250, 150)
t.pendown()
# take many small steps, turning to follow the parabola's slope at each one
for n in range(int(800 * precision)):
    t.setheading(math.degrees(math.atan(0.02 * n - 8)))
    t.fd(1)
Effectively, for precisions close to or above 1, it takes far longer than I would like, and in general, drawing fine curves in Tkinter is too slow, so I want to know if there's a way to adjust this speed.
Part of my difficulty when trying to research a solution has been that I simply don't know the relevant terminology, so I've tried using vaguely related terms, including some hardware-based analogues and various other things that are kind of analogous, e.g.:
clock speed
refresh rate
frame rate
tick speed (Minecraft ftw?)
step-through rate
execution rate
actions per second
steps per second
But all to no avail; attempting to describe the issue to Google fails too.
Additionally, I simply don't understand what the underlying bottleneck is (or even if there is a single bottleneck) that's causing it to be so slow, which makes the issue difficult to solve.
I've noticed that if a turtle command takes a significant amount of time to calculate (for example, by forcing it to do a ridiculous amount of computation to work out a simple value), then each step simply takes longer to execute, suggesting that maybe it is just a hardware limitation. However, when timing the execution with the timeit module, it seems to execute almost exactly the same number of actions per second for any function, regardless of the complexity of each individual action, up to a point beyond which the complexity begins to slow it down. So it's as though there's some cap on the rate at which actions can occur. That said, this specific limit seems to change occasionally, suggesting that the computer's state does influence it to some degree.
Also, just in case, this is the timeit setup I used:
import timeit

mysetup = """
import math
import turtle as t

def DefaultDerivative(x):
    return 2*x - x

def GeneralEquation(precision=1, XShift=0, YShift=0, Derivative=DefaultDerivative):
    t.penup()
    t.goto(XShift, YShift)
    t.pendown()
    for n in range(0, int(800*precision)):
        t.setheading(math.degrees(math.atan(Derivative(((0.01*n) - (4*precision))/precision))))
        t.fd(1/precision)

def equation1(x):
    return (2*(x**2)) + (2*x)

def equation2(x):
    return x**2

def equation3(x):
    return math.cos(x)

def equation4(x):
    return 2*x

t.speed(0)
"""

mycode = """
GeneralEquation(5, -350, 300, equation4)
"""

print("time: " + str(timeit.timeit(setup=mysetup, stmt=mycode, number=10)))
Anyway, this is my first question, so I hope I explained myself well enough.
Thank you.

Is this quick enough for your purpose?
import timeit

mysetup = """
import turtle
from math import atan, cos

def DefaultDerivative(x):
    return 2 * x - x

def GeneralEquation(precision=1, XShift=0, YShift=0, Derivative=DefaultDerivative):
    turtle.radians()
    turtle.tracer(False)
    turtle.penup()
    turtle.goto(XShift, YShift)
    turtle.pendown()
    for n in range(0, int(800 * precision)):
        turtle.setheading(atan(Derivative((0.01 * n - 4 * precision) / precision)))
        turtle.forward(1 / precision)
    turtle.tracer(True)

def equation1(x):
    return 2 * x ** 2 + 2 * x

def equation2(x):
    return x ** 2

def equation3(x):
    return cos(x)

def equation4(x):
    return 2 * x
"""

mycode = """
GeneralEquation(5, -350, 300, equation4)
"""

print("time: " + str(timeit.timeit(setup=mysetup, stmt=mycode, number=10)))
Basically, I've turned off turtle's attempt at animation. I also threw in a command to make turtle think in radians, so you don't need to call the degrees() function over and over. If you want to see some animation, you can tweak the argument to tracer(), e.g. turtle.tracer(20).
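More generally, the trick is to batch the drawing: disable the tracer, draw everything, then force a single screen repaint with update(). A minimal standalone sketch of that pattern (the shape drawn here is just an arbitrary example):

import turtle

turtle.speed(0)
turtle.tracer(0, 0)       # no automatic screen updates, no delay between them
for _ in range(360):
    turtle.forward(2)     # draw a rough circle, one small step at a time
    turtle.left(1)
turtle.update()           # paint everything that accumulated, in one go
turtle.done()

With tracer(0, 0) the per-step repaint that normally caps the drawing rate is skipped entirely, which is essentially what the tracer(False)/tracer(True) pair in the answer above does.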

Related

Timing operation with increasing list size - unexpected behaviour

Problem: How long does it take to generate a Python list of prime numbers from 1 to N? Plot a graph of time taken against N.
I used SymPy to generate the list of primes.
I expected the time to increase monotonically.
But why is there a dip?
import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer
from sympy import sieve

T = []
N = np.logspace(1, 8, 30)
for Nup in N:
    tic = timer()
    A = list(sieve.primerange(1, Nup))
    toc = timer()
    T.append(toc - tic)

plt.loglog(N, T, 'x-')
plt.grid()
plt.show()
[Plot: time taken to generate primes up to N]
The sieve itself needs ever more time to compute primes up to ever larger N, and since your values of N grow geometrically, plotting the pure runtime of the sieve against N on the log-log scale should come out as roughly a straight line for large N.
In your copy of the plot it looks like it actually gets a bit worse over time. When I run your script it isn't perfectly straight either, but towards the end it is close to a straight line on the log scale. However, there is a bit of a bend at the start, as in your result.
This makes sense because the sieve caches previous results. Initially it gets little benefit from that, and there is a small overhead from setting up the cache and growing it, which diminishes over time; more importantly, there is the fixed overhead of the call into the sieve routine itself. Also, this type of performance measurement is very sensitive to anything else going on on your system, including whatever Python and your IDE happen to be doing.
Here's your code with an added outer loop that warms the sieve's cache to a different degree before every run; it shows pretty clearly what the effect is:
import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer, sleep
from sympy import sieve

for warmup_step in range(0, 5):
    warmup = 100 ** warmup_step
    sieve._reset()                           # this resets the internal cache of the sieve
    _ = list(sieve.primerange(1, warmup))    # warming the sieve's cache
    _ = timer()                              # avoid initial delays from other elements of the code
    sleep(3)
    print('Start')

    times = []
    numbers = np.logspace(1, 8, 30)
    for n in numbers:
        tic = timer()
        _ = list(sieve.primerange(1, n))
        toc = timer()
        times.append(toc - tic)
        print(toc, n)                        # provide some visual feedback of speed

    plt.loglog(numbers, times, 'x-')
    plt.title(f'Warmup: {warmup}')
    plt.ylim(1e-6, 1e+1)                     # fix the y-axis, so the charts are easily comparable
    plt.grid()
    plt.show()
The lesson to be learned here is that you need to consider overhead, not only of your own code and the libraries you use, but also of the entire system around it: the Python VM, your IDE, whatever else is running on your workstation, the OS, and the hardware.
The test above is better, but if you want really nice results, run the whole thing a dozen times and average out the results over runs.
Results: [one plot per warmup value, omitted]
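As a minimal sketch of the "run it a dozen times and average" advice (not from the original answer; the run count is arbitrary), the per-N timings can be collected into an array and averaged before plotting:

import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer
from sympy import sieve

runs = 12
numbers = np.logspace(1, 8, 30)
all_times = np.zeros((runs, len(numbers)))

for run in range(runs):
    sieve._reset()                              # start each run from a cold cache
    for j, n in enumerate(numbers):
        tic = timer()
        _ = list(sieve.primerange(1, n))
        all_times[run, j] = timer() - tic

plt.loglog(numbers, all_times.mean(axis=0), 'x-')  # average out run-to-run noise
plt.grid()
plt.show()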

Is there a way to work around "recursion depth reached in comparison" when parallelizing a loop within a loop?

What the problem is about:
I am building an agent-based model with mesa & networkx in Python. In one line: the model tries to capture how changes in an agent's attitude influence whether or not they decide to adopt a new technology. I'm currently attempting to parallelize part of it to speed up the run time. The number of agents is currently 4,000. I keep hitting the following error message:
'if i < 256
Recursion depth reached in comparison'
The pseudo-code below outlines the process (after which I explain what I've tried, without success).
Initializes a model of 4000 agents
Gives each agent a set of agents to interact with at every time step, at two levels: a) geographic neighbours, b) 3 social circles.
For each interaction pair in the list, the agents' attitudes are compared and some modifications to the attitudes are made.
This process repeats for several time steps, with the results of one step carrying over to the next.
import pandas as pd
import multiprocessing as mp
import dill
from pathos.multiprocessing import ProcessingPool

def model_initialization():
    df = pd.read_csv(path + '4000_household_agents.csv')
    for agent in df:
        model.schedule.add(agent)
        # assign three circles of influence
        agent.social_circle1 = social_circle1
        agent.social_circle2 = social_circle2
        agent.social_circle3 = social_circle3

def assign_interactions():
    for agent in schedule.agents:
        # geographic neighbours
        neighbours = agent.get_neighbhours()
        interaction_list.append((agent, neighbours))
        # interaction in circles of influence
        interaction_list.append((agent, social_circle1))
        interaction_list.append((agent, social_circle2))
        interaction_list.append((agent, social_circle3))
    return interaction_list

def attitude_change(agent1, agent2):
    # compare attitudes
    if agent1.attitude > agent2.attitude:
        # make some change to attitudes
        agent1.attitude -= 0.2
        agent2.attitude += 0.2
    return agent1.attitude, agent2.attitude

def interactions(interaction):
    agent1 = interaction[0]
    agent2 = interaction[1]
    agent1.attitude, agent2.attitude = attitude_change(agent1, agent2)

def main():
    model_initialization()
    interaction_list = assign_interactions()
    # pool = mp.Pool(10)
    pool = ProcessingPool(10)
    # the interaction list can contain at least 89,000 interactions
    results = pool.map(interactions, [interaction for interaction in interaction_list])

# run this process several times
for i in range(12):
    main()
What I've tried
Because the model step is sequential, the only part I can parallelize is the interactions() function. Since the interaction loop is called more than 90,000 times, I raised sys.setrecursionlimit() to about 100,000. That fails.
I have broken the interaction list into chunks of 500 each and pooled the processes for each chunk (roughly as sketched below). Same error.
To see whether something was fundamentally wrong, I took only the first 35 elements (a small number) of the interaction list and ran just those. It still hits the recursion depth.
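For reference, the chunking attempt mentioned above looked roughly like this (a sketch only, reusing the interaction_list and interactions() names from the pseudo-code; the chunk and pool sizes match the description):

from pathos.multiprocessing import ProcessingPool

def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

pool = ProcessingPool(10)
for chunk in chunked(interaction_list, 500):
    pool.map(interactions, chunk)   # still raises the recursion-depth error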
Can anyone help me see which part of the code hits the recursion depth? I tried both dill + multiprocessing and multiprocessing alone; the latter gives a 'pickling error'.

Avoiding (or speeding up) large loop in Python?

I'm using SageMath to perform some mathematical calculations, and at one point I have a for loop that looks like this:
uni = {}
end = (l[idx]^(e[idx] - 1)) * (l[idx] + 1)  # where end in my case is about 2013265922,
                                            # but can also be much, much larger
for count in range(0, end):
    i = randint(1, 303325737249669131)  # this executes very fast in Sage
    if i in uni:
        uni[i] += 1
    else:
        uni[i] = 1
So basically, I want to generate a very large number of random integers in the given range and, for each one, check whether it is already in the dictionary: if yes, increment its count; if not, initialize it to 1. But the loop takes so long that it doesn't finish in a reasonable amount of time, not because the operations inside the loop are complicated, but because there is a huge number of iterations to perform. Therefore, I want to ask whether there is any way to avoid (or speed up) this kind of loop in Python.
I profiled your code (use cProfile for this), and the vast majority of the time is spent within the randint function, which is called on each iteration of the loop.
I recommend you vectorize the loop using NumPy's random number generation, followed by a single call to the Counter class to extract the frequency counts.
import numpy
from collections import Counter

assert 303325737249669131 < 18446744073709551615  # limit for uint64

# "end" is the iteration count from the question
numbers = numpy.random.randint(low=0, high=303325737249669131,
                               size=end, dtype=numpy.uint64)
frequency = Counter(numbers)
For a loop of 1,000,000 iterations (smaller than the one you suggest) I observed a reduction from 6 seconds to about 1 second, so even with this you cannot expect much more than an order of magnitude reduction in computation time.
You may think that keeping an array of all the values in memory is inefficient and may lead to memory exhaustion before the computation ends. However, because "end" is small compared with the range of the random integers, the rate at which you will record collisions is low, so the memory cost of the full array is not significantly larger than that of the dictionary. If it does become an issue, you may wish to perform the computation in batches. In the same spirit, you may also want to use the multiprocessing facilities to distribute the computation across many CPUs or even many machines (but watch out for network costs if you choose that).
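If memory does become an issue, a batched variant of the same idea is straightforward (a rough sketch; the batch size is arbitrary and end is the count from the question):

import numpy
from collections import Counter

end = 2013265922            # total number of samples, as in the question
batch_size = 10_000_000     # arbitrary; pick something that fits comfortably in RAM
frequency = Counter()

remaining = end
while remaining > 0:
    n = min(batch_size, remaining)
    batch = numpy.random.randint(low=0, high=303325737249669131,
                                 size=n, dtype=numpy.uint64)
    frequency.update(batch.tolist())   # fold this batch into the running counts
    remaining -= n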
The biggest speedup you can get without low-level magic is by using defaultdict, i.e.:
from collections import defaultdict

uni = defaultdict(int)
for count in range(0, end):
    i = randint(1, 303325737249669131)  # this executes very fast in Sage
    uni[i] += 1
If you're using Python 2, change range to xrange.
Apart from that, I'm pretty sure this is somewhere near the limit for pure Python. The loop is:
generating a random integer (optimized about as much as possible without static typing)
calculating a hash
updating the dict (with defaultdict, the if-else branching is factored out into more optimized code)
plus, from time to time, malloc calls to resize the dict; this is fast (considering the inability to preallocate memory for a dict)

Assert that one run is significantly faster than another

I need to write a functional test that asserts that one run is significantly faster than another.
Here is the code I have written so far:
def test_run5(self):
    cmd_line = ["python", self.__right_def_file_only_files]

    start = time.clock()
    with self.assertRaises(SystemExit):
        ClassName().run(cmd_line)
    end = time.clock()
    runtime1 = end - start

    start = time.clock()
    with self.assertRaises(SystemExit):
        ClassName().run(cmd_line)
    end = time.clock()
    runtime2 = end - start

    self.assertTrue(runtime2 < runtime1 * 1.4)
It works, but I don't like this approach because the 1.4 factor was chosen experimentally from my specific example of execution.
How would you test that the second execution is always faster than the first?
EDIT
I didn't think it would be necessary to explain, but in the context of my program it is not up to me to say what factor is significant for an unknown execution.
The whole program is a kind of Make, and it is the pipeline definition file that defines what a "significant difference of speed" is, not me:
If the definition file contains a lot of rules that are very fast, the difference in execution time between two consecutive executions will be very small, say 5% faster, but still significant.
If, instead, the definition file contains a few very long rules, the difference will be big, say 90% faster, so a difference of 5% would not be significant at all.
I found an equation, the Michaelis-Menten kinetics equation, that fits my needs. Here is the function that should do the trick:
def get_best_factor(full_exec_time, rule_count, maximum_ratio=1):
    average_rule_time = full_exec_time / rule_count
    return 1 + (maximum_ratio * average_rule_time / (1.5 + average_rule_time))
The full_exec_time parameter is runtime1, the maximum execution time for a given pipeline definition file.
rule_count is the number of rules in the given pipeline definition file.
maximum_ratio means that the second execution will be, at most, 100% faster than the first (impossible in practice).
The variable parameter of the Michaelis-Menten equation is the average rule execution time. I have arbitrarily chosen 1.5 seconds as the average rule execution time at which the execution should be maximum_ratio / 2 faster. That is the one parameter that depends on how you use this equation.
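To illustrate how the factor behaves (the timings and rule counts below are made-up numbers, not from a real pipeline):

def get_best_factor(full_exec_time, rule_count, maximum_ratio=1):
    average_rule_time = full_exec_time / rule_count
    return 1 + (maximum_ratio * average_rule_time / (1.5 + average_rule_time))

# Many fast rules: tiny average rule time, so the factor stays close to 1
# and only a small speed difference is treated as significant.
print(get_best_factor(full_exec_time=2.0, rule_count=200))   # ~1.007

# Few long rules: large average rule time, so the factor approaches
# 1 + maximum_ratio and a much bigger difference is expected.
print(get_best_factor(full_exec_time=90.0, rule_count=3))    # ~1.95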

Running time for python

Hi, I am trying to estimate the run time of my FFT code from NumPy with different input lengths N. The following is my code:
import cmath
import math
from random import uniform
from numpy.fft import fft
import time

for i in range(3, 10):
    N = 2**i
    x = [uniform(-32768, 32767) for i in range(N)]
    t0 = time.clock()
    X = fft(x)
    t1 = time.clock()
    print t1 - t0
This is the result I got. The first line, for the smallest input length (i=3, so N=8), should be the fastest one, but no matter how many times I run it, the first one is always the largest. I guess this is a problem with the timer, but I don't know the exact reason for it. Can anyone explain this to me?
Output:
4.8e-05
3e-05
1.7e-05
6e-05
3.1e-05
5.4e-05
9.6e-05
The time interval is too small to be measured accurately by time.clock(), as there is latency jitter in the OS call. Instead, do enough work (loop each FFT a few thousand or a few million times) that the code being measured takes a few seconds. Also repeat each measurement several times and take an average, as there may be other system overheads (cache flushes, process switches, etc.) that can vary the performance.
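A hedged sketch of that advice using timeit (the sizes, repeat counts, and number of inner calls are arbitrary):

import timeit
import numpy as np
from numpy.fft import fft

for i in range(3, 10):
    N = 2 ** i
    x = np.random.uniform(-32768, 32767, N)
    # Run the FFT many times per measurement, repeat the measurement a few
    # times, and keep the best result so OS jitter and warm-up are filtered out.
    calls = 10000
    best = min(timeit.repeat(lambda: fft(x), number=calls, repeat=5))
    print(N, best / calls)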
