Timing a Python program using time.clock() vs. time.time() [duplicate] - python

This question already has answers here:
Python's time.clock() vs. time.time() accuracy?
(16 answers)
Closed 6 years ago.
I am new to Python programming. I started working on Project Euler this morning and wanted to find out how long it takes to execute my solution. I searched online for a way to time my code and tried the two approaches below. Here is the version using time.clock():
import time

class Solution(object):
    def fibonacci(self, limit):
        sum = 0
        current = 1
        next = 2
        while current <= limit:
            if current % 2 == 0:
                sum += current
            current, next = next, current + next
        return str(sum)

if __name__ == "__main__":
    start = time.clock()
    solution = Solution().fibonacci(4000000)
    elapsed = time.clock() - start
    print("Solution: %s" % (solution))
    print("Time: %s seconds" % (elapsed))
Output:
Solution: 4613732
Time: 2.006085436846098e-05 seconds
And here is the same code using time.time():
import time

class Solution(object):
    def fibonacci(self, limit):
        sum = 0
        current = 1
        next = 2
        while current <= limit:
            if current % 2 == 0:
                sum += current
            current, next = next, current + next
        return str(sum)

if __name__ == "__main__":
    start = time.time()
    solution = Solution().fibonacci(4000000)
    elapsed = time.time() - start
    print("Solution: %s" % (solution))
    print("Time: %s seconds" % (elapsed))
Output:
Solution: 4613732
Time: 0.0 seconds
My questions are:
Is the time calculated above correct?
What is the difference between time.time() and time.clock()? When I use time.time(), I get 0.0 as the elapsed time.

In the Python time module, time.time() returns the wall-clock time in seconds since the epoch (January 1st, 1970). time.clock() is platform-dependent: on Windows it returns wall-clock seconds since the first call to the function, with high resolution, while on Unix it returns CPU time.
time.clock() has much finer resolution than time.time() on Windows (microseconds versus roughly 16 ms), which is why the first example shows a tiny but non-zero value while the second shows 0.0: the elapsed time is below time.time()'s resolution. Note, however, that time.clock() was deprecated in Python 3.3 and removed in 3.8; for timing code, prefer time.perf_counter().
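A minimal sketch of the same measurement using time.perf_counter(), which replaces the removed time.clock() and has the highest available resolution for short intervals (it reuses the Solution class from the question):
import time

start = time.perf_counter()
solution = Solution().fibonacci(4000000)
elapsed = time.perf_counter() - start
print("Solution: %s" % solution)
print("Time: %s seconds" % elapsed)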

Related

My multiprocessing threadpool takes longer to complete tasks than a single-threaded implementation

I have written an algorithm and am trying to compare the performance of different versions. My benchmark function uses a thread pool, but it takes the same time or longer to benchmark than a single-threaded implementation.
I have tried both PyPy and CPython 3.11, and the result is the same.
Method to benchmark:
def main(print_results=True):
    results = Queue()
    start_time = time.time()
    words = get_set_from_dict_file("usa.txt")
    results.put(f"Total words read: {len(words)}")
    results.put(f"Total time taken to read the file: {round((time.time() - start_time) * 1000)} ms")
    start_time_2 = time.time()
    pairs = getPairs(words)
    results.put(f"Number of words that can be built with 3 letter word + letter + 3 letter word: {len(pairs)}")
    results.put(f"Total time taken to find the pairs: {round((time.time() - start_time_2) * 1000)} ms")
    results.put(f"Time taken: {round((time.time() - start_time) * 1000)}ms")
    if print_results:
        [print(x) for x in results.queue]
    return (time.time() - start_time) * 1000
MultiThreaded Threadpool:
def benchmark(n=1000):
    # start number of threads equal to 90% of cores running main() using multiprocessing, continue until n runs complete
    core_count = os.cpu_count()
    thread_num = floor(core_count * 0.9)
    pool = ThreadPool(thread_num)
    results = pool.map_async(main, [False] * n)
    results = results.get()
    pool.close()
    avg_time_ms = round(sum(results) / len(results))
    # Save best run time and its code as a pickle file in format (time, code)
    # Currently hidden code
    return avg_time_ms, -1
Test:
if __name__ == "__main__":
    print("Do you want to benchmark? (y/n)")
    if input().upper() == "Y":
        print("Benchmark n times: (int)")
        n = input()
        n = int(n) if (n.isdigit() and 0 < int(n) <= 1000) else 100
        start = time.time()
        bench = benchmark(n)
        end = time.time()
        print("\n----------Multi-Thread Benchmark----------")
        print(f"Average time taken: {bench[0]} ms")
        print(f"Best time taken yet: {bench[1]} ms")
        print(f"Total bench time: {end - start:0.5} s")
        start = time.time()
        non_t_results = [main(False) for _ in range(n)]
        end = time.time()
        print("\n----------Single-Thread Benchmark----------")
        print(f"Average time taken: {round(sum(non_t_results) / len(non_t_results))} ms")
        print(f"Total bench time: {end - start:0.5} s")
    else:
        main()
Every time I run it, no matter the number of runs or threads in the pool, the pool never completes faster. Here is an example output:
Do you want to benchmark? (y/n)
y
Benchmark n times: (int)
50
----------Multi-Thread Benchmark----------
Average time taken: 276 ms
Best time taken yet: -1 ms
Total bench time: 2.2814 s
----------Single-Thread Benchmark----------
Average time taken: 36 ms
Total bench time: 1.91 s
Process finished with exit code 0
I expect the threadpool to finish faster.
It turns out I was using threads instead of processes. Thanks to the commenters, I now understand that multiprocessing.pool.ThreadPool runs the work in threads, which share the GIL and therefore only give concurrency for CPU-bound work, while multiprocessing.Pool runs it in separate processes that can actually execute in parallel.
Here was the changed benchmark:
def benchmark(n=1000):
    # start number of processes equal to 90% of cores running main() using multiprocessing, continue until n runs complete
    core_count = os.cpu_count()
    process_num = floor(core_count * 0.9)
    with Pool(process_num) as pool:
        results = pool.map_async(main, [False] * n)
        results = results.get()
    avg_time_ms = round(sum(results) / len(results))
    # Save best run time and its code as a pickle file in format (time, code)
    """..."""
    return avg_time_ms, -1
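A small self-contained sketch (illustrative names, not from the original code) showing why the switch matters: with a CPU-bound task, ThreadPool gains little because of the GIL, while Pool spreads the work across processes:
import time
from multiprocessing import Pool
from multiprocessing.pool import ThreadPool

def cpu_bound(_):
    # burn CPU so the GIL becomes the bottleneck for threads
    return sum(i * i for i in range(10**6))

if __name__ == "__main__":
    for pool_cls in (ThreadPool, Pool):
        start = time.perf_counter()
        with pool_cls(4) as pool:
            pool.map(cpu_bound, range(8))
        print(f"{pool_cls.__name__}: {time.perf_counter() - start:.2f} s")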

Time before full counter with percent python

How can I calculate, from a timestamp, the progress toward the maximum (full) value?
def full_energy():
    time_now = 1666650096  # changes every update
    time_end = 1666679529
    max_energy = 50
    diff = datetime.utcfromtimestamp(time_now) - datetime.utcfromtimestamp(time_end)
    secs = diff.total_seconds()
    ???
    # expected output
    # x/y (z)
    # 25/50 (50%)
How do I get the values of x and z from this sample?
Something like this will work. You need to provide the start time to compute percent completed. Not sure how you want the display to function:
from datetime import datetime, timedelta
import time

def full_energy(start, now, end):
    max_energy = 50
    percent = (now - start) / (end - start)
    current_energy = max_energy * percent
    # In a typical text console this will overwrite the same line
    print(f'\r{current_energy:.0f}/{max_energy} ({percent:.0%})', end='', flush=True)

start = datetime.now()
end = start + timedelta(seconds=10)
while (now := datetime.now()) <= end:
    time.sleep(.2)
    full_energy(start, now, end)
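Applied to the epoch timestamps from the question, the same formula produces the x/y (z) output directly. Note that time_start below is an assumed value (the question only gives time_now and time_end), chosen so that the window is exactly half elapsed:
max_energy = 50
time_start = 1666620663  # assumed start of the recharge window
time_now = 1666650096
time_end = 1666679529

percent = (time_now - time_start) / (time_end - time_start)
current_energy = round(max_energy * percent)
print(f"{current_energy}/{max_energy} ({percent:.0%})")  # -> 25/50 (50%)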

Create clock pulse with python

I want to work with a sleep time of exactly 20 ms. When I use time.sleep(0.02) I run into problems; it does not do what I want. To give an example:
import time

i = 0
end = time.time() + 10
while time.time() < end:
    i += 1
    time.sleep(0.02)
print(i)
We expect to see 500 in the console, but we get something like 320. That is a huge difference: each sleep takes slightly longer than 20 ms, the small deviations accumulate, and we end up with the wrong result.
So I then tried to build my clock-pulse project with time.time() instead. Is that possible?
import time

first_time = time.time() * 100  # convert seconds to 10 * milliseconds
first_time = int(first_time)    # convert to integer
first_if = first_time
second_if = first_time + 2      # for sleep 20ms
third_if = first_time + 4      # for sleep 40ms
fourth_if = first_time + 6      # for sleep 60ms
fifth_if = first_time + 8      # for sleep 80ms
end = time.time() + 8
i = 0
while time.time() < end:
    now = time.time() * 100  # convert seconds to 10 * milliseconds
    now = int(now)           # convert to integer
    if i == 0 and (now == first_if or now > first_if):
        print('1_' + str(now))
        i = 1
    if i == 1 and (now == second_if or now > second_if):
        print('2_' + str(now))
        i = 2
    if i == 2 and (now == third_if or now > third_if):
        print('3_' + str(now))
        i = 3
    if i == 3 and (now == fourth_if or now > fourth_if):
        print('4_' + str(now))
        i = 4
    if i == 4 and (now == fifth_if or now > fifth_if):
        print('5_' + str(now))
        break
Out >> 1_163255259009
2_163255259011
3_163255259013
4_163255259015
5_163255259017
Is this logic correct? And if it is, how can I finish the project with proper loops? I want these sleeps to happen continuously. Thanks in advance.
Let's say you want to count in increments of 20ms. You need to sleep for the portion of the loop that's not the comparison, increment, and print. Those operations take time, probably about 10ms based on your findings.
If you want to do it in a loop, you can't hard code all the possible end times. You need to do something more general, like taking a remainder.
Start with the time before the loop:
t0 = time.time()
while time.time() < end:
    i += 1
Now you need to figure out how long to sleep so that the time between t0 and the end of the sleep is a multiple of 20 ms.
(time.time() - t0) % 0.02 tells you how far past a 20 ms increment you are, because Python conveniently supports floating-point modulo. The amount of time to wait inside the loop is then
    time.sleep(0.02 - (time.time() - t0) % 0.02)
and after the loop you print the count:
print(i)
Using the sign rules of %, you can reduce the calculation to
    time.sleep(-(time.time() - t0) % 0.02)
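Putting those pieces together, a sketch of the asker's 10-second counter with the remainder-based sleep (variable names follow the question):
import time

i = 0
t0 = time.time()
end = t0 + 10
while time.time() < end:
    i += 1
    # sleep only the remainder of the current 20 ms slot, so the loop body's
    # own runtime no longer accumulates as drift
    time.sleep(-(time.time() - t0) % 0.02)
print(i)  # close to 500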

Python loop to run for certain amount of seconds

I have a while loop, and I want it to keep running through for 15 minutes. it is currently:
while True:
    # blah blah blah
(This runs through and then restarts. I need it to keep doing this, except that after 15 minutes it should exit the loop.)
Thanks!
Try this:
import time
t_end = time.time() + 60 * 15
while time.time() < t_end:
    # do whatever you do
This will run for 15 min x 60 s = 900 seconds.
The function time.time() returns the current time in seconds since 1st January 1970. The value is a float, so you can even use it with sub-second precision. At the start, t_end is calculated as "now" + 15 minutes, and the loop runs until the current time exceeds this preset ending time.
If I understand you, you can do it with a datetime.timedelta -
import datetime
endTime = datetime.datetime.now() + datetime.timedelta(minutes=15)
while True:
    if datetime.datetime.now() >= endTime:
        break
    # Blah
    # Blah
You can simply do it like this:
import time

delay = 60 * 15  # for a 15 minute delay
close_time = time.time() + delay
while True:
    # bla bla
    # bla bla
    if time.time() > close_time:
        break
For those using asyncio, an easy way is to use asyncio.wait_for():
import asyncio

async def my_loop():
    res = False
    while not res:
        res = await do_something()

await asyncio.wait_for(my_loop(), 10)
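A runnable sketch of that approach; do_something() here is a hypothetical stand-in coroutine, since the original snippet leaves it undefined:
import asyncio

async def do_something():
    await asyncio.sleep(0.1)
    return False  # pretend the work never finishes

async def my_loop():
    res = False
    while not res:
        res = await do_something()

async def main():
    try:
        await asyncio.wait_for(my_loop(), 10)  # give up after 10 seconds
    except asyncio.TimeoutError:
        print("my_loop() timed out after 10 seconds")

asyncio.run(main())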
I was looking for an easier-to-read time-loop when I encountered this question here. Something like:
for sec in max_seconds(10):
    do_something()
So I created this helper:
# allow easy time-boxing: 'for sec in max_seconds(42): do_something()'
import time

def max_seconds(max_seconds, *, interval=1):
    interval = int(interval)
    start_time = time.time()
    end_time = start_time + max_seconds
    yield 0
    while time.time() < end_time:
        if interval > 0:
            next_time = start_time
            while next_time < time.time():
                next_time += interval
            time.sleep(int(round(next_time - time.time())))
        yield int(round(time.time() - start_time))
        if int(round(time.time() + interval)) > int(round(end_time)):
            return
It only works with full seconds which was OK for my use-case.
Examples:
for sec in max_seconds(10) # -> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
for sec in max_seconds(10, interval=3) # -> 0, 3, 6, 9
for sec in max_seconds(7): sleep(1.5) # -> 0, 2, 4, 6
for sec in max_seconds(8): sleep(1.5) # -> 0, 2, 4, 6, 8
Be aware that interval isn't that accurate, as I only wait full seconds (sleep never was any good for me with times < 1 sec). So if your job takes 500 ms and you ask for an interval of 1 sec, you'll get called at: 0, 500ms, 2000ms, 2500ms, 4000ms and so on. One could fix this by measuring time in a loop rather than sleep() ...
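For anyone who needs sub-second accuracy, a small sketch (the helper name ticks is assumed, not from the answer above) that measures time in the loop and sleeps only the remaining fraction of each interval:
import time

def ticks(max_seconds, interval=1.0):
    start = time.monotonic()
    next_tick = start
    while time.monotonic() - start < max_seconds:
        yield time.monotonic() - start
        next_tick += interval
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)

for elapsed in ticks(5, interval=0.5):
    print(f"{elapsed:.2f}s")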
The solution with the best performance combines @DrV's answer with @jfs's suggestion to use time.monotonic():
import time
from datetime import datetime, timedelta

count = 0
end_time = time.monotonic() + 10
while time.monotonic() < end_time:
    count += 1
print(f'10 second result: {count=:,}')
# 10 second result: count=185,519,745

count = 0
end_time = time.time() + 10
while time.time() < end_time:
    count += 1
print(f'10 second result: {count=:,}')
# 10 second result: count=158,219,172

count = 0
end_time = datetime.now() + timedelta(seconds=10)
while datetime.now() < end_time:
    count += 1
print(f'10 second result: {count=:,}')
# 10 second result: count=39,168,578
try this:
import time
import os

n = 0
for x in range(10):  # enter your value here
    print(n)
    time.sleep(1)     # to wait a second
    os.system('cls')  # to clear previous number
    # use ('clear') if you are using linux or mac!
    n = n + 1

Python recursion timings in return statement

I am currently trying to time recursive factorials, and I cannot find a way around printing every factorial at each recursion step. I tried printing only in the return statement, which would solve my problem, but that just produced a wall of text with the timings fragmented.
EDIT: I should mention that I am trying to get the cumulative timing of the whole process, not fragmented results like the ones below from the print statement.
I tried something like:
return (str(n) + '! = ' + (str(FactResult)) +
        ' - Runtime = %.9f seconds' % (end - start))
But here is what I have below as of now.
import time

def factorial(n):
    """Factorial function that uses recursion and returns factorial of
    number given."""
    start = time.clock()
    if n < 1:
        return 1
    else:
        FactResult = n * factorial(n - 1)
        end = time.clock()
        print(str(n) + '! - Runtime = %.9f seconds' % (end - start))
        return FactResult
It seems to work fine after fixing the indentation and minor (cosmetic) changes:
import time

def factorial(n):
    """Factorial function that uses recursion and returns factorial of number given."""
    start = time.clock()
    if n < 1:
        return 1
    else:
        FactResult = n * factorial(n - 1)
        end = time.clock()
        print(str(n) + '! =', FactResult, '- Runtime = %.9f seconds' % (end - start))
        return FactResult

factorial(10)
It prints for me... without printing the result value:
c:\tmp\___python\BobDunakey\so12828669>py a.py
1! - Runtime = 0.000001440 seconds
2! - Runtime = 0.000288474 seconds
3! - Runtime = 0.000484790 seconds
4! - Runtime = 0.000690225 seconds
5! - Runtime = 0.000895181 seconds
6! - Runtime = 0.001097736 seconds
7! - Runtime = 0.001294052 seconds
8! - Runtime = 0.001487008 seconds
9! - Runtime = 0.001683804 seconds
10! - Runtime = 0.001884920 seconds
... and with printing the value:
c:\tmp\___python\BobDunakey\so12828669>py a.py
1! = 1 - Runtime = 0.000001440 seconds
2! = 2 - Runtime = 0.001313252 seconds
3! = 6 - Runtime = 0.002450827 seconds
4! = 24 - Runtime = 0.003409847 seconds
5! = 120 - Runtime = 0.004300708 seconds
6! = 720 - Runtime = 0.005694598 seconds
7! = 5040 - Runtime = 0.006678577 seconds
8! = 40320 - Runtime = 0.007579038 seconds
9! = 362880 - Runtime = 0.008463659 seconds
10! = 3628800 - Runtime = 0.009994826 seconds
EDIT
For the cumulative timing, you have to measure outside the call. Otherwise you are not able to capture the start time. It is also more natural:
import time

def factorial(n):
    """Factorial function that uses recursion and returns factorial of number given."""
    if n < 1:
        return 1
    else:
        return n * factorial(n - 1)

n = 10
start = time.clock()
result = factorial(n)
end = time.clock()
print(str(n) + '! =', result, '- Runtime = %.9f seconds' % (end - start))
It prints:
c:\tmp\___python\BobDunakey\so12828669>py a.py
10! = 3628800 - Runtime = 0.000007200 seconds
Move the end = time.clock() line and the print statement to just before the return 1 in the branch that handles n < 1. That is the last statement executed at the deepest point of the recursion stack, so all you will miss is the unwinding back out of it. For the most accurate result, follow NullUserException's suggestion and time outside the recursive function.
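Since time.clock() was deprecated in 3.3 and removed in Python 3.8, here is a sketch of the cumulative timing from the EDIT above using time.perf_counter() (same logic, modern clock):
import time

def factorial(n):
    """Return n! using recursion."""
    return 1 if n < 1 else n * factorial(n - 1)

n = 10
start = time.perf_counter()
result = factorial(n)
end = time.perf_counter()
print(str(n) + '! =', result, '- Runtime = %.9f seconds' % (end - start))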
