Python: how to extend execution time of code to 1 second

I'm writing a countdown clock in Python, but it looks like the time module only goes down to the second. Is there a way for me to accurately determine when exactly 1 second has passed?
It seems my question was a little confusing, so let me clarify: I need to run some code and then, at the end, have the program enter a while loop and exit once at least 1000 milliseconds have passed since the code started running.

If you know the code you want to run will take less than 1 second, then 1 - elapsed time will give you the remaining time to sleep, no while loop required.
now = time.time()
foo()
time.sleep(1 - (time.time() - now))
There will be some overhead from the arithmetic, but it's within 1/100 of a second, and the total elapsed time will be strictly greater than 1 second, as you request. I ran the following code:
import time
import random

def foo():
    time.sleep(random.random())

now = time.time()
foo()
time.sleep(1 - (time.time() - now))
print "Time elapsed: {}".format(time.time() - now)
Output:
Time elapsed: 1.00379300117
You can run this several times to verify it gives the output you want, no matter how long foo takes.
Unless it takes longer than 1 second: then the sleep time will be negative, which will raise an error (IOError on Python 2, ValueError on Python 3). You would need to check for that case, for example with the clamped version shown below.
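One way to guard against that (a minimal sketch; run_for_at_least_one_second is my own helper name, not from the answer) is to never sleep for a negative duration:
import time

def run_for_at_least_one_second(func):
    # Run func(), then sleep out whatever remains of the 1-second window
    now = time.time()
    func()
    remaining = 1 - (time.time() - now)
    # Clamp at zero so a slow func() does not trigger a negative-sleep error
    time.sleep(max(0, remaining))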
Or, if you need to kill the function if 1 second has passed, check this question

Here is a way that will work, though I'm not sure which modules you are limited to.
import time

def procedure():
    time.sleep(2.5)

# measure wall time
t0 = time.time()
procedure()
print time.time() - t0, "seconds wall time"
2.50023603439 seconds wall time
where procedure is a reference to the function you are timing.

By default, time.time() gives you the current time with roughly 10^-5 second resolution:
>>> import time
>>> time.time()
1480643510.89443

Related

How do I time python code, similar to unix time command? [duplicate]

I am working on a Python script that is going to be run from the command line. The idea is to get a command from the user, run it, and then report the wall-clock time and the CPU time of the command the user provided. See the code below.
#!/usr/bin/env python
import os
import sys

def execut_cmd(cmd_line):
    utime = os.system('time ' + cmd_line)
    # Here I would like to store the wall-clock time in the Python variable
    # utime.
    cputime = os.system('time ' + cmd_line)
    # Here the CPU time in the cputime variable. utime and cputime are going to
    # be used later in my Python script. In addition, I would like to silence the
    # output of time in the screen.

execut_cmd(sys.argv[1])
print('Your command wall-clock time is ' + utime)
print('Your command cpu time is ' + cputime)
How can I accomplish this? Also, if there is a better method than using 'time', I am open to trying it.
From Python Documentation for wall time:
... On Windows, time.clock() has microsecond granularity, but time.time()’s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise. On either platform, default_timer() measures wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing.
For wall time you can use timeit.default_timer() which gets the timer with best granularity described above.
From Python 3.3 onward you can use time.process_time() (and, from Python 3.7, time.process_time_ns()). Below is the documentation entry for the process_time method:
Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
To provide the current wall time, time.time() can be used to get the epoch time.
To provide the elapsed wall time, time.perf_counter() can be used at the start and end of the operation, with the difference between the results giving the elapsed time. The results cannot be used as an absolute time, since the reference point is undefined. As mentioned in other answers, you can use timeit.default_timer(), but as of Python 3.3 it always returns time.perf_counter().
To provide the elapsed CPU time, time.process_time() can be used in a similar manner to time.perf_counter(). This will provide the sum of the system and user CPU time.
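As a minimal sketch of the two measurements side by side (do_work is just a placeholder workload, and Python 3.3+ is assumed):
import time

def do_work():
    # placeholder workload
    return sum(i * i for i in range(10**6))

wall_start = time.perf_counter()
cpu_start = time.process_time()
do_work()
wall_elapsed = time.perf_counter() - wall_start   # elapsed wall-clock seconds
cpu_elapsed = time.process_time() - cpu_start     # user + system CPU seconds
print('wall: {:.4f}s, cpu: {:.4f}s'.format(wall_elapsed, cpu_elapsed))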
In the little time I have spent using the timing functions on Linux systems, I have observed that timeit.default_timer() and time.perf_counter() give numerically the same result.
Also, when measuring the duration of a time interval, timeit.default_timer(), time.perf_counter() and time.time() all give virtually the same result, so any of these functions can be used to measure the elapsed (wall) time of a process.
I should also mention that the difference between time.time() and the others is that it returns the current time in seconds since the epoch, which is 1 January 1970, 00:00 (UTC).
time.clock() and time.process_time() also give the same numerical value.
time.process_time() is the more suitable choice for measuring CPU time, since time.clock() was deprecated in Python 3.3 and removed in Python 3.8.
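Coming back to the original question (timing a command supplied by the user), here is one possible sketch, assuming Python 3 on a Unix-like system and using only the standard library: run the command with subprocess, take wall time from time.perf_counter(), and read the children's CPU time from os.times(). The helper name time_command is my own.
import os
import subprocess
import sys
import time

def time_command(cmd_line):
    # Returns (wall_seconds, child_cpu_seconds) for a shell command
    before = os.times()
    wall_start = time.perf_counter()
    # Output is silenced, as the question asks
    subprocess.call(cmd_line, shell=True,
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    wall = time.perf_counter() - wall_start
    after = os.times()
    # children_user/children_system accumulate CPU time of finished child processes
    cpu = (after.children_user - before.children_user) + \
          (after.children_system - before.children_system)
    return wall, cpu

if __name__ == '__main__':
    wall, cpu = time_command(sys.argv[1])
    print('Your command wall-clock time is {:.3f}s'.format(wall))
    print('Your command cpu time is {:.3f}s'.format(cpu))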

how do I make a Timer in Python

How do you create a timer in Python? My project is a speed typing test, and the timer is there to measure how long it takes the user to type. The first task has the user type the alphabet as fast as they can, and the second task is to type a set group of words, presented in a random order, again as quickly as possible.
The time module
The time module allows the user to directly get the time, in seconds, since 1970 (see: https://docs.python.org/3/library/time.html). This means that we can subtract the time before from the time after to see how long it has been, namely how long it took the user to finish the typing test. From there, it is as easy as printing the result. You can truncate the time with int() (or use round()) to get a whole-seconds result without the fractional part.
The code
# Import the time library
import time
# Calculate the start time
start = time.time()
# Code here
# Calculate the end time and time taken
end = time.time()
length = end - start
# Show the results: this can be altered however you like
print("It took", length, "seconds!")
You can use the built-in time library:
import time

strToType = "The cat is catching a mouse."
start_time = time.perf_counter()
print("Type: '" + strToType + "'.")
typedstring = input()
if typedstring == strToType:
    end_time = time.perf_counter()
    run_time = end_time - start_time
    print("You typed '" + strToType + "' in " + str(run_time) + " seconds.")

why execution time for this python code increases each call?

import time

word = {"success":0, "desire":0, "effort":0, ...}

def cleaner(x):
    dust = ",./<>?;''[]{}\=+_)(*&^%$##!`~"
    for letter in x:
        if letter in dust:
            x = x[0:x.index(letter)] + x[x.index(letter)+1:]
        else:
            pass
    return x  # alhamdlillah it worked 31.07.12

print "input text to analyze"
itext = cleaner(raw_input()).split()
t = time.clock()
for iword in itext:
    if iword in word:
        word[iword] += 1
    else:
        pass
print t
print len(itext)
Every time I call the code, t increases. Can anyone explain the underlying concept/reason behind this, perhaps in terms of system processes? Thank you very much, programming lads.
Because you're printing out the current time each time you run the script.
That's how time works: it advances, constantly.
If you want to measure the time taken for your for loop (between the first call to time.clock() and the end), print out the difference in times:
print time.clock() - t
You are printing the current time... of course it increases every time you run the code.
From the python documentation for time.clock():
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of “processor time”, depends on that of the C function
of the same name, but in any case, this is the function to use for
benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
time.clock() returns the elapsed CPU time since the process was created. CPU time is based on how many cycles the CPU spent in the context of the process. It is a monotonic function during the lifetime of a process, i.e. if you call time.clock() several times in the same execution, you will get a list of increasing numbers. The difference between two successive invocations of clock() could be less than or more than the elapsed wall-clock time, depending on whether the CPU was not running at 100% (e.g. there was some waiting for I/O) or whether you have a multithreaded program consuming more than 100% of CPU time (e.g. a multicore CPU with 2 threads using 75% each gives you 150% of the wall-clock time). But if you call clock() once in one process and then rerun the program, you might get a lower value than before, if the new process takes less time to process its input.
What you should be doing instead is to use time.time() which returns the current Unix timestamp with fractional (subsecond) precision. You should call it once before the processing is started and once after that and subtract the two values in order to compute the wall-clock time elapsed between the two invocations.
Note that on Windows time.clock() returns the elapsed wall-clock time since the process was started. It is like calling time.time() immediately at the beginning of the script and then subtracting the value from later calls to time.time().
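Applied to the loop from the question (word and itext as defined there), a minimal sketch of that wall-clock measurement looks like this:
import time

start = time.time()            # wall-clock timestamp before the loop
for iword in itext:
    if iword in word:
        word[iword] += 1
elapsed = time.time() - start  # wall-clock seconds spent in the loop
print(elapsed)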
There is a really good library called jackedCodeTimerPy that works better than the time module. It also has some clever error checking so you may want to try it out.
Using jackedCodeTimerPy your code should look like this:
# import time
from jackedCodeTimerPY import JackedTiming
JTimer = JackedTiming()

word = {"success":0, "desire":0, "effort":0}

def cleaner(x):
    dust = ",./<>?;''[]{}\=+_)(*&^%$##!`~"
    for letter in x:
        if letter in dust:
            x = x[0:x.index(letter)] + x[x.index(letter)+1:]
        else:
            pass
    return x  # alhamdlillah it worked 31.07.12

print "input text to analyze"
itext = cleaner(raw_input()).split()

# t = time.clock()
JTimer.start('timer_1')
for iword in itext:
    if iword in word:
        word[iword] += 1
    else:
        pass
# print t
JTimer.stop('timer_1')
print JTimer.report()
print len(itext)
It gives really good reports like
label      min          max          mean         total        run count
-------    -----------  -----------  -----------  -----------  ---------
imports    0.00283813   0.00283813   0.00283813   0.00283813   1
loop       5.96046e-06  1.50204e-05  6.71864e-06  0.000335932  50
I like how it gives you statistics on it and the number of times the timer is run.
It's simple to use. If I want to measure the time code takes in a for loop, I just do the following:
from jackedCodeTimerPY import JackedTiming
JTimer = JackedTiming()

for i in range(50):
    JTimer.start('loop')  # 'loop' is the name of the timer
    doSomethingHere = 'This is really useful!'
    JTimer.stop('loop')

print(JTimer.report())  # prints the timing report
You can also have multiple timers running at the same time.
JTimer.start('first timer')
JTimer.start('second timer')
do_something = 'amazing'
JTimer.stop('first timer')
do_something = 'else'
JTimer.stop('second timer')
print(JTimer.report()) # prints the timing report
There are more usage examples in the repo. Hope this helps.
https://github.com/BebeSparkelSparkel/jackedCodeTimerPY

execution time in python

From a Python program, I am sending a command to a specific device, and that device responds to the command. Now I have to calculate the time between send and receive (i.e. how long it takes to get a response to the command).
Ex.
device ip - 10.0.0.10
Transmitting the 'L004' command from our local system to 10.0.0.10.
Receiving the 'L' response from 10.0.0.10.
So now I have to calculate the time difference between the start time and the end time.
Please suggest an API through which I can calculate this.
import time
t1 = time.time()
# some time-demanding operations
t2 = time.time()
print "operation took around {0} seconds to complete".format(t2 - t1)
time.time() returns the current unix timestamp as a float number. Store this number at given points of your code and calculate the difference. You will get the time difference in seconds (and fractions).
The timeit standard module makes it easy to do this kind of task.
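For example (a sketch; send_command is a hypothetical stand-in for the code that actually talks to the device), timeit can time a callable directly:
import timeit

def send_command():
    pass  # placeholder: send 'L004' and wait for the 'L' response

# Time a single round trip; raise number= to average over repeated calls
elapsed = timeit.timeit(send_command, number=1)
print('round trip took {:.6f} seconds'.format(elapsed))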
Just use the "timeit" module. It works with both Python 2 and Python 3.
import timeit

start = timeit.default_timer()
# ALL THE PROGRAM STATEMENTS
stop = timeit.default_timer()
execution_time = stop - start

print("Program executed in " + str(execution_time))  # prints the time in seconds
It returns the execution time in seconds. It is simple, but you should put these statements in the main function that starts program execution. If you want to get the execution time even when an error occurs, pass the start time in as a parameter and calculate it there, like this:
def sample_function(start, **kwargs):
    try:
        # your statements
        pass
    except:
        # except statements
        stop = timeit.default_timer()
        execution_time = stop - start

Run a python function every second

What I want is to be able to run a function every second, irrelevant of how long the function takes (it should always be under a second). I've considered a number of options but not sure which is best.
If I just use the delay function it isn't going to take into account the time the function takes to run.
If I time the function and then subtract that from a second and make up the rest in the delay it's not going to take into account the time calculations.
I tried using threading.Timer (I'm not sure about the ins and outs of how it works), but it did seem to run slower than once per second.
Here's the code I tried for testing threading.Timer:
import sys
import threading

def update(i):
    sys.stdout.write(str(i) + '\r')
    sys.stdout.flush()
    print i
    i += 1
    threading.Timer(1, update, [i]).start()
Is there a way to do this irrelevant of the length of the time the function takes?
This will do it, and its accuracy won't drift with time.
import time

start_time = time.time()
interval = 1
for i in range(20):
    # Sleep until the next scheduled tick; clamp at 0 in case we are already late
    time.sleep(max(0, start_time + i * interval - time.time()))
    f()
The approach using a threading.Timer (see code below) should in fact not be used, as a new thread is launched at every interval and this loop can never be stopped cleanly.
# as seen here: https://stackoverflow.com/a/3393759/1025391
def update(i):
    threading.Timer(1, update, [i+1]).start()
    # business logic here
If you want a background loop, it is better to launch a single new thread that runs the loop, as described in the other answer, and that is able to receive a stop signal so that you can join() the thread eventually. A sketch of that pattern follows below.
This related answer seems to be a great starting point to implement this.
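A minimal sketch of such a stoppable background loop (stop_event and worker are my own names, not from the linked answer; f() is the function from the question):
import threading

stop_event = threading.Event()

def worker():
    # Call f() roughly once per second until stop_event is set
    while not stop_event.wait(timeout=1):
        f()

t = threading.Thread(target=worker)
t.start()
# ... later, to shut the loop down cleanly:
stop_event.set()
t.join()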
If f() always takes less than a second, then to run it on a one-second boundary (without drift):
import time

while True:
    time.sleep(1 - time.monotonic() % 1)
    f()
The idea is from #Dave Rove's answer to a similar question.
To understand how it works, consider an example:
1. time.monotonic() returns 13.7 and time.sleep(0.3) is called.
2. f() is called at around (± some error) 14 seconds (since the time.monotonic() epoch).
3. f() runs and takes 0.1 (< 1) seconds.
4. time.monotonic() returns around 14.1 seconds and time.sleep(0.9) is called.
5. Step 2 is repeated at around 15 seconds (since the time.monotonic() epoch).
6. f() runs and takes 0.3 (< 1) seconds (note: the duration is different from step 3).
7. time.monotonic() returns around 15.3 and time.sleep(0.7) is called.
8. f() is called around 16 seconds and the loop repeats.
At each step f() is called on a one second boundary (according to time.monotonic() timer). The errors do not accumulate. There is no drift.
See also: How to run a function periodically in python (using tkinter).
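A hedged sketch of the tkinter variant mentioned in that link (root and tick are my own names; f() is the function from the question). Note that, unlike the monotonic-boundary loop above, after(1000, ...) reschedules relative to when the callback runs, so a small drift can accumulate:
import tkinter

root = tkinter.Tk()
root.withdraw()  # no window needed, only the event loop

def tick():
    f()                     # the function from the question
    root.after(1000, tick)  # reschedule in 1000 ms

root.after(1000, tick)
root.mainloop()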
How about this: after each run, sleep for (1.0 - elapsed time) seconds. You can change the termination condition by changing while True:. Note that if your function takes more than 1 second to run, this will go wrong (the sleep duration becomes negative).
from time import time, sleep

while True:
    startTime = time()
    yourFunction()
    endTime = time() - startTime
    sleep(1.0 - endTime)
Threading may be a good choice. The basic concept is as follows.
import threading

def looper():
    # i is the interval in seconds
    threading.Timer(i, looper).start()
    # put your action here
    foo()

# to start
looper()
I would like to recommend the following code. You can replace the True with any condition if you want.
import time

while True:
    time.sleep(1)  # sleep for 1 second
    func()         # the function you want to trigger
Tell me if it works.
