Why does the time.time() result decrease to 50.85 seconds? [duplicate] - python

I want to log how long something takes in real walltime. Currently I'm doing this:
startTime = time.time()
someSQLOrSomething()
print "That took %.3f seconds" % (time.time() - startTime)
But that will fail (produce incorrect results) if the time is adjusted while the SQL query (or whatever it is) is running.
I don't want to just benchmark it. I want to log it in a live application in order to see trends on a live system.
I want something like clock_gettime(CLOCK_MONOTONIC,...), but in Python. And preferably without having to write a C module that calls clock_gettime().

That function is simple enough that you can use ctypes to access it:
#!/usr/bin/env python
__all__ = ["monotonic_time"]

import ctypes, os

CLOCK_MONOTONIC_RAW = 4  # see <linux/time.h>

class timespec(ctypes.Structure):
    _fields_ = [
        ('tv_sec', ctypes.c_long),
        ('tv_nsec', ctypes.c_long),
    ]

librt = ctypes.CDLL('librt.so.1', use_errno=True)
clock_gettime = librt.clock_gettime
clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(timespec)]

def monotonic_time():
    t = timespec()
    if clock_gettime(CLOCK_MONOTONIC_RAW, ctypes.pointer(t)) != 0:
        errno_ = ctypes.get_errno()
        raise OSError(errno_, os.strerror(errno_))
    return t.tv_sec + t.tv_nsec * 1e-9

if __name__ == "__main__":
    print monotonic_time()

Now, in Python 3.3 you would use time.monotonic.
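For example (a minimal sketch, assuming Python 3.3 or later; someSQLOrSomething() is the placeholder operation from the question):
import time

start = time.monotonic()
someSQLOrSomething()               # the operation being timed, from the question
print("That took %.3f seconds" % (time.monotonic() - start))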

As pointed out in this question, avoiding NTP readjustments on Linux requires CLOCK_MONOTONIC_RAW. That's defined as 4 on Linux (since 2.6.28).
Portably getting the correct constant #defined in a C header from Python is tricky; there is h2py, but that doesn't really help you get the value at runtime.
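As an aside, recent Pythons expose both the call and the constant directly, so the hard-coded 4 is no longer needed (a sketch, assuming Python 3.3+ on Linux):
import time

# Both the function and the constant come from the standard library,
# so the value from <linux/time.h> never has to be duplicated.
t = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
print(t)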

Here's how I get monotonic time in Python 2.7:
Install the monotonic package:
pip install monotonic
Then in Python:
import monotonic
mtime = monotonic.monotonic  # now mtime() can be used in place of time.time()
t0 = mtime()
# ...do something
elapsed = mtime() - t0  # correct elapsed time, even if the system clock changed
EDIT: check that the above works on your target OS before trusting it. The monotonic library seems to handle clock changes in some OSes and not others.
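If you need a single code path that works on both Python 2.7 and 3.3+, a sketch along these lines may help (it assumes the monotonic package is installed for the Python 2 case):
try:
    from time import monotonic as mtime        # Python 3.3+
except ImportError:
    from monotonic import monotonic as mtime   # pip install monotonic

t0 = mtime()
# ... do something ...
elapsed = mtime() - t0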

time.monotonic() might be useful:
Return the value (in fractional seconds) of a monotonic clock, i.e. a clock that cannot go backwards. The clock is not affected by system clock updates. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.

Related

How to time out python function after X milliseconds?

I'm calling a function A(i) in Python. I want it to be terminated if it executes for more than X milliseconds. I've looked at some ways of timing out, including signal.alarm(), but they all take the time in integral seconds. I want to do something like:
signal.alarm(0.26)  # time out after 0.26 seconds
How do I do this?
Aside from the solution I linked as a possible duplicate, you could also use ctypes to use ualarm if your system supports it.
For example, on macOS:
import ctypes
libc = ctypes.CDLL("libc.dylib")
libc.ualarm(10000, 0) # 10,000 microseconds
signal.setitimer accepts fractional seconds but requires Unix; a rough sketch follows.
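This is a hedged sketch of that approach (Python 3 shown; the handler name is illustrative, and A(i) comes from the question):
import signal

def _timeout_handler(signum, frame):
    raise TimeoutError("A(i) took longer than 0.26 seconds")

signal.signal(signal.SIGALRM, _timeout_handler)
signal.setitimer(signal.ITIMER_REAL, 0.26)   # SIGALRM fires after 0.26 s
try:
    A(i)                                     # the function from the question
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer if A(i) finished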
Otherwise you can use time.time():
import time

start = time.time()
now = time.time()
while now - start < 0.26:  # 0.26 seconds
    now = time.time()
print(now - start)
You can use a decorator, put your function inside the while loop, or raise an exception when the timer ends (among other options).
You may have to thread the timer.
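A hedged sketch of the threaded variant (A and i come from the question; note that the worker thread cannot be forcibly killed, only abandoned):
import threading

result = {}

def _worker():
    result["value"] = A(i)        # A and i come from the question

t = threading.Thread(target=_worker)
t.daemon = True                   # don't let the worker keep the process alive
t.start()
t.join(0.26)                      # wait at most 0.26 seconds
if t.is_alive():
    print("A(i) timed out")       # the worker keeps running in the background
else:
    print("result:", result["value"])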

time.time() drift over repeated calls

I am getting a timestamp every time a key is pressed like this:
init_timestamp = time.time()
while (True):
    c = getch()
    offset = time.time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)
(getch from this answer).
I am verifying the timestamps against an audio recording of me actually typing the keys. After lining the first timestamp up with the waveform, subsequent timestamps drift slightly but consistently. By this I mean that the saved timestamps are later than the keypress waveforms and get later and later as time goes on.
I am reasonably sure the waveform timing is correct (i.e. the recording is not fast or slow), because in the recording I also included the ticking of a very accurate clock which lines up perfectly with the second markers.
I am aware that there are unavoidable limits to the accuracy of time.time(), but this does not seem to account for what I'm seeing - if it was equally wrong on both sides that would be acceptable, but I do not want it to gradually diverge more and more from the truth.
Why would I be seeing this drifting behaviour and what can I do to avoid it?
Just solved this by using time.monotonic() instead of time.time(). time.time() seems to use gettimeofday (at least here it does) which is apparently really bad for measuring walltime differences because of NTP syncing issues:
gettimeofday() and time() should only be used to get the current time if the current wall-clock time is actually what you want. They should never be used to measure time or schedule an event X time into the future.
You usually aren't running NTP on your wristwatch, so it probably won't jump a second or two (or 15 minutes) in a random direction because it happened to sync up against a proper clock at that point. Good NTP implementations try to not make the time jump like this. They instead make the clock go faster or slower so that it will drift to the correct time. But while it's drifting you either have a clock that's going too fast or too slow. It's not measuring the passage of time properly.
(link). So basically measuring differences between time.time() calls is a bad idea.
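Applied to the loop from the question, only the clock changes (a sketch assuming Python 3.3+; getch and f are as in the question):
import time

init_timestamp = time.monotonic()      # immune to NTP slewing and clock steps
while True:
    c = getch()
    offset = time.monotonic() - init_timestamp
    print("%s,%s" % (c, offset), file=f)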
Depending on which OS you are using, you will need either time.time() or time.clock().
On Windows you will need to use time.clock; this gives you wall-clock seconds as a float. If I remember correctly, time.time() on Windows is only accurate to within about 16 ms.
On POSIX systems (Linux, OS X) you should be using time.time(); it returns the number of seconds since the epoch as a float.
In your code, add the following to make your application a little more cross-system compatible:
import os

if os.name == 'posix':
    from time import time as get_time
else:
    from time import clock as get_time

# now use get_time() to return the timestamp
init_timestamp = get_time()
while (True):
    c = getch()
    offset = get_time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)
    ...

Square waveform using Python and pyparallel

I want to generate square clock waveform to external device.
I use python 2.7 with Windows 7 32bit on an old PC with a LPT1 port.
The code is simple:
import parallel
import time

p = parallel.Parallel()  # open LPT1
x = 0
while (x == 0):
    p.setData(0xFF)
    time.sleep(0.0005)
    p.setData(0x00)
I do see the square wave (using a scope), but not with the expected period.
I will be grateful for any help.
It gives the expected behaviour for a while... I will continue to reduce the times:
import parallel
import time

x = 0
while (x < 2000):
    p = parallel.Parallel()
    time.sleep(0.01)  # open LPT1
    p.setData(0xFF)
    p = parallel.Parallel()  # open LPT1
    time.sleep(0.01)
    p.setData(0x00)
    x = x + 1
Generating signals like that is hard. One reason is that when the process sleeps, it is only rescheduled some time after the sleep interval has expired, not at the exact moment it expires.
Found this post about sleep precision with an accepted answer that is great:
How accurate is python's time.sleep()?
another source of information: http://www.pythoncentral.io/pythons-time-sleep-pause-wait-sleep-stop-your-code/
What this information tells you is that on Windows the minimum sleep is roughly 10 ms, while on Linux it is approximately 1 ms, though it may vary.
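A quick way to check the granularity on your own machine (a rough sketch; it assumes Python 3.3+ for time.perf_counter(), and the numbers will vary with OS and load):
import time

requested = 0.0005                       # the 0.5 ms period from the question
worst = 0.0
for _ in range(100):
    t0 = time.perf_counter()
    time.sleep(requested)
    worst = max(worst, time.perf_counter() - t0)
print("requested %.4f s, worst observed %.4f s" % (requested, worst))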
Update
I made a function that makes it possible to sleep for less than 10 ms, but the precision is very sketchy.
In the attached code I included a test that shows how the precision behaves. If you want higher precision, I strongly recommend you read the links in my original answer.
from time import time, sleep
import timeit

def timer_sleep(duration):
    """ timer_sleep() sleeps for a given duration in seconds
    """
    stop_time = time() + duration
    while (time() - stop_time) < 0:
        # Throw in something that will take a little time to process.
        # According to measurements from the comments, it takes approx.
        # 2 microseconds to handle this one.
        sleep(0)

if __name__ == "__main__":
    for u_time in range(1, 100):
        u_constant = 1000000.0
        duration = u_time / u_constant
        result = timeit.timeit(stmt='timer_sleep({time})'.format(time=duration),
                               setup="from __main__ import timer_sleep",
                               number=1)
        print('===== RUN # {nr} ====='.format(nr=u_time))
        print('Returns after \t{time:.10f} seconds'.format(time=result))
        print('It should take\t{time:.10f} seconds'.format(time=duration))
Happy hacking

Use python to test the load time of windows applications

I want to use python to test the time it takes for various windows applications (one being Acrobat Reader X) to load a file.
I can test the load time in opening and closing of files within Python code.
I know how to start a subprocess using Python and open a windows application that way, but that is not useful in this context because python calls the subprocess and continues through the script so the timer always reads 0.
Is there another method I can use to open a windows application, test its state (has the file loaded?), and time that whole process?
There are different timing functions, with different semantics.
time.clock() returns the time elapsed in the current process, while time.time() returns the number of seconds since the epoch. This means that if you want to time another process you should use time.time() and not time.clock().
Example:
>>> import time
>>> import subprocess
>>> def time_time():
...     t1 = time.time()
...     subprocess.call(['python', 'a.py'])
...     return time.time() - t1
...
>>> def time_clock():
...     t1 = time.clock()
...     subprocess.call(['python', 'a.py'])
...     return time.clock() - t1
...
>>> time_time()
0.12334513664245605
>>> time_clock()
0.0
There probably exist better solutions (*); also take PEP 418 into account for Python 3.3.
I'd also like to point out that it's better to use the profile/cProfile or hotshot modules to profile code. They give you a lot more information about timings.
(*) time.time() is affected if the system administrator changes the computer's clock, so it is not guaranteed that time.time() - time.time() will return a value greater than or equal to zero; and even if it is positive, you cannot be sure the timing is correct.
In normal situations, where the administrator won't change the time while you are profiling, this won't happen.

accurately measure time python function takes

I need to measure the time certain parts of my program take (not for debugging but as a feature in the output). Accuracy is important because the total time will be a fraction of a second.
I was going to use the time module when I came across timeit, which claims to avoid a number of common traps for measuring execution times. Unfortunately it has an awful interface, taking a string as input which it then eval's.
So, do I need to use this module to measure time accurately, or will time suffice? And what are the pitfalls it refers to?
Thanks
According to the Python documentation, it has to do with the accuracy of the time function in different operating systems:
The default timer function is platform dependent. On Windows, time.clock() has microsecond granularity but time.time()'s granularity is 1/60th of a second; on Unix, time.clock() has 1/100th of a second granularity and time.time() is much more precise. On either platform, the default timer functions measure wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing ... On Unix, you can use time.clock() to measure CPU time.
To pull directly from timeit.py's code:
if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time
In addition, it deals directly with setting up the runtime code for you. If you use time you have to do that yourself, so this, of course, saves you time.
Timeit's setup:
def inner(_it, _timer):
    # Your setup code
    %(setup)s
    _t0 = _timer()
    for _i in _it:
        # The code you want to time
        %(stmt)s
    _t1 = _timer()
    return _t1 - _t0
Python 3:
Since Python 3.3 you can use time.perf_counter() (system-wide timing) or time.process_time() (process-wide timing), just the way you used to use time.clock():
from time import process_time
t = process_time()
#do some stuff
elapsed_time = process_time() - t
The new function process_time will not include time elapsed during sleep.
Python 3.7+:
Since Python 3.7 you can also use process_time_ns(), which is similar to process_time() but returns the time in nanoseconds.
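A small sketch showing the difference between the two clocks (the sleep is visible to perf_counter but not to process_time):
from time import perf_counter, process_time, sleep

w0, c0 = perf_counter(), process_time()
sleep(1)                            # idle time: counted by perf_counter only
sum(i * i for i in range(10**6))    # CPU work: counted by both clocks
w1, c1 = perf_counter(), process_time()
print("wall clock: %.3f s" % (w1 - w0))
print("CPU time:   %.3f s" % (c1 - c0))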
You could build a timing context (see PEP 343) to measure blocks of code pretty easily.
from __future__ import with_statement
import time

class Timer(object):
    def __enter__(self):
        self.__start = time.time()

    def __exit__(self, type, value, traceback):
        # Error handling here
        self.__finish = time.time()

    def duration_in_seconds(self):
        return self.__finish - self.__start

timer = Timer()

with timer:
    # Whatever you want to measure goes here
    time.sleep(2)

print timer.duration_in_seconds()
The timeit module looks like it's designed for doing performance testing of algorithms, rather than as simple monitoring of an application. Your best option is probably to use the time module, call time.time() at the beginning and end of the segment you're interested in, and subtract the two numbers. Be aware that the number you get may have many more decimal places than the actual resolution of the system timer.
I was also annoyed by the awful interface of timeit, so I made a library for this; check it out, it's trivial to use:
from pythonbenchmark import compare, measure
import time

a, b, c, d, e = 10, 10, 10, 10, 10
something = [a, b, c, d, e]

def myFunction(something):
    time.sleep(0.4)

def myOptimizedFunction(something):
    time.sleep(0.2)

# comparing test with an input argument
compare(myFunction, myOptimizedFunction, 10, something)
# without input
compare(myFunction, myOptimizedFunction, 100)
https://github.com/Karlheinzniebuhr/pythonbenchmark
Have you reviewed the functionality provided by profile or cProfile?
http://docs.python.org/library/profile.html
This provides much more detailed information than just printing the time before and after a function call. Maybe worth a look...
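For instance (a minimal sketch; my_function is just an illustrative stand-in for the code you want to profile):
import cProfile

def my_function():
    return sum(i * i for i in range(100000))

# Prints per-call statistics sorted by cumulative time
cProfile.run('my_function()', sort='cumulative')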
The documentation also mentions that time.clock() and time.time() have different resolution depending on platform. On Unix, time.clock() measures CPU time as opposed to wall clock time.
timeit also disables garbage collection when running the tests, which is probably not what you want for production code.
I find that time.time() suffices for most purposes.
From Python 2.6 on, timeit is no longer limited to a string input. Citing the documentation:
Changed in version 2.6: The stmt and setup parameters can now also take objects that are callable without arguments. This will embed calls to them in a timer function that will then be executed by timeit(). Note that the timing overhead is a little larger in this case because of the extra function calls.
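For example, with a zero-argument callable (a minimal sketch):
import timeit

def work():
    return sum(i * i for i in range(1000))

elapsed = timeit.timeit(work, number=10000)   # no string or eval needed
print("%.6f seconds for 10000 calls" % elapsed)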
