I have two functions and need to compare them for efficiency (which one is faster). What is the best way to do it?
The simplest way is to use the time() function from the time module:
import time
start = time.time()
my_function()  # this is the task I want to time
end = time.time()
print(end - start)
What about using something like this?
import time
start = time.time()
print("hello")
end = time.time()
print(end - start)
Based on the solution provided here: Solution
It depends on how expensive your function is. If it is something simple and you want to compare several functions, you should run each of them many times:
import time

t0 = time.time()
for i in range(1, 10000):
    yourfunction()
t1 = time.time()
for i in range(1, 10000):
    yourotherfunction()
t2 = time.time()
print(t1 - t0, t2 - t1)
You want the timeit function. It will run your test case a number of times and give back the timings. You will often see people quoting the results from timeit when they are doing performance comparisons between different approaches.
You can find the docs on it here
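For example, here is a minimal sketch comparing two placeholder functions with timeit.timeit (substitute your own functions for my_function and my_other_function):
import timeit

# Two placeholder functions; replace them with the ones you want to compare.
def my_function():
    return sum(range(100))

def my_other_function():
    return sum(i for i in range(100))

# timeit accepts a callable: each one is run 10,000 times and the total elapsed time is returned.
t1 = timeit.timeit(my_function, number=10000)
t2 = timeit.timeit(my_other_function, number=10000)
print(t1, t2)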
I want to log how long something takes in real walltime. Currently I'm doing this:
startTime = time.time()
someSQLOrSomething()
print "That took %.3f seconds" % (time.time() - startTime)
But that will fail (produce incorrect results) if the time is adjusted while the SQL query (or whatever it is) is running.
I don't want to just benchmark it. I want to log it in a live application in order to see trends on a live system.
I want something like clock_gettime(CLOCK_MONOTONIC,...), but in Python. And preferably without having to write a C module that calls clock_gettime().
That function is simple enough that you can use ctypes to access it:
#!/usr/bin/env python
__all__ = ["monotonic_time"]
import ctypes, os
CLOCK_MONOTONIC_RAW = 4 # see <linux/time.h>
class timespec(ctypes.Structure):
    _fields_ = [
        ('tv_sec', ctypes.c_long),
        ('tv_nsec', ctypes.c_long)
    ]

librt = ctypes.CDLL('librt.so.1', use_errno=True)
clock_gettime = librt.clock_gettime
clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(timespec)]

def monotonic_time():
    t = timespec()
    if clock_gettime(CLOCK_MONOTONIC_RAW, ctypes.pointer(t)) != 0:
        errno_ = ctypes.get_errno()
        raise OSError(errno_, os.strerror(errno_))
    return t.tv_sec + t.tv_nsec * 1e-9

if __name__ == "__main__":
    print monotonic_time()
Now, in Python 3.3 you would use time.monotonic.
As pointed out in this question, avoiding NTP readjustments on Linux requires CLOCK_MONOTONIC_RAW. That's defined as 4 on Linux (since 2.6.28).
Portably getting the correct constant #defined in a C header from Python is tricky; there is h2py, but that doesn't really help you get the value at runtime.
Here's how I get monotonic time in Python 2.7:
Install the monotonic package:
pip install monotonic
Then in Python:
import monotonic
mtime = monotonic.monotonic  # now mtime() can be used in place of time.time()

t0 = mtime()
# ...do something
elapsed = mtime() - t0  # gives the correct elapsed time, even if the system clock changed
EDIT: check that the above works on your target OS before trusting it. The monotonic library seems to handle clock changes in some OSes and not others.
time.monotonic() might be useful:
Return the value (in fractional seconds) of a monotonic clock, i.e. a clock that cannot go backwards. The clock is not affected by system clock updates. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
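For instance, a minimal sketch using time.monotonic() to time an operation (time.sleep stands in for the real work, such as the SQL call from the question):
import time

start = time.monotonic()
time.sleep(0.2)  # stand-in for someSQLOrSomething() from the question
elapsed = time.monotonic() - start
print("That took %.3f seconds" % elapsed)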
I want to measure the execution time of code written in various languages such as Java, Python, and JavaScript. How do I get the execution time of this code? Is there any tool available in the Python package ecosystem, or elsewhere, that calculates the execution time when given a file path (any file, Java or Python)? Please share your suggestions.
I am aware of measuring execution time with the time module inside Python code. How do I execute JavaScript and Java code from Python and get the execution time in one common function?
I tried the method below.
import time
def get_exectime(file_path):  # pass the path of any file: Python, Java, JavaScript, HTML, shell
    start_time = time.time()
    # execute the file given here. How to execute all file types here?
    end_time = time.time()
    exec_time = end_time - start_time
    print(exec_time)
Is there any other method available to achieve this?
You can do that using the time module:
import time
start_time = time.time()
# your code
end_time = time.time()
print("Total execution time: {} seconds".format(end_time - start_time))
Contrary to other answers, I suggest using timeit, which was designed with the very purpose of measuring execution times in mind, and can also be used as a standalone tool: https://docs.python.org/3/library/timeit.html
It will give you not only the real time of execution, but also CPU time used, which is not necessarily the same thing.
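As a standalone tool it can be invoked from the command line; the statement below is just an illustration:
# time a small statement with the default wall-clock timer
python -m timeit "'-'.join(str(n) for n in range(100))"

# the -p option switches to process (CPU) time instead of wall-clock time
python -m timeit -p "'-'.join(str(n) for n in range(100))"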
import time
start_time = time.time()
#code here
print("--- %s seconds ---" % (time.time() - start_time))
I think you need the time module. This is the simplest way to measure execution time in Python. Take a look at my example.
import time

start_time = time.time()
a = 1
for i in range(10000):
    a = a + 1
end_time = time.time()
total_time = end_time - start_time
print("Execution time in seconds: %s" % total_time)
Output:
Execution time in seconds: 0.0038547515869140625
First install the "humanfriendly" package by opening a Command Prompt (CMD) as administrator and typing:
pip install humanfriendly
Code:
from humanfriendly import format_timespan
import time
begin_time = time.time()
# Put your code here
end_time = time.time() - begin_time
print("Total execution time: ", format_timespan(end_time))
I am interested in measuring the time elapsed during a (synchronous) HTTP request and/or a (synchronous) request to a database on a remote server. After reading this page, my understanding is that time.clock() is an accurate measure of the processor time. But I don't know if "processor time" is relevant in my case, since the CPU would be idling while waiting for the response. In other words:
s0 = time.time()
# send a HTTP request
s1 = time.time()
t0 = time.clock()
# send a HTTP request
t1 = time.clock()
Which one actually measures what I want?
For measuring HTTP response time, I think time.time() is enough.
As others suggested, use timeit if you want to do benchmarking.
I personally haven't used time.clock() before, but after reading the example:
#!/usr/bin/python
import time
def procedure():
    time.sleep(2.5)
# measure process time
t0 = time.clock()
procedure()
print time.clock() - t0, "seconds process time"
# measure wall time
t0 = time.time()
procedure()
print time.time() - t0, "seconds wall time"
I don't think time.clock() is appropriate for measuring HTTP response time.
One approach is to use New Relic for Python. You just install it and enable it in your application. After that, you will be able to see response-time charts in your New Relic account. It has a free plan.
Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
start = time.clock()
... do something
elapsed = (time.clock() - start)
vs.
start = time.time()
... do something
elapsed = (time.time() - start)
As of 3.3, time.clock() is deprecated, and it's suggested to use time.process_time() or time.perf_counter() instead.
Previously in 2.7, according to the time module docs:
time.clock()
On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of “processor time”, depends on that of the C function
of the same name, but in any case, this is the function to use for
benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
Additionally, there is the timeit module for benchmarking code snippets.
The short answer is: most of the time time.clock() will be better.
However, if you're timing some hardware (for example, an algorithm you run on the GPU), then time.clock() will not include that time, and time.time() is the only solution left.
Note: whatever method you use, the timing will depend on factors you cannot control (when the process gets switched out, how often, ...). This is worse with time.time(), but it exists with time.clock() too, so you should never run a single timing test; always run a series of tests and look at the mean/variance of the times, as in the sketch below.
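For example, a minimal sketch using timeit.repeat to collect a series of timings and look at their spread (the timed statement is only an illustration):
import statistics
import timeit

# Repeat the measurement 10 times; each run executes the statement 10,000 times.
runs = timeit.repeat("sum(range(100))", repeat=10, number=10000)

print("min:  ", min(runs))
print("mean: ", statistics.mean(runs))
print("stdev:", statistics.stdev(runs))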
Others have answered re: time.time() vs. time.clock().
However, if you're timing the execution of a block of code for benchmarking/profiling purposes, you should take a look at the timeit module.
One thing to keep in mind:
Changing the system time affects time.time() but not time.clock().
I needed to control the execution of some automatic tests. If one step of the test case took more than a given amount of time, that TC was aborted so the run could go on with the next one.
But sometimes a step needed to change the system time (to check the scheduler module of the application under test), so after setting the system time a few hours into the future, the TC timeout expired and the test case was aborted. I had to switch from time.time() to time.clock() to handle this properly.
clock() -> floating point number
Return the CPU time or real time since the start of the process or since
the first call to clock(). This has as much precision as the system
records.
time() -> floating point number
Return the current time in seconds since the Epoch.
Fractions of a second may be present if the system clock provides them.
Usually time() is more precise, because operating systems do not store the process running time with the precision they store the system time (i.e., the actual wall-clock time).
It depends on what you care about. If you mean WALL TIME (as in, the time on the clock on your wall), time.clock() provides NO accuracy at all, because it may measure CPU time instead.
time() has better precision than clock() on Linux: clock() only has a resolution of about 10 ms, while time() gives much finer precision.
My test was on CentOS 6.4, Python 2.6.
using time():
1 requests, response time: 14.1749382019 ms
2 requests, response time: 8.01301002502 ms
3 requests, response time: 8.01491737366 ms
4 requests, response time: 8.41021537781 ms
5 requests, response time: 8.38804244995 ms
using clock():
1 requests, response time: 10.0 ms
2 requests, response time: 0.0 ms
3 requests, response time: 0.0 ms
4 requests, response time: 10.0 ms
5 requests, response time: 0.0 ms
6 requests, response time: 0.0 ms
7 requests, response time: 0.0 ms
8 requests, response time: 0.0 ms
As others have noted time.clock() is deprecated in favour of time.perf_counter() or time.process_time(), but Python 3.7 introduces nanosecond resolution timing with time.perf_counter_ns(), time.process_time_ns(), and time.time_ns(), along with 3 other functions.
These 6 new nanosecond-resolution functions are detailed in PEP 564:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time:int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the version without the _ns suffix, but
return a number of nanoseconds as a Python int.
As others have also noted, use the timeit module to time functions and small code snippets.
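A minimal sketch of the nanosecond variants (the printed values will of course vary from run to run):
import time

start = time.perf_counter_ns()
time.sleep(0.001)
elapsed_ns = time.perf_counter_ns() - start

print(elapsed_ns)        # elapsed time as an int, in nanoseconds
print(elapsed_ns / 1e9)  # the same interval expressed in seconds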
The difference is very platform-specific.
clock() is very different on Windows than on Linux, for example.
For the sort of examples you describe, you probably want the "timeit" module instead.
I used this code to compare the two methods. My OS is Windows 8, processor Core i5, RAM 4 GB.
import time

def t_time():
    start = time.time()
    time.sleep(0.1)
    return (time.time() - start)

def t_clock():
    start = time.clock()
    time.sleep(0.1)
    return (time.clock() - start)

counter_time = 0
counter_clock = 0

for i in range(1, 100):
    counter_time += t_time()

for i in range(1, 100):
    counter_clock += t_clock()

print "time() =", counter_time / 100
print "clock() =", counter_clock / 100
output:
time() = 0.0993799996376
clock() = 0.0993572257367
time.clock() was removed in Python 3.8 because it had platform-dependent behavior:
On Unix, return the current processor time as a floating point number expressed in seconds.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number
print(time.clock()); time.sleep(10); print(time.clock())
# Linux : 0.0382 0.0384 # see Processor Time
# Windows: 26.1224 36.1566 # see Wall-Clock Time
So which function to pick instead?
Processor Time: This is how long this specific process spends actively being executed on the CPU. Sleep, waiting for a web request, or time when only other processes are executed will not contribute to this.
Use time.process_time()
Wall-Clock Time: This refers to how much time has passed "on a clock hanging on the wall", i.e. real elapsed time in the outside world.
Use time.perf_counter() (see the small demonstration after this list)
time.time() also measures wall-clock time but can be reset, so you could go back in time
time.monotonic() cannot be reset (monotonic = only goes forward) but has lower precision than time.perf_counter()
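A minimal sketch illustrating the distinction: sleeping counts toward wall-clock time but not toward processor time (the exact numbers will vary):
import time

t_proc = time.process_time()
t_wall = time.perf_counter()

time.sleep(1)                      # the process is idle while sleeping
sum(i * i for i in range(10**6))   # this actually uses the CPU

print("process time:", time.process_time() - t_proc)  # roughly the CPU work only
print("wall time:   ", time.perf_counter() - t_wall)  # roughly the CPU work plus 1 s of sleep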
On Unix time.clock() measures the amount of CPU time that has been used by the current process, so it's no good for measuring elapsed time from some point in the past. On Windows it will measure wall-clock seconds elapsed since the first call to the function. On either system time.time() will return seconds passed since the epoch.
If you're writing code that's meant only for Windows, either will work (though you'll use the two differently - no subtraction is necessary for time.clock()). If this is going to run on a Unix system or you want code that is guaranteed to be portable, you will want to use time.time().
Short answer: use time.clock() for timing in Python.
On *nix systems, clock() returns the processor time as a floating point number, expressed in seconds. On Windows, it returns the seconds elapsed since the first call to this function, as a floating point number.
time() returns the seconds since the epoch, in UTC, as a floating point number. There is no guarantee that you will get better precision than 1 second (even though time() returns a floating point number). Also note that if the system clock has been set back between two calls to this function, the second call will return a lower value.
To the best of my understanding, time.clock() has as much precision as your system will allow it.
Right answer: they both return fractions of the same length (the same precision).
But which one is faster, if the subject is time?
A little test case:
import timeit
import time
clock_list = []
time_list = []
test1 = """
def test(v=time.clock()):
s = time.clock() - v
"""
test2 = """
def test(v=time.time()):
s = time.time() - v
"""
def test_it(Range) :
for i in range(Range) :
clk = timeit.timeit(test1, number=10000)
clock_list.append(clk)
tml = timeit.timeit(test2, number=10000)
time_list.append(tml)
test_it(100)
print "Clock Min: %f Max: %f Average: %f" %(min(clock_list), max(clock_list), sum(clock_list)/float(len(clock_list)))
print "Time Min: %f Max: %f Average: %f" %(min(time_list), max(time_list), sum(time_list)/float(len(time_list)))
I don't work at a Swiss lab, but I've tested it..
Based on this test: time.clock() is better than time.time()
Edit: time.clock() is an internal counter, so it can't be used outside the process; it is limited to a 32-bit float, and it can't continue counting unless you store the first/last values. It also can't be merged with another counter...
Comparing test result between Ubuntu Linux and Windows 7.
On Ubuntu
>>> start = time.time(); time.sleep(0.5); (time.time() - start)
0.5005500316619873
On Windows 7
>>> start = time.time(); time.sleep(0.5); (time.time() - start)
0.5