I'd like to test the speed of a bash script and a Python script. How would I get the time it took to run them?
If you're on Linux (or another UN*X), try time:
The time command runs the specified program command with the given arguments. When command finishes, time writes a message to standard error giving timing statistics about this program run. These statistics consist of (i) the elapsed real time between invocation and termination, (ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a struct tms as returned by times(2)), and (iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)).
Note that you need to eliminate external effects; for example, other processes using the same resources can skew the measurement.
I guess that you can use
time ./script.sh
time python script.py
At the beginning of each script output the start time, and at the end of each script output the end time. Subtract the times and compare. Or use the time command if it is available, as others have answered.
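For the Python script, a minimal sketch of that start/end approach (assuming Python 3.3+ for time.perf_counter(); time.time() works on older versions):

import time

start = time.perf_counter()   # record the start time

# ... the actual work of the script goes here ...

elapsed = time.perf_counter() - start
print('elapsed: %.3f seconds' % elapsed)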
Related
I am executing a simple Python (v2.7.17) script which finds the square roots of the numbers between 1 and 1000000. It does this 1000000 times for a single execution. This is then repeated 100 times. The output is the time taken to execute each cycle.
When I execute this script in a Linux shell, each execution time is printed one after the other. They vary, but the average across the total 100 executions is 0.126154s.
When I run the exact same script within a Docker container, there is no output until the end of all 100 executions, at which point the output for all 100 is displayed at once. The execution times are quicker than native: the average of 100 Docker executions is 0.095896s.
When I apply various stresses to the system while executing the script, both natively and in Docker, the average execution times differ greatly. When I stress the CPU, I get these averages across 100 executions:
native average 0.506660s
docker average 0.190208s
I am curious as to why my Python script runs quicker in a container. Any thoughts would be greatly appreciated. The Python code is:
import timeit

mycode = """
def example():
    mylist = []
    for x in range(1000000):
        mylist.append(sqrt(x))
"""

mysetup = "from math import sqrt"

print timeit.timeit(setup=mysetup, stmt=mycode, number=1000000)
I did more digging and found out why I had better execution times whilst running the script in a container.
When I start the script in a container on my system (4 cores), it looks like a whole core, or a percentage of all four, is dedicated or reserved to running that container; the rest of the system's running processes are then divided up among whatever CPU availability is left.
When running the script natively, the script has to compete with everything else running on the system. So when I applied the stress tests (stress-ng) to the CPU, each stress test was a new process, and the available processor time was divided into equal amounts for each stress process. The more stresses I applied to the system, the slower the script executed; when executing in a container this did not apply, because a large chunk of processor time was available to the container the whole time.
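One way to sanity-check this theory (a sketch, assuming Python 3.3+ on Linux, where os.sched_getaffinity() is available) is to compare what the process can see natively versus inside the container:

import os
import multiprocessing

# run this both natively and inside the container and compare the results
print('cpu_count:', multiprocessing.cpu_count())   # cores the machine reports
print('affinity:', os.sched_getaffinity(0))        # cores this process may run on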
More out of curiosity, I was wondering how might I make a python script sleep for 1 second without using the time module?
Is there a computation that can be conducted in a while loop which takes a machine of n processing power a designated and indexable amount of time?
As mentioned in the comments, for the second part of your question:
The processing time depends on the machine (computer and its configuration) you are working with and the active processes on it. There isn't a fixed amount of time for an operation.
It's been a long time since you could get a reliable delay out of just trying to execute code that would take a certain time to complete. Computers don't work like that any more.
But to answer your first question: you can use a system call and open an OS process to sleep for 1 second, like:
import subprocess

# blocks for ~1 second by running the external `sleep` command
subprocess.run(["sleep", "1"])
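Alternatively, a sketch of a pure-stdlib approach that also avoids the time module: threading.Event().wait() takes a timeout in seconds, and an event that is never set simply times out:

import threading

# the event is never set, so wait() returns after the ~1 second timeout
threading.Event().wait(1)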
I'm trying to profile a script to see why it's taking so long and I'm wondering if I'm not seeing what's taking the most time.
The call (python -m profile scpt.py) takes 27671 seconds to run according to my own timing of the script, but when I sum the tottime column of the output, I get 13410.423 seconds. That's a little shy of half the total runtime.
Can I rest assured that all that can be optimized is what's reported and that I'm not missing anything significant? Where is the rest of the time taken up? Is it the profiler code which is doubling the actual time it takes to run the script without the profiler? If not, is there a way to obtain running time stats that I'm missing?
The missing time is time when the program was blocked on IO.
The profile module only measures CPU time. It is not an IO profiler.
This is the difference between wall time and CPU time.
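A quick way to see the difference (a minimal sketch, assuming Python 3.3+; time.sleep() stands in for blocking IO):

import time

wall_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()    # CPU-time timer, closer to what profile counts

time.sleep(2)                      # blocked, consuming essentially no CPU

print('wall: %.2fs' % (time.perf_counter() - wall_start))   # roughly 2.00
print('cpu:  %.2fs' % (time.process_time() - cpu_start))    # roughly 0.00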
I have spent a long time profiling some of my Django code (trying to find the root cause of a performance issue), which converts a queryset to a list of HTML tags.
The queryset in question has around 8000 records, but profiling has shown that the database/SQL is not the issue, as the actual query executes in around 1 millisecond.
After many hours of trying to find the problem I accidentally stumbled upon the fact that running the specific code via Apache/WSGI seems to play a big role in the performance problem.
To narrow the issue down to a single line of Python code, I have wrapped it in a "clock" to measure its performance, like so:
import time

start_time = time.clock()
records = list(query_set)  # <--- THIS LINE IS THE CULPRIT
total_time = time.clock() - start_time

with open('some_path/debug.txt', 'a') as f:
    f.write('%f\n' % total_time)
Ignoring the fact that I am converting an 8000+ record queryset to a Python list (I have reasons for doing so), the part I want to focus on is the timing differences:
Apache/WSGI: 1.184240000000000403446165365 (seconds)
From a shell: 0.6034849999999999381472548521 (seconds)
The code in question lives in a function that I can conveniently call from a Django management shell and run manually, which allows me to get the second timing metric.
So can someone please tell me what the heck is causing that doubling in execution time of that single line? Obviously a fair bit of built-in Django code is invoked by that line, but how can the way the code is executed make such a massive difference?
How can I get the same performance in Apache/WSGI as I get in the shell?
Note that I am running the Django application in daemon mode (not embedded mode) and the rest of my application is not suffering from performance issues. I just never thought the difference between shell and Apache/WSGI execution would cause a performance difference for this particular code, let alone for any code.
Update 1
I forgot to mention that I tried running the same code in nginx/uWSGI and the same problem occurred, so it doesn't seem to be Apache that's at fault. Possibly the way WSGI works itself?
Update 2
So the plot thickens...
That list(query_set) line of code is not multi-threaded by me, but if I alter my implementation slightly and do the actual iteration of the queryset in multi-threaded code:
def threaded_func(query_set_slice):  # Gets invoked by multiple threads created by me
    start_time = time.clock()
    for record in query_set_slice:
        ... do stuff ...
    total_time = time.clock() - start_time
    with open('some_path/debug.txt', 'a') as f:
        f.write('%f\n' % total_time)
... there is still an execution time difference (between shell and Apache/WSGI), but this time the ratio is reversed.
I.e. The numbers are:
Apache/WSGI: 0.824394 (seconds)
Shell: 1.890437 (seconds)
Funnily enough, as the ratio of the threaded function is the reverse of the previous ratio, the execution time of the function that invokes the threaded function is the same for the two types of invocation (Apache/WSGI and Shell).
Update 3
After doing some research on time.clock() it seems that it can't be trusted all the time, so I tried time.time() and that produced roughly a 1:1 ratio.
I also tried LineProfiler and that gave a 1:1 ratio, so my guess is that time.clock() is not taking into account context switches and things like that.
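That matches how the timers are defined: on Unix, time.clock() returned CPU time, while time.time() returns wall-clock time, so time.clock() never counts the time the process spends switched out or blocked. time.clock() was eventually removed in Python 3.8; a sketch of the same wrapper using its replacement (query_set stands in for the queryset from the question):

import time

start_time = time.perf_counter()   # monotonic wall-clock timer
records = list(query_set)          # query_set as in the snippet above
total_time = time.perf_counter() - start_time
# use time.process_time() instead if CPU time is what you want to measure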
I am trying to execute a certain number of Python scripts at certain intervals. Each script takes a lot of time to execute, and hence I do not want to waste time waiting to run them sequentially. I tried this code, but it is not executing them simultaneously; it is executing them one by one:
Main_file.py
import time

def func(argument):
    print 'Starting the execution for argument:', argument
    execfile('test_' + argument + '.py')

if __name__ == '__main__':
    arg = ['01', '02', '03', '04', '05']
    for val in arg:
        func(val)
        time.sleep(60)
What I want is to kick off the execution of the first file (test_01.py). This will keep executing for some time. After 1 minute has passed, I want to start the simultaneous execution of the second file (test_02.py). This will also keep executing for some time. Like this, I want to start the execution of all the scripts after gaps of 1 minute.
With the above code, I notice that the execution happens one file after another and not simultaneously, as the print statements in these files appear one after the other and not mixed up.
How can I achieve above needed functionality?
Using Python 2.7 on my computer, the following code seems to work for threading small Python scripts such as test_01.py, test_02.py, etc.:
import time
import thread

def func(argument):
    print 'Starting the execution for argument:', argument
    execfile('test_' + argument + '.py')

if __name__ == '__main__':
    arg = ['01', '02', '03']
    for val in arg:
        thread.start_new_thread(func, (val,))
        time.sleep(10)
However, you indicated that you kept getting a memory exception error. This is likely due to your scripts using more stack memory than was allocated to them, as each thread gets a fixed-size stack by default (commonly 8 MB on Linux). You could attempt to give them more memory by calling
thread.stack_size([size])
which is outlined here: https://docs.python.org/2/library/thread.html
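For example (a sketch; the size must be set before any threads are started, and most platforms require at least 32 KiB):

import thread

thread.stack_size(4 * 1024 * 1024)   # request a 4 MiB stack for new threads
# ... then create threads with thread.start_new_thread(...) as above ...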
Without knowing the number of threads that you're attempting to create or how memory-intensive they are, it's difficult to say whether a better solution should be sought. Since you seem to be looking into executing multiple scripts essentially independently of one another (no shared data), you could also look into the Multiprocessing module (see the sketch after the link):
https://docs.python.org/2/library/multiprocessing.html
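A minimal sketch of the same staggered launch with independent processes (assuming each script can be run as python test_NN.py; subprocess sidesteps both execfile and the per-thread stack limit):

import subprocess
import time

if __name__ == '__main__':
    procs = []
    for val in ['01', '02', '03', '04', '05']:
        # each script runs in its own OS process and does not block this loop
        procs.append(subprocess.Popen(['python', 'test_%s.py' % val]))
        time.sleep(60)   # stagger the launches by one minute
    for p in procs:
        p.wait()         # wait for all of the scripts to finish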
If you need them to run in parallel, you will need to look into threading. Take a look at https://docs.python.org/3/library/threading.html or https://docs.python.org/2/library/threading.html, depending on the version of Python you are using.