I want to generate a square clock waveform for an external device.
I am using Python 2.7 on Windows 7 32-bit, on an old PC with an LPT1 parallel port.
The code is simple:
import parallel
import time

p = parallel.Parallel()    # open LPT1
x = 0
while x == 0:
    p.setData(0xFF)
    time.sleep(0.0005)
    p.setData(0x00)
I do see the square wave on a scope, but not with the expected time period.
I will be grateful for any help.
It gives the expected performance for a while... I will continue to reduce the times:
import parallel
import time

x = 0
while x < 2000:
    p = parallel.Parallel()    # open LPT1
    time.sleep(0.01)
    p.setData(0xFF)
    p = parallel.Parallel()    # open LPT1
    time.sleep(0.01)
    p.setData(0x00)
    x = x + 1
Generating signals like that is hard. One reason is that a sleeping process only gets scheduled again some time after the sleep interval has been exceeded, not at the exact moment it expires.
I found this post about sleep precision, whose accepted answer is great:
How accurate is python's time.sleep()?
Another source of information: http://www.pythoncentral.io/pythons-time-sleep-pause-wait-sleep-stop-your-code/
What that information tells you is that Windows can only sleep for a minimum of roughly 10 ms, while on Linux the minimum is approximately 1 ms, although it may vary.
Update
I made a function that makes it possible to sleep for less than 10 ms, but the precision is very sketchy.
In the attached code I included a test that shows how the precision behaves. If you want higher precision, I strongly recommend you read the links I attached in my original answer.
from time import time, sleep
import timeit


def timer_sleep(duration):
    """timer_sleep() sleeps for a given duration in seconds."""
    stop_time = time() + duration
    while (time() - stop_time) < 0:
        # Throw in something that takes a little time to process.
        # According to measurements from the comments, handling this
        # takes approximately 2 microseconds.
        sleep(0)


if __name__ == "__main__":
    for u_time in range(1, 100):
        u_constant = 1000000.0
        duration = u_time / u_constant
        result = timeit.timeit(stmt='timer_sleep({time})'.format(time=duration),
                               setup="from __main__ import timer_sleep",
                               number=1)
        print('===== RUN # {nr} ====='.format(nr=u_time))
        print('Returns after \t{time:.10f} seconds'.format(time=result))
        print('It should take\t{time:.10f} seconds'.format(time=duration))
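Applied to the loop from the question, it could look something like the sketch below. parallel.Parallel() and setData() are taken from the question's code; the half period of 0.5 ms is just an illustration, and the jitter will still depend entirely on the OS scheduler:

import parallel
from time import time, sleep

def timer_sleep(duration):
    """Busy-wait version of sleep from above; usable below ~10 ms at the cost of CPU."""
    stop_time = time() + duration
    while (time() - stop_time) < 0:
        sleep(0)

p = parallel.Parallel()      # open LPT1
half_period = 0.0005         # 0.5 ms high + 0.5 ms low -> roughly 1 kHz

while True:
    p.setData(0xFF)          # drive the data pins high
    timer_sleep(half_period)
    p.setData(0x00)          # drive the data pins low
    timer_sleep(half_period)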
Happy hacking
Related
We are a team of bachelor students currently working on building a legged robot. At the moment our interface to the robot is written in Python, using an SDK for the master board we are using.
In order to communicate with the master board SDK, we need to send a command every millisecond.
To allow us to send commands periodically, we have applied the rt-preempt patch to our Linux kernel (Ubuntu 20.04 LTS, kernel 5.10.27-rt36).
We are very new to writing real time applications, and have run into some issues where our task sometimes will have a much smaller time step than specified. In the figure below we have plotted the time of each cycle of the while loop where the command is being sent to the sdk. (x axis is time in seconds and y axis is the elapsed time of an iteration, also in seconds)
As seen in the plot, one step is much smaller than the rest. This seems to happen at the same exact time mark every time we run the script.
[Figure: cyclic_task_plot — elapsed time per loop iteration vs. time]
We set the priority of the entire script using:
pid = os.getpid()
sched = os.SCHED_FIFO
param = os.sched_param(98)
os.sched_setscheduler(pid, sched, param)
Our cyclic task looks like this:
dt is set to 0.001
while _running:
    if direction:
        q = q + 0.0025
        if (q > np.pi/2).any():
            direction = False
    else:
        q = q - 0.0025
        if (q < -np.pi/2).any():
            direction = True

    master_board.track_reference(q, q_prime)

    # Terminate if duration has passed
    if time.perf_counter() - program_start > duration:
        _running = False

    cycle_time = time.perf_counter() - cycle_start
    time.sleep(dt - cycle_time)
    cycle_start = time.perf_counter()

    timestep_end = time.perf_counter()
    time_per_timestep_array.append(timestep_end - timestep_start)
    timestep_start = time.perf_counter()
We suspect the issue has to do with the way we define the sleep amount. cycle_time is meant to be the time that the calculations above time.sleep() take, so that sleep time + cycle time = 1 ms. However, we are not sure how to do this properly, and we are struggling to find resources on the subject.
How should one properly define a task such as this for a real time application?
We have quite loose requirements (several milliseconds), but it is very important to us that it is deterministic, as this is part of our thesis and we need to understand what is going on.
Any answers to our question or relevant resources are greatly appreciated.
Link to the full code: https://drive.google.com/drive/folders/12KE0EBaLc2rkTZK2FuX_goMF4MgWtknS?usp=sharing
timestep_end = time.perf_counter()
time_per_timestep_array.append(timestep_end - timestep_start)
timestep_start = time.perf_counter()
You're recording the time between timestep_start from the previous cycle and timestep_end from the current cycle. This interval does not accurately represent the cycle time step (even if we assume that no task preemption takes place); it excludes the time consumed by the array append function. Since the outlier seems to happen at the same exact time mark every time we run the script, we could suspect that at this point the array exceeds a certain size where an expensive memory reallocation has to take place. Regardless of the real reason, you should remove such timing inaccuracies by recording the time between cycle starts:
timestep_end = cycle_start
time_per_timestep_array.append(timestep_end - timestep_start)
timestep_start = cycle_start
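Put back into the question's loop, the whole thing could look roughly like the sketch below. dt, duration and the control code are placeholders taken from the question; the max(0.0, ...) guard against a negative sleep argument is my addition, not part of the original code:

import time

dt = 0.001                      # desired cycle period, as in the question
duration = 1.0                  # run for one second in this sketch
time_per_timestep_array = []

program_start = time.perf_counter()
cycle_start = time.perf_counter()
timestep_start = cycle_start
_running = True

while _running:
    # ... control calculations and master_board.track_reference(q, q_prime) go here ...

    if time.perf_counter() - program_start > duration:
        _running = False

    cycle_time = time.perf_counter() - cycle_start
    time.sleep(max(0.0, dt - cycle_time))   # never pass a negative value to sleep
    cycle_start = time.perf_counter()

    # The recorded step is now exactly the distance between consecutive cycle
    # starts, so the append call no longer pollutes the measurement.
    time_per_timestep_array.append(cycle_start - timestep_start)
    timestep_start = cycle_start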
What I am trying to do is have a bit of code check the time and, at a given time, do something. The part I am working on is small, but I want it to run as efficiently as possible because the program will be running for long stretches once it is finished. I've noticed in Task Manager that when I run a file containing only the code shown below, my CPU usage is over 15% with an i7 7700 CPU. Is there any way to make this code more efficient?
import datetime
import webbrowser

# loop to run until desired time
while True:
    # checks current time to see if it is the desired time
    if str(datetime.datetime.now().time()) == "11:00:00":
        # opens a link when it's the desired time
        webbrowser.open('https://www.youtube.com/watch?v=q05NxtGgNp4')
        break
If your program can remain idle until it calls the browser, you can use sleep for the time difference between now and 11:00:00:
import datetime
import time
import webbrowser

def find_time_between_now_and_11():
    """Returns the number of seconds between now and the next 11:00:00."""
    now = datetime.datetime.now()
    target = now.replace(hour=11, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)   # 11:00 already passed today
    return (target - now).total_seconds()

lag = find_time_between_now_and_11()
time.sleep(lag)

# opens the link when it's the desired time
webbrowser.open('https://www.youtube.com/watch?v=q05NxtGgNp4')
15% most likely means you have one core at 100%, because you're continuously looping. You can sleep() for a second or more so the CPU is not busy looping, and you need to add a fuzzy comparison for:
str(datetime.datetime.now().time()) == "11:00:00"
I'd go for something like:
import datetime
import time
import webbrowser

def run_task(alarm):
    """alarm is a datetime.time, e.g. datetime.time(11, 0, 0)."""
    last_run = None
    while True:
        now = datetime.datetime.now()
        # Trigger once per day, the first time we pass the alarm time
        if now.time() >= alarm and last_run != now.date():
            last_run = now.date()
            # Do whatever you need
            webbrowser.open('https://www.youtube.com/watch?v=q05NxtGgNp4')
        time.sleep(10)  # sleep 10 seconds so the loop is not busy
It's a bit convoluted, but you can extend it to support multiple alarm times and change the if logic to suit your needs, as sketched below.
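For instance, a sketch of the multiple-alarm variant might look like this (the run_tasks name and the (time, url) pair format are mine, purely for illustration):

import datetime
import time
import webbrowser

def run_tasks(alarms):
    """alarms: list of (datetime.time, url) pairs; each one fires once per day."""
    last_run = {}                       # alarm time -> date on which it last fired
    while True:
        now = datetime.datetime.now()
        for alarm_time, url in alarms:
            if now.time() >= alarm_time and last_run.get(alarm_time) != now.date():
                last_run[alarm_time] = now.date()
                webbrowser.open(url)
        time.sleep(10)                  # coarse polling keeps CPU usage negligible

# Example usage:
# run_tasks([(datetime.time(11, 0, 0), 'https://www.youtube.com/watch?v=q05NxtGgNp4')])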
The problem is that when I run my script, it takes longer than the expected 1 second before it says the next command. I think this has something to do with the speech command. What can I do to optimize this?
Edit: link to the speech module: https://pypi.python.org/pypi/speech/0.5.2
Edit 2: per request, I measured the sleep time alone using datetime:
2016-06-29 18:39:42.953000
2016-06-29 18:39:43.954000
I found that it was pretty accurate.
Edit 3: I tried the built-in win32com.client module and it didn't work either.
import speech
import time
import os

def exercise1():
    speech.say("exercise1")
    time.sleep(0.5)
    for n in range(0, rep * 2):   # rep is defined elsewhere in the script
        speech.say("1")
        time.sleep(1)
        speech.say("2")
        time.sleep(1)
        speech.say("3")
        time.sleep(1)
        speech.say("switch")
Refer to the post here: How accurate is python's time.sleep()?
It says:
"The accuracy of the time.sleep function depends on the accuracy of your underlying OS's sleep accuracy. For non-realtime OS's like a stock Windows the smallest interval you can sleep for is about 10-13ms. I have seen accurate sleeps within several milliseconds of that time when above the minimum 10-13ms."
As you say in the comments, sleep(1) is fairly accurately 1 second.
What you want to do to make each part take 1 second is to time the "say" call and then wait the remaining time to fill out the second. Something like this:
import time

start = time.time()
speech.say("whatever")
end = time.time()
# Wait however long brings the total up to 1 second;
# don't pass a negative value if say() took longer than 1 second
time.sleep(max(0, 1 - (end - start)))
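As a sketch of how that could slot into the exercise loop (the say_for helper is my own name, not part of the speech module; speech.say is the call from the question):

import time
import speech   # https://pypi.python.org/pypi/speech/0.5.2

def say_for(text, total=1.0):
    """Say `text`, then sleep whatever is left so the whole call takes ~`total` seconds."""
    start = time.time()
    speech.say(text)
    remaining = total - (time.time() - start)
    if remaining > 0:
        time.sleep(remaining)

def exercise1(rep):
    say_for("exercise1", total=0.5)
    for n in range(rep * 2):
        say_for("1")
        say_for("2")
        say_for("3")
        say_for("switch")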
I am getting a timestamp every time a key is pressed like this:
init_timestamp = time.time()
while True:
    c = getch()
    offset = time.time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)
(getch from this answer).
I am verifying the timestamps against an audio recording of me actually typing the keys. After lining the first timestamp up with the waveform, subsequent timestamps drift slightly but consistently. By this I mean that the saved timestamps are later than the keypress waveforms and get later and later as time goes on.
I am reasonably sure the waveform timing is correct (i.e. the recording is not fast or slow), because in the recording I also included the ticking of a very accurate clock which lines up perfectly with the second markers.
I am aware that there are unavoidable limits to the accuracy of time.time(), but this does not seem to account for what I'm seeing - if it was equally wrong on both sides that would be acceptable, but I do not want it to gradually diverge more and more from the truth.
Why would I be seeing this drifting behaviour and what can I do to avoid it?
Just solved this by using time.monotonic() instead of time.time(). time.time() seems to use gettimeofday (at least here it does) which is apparently really bad for measuring walltime differences because of NTP syncing issues:
gettimeofday() and time() should only be used to get the current time if the current wall-clock time is actually what you want. They should never be used to measure time or schedule an event X time into the future.
You usually aren't running NTP on your wristwatch, so it probably won't jump a second or two (or 15 minutes) in a random direction because it happened to sync up against a proper clock at that point. Good NTP implementations try to not make the time jump like this. They instead make the clock go faster or slower so that it will drift to the correct time. But while it's drifting you either have a clock that's going too fast or too slow. It's not measuring the passage of time properly.
(link). So basically measuring differences between time.time() calls is a bad idea.
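For example, the loop from the question only needs the clock source swapped (getch and the file object f are assumed to be set up as in the question):

import time

init_timestamp = time.monotonic()    # monotonic clock: unaffected by NTP adjustments
while True:
    c = getch()
    offset = time.monotonic() - init_timestamp
    print("%s,%s" % (c, offset), file=f)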
Depending on which OS you are using, you will need to use either time.time() or time.clock().
For Windows you will need to use time.clock; this gives you wall-clock seconds as a float. If I remember correctly, time.time() on Windows is only accurate to within about 16 ms.
For POSIX systems (Linux, OS X) you should be using time.time(), which returns the number of seconds since the epoch as a float.
Add the following to your code to make your application a little more cross-platform compatible:
import os

if os.name == 'posix':
    from time import time as get_time
else:
    from time import clock as get_time

# now use get_time() to return the timestamp
init_timestamp = get_time()
while True:
    c = getch()
    offset = get_time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)
    ...
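As a side note, on Python 3.3+ the platform split is not needed anymore: time.perf_counter() is a high-resolution clock on every platform (and time.clock() was removed in Python 3.8), so the same idea reduces to something like:

from time import perf_counter as get_time

# use get_time() exactly as above; it works on both Windows and POSIX
init_timestamp = get_time()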
I need to measure the time certain parts of my program take (not for debugging but as a feature in the output). Accuracy is important because the total time will be a fraction of a second.
I was going to use the time module when I came across timeit, which claims to avoid a number of common traps for measuring execution times. Unfortunately it has an awful interface, taking a string as input which it then eval's.
So, do I need to use this module to measure time accurately, or will time suffice? And what are the pitfalls it refers to?
Thanks
According to the Python documentation, it has to do with the accuracy of the time function in different operating systems:
The default timer function is platform dependent. On Windows, time.clock() has microsecond granularity but time.time()'s granularity is 1/60th of a second; on Unix, time.clock() has 1/100th of a second granularity and time.time() is much more precise. On either platform, the default timer functions measure wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing ... On Unix, you can use time.clock() to measure CPU time.
To pull directly from timeit.py's code:
if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time
In addition, it deals directly with setting up the runtime code for you; if you use time, you have to do that yourself. This, of course, saves you time.
Timeit's setup:
def inner(_it, _timer):
    # Your setup code
    %(setup)s
    _t0 = _timer()
    for _i in _it:
        # The code you want to time
        %(stmt)s
    _t1 = _timer()
    return _t1 - _t0
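To see how the stmt and setup strings map into that template, a typical call looks like this (setup is executed once, stmt runs number times between _t0 and _t1); note that since Python 3.3, timeit.default_timer is simply time.perf_counter on every platform:

import timeit

# setup is executed once; stmt is executed `number` times inside the timed loop
elapsed = timeit.timeit(stmt='x * x', setup='x = 5', number=1000000)
print(elapsed)   # total seconds for all one million executions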
Python 3:
Since Python 3.3 you can use time.perf_counter() (system-wide timing) or time.process_time() (process-wide timing), just the way you used to use time.clock():
from time import process_time
t = process_time()
#do some stuff
elapsed_time = process_time() - t
The new function process_time will not include time elapsed during sleep.
Python 3.7+:
Since Python 3.7 you can also use process_time_ns(), which is similar to process_time() but returns the time in nanoseconds.
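A quick way to see the difference between the two clocks, if it helps: time spent sleeping counts toward perf_counter() but not toward process_time():

from time import perf_counter, process_time, sleep

wall_start = perf_counter()
cpu_start = process_time()

sleep(1)                                   # idle time: consumes no CPU
total = sum(i * i for i in range(10**6))   # some actual CPU work

print('wall clock:', perf_counter() - wall_start)   # roughly 1 second plus the computation
print('CPU time  :', process_time() - cpu_start)    # only the computation shows up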
You could build a timing context (see PEP 343) to measure blocks of code pretty easily.
from __future__ import with_statement
import time

class Timer(object):
    def __enter__(self):
        self.__start = time.time()

    def __exit__(self, type, value, traceback):
        # Error handling here
        self.__finish = time.time()

    def duration_in_seconds(self):
        return self.__finish - self.__start

timer = Timer()
with timer:
    # Whatever you want to measure goes here
    time.sleep(2)

print timer.duration_in_seconds()
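One small variation (my own, not part of the answer above): returning self from __enter__ lets you create and bind the timer inside the with statement itself:

import time

class Timer(object):
    def __enter__(self):
        self.__start = time.time()
        return self                  # enables: with Timer() as timer:

    def __exit__(self, type, value, traceback):
        self.__finish = time.time()

    def duration_in_seconds(self):
        return self.__finish - self.__start

with Timer() as timer:
    time.sleep(2)
print(timer.duration_in_seconds())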
The timeit module looks like it's designed for doing performance testing of algorithms, rather than as simple monitoring of an application. Your best option is probably to use the time module, call time.time() at the beginning and end of the segment you're interested in, and subtract the two numbers. Be aware that the number you get may have many more decimal places than the actual resolution of the system timer.
I was annoyed by the awful interface of timeit too, so I made a library for this; check it out, it's trivial to use:
from pythonbenchmark import compare, measure
import time

a, b, c, d, e = 10, 10, 10, 10, 10
something = [a, b, c, d, e]

def myFunction(something):
    time.sleep(0.4)

def myOptimizedFunction(something):
    time.sleep(0.2)

# comparing test with input
compare(myFunction, myOptimizedFunction, 10, something)
# without input
compare(myFunction, myOptimizedFunction, 100)
https://github.com/Karlheinzniebuhr/pythonbenchmark
Have you reviewed the functionality provided by profile or cProfile?
http://docs.python.org/library/profile.html
This provides much more detailed information than just printing the time before and after a function call. Maybe worth a look...
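For instance, a minimal (illustrative) run looks like this and prints per-function call counts plus total and cumulative times:

import cProfile
import time

def slow_function():
    time.sleep(0.5)
    return sum(i * i for i in range(100000))

# Profiles the call and prints a table of ncalls / tottime / cumtime per function
cProfile.run('slow_function()')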
The documentation also mentions that time.clock() and time.time() have different resolution depending on platform. On Unix, time.clock() measures CPU time as opposed to wall clock time.
timeit also disables garbage collection when running the tests, which is probably not what you want for production code.
I find that time.time() suffices for most purposes.
From Python 2.6 on, timeit is no longer limited to a string as input. Citing the documentation:
Changed in version 2.6: The stmt and setup parameters can now also take objects that are callable without arguments. This will embed calls to them in a timer function that will then be executed by timeit(). Note that the timing overhead is a little larger in this case because of the extra function calls.
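For example, with Python 2.6+ (or any Python 3) you can pass callables directly, which avoids the string-eval interface entirely:

import timeit

def work():
    return sum(range(1000))

# No source string needed: timeit wraps the callable for you
print(timeit.timeit(work, number=10000))

# A lambda is handy when the function needs arguments
print(timeit.timeit(lambda: sum(range(2000)), number=10000))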