For example, if I do time.sleep(100) and immediately hibernate my computer for 99 seconds, will the next statement be executed in 1 second or 100 seconds after waking up?
If the answer is 1 second, how do you "sleep" 100 seconds, regardless of the length of hibernate/standby?
time.sleep(N) attempts to sleep at least N seconds of elapsed, AKA "wall-clock", time. Of course there can be no guarantee that the sleep will last exactly N seconds: the thread becomes ready to execute again at that time, but it cannot necessarily preempt whatever other thread is executing then -- that's the operating system's decision to make, not any programming language's. On the other hand, the sleep may end prematurely when it is interrupted by various kinds of events (such as signals).
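A quick way to see the "at least N seconds" behaviour for yourself (the exact overshoot is of course machine- and load-dependent):

import time

t0 = time.monotonic()
time.sleep(1)
print(f"asked for 1 s, actually slept {time.monotonic() - t0:.4f} s")  # typically a bit over 1 s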
If you can find on your operating system some clock-like thingy that only advances when the system's state is the one you care about (e.g. "not hibernated", in your case), then of course you can go back to sleep if you wake up "too early".
For example, on Windows 7, QueryUnbiasedInterruptTime is specifically documented to "not include time the system spends in sleep or hibernation" and to use units of 100 nanoseconds. So if you call that, e.g. through ctypes, you can achieve the effect you want:
import ctypes, time

def unbiased_now():
    t = ctypes.c_ulonglong()  # the result comes back via an out parameter, in 100 ns units
    ctypes.windll.kernel32.QueryUnbiasedInterruptTime(ctypes.byref(t))
    return t.value

def unbiasedsleep(n):
    target = unbiased_now() + n * 10 * 1000 * 1000
    while True:
        timeleft = target - unbiased_now()
        if timeleft <= 0:
            break  # the unbiased clock has reached the target
        time.sleep(timeleft / (10 * 1000 * 1000.0))
I don't know how to get the equivalent of QueryUnbiasedInterruptTime on other releases of Windows or other operating systems. But then, you don't tell us what operating system(s) you're interested in, so it would be pretty pointless to present a long laundry list of approaches that may work similarly in different environments.
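That said, as a hedged sketch for Linux specifically: CLOCK_MONOTONIC there has traditionally excluded time spent suspended (CLOCK_BOOTTIME is the variant that includes it), so the same re-sleeping idea can be expressed as:

import time

def unbiasedsleep_linux(n):
    # assumes Linux semantics: CLOCK_MONOTONIC does not advance during suspend
    deadline = time.clock_gettime(time.CLOCK_MONOTONIC) + n
    while True:
        remaining = deadline - time.clock_gettime(time.CLOCK_MONOTONIC)
        if remaining <= 0:
            break
        time.sleep(remaining)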
I don't know exactly what you are trying to achieve, but
for i in range(100): time.sleep(1)

might work, as hibernation would only consume up to 1 second's worth of the sleep.
Clearly, you must sleep according to real, elapsed time.
The alternative (sleeping according to some other clock that "somehow" started and stopped) would be unmanageable. How would your application (which is sleeping) be notified of all this starting and stopping activity? Right, it would have to be woken up to be told that it was not supposed to run because the system was hibernating.
Or, perhaps, some super-sophisticated OS-level scheduler could be used to determine whether time the system spent "busy" vs. "hibernating" counted against the schedules of various sleeping processes.
All too complex.
Indeed, if you check carefully, sleep is pretty approximate and any Unix signal can interrupt it. So it's possible to wake early for lots of reasons, Control-C being the big example.
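A minimal sketch of a signal cutting a sleep short; note that since Python 3.5 (PEP 475) the interpreter automatically restarts an interrupted sleep when the handler returns normally, so the handler must raise for the interruption to be visible:

import signal
import time

def handler(signum, frame):
    raise InterruptedError  # raising aborts the sleep; returning would resume it

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # ask the kernel to deliver SIGALRM in 1 second
try:
    time.sleep(10)
except InterruptedError:
    print("sleep ended after ~1 s, not 10")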
Related
I want to measure the time delay of a signal. To do that, the signal is played on a speaker and the delay until it gets captured by a microphone is estimated. The delay is expected to be in the range of milliseconds, so it is crucial to start the speaker signal and the measurement at exactly the same time.
My question is if that can be achieved by using threads:
import threading

def play_sound():
    # play sound
    pass

def record():
    # start recording
    pass

if __name__ == '__main__':
    # pass the functions themselves; target=play_sound() would run them immediately
    t1 = threading.Thread(target=play_sound)
    t2 = threading.Thread(target=record)
    t1.start()
    t2.start()
or is there a better way to do it?
I would start the recording thread first and look for the first peak in the signal captured by the mic. This will tell you how many ms after recording started the first sound was detected. For this you probably need to know the sampling rate of the mic, etc. -- here is a good starting point.
The timeline is something like this
---- recording start ------- playback start -------- sound first detected ----
You want to find out how many ms after you start recording a sound was picked up ((first_peak - recording_start) in the code below), and then subtract the time it took to start the playback ((playback_start - recording_start) below).
Here's a rough code outline
from datetime import datetime, timedelta
from threading import Thread

recording_start, playback_start, first_peak = None, None, None

def play_sound():
    global playback_start  # 'nonlocal' would be a SyntaxError at module level
    playback_start = datetime.now()
    # play sound here

def record():
    global recording_start, first_peak
    recording_start = datetime.now()
    # find_peak_location_in_ms() returns an offset in ms; store the peak as a datetime
    first_peak = recording_start + timedelta(milliseconds=find_peak_location_in_ms())  # implement find_peak_location_in_ms

t1 = Thread(target=record)  # note recording starts first; pass the function, don't call it
t2 = Thread(target=play_sound)
t1.start()
t2.start()
t1.join()  # once the threads are finished...
t2.join()

delay = (first_peak - recording_start) - (playback_start - recording_start)
PS: one of the other answers correctly points out that you need to worry about the global interpreter lock. You can likely bypass it by using C-level APIs to record/play the sound without blocking other threads, but you may find Python's not the right tool for that job.
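As a hedged illustration of that C-level route (the sounddevice library is my suggestion, not something the answer names): PortAudio drives playback and capture from C callbacks on a single stream, which sidesteps both the GIL and thread-start jitter:

# pip install sounddevice numpy
import numpy as np
import sounddevice as sd

fs = 44100
tone = (0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)).astype(np.float32)

# playrec() starts playback and recording on one underlying stream
recording = sd.playrec(tone, samplerate=fs, channels=1)
sd.wait()  # block until playback and recording complete

delay_samples = int(np.argmax(np.abs(recording) > 0.1))  # crude first-peak detector
print(f"estimated delay: {1000 * delay_samples / fs:.1f} ms")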
It won't be 100% concurrent real-time, but no solution for desktop will ever be. The question then becomes if it is accurate enough for your application. To know this you should simply run a few tests with known delays and see if it works.
You should know about the global interpreter lock: https://docs.python.org/3.3/glossary.html#term-global-interpreter-lock. This means that even on a multicore PC your code won't run truly concurrently.
If this solution is not accurate enough, you should look into the multiprocessing package. https://docs.python.org/3.3/library/multiprocessing.html
Edit: Well, in order to truly get them to start simultaneously you can't start them sequentially after each other like that. You would need to use multiprocessing, create the two workers, and then set up some kind of interrupt that starts both at the same time. And I think even then you can't be truly sure they will start at the same time, because the OS can switch in other work (multitasking), and even if that goes fine, within the processor itself things might be reordered and different code might be cached, etc. On a desktop you can never have the guarantee that two programs start simultaneously. So the question becomes whether they are consistently simultaneous enough for your purpose. To answer that you will need to find someone with experience in this, or just run a few tests.
I'm wondering how accurate python's time.sleep() method is for longer time periods spanning from a few minutes up to a few days.
My concern is, that there might be a drift which will add up when using this method for longer time periods.
Alternatively I have come up with a different solution to end a loop after a certain amount of time has passed:
import time

end = time.time() + 10000
while True:
    if time.time() > end:
        break
This is accurate down to a few milliseconds which is fine for my use case and won't drift over time.
Python's time.sleep() function is accurate and should be used in this case as it is simpler and easier to run. An example is
time.sleep(10000)  # blocks the calling thread for 10000 seconds
A bare while loop without any pause will burn a lot of CPU, which is why you should put a time.sleep call inside it. You should also move the condition into the while statement itself, which makes it clear when the loop ends.
As shown below
end = time.time() + 10000
while end > time.time():  # the loop exits once time.time() passes end
    time.sleep(0.001)     # sleep 1 ms per iteration to keep CPU usage low
I would recommend using the pause module, you can get millisecond precision over a period of days. No need to roll your own here.
https://github.com/jgillick/python-pause
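A usage sketch based on the project's README (worth double-checking the exact call names there):

# pip install pause
import pause
from datetime import datetime, timedelta

pause.until(datetime.now() + timedelta(days=2))  # sleep until an absolute wall-clock instant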
Python's time.sleep() is accurate for any length of time, with two little flaws:

The time t must be considered "at least t seconds": there may be a number of system events scheduled to start at the precise moment "time when started" + t, so the wakeup can be slightly late.

The sleep may be cut short if a signal handler raises an exception.

I think, but am not certain, that these flaws are found in most programming languages.
More out of curiosity, I was wondering how might I make a python script sleep for 1 second without using the time module?
Is there a computation that can be conducted in a while loop which takes a machine of n processing power a designated and indexable amount of time?
As mentioned in the comments, for the second part of your question: the processing time depends on the machine (the computer and its configuration) you are working with and the active processes on it. There isn't a fixed amount of time for an operation.
It's been a long time since you could get a reliable delay out of just trying to execute code that would take a certain time to complete. Computers don't work like that any more.
But to answer your first question: you can use a system call and open an OS process to sleep 1 second, like:
import subprocess
subprocess.run(["sleep", "1"])  # relies on the external Unix sleep binary
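Another option that avoids both the time module and spawning a process, assuming the standard library threading module is allowed:

import threading

threading.Event().wait(1)  # blocks the calling thread for roughly 1 second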
I have a threaded timer that fires every second and updates a clock. The problem is that sometimes the clock appears unstable and can jump 2 seconds instead of a steady 1 second increment.
The problem of course is that the initial (or subsequent) timer is not triggered at exactly 0:000 seconds and therefore it is possible that updates to the clock appear to jitter.
Is there any way of preventing this ?
import time
from threading import Timer

STAT = {}

def timer():
    Timer(1.00, timer).start()
    STAT['ftime'] = time.strftime("%H:%M:%S")
import time

start_time = time.time()
interval = 1
for i in range(20):
    # sleep until the next absolute tick so scheduling errors never accumulate;
    # max() guards against a small negative argument, which time.sleep() rejects
    time.sleep(max(0.0, start_time + (i + 1) * interval - time.time()))
    # do a thing
Replace '20' with however many seconds you want to time.
There are various approaches to scheduling; some designs even provide measures that can deliver an acceptable kind of remedy for a blocked or failed initiation on the planned scheduling time-line -- what may help is finer timing, hierarchical timing, external synchronisation, or asynchronous operations.
Without more details there is no single practice to "recommend", but one may get inspired:
if RealTime constraints allow a bit more overhead, one may go to a "supersampled", elastic, failure-resilient scheduling scenario, so as to avoid a 2 second gap (in the case of one failed .Timer() initiation): the threading.Timer() model fires every 50 msec, and an embedded logic decides whether it is the right time (not farther than half of one scheduling interval from an idealised one-second edge) and does the real job that was intended to run, or just returns, in case the RTC is not "near" the planned idealised scheduling time -- see the sketch after this list;
a good Python design also cannot forget about problems with GIL-lock issues, avoiding blocking IO(s), and implementing a reasonable task-segmentation for CPU-bound parts of the code.
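A minimal sketch of that supersampled idea, assuming the real job is just updating a clock string (the 50 ms period and the half-interval tolerance come straight from the description above):

import time
from threading import Timer

PERIOD = 0.050  # fire every 50 ms -- "supersampled"
last_second = None

def tick():
    global last_second
    Timer(PERIOD, tick).start()  # re-arm first, regardless of what happens below
    now = time.time()
    second = int(now)
    # do the real job at most once per second, and only near the idealised edge
    if second != last_second and (now - second) < 0.5:
        last_second = second
        print(time.strftime("%H:%M:%S"))  # the real job: update the clock

tick()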
I've created a script to monitor the output of a serial port that receives 3-4 lines of data every half hour - the script runs fine and grabs everything that comes off the port which at the end of the day is what matters...
What bugs me, however, is that the CPU usage seems rather high for a program that's just monitoring a single serial port; one core is always at 100% while this script is running.
I'm basically running a modified version of the code in this question: pyserial - How to Read Last Line Sent from Serial Device
I've tried polling the inWaiting() function at regular intervals and having it sleep when inWaiting() is 0 -- I've tried intervals from 1 second down to 0.001 seconds (basically, as often as I can without driving up the CPU usage). This will succeed in grabbing the first line but seems to miss the rest of the data.
Adjusting the timeout of the serial port doesn't seem to have any effect on CPU usage, nor does putting the listening function into its own thread (not that I really expected a difference, but it was worth trying).
Should Python/pySerial be using this much CPU? (this seems like overkill)
Am I wasting my time on this quest / Should I just bite the bullet and schedule the script to sleep for the periods that I know no data will be coming?
Maybe you could issue a blocking read(1) call, and when it succeeds use read(inWaiting()) to get the right number of remaining bytes.
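A minimal sketch of that pattern, assuming pySerial with a fully blocking port (timeout=None); the device path and baud rate are placeholders:

import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=None)  # hypothetical port settings

while True:
    first = ser.read(1)               # blocks until a byte arrives -- no busy polling
    rest = ser.read(ser.inWaiting())  # then drain whatever else is already buffered
    data = first + rest
    # process data ...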
Would a system-style solution be better? Create the Python script and have it executed via cron / Scheduled Task?
pySerial shouldn't be using that much CPU, but if it's just sitting there polling for an hour I can see how that may happen. Sleeping may be a better option, in conjunction with periodic wakeups and polls.