I am currently using datetime.datetime.now() in Python code to obtain a date/time stamp. However, this reflects the date/timezone set on the system, which can be changed or altered.
How can I retrieve the real time, i.e. the RTC time, from Python? Any help will be greatly appreciated.
If you are using Linux/Unix, you can get the system-wide real-time clock with the time module from the standard library, as follows:
import time
rtc = time.clock_gettime(time.CLOCK_REALTIME)
print(f"System RTC = {rtc}")
> System RTC = 1549619678.899073
For Windows, the RTC is not available. There are a variety of other clocks you could use, depending on your application, which wouldn't be affected by updates to system time. For instance, if you are trying to measure the time between two separate calls and don't want this to be affected by changes to the system datetime/timezone, you can use time.monotonic(), which is available on Windows as well. However, it is only useful relative to another call to time.monotonic() (i.e. for measuring duration) and does not have a defined reference point, so you can't call time.monotonic() to ask "what time is it?"
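As a minimal sketch, this is how a duration measurement with time.monotonic() looks (the sleep is just a stand-in for the work being timed):

```python
import time

start = time.monotonic()
time.sleep(0.1)  # stand-in for the work being timed
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f} s")
```

The result is unaffected by any change to the system clock or timezone made while the work runs.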
The schedule library allows you to schedule jobs at specific times of the day. For example:
schedule.every().day.at("10:30").do(job)
However, its documentation doesn't mention which time reference it uses.
Is it based on UTC?
Or is it based on the time zone of the server that runs my Python script?
If you look at the code of the library, you will find that datetime.datetime.now() is used (here, for example).
Looking at the now() documentation, you will see that when called without an argument, it returns the local time.
Thus, schedule.every().day.at("10:30").do(job) will use the server's time zone.
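You can verify the behavior of now() yourself: without an argument it returns a naive local time, while passing a timezone gives an aware one:

```python
from datetime import datetime, timezone

naive = datetime.now()                  # local time, tzinfo is None
aware_utc = datetime.now(timezone.utc)  # explicit UTC
print(naive.tzinfo, aware_utc.tzinfo)
```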
I am currently developing an IoT sensor-value simulator using the PyCharm IDE (along with pygame). Essentially, I am trying to produce and send data to the Microsoft Azure IoT platform while a GUI is available to users, in which they can see the temperature of each sensor, change the sensor outputs, etc.
Since I do not want to spam Azure with messages, I use a sleep call between messages to limit the rate at which they are sent. As a result, this slows down the whole application and is a bit cumbersome. Is there a way around this, so that I can send messages without affecting the user experience in the GUI? Thanks!
As Ted pointed out, multithreading is definitely an option, but may be a bit overkill depending on your case.
As an alternative solution you can use the time module of python to calculate the time since the last message was sent and only send a new message if enough time has passed. This way your other processes will continue to run as expected and you don't have to sleep / freeze your program.
import time

start = time.time()
message_interval = 5  # in seconds

while True:
    # other application logic
    if time.time() - start >= message_interval:
        send_message()
        start = time.time()  # reset timer
You could potentially even combine it with another check to see if it is even necessary to send a message.
import time

start = time.time()
message_interval = 5  # in seconds
update_available = True

while True:
    if time.time() - start >= message_interval and update_available:
        send_update_message()
        start = time.time()  # reset timer
        update_available = False  # reset variable
They mention in the docs a debugger which you can run locally.
If I get the current time using either:
datetime.datetime.now()
or,
time.time()
Does it make a system call to get the time? Or does the Python process have its own time service that is called to get the time?
The reason I am asking is to understand whether Python has to make a context switch to get the current time. If it makes a system call, that is quite an expensive operation; but if it has its own time service, the operation is not as expensive.
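One way to put a rough number on the cost yourself is a quick timeit measurement; the figures vary by platform and Python build, so no expected output is shown:

```python
import timeit

n = 1_000_000
for call in ("time.time()", "datetime.datetime.now()"):
    # Total wall time for n calls, reported as nanoseconds per call
    total = timeit.timeit(call, setup="import time, datetime", number=n)
    print(f"{call}: {total / n * 1e9:.0f} ns per call")
```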
I have a scheduling function and a scheduler with a queue of future events ordered by time. I'm using UNIX timestamps and the regular time.time(). One fragment of the scheduler is roughly equivalent to this:
# select the nearest event (eventfunc at eventime)
sleeptime = eventtime - time.time()
# if the sleep gets interrupted,
# the whole block will be restarted
interruptible_sleep(sleeptime)
eventfunc()
where the eventtime could be computed either based on a delay:
eventtime = time.time() + delay_seconds
or based on an exact date and time, e.g.:
eventtime = datetime(year,month,day,hour,min).timestamp()
Now we have monotonic time in Python. I'm considering modifying the scheduler to use monotonic time; schedulers are supposed to use monotonic time, they say.
No problem with delays:
sleeptime = eventtime - time.monotonic()
where:
eventtime = time.monotonic() + delay_seconds
But with the exact time I think the best way is to leave the code as it is. Is that correct?
If yes, I would need two event queues, one based on monotonic time and one based on regular time. I don't like that idea much.
As I said in the comment, your code duplicates the functionality of the sched standard module, so you might as well use solving this problem as a convenient excuse to migrate to it.
That said, what you're supposed to do if the system time jumps forward or backward is task-specific.
time.monotonic() is designed for cases when you need to do things with set intervals between them, regardless of any changes to the system clock.
So, if your solution is instead expected to react to time jumps by running scheduled tasks sooner or later than it otherwise would, in accordance with the new system time, you have no reason to use monotonic time.
If you wish to do both, then you either need two schedulers, or tasks with timestamps of the two kinds.
In the latter case, the scheduler will need to convert one type to the other (every time it calculates how much to wait or whether to run the next task), for which the time module provides no means.
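As a sketch of the migration, sched accepts a custom time function, so a monotonic-time scheduler for delay-based events could look like this (the task function and delays are illustrative):

```python
import sched
import time

# A scheduler driven by monotonic time; delays are immune to system clock jumps.
scheduler = sched.scheduler(timefunc=time.monotonic, delayfunc=time.sleep)

results = []

def task(name):
    results.append(name)

# enter() schedules relative to timefunc(); events fire in time order.
scheduler.enter(0.02, 1, task, argument=("second",))
scheduler.enter(0.01, 1, task, argument=("first",))
scheduler.run()  # blocks until all events have fired
print(results)
```

Exact-date events would still need regular time, which is the two-queue problem described above.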
I have a program that counts pulses (from a Hall effect sensor) on a rain gauge to measure precipitation. It runs continuously and counts the number of pulses every 5 minutes, which then translates into a rain amount. After an hour (12 five-minute measurements), I add up the total, and this is the hourly rainfall. I have structured the program so that, after the first hour, it drops the oldest measurement and adds the new one every 5 minutes, giving me a running hourly rain output, termed "totalrainlasthour".
My problem is that I want to upload this data to weather underground using a separate program that includes other data such as wind speed, temp, etc. This upload takes place every 5 minutes. I want to include the current value of "totalrainlasthour", and use it in the upload.
I tried a "from import" statement, but the more I read, the less it looks like it would work.
from rainmodule import totalrainlasthour
print(totalrainlasthour)
Is there a way can I pull in the current value of a variable from a separate program?
As far as I know, there's no good way for a python script that just starts up to access the values from inside an already-running Python instance. However, there are a few workarounds that you can try.
If it's acceptable for your weather uploading script to be running constantly, you could structure it to look something like this:
import time
import rainmodule
import windmodule
# etc

def start():
    # instantiate classes so you can keep track of state
    rain = rainmodule.RainCollection()
    wind = windmodule.WindCollection()
    # etc
    prev_time = time.time()
    while True:
        rain.loop()
        wind.loop()
        # etc
        now = time.time()
        if now - prev_time > (60 * 5):  # 5 minutes
            prev_time = now
            totalrainlasthour = rain.totalrainlasthour
            winddata = wind.data
            # upload code here

if __name__ == '__main__':
    start()
This method assumes that every one of your data collection modules can be modified to run iteratively within a "master" while loop.
If you can't wrangle your code into this format (or the loop methods of some modules take a long time to execute), then you could launch each of your modules as a thread or process using the threading or multiprocessing modules, and communicate through a synchronized data structure or a queue.
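A sketch of the threading variant, with a collector loop and the uploader communicating through a queue (the collector body and values here are stand-ins, not your actual rain-counting code):

```python
import queue
import threading
import time

updates = queue.Queue()

def collector(stop):
    # Stand-in for the rain-counting loop; pushes the latest running total.
    total = 0.0
    while not stop.is_set():
        total += 0.05  # pretend 0.05 mm fell this tick
        updates.put(total)
        time.sleep(0.01)

stop = threading.Event()
t = threading.Thread(target=collector, args=(stop,))
t.start()
time.sleep(0.05)
stop.set()
t.join()

# Uploader side: drain the queue to get the most recent value.
latest = None
while not updates.empty():
    latest = updates.get()
print(latest)
```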
An alternative solution might be to create a database of some sort (Python comes bundled with sqlite, which could work), and have each of the scripts write to that database. That way, any arbitrary script could run and grab what it needs to from the database without having to tie in to the other data collection modules.
The only potential issue with using sqlite is that since it's lightweight, it supports only one writer at a time, so if you're making a huge amount of changes and additions to the database, it may end up being a bottleneck.
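A minimal sketch of the database approach, assuming a hypothetical readings table shared by the collector and the uploader (table, column, and file names are illustrative):

```python
import sqlite3

conn = sqlite3.connect("weather.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (name TEXT PRIMARY KEY, value REAL)")

# Collector side: upsert the running hourly rainfall each time it changes.
conn.execute(
    "INSERT INTO readings (name, value) VALUES (?, ?) "
    "ON CONFLICT(name) DO UPDATE SET value = excluded.value",
    ("totalrainlasthour", 0.35),
)
conn.commit()

# Uploader side: grab the current value whenever the upload script runs.
row = conn.execute(
    "SELECT value FROM readings WHERE name = ?", ("totalrainlasthour",)
).fetchone()
print(row[0])
conn.close()
```

With a few writes every 5 minutes, the single-writer limitation mentioned above should not be a problem in practice.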