Running two files on a single project on PyCharm - python

I am currently developing an IoT sensor value simulator using the PyCharm IDE (along with pygame). Essentially, I am trying to produce/send data to the Microsoft Azure IoT platform while a GUI is available to users, in which they can see the temperature of each sensor, change the sensor outputs, etc.
Since I do not want to spam Azure with messages, I use a sleep call between messages to limit the send rate. As a result, this slows down the whole application and is a bit cumbersome. Is there a way around this so I can send messages without affecting the user experience in the GUI? Thanks!

As Ted pointed out, multithreading is definitely an option, but may be a bit overkill depending on your case.
As an alternative solution, you can use Python's time module to compute the time since the last message was sent and only send a new one once enough time has passed. That way the rest of your application keeps running as expected and you never have to sleep and freeze your program.
import time

start = time.time()
message_interval = 5  # in seconds

while True:
    # other application logic

    if time.time() - start >= message_interval:
        send_message()
        start = time.time()  # reset timer
You could potentially even combine it with another check to see if it is even necessary to send a message.
import time

start = time.time()
message_interval = 5  # in seconds
update_available = True

while True:
    if time.time() - start >= message_interval and update_available:
        send_update_message()
        start = time.time()       # reset timer
        update_available = False  # reset flag


Is there a liveness probe in Kubernetes that can catch when a python container freezes?

I have a python program that runs an infinite loop, however, every once in a while the code freezes. No errors are raised or any other message that would alert me something's wrong. I was wondering if Kubernetes has any liveness probe that could possibly help catch when the code freezes so it can kill and restart that container.
I have an idea of having the python code write a log entry every time it completes the loop. This way I can have a liveness probe check the log file every 30 seconds or so to see if the file has been updated. If the file has not been updated after the allotted time, then it is assumed the program froze and the container is killed and restarted.
I am currently using the following python code to test with:
# Libraries
import logging
import random as r
from time import sleep

# Global variables
FREEZE_TIME = 60

def main():
    '''Starts an infinite loop that has a 10% chance of freezing.'''
    # Create .log file to hold logged info.
    logging.basicConfig(filename="freeze.log", level=logging.INFO)
    # Start infinite loop.
    while True:
        freeze = r.randint(1, 10)  # 10% chance of freezing.
        sleep(2)
        logging.info('Running infinite loop...')
        print("Running infinite loop...")
        # Simulate a freeze.
        if freeze == 1:
            print(f"Simulating freeze for {FREEZE_TIME} sec.")
            sleep(FREEZE_TIME)

# Start code with main().
if __name__ == "__main__":
    main()
If anyone could tell me how to implement this log idea, or if there is a better way to do this, I would be most grateful! I am currently using Kubernetes on Docker Desktop for Windows 10, if that makes a difference. Also, I am fairly new to this, so if you could keep your answers at a "Kubernetes for dummies" level I would appreciate it.
A common approach to liveness probes in Kubernetes is to hit an HTTP endpoint (if the application has one). Kubernetes checks whether the response status code falls into the 200-399 range (success) or not (failure). Running an HTTP server is not mandatory, though, as you can run a command or sequence of commands instead; in that case the health status is based on the exit code (0 = ok, anything else = failure).
Given the nature of your script and the idea with the log, I would write another Python script that reads the last line of that log and parses the timestamp. Then, if the difference between the current time and the timestamp is greater than [insert reasonable amount], it exits with 1, otherwise with 0.
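A rough sketch of such a check script follows. It assumes the app's logging is configured to put an %(asctime)s timestamp at the start of each line (the basicConfig call in the question does not do this yet; you would add format="%(asctime)s %(message)s"):

```python
# check_health.py -- rough sketch of the liveness check described above.
import sys
from datetime import datetime, timedelta

LOG_FILE = "freeze.log"
MAX_AGE = timedelta(seconds=30)  # tune to the loop period

def last_log_time(path):
    with open(path) as f:
        last_line = f.read().splitlines()[-1]
    # asctime's default format looks like "2023-01-01 12:00:00,123"
    stamp = " ".join(last_line.split()[:2])
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S,%f")

def healthy(path, max_age=MAX_AGE):
    try:
        return datetime.now() - last_log_time(path) <= max_age
    except (OSError, IndexError, ValueError):
        return False  # no log yet, or an unparsable line: treat as unhealthy

# In the probe entrypoint: sys.exit(0 if healthy(LOG_FILE) else 1)
```

The broad except clause matters here: a missing file or half-written last line should read as "unhealthy", not crash the probe.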
If you have prepared the health-check script, you can enable it in this way:
spec:
  containers:
  - name: my_app
    image: my_image
    livenessProbe:
      exec:
        command:  # the command to run
        - python3
        - check_health.py
      initialDelaySeconds: 5  # wait 5 sec after start for the log to appear
      periodSeconds: 5        # run every 5 seconds
The documentation has detailed explanation with some great examples.

How to synchronize the start time of python threads?

I want to measure the time delay of a signal. To do that, the signal is played on a speaker and the delay until it is captured by a microphone is estimated. The delay is expected to be in the range of milliseconds, so it is crucial to start the speaker signal and the measurement at exactly the same time.
My question is if that can be achieved by using threads:
import threading

def play_sound():
    # play sound
    ...

def record():
    # start recording
    ...

if __name__ == '__main__':
    t1 = threading.Thread(target=play_sound)  # pass the callable; don't call it here
    t2 = threading.Thread(target=record)
    t1.start()
    t2.start()
or is there a better way to do it?
I would start the recording thread first and look for the first peak in the signal captured by the mic. This tells you how many ms after recording started the first sound was detected. For this you probably need to know the sampling rate of the mic, etc.; here is a good starting point.
The timeline is something like this
---- recording start ------- playback start -------- sound first detected ----
You want to find out how many ms after you start recording a sound was picked up ((first_peak - recording_start) in the code below), and then subtract the time it took to start the playback ((playback_start - recording_start) below)
Here's a rough code outline
from datetime import datetime
from threading import Thread

# module-level state, so the threads use `global` (not `nonlocal`)
recording_start, playback_start, first_peak = None, None, None

def play_sound():
    global playback_start
    playback_start = datetime.now()
    # ... start playback ...

def record():
    global recording_start, first_peak
    recording_start = datetime.now()
    first_peak = find_peak_location_in_ms()  # implement this

Thread(target=record).start()      # note: recording starts first
Thread(target=play_sound).start()
# once the threads are finished
delay = (first_peak - recording_start) - (playback_start - recording_start)
P.S. One of the other answers correctly points out that you need to worry about the global interpreter lock. You can likely bypass it by using C-level APIs to record/play the sound without blocking other threads, but you may find Python is not the right tool for that job.
It won't be 100% concurrent real-time, but no solution for desktop will ever be. The question then becomes if it is accurate enough for your application. To know this you should simply run a few tests with known delays and see if it works.
You should know about the global interpreter lock: https://docs.python.org/3.3/glossary.html#term-global-interpreter-lock. This means that even on a multicore PC your code won't run truly concurrently.
If this solution is not accurate enough, you should look into the multiprocessing package. https://docs.python.org/3.3/library/multiprocessing.html
Edit: Well, in order to truly get them to start simultaneously you can't start them sequentially after each other like that. You need to use multiprocessing: create the two workers, then use some kind of interrupt that starts them both at the same time. Even then you can't be truly sure they will start at the same time, because the OS can switch in other work (multitasking), and even if that goes fine, the processor itself may reorder things, cache different code, etc. On a desktop you can never have the guarantee that two programs start simultaneously. The question then becomes whether they are consistently simultaneous enough for your purpose. To answer that you will need to find someone with experience in this, or just run a few tests.
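One concrete form of that "interrupt" is a barrier: both workers block on it until everyone has arrived and are then released together. A minimal sketch with threading.Barrier (the same pattern exists as multiprocessing.Barrier for separate processes; the play_sound/record bodies are placeholders):

```python
import threading
import time

results = {}
barrier = threading.Barrier(2)  # both threads must arrive before either proceeds

def play_sound():
    barrier.wait()  # released at (almost) the same instant as record()
    results["playback_start"] = time.monotonic()
    # ... start playback here ...

def record():
    barrier.wait()
    results["recording_start"] = time.monotonic()
    # ... start capturing here ...

t1 = threading.Thread(target=record)
t2 = threading.Thread(target=play_sound)
t1.start()
t2.start()
t1.join()
t2.join()

skew = abs(results["playback_start"] - results["recording_start"])
print(f"start skew: {skew * 1000:.3f} ms")
```

This bounds the skew to scheduler jitter (typically well under a millisecond on an idle machine), but as noted above it can never make the two starts exactly simultaneous.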

Schedule alert for a time

From Python I'd been triggering notifications like this:
import json
import os
import shlex  # pipes.quote also works on older versions; pipes was removed in Python 3.13

def show_alert(message, title="Flashlight"):
    """Display a macOS notification."""
    message = json.dumps(message)
    title = json.dumps(title)
    script = 'display notification {0} with title {1}'.format(message, title)
    os.system("osascript -e {0}".format(shlex.quote(script)))
but now I want to be able to trigger these alerts some time in the future.
I had a method using time: time.sleep(60) would trigger an alert a minute in the future.
The problem with that is that if the script is ended, or the computer sleeps, I'm not sure how reliable it would be.
Is there a way I can use python (maybe with applescripts, or other macOS tools) to schedule a notification for some arbitrary time in the future?
Assuming you are on macOS, you can use crontab. It executes processes at a given time. For example, every 5 hours, or every monday at 10pm.
Also, you should take a look at terminal-notifier. Here you have an example.
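For example, a crontab entry (installed with crontab -e; the script path here is hypothetical) that fires a Python alert script every day at 09:00 would look like:

```
0 9 * * * /usr/bin/python3 /path/to/alert_script.py
```

Because cron is run by the system, the job survives your script exiting, though a job scheduled while the Mac is asleep may be skipped.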

How to set an multiple alarm in python

I want to set multiple alarms in Python. What is the recommended way of setting this up? My use case is that I have threshold times for N variables. When the current time reaches a threshold value, I want all the variables with that threshold value.
Here's my approach:
threshold_time_list = [...]  # get the list of all threshold times from the DB
current_time = datetime.now()
matches = [i for i in threshold_time_list if i == current_time]
But this is very inefficient way of doing it since I might have 250+ variables like a/b/c.
And I also have to check this condition every second (cron job). Is there a better way of doing it?
I found on SO that this can be done using threading, making each thread sleep for threshold - current_time. But running 250 threads in parallel is again an issue, since I've been facing a problem in production where Django hangs (I don't know why) and I need to restart the server to make it work again. We're assuming that Django might be running out of threads, so creating 250 more threads seems cumbersome.
Also, if someone knows why Django hangs in the middle of running the live product, that would be helpful to know.
Could this alarm problem be solved with Celery?
Use the sched module. This lets you create any number of scheduled tasks, then run them when they kick off, one at a time.
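A minimal sketch of that with sched follows (fire_alarm and the threshold list are placeholders; enterabs takes an absolute time, which matches threshold times pulled from a DB):

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def fire_alarm(name):
    # placeholder: look up and process the variables for this threshold
    print("alarm fired:", name)

# Schedule each alarm at its absolute threshold time.
now = time.time()
for name, delay in [("a", 0.1), ("b", 0.2), ("c", 0.3)]:
    scheduler.enterabs(now + delay, 1, fire_alarm, argument=(name,))

scheduler.run()  # blocks, running each alarm as its time arrives
```

One scheduler (in a single background thread) can serve all 250+ alarms, so there is no need for one thread per variable; if you are already running Celery, tasks with an eta serve the same purpose.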

Mocking "sleep"

I am writing an application that asynchronously triggers some events. The test looks like this: set everything up, sleep for some time, check that the event has triggered.
However because of that waiting the test takes quite a time to run - I'm waiting for about 10 seconds on every test. I feel that my tests are slow - there are other places where I can speed them up, but this sleeping seems the most obvious place to speed it up.
What would be the correct way to eliminate that sleep? Is there some way to cheat datetime or something like that?
The application is a tornado-based web app and async events are triggered with IOLoop, so I don't have a way to directly trigger it myself.
Edit: more details.
The test is a kind of integration test, where I am willing to mock the 3rd party code, but don't want to directly trigger my own code.
The test is to verify that a certain message is sent using websocket and is processed correctly in the browser. Message is sent after a certain timeout which is started at the moment the client connects to the websocket handler. The timeout value is taken as a difference between datetime.now() at the moment of connection and a value in database. The value is artificially set to be datetime.now() - 5 seconds before using selenium to request the page. Since loading the page requires some time and could be a bit random on different machines I don't think reducing the 5 seconds time gap would be wise. Loading the page after timeout will produce a different result (no websocket message should be sent).
So the problem is to somehow force tornado's IOLoop to send the message at any moment after the websocket is connected - if that happened in 0.5 seconds after setting the database value, 4.5 seconds left to wait and I want to try and eliminate that delay.
Two obvious places to mock are IOLoop itself and datetime.now(). the question is now which one I should monkey-patch and how.
If you want to mock sleep then you must not use it directly in your application's code. I would create a class method like System.sleep() and use that in the application; System.sleep() can then be mocked.
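A small sketch of that idea (System, trigger_after_delay and the event log are hypothetical names, not from the question's codebase), using unittest.mock to patch the wrapper so the test never actually waits:

```python
import time
from unittest import mock

class System:
    """Thin wrapper around time.sleep so tests can patch the delay away."""
    @staticmethod
    def sleep(seconds):
        time.sleep(seconds)

def trigger_after_delay(event_log, delay=10):
    System.sleep(delay)  # in production this really waits
    event_log.append("event fired")

# In a test, patch System.sleep so the 10-second wait costs nothing:
with mock.patch.object(System, "sleep") as fake_sleep:
    log = []
    trigger_after_delay(log)
    fake_sleep.assert_called_once_with(10)
    print(log)  # -> ['event fired']
```

The same trick works for datetime.now(): route all clock reads through one wrapper and patch that single seam in tests.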
Use the built-in Tornado testing tools. Each test gets its own IOLoop, and you use self.stop and self.wait to get results from it, e.g. (from the docs):
client = AsyncHTTPClient(self.io_loop)
# call self.stop on fetch completion
client.fetch("http://www.tornadoweb.org/", self.stop)
response = self.wait()
