Connection checker script in Python

I was working on a connection checker script in Python that runs all the time, trying to launch it from crontab on a Raspberry Pi 4 with Ubuntu Desktop 21.04. The code is the following:
from gpiozero import LED
from ping3 import ping
from time import sleep

program_flag = LED(24)

while True:
    ping_test = ping('8.8.8.8')
    if isinstance(ping_test, float):
        program_flag.on()
    else:
        program_flag.off()
    print(ping_test)
    print(program_flag)
    sleep(3)
This code works fine for me, but the problem comes when I try to run the script from crontab. I read something about infinite loops or while loops in crontab, and I think they don't work there. What is the best solution for this? The goal is to set a GPIO pin to 1 or 0 depending on whether the connection is working or not.
EDIT: my crontab line: #reboot python3 /bin/connection_test.py &
Thank you so much!

I have checked line by line, and the while loop doesn't work properly under crontab because of this line: ping_test = ping('8.8.8.8').
I suppose that the problem comes from the ping3 module.
Thank you for your answer, Teejay Bruno.
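In case it helps others: a minimal sketch of a workaround, assuming the ping3 call fails under cron because raw ICMP sockets need root (or CAP_NET_RAW) privileges that a user crontab does not have. It shells out to the system ping binary instead; the GPIO pin and target host come from the question, everything else is illustrative.

# Sketch: connection checker that calls the system `ping` via subprocess
# instead of ping3, assuming ping3 fails under cron for lack of
# raw-socket privileges.
import subprocess
from time import sleep

from gpiozero import LED

program_flag = LED(24)

def host_reachable(host='8.8.8.8'):
    # one packet, 2-second timeout; return code 0 means a reply arrived
    result = subprocess.run(['ping', '-c', '1', '-W', '2', host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

while True:
    if host_reachable():
        program_flag.on()
    else:
        program_flag.off()
    sleep(3)

For a script that is meant to run forever, a systemd service (or an @reboot crontab entry with output redirected to a log file) is usually easier to debug than a bare cron line, because you can actually see the traceback when the script dies.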

Related

Python Serial() not establishing on start up

I've made a Python script to read information from a serial port. I want this script to run at boot on my Ubuntu machine.
I've made my Python file executable and added a line to the crontab: #reboot sleep 10s; python /home/MyUser/Documents/MyScript.py &
The script does start when I boot, but it seems like the specific line ser = serial.Serial('/dev/ttyACM0', 9600) won't work. Indeed, if I try to execute something before that line it works, but nothing after it does.
I can't figure out why it's not working. When I execute my script from the terminal, it works without a problem.
I believe it may be because the serial port is not ready yet when the script starts, but even after adding a longer sleep before the script starts, it doesn't work. I'm running out of ideas and I really need your help.
Here is my Python script:
#!/usr/bin/env python
import serial
import subprocess
import os
import time

ser = serial.Serial('/dev/ttyACM0', 9600)
os.system('export DISPLAY=:0 && xset dpms force off &')  # this line won't run either, which is how I know the problem comes from the serial line above
time.sleep(60)
urgency = 0
diffusion = 0

while 1:
    line = ser.readline().strip()
    if line == "start":
        pass  # do something
    else:
        pass  # do something else
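A sketch of one common fix for this symptom, under the assumption that /dev/ttyACM0 simply is not ready yet at boot: retry serial.Serial() until the port can be opened, instead of relying on a fixed sleep. The port name and baud rate are taken from the question; the retry helper is illustrative.

#!/usr/bin/env python
# Sketch: keep retrying until the serial device can be opened, assuming
# the boot-time failure is the port not being available yet.
import time
import serial

def open_serial(port='/dev/ttyACM0', baud=9600, retries=30, delay=2):
    for _ in range(retries):
        try:
            return serial.Serial(port, baud)
        except serial.SerialException:
            time.sleep(delay)  # device not present yet, try again
    raise RuntimeError('could not open %s after %d attempts' % (port, retries))

ser = open_serial()

Redirecting the script's output in the crontab entry (for example appending >> /home/MyUser/serial.log 2>&1) also makes it much easier to see which exception is actually raised at boot.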

Start a Python script in a shell with SSH

I want to start a Python script with paramiko which connects to my Raspberry Pi, which acts as a server. Then, after the connection to the Raspberry Pi, it starts a script like this (to send data to an Arduino from another PC):
import tty
import sys
import termios
import serial
import os

arduino = serial.Serial('/dev/ttyUSB0', 9600)
x = "./mjpg_streamer -i \"./input_uvc.so -d /dev/video0 -y\" -o \"./output_http.so -w ./www\""
os.system(x)

orig_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin)

x = 0
while x != chr(27):  # ESC
    x = sys.stdin.read(1)[0]
    arduino.write(x)

termios.tcsetattr(sys.stdin, termios.TCSADRAIN, orig_settings)
This code works okay; it is basically a raw_input, just to keep it simple.
I want to connect automatically to the Raspberry Pi over SSH and start a Python script that will ask for an input (which in the code above is a constant).
I thought of something like opening a new shell with the script above already started, or something like that...
Quick answer: there is no option that lets you pass the password on the ssh command line. You have to set up a shared key pair to use ssh without a password prompt. Searching the internet gives plenty of guides, for example: http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id
So first, set up the key pair. Then use normal ssh to check that it works. Finally, in your Python script, add some code to handle the SSH connection.
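A minimal paramiko sketch of that last step, assuming key-based authentication is already set up; the host name, user name, key path and remote script path below are placeholders, not values from the question.

# Sketch: connect to the Pi over SSH with a key and launch the remote
# script, assuming passwordless key authentication is already configured.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('raspberrypi.local', username='pi',
               key_filename='/home/me/.ssh/id_rsa')  # placeholder values

# Run the remote script; its stdin can be used to feed it input
stdin, stdout, stderr = client.exec_command('python /home/pi/serial_bridge.py')
stdin.write('some input\n')   # replaces the hard-coded constant
stdin.flush()
print(stdout.read().decode())
client.close()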

What could be causing python script to loop?

Alright, so here are the facts. I have two Python scripts and I want script1 to trigger script2. I have tried the following ways to do this:
from subprocess import call
call(["python3", "script2.py"])
The dreaded exec call:
exec(open("script2.py").read())
And finally:
os.system("script2.py 1")
So, just to make sure I am giving you all the info needed: I want to run script1 first, and once it has finished processing I want it to trigger script2. Currently, no matter what I have tried, I get stuck in a loop where script1 just keeps running over and over again.
Any ideas?
Here is the actual code for script1:
import os
"""This looks like it is unnecessary but I can't include its context
in this post. Just know it has an actual purpose."""
input_file = "gs://link_to_audio_file.m4a"
audio = input_file
output_format = os.path.basename(input_file).replace("m4a", "flac")
os.system('ffmpeg -i %s -ar 16000 -ac 1 %s' % (audio,output_format))
os.system("python3 script2.py")
Make sure the first script runs cleanly by itself by commenting out the call to the second script. If it still seems to run forever, there's an issue other than trying to call a second script. If you have an IDE, you can step through the code to discover where it hangs. If you're not using an IDE, place print statements in the script so you can see the execution path. Do you possibly have a cyclic call, where the first Python script calls the second and the second in turn calls the first?
When using os.system, I believe you'd need to include python as in
os.system("python script2.py 1")
I can't tell why you're in a loop without seeing the scripts.
I have finally solved this issue! I was actually using an import statement in the second script that was trying to import a variable from the first script, but instead it was importing the entire script, causing it to run in an endless loop. Just like LAS had suggested, nicely done! Thank you all for all your help on this!
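For anyone hitting the same thing: the usual guard for this is to keep script1's top-level work inside an if __name__ == "__main__": block, so that importing a variable from it does not re-run the whole pipeline. A sketch with hypothetical names:

# script1.py -- sketch with hypothetical names; importing shared_value
# from script2 no longer re-runs the ffmpeg and os.system calls.
import os

shared_value = "gs://link_to_audio_file.m4a"  # safe for script2 to import

def main():
    output = os.path.basename(shared_value).replace("m4a", "flac")
    os.system('ffmpeg -i %s -ar 16000 -ac 1 %s' % (shared_value, output))
    os.system("python3 script2.py")

if __name__ == "__main__":
    main()  # runs only when script1.py is executed directly, not on import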

Restart python script

I have a script that collects data from the streaming API. I'm getting an error at random that I believe is coming from Twitter's end, for whatever reason. It doesn't happen at a specific time; I've seen it as early as 10 minutes after running my script, and other times after 2 hours.
My question is: how do I create another script (outside the running one) that can detect that it terminated with an error, and then restart it after a delay?
I did some searching and most results were about using bash on Linux, but I'm on Windows. Other suggestions were to use Windows Task Scheduler, but that can only be set for a known time.
I came across the following code:
import os, sys, time

def main():
    print("AutoRes is starting")
    executable = sys.executable
    args = sys.argv[:]
    args.insert(0, sys.executable)
    time.sleep(1)
    print("Respawning")
    os.execvp(executable, args)

if __name__ == "__main__":
    main()
If I'm not mistaken, that runs inside the script itself, correct? The issue with that is that my script is currently collecting data and I can't terminate it to edit it.
How about this?
from os import system
from time import sleep

while True:  # manually terminate when you want to stop streaming
    system('python streamer.py')
    sleep(300)  # sleep for 5 minutes
In the meantime, when something goes wrong in streamer.py, end it from there by invoking sys.exit(1).
Make sure this and streamer.py are in the same directory.
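A slightly more selective variant of that wrapper, sketched with subprocess so the exit code from sys.exit(1) can actually be inspected; streamer.py is assumed to exit with 0 on a clean, deliberate shutdown.

# Sketch: restart streamer.py only when it exits with a non-zero code,
# assuming it calls sys.exit(1) on errors and exits 0 when stopped on purpose.
import subprocess
import sys
from time import sleep

while True:
    code = subprocess.call([sys.executable, 'streamer.py'])
    if code == 0:
        break       # clean shutdown, stop restarting
    sleep(300)      # error: wait 5 minutes, then respawn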

Restart Python.py when it stops working

I am using the CherryPy framework to run my Python code on a server, but the process stops working when the load increases.
Every time this happens I have to go and start the Python code manually. Is there any way I can use Gunicorn with CherryPy so that Gunicorn restarts the code automatically when it stops working?
Any other solution will also work in this case. I just want to make sure that the Python program does not stop working.
I use a cron job that checks the memory load every few minutes and resets CherryPy when the memory exceeds 500 MB, so that the web host doesn't complain to me with emails. Something on my server doesn't release memory when a function ends as it should, so this is a pragmatic workaround.
This hack may seem weird because I reset it using an HTTP request, but that's because I spent hours trying to figure out how to do this from bash and gave up. It works.
CRON PART
*/2 * * * * /usr/local/bin/python2.7 /home/{mypath}/cron_reset_cp.py > $HOME/cron.log 2>&1
And code inside cron_reset_cp.py...
# cron job for resetting CherryPy (/cp/) when it exceeds 500 MB
import os

# assuming the cron job starts in /home/my_username/
os.chdir('/home/my_username/cp/')
import mem

C = mem.MemoryMonitor('my_username')  # this helper adds up all the memory
memory = int(float(C.usage()))

if memory > 500:  # MB
    # Tried pid = os.getpid() (current process = cron job); that approach did not work for me.
    import urllib2
    cp = urllib2.urlopen('http://myserver.com/cp?reset={password}')
Then I added this handler to reset CherryPy via cron, or after a GitHub update, from any browser (assuming only I know the {password}).
The reset URL would be http://myserver.com/cp?reset={password}
def index(self, **kw):
    if kw.get('reset') == '{password}':
        cherrypy.engine.restart()
        ip = cherrypy.request.headers["X-Forwarded-For"]  # get client IP
        return 'CherryPy RESETTING for duty, sir! requested by ' + str(ip)
The MemoryMonitor part is from here:
How to get current CPU and RAM usage in Python?
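For reference, a modern equivalent of that MemoryMonitor step, sketched with the third-party psutil package (an assumption on my part, not what the linked answer uses); it sums the resident memory of all processes owned by a given user.

# Sketch: total resident memory (MB) of all processes owned by a user,
# using psutil instead of the MemoryMonitor helper from the linked answer.
import psutil

def user_memory_mb(username):
    total = 0
    for proc in psutil.process_iter(['username', 'memory_info']):
        info = proc.info
        if info['username'] == username and info['memory_info'] is not None:
            total += info['memory_info'].rss
    return total / (1024 * 1024)

if user_memory_mb('my_username') > 500:
    print('time to restart CherryPy')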
Python uses many error-handling strategies to control flow. A simple try/except statement can catch the exception raised when, say, your memory overflows, the load spikes, or any number of other issues make your code stall (hard to say without the actual code).
In the except clause, you can release any memory you allocated and restart your process.
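A minimal sketch of that idea, where run_server() is a hypothetical stand-in for whatever starts the CherryPy app; on an unexpected exception the script logs the traceback and re-executes itself with os.execv.

# Sketch: catch unexpected failures and re-exec the current script.
# run_server() is a hypothetical placeholder for the CherryPy entry point.
import os
import sys
import time
import traceback

def run_server():
    ...  # start CherryPy / Gunicorn here

if __name__ == "__main__":
    try:
        run_server()
    except Exception:
        traceback.print_exc()
        time.sleep(5)  # brief pause before restarting
        os.execv(sys.executable, [sys.executable] + sys.argv)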
Depending on your OS, try the following logic:
Write os.getpid() to /path/pid.file
Create a service script that connects to your web port
Try to fetch data
If no data was received, kill the PID stored in /path/pid.file
Restart the script
Your main script:
import os

# record this process's PID so the service script can kill it later;
# use the same path that service.py reads
with open('./pidfile.pid', 'w') as fh:
    fh.write(str(os.getpid()))

# ... execute your code as per normal ...
service.py script:
from socket import *
from os import kill
from signal import SIGTERM

s = socket()
try:
    s.connect(('127.0.0.1', 80))
    s.send(b'GET / HTTP/1.1\r\n\r\n')
    received = len(s.recv(8192))
    s.close()
except:
    received = 0

if received <= 0:
    with open('/path/to/pidfile.pid', 'rb') as fh:
        kill(int(fh.read()), SIGTERM)  # os.kill needs both a PID and a signal
And have a cronjob (execute in a console):
export EDITOR=nano; crontab -e
Now you're in the text editor editing your cron jobs; write the following two lines at the bottom:
*/2 * * * * cd /path/to/; python service.py
*/5 * * * * cd /path/to/main/; python script.py
Press Ctrl+X and when asked to save changes, write Y and enjoy.
Also, instead of restarting your script from within service.py, I'd suggest that service.py only kills the PID stored in /path/pid.file, and that you let your OS handle starting the script back up when the PID file is missing from /path/; Linux at least has very nifty features for this.
Best practice on Ubuntu
It's considered best practice to use the system's service scripts (service apache2 status, for instance); the service scripts let you reload, stop, start, query job states and so on.
Check out: http://upstart.ubuntu.com/cookbook/#service-job
Also check the service scripts of other applications: don't just use them as skeletons, but make sure your application follows the same logic.
Perhaps you need supervisord to monitor your Gunicorn process and restart it when necessary:
http://www.onurguzel.com/managing-gunicorn-processes-with-supervisor/
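A minimal supervisord program section along those lines; the program name, command and paths below are placeholders, and the app is assumed to be started with a gunicorn command.

; sketch of /etc/supervisor/conf.d/myapp.conf -- placeholder names and paths
[program:myapp]
command=/usr/local/bin/gunicorn myapp:application --bind 127.0.0.1:8000
directory=/home/me/myapp
autostart=true
autorestart=true                       ; restart the process whenever it dies
stdout_logfile=/var/log/myapp.out.log
stderr_logfile=/var/log/myapp.err.log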
