Stop raspbian/debian from within python - python

When I am away from my mountain, I monitor my photovoltaic system with a Raspberry Pi and a small Python script that reads the data and sends it to my web page every hour. The whole thing is powered up by an electro-mechanical timer switch that turns it on for 15 minutes. During that window the script may run twice, which I would like to prevent as the result is messy (lsdx.eu/GPV_ZSDX).
I want to add a line at the end of the script to stop it once it has run, and possibly to stop Raspbian as well, so there is a clean exit before the power goes off.
- "exit" only exits a loop but the script is still running
- of course Ctrl+C won't do as I am away;
I could not find any tip in the highly technical messages on Stack Overflow, nor in the Raspbian help.
Any tip?
Thanks

The exit() command should exit the program (break is the statement that will exit a loop). What behavior are you seeing?
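As a quick illustration (the loop here is just a placeholder), break only leaves the enclosing loop, while sys.exit() ends the whole script:
import sys

for reading in range(10):          # placeholder loop over sensor readings
    if reading == 5:
        break                      # break only leaves this loop
# execution continues here after the break

sys.exit(0)                        # sys.exit() ends the whole script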
To shut down, try:
python3:
from subprocess import run
run('poweroff', shell=True)
python2:
from subprocess import call
call('poweroff')
Note: poweroff may be called shutdown on your system and might require additional command line switches to force a shutdown (if needed).
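If you are not sure which of the two exists on your image, you can probe for it first; a minimal sketch (assumes the script already runs with enough privileges to power the system off):
from shutil import which
from subprocess import run

# prefer poweroff if it is on the PATH, otherwise fall back to shutdown
if which('poweroff'):
    run(['poweroff'])
else:
    run(['shutdown', '-h', 'now'])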

For your case, structure the python script as a function using the following construct:
def read_data():
    data_reading_voodoo
    return message_to_be_sent

def send_message(msg):
    perform_message_sending_voodoo
    log_message_sending_voodoo_success_or_failure
    return None

if __name__ == "__main__":
    msg = read_data()
    send_message(msg)
Structured like this, the python script should exit after running.
Next create a shell script as follows (assuming bash and python, but modify according to your usage):
#!/bin/bash
python /path/to/your/voodoo/script && sudo shutdown -h +6
The sudo shutdown -h +6 shuts the Raspberry Pi down 6 minutes after the script has run. The delay gives you some time after startup to remove the script if you ever want to break the run-restart cycle.
Make the shell script executable: chmod 755 run_py_script_then_set_shutdown (see man chmod for details).
Now create a cronjob to run run_py_script_then_set_shutdown on startup.
Run crontab -e, then add the following line to your crontab:
@reboot /path/to/your/shell/script
Save, reboot the pi, and you're done.
Every time the rpi starts up, the python script should run and exit. Then the rpi will shutdown 6 minutes after the python script exits.
You can (should) adjust the 6 minutes for your purposes.
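If you would rather keep everything inside the python script instead of a wrapper shell script, the same delayed shutdown can be requested from the end of the __main__ block; a rough sketch, assuming passwordless sudo for shutdown:
from subprocess import run

if __name__ == "__main__":
    msg = read_data()        # same functions as in the skeleton above
    send_message(msg)
    # request a shutdown 6 minutes from now; the delay leaves time to log in
    # and run "sudo shutdown -c" if you ever need to cancel it
    run(['sudo', 'shutdown', '-h', '+6'])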

Thanks for all these answers, which help me learn Python and Debian.
I finally opted for a very simple solution at the end of the script:
import os
os.system('sudo shutdown now')
But I'll keep these other solutions in mind.
Thanks again,
Lionel

Related

continue program even after logout [duplicate]

I have a Python script bgservice.py and I want it to run all the time, because it is part of the web service I'm building. How can I make it run continuously even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1, use jobs to determine)
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is taken. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is not flexible and not portable on other machines. Next, you’ll need to set the permissions of the file to allow execution:
chmod +x test.py
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify the output file like here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list running Python processes
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run python scripts as a background daemon process rather than a forever running script. You will need to modify existing code a bit, but it's plain and simple.
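A rough sketch of what that modification looks like with python-daemon (bgservice.main is a hypothetical entry point, adjust to your script):
import daemon                  # pip install python-daemon

from bgservice import main     # hypothetical entry point of your script

with daemon.DaemonContext():
    # everything in this block runs detached from the terminal, so it
    # survives logout; add pidfile/files_preserve options as needed
    main()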
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you, but in this case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux into the terminal
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back use tmux attach to re-enter the tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
to list all sessions use tmux list-sessions
to attach a running session use tmux attach-session -t <session-name>.
You can nohup it, but I prefer screen.
Here is a simple solution inside python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork(): return
        func(*args, **kwargs)
        os._exit(os.EX_OK)
    return wrapper

@daemon
def my_func(count=10):
    for i in range(0, count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in parent thread
time.sleep(2)
# after 2 seconds the function my_func lives on its own
You can of course replace the content of your bgservice.py file in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command in screen and then detach from the screen session.
Now you can tail logs of your python script by: tail -f <your log file>.log
To kill you script, you can use ps -aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out of the box solution that can be used to daemonize any process. It has another controlling utility supervisorctl that can be used to monitor processes that are being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, verbose documentation makes this process much simpler.
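As an illustration, the only thing you write is a short program section (names and paths below are placeholders) dropped into supervisor's configuration, roughly:
[program:bgservice]
command=/usr/bin/python3 /home/pi/bgservice.py
autostart=true
autorestart=true
stdout_logfile=/var/log/bgservice.log
stderr_logfile=/var/log/bgservice.err.log
After a supervisorctl reread and update, supervisorctl status shows the process, and supervisor restarts it if it dies.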
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work
You can also use Yapdi:
Basic usage:
import yapdi
daemon = yapdi.Daemon()
retcode = daemon.daemonize()
# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')

Unable to execute a python3 script from rc.local

The Python script does not work from rc.local; it never gets executed. My idea is to run the script when the Raspberry Pi boots.
I have tested it with these lines; the log.txt file only appears when I execute the program manually.
f = open("log.txt", "w")
f.write("log is working")
f.close()
Before that, I tried inserting a time.sleep(30), using /usr/bin/python3, changing the head of the script to #!/usr/bin/env python3, changing the user executing the program with -u pi, and a lot of other things I can't even remember.
The final line I had before exit(0) is:
sudo /usr/bin/python3 /home/pi/script.py &
rc.local itself is working, since it runs an echo that I created in the file.
In the end, the problem was that the script needed the network, so I added it to crontab -e.
It still didn't work, so I changed raspi-config, as there is an option to wait for the network at boot, but without success.
Finally, as that solution didn't work either, I added a sleep to the command as follows to wait for the network:
@reboot sleep 40 && /usr/bin/python3 /home/pi/script.py
That finally worked.

Problem running python script at boot raspberry pi

I'm having an insane amount of trouble with my script at startup. I've tried countless ways and spent hours trying to get this to work.
I have a python script that I need to run at startup. However, it needs access to the internet, so I have to wait for the network.
I have tried many approaches from the several tutorials I have followed: with crontab, by making a service with systemd, and with rc.local, but none of these have worked.
The only way that I was able to work was by doing a .desktop Desktop Entry, but that only worked for me while I had an external monitor plugged in, and my raspberry pi will be running without one.
Also, I was able to make my script run using the service method, and now with rc.local, by adding this line:
sudo bash -c '/usr/bin/python3 /home/pi/Projects/capstone/main.py > /home/pi/capstone.log 2>&1' &
However, in the python script that I am trying to run, I have the following code:
os.system("sudo killall servod")
time.sleep(1)
os.system('sudo ~/PiBits/ServoBlaster/user/./servod')
And for some reason, it's not running my script correctly because I get the following error in my logs:
servod: no process found
sudo: /root/PiBits/ServoBlaster/user/./servod: command not found
The first one is expected because I run sudo killall servod when it may or may not be started, but the second one, "command not found", is the real issue; if that bit of code doesn't get executed my program doesn't work.
Anyone out there could help me out with this?
Replace:
os.system('sudo ~/PiBits/ServoBlaster/user/./servod')
with:
os.system('sudo /home/pi/PiBits/ServoBlaster/user/./servod')
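The reason the original line fails from rc.local is that ~ is expanded against the invoking user's home directory, which is /root when the script is started at boot as root, so the command is looked for under /root/PiBits (exactly what the error log shows). Spelling the path out, or building it explicitly, avoids that; a small sketch:
import os

# '~' would expand to /root when run from rc.local, so use the full path
servod = os.path.join('/home/pi', 'PiBits/ServoBlaster/user/servod')
os.system('sudo ' + servod)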
Double-check the path and try to use an absolute path.
Make sure your script has permission 644.
You can also try to copy the script to /etc/init.d and run it as an init.d script. You do need to add the following to your script:
### BEGIN INIT INFO
# Provides: scriptname
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
sudo chmod +x <yourscript.py>
sudo update-rc.d <yourscript.py> defaults
One way you could easily wait for the network inside your python script is to ping a server until successful -- in this case Google.
import os, time

def wait_for_network():
    while os.system("ping -c 1 8.8.8.8") != 0:
        time.sleep(1)
    return
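Call it at the top of the script, before anything that needs the network (main() here is just a stand-in for whatever your script actually does):
if __name__ == "__main__":
    wait_for_network()   # blocks until 8.8.8.8 answers a ping
    main()               # hypothetical entry point for the real work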
As for running the script at startup, I recommend editing /etc/xdg/lxsession/LXDE-pi/autostart and adding your python script in there with the format
@python3 /home/pi/your_script.py

Running a shell script through Fabric which wants to run a background running command

I am a newbie with Fabric and want to run one command in the background. It is written in a shell script, and I have to run that command via Fabric, so let's assume I have a shell script like this:
#!/bin/bash
java &
Consider this is a file named myfile.sh
Now in Fabric I am using this code to run my script as:
put('myfile.sh', '/root/temp/')
sudo('sh /root/temp/myfile.sh')
Now this should start the Java process in the background, but when I log in to the machine and check the jobs using the jobs command, nothing shows up.
Where is the problem? Please shed some light.
Use it with
run('nohup PATH_TO_JMETER/Jmetercommand & sleep 5; exit 0')
Maybe the process exits before you return. When you type java on its own it normally just prints a help message and exits, so try a sleep statement or something that lingers. And if you want to run it in the background, you could also append & to the sudo call.
I use run("screen -d -m sh /root/temp/myfile.sh",pty=False). This starts a new screen session in detached mode, which will continue running after the connection is lost. I use the pty=False option because I found that when connecting to several hosts, the process would not be started in all of them without this option.

Shell script to run task and shutdown after task is done

I want to run a shell script that runs a python program and shuts down after the program is done. Here is what I wrote:
#!/bin/bash
python program
sudo shutdown -h now
This just shuts down the system without waiting for the program to complete. Is there a different command to use that waits for the program to complete?
What you have in your example should actually only shut down once the python command has completed, unless the python program forks or backgrounds itself early.
Another way to run it would be to make the shutdown conditional upon the success of the first command
python command && sudo shutdown -h now
Of course this still will not help you if the python program does anything like forking or daemonizing. Simply try running the python script alone and take note if control returns immediately to the console or not.
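If you would rather drive the whole thing from python, the same conditional logic can be written with subprocess (program.py and the passwordless-sudo setup are assumptions):
import subprocess, sys

# run the program, then request a shutdown only if it exited successfully,
# mirroring the shell form: python command && sudo shutdown -h now
result = subprocess.run([sys.executable, 'program.py'])
if result.returncode == 0:
    subprocess.run(['sudo', 'shutdown', '-h', 'now'])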
You could run the command halt to stop your system:
#!/bin/sh
python program
sudo halt
The python program runs first, and halt runs after its completion (you might test the exit code of your python program). If it does not behave as expected, try to understand why. You could add a logger command before the halt to write something to the system logs.
Alternatively, you can use command substitution like this:
$(command to run your program)
The script waits until the wrapped command finishes before moving on to the next one!
#!/bin/sh
$(python program.py)
sudo shutdown -P 0
