Problem running Python script at boot on Raspberry Pi

I'm having an insane amount of trouble getting my script to run at startup; I've tried countless approaches and spent hours on this.
I have a Python script that I need to run at startup. However, it needs access to the internet, so it has to wait for the network.
I have followed several tutorials: crontab, a service with systemd, rc.local. None of these have worked.
The only way I got it to work was with a .desktop Desktop Entry, but that only worked while an external monitor was plugged in, and my Raspberry Pi will be running without one.
I was also able to make my script run with the service method, and now with rc.local,
by adding this line:
sudo bash -c '/usr/bin/python3 /home/pi/Projects/capstone/main.py > /home/pi/capstone.log 2>&1' &
However, the Python script I am trying to run contains the following code:
os.system("sudo killall servod")
time.sleep(1)
os.system('sudo ~/PiBits/ServoBlaster/user/./servod')
For some reason it doesn't run correctly, and I get the following errors in my logs:
servod: no process found
sudo: /root/PiBits/ServoBlaster/user/./servod: command not found
The first one is expected, because I run sudo killall servod when servod may or may not be running. The second one, "command not found", is the real issue: if that line doesn't execute, my program doesn't work.
Can anyone help me out with this?

Replace:
os.system('sudo ~/PiBits/ServoBlaster/user/./servod')
with:
os.system('sudo /home/pi/PiBits/ServoBlaster/user/servod')
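The underlying problem: at boot the script runs as root, so ~ expands to /root rather than /home/pi, which is exactly what the error message shows. A minimal sketch of the same call with the path spelled out, using subprocess instead of os.system so failures show up in the boot log (the error handling is an addition for illustration, not part of the original answer):

import subprocess

# Absolute path, so expansion of "~" (which depends on HOME) never matters.
SERVOD = "/home/pi/PiBits/ServoBlaster/user/servod"

result = subprocess.run(["sudo", SERVOD], capture_output=True, text=True)
if result.returncode != 0:
    # Surface failures in the boot log instead of failing silently.
    print("servod failed to start:", result.stderr)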

Double-check the path, and use an absolute path.
Make sure your script has permission 644.
You can also copy the script to /etc/init.d and run it as an init.d script. You then need to add the following header to your script:
### BEGIN INIT INFO
# Provides: scriptname
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
sudo chmod +x <yourscript.py>
sudo update-rc.d <yourscript.py> defaults
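For reference, a minimal sketch of what such an init.d script could look like in Python; the handler bodies are placeholders, and adding $network to Required-Start is an assumption that helps when the script needs the network up first:

#!/usr/bin/env python3
### BEGIN INIT INFO
# Provides:          scriptname
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO
import sys

def start():
    pass  # placeholder: launch your program here

def stop():
    pass  # placeholder: clean shutdown here

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "start"
    {"start": start, "stop": stop}.get(action, start)()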

One way you could easily wait for the network inside your Python script is to ping a server until it responds -- in this case Google's public DNS server:
import os
import time

def wait_for_network():
    # Ping once per second until a reply comes back.
    while os.system("ping -c 1 8.8.8.8") != 0:
        time.sleep(1)
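Then call it at the top of the script, before anything touches the network; main() here is a hypothetical entry point:

if __name__ == "__main__":
    wait_for_network()  # block until 8.8.8.8 answers a ping
    main()              # hypothetical: the rest of your program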
As for running the script at startup, I recommend editing /etc/xdg/lxsession/LXDE-pi/autostart and adding your Python script there in the format
@python3 /home/pi/your_script.py

Related

Unable to execute a python3 script from rc.local

The Python script doesn't work from rc.local; it simply never gets executed. My goal is to run the script when the Raspberry Pi boots.
I have tested it with this snippet; the log.txt file only appears when I execute the program manually:
f = open("log.txt", "w")
f.write("log is working")
f.close()
Before that, I had tried inserting a time.sleep(30), using /usr/bin/python3 explicitly, changing the head of the script to #!/usr/bin/env python3, running the program as -u pi, and a lot of other things I can't even remember.
The final line I had before exit(0) was:
sudo /usr/bin/python3 /home/pi/script.py &
rc.local itself works, since an echo I created in the file does run.
Eventually I realized the problem was that the script needed the network, so I added it to crontab -e.
It still didn't work, so I changed raspi-config, as there is an option to wait for the network at boot, but without success.
Finally, as that solution didn't work either, I added a sleep to the command to wait for the network:
@reboot sleep 40 && /usr/bin/python3 /home/pi/script.py
That finally worked.
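A fixed sleep 40 either wastes time or may still be too short on a slow link. An alternative sketch, not part of the original fix, that polls from inside the script until the network is actually reachable (TCP to 8.8.8.8 port 53 is a common reachability probe):

import socket
import time

def wait_for_network():
    # Retry a cheap TCP connection to a public DNS server until it succeeds.
    while True:
        try:
            socket.create_connection(("8.8.8.8", 53), timeout=3).close()
            return
        except OSError:
            time.sleep(1)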

Auto run commands on Raspberry Pi

I wanted to know the best approach to solving this issue. I would like to run some Python scripts and a service when the Raspberry Pi loads into the desktop. Here are my commands:
cd /var/www/html/
python servocontrol.py
cd /var/www/html/Misc
python temp1.py
python seven_segment.py
sudo /etc/init.d/livestream.sh start
My initial method, which I read in most posts, was to add them to rc.local with:
sudo nano /etc/rc.local
and paste the exact commands in as follows:
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sleep 15
cd /var/www/html/
python servocontrol.py
cd /var/www/html/Misc
python temp1.py
python seven_segment.py
sudo /etc/init.d/livestream.sh start
exit 0
Sadly it didn't work. Could anyone kindly point out what I am missing, or any additional steps needed to make this work? I am also open to other methods!
Thanks
You can use crontab on the Raspberry Pi. (The crontab, short for "cron table", is a list of commands scheduled to run at regular intervals on your system; the crontab command opens it for editing and lets you add, remove, or modify scheduled tasks.)
You can also create a service for your script; a quick search will turn up instructions.
If you want to execute a script at server start, you can use @reboot in cron.
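One likely reason the rc.local attempt failed is that python servocontrol.py never returns, so the lines after it never run and rc.local never exits. A sketch of a single launcher that @reboot (or rc.local) could start instead, reusing the paths from the question; Popen leaves each script running in the background rather than waiting for it:

#!/usr/bin/env python3
# Hypothetical launcher: start each long-running script without blocking.
# Assumes the launcher itself runs as root (rc.local or root's crontab).
import subprocess

subprocess.Popen(["python", "servocontrol.py"], cwd="/var/www/html")
subprocess.Popen(["python", "temp1.py"], cwd="/var/www/html/Misc")
subprocess.Popen(["python", "seven_segment.py"], cwd="/var/www/html/Misc")

# The init script's "start" action should return promptly, so wait for it.
subprocess.run(["/etc/init.d/livestream.sh", "start"])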

Stop Raspbian/Debian from within Python

When I am away from my mountain, I monitor my photovoltaic system with a Raspberry Pi and a small Python script that reads data and sends it to my web page every hour. The setup is powered by an electro-mechanical switch that turns everything on for 15 minutes. During that period the script may run twice, which I would like to prevent, as the result is messy (lsdx.eu/GPV_ZSDX).
I want to add some lines at the end of the script to stop it once it has run, and possibly stop Raspbian as well for a clean exit before the power goes off.
- "exit" only exits a loop; the script keeps running
- of course Ctrl+C won't do, as I am away
I could not find any tip in the highly technical answers on Stack Overflow or in the Raspbian help either.
Any tip?
Thanks
The exit() command should exit the program (break is the statement that will exit a loop). What behavior are you seeing?
To shut down, try:
python3:
from subprocess import run
run('poweroff', shell=True)
python2:
from subprocess import call
call('poweroff')
Note: poweroff may be called shutdown on your system and might require additional command line switches to force a shutdown (if needed).
For your case, structure the python script as a function using the following construct:
def read_data():
    data_reading_voodo
    return message_to_be_sent

def send_message(msg):
    perform_message_sending_voodo
    log_message_sending_voodoo_success_or_failure
    return None

if __name__ == "__main__":
    msg = read_data()
    send_message(msg)
Structured like this, the python script should exit after running.
Next create a shell script like the following (assuming bash and python, but modify according to your usage):
#!/bin/bash
python /path/to/your/voodo/script && sudo shutdown -h 6
The sudo shutdown -h 6 shuts the Raspberry Pi down 6 minutes after the script runs. That gives you some time after startup to remove the script if you ever want to stop the run-restart cycle.
Make the shell script executable: chmod 755 run_py_script_then_set_shutdown (see man chmod for details).
Now create a cronjob to run run_py_script_then_set_shutdown on startup.
crontab -e
Then add the following line to your crontab:
@reboot /path/to/your/shell/script
Save, reboot the pi, and you're done.
Every time the rpi starts up, the python script should run and exit, and then the rpi will shut down 6 minutes later.
You can (and should) adjust the 6 minutes for your purposes.
Thanks for all these answers, which are helping me learn Python and Debian.
I finally opted for a very simple solution at the end of the script:
import os
os.system('sudo shutdown now')
But I will keep these other solutions in mind.
Thanks again,
Lionel
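For the other half of the problem, preventing the script from running twice inside the 15-minute window, a lock file is a simple guard. A sketch using fcntl (Linux-only; the lock path is an arbitrary choice):

import fcntl
import sys

# Any writable path works; only the lock matters, not the file's content.
# Keep the file object alive for the lifetime of the script.
lock_file = open("/tmp/gpv_upload.lock", "w")
try:
    # Non-blocking exclusive lock: raises immediately if another copy holds it.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit("another instance is already running")

# ... read the photovoltaic data and send it to the web page here ...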

Recover python process after ssh broken pipe [duplicate]

I'm working on a Linux machine through SSH (PuTTY). I need to leave a process running overnight, so I thought I could do that by starting it in the background (with an ampersand at the end of the command) and redirecting stdout to a file.
To my surprise, that doesn't work. As soon as I close the PuTTY window, the process is stopped.
How can I prevent that from happening?
Check out the "nohup" program.
I would recommend using GNU Screen. It allows you to disconnect from the server while all of your processes continue to run. I don't know how I lived without it before I knew it existed.
When the session is closed the process receives the SIGHUP signal which it is apparently not catching. You can use the nohup command when launching the process or the bash built-in command disown -h after starting the process to prevent this from happening:
> help disown
disown: disown [-h] [-ar] [jobspec ...]
By default, removes each JOBSPEC argument from the table of active jobs.
If the -h option is given, the job is not removed from the table, but is
marked so that SIGHUP is not sent to the job if the shell receives a
SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all
jobs from the job table; the -r option means to remove only running jobs.
daemonize? nohup? SCREEN? (tmux ftw, screen is junk ;-)
Just do what every other app has done since the beginning -- double fork.
# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid: 1
# jobs
# disown
bash: disown: current: no such job
Bang! Done :-) I've used this countless times, on all types of apps and many old machines. You can combine it with redirects and whatnot to open a private channel between you and the process. (A Python version of the double fork is sketched after the coproc example below.)
Create as coproc.sh:
#!/bin/bash
IFS=
run_in_coproc () {
echo "coproc[$1] -> main"
read -r; echo $REPLY
}
# dynamic-coprocess-generator. nice.
_coproc () {
local i o e n=${1//[^A-Za-z0-9_]}; shift
exec {i}<> <(:) {o}<> >(:) {e}<> >(:)
. /dev/stdin <<COPROC "${#}"
(("\$#")&) <&$i >&$o 2>&$e
$n=( $o $i $e )
COPROC
}
# pi-rads-of-awesome?
for x in {0..5}; do
_coproc COPROC$x run_in_coproc $x
declare -p COPROC$x
done
for x in COPROC{0..5}; do
. /dev/stdin <<RUN
read -r -u \${$x[0]}; echo \$REPLY
echo "$x <- main" >&\${$x[1]}
read -r -u \${$x[0]}; echo \$REPLY
RUN
done
and then
# ./coproc.sh
declare -a COPROC0='([0]="21" [1]="16" [2]="23")'
declare -a COPROC1='([0]="24" [1]="19" [2]="26")'
declare -a COPROC2='([0]="27" [1]="22" [2]="29")'
declare -a COPROC3='([0]="30" [1]="25" [2]="32")'
declare -a COPROC4='([0]="33" [1]="28" [2]="35")'
declare -a COPROC5='([0]="36" [1]="31" [2]="38")'
coproc[0] -> main
COPROC0 <- main
coproc[1] -> main
COPROC1 <- main
coproc[2] -> main
COPROC2 <- main
coproc[3] -> main
COPROC3 <- main
coproc[4] -> main
COPROC4 <- main
coproc[5] -> main
COPROC5 <- main
And there you go, spawn whatever. The <(:) opens an anonymous pipe via process substitution; the substituted process dies, but the pipe sticks around because you have a handle to it. I usually do a sleep 1 instead of : because it's slightly racy and I'd get a "file busy" error -- that never happens if a real command is run (e.g., command true).
"heredoc sourcing":
. /dev/stdin <<EOF
[...]
EOF
This works on every single shell I've ever tried, including busybox/etc (initramfs). I'd never seen it done before; I independently discovered it while prodding -- who knew source could accept args? But it often serves as a much more manageable form of eval, if there is such a thing.
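For comparison, the double-fork trick from the top of this answer sketched in Python (a minimal version; a full daemon would also redirect the standard streams, reset the umask, and chdir to /):

import os
import sys

def daemonize():
    if os.fork() > 0:   # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()         # new session: detach from the controlling terminal
    if os.fork() > 0:   # second fork: the session leader exits, so the
        sys.exit(0)     # grandchild can never re-acquire a terminal
    # the grandchild is now reparented to init (PPid 1), immune to the shell's SIGHUP

daemonize()
# long-running work goes here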
nohup blah &
Substitute your process name for blah!
Personally, I like the 'batch' command.
$ batch
> mycommand -x arg1 -y arg2 -z arg3
> ^D
This stuffs it into the background and then mails the results to you. It's part of cron.
As others have noted, to run a process in the background so that you can disconnect from your SSH session, you need to have the background process properly disassociate itself from its controlling terminal - which is the pseudo-tty that the SSH session uses.
You can find information about daemonizing processes in books such as Stevens' "Advanced Network Programming, Vol 1, 3rd Edn" or Rochkind's "Advanced Unix Programming".
I recently (in the last couple of years) had to deal with a recalcitrant program that did not daemonize itself properly. I ended up dealing with that by creating a generic daemonizing program - similar to nohup but with more controls available.
Usage: daemonize [-abchptxV][-d dir][-e err][-i in][-o out][-s sigs][-k fds][-m umask] -- command [args...]
-V print version and exit
-a output files in append mode (O_APPEND)
-b both output and error go to output file
-c create output files (O_CREAT)
-d dir change to given directory
-e file error file (standard error - /dev/null)
-h print help and exit
-i file input file (standard input - /dev/null)
-k fd-list keep file descriptors listed open
-m umask set umask (octal)
-o file output file (standard output - /dev/null)
-s sig-list ignore signal numbers
-t truncate output files (O_TRUNC)
-p print daemon PID on original stdout
-x output files must be new (O_EXCL)
The double-dash is optional on systems not using the GNU getopt() function; it is necessary (or you have to specify POSIXLY_CORRECT in the environment) on Linux etc. Since double-dash works everywhere, it is best to use it.
You can still contact me (firstname dot lastname at gmail dot com) if you want the source for daemonize.
However, the code is now (finally) available on GitHub in my SOQ (Stack Overflow Questions) repository, as file daemonize-1.10.tgz in the packages sub-directory.
For most processes you can pseudo-daemonize using this old Linux command-line trick:
# ((mycommand &)&)
For example:
# ((sleep 30 &)&)
# exit
Then start a new terminal window and:
# ps aux | grep sleep
Will show that sleep 30 is still running.
What you have done is start the process as a child of a child, and when you exit, the SIGHUP that would normally make the process exit doesn't cascade down to the grand-child, leaving it as an orphan process, still running.
I prefer this "set it and forget it" approach: no need to deal with nohup, screen, tmux, I/O redirection, or any of that stuff.
On a Debian-based system (on the remote machine)
Install:
sudo apt-get install tmux
Usage:
tmux
run commands you want
To rename session:
Ctrl+B then $
set Name
To exit session:
Ctrl+B then D
(this leaves the tmux session). Then, you can log out of SSH.
When you need to come back/check on it again, start up SSH, and enter
tmux attach -t session_name
It will take you back to your tmux session.
If you use screen to run a process as root, beware of the possibility of privilege elevation attacks. If your own account gets compromised somehow, there will be a direct way to take over the entire server.
If this process needs to run regularly and you have sufficient access on the server, a better option would be to use cron to run the job. You could also use an init script to start your process in the background, so it can terminate as soon as it's done.
nohup is very good if you want to log your details to a file, but once the job is in the background you cannot give it a password if your script asks for one. In that case you should try screen. It's a utility you can install on your Linux distribution with your package manager (for example, on CentOS: yum install screen). Then access your server via PuTTY or other software and type screen in your shell. It will open screen[0] in your PuTTY session. Do your work; you can create more screens (screen[1], screen[2], etc.) in the same session.
Basic commands you need to know:
To start screen
screen
To create next screen
ctrl+a+c
To move to next screen you created
ctrl+a+n
To detach
ctrl+a+d
You can now close PuTTY while your work continues. The next time you log in via PuTTY, type
screen -r
to reconnect to your screen, and you can see your process still running. To exit the screen, type exit.
For more details see man screen.
nohup keeps a child process from being killed when the parent process is killed, for example when you log out. Even better, use:
nohup /bin/sh -c "echo \$\$ > $pidfile; exec $FOO_BIN $FOO_CONFIG " > /dev/null
nohup makes the process you start immune to the SIGHUP that is sent to your SSH session's child processes when you log out. The command above also stores the pid of the application in a pid file, so that you can kill it correctly later, and it lets the process keep running after you have logged out.
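A rough Python equivalent of that pattern, sketched with subprocess; the command and file locations are placeholders. start_new_session puts the child in its own session, so the SIGHUP sent at logout never reaches it:

import subprocess

# Hypothetical command and file locations.
with open("/tmp/foo.log", "ab") as log:
    proc = subprocess.Popen(
        ["/usr/local/bin/foo", "--config", "/etc/foo.conf"],
        stdin=subprocess.DEVNULL,
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=True,  # detach: no controlling terminal, no SIGHUP
    )

# Record the pid so the process can be stopped cleanly later.
with open("/tmp/foo.pid", "w") as pidfile:
    pidfile.write(str(proc.pid))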
Use screen. It is very simple to use and works like vnc for terminals.
http://www.bangmoney.org/presentations/screen.html
There's also the daemon command from the open-source libslack package.
daemon is quite configurable and takes care of all the tedious daemon chores such as automatic restart, logging, and pidfile handling.
If you're willing to run X applications as well - use xpra together with "screen".
I would also go for the screen program (I know someone else already answered screen, but this is a complement).
Not only can &, Ctrl+Z + bg + disown, nohup, etc. give you the nasty surprise that the job is still killed when you log off (I don't know why, but it happened to me, and it stopped bothering me because I switched to screen; I guess anthonyrisinger's double-forking solution would avoid that), screen also has a major advantage over plain backgrounding:
screen backgrounds your process without losing interactive control of it.
And by the way, this is a question I would never have asked in the first place :) ... I have used screen since my very beginnings on unix ... I (almost) never work in a unix/linux shell without starting screen first ... and I should stop now, or I'll launch into an endless presentation of what screen can do for you ... look it up yourself, it is worth it ;)
Append this string to your command: >&- 2>&- <&- &. Here >&- means close stdout, 2>&- means close stderr, <&- means close stdin, and & means run in the background. This also works for programmatically starting a job via ssh:
$ ssh myhost 'sleep 30 >&- 2>&- <&- &'
# ssh returns right away, and your sleep job is running remotely
$
The accepted answer suggests using nohup. I would rather suggest using pm2. Using pm2 over nohup has many advantages, like keeping the application alive, maintaining log files for the application, and lots of other features. For more detail check this out.
To install pm2 you first need to install npm. For Debian-based systems:
sudo apt-get install npm
and for Red Hat:
sudo yum install npm
Or you can follow these instructions.
After installing npm, use it to install pm2:
npm install pm2@latest -g
Once that's done you can start your application with:
$ pm2 start app.js # Start, Daemonize and auto-restart application (Node)
$ pm2 start app.py # Start, Daemonize and auto-restart application (Python)
For process monitoring use following commands:
$ pm2 list # List all processes started with PM2
$ pm2 monit # Display memory and cpu usage of each app
$ pm2 show [app-name] # Show all information about the application
Manage processes using either app name or process id or manage all processes together:
$ pm2 stop <app_name|id|'all'|json_conf>
$ pm2 restart <app_name|id|'all'|json_conf>
$ pm2 delete <app_name|id|'all'|json_conf>
Log files can be found in:
$HOME/.pm2/logs # contains all application logs
Binary executables can also be run with pm2. You have to make a change in the JSON process file: change "exec_interpreter" : "node" to "exec_interpreter" : "none" (see the attributes section).
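For reference, a hedged sketch of what such a process file might look like; the field names follow the older pm2 JSON schema this answer refers to, so check your pm2 version's documentation:

{
  "name": "hello",
  "script": "./hello",
  "exec_interpreter": "none"
}

For example, take this small C program: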
#include <stdio.h>
#include <unistd.h>  /* sleep(); POSIX, not part of the standard C library */

int main(void)
{
    printf("Hello World\n");
    sleep(100);
    printf("Hello World\n");
    return 0;
}
Compile the above code with:
gcc -o hello hello.c
and run it with pm2 in the background:
pm2 start ./hello
I used the screen command. This link has details on how to do it:
https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/#starting
On systemd/Linux, systemd-run is a nice tool to launch session-independent processes.

Daemon with python 3

I am writing a script in Python 3 for Ubuntu that should be executed every X minutes and should start automatically after login. I therefore want to create a daemon (is that the right solution?), but I haven't found any modules or examples for Python 3, just for Python 2.X. Do you know of something I can work with?
Thank you,
I would simply make the script, put it somewhere, and then add a line to the crontab of the user who should run the script. This may be root.
sudo crontab -e
to start the crontab editor, and add the line:
*/X * * * * /usr/bin/python /path/to/the/script
This way the script will be executed every X minutes. No need to daemonize, and no need to build your own timer into the script.
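The script itself can then be an ordinary run-once program, since cron supplies the timer. A minimal sketch (the log path and do_work body are placeholders):

#!/usr/bin/env python3
# Does one round of work per cron invocation, then exits.
import logging

logging.basicConfig(filename="/tmp/myscript.log", level=logging.INFO)

def do_work():
    logging.info("doing one unit of work")  # placeholder for the real task

if __name__ == "__main__":
    do_work()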
Suppose the Python script is named monitor. Use the following steps:
Copy the monitor script to /usr/local/bin/ (not strictly necessary).
Also add a copy in /etc/init.d/.
Then execute the following command to make it executable:
sudo chmod a+x /etc/init.d/monitor
Finally, run the update-rc.d command:
sudo update-rc.d monitor defaults 98
This will run your monitor script at startup.
