I know this is an exact copy of this question, but I've been trying different solutions for a while and didn't come up with anything.
I have this simple script that uses PRAW to find posts on Reddit. It takes a while, so I need it to stay alive when I log out of the shell as well.
I tried setting it up as a start-up script and using nohup to run it in the background, but neither worked. I followed the quickstart and can get the hello world app to run, but all those examples are for web applications, and all I want is to start a process on my VM and keep it running while I'm not connected, without using .yaml configuration files and such. Can somebody please point me in the right direction?
Well, in the end, using nohup was the answer. I'm new to the GNU environment and just assumed it didn't work when I first tried. My program was actually exiting with an error, but I hadn't checked the nohup.out file, so I was unaware of it.
Anyway, here is a detailed guide for future reference (using Debian Stretch):
Make your script executable:
chmod +x myscript.py
Run the script with nohup to keep it running in the background: nohup makes the process ignore the hangup (SIGHUP) signal sent when you log out, and the trailing & runs it in the background. I've added a shebang line to my Python script, so there's no need to call python here:
nohup /path/to/script/myscript.py &
Logout from the shell if you want
logout
Done! Now your script is up and running. You can log back in and verify that the process is still alive by checking the output of this command:
ps -ef | grep myscript.py
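For reference, a minimal script with a shebang line looks something like this (the loop body is just a placeholder for the real work; a real script would run until its job is done):

```python
#!/usr/bin/env python
"""Minimal example of a script with a shebang line (the first line above),
so it can be executed directly once it is marked executable with chmod +x."""
import time

def run(iterations=3):
    # Stand-in for the real work loop; a long-running script would
    # poll, process, and log here instead.
    results = []
    for i in range(iterations):
        results.append("tick %d" % i)
        time.sleep(0.01)
    return results

if __name__ == "__main__":
    for line in run():
        print(line)
```

Anything the script prints ends up in nohup.out, which is worth checking if the process dies unexpectedly.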
The Python script script.py is located in /usr/bin/monitor/scripts, and its main function is to use subprocess.check_call() and subprocess.check_output() to call various administrative tools (both C programs located in /usr/bin/monitor/ created specifically for the machine, and Linux executables in /sbin like fdisk -l and df -h). It was written to run as root and print the output from these programs in a useful way on the command line.
My project is to make the output from this script viewable through a webpage. I'm on a Beaglebone Black using Apache2, which executes files as user www-data from its DocumentRoot, /var/www/html/. The webpage is set up like this:
index.html uses an iframe to display the output of a python CGI script which is also located in /var/www/html/
script.cgi attempts to call script.py and display its output using the subprocess module
The problem is that script.py itself is called just fine, but each of the calls within script.py fails and returns its error message, presumably because those tools need to run as root while Apache runs them as user www-data.
To try to get around this, I created a new group called bbb, added www-data to the group, then ran chown :bbb script.py to change its group to bbb. Unfortunately it still caused the same problems, so I tried changing permissions from 755 to 775, which didn't work either. I tried running chown :bbb * on the files/programs that script.py uses, also to no avail. Also, some of the executables script.py uses are in /sbin, and I am wary of just giving blanket access to directories like that.
Since my attempts at fixing ownership felt a bit like 1000-monkey coding, I created a new version of the script that builds a list of HTML output: after each print statement in the original code, it appends the same line of text, with HTML tags, to the output list. At the end of the script (in whatami) it creates and writes a .txt file in /var/www/html/ and calls os.chmod("/var/www/html/data.txt", 0o755) to give Apache access. The CGI then calls subprocess.check_call() on script.py, then opens, reads, and prints each line with HTML formatting to the iframe in the webpage.
This attempt at least produced accurate output, but the file only updates when the script is run as root in a terminal, rather than re-running script.py every time the page is refreshed, which rather undermines the point of the webpage. I assume this means the subprocess check_call in the CGI script is not working correctly, but for some reason the subprocess call itself doesn't throw any errors or indications of failure; the text file just comes back un-updated. Even with the subprocess call in a try block followed by print('call successful'), I get the success message and then the stale text file.
I'm a bit at a loss trying to figure out how to force the script to run and do its thing in the background so that the file updates, without just giving Apache root access. I've read a few things about either wrapping the Python script in a shell that causes it to run as root, or changing sudoers to give www-data sudo privileges, but I do not want to introduce security issues or turn what was intended to be a simple script printing output to a webpage into something more convoluted than it already is. Any advice or direction would be greatly appreciated.
The best way, IMO, would be to "decouple" execution by creating a localhost-only service, which you "call" from the Apache process by connecting to a local socket.
E.g. if using systemd:
Create: /etc/systemd/system/my-svc.socket
[Unit]
Description=My svc socket
[Socket]
ListenStream=127.0.0.1:1234
Accept=yes
[Install]
WantedBy=sockets.target
Create: /etc/systemd/system/my-svc@.service (a template unit, marked by the @, which is required because the socket uses Accept=yes)
[Unit]
Description=My Service
Requires=my-svc.socket
[Service]
Type=simple
ExecStart=/opt/my-service/script.sh %i
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
Create /opt/my-service/script.sh:
#!/bin/sh
echo "id=$(id)"
echo "args=$*"
Finish setup with:
$ sudo chmod +x /opt/my-service/script.sh
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now my-svc.socket
Try it out:
$ nc 127.0.0.1 1234
id=uid=0(root) gid=0(root) groups=0(root)
args=55-127.0.0.1:1234-127.0.0.1:32938
Then, from your CGI, you'll need to do the equivalent of the nc command above (just a TCP connection).
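For example, the CGI side could open that connection with Python's socket module. A minimal sketch, assuming the service listens on 127.0.0.1:1234 as configured above (call_service is an illustrative name, not part of any API):

```python
"""Sketch of the CGI side of the socket-activated service above:
open a TCP connection, read everything the service writes, return it.
Host and port must match the ListenStream= value in my-svc.socket."""
import socket

def call_service(host="127.0.0.1", port=1234, timeout=5.0):
    # systemd accepts the connection, spawns the service (as root),
    # and the service's stdout comes back over this socket.
    with socket.create_connection((host, port), timeout=timeout) as conn:
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:  # service exited and closed the connection
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# Usage from the CGI, roughly:
#   output = call_service()
#   ...print it with the appropriate CGI headers and HTML formatting...
```

This keeps the privileged work inside the systemd-managed service, so Apache never needs elevated rights of its own.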
--jjo
I am currently using Linux. I have a Python script which I want to run as a background service, such that the script starts running when I boot my machine.
Currently I am using Python 2.7 and the command python myscript.py to run the script.
Can anyone give me an idea of how to do this?
Thank you.
It depends on where in the startup process you want your script to run.
If you want your script to start during the init process, then you can incorporate it into the init scripts in /etc/init.d/. The details will depend on which init system your system is running: you might be on a System V init (https://en.wikipedia.org/wiki/Init) or on systemd (https://wiki.debian.org/systemd), or possibly some other one.
If you don't need your script to run at the system level, then you could kick it off when you log in. To do that, you'd put it in ~/.profile if you log in using a terminal. Or, if you use a desktop environment, then you're going to be doing something in ~/.local/XSession (if I recall correctly). Different desktop environments have different ways to specify what happens when a user logs in.
Hope this helps! Maybe clarify your needs if you want more detail.
You can create an Upstart init script in the /etc/init/ directory.
Example:
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 5
respawn
script
exec /usr/bin/python /path/to/script.py
end script
Save the file with a .conf extension.
I'm running Flask on an AWS instance. My goal is to be able to have Flask running on its own, without me having to ssh into it and run
python app.py
Is there a way to have this command run every time the AWS instance itself reboots?
Yes, there is a way to start the Python script on reboot.
On Linux you will find the /etc/init.d directory. You will need to write your own init.d script and put it in /etc/init.d, and that script will in turn start your Python script. It's not magic, though: init.d scripts follow a fixed format. A script contains some basic tasks like start(), stop(), reload(), etc. Just add the code that you want to run at startup in the start() block.
Reference link: https://bash.cyberciti.biz/guide//etc/init.d
Try this:
(crontab -l 2>/dev/null; echo '@reboot python /path/to/app.py') | crontab -
I created a daemon process with this library: liblinktosite
I connect through SSH and start the process with python myDaemon.py start.
I use a loop within the daemon method to do my tasks, but as soon as I log out, the daemon stops (dies).
Does this happen because I save the PID file under my user instead of in the root folder?
Does anyone have an idea? I can post code, but not right now (+3h).
Add a shebang line to your Python script, then make it executable with:
chmod +x test.py
Use nohup ("no hangup") to keep the program running in the background even after you close your terminal:
nohup /path/to/test.py &
Do not forget the & at the end to put it in the background.
To find the process again, run in a terminal:
ps ax | grep test.py
Another way would be to actually make it an Upstart script.
I have 3-4 Python scripts which I want to run continuously on a remote server (which I am accessing through an SSH login). The scripts might crash due to exceptions (handling all the exceptions is not practical right now), and in such cases I want the script restarted immediately.
What I have done so far:
I have written a .conf file in /etc/init:
chdir /home/user/Desktop/myFolder
exec python myFile.py
respawn
This seemed to work fine for about 4 hours, and then it stopped working and I could not start the .conf job again.
Please suggest changes to this; I am also open to a new approach.
The easiest way to do it is to run an infinite bash loop inside screen. It's also the worst way to do it:
screen -S sessionName bash -c 'while true; do python myFile.py; done'
You can also use http://supervisord.org/, or run it as a daemon by writing an init.d script for it: http://www.linux.com/learn/tutorials/442412-managing-linux-daemons-with-init-scripts
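If you'd rather not depend on screen or an init system at all, the same respawn idea can be sketched in Python itself. A minimal watchdog loop, with myFile.py, the delay, and the restart cap all as placeholders:

```python
"""Minimal respawn loop: re-run a child script whenever it exits
with a non-zero status. Script path and parameters are placeholders."""
import subprocess
import sys
import time

def supervise(cmd, delay=2.0, max_restarts=None):
    """Run cmd repeatedly until it exits cleanly (status 0).

    Returns the number of restarts performed. max_restarts=None
    means keep restarting forever.
    """
    restarts = 0
    while True:
        code = subprocess.call(cmd)
        if code == 0:
            return restarts  # clean exit: stop supervising
        restarts += 1
        if max_restarts is not None and restarts >= max_restarts:
            return restarts  # give up after too many crashes
        time.sleep(delay)  # back off briefly before respawning

# Usage, roughly (blocks until the child exits cleanly):
#   supervise([sys.executable, "myFile.py"])
```

You would still want to run the watchdog itself under nohup or an init system; this only covers the crash-restart part, not surviving logout.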
If your script is running on an Ubuntu machine, you have the very convenient Upstart: http://upstart.ubuntu.com/
Upstart does a great job running services on boot and respawning a process that died.