I am new to python and have pretty basic knowledge of Linux.
I need to start a script at boot on an Ubuntu 14.04.3 server.
The only thing is, the script is a monitoring tool and should be running all the time, so I can't just make a periodic cron call.
I found this at first: running a python script with cron
I have tried to add this to crontab:
@reboot python /path/to/script.py &
And also this:
@reboot /path/to/script.py &
but it doesn't seem to work.
I have also seen this: How to make a python script run like a service or daemon in linux
The main answer suggests cron or a change in the Python code.
So my question is: is there another way to run my script at boot and let it run "forever" without changing the code?
I assure you that if I don't want to change the code, it's not out of laziness; I will change it if that's the only option.
Other information (I don't know if it's necessary): I am running Windows and have access to the server via PuTTY.
The version of Python is 2.7
UPDATE
Here is the cron log:
Nov 27 15:57:03 trustyovh cron[760]: (CRON) INFO (pidfile fd = 3)
Nov 27 15:57:03 trustyovh cron[798]: (CRON) STARTUP (fork ok)
Nov 27 15:57:03 trustyovh cron[798]: (CRON) INFO (Running @reboot jobs)
Nov 27 15:57:03 trustyovh CRON[807]: (administrateur) CMD (/home/administrateur/scuMonitor/main.py &)
Nov 27 15:57:03 trustyovh CRON[800]: (CRON) info (No MTA installed, discarding output)
Nov 27 16:09:01 trustyovh CRON[1792]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime))
Here is the crontab:
@reboot /home/administrateur/scuMonitor/main.py &
UPDATE 2
Well, it was actually working with the cron set to @reboot, but my script didn't write its log where I expected it to (I didn't understand how paths work on Linux).
Thanks for all the answers everyone !
I would suggest the same thing I have written here.
Basically, you can run your Python code as a service using systemd; all you have to do is write a <your-app-name>.service file, like the one below:
[Unit]
Description=Some kind of description
[Service]
Type=simple
ExecStart=<path to your bin with args if needed>
Then save it under /etc/systemd/system/. To check that everything is fine, run
sudo systemctl start <your-app-name>
and then
sudo systemctl status <your-app-name>
Finally run
sudo systemctl enable <your-app-name>
and the service will be executed at each system boot.
Taken from Run Python script at startup in Ubuntu.
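As a concrete illustration for the script in this question (on a release that uses systemd), a filled-in unit might look like the sketch below; the unit description, the Restart= line and the interpreter path are assumptions, while the script path comes from the crontab shown above.
[Unit]
Description=scuMonitor monitoring script
[Service]
Type=simple
ExecStart=/usr/bin/python /home/administrateur/scuMonitor/main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target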
You can also start a service on Ubuntu 14.04 by adding an Upstart job to the /etc/init directory.
Put this in /etc/init/mystartupscript.conf:
start on runlevel [2345]
stop on runlevel [!2345]
exec /path/to/script.py
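With that job in place, Upstart should pick it up at the next boot. You can also manage it by hand with the standard Upstart commands (the job name comes from the file name above):
sudo start mystartupscript
sudo status mystartupscript
sudo stop mystartupscript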
As far as I know, the only way to keep it checking for whatever you want it to check is to implement a loop in the code (i.e. daemonize it), as sketched below.
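For illustration only, "a loop in the code" can be as small as this sketch; check_once is a hypothetical placeholder for whatever the monitor actually verifies.
import time

def check_once():
    # placeholder for the real monitoring logic
    pass

if __name__ == '__main__':
    while True:
        check_once()
        time.sleep(60)  # poll once a minute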
Related
I need help with bash and systemctl service...
My problem is that my company requires me to automate booting all the virtual machines running on our KVM servers (CentOS); we have 5 physical hosts in total. However, after I recently updated my VM versions, I came across an issue on one of them: my service vmx.service, located under /etc/systemd/system, does not run my ExecStart=/bin/bash -c './vmx.sh -lv --start --cfg config/pod8/vmx1.conf' bash script.
file: vmx.service
[Unit]
Description=Juniper vMX Router
Wants=network-online.target
After=network-online.target
After=libvirtd.service
[Service]
WorkingDirectory=/home/vMX-21.1R1/
Environment="PATH=/opt/rh/python27/root/usr/bin:/usr/lib64/qt3.3/bin:/usr/local/sbin:/usr/local/bin:/$
Type=oneshot
User=root
Group=root
RemainAfterExit=true
## Commands used to stop/start/restart the vMX
ExecStart=/bin/bash -c './path-python.sh'
ExecStart=/bin/bash -c './vmx.sh -lv --start --cfg config/pod8/vmx1.conf' <<<<<<<<<<<<<<<<<<<<<<<
ExecStart=/bin/bash -c './vmx.sh -lv --start --cfg config/pod8/vmx2.conf'
ExecStart=/bin/bash -c './vmx.sh -lv --start --cfg config/pod8/vr-device.conf'
[Install]
WantedBy=multi-user.target
Here is my service status:
Loaded: loaded (/etc/systemd/system/vmx.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2021-11-29 14:04:37 AEST; 2h 4min ago
Process: 2324 ExecStart=/bin/bash -c ./vmx.sh -lv --start --cfg config/pod8/vmx1.conf (code=exited, status=2)
Process: 1877 ExecStart=/bin/bash -c ./path-python.sh (code=exited, status=0/SUCCESS)
Main PID: 2324 (code=exited, status=2)
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: import netifaces as ni
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: ImportError: No module named netifaces
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: Log file........................................../home....log
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: ==================================================
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: Aborted!. 1 error(s) and 0 warning(s)
Nov 29 14:04:37 Juniper-KVM3 bash[2324]: ==================================================
Nov 29 14:04:37 Juniper-KVM3 systemd[1]: vmx.service: main process exited, code=exited, status=2...MENT
Nov 29 14:04:37 Juniper-KVM3 systemd[1]: Failed to start Juniper vMX Router.
Nov 29 14:04:37 Juniper-KVM3 systemd[1]: Unit vmx.service entered failed state.
Nov 29 14:04:37 Juniper-KVM3 systemd[1]: vmx.service failed.
I thought that the error: ImportError: No module named netifaces would be fixed by running
pip uninstall netifaces && pip install netifaces as read in this article.
However, it did not work either. Regardless, the weirdest thing is that when I run the same script in my terminal, it works:
[root@system]# ./vmx.sh -lv --stop --cfg config/pod8/vmx1.conf
[...]
==================================================
VMX Status Verification Completed.
==================================================
Log file........................................../home/vMX-21.1R1/build/8m1/logs/vmx_1638166233.log
==================================================
Thank you for using VMX
==================================================
I made sure that SElinux is disabled:
SELinux status: disabled
It is worth noting that the ./path-python.sh script runs without a problem; it lets me change my python27 path, without which my VM installation would fail. But I can certainly say that this problem has nothing to do with Python itself, as the script only works when I run it from my terminal. (My other servers work as expected using the same scripts; I have no clue what is wrong.)
The ExecStart commands are run separately, not in the same shell or environment.
In other words, if
ExecStart=/bin/bash -c './path-python.sh'
sets up some Python-related path tweaks, they will not be available for the subsequent ExecStart commands.
Instead of a script that "changes your python27 path", I would recommend setting up a virtualenv for that Python environment, with the packages you require installed in it, so you could just use that virtualenv's interpreter in your vmx.sh. If you can't edit vmx.sh, you could set up a script that does something like
#!/bin/bash
source /opt/myvirtualenv/bin/activate
./vmx.sh -lv --start --cfg config/pod8/vmx1.conf
# ...
and run that instead; the activate script ensures that virtualenv's Python is the primary one for that shell session.
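In the unit file, the individual vmx.sh ExecStart lines would then be replaced by a single line pointing at that wrapper; the wrapper's name and location here are purely hypothetical:
ExecStart=/bin/bash -c '/home/vMX-21.1R1/start-all-vmx.sh'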
So, after playing around with virtualenv I finally made it work! Previously, running the ./vmx.sh script required the system to use Python 2.7; however, if I wanted to fully automate the boot of all of my VMs, every time my host rebooted I had to change the Python PATH by manually running the following commands in my terminal:
echo 'export PATH=/opt/rh/python27/root/usr/bin:$PATH' >> /etc/profile
PATH=/opt/rh/python27/root/usr/bin:$PATH
export PATH
cd /opt/rh/python27/ && . enable && pip install netifaces pyyaml
Instead of running the previous commands from a .sh script, which ended up not working for me, it was pointed out that "Python-related path tweaks will not be available for the subsequent ExecStart commands". For that reason, I did the following:
Instead of running scl enable python27 bash (which only makes that Python version the default until logout), I made the change persistent by creating a script under /etc/profile.d/:
#!/bin/bash
source scl_source enable python27
I installed virtualenv: pip install virtualenv
Created a virtual environment (virtualenv --python=/usr/bin/python2.7 vMX-ENV), which I would use to install all the packages and scripts needed to install my VMs.
Activated the virtual environment: source /home/vMX-ENV/bin/activate
Created a directory (vMX-*) where the ./vmx.sh script will be located.
# ls
bin lib lib64 pyvenv.cfg vMX-21.1R1
Inside that directory, I created a bash script that will be started by the service at boot. I called it pyrun.sh; this script runs the ./vmx.sh script (credit: AKX):
#!/bin/bash
source /home/vMX-ENV/bin/activate
./vmx.sh -lv --start --cfg config/pod17/vmx1.conf
# ...
Created the vmxd.service with ExecStart=/bin/bash -c './pyrun.sh'.
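For reference, a sketch of what that vmxd.service could look like, based on the unit posted earlier in the question; the WorkingDirectory and the location of pyrun.sh are assumptions drawn from the directory layout described above:
[Unit]
Description=Juniper vMX Router
Wants=network-online.target
After=network-online.target libvirtd.service
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/home/vMX-ENV/vMX-21.1R1/
ExecStart=/bin/bash -c './pyrun.sh'
[Install]
WantedBy=multi-user.target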
By creating a virtualenv, I was able to isolate my Python project in a single environment containing all the needed packages and running only Python 2.7, which is required by ./vmx.sh.
After rebooting the host, here is the successful journalctl -u vmxd.service output:
Starting Juniper vMX Router...
==================================================
Welcome to VMX
==================================================
[...]
I am running a Python script over SSH on an Ubuntu 18.04.2 server.
When I use ssh to log in to the server and run the script, and then terminate the ssh session, the Python script also terminates, as expected (I'm not using nohup, &, etc.). However, when I run the same script using Fabric and then terminate the local Fabric process, the Python process on the server gets reaped by systemd. This is what the systemd status looks like:
● session-219.scope - Session 219 of user root
Loaded: loaded (/run/systemd/transient/session-219.scope; transient)
Transient: yes
Active: active (abandoned) since Fri 2019-12-27 00:56:07 PST; 2min 55s ago
Tasks: 1
CGroup: /user.slice/user-0.slice/session-219.scope
└─6872 /root/peacock/bin/python3 -m src.main
Dec 27 00:56:07 master systemd[1]: Started Session 219 of user root.
Dec 27 00:57:52 master sshd[6783]: pam_unix(sshd:session): session closed for user root
Is there a way to prevent systemd from reaping the child process, similar to the behavior of ssh? And why does it only get reaped when using Fabric but not ssh directly?
More details:
The Python script is a simple Flask app. The gist of it is:
from flask import Flask
flask_app = Flask('app')
@flask_app.route('/')
def index():
# ....
if __name__ == '__main__':
flask_app.run(host='0.0.0.0')
The Fabric script is roughly as follows:
server_conn = fabric.Connection('1.2.3.4')
with server_conn.cd('/root/peacock'):
server_conn.run('/root/peacock/bin/python3 -m src.main')
If you need to run a process as a daemon on the remote box, I would suggest that you make it a systemd unit. This way you can control it with standard commands and access its logs like any other service on the system.
Your config could look like (/etc/systemd/system/peacock.service):
[Unit]
Description=Peacock systemd service.
[Service]
Type=simple
ExecStart=/root/peacock/bin/python3 -m src.main
[Install]
WantedBy=multi-user.target
Remember to sudo chmod 644 /etc/systemd/system/peacock.service. Then your fabric script would look like:
server_conn = fabric.Connection('1.2.3.4')
with server_conn.cd('/root/peacock'):
server_conn.run('systemctl start peacock.service')
Later you can check the status of this service. You will also be able to access its logs with journalctl -u peacock.
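If you also want the service to start at boot rather than only via Fabric, enabling it should be enough; this is standard systemctl usage, nothing specific to this setup:
sudo systemctl enable peacock.service
sudo systemctl status peacock.service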
I followed multiple tutorials available on Stack Overflow about starting a Python script at startup, but none of them worked.
I need to activate a virtualenv and then start a Flask server.
I tried
init.d method
I made a start.sh in /etc/init.d/:
#!/bin/sh
### BEGIN INIT INFO
# Provides: skeleton
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $portmap
# Should-Stop: $portmap
# X-Start-Before: nis
# X-Stop-After: nis
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# X-Interactive: true
# Short-Description: Example initscript
# Description: This file should be used to construct scripts to be
# placed in /etc/init.d.
### END INIT INFO
cd /home/ion/
source /home/ion/py35/bin/activate
cd /home/ion/Desktop/flask/
nohup python main.py &
echo "Done"
Its permissions are set to +x:
ion#aurora:/etc/init.d$ ll start.sh
-rwxr-xr-x 1 root root 625 Jun 25 19:10 start.sh*
Then I added it to /etc/rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/etc/init.d/start.sh
exit 0
Didn't work
cronjob method
sudo crontab -e
and appended
@reboot sh '/etc/init.d/start.sh'
That didn't work either. Where am I going wrong?
Manually triggered logs:
(py35) ion#aurora:~/Desktop/flask$ python main.py
WARNING:tensorflow:From /home/ion/Desktop/flask/encoder.py:57: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
2018-06-25 19:34:05.511943: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://localhost:5505/ (Press CTRL+C to quit)
* Restarting with stat
1) Don't use the old "init.d" method; use something modern. If you have Ubuntu 15.04 or higher, you can use systemd to create a daemon that will be started automatically at startup. If you have, for example, an Ubuntu older than 15.04, use Upstart.
For systemd:
Create a unit file in /lib/systemd/system/your_service_name.service with the following content (as far as I can see your Python script doesn't spawn new processes while running, so Type should be simple; more info here):
[Unit]
Description=<your_service_name>
After=network.target network-online.target
[Service]
Type=simple
User=<required_user_name>
Group=<required_group_name>
Restart=always
ExecStartPre=/bin/mkdir -p /var/run/<your_service_name>
PIDFile=/var/run/<your_service_name>/service.pid
ExecStart=/path/to/python_executable /path/to/your/script.py
[Install]
WantedBy=multi-user.target
Save this file and reload systemd:
sudo systemctl daemon-reload
Then add your service to autostart:
sudo systemctl enable your_service_name.service
You should see that systemd created the required symlinks after the enable action.
Reboot and see if it's up and running (ps aux | grep python or sudo systemctl status your_service_name.service). If there is something weird, check the systemd journal:
sudo journalctl -xe
UPD:
To launch your Python script in the desired virtualenv, just use this expression in your service unit file:
ExecStart=/venv_home/path/to/python /venv_home/path/to/your/script.py
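For the paths shown in this question (the virtualenv at /home/ion/py35 and the Flask app at /home/ion/Desktop/flask/main.py), that would presumably look something like the following; adding WorkingDirectory= is an assumption, in case main.py relies on relative paths:
WorkingDirectory=/home/ion/Desktop/flask
ExecStart=/home/ion/py35/bin/python /home/ion/Desktop/flask/main.py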
2) You can also use crontab, but you need to specify the full path of the desired shell there, for example:
@reboot /bin/bash /path/to/script.sh
If you need additional help, just let me know here.
I have a Python script. The script uses Selenium with Chrome to go to a website, collect data, and write it to a CSV file.
This is a very long-running job.
I put the script on the server and ran it. Everything worked.
But I need the script to run in the background.
chmod +x createdb.py
nohup python ./createdb.py &
And I see
(env)$ nohup ./createdb.py &
[1] 32257
(env)$ nohup: ignoring input and appending output to 'nohup.out'
After pressing Enter:
(env)$ nohup ./createdb.py &
[1] 32257
(env)$ nohup: ignoring input and appending output to 'nohup.out'
[1]+ Exit 1 nohup ./createdb.py
Then it runs and immediately writes errors to the file: that Chrome did not start, or that a click failed.
Note that if I start it without nohup, everything works.
What am I doing wrong? How to run a script?
Thank you very much.
You could create a background daemon (service).
You tagged Ubuntu 16.04, which means you have systemd; for more information on how to set it up, please visit this link.
Create a file called <my_service>.service
and put it in /etc/systemd/system.
Your systemd unit could look like this:
[Unit]
Description=my service
After=graphical.target
[Service]
Type=simple
WorkingDirectory=/my_dir
ExecStart=/usr/bin/python my_script.py
[Install]
WantedBy=multi-user.target
Then all you have to do is reload the systemd manager and start your service:
sudo systemctl daemon-reload
sudo systemctl start my_service
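If the script still fails under the service, its output lands in the journal; checking it is standard systemctl/journalctl usage (the unit name follows from the file created above):
sudo systemctl status my_service
sudo journalctl -u my_service -f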
You can use the screen command; it works perfectly.
Here is a very good link: https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/
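For illustration, a typical screen workflow for a job like this might be the following; the session name is just an example:
screen -S createdb          # start a named session
python ./createdb.py        # run the script inside it
# detach with Ctrl-A then D; the script keeps running on the server
screen -r createdb          # reattach to the session later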
You can use a simple command, from the env directory:
(env)$ python /path/to/createdb.py > logger.txt 2>&1 &
This stores the program's output (stdout and stderr) in a file called "logger.txt".
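To confirm the script is still running and to watch its output, standard shell commands are enough (nothing here is specific to this script):
ps aux | grep createdb.py   # check the process is still alive
tail -f logger.txt          # follow the log file as it grows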
I know that I can run the scheduler manually by using
python web2py.py -K myapp
But where should this be specified in a production environment? I am using the standard web2py deployment script for Apache, on Ubuntu.
Just to round out the picture: on Debian or other Linux distributions released after 2015, the way to go is systemd. For systemd, the following steps have to be taken:
Create the file /etc/systemd/system/web2py-sched.service containing the following:
[Unit]
Description=Web2Py scheduler service
[Service]
ExecStart=/usr/bin/python /home/www-data/web2py/web2py.py -K <yourapp>
Type=simple
[Install]
WantedBy=multi-user.target
Then install the service by calling:
sudo systemctl enable /etc/systemd/system/web2py-sched.service
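You can then start it immediately and verify that it is running; this is standard systemctl usage:
sudo systemctl start web2py-sched
sudo systemctl status web2py-sched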
With Ubuntu 12.04 I did it manually:
In the /etc/init directory, create a web2py-scheduler.conf file:
description "Web2py scheduler"
start on filesystem or runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 8 60
exec sudo -u user <path_to_web2py>/web2py.py -K <your_app>
In /etc/init.d, run:
ln -s /lib/init/upstart-job web2py-scheduler
(Optional, only if you want manual startup) in the /etc/init directory, create a web2py-scheduler.override file containing:
manual
Please see the Web2Py Book, which worked for me running Ubuntu 14:
Start the scheduler as a Linux service (upstart)
To install the scheduler as a permanent daemon on Linux (w/ Upstart), put the following into /etc/init/web2py-scheduler.conf, assuming your web2py instance is installed in user's home directory, running as user, with app myapp, on network interface eth0.
description "web2py task scheduler"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown
respawn limit 8 60 # Give up if restart occurs 8 times in 60 seconds.
exec sudo -u <user> python /home/<user>/web2py/web2py.py -K <myapp>
respawn
You can then start/stop/restart/check status of the daemon with:
sudo start web2py-scheduler
sudo stop web2py-scheduler
sudo restart web2py-scheduler
sudo status web2py-scheduler