I am trying to start a Python script on a Raspberry Pi via a systemd service, but it cannot find any of the modules installed via pip3 and fails with the error:
raspberrypi python3[1017]: ModuleNotFoundError: No module named 'paho'
Running the same script via an SSH terminal works fine. From my research, it could be related to the PYTHONPATH, though I have been unable to find it in .bashrc.
The modules that cannot be found are installed here:
./.local/lib/python3.7/site-packages (1.5.0)
This is the service file in /etc/systemd/user/mytest.service, which starts the script unsuccessfully:
[Unit]
Description=TestScript Service
After=network-online.target
[Service]
Type=idle
ExecStart=/usr/bin/python3 /home/pi/MyProject/my_script.py > /home/pi/my_script.log 2>&1
[Install]
WantedBy=network-online.target
How can I let the service know where the modules are located?
Kind regards
Here's a quick fix for the problem:
By specifying a User in the .service file under [Service], the Python script will find all the libraries installed for that user.
[Service]
User=pi
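For reference, a complete unit along those lines might look like the sketch below. Note two assumptions beyond the answer itself: systemd does not pass ExecStart through a shell, so the > redirection in the original unit will not work as written, and the StandardOutput=append: form needs systemd 240 or newer (on older versions, redirect inside a wrapper script instead).

```ini
[Unit]
Description=TestScript Service
After=network-online.target
Wants=network-online.target

[Service]
Type=idle
# Running as pi makes /home/pi/.local/lib/python3.7/site-packages visible
User=pi
ExecStart=/usr/bin/python3 /home/pi/MyProject/my_script.py
# systemd does not interpret shell redirection; log via systemd instead
StandardOutput=append:/home/pi/my_script.log
StandardError=inherit

[Install]
WantedBy=multi-user.target
```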
I am executing a Python script through a service file. The Python script is responsible for creating three more scripts and then executing them one by one. I am also giving permissions to all of them, and to a folder in my home directory.
The problem is that when the script runs under the service file, none of the Python files or the folder gets the permissions, even though I am setting 777.
Following is my service file
[Unit]
Description=systemd service to run upload script
[Service]
Type=simple
User=jetson
ExecStart=/usr/bin/python3 /home/project/file_upload.py
[Install]
WantedBy=multi-user.target
The folder I am trying to give permissions to is created by the Azure IoT Edge module.
Please let me know if I need to make any changes in the service file.
I resolved this issue by creating the folder and giving it permissions before starting the service, so now there are no issues.
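That pre-creation step can also be expressed inside the unit itself with ExecStartPre. A sketch, with the folder path as a placeholder; the leading + prefix (systemd 231+) runs the command with full privileges in case the service user cannot chmod the folder:

```ini
[Service]
Type=simple
User=jetson
# Hypothetical path: create the folder and set permissions before the script runs
ExecStartPre=+/bin/mkdir -p /home/jetson/upload_folder
ExecStartPre=+/bin/chmod 777 /home/jetson/upload_folder
ExecStart=/usr/bin/python3 /home/project/file_upload.py
```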
I have deployed a Flask application with uWSGI and nginx.
The following is the .ini file for uWSGI:
[uwsgi]
;module = name of file which contains the application object in this case wsgi.py
LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
chdir=/home/ansible/apps/payment_handler
module = wsgi:application
;tell uWSGI (the service) to start in master mode and spawn 5 worker *processes* to serve requests
master = true
processes = 5
;a socket is much faster than a port, and since we will be using nginx to expose the application this is better
socket = 0.0.0.0:8001
vacuum = true
die-on-term = true
When I run this from the command line like so:
uwsgi --ini payment_app.ini
It works!
However, I would like to run the application as a service; the following is the service file:
[Unit]
Description=uWSGI instance to serve service app
After=network.target
[Service]
User=root
WorkingDirectory=/home/ansible/apps/payment_handler
Environment="PATH=/home/ansible/apps/payment_handler/env/bin"
ExecStart=/home/ansible/apps/payment_handler/env/bin/uwsgi --ini payment_app.ini
[Install]
WantedBy=multi-user.target
However, it does not work, because it cannot find the libraries for cx_Oracle.
I have it set in my .bashrc file:
export LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
However, since the service does not load its environment variables from .bashrc, it does not find it.
Error log
Jun 17 09:58:06 mau-app-036 uwsgi: cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html#linux for help
I have tried setting it in the .ini file (as seen above)
LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
I have also tried setting it in my init.py file using the os module
os.environ['LD_LIBRARY_PATH'] = '/usr/lib/oracle/18.3/client64/lib'
Both to no avail. Any help would be great, thanks. CentOS 7, by the way.
Problems like this are why the Instant Client installation instructions recommend running:
sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > \
/etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig
This saves you having to work out how and where to set LD_LIBRARY_PATH.
Note that the 19.3 Instant Client RPM packages run this for you automatically. Some background is in the Instant Client 19c Linux x64 release announcement blog.
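If you would rather not change the system-wide linker configuration, the same effect can be scoped to this one service by setting the variable in the unit file. A sketch, using the paths from the question:

```ini
[Service]
# Make the Instant Client visible to this service only
Environment=LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
ExecStart=/home/ansible/apps/payment_handler/env/bin/uwsgi --ini payment_app.ini
```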
I'm using supervisor to run a Django websocket app at system startup.
When I start supervisor, it raises
ModuleNotFoundError: No module named 'django'
in the log file.
Here is supervisor conf:
[fcgi-program:myProject]
environment=HOME="/home/ubuntu/envFiles/myProject/bin"
# TCP socket used by Nginx backend upstream
socket=tcp://0.0.0.0:8000
directory=/home/ubuntu/projects/myProject
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myProject.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/logs/project.log
redirect_stderr=true
When I try to restart supervisor with supervisorctl restart all, it hits the import error again.
Error log :
ModuleNotFoundError: No module named 'django'
I think it is using the system Python path, but I defined environment in the config file, so supervisor should use that environment.
What is the problem?
How can I set my Django environment in the supervisor conf?
Just try installing the package into another Python directory; I had the same problem with supervisor, and it was solved after this:
sudo pip install --target=/usr/local/lib/python3.6/dist-packages <packagename>
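An alternative that avoids installing into the system Python is to run daphne from the virtualenv itself, by giving supervisor the binary's full path. A sketch, assuming the env really lives at /home/ubuntu/envFiles/myProject as the original conf suggests:

```ini
[fcgi-program:myProject]
directory=/home/ubuntu/projects/myProject
; use the virtualenv's daphne so its site-packages (including django) are on the path
command=/home/ubuntu/envFiles/myProject/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myProject.asgi:application
```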
I have written a simple program with Python and pygame to display some images on my monitor. When I run it directly, everything works fine. I tried to make it run at system startup with a systemd service. Here is my service:
[Unit]
Description=Starts pygame
[Service]
User=rplab
WorkingDirectory=/home/myuser/
ExecStart=/bin/bash /home/myuser/MyPygame.sh
KillMode=process
[Install]
WantedBy=multi-user.target
When the system boots, it starts the service, but when I check it with systemctl status, it gives this error:
pygame.error: No available video device
It seems to start so early that it can't find my monitor. Is it possible to make the service start after the user logs in, so that it can find my monitor?
The service file needs to tell systemd to start only after the user session and graphical environment are up.
[Unit]
Description=Starts pygame
Wants=systemd-logind.service systemd-user-sessions.service display-manager.service
After=systemd-logind.service systemd-user-sessions.service display-manager.service
[Service]
....
....
[Install]
WantedBy=graphical.target
Make sure you are running a graphical.target as the default.
$ systemctl set-default graphical.target
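Depending on the setup, the service may also need to know which display to draw on. A common addition, assuming an X session on display :0 for the rplab user (both values are assumptions; adjust to your system):

```ini
[Service]
User=rplab
# Point SDL/pygame at the running X session (display and path are assumed)
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/rplab/.Xauthority
```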
I'm trying to host a Flask API which uses TensorFlow libraries. I installed the tensorflow-gpu package with the CUDA and cuDNN libraries. I checked manually with the following command, which works fine:
/captcha/env/bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000
But when I add this systemd service, I get a TensorFlow GPU error.
systemd service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
PIDFile=/run/gunicorn/pid
User=root
Group=root
WorkingDirectory=/captcha/env
ExecStart=/captcha/env/bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Error text in Log file:
Failed to load the native TensorFlow runtime.
See
https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Can anyone point out where I'm going wrong?
systemd strips most environment variables, and TensorFlow needs to know where to find CUDA; without an LD_LIBRARY_PATH it fails.
There are probably a handful of ways to do this, but this worked for me.
[Service]
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
ExecStart=/path/to/your/app
...
I'm using PM2 to manage the TensorFlow Flask API process.
http://pm2.keymetrics.io/
I created a shell file with that command as its content and start it with:
pm2 start run.sh
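For completeness, run.sh here would just be a wrapper around the gunicorn command from the question so PM2 has a single script to supervise. A sketch:

```shell
#!/bin/sh
# run.sh - wrapper so PM2 can supervise the gunicorn process
cd /captcha/env
exec ./bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000
```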