I have a python script inside a virtualenv which is started using systemd.
[Unit]
Description=app
After=network.target
[Service]
Type=simple
User=user
Group=user
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
WorkingDirectory=/home/user/Projects/app
ExecStart=/home/user/Projects/app/venv/bin/python app.py
[Install]
WantedBy=multi-user.target
The thing is that the script uses subprocess.Popen(['python', 'whatever.py']) to open another python script. I got a "not found" error, and discovered that python should be invoked with an absolute path, so I changed it and it worked well.
However, now I use a third-party library, pygatt, which internally uses subprocess to open gatttool or hcitool, which are in $PATH (system-wide binaries, usually in /usr/bin).
So now I cannot change that library (I could by forking it, but I hope I don't have to).
How come systemd cannot spawn python subprocesses without using an absolute path? Without systemd (running from the console), everything works.
I'm not sure, but it is very possible that an environment variable set on one configuration line isn't taken into account on the following ones.
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
Here you're expecting VIRTUAL_ENV to be set so that $VIRTUAL_ENV can be expanded on the next line, but that may not work. I would try hardcoding the second line:
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=/home/user/Projects/app/venv/bin:$PATH
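Alternatively, you can fix PATH from inside the script itself before spawning children. This is only a minimal sketch, assuming the venv path from the unit file above and that the script keeps calling plain 'python' via subprocess:
import os
import subprocess

# Prepend the venv's bin directory and the usual system directories to PATH
# so that child processes (python, gatttool, hcitool, ...) are found by name.
venv_bin = "/home/user/Projects/app/venv/bin"  # assumed venv location
os.environ["PATH"] = os.pathsep.join(
    [venv_bin, "/usr/local/bin", "/usr/bin", "/bin", os.environ.get("PATH", "")]
)

subprocess.Popen(["python", "whatever.py"])  # now resolved through the adjusted PATH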
Related
I have a my_robot_ros.service file which auto-runs a launch file during boot.
[Unit]
Description="bringup my_robot_ros"
After=network.target
[Service]
Type=simple
ExecStart=/usr/sbin/my_robot_ros-start
[Install]
WantedBy=multi-user.target
My launch file works just fine when I run it in the terminal, but when it runs via my_robot_ros.service it has errors regarding permissions in the folder, as shown below.
[screenshot of the permission errors]
I think this is the reason why my image processing node dies or stops working. Does anyone know how to solve this problem? Thank you
The "correct" solution here is to structure your code so it doesn't use pre-created files in /tmp. By default only the file creator can modify files directly in /tmp. And your error happens because running a service file will run commands under a different user. The quick workaround for this would be to remove the sticky bit in the folder via: sudo chmod -t /tmp.
Note that I wouldn't really recommend this in general, though.
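If you do restructure the code, a minimal sketch of the idea (the directory prefix and file name here are made up) is to let Python create a private scratch directory instead of reusing a fixed, pre-created path under /tmp:
import os
import tempfile

# Create a fresh directory owned by whichever user the service runs as,
# instead of reusing a pre-created, fixed path directly under /tmp.
work_dir = tempfile.mkdtemp(prefix="my_robot_ros-")
output_path = os.path.join(work_dir, "frame.dat")  # hypothetical file name

with open(output_path, "wb") as f:
    f.write(b"...")  # whatever the image processing node needs to write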
I am a pretty new python programmer and am a little bit familiar with crontab. What I am trying to do is probably not the best practice but it is what I am most familiar with.
I have a Raspberry Pi with a couple of python scripts I want to run on boot and stay running in the background. They are infinite-loop programs. They are tested and working in a terminal and have been functioning for a couple of weeks. I'm just getting tired of manually starting them up whenever the Pi goes through a power cycle.
So I did a sudo crontab -e and added this line as my only entry
#reboot /usr/bin/python3 /usr/bin/script.py &
If I copy-paste this exactly (minus the #reboot) it runs successfully on the command line.
I am using the command pgrep -af python to check whether it is running. I normally see two scripts running there, but not the one I am trying to add.
I am not sure where I am going wrong or what the best way to troubleshoot my issue is. From the research I have been doing, it seems like it should work.
Thanks for your help
Kevin
You might find it easier to create a systemd service file for each program that you want to start when your Raspberry Pi boots. systemd comes with a few more tools to help you debug your configuration.
This is what an example systemd service file (located at /etc/systemd/system/myscript.service) would look like:
[Unit]
Description=My service
After=network.target
[Service]
ExecStart=/usr/bin/python3 /usr/bin/script.py
WorkingDirectory=/home/pi/myscript
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi
[Install]
WantedBy=multi-user.target
and then you can enable this program to run on boot with the command:
sudo systemctl enable myscript.service
These examples are from Raspberry Pi's documentation about systemd, but because systemd is widely used in the Linux world, you can also follow documentation and Stack Overflow answers for other Linux distributions.
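For example, sudo systemctl status myscript.service shows whether the service started (plus the last few lines of its output), and sudo journalctl -u myscript.service shows everything the script has printed so far.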
Note: I'm new to everything, so bear with me.
I'm using an RPi 4B with Buster. My goal is to automatically run 2 python scripts at the same time when the Pi first boots up. Both scripts are in a virtual environment. The first script is called sensor.py, which uses an ultrasonic distance sensor to continuously calculate distances between the sensor and an object. The other is an object recognition script from TensorFlow Lite called TFLite_detection_webcam.py that identifies objects from a camera feed. I can't use rc.local for autorunning because the object recognition script uses a picamera feed as an input, which rc.local doesn't support.
So my preferred option is using autostart. I was able to successfully get the sensor.py script to autorun by issuing sudo nano /etc/xdg/lxsession/LXDE-pi/autostart in the terminal and adding this to it: /home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/sensor.py. In this case, tflite1-env is the virtual environment being activated. However, I don't know how to get the second script to run. To run it regularly, I would issue the following into the terminal and the camera feed would pop up on the screen as a window.
cd tflite1
source tflite1-env/bin/activate
python3 TFLite_detection_webcam.py --modeldir=TFLite_model
I've tried to get this script to run by adding this to the autostart file: /home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/TFLite_detection_webcam.py --modeldir=TFLite_model, but it doesn't seem to be working. I've also tried to run it via shell files, but every time I reference a shell file in the autostart file, such as adding ./launch.sh to the bottom, nothing happens. Any help getting the second script to run at the same time as the first upon startup would be greatly appreciated. Thanks in advance.
Use Systemd. Set up Systemd unit files in /etc/systemd/system, e.g.
kitkats-sensor.service
[Unit]
After=network.target
[Service]
ExecStart=/home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/sensor.py
WorkingDirectory=/home/pi/tflite1/
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
kitkats-tflite.service
[Unit]
After=network.target
[Service]
ExecStart=/home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/TFLite_detection_webcam.py --modeldir=TFLite_model
WorkingDirectory=/home/pi/tflite1/
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
Then enable the unit files with systemctl enable kitkats-tflite and systemctl enable kitkats-sensor (to have them autostart), and start them right away with systemctl start kitkats-tflite (and likewise for the sensor).
You can then see them with e.g. systemctl status, and their logs are diverted to journalctl.
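For example, journalctl -u kitkats-sensor -f follows the sensor script's output live.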
I want to generate pop-ups for certain events in my python script. I am using 'notify-send' for that purpose.
subprocess.Popen(['notify-send', "Authentication", "True/False"])
The above command executes fine on terminal but when I run it from systemd-service it does not generate any pop-up.
When I see logs there are no errors.
You need to first set the environment variables so that root can communicate with the currently logged-in user and show the notification in the GUI.
In my case, I did it as follow:
[Unit]
Description=< write your description>
After=systemd-user-sessions.service systemd-journald.service
[Service]
Type=simple
ExecStart=/bin/bash /<path to your script file>.sh
Restart=always
RestartSec=1
KillMode=process
IgnoreSIGPIPE=no
RemainAfterExit=yes
Environment="DISPLAY=:0" "XAUTHORITY=/home/<User name>/.Xauthority"
[Install]
WantedBy=multi-user.target
Here,
RemainAfterExit=yes
is very important to include in the service file.
Make sure to change all the parameters, like the Description, user name, and path to your script file.
Also, make sure that the script file has executable permission by running:
sudo chmod +x <path to your script file>.sh
Here my script file is written in bash and shows the notification using the same 'notify-send' command.
The Environment parameter is what is doing all the magic here.
You can read more about this behaviour and the problem discussed here.
I certainly don't know the complete inner workings of these files, but for me this worked just fine.
So you can give it a try.
Please let me know whether it works in your case.
Running graphical applications requires the DISPLAY environment variable to be set, which would be set when you run it from the CLI, but not when it is run from systemd (unless you explicitly set it).
This issue is covered more in Writing a systemd service that depends on XOrg.
I agree with the general advice that systemd may not be the best tool for the job. You may be better off using an "auto start" feature of your desktop environment to run your app, which would set the things in the environment that you need.
When running notify-send for desktop notifications from cron (or a systemd service), notify-send sends its values over D-Bus, so it needs to be told how to connect to the right bus. The address can be found by examining the DBUS_SESSION_BUS_ADDRESS environment variable in your desktop session and setting it to the same value.
Copy the values of DISPLAY and DBUS_SESSION_BUS_ADDRESS from your running desktop environment and set them via Environment= in the [Service] section.
More info on the Arch Wiki:
https://wiki.archlinux.org/index.php/Cron#Running_X.org_server-based_applications
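As a rough sketch, the Python script can also pass these variables explicitly to the child process. The ':0' display and the uid-1000 bus path below are only common defaults, not values from the question; copy the real ones from your own session as described above:
import os
import subprocess

# Copy the service's environment and add what notify-send needs to reach the
# desktop session. Check `echo $DISPLAY` and `echo $DBUS_SESSION_BUS_ADDRESS`
# in a desktop terminal to confirm your own values.
env = dict(os.environ)
env.setdefault("DISPLAY", ":0")
env.setdefault("DBUS_SESSION_BUS_ADDRESS", "unix:path=/run/user/1000/bus")

subprocess.Popen(["notify-send", "Authentication", "True/False"], env=env)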
The python script script.py is located in /usr/bin/monitor/scripts and its main function is to use subprocess.check_call() and subprocess.check_output() to call various administrative tools (both C programs located in /usr/bin/monitor/ created specifically for the machine, and Linux executables in /sbin, like fdisk -l and df -h). It was written to run as root and print output from these programs in a useful way to the command line.
My project is to make the output from this script viewable through a webpage. I'm on a Beaglebone Black using Apache2, which executes files as user www-data from its DocumentRoot, /var/www/html/. The webpage is set up like this:
index.html uses an iframe to display the output of a python CGI script which is also located in /var/www/html/
script.cgi attempts to call script.py and display its output using the subprocess module
The problem is that script.py is being called just fine, but each of the calls within script.py fails and returns script.py's error messages, because I presume they need to be run as root while Apache is running them as user www-data.
To try to get around this, I created a new group called bbb, added www-data to the group, then ran chown :bbb script.py to change its group to bbb. Unfortunately it was still causing the same problems, so I tried changing permissions from 755 to 775, which didn't work either. I tried running chown :bbb * on the files/programs that script.py uses, also to no avail. Also, some of the executables script.py uses are in /sbin, and I am cautious about just giving it blanket access to directories like this.
Since my attempts at fixing ownership issues felt a bit like 1000-monkey coding, I created a new version of the script that builds a list of HTML output: after each print statement in the original code, it appends the same line of text with HTML tags to the list, and at the end of the script (in whatami) it creates and writes a .txt file in /var/www/html/, calling os.chmod("/var/www/html/data.txt", 0o755) to give Apache access. The CGI then calls subprocess.check_call() on script.py, then opens, reads, and prints each line with HTML formatting to the iframe in the webpage. This attempt at least resulted in accurate output, but... it only updates when the script is run in a terminal as root, rather than re-running script.py every time the page is refreshed, which kind of undermines the point of the webpage. I assume this means the subprocess check_call in the CGI script is not working correctly, but for some reason the subprocess call itself doesn't throw any errors or indications of failure, yet the text file comes back un-updated. Even with the subprocess call in a "try" block followed by a print('call successful'), it returns the success message and then the un-updated text file.
I'm a bit at a loss trying to figure out how to just force the script to run and do its thing in the background so that the file will update, without giving Apache root access. I've read a few things about either wrapping the python script in a shell that causes it to be run as root, or changing sudoers to give www-data sudo privileges, but I do not want to introduce security issues or make what was intended to be a simple script delivering output to a webpage more convoluted than it already is. Any advice or direction would be greatly appreciated.
Best way IMO would be to "decouple" execution, by creating a localhost-only service which you "call" from the apache process by connecting via a local socket.
E.g. if using systemd:
Create: /etc/systemd/system/my-svc.socket
[Unit]
Description=My svc socket
[Socket]
ListenStream=127.0.0.1:1234
Accept=yes
[Install]
WantedBy=sockets.target
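With Accept=yes, systemd spawns one instance of the template service below for each incoming connection and hands it that connection (which is what StandardInput=socket in the service file refers to).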
Create: /etc/systemd/system/my-svc@.service
[Unit]
Description=My Service
Requires=my-svc.socket
[Service]
Type=simple
ExecStart=/opt/my-service/script.sh %i
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
Create /opt/my-service/script.sh:
#!/bin/sh
echo "id=$(id)"
echo "args=$*"
Finish setup with:
$ sudo chmod +x /opt/my-service/script.sh
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now my-svc.socket
Try it out:
$ nc 127.0.0.1 1234
id=uid=0(root) gid=0(root) groups=0(root)
args=55-127.0.0.1:1234-127.0.0.1:32938
Then from your cgi, you'll need to do the equivalent of the nc command above (just a tcp connection).
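A minimal sketch of that connection from the Python CGI might look like this (the host and port match the ListenStream= above; the read loop assumes the service writes its output and then exits, closing the connection):
import socket

# Connect to the socket-activated service and read until it closes the connection.
with socket.create_connection(("127.0.0.1", 1234), timeout=10) as conn:
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)

# Emit a CGI response containing whatever the service printed.
print("Content-Type: text/plain")
print()
print(b"".join(chunks).decode())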
--jjo