How to allow /tmp file access when running robot_upstart - python

I have a my_robot_ros.service file which automatically runs a launch file during boot.
[Unit]
Description="bringup my_robot_ros"
After=network.target
[Service]
Type=simple
ExecStart=/usr/sbin/my_robot_ros-start
[Install]
WantedBy=multi-user.target
My launch file works just fine when I run it in the terminal, but when it runs via my_robot_ros.service it fails with permission errors in the folder, as shown below.
[screenshot: permission-denied errors on files in /tmp]
I think this is why my image-processing node dies or stops working. Does anyone know how to solve this problem? Thank you.

The "correct" solution here is to structure your code so it doesn't use pre-created files in /tmp. By default only the file creator can modify files directly in /tmp. And your error happens because running a service file will run commands under a different user. The quick workaround for this would be to remove the sticky bit in the folder via: sudo chmod -t /tmp.
Note that I wouldn't really recommend this in general, though.
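If restructuring the code is an option, Python's tempfile module is the usual way to avoid fixed paths in /tmp: it creates uniquely named files owned by whichever user the process runs as, so the same code works from a terminal and from a service. A minimal sketch (the node code here is hypothetical, not from the question):

import tempfile

# mkstemp creates a uniquely named file owned by the current user,
# avoiding clashes with files pre-created by another user in /tmp.
fd, path = tempfile.mkstemp(prefix="my_robot_", suffix=".dat")
with open(fd, "w") as f:
    f.write("scratch data")
# 'path' can then be handed to whatever node needs the scratch file.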

Related

Module not imported running python script at startup

For context, I am using a Raspberry Pi model 3B+. Currently I am trying to run a python script at the Pi's boot-up. The script uses the module face_recognition, and everything works fine when running it normally or through the terminal.
But as soon as I try running it automatically when the Pi boots up, I get the following error:
Traceback (most recent call last):
File "/home/pi/Desktop/code/please_work_2.py", line 6, in <module>
import face_recognition
ImportError: No module named face_recognition
I googled a bit and I think it has something to do with not setting the environment in the service file correctly. It is a bit messy as of right now, but I am new to working with these kinds of files, so I am struggling to figure out how to get it to work. My service file right now:
[Unit]
Description=Start Bling
[Service]
Environment=DISPLAY=:0
WorkingDirectory=/home/pi/facial_recognition
Environment=XAUTHORITY=/home/pi/.Xauthority
Environment="prog_path"=/home/pi/facial_recognition
ExecStart=/usr/bin/python /home/pi/facial_recognition/run_on_start.py
Restart=always
RestartSec=10s
KillMode=process
TimeoutSec=infinity
[Install]
WantedBy=graphical.target
The program does not necessarily need to run in the Desktop auto-login session, so if there is a possible fix in the console version that is fine as well. I just have it this way currently so it is easier to check whether the program is working as intended.
EDIT:
I have also tried using crontab, but then nothing happened on reboot, neither with nor without the '&' at the end.
As suggested below, when I add
User=pi
the program does launch, but I am struggling to get it to launch once instead of every 10 s. Deleting that bit does not help, since then it stops launching at all.
Can you try adding User=pi under [Service] and see if it works?
I think the way you had it, you were launching the script as another user (or as root), while your packages are only installed under your current pi user's path, so try launching the script as the user pi.
Your service file should look like this.
[Unit]
Description=Start Bling
[Service]
User=pi
Environment=DISPLAY=:0
WorkingDirectory=/home/pi/facial_recognition
Environment=XAUTHORITY=/home/pi/.Xauthority
Environment="prog_path"=/home/pi/facial_recognition
ExecStart=/usr/bin/python /home/pi/facial_recognition/run_on_start.py
Restart=always
RestartSec=10s
KillMode=process
TimeoutSec=infinity
[Install]
WantedBy=graphical.target
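On the follow-up problem from the edit: Restart=always tells systemd to restart the process whenever it exits, and RestartSec=10s sets the delay, so a script that runs to completion will be relaunched every 10 seconds by design. If the script is meant to run once per boot, a sketch of the [Service] section (keeping the Environment=, WorkingDirectory= and [Install] lines as above) would be:

[Service]
User=pi
Type=oneshot
# No Restart= line: a oneshot unit runs to completion once and is not restarted.
ExecStart=/usr/bin/python /home/pi/facial_recognition/run_on_start.py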

Autostarting Python scripts on boot using crontab on Raspbian

I am a pretty new python programmer and am a little bit familiar with crontab. What I am trying to do is probably not best practice, but it is what I am most familiar with.
I have a Raspberry Pi with a couple of python scripts I want to run on boot and stay running in the background. They are infinite-loop programs, tested and working in a terminal, and they have been functioning for a couple of weeks. I am just getting tired of manually starting them up whenever the Pi goes through a power cycle.
So I did a sudo crontab -e and added this line as my only entry
#reboot /usr/bin/python3 /usr/bin/script.py &
If I copy-paste this exactly (minus the #reboot) it runs successfully on the command line.
I am using the command pgrep -af python to check whether it is running. I normally see two scripts running there, but not the one I am trying to add.
I am not sure where I am going wrong or what the best way to troubleshoot this is. From the research I have been doing, it seems like it should work.
Thanks for your help
Kevin
You might find it easier to create a systemd service file for each program that you want to start when your Raspberry Pi boots. systemd comes with a few more tools to help you debug your configuration.
This is what an example systemd service file (located at /etc/systemd/system/myscript.service) would look like:
[Unit]
Description=My service
After=network.target
[Service]
ExecStart=/usr/bin/python3 /usr/bin/script.py
WorkingDirectory=/home/pi/myscript
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi
[Install]
WantedBy=multi-user.target
and then you can enable this program to run on boot with the command:
sudo systemctl enable myscript.service
These examples are from Raspberry Pi's documentation about systemd, but because systemd is widely used in the Linux world, you can also follow documentation and Stack Overflow answers for other Linux distributions.
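For example, after enabling the unit you can start it immediately and inspect its state and output with the standard tools (generic systemd commands, not specific to this unit):

sudo systemctl start myscript.service   # start it now, without rebooting
systemctl status myscript.service       # current state plus the last few log lines
journalctl -u myscript.service          # everything the script wrote to stdout/stderr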

systemd service not executing notify-send

I want to generate pop-ups for certain events in my python script. I am using 'notify-send' for that purpose.
subprocess.Popen(['notify-send', "Authentication", "True/False"])
The above command executes fine in a terminal, but when I run it from a systemd service it does not generate any pop-up.
When I check the logs, there are no errors.
You first need to set the environment variables so that root can communicate with the currently logged-in user and send the notification to the GUI.
In my case, I did it as follow:
[Unit]
Description=< write your description>
After=systemd-user-sessions.service systemd-journald.service
[Service]
Type=simple
ExecStart=/bin/bash /<path to your script file>.sh
Restart=always
RestartSec=1
KillMode=process
IgnoreSIGPIPE=no
RemainAfterExit=yes
Environment="DISPLAY=:0" "XAUTHORITY=/home/<User name>/.Xauthority"
[Install]
WantedBy=multi-user.target
Here,
RemainAfterExit=yes
is very important to include in the service file.
Make sure to change all the parameters like Description, the user name, and the path to your script file.
Also, make sure that the script file has executable permission, which you can grant with:
sudo chmod +x <path to your script file>.sh
Here my script file is written in bash, and it shows the notification using the same 'notify-send' command.
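A minimal sketch of what such a script could look like (the title and body are taken from the question's Popen call; the file name and contents are otherwise hypothetical):

#!/bin/bash
# DISPLAY and XAUTHORITY are supplied by the unit's Environment= line above.
notify-send "Authentication" "True/False"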
Now here the Environment parameter is doing all the magic: it gives the service the logged-in user's DISPLAY and XAUTHORITY so it can talk to the X session.
You can read more about this behavior and the problem discussed over here.
I certainly don't know the complete inner workings of these files, but for me it worked just fine, so you can give it a try.
Please let me know whether it worked in your case.
Running graphical applications requires the DISPLAY environment variable to be set; it is set when you run the command from the CLI, but not when it runs from systemd (unless you set it explicitly).
This issue is covered in more depth in Writing a systemd service that depends on XOrg.
I agree with the general advice that systemd may not be the best tool for the job. You may be better off using an "auto start" feature of your desktop environment to run your app, which would set the correct things in the environment for you.
If you are running notify-send for desktop notifications from cron (or a systemd service), note that notify-send sends its values over D-Bus, so it needs to be told which bus to connect to. The address can be found by examining the DBUS_SESSION_BUS_ADDRESS environment variable in your desktop session.
Copy the values of DISPLAY and DBUS_SESSION_BUS_ADDRESS from your running environment and set them in the [Service] section's Environment= lines.
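For example, for a desktop session running as uid 1000, the lines in the unit would typically look like this (the exact bus path is an assumption; check echo $DBUS_SESSION_BUS_ADDRESS in your own session):

Environment="DISPLAY=:0" "DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"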
More info on the Arch Wiki:
https://wiki.archlinux.org/index.php/Cron#Running_X.org_server-based_applications

Allowing user www-data (apache) to call a python script that requires root privileges from a CGI script

The python script script.py is located in /usr/bin/monitor/scripts, and its main function is to use subprocess.check_call() and subprocess.check_output() to call various administrative tools (both C programs located in /usr/bin/monitor/ created specifically for the machine, and Linux executables in /sbin, like fdisk -l and df -h). It was written to run as root and print output from these programs in a useful way to the command line.
My project is to make the output from this script viewable through a webpage. I'm on a Beaglebone Black using Apache2, which executes files as user www-data from its DocumentRoot, /var/www/html/. The webpage is set up like this:
index.html uses an iframe to display the output of a python CGI script which is also located in /var/www/html/
script.cgi attempts to call script.py and display its output using the subprocess module
The problem is that script.py itself is called just fine, but each of the calls within script.py fails and returns script.py's error messages, because, I presume, they need to be run as root while Apache is running them as user www-data.
To try to get around this, I created a new group called bbb, added www-data to the group, then ran chown :bbb script.py to change the script's group to bbb. Unfortunately it still failed the same way, so I tried changing permissions from 755 to 775, which didn't work either. I tried running chown :bbb * on the files/programs that script.py uses, also to no avail. Also, some of the executables script.py uses are in /sbin, and I am cautious about just giving blanket access to directories like that.
Since my attempts at fixing ownership issues felt a bit like 1000-monkey code, I created a new version of the script in which I build a list of HTML output: after each print statement in the original code, I append the same line of text with HTML tags to the output list, then at the end of the script (in whatami) I have it create and write to a .txt file in /var/www/html/ and call os.chmod("/var/www/html/data.txt", 0o755) to give Apache access. The CGI then calls subprocess.check_call() on script.py, then opens, reads, and prints each line with HTML formatting to the iframe in the webpage.
This attempt at least produced accurate output, but... it only updates when the script is run in a terminal as root, rather than re-running script.py every time the page is refreshed, which rather undermines the point of the webpage. I assume this means the subprocess check_call in the CGI script is not working correctly, but for some reason the subprocess call itself doesn't throw any errors or indications of failure; the text file just comes back not updated. Even with the subprocess call in a "try" block followed by a print('call successful'), it returns the success message and then the un-updated text file.
I'm a bit at a loss trying to figure out how to just force the script to run and do its thing in the background so that the file updates, without just giving Apache root access. I've read a few things about either wrapping the python script in a shell that causes it to be run as root, or changing sudoers to give www-data sudo privileges, but I do not want to introduce security issues or make what was intended to be a simple script for getting output onto a webpage more convoluted than it already is. Any advice or direction would be greatly appreciated.
The best way, IMO, would be to "decouple" execution by creating a localhost-only service which you "call" from the apache process by connecting to a local socket.
E.g. if using systemd:
Create: /etc/systemd/system/my-svc.socket
[Unit]
Description=My svc socket
[Socket]
ListenStream=127.0.0.1:1234
Accept=yes
[Install]
WantedBy=sockets.target
Create: /etc/systemd/system/my-svc@.service
[Unit]
Description=My Service
Requires=my-svc.socket
[Service]
Type=simple
ExecStart=/opt/my-service/script.sh %i
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
Create /opt/my-service/script.sh:
#!/bin/sh
echo "id=$(id)"
echo "args=$*"
Finish setup with:
$ sudo chmod +x /opt/my-service/script.sh
$ sudo systemctl daemon-reload
Try it out:
$ nc 127.0.0.1 1234
id=uid=0(root) gid=0(root) groups=0(root)
args=55-127.0.0.1:1234-127.0.0.1:32938
Then, from your CGI, you'll need to do the equivalent of the nc command above (just a TCP connection).
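A minimal Python sketch of that connection from the CGI side (hypothetical code; it assumes the socket unit above is listening on 127.0.0.1:1234):

import socket

# Each connection makes systemd spawn a root-owned instance of
# my-svc@.service with its stdin/stdout wired to this socket.
with socket.create_connection(("127.0.0.1", 1234)) as conn:
    output = conn.makefile().read()  # read until the service closes the stream
print(output)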
--jjo

python initiated with systemd cannot start subprocess

I have a python script inside a virtualenv which is started using systemd.
[Unit]
Description=app
After=network.target
[Service]
Type=simple
User=user
Group=user
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
WorkingDirectory=/home/user/Projects/app
ExecStart=/home/user/Projects/app/venv/bin/python app.py
[Install]
WantedBy=multi-user.target
The thing is that the script uses subprocess.Popen(['python', 'whatever.py']) to open another python script. I got a "not found" error and discovered that python should be invoked with an absolute path, so I changed it and it worked well.
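For reference, the working call ended up being something like this (the interpreter path is inferred from the unit's ExecStart below, not stated in the question):

subprocess.Popen(['/home/user/Projects/app/venv/bin/python', 'whatever.py'])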
However, now I use a third-party library, pygatt, which internally uses subprocess to open gatttool or hcitool, which are in $PATH (system-wide binaries, usually in /usr/bin).
So now I cannot change that library (I could by forking it, but I hope I don't have to).
How come systemd cannot spawn python subprocesses without using an absolute path? Without systemd (running from the console), everything works.
I'm not sure, but it is quite possible that an environment variable set on one configuration line isn't taken into account on the following ones.
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
Here you're expecting VIRTUAL_ENV to be set so that $VIRTUAL_ENV can be evaluated on the next line, but that may not work. I would try hardcoding it on the second line:
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=/home/user/Projects/app/venv/bin:$PATH
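As far as I know, systemd performs no shell-style variable expansion inside Environment= lines at all, so the $PATH above would also stay literal. A safer sketch hardcodes the whole search path (the system directories listed are an assumption based on a typical default PATH):

Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=/home/user/Projects/app/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin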
