I want to generate pop-ups for certain events in my python script. I am using 'notify-send' for that purpose.
subprocess.Popen(['notify-send', "Authentication", "True/False"])
The above command executes fine in a terminal, but when I run it from a systemd service it does not generate any pop-up.
When I check the logs, there are no errors.
You first need to set the environment variables so that root can communicate with the currently logged-in user's session and send the notification to the GUI.
In my case, I did it as follows:
[Unit]
Description=< write your description>
After=systemd-user-sessions.service systemd-journald.service
[Service]
Type=simple
ExecStart=/bin/bash /<path to your script file>.sh
Restart=always
RestartSec=1
KillMode=process
IgnoreSIGPIPE=no
RemainAfterExit=yes
Environment="DISPLAY=:0" "XAUTHORITY=/home/<User name>/.Xauthority"
[Install]
WantedBy=multi-user.target
Here,
RemainAfterExit=yes
is very important to mention in the service file.
Make sure to change all the parameters, like the Description, the user name, and the path to your script file.
Also, make sure that the script file has executable permission by running:
sudo chmod +x <path to your script file>.sh
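After creating or editing the unit file, you will also need to reload systemd and enable the service, for example (assuming a unit named myscript.service; use your own name):
sudo systemctl daemon-reload
sudo systemctl enable --now myscript.service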
Here my script file is written in bash, and it shows the notification using the same 'notify-send' command.
The Environment parameter is doing all the magic here.
You can read more about this behavior and the problem discussed over here.
I certainly don't know the complete inner workings of these files, but for me it worked just fine.
So you can give it a try.
Please let me know whether it works in your case.
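If you are triggering the notification from Python yourself, you can also set these variables at the call site instead of in the unit file. A minimal sketch (the DISPLAY and XAUTHORITY values are assumptions; adjust them to your session and user):

import os
import subprocess

# Start from the current environment and add what notify-send needs to reach the X session
env = dict(os.environ)
env["DISPLAY"] = ":0"  # assumed display; check with 'echo $DISPLAY' in your graphical session
env["XAUTHORITY"] = "/home/<user name>/.Xauthority"  # assumed path; substitute your user name

subprocess.Popen(["notify-send", "Authentication", "True/False"], env=env)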
Running graphical applications requires the DISPLAY environment variable to be set. It is set when you run the command from the CLI, but not when it is run from systemd (unless you set it explicitly).
This issue is covered more in Writing a systemd service that depends on XOrg.
I agree with the general advice that systemd may not be the best tool for the job. You may be better off using the "autostart" feature of your desktop environment to run your app, which would set up the environment it needs.
If you are running notify-send for desktop notifications from cron (the same applies to systemd), note that notify-send sends its values over D-Bus, so it needs to be told which session bus to connect to. The address can be found by examining the DBUS_SESSION_BUS_ADDRESS environment variable in your running desktop session.
Copy the values of DISPLAY and DBUS_SESSION_BUS_ADDRESS from your running environment and set them in the [Service] section with Environment=.
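For example, in the unit file (the values below are typical placeholders; copy the real ones from echo $DISPLAY and echo $DBUS_SESSION_BUS_ADDRESS inside your desktop session):
[Service]
Environment="DISPLAY=:0"
Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"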
More info on the Arch Wiki:
https://wiki.archlinux.org/index.php/Cron#Running_X.org_server-based_applications
I am a pretty new Python programmer and am a little bit familiar with crontab. What I am trying to do is probably not best practice, but it is what I am most familiar with.
I have a Raspberry Pi with a couple of Python scripts I want to run on boot and stay running in the background. They are infinite-loop programs. They are tested, working in a terminal, and have been functioning for a couple of weeks. I'm just getting tired of manually starting them up whenever the Pi goes through a power cycle.
So I did a sudo crontab -e and added this line as my only entry
#reboot /usr/bin/python3 /usr/bin/script.py &
If I copy-paste this line exactly (minus the #reboot), it runs successfully on the command line.
I am using the command:
pgrep -af python
to check to see if it is running. I normally see two scripts running there, but not the one I am trying to add.
I am not sure where I am going wrong or what the best way is to troubleshoot my issue. From the research I have been doing, it seems like it should work.
Thanks for your help
Kevin
You might find it easier to create a systemd service file for each program that you want to start when your Raspberry Pi boots. systemd comes with a few more tools to help you debug your configuration.
This is what an example systemd service file (located at /etc/systemd/system/myscript.service) would look like:
[Unit]
Description=My service
After=network.target
[Service]
ExecStart=/usr/bin/python3 /usr/bin/script.py
WorkingDirectory=/home/pi/myscript
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi
[Install]
WantedBy=multi-user.target
and then you can enable this program to run on boot with the command:
sudo systemctl enable myscript.service
These examples are from Raspberry Pi's documentation about systemd. But because systemd is widely used in the Linux world, you can also follow documentation and Stack Overflow answers for other Linux distributions.
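To try the service immediately and inspect its output without rebooting, the usual tools are (using the unit name from the example above):
sudo systemctl start myscript.service
sudo systemctl status myscript.service
sudo journalctl -u myscript.service -f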
The Python script script.py is located in /usr/bin/monitor/scripts and its main function is to use subprocess.check_call() and subprocess.check_output() to call various administrative tools (both C programs located in /usr/bin/monitor/ created specifically for the machine, and Linux executables in /sbin, like fdisk -l and df -h). It was written to run as root and print the output from these programs in a useful way to the command line.
My project is to make the output from this script viewable through a webpage. I'm on a Beaglebone Black using Apache2, which executes files as user www-data from its DocumentRoot, /var/www/html/. The webpage is set up like this:
index.html uses an iframe to display the output of a python CGI script which is also located in /var/www/html/
script.cgi attempts to call script.py and display its output, using the subprocess module
The problem is that script.py itself is being called just fine, but each of the calls within script.py fails and returns script.py's error messages; I presume this is because they need to be run as root, while Apache runs them as user www-data.
To try to get around this, I created a new group called bbb, added www-data to it, then ran chown :bbb script.py to change the script's group to bbb. Unfortunately it was still causing the same problems, so I tried changing permissions from 755 to 775, which didn't work either. I tried running chown :bbb * on the files/programs that script.py uses, also to no avail. Also, some of the executables script.py uses are in /sbin, and I am cautious about just giving blanket access to directories like this.
Since my attempts at fixing ownership issues felt a bit like 1000-monkey code, I created a new version of the script that builds a list of HTML output: after each print statement in the original code, I append the same line of text, with HTML tags, to the list, and at the end of the script (in whatami) I create and write a .txt file in /var/www/html/ and call os.chmod("/var/www/html/data.txt", 0o755) to give Apache access. The CGI then calls subprocess.check_call() on script.py, then opens, reads, and prints each line with HTML formatting to the iframe in the webpage. This attempt at least produced accurate output, but... it only updates when the script is run in a terminal as root, rather than re-running script.py every time the page is refreshed, which rather undermines the point of the webpage. I assume this means the check_call in the CGI script is not working correctly, but the subprocess call itself doesn't throw any errors or other indications of failure; the text file just comes back without being updated. Even with the subprocess call in a "try" block followed by a print('call successful'), it returns the success message and then the un-updated text file.
I'm a bit at a loss trying to figure out how to just force the script to run and do its thing in the background so that the file updates, without simply giving Apache root access. I've read a few things about either wrapping the Python script in a shell that causes it to run as root, or changing sudoers to give www-data sudo privileges, but I do not want to introduce security issues or turn what was intended to be a simple script exposing output to a webpage into something more convoluted than it already is. Any advice or direction would be greatly appreciated.
The best way, IMO, would be to "decouple" execution by creating a localhost-only service, which you "call" from the Apache process by connecting to a local socket.
E.g. if using systemd:
Create: /etc/systemd/system/my-svc.socket
[Unit]
Description=My svc socket
[Socket]
ListenStream=127.0.0.1:1234
Accept=yes
[Install]
WantedBy=sockets.target
Create: /etc/systemd/system/my-svc@.service (note the @: with Accept=yes, systemd spawns one instance of this template unit per connection)
[Unit]
Description=My Service
Requires=my-svc.socket
[Service]
Type=simple
ExecStart=/opt/my-service/script.sh %i
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
Create /opt/my-service/script.sh:
#!/bin/sh
echo "id=$(id)"
echo "args=$*"
Finish setup with:
$ sudo chmod +x /opt/my-service/script.sh
$ sudo systemctl daemon-reload
$ sudo systemctl start my-svc.socket
Try it out:
$ nc 127.0.0.1 1234
id=uid=0(root) gid=0(root) groups=0(root)
args=55-127.0.0.1:1234-127.0.0.1:32938
Then, from your CGI, you'll need to do the equivalent of the nc command above (just a TCP connection).
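A minimal sketch of that in Python, assuming the service listens on 127.0.0.1:1234 as configured above:

import socket

# Connect to the socket-activated service and read everything it writes back
with socket.create_connection(("127.0.0.1", 1234), timeout=10) as conn:
    conn.shutdown(socket.SHUT_WR)  # we only read; signal EOF on the write side
    output = b"".join(iter(lambda: conn.recv(4096), b""))

print(output.decode())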
--jjo
I have a python script inside a virtualenv which is started using systemd.
[Unit]
Description=app
After=network.target
[Service]
Type=simple
User=user
Group=user
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
WorkingDirectory=/home/user/Projects/app
ExecStart=/home/user/Projects/app/venv/bin/python app.py
[Install]
WantedBy=multi-user.target
The thing is that the script uses subprocess.Popen(['python', 'whatever.py']) to open another Python script. I got a "not found" error and discovered that python should be invoked with an absolute path, so I changed it and it worked well.
However, now I use a third-party library, pygatt, which internally uses subprocess to open gatttool or hcitool, which are in $PATH (system-wide binaries, usually in /usr/bin).
So now I cannot change that library (I could by forking it, but I hope I don't have to).
How come systemd cannot spawn python subprocesses without using an absolute path? Without systemd (running from the console), everything works.
I'm not sure, but it is very possible that an environment variable set on one configuration line isn't taken into account on the following ones.
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
Here you're expecting $VIRTUAL_ENV to be expanded on the next line, but systemd does not perform shell-like variable expansion in Environment= lines (the literal text is used, so the trailing $PATH would not expand either). I would try hardcoding the whole second line:
Environment=VIRTUAL_ENV=/home/user/Projects/app/venv
Environment=PATH=/home/user/Projects/app/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
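As a side note, for Python children you can sidestep PATH entirely by launching them with sys.executable, the absolute path of the interpreter that is currently running (here the venv's python, since ExecStart points at it):

import subprocess
import sys

# Uses the same interpreter as the parent process; no PATH lookup involved
subprocess.Popen([sys.executable, 'whatever.py'])

This doesn't help with third-party libraries like pygatt that look up system binaries, though; for those, PATH still needs to contain /usr/bin.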
I'm looking to create a script in Python that initiates an SSH session with a server. I know it should be a simple process; I'm just not sure where to start. My ultimate plan is to automate this script to run on startup. I am not even sure Python is the best way to go; I just know it comes preloaded on Raspbian for the Pi.
A simple bash script would be better suited to the task. It is possible in Python, but there's no obvious reason to make it harder than necessary.
From
write a shell script to ssh to a remote machine and execute commands:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
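If you do want to stay in Python, here is a minimal sketch that drives the system ssh client through subprocess (it assumes key-based authentication is already set up for these hosts):

import subprocess

USERNAME = "someUser"
HOSTS = ["host1", "host2", "host3"]
SCRIPT = "pwd; ls"

for hostname in HOSTS:
    # Equivalent of: ssh -l someUser hostname "pwd; ls"
    subprocess.run(["ssh", "-l", USERNAME, hostname, SCRIPT], check=True)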
From how do i run a script at start up (askubuntu):
You will need root privileges for any of the following. To get root, open
a terminal and run the command
sudo su
and the command prompt will change to '#', indicating that the terminal
session has root privileges.
Alternative #1. Add an initscript.
Create a new script in /etc/init.d/myscript.
vi /etc/init.d/myscript
(Obviously it doesn't have to be called "myscript".) In this script,
do whatever you want to do. Perhaps just run the script you mentioned.
#!/bin/sh
/path/to/my/script.sh
Make it executable.
chmod ugo+x /etc/init.d/myscript
Configure the init system to run this script at startup.
update-rc.d myscript defaults
Alternative #2. Add commands to /etc/rc.local
vi /etc/rc.local
with content like the following.
# This script is executed at the end of each multiuser runlevel
/path/to/my/script.sh || exit 1 # Added by me
exit 0
Alternative #3. Add an Upstart job.
Create /etc/init/myjob.conf
vi /etc/init/myjob.conf
with content like the following
description "my job"
start on startup
task
exec /path/to/my/script.sh
Depending on what you do with the SSH connection, however, if it needs to stay open for the entire uptime of the device, you will need some more trickery, because SSH connections are closed automatically after a period of inactivity.
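One standard piece of that trickery is the OpenSSH client's keep-alive option, which makes the client send a probe every N seconds so the connection is not dropped as idle, e.g.:
ssh -o ServerAliveInterval=60 -l someUser host1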
I need this command to run automatically on boot, or when told to. At the moment I need to run the command in an SSH session and leave the session open, otherwise it stops.
python CouchPotatoServer/CouchPotato.py
This is on a ReadyNAS (Debian 7)
One easy way to do this would be to create it as a service. Take a look in /etc/init.d and you will find scripts that run as services. Copy one and modify it so that it calls your Python script. A good example could be the init script used for starting the avahi daemon. Then you can use 'service couchpotato start/stop/status', etc. It will also start the service automatically at boot time if the server ever reboots. Find a simple file to use as your template and google init scripts for further assistance. Good luck.
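As a starting point for the copy-and-modify approach, Debian also ships a commented template init script that you can copy instead of a live one (the target name couchpotato here is just an example):
sudo cp /etc/init.d/skeleton /etc/init.d/couchpotato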
From this page:
To run on boot copy the init script. sudo cp CouchPotatoServer/init/ubuntu /etc/init.d/couchpotato
Change the paths inside the init script. sudo nano /etc/init.d/couchpotato
Make it executable. sudo chmod +x /etc/init.d/couchpotato
Add it to defaults. sudo update-rc.d couchpotato defaults
CouchPotatoServer/init/ubuntu can be found here
sudo update-rc.d <service> <runlevels> is the official Debian way of inserting a service at startup. Its manpage can be read there.
My 2 cents: use chkconfig to add the service and specify the run levels. Google will give you all you need in the way of examples of how to do this. Good luck.