How to unlock Gnome Keyring from Python running on a cron job?

I'm hooking a Python script up to run with cron (on Ubuntu 12.04), but authentication is not working.
The cron script accesses a couple of services and has to provide credentials. Storing those credentials with keyring is as easy as can be, except that when the cron job actually runs, the credentials can't be retrieved. The script fails every time.
As near as I can tell, this has something to do with the environment cron runs in. I tracked down a set of posts which suggest that the key is having the script export DBUS_SESSION_BUS_ADDRESS. All well and good: I can get that address, export it, and source it from Python fairly easily. But it simply generates a new error: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11. Setting DISPLAY=:0 has no effect.
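For reference, this is roughly the shape of what I'm doing from Python (a minimal sketch; the bus address and the service/user names are placeholders):
import os

# Set the session bus address before keyring picks a backend; the value
# here is only an example - copy the real one from a desktop session.
os.environ["DBUS_SESSION_BUS_ADDRESS"] = "unix:abstract=/tmp/dbus-XXXXXXXXXX"  # placeholder
os.environ["DISPLAY"] = ":0"

import keyring  # imported after the environment is prepared

password = keyring.get_password("myservice", "myuser")  # placeholder names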
So, has anybody figured out how to unlock gnome-keyring from Python running on a Cron job on Ubuntu 12.04?

I'm sorry to say I don't have the answer, but I think I know a bit of what's going on, based on an issue I'm dealing with. I'm trying to get a web application and a cron script to use some code that stashes an OAuth token for Google's API into a keyring using python-keyring.
No matter what I do, something about the environment the web app and cron job run in requires manual intervention to unlock the keyring. That's quite impossible when your code is running in a non-interactive session. The problem persists even with the tricks suggested in my research, like giving the process owner a login password that matches the keyring password, or setting the keyring password to an empty string.
I will almost guarantee that your error stems from GNOME Keyring trying to fire up an interactive (graphical) prompt and bombing out because you can't do that from cron.

Install keychain:
sudo apt-get install keychain
Put this in your $HOME/.bash_profile:
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval `ssh-agent -s`
fi
eval `keychain --eval id_rsa`
It will ask for your password at the first login and will store your credentials until the next reboot.
Then source it at the beginning of your cron script:
source $HOME/.keychain/${HOSTNAME}-sh
If you use another language such as Python, call it from a wrapper script, or parse the keychain file yourself as in the sketch below.
It works for me, I hope it helps you too.
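If the cron job goes straight into Python, a minimal wrapper sketch that reads the keychain file itself might look like this (the file path follows the convention above; parsing assumes keychain's usual VAR=value; export VAR; lines):
#!/usr/bin/env python
import os
import re
import socket

# Parse the keychain file instead of sourcing it from a shell wrapper.
keychain_file = os.path.expanduser(
    "~/.keychain/{0}-sh".format(socket.gethostname()))

with open(keychain_file) as fh:
    for line in fh:
        match = re.match(r'(\w+)=([^;]+);', line)
        if match:
            os.environ[match.group(1)] = match.group(2)

# ... the rest of the script can now use the agent via SSH_AUTH_SOCK ...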

Adding:
PID=$(pgrep -u <replace with target userid> bash | head -n 1)
DBUS="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/"$PID"/environ | sed 's/DBUS_SESSION_BUS_ADDRESS=//' )"
export DBUS_SESSION_BUS_ADDRESS=$DBUS
at the beginning of the script listed in the crontab worked for me. I still need to unlock the keyring interactively once after a boot, but reboots are not frequent, so this works well enough.
(from https://forum.duplicacy.com/t/cron-job-failed-to-get-value-from-keyring/1238/3)
so the full script run by cron would be:
#!/usr/bin/bash
PID=$(pgrep -u <replace with target userid> bash | head -n 1)
DBUS="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/"$PID"/environ | sed 's/DBUS_SESSION_BUS_ADDRESS=//' )"
export DBUS_SESSION_BUS_ADDRESS=$DBUS
/home/user/miniconda/bin/conda run -n myenv python myscript.py
This uses a conda environment; change the Python invocation to match however you have set up Python to run.
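The same lookup can also be done directly at the top of the Python script, which avoids the shell wrapper entirely (a sketch; it assumes the target user has at least one bash process alive):
import getpass
import os
import subprocess

# Find a bash process of the logged-in user and borrow its session bus
# address from /proc/<pid>/environ (NUL-separated KEY=VALUE pairs).
pid = subprocess.check_output(
    ["pgrep", "-u", getpass.getuser(), "bash"]).split()[0].decode()

with open("/proc/{0}/environ".format(pid), "rb") as fh:
    for entry in fh.read().split(b"\0"):
        if entry.startswith(b"DBUS_SESSION_BUS_ADDRESS="):
            os.environ["DBUS_SESSION_BUS_ADDRESS"] = entry.split(b"=", 1)[1].decode()
            break

# ... the rest of the script can now talk to the session bus ...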

Related

systemd service not executing notify-send

I want to generate pop-ups for certain events in my python script. I am using 'notify-send' for that purpose.
subprocess.Popen(['notify-send', "Authentication", "True/False"])
The above command executes fine in a terminal, but when I run it from a systemd service it does not generate any pop-up.
When I see logs there are no errors.
You need to first set the DISPLAY and XAUTHORITY environment variables so that root can talk to the session of the currently logged-in user and show the notification in the GUI.
In my case, I did it as follow:
[Unit]
Description=< write your description>
After=systemd-user-sessions.service systemd-journald.service
[Service]
Type=simple
ExecStart=/bin/bash /<path to your script file>.sh
Restart=always
RestartSec=1
KillMode=process
IgnoreSIGPIPE=no
RemainAfterExit=yes
Environment="DISPLAY=:0" "XAUTHORITY=/home/<User name>/.Xauthority"
[Install]
WantedBy=multi-user.target
Here,
RemainAfterExit=yes
is very important to include in the service file.
Make sure to change all the parameters like Description, the user name, and the path to your script file.
Also make sure the script file has executable permission by running:
sudo chmod +x <path to your script file>.sh
Here my script file is written in bash and shows the notification using the same notify-send command.
Now, the Environment parameter here is doing all the magic.
You can read more about this behavior and the problem discussed over here.
I certainly don't know the complete workings of these files or why this worked, but for me it worked just fine.
So you can give it a try. Please let me know whether it works in your case.
Running graphical applications requires the DISPLAY environment variable to be set; it is set when you run from the CLI, but not when running from systemd (unless you explicitly set it).
This issue is covered in more depth in Writing a systemd service that depends on XOrg.
I agree with the general advice that systemd may not be the best tool for the job. You may be better off using an "auto start" feature of your desktop environment to run your app, which would set the correct things in the environment that you need.
If you are running notify-send for desktop notifications from cron, note that notify-send talks to D-Bus, so it needs to be told which bus to connect to. The address can be found by examining the DBUS_SESSION_BUS_ADDRESS environment variable in a desktop session.
Copy the values of DISPLAY and DBUS_SESSION_BUS_ADDRESS from your running desktop environment and set them in the [Service] Environment section.
More info on the Arch Wiki:
https://wiki.archlinux.org/index.php/Cron#Running_X.org_server-based_applications
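From Python, that amounts to passing the two variables to the subprocess (a sketch; both values are examples and must match your actual session):
import os
import subprocess

# Example values only - copy the real ones from a running desktop session
env = dict(os.environ)
env["DISPLAY"] = ":0"
env["DBUS_SESSION_BUS_ADDRESS"] = "unix:path=/run/user/1000/bus"

subprocess.Popen(["notify-send", "Authentication", "True/False"], env=env)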

Run python script on Google Cloud Compute Engine

I know this is an exact copy of this question, but I've been trying different solutions for a while and haven't come up with anything.
I have this simple script that uses PRAW to find posts on Reddit. It takes a while, so I need it to stay alive when I log out of the shell as well.
I tried to set it up as a start-up script and to use nohup to run it in the background, but none of this worked. I followed the quickstart and I can get the hello world app to run, but all these examples are for web applications, and all I want is to start a process on my VM and keep it running when I'm not connected, without using .yaml configuration files and such. Can somebody please point me in the right direction?
Well, in the end, using nohup was the answer. I'm new to the GNU environment and I just assumed it didn't work when I first tried it. My program was exiting with an error, but I didn't check the nohup.out file, so I was unaware of it.
Anyway here is a detailed guide for future reference (Using Debian Stretch):
Make your script executable:
chmod +x myscript.py
Run the nohup command to execute the script. The trailing & puts the process in the background, and nohup keeps it alive after you exit the shell. I've added the shebang line to my Python script (see the sketch after these steps), so there's no need to call python here:
nohup /path/to/script/myscript.py &
Log out from the shell if you want:
logout
Done! Now your script is up and running. You can log back in and make sure the process is still alive by checking the output of this command:
ps -e | grep myscript.py
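For completeness, a sketch of how myscript.py might start (the loop body is just a stand-in for the PRAW logic):
#!/usr/bin/env python
# myscript.py - the shebang above is what lets `nohup ./myscript.py &` work.
# Anything printed here ends up in nohup.out, so check it if the script dies.
import time

while True:
    # ... the PRAW calls that search Reddit would go here ...
    time.sleep(60)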

How to log in to an AWS server & do some maintenance on it using Python?

I need to log in to an AWS Linux server, then create a folder, add some ownership to it, and lastly restart Tomcat.
I know that I should be using Ansible or another config management tool, and that's the easy way... but out of curiosity I want to do it using Python.
So basically, the steps that need to be followed are:
Log in to the machine
mkdir /mnt/some_new_folder
Give permissions: chown tomcat7:tomcat7 /mnt/some_new_folder
Restart Tomcat: sudo service tomcat7 restart
Lastly, log out
Is it possible to do all this via Python script ?
With open source tools like Python everything is possible. Only your knowledge sets the limit.
I would suggest using the sh module, which allows easy execution of remote commands over SSH.
sh + SSH tutorial.
You can use it like:
import sh
print(sh.ssh("username@example.com", "mkdir /foo/bar"))
First you need to set up proper SSH keys and an SSH agent.
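The full maintenance sequence from the question could then look something like this (a sketch, assuming key-based login and that the user may run sudo without a TTY):
import sh

host = "username@example.com"  # placeholder

# Each call runs one remote command over SSH
print(sh.ssh(host, "mkdir -p /mnt/some_new_folder"))
print(sh.ssh(host, "sudo chown tomcat7:tomcat7 /mnt/some_new_folder"))
print(sh.ssh(host, "sudo service tomcat7 restart"))
# ssh returns after each command, so there is no session to log out of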

Start couchpotato.py at boot, without needing to leave ssh session open. ReadyNAS (Debian 7)

I need this command to run automatically at boot, or when told to. At the moment I need to run the command in SSH and leave the session open, otherwise it stops.
python CouchPotatoServer/CouchPotato.py
This is on a ReadyNAS (Debian 7)
One easy way to do this would be to create it as a service. Take a look in /etc/init.d and you will find scripts that run as services. Copy one and modify it so that it calls your Python script. A good example could be the init script used for starting the avahi daemon. Now you can use 'service couchpotato start/stop/status', etc. It will also start the service automatically at boot time if the server ever reboots. Find a simple file to use as your template and google init scripts for further assistance. Good luck.
From this page:
To run on boot copy the init script. sudo cp CouchPotatoServer/init/ubuntu /etc/init.d/couchpotato
Change the paths inside the init script. sudo nano /etc/init.d/couchpotato
Make it executable. sudo chmod +x /etc/init.d/couchpotato
Add it to defaults. sudo update-rc.d couchpotato defaults
CouchPotatoServer/init/ubuntu can be found here
sudo update-rc.d <service> <runlevels> is the official Debian way of inserting a service at startup. Its manpage can be read here.
my 2 cents,
Use chkconfig to add the service and specify the run level. Google will give you all you need for examples of how to do this. Good luck.

How can you automate terminal commands?

I'm tired of doing this.
ssh me@somehost.com
input my password
sudo su - someuser
input my password
cd /some/working/directory
<run some commands>
Is there any way to automate this? Do I need a special shell? Or a shell emulator? Can I programmatically drive the shell up to a certain point, then run manual commands on it?
Bonus points if it's programmed in Python, for extra hacking goodness.
Edit: all the answers below focus on the "full automation" part of the question, whereas the hard part is what I highlighted above. Here is another example to see if I can capture the essence:
ssh me@somehost.com
<get a shell because keys are setup>
sudo su - user_that_deploys_the_app
<input password, because we don't want to give passwordless sudo to developers>
cd env; source bin/activate
cd /path/where/ur/app/is/staging
<edit some files, restart the server, edit some more, check the logs, etc.>
exit the term
For the SSH/authentication piece, you can set up passwordless authentication using keys. Then you can simply use ssh and a bash script to execute a series of commands in an automated fashion.
You could use Python here, but if you are executing a series of shell commands, it's probably a better idea to use a shell script, as that's precisely what they do.
Alternately, look into Fabric for your automation needs. It's Python-based, and your "recipes" are written in Python.
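A hedged sketch using Fabric's newer 2.x API (the hostname and directory are placeholders; older 1.x fabfiles look different):
import getpass

from fabric import Config, Connection  # Fabric 2.x style API

# Hostname is a placeholder; key-based SSH authentication is assumed
config = Config(overrides={"sudo": {"password": getpass.getpass("sudo: ")}})
conn = Connection("me@somehost.com", config=config)

conn.sudo("whoami")  # sudo() answers the password prompt for you
with conn.cd("/some/working/directory"):
    conn.run("ls -l")  # any commands you would otherwise type by hand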
I'm not quite sure what you're asking, but you're probably asking about getting SSH working in passwordless mode using public keys. The general idea is that you generate an SSH keypair:
ssh-keygen -t rsa
which gives you id_rsa and id_rsa.pub. You append the contents of id_rsa.pub to the ~/.ssh/authorized_keys file of your target user, and from that point on SSH will not ask for credentials. In your example, this will work out to:
Only once
# On your source machine
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub
# Copy this to the clipboard
# On somehost.com
su - someuser
# edit ~/.ssh/authorized_keys and paste what you had copied from previous step
From now on, you can now just run
ssh someuser@somehost.com "sh -c 'cd /some/dir; command.sh'"
and not be prompted for credentials.
Fabric is a fine choice, as others have pointed out. There is also pexpect, which might be more what you're looking for.
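A hedged pexpect sketch of the flow from the edit above (the prompt patterns, user name, and paths are assumptions about your environment):
import getpass
import pexpect

password = getpass.getpass("sudo password: ")

child = pexpect.spawn("ssh me@somehost.com")  # key-based login assumed
child.expect(r"\$")                           # wait for the remote prompt
child.sendline("sudo su - user_that_deploys_the_app")
child.expect("password")                      # sudo asks for *your* password
child.sendline(password)
child.expect(r"\$")
child.sendline("cd /path/where/ur/app/is/staging")
child.expect(r"\$")

# Hand control back to you for the manual part (edit files, check logs...)
child.interact()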
You can play with autoexpect. It creates an Expect script (Expect is a scripting language intended to handle interaction with a user). Run
autoexpect ssh me@somehost.com
followed by the rest of your commands. A script named script.exp will be created.
Please note that the exact input and output will be recorded by the script. If the output may differ from execution to execution, you'll need to modify the generated script a bit.
As Daniel pointed out, you need a secure way of doing ssh and sudo on the boxes. Those items are universal to dealing with Linux/Unix boxes. Once you've tackled that, you can use Fabric, a Python-based automation tool.
You can set stuff up in your ~/.ssh/config
For example:
Host somehost
    User test
See ssh_config(5) for more info.
Next, you can generate a SSH key using ssh-keygen(1), run ssh-agent(1), and use that for authentication.
If you want to run a command on a remote machine, you can just use something like:
$ ssh somehost "sh myscript.sh ${myparameter}"
I hope this at least points you in the right direction :)
If you need sudo access, then there are obvious potential security issues, though... You can use ChrootDirectory on a per-user basis inside a Match block. See sshd_config(5) for info.
Try the paramiko module; it can meet your requirement.
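For example, a minimal paramiko sketch (hostname and username are placeholders; key-based auth is assumed):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("somehost.com", username="me")  # uses your default SSH key

# Each exec_command gets a fresh shell, so chain cd with the command
stdin, stdout, stderr = client.exec_command("cd /some/working/directory && ls -l")
print(stdout.read().decode())

client.close()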
