Bash at and $DISPLAY, howto? - python

I'm working on a Python GUI application, and at some point I need to defer the execution of large parts of Python code. I have tried using at for this:
line = 'echo "python ./executor.py ibm ide graph" | at -t 1403211632'
subprocess.Popen(line, shell=True)
This line gives no error and effectively starts the job at the given time.
Now, each option for executor.py is a job it has to do, and each job is protected with a try/except block that logs failures. In some cases I catch this error:
14-03-21_17:07:00 starting ibm for Simulations/140321170659
Failed to execute ibm : no display name and no $DISPLAY environment variable
Aborted the whole execution.
I have tried the following, thinking I could provide $DISPLAY to the environment, with no success (same error):
line = 'DISPLAY=:0.0;echo "python ./executor.py Simulations/140321170936 eid defer" | at -t 1403211711'
From man at :
The working directory, the environment (except for the variables BASH_VERSINFO, DISPLAY, EUID, GROUPS, SHELLOPTS, TERM, UID, and _) and the umask are retained from the time of invocation.
Question:
What can possibly be causing this error?
How do I provide the $DISPLAY variable to at's environment?
Solution:
I actually needed to put export DISPLAY=:0.0 inside the echo so that it is set after at has set up its environment.
line = 'echo "export DISPLAY=:0.0; python..." | at ...'
subprocess.Popen(line, shell=True)
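For reference, a minimal runnable sketch of the working call; the timestamp and executor.py arguments are just the placeholders from the question:
import subprocess
# Export DISPLAY inside the command that `at` will run, so it is set after
# `at` has built its (DISPLAY-stripped) environment. Timestamp and arguments
# are placeholders taken from the question above.
line = ('echo "export DISPLAY=:0.0; python ./executor.py ibm ide graph" '
        '| at -t 1403211632')
subprocess.Popen(line, shell=True)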

You will need to set DISPLAY in the python script by taking the current environment, adding the DISPLAY setting and passing the new environment to the sub-shell created by Popen.
import os
import subprocess
new_env = dict(os.environ)
new_env['DISPLAY'] = ':0.0'
...
...
subprocess.Popen(line, env=new_env, shell=True)

Related

How to change the working directory on Slurm

I am working on a Slurm cluster where I am running a couple of jobs. It is hard for me to check the jobs one by one in each directory.
I managed to check in which directory the jobs are running using
scontrol show job JOB_ID
This command gives me various lines of output. A few of them are listed below:
OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
Command=/home/astha/vt-st/scf-test/303030/49/qsub.job
WorkDir=/home/astha/vt-st/scf-test/303030/49
StdErr=/home/astha/vt-st/scf-test/303030/49/qsub.job.e1205
StdIn=/dev/null
StdOut=/home/astha/vt-st/scf-test/303030/49/qsub.job.o1205
Power=
MailUser=(null) MailType=NONE
Here WorkDir in the above output (this is an example; the path will be different for each job) is the directory to which I want to switch,
then
cd /home/astha/vt-st/scf-test/303030/49
But typing these long commands makes my fingers cry.
I have tried to make a small Python script to print scontrol show job:
# Try block
try:
    # Take a job number
    job_id = int(input("Job ID: "))
    print("scontrol show job %d" % job_id)
# Exception block
except ValueError:
    # Print error message
    print("Enter a numeric value")
But then how should I improve it so that it takes my given job number, greps the WorkDir from the output, and changes to that directory?
You will not be able to have a Python script change your current working directory easily, but you can do it simply in Bash like this:
$ cdjob() { cd $(squeue -h -o%Z -j "$1") ; }
This will create a Bash function named cdjob that accepts a job ID as a parameter. You can check that it was created correctly with:
$ type cdjob
cdjob is a function
cdjob ()
{
cd $(squeue -h -o%Z -j "$1")
}
After you run the above command (which you can place in your startup script .bashrc or .bash_profile if you want it to survive logouts) you will be able to do
$ cdjob 22078365
and this will bring you to the working directory of job 22078365 for instance. You see that rather than trying to parse the output of scontrol I am using the output formatting options of squeue to simply output the needed information.
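If you would rather keep the lookup in Python (the cd itself still has to happen in the shell, e.g. via cd "$(python cdjob.py 22078365)"), here is a minimal sketch, assuming squeue is on your PATH:
#!/usr/bin/env python
# Print the working directory of a Slurm job, given its job ID on the command line.
import subprocess
import sys
job_id = sys.argv[1]
# -h suppresses the header, -o %Z prints only the job's working directory.
workdir = subprocess.check_output(["squeue", "-h", "-o", "%Z", "-j", job_id])
print(workdir.decode().strip())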

How to call . /home/test.sh file in python script

I have a file that I load with . /home/test.sh (the space between the first . and / is intentional), and it contains some environment variables. I need to load this file and then run the .py. If I run the command manually first on the Linux server and then run the Python script, it generates the required output. However, I want to call . /home/test.sh from within Python to load the profile and run the rest of the code. If this profile is not loaded, the Python script runs and gives 0 as output.
The call
subprocess.call('. /home/test.sh', shell=True)
runs fine, but the profile is not loaded in the shell that runs the Python code, so the desired output is not produced.
Can someone help?
Environment variables set in a child process are not inherited back by the parent process, which is why your simple approach does not work.
If you are trying to pick up environment variables that have been set in your test.sh, then one thing you could do instead is to use env in a sub-shell to write them to stdout after sourcing the script, and then in Python you can parse these and set them locally.
The code below will work provided that test.sh does not write any output itself. (If it does, a workaround is to echo some separator string after sourcing it and before running env, and then in the Python code strip off the separator string and everything before it; a sketch of that variant follows the code below.)
import subprocess
import os
p = subprocess.Popen(". /home/test.sh; env -0", shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, _ = p.communicate()
for varspec in out.decode().split("\x00")[:-1]:
    pos = varspec.index("=")
    name = varspec[:pos]
    value = varspec[pos + 1:]
    os.environ[name] = value
# just to test whether it works - output of the following should include
# the variables that were set
os.system("env")
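If test.sh does produce output of its own, the separator-string variant mentioned above might look like this (the marker string is arbitrary):
import os
import subprocess
# An arbitrary marker separates test.sh's own output from the env -0 dump.
marker = "___ENV_BOUNDARY___"
out = subprocess.check_output(". /home/test.sh; echo %s; env -0" % marker,
                              shell=True)
# Discard everything up to and including the marker line, then parse the
# NUL-separated NAME=value entries exactly as above.
env_dump = out.decode().split(marker, 1)[1].lstrip("\n")
for varspec in env_dump.split("\x00")[:-1]:
    name, _, value = varspec.partition("=")
    os.environ[name] = value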
It is also worth considering that if all that you want to do is set some environment variables every time before you run any python code, then one option is just to source your test.sh from a shell-script wrapper, and not try to set them inside python at all:
#!/bin/sh
. /home/test.sh
exec "/path/to/your/python/script $#"
Then when you want to run the Python code, you run the wrapper instead.

Airflow SSHExecuteOperator() with env=... not setting remote environment

I am modifying the environment of the calling process and appending to its PATH, along with setting some new environment variables. However, when I print os.environ in the child process, these changes are not reflected. Any idea what may be happening?
My call to the script on the instance:
ssh_hook = SSHHook(conn_id=ssh_conn_id)
temp_env = os.environ.copy()
temp_env["PATH"] = "/somepath:" + temp_env["PATH"]
run = SSHExecuteOperator(
    bash_command="python main.py",
    env=temp_env,
    ssh_hook=ssh_hook,
    task_id="run",
    dag=dag)
Explanation: Implementation Analysis
If you look at the source to Airflow's SSHHook class, you'll see that it doesn't incorporate the env argument into the command being remotely run at all. The SSHExecuteOperator implementation passes env= through to the Popen() call on the hook, but that only passes it through to the local subprocess.Popen() implementation, not to the remote operation.
Thus, in short: Airflow does not support passing environment variables over SSH. If it were to have such support, it would need to either incorporate them into the command being remotely executed, or to add the SendEnv option to the ssh command being locally executed for each command to be sent (which even then would only work if the remote sshd were configured with AcceptEnv whitelisting the specific environment variable names to be received).
Workaround: Passing Environment Variables On The Command Line
from pipes import quote  # in Python 3, make this "from shlex import quote"
def with_prefix_from_env(env_dict, command=None):
    result = 'set -a; '
    for (k, v) in env_dict.items():
        result += '%s=%s; ' % (quote(k), quote(v))
    if command:
        result += command
    return result
SSHExecuteOperator(bash_command=with_prefix_from_env(temp_env, "python main.py"),
                   ssh_hook=ssh_hook, task_id="run", dag=dag)
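As a quick, self-contained illustration (the variable names and values here are made up), the kind of command string this approach generates looks like:
from pipes import quote  # in Python 3, make this "from shlex import quote"
# Hypothetical environment to forward to the remote host.
env = {"PATH": "/somepath:/usr/bin", "MY_TOKEN": "s3cr3t value"}
prefix = "set -a; " + "".join("%s=%s; " % (quote(k), quote(v))
                              for k, v in env.items())
print(prefix + "python main.py")
# e.g.: set -a; PATH=/somepath:/usr/bin; MY_TOKEN='s3cr3t value'; python main.py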
Workaround: Remote Sourcing
If your environment variables are sensitive and you don't want them to be logged with the command, you can transfer them out-of-band and source the remote file containing them.
from pipes import quote
def with_env_from_remote_file(filename, command):
    return "set -a; . %s; %s" % (quote(filename), command)
SSHExecuteOperator(bash_command=with_env_from_remote_file(envfile, "python main.py"),
                   ssh_hook=ssh_hook, task_id="run", dag=dag)
Note that set -a directs the shell to export all defined variables, so the file being executed need only define variables with key=val declarations; they'll be automatically exported. If generating this file from your Python script, be sure to quote both keys and values with pipes.quote() to ensure that it only performs assignments and does not run other commands. The . keyword is a POSIX-compliant equivalent to the bash source command.
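For illustration, a minimal sketch of generating such a file from Python (the file name and variables are hypothetical):
from pipes import quote  # in Python 3, make this "from shlex import quote"
env_vars = {"API_TOKEN": "s3cr3t value", "PATH": "/somepath:/usr/bin"}
# Each line is a plain key=value assignment; quoting both sides ensures the
# file only assigns variables when sourced under "set -a" and runs nothing else.
with open("envfile", "w") as f:
    for name, value in env_vars.items():
        f.write("%s=%s\n" % (quote(name), quote(value)))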

Python - send KDE knotify message with cron job on linux?

I'm trying to send a notification to KDE's knotify from a cron job. The code below works fine, but when I run it as a cron job the notification doesn't appear.
#!/usr/bin/python2
import dbus
import gobject
album = "album"
artist = "artist"
title = "title"
knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")
knotify.event("warning", "kde", [], title, u"by %s from %s" % (artist, album), [], [], 0, 0, dbus_interface="org.kde.KNotify")
Anyone know how I can run this as a cron job?
You need to supply an environment variable called DBUS_SESSION_BUS_ADDRESS.
You can get the value from a running kde session.
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:abstract=/tmp/dbus-iHb7INjMEc,guid=d46013545434477a1b7a6b27512d573c
In your KDE startup (the autostart module in the configuration), create a script entry to run after your environment starts up. Write this environment variable's value to a temp file in your home directory; then you can set the environment variable within your cron job or Python script from that temp file.
#!/bin/bash
echo $DBUS_SESSION_BUS_ADDRESS > $HOME/tmp/kde_dbus.session
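A minimal sketch of the cron-side script reading that value back before talking to D-Bus (it reuses the temp file path from the script above):
import os
import dbus
# Restore the session bus address saved by the KDE autostart script above.
with open(os.path.expanduser("~/tmp/kde_dbus.session")) as f:
    os.environ["DBUS_SESSION_BUS_ADDRESS"] = f.read().strip()
knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")
knotify.event("warning", "kde", [], "title", u"notification body", [], [], 0, 0,
              dbus_interface="org.kde.KNotify")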
As of 2019 and KDE 5, this still works but gives slightly different results:
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1863/bus
To test it, you can do the following:
$ qdbus org.freedesktop.ScreenSaver /ScreenSaver SimulateUserActivity
You may need to use qdbus-qt5 if you still have the old kde4 binaries installed along with kde5. You can determine which one you should use with the following:
export QDBUS_CMD=$(which qdbus-qt5 2> /dev/null || which qdbus || exit 1)
I run this with a sleep statement when I want to prevent my screensaver from engaging and it works. I run it remotely from another computer beside my main one.
For those who want to know how I lock and unlock the remote screensaver, it's a different command...
loginctl lock-session 1
or
loginctl unlock-session 1
That is assuming that your session is the first one. You can add scripts to the KDE notification events for screensaver start and stop. Hope this information helps someone who wants to synchronize their screen savers across more than one computer.
I know this is long answer, but I wanted to provide an example for you to test with and a practical use case where I use it today.

How do I change my current directory from a python script?

I'm trying to implement my own version of the 'cd' command that presents the user with a list of hard-coded directories to choose from, and the user has to enter a number corresponding to an entry in the list. The program, named my_cd.py for now, should then effectively 'cd' the user to the chosen directory. Example of how this should work:
/some/directory
$ my_cd.py
1) ~
2) /bin/
3) /usr
Enter menu selection, or q to quit: 2
/bin
$
Currently, I'm trying to 'cd' using os.chdir('dir'). However, this doesn't work, probably because my_cd.py is kicked off in its own child process. I tried wrapping the call to my_cd.py in a sourced bash script named my_cd.sh:
#! /bin/bash
function my_cd() {
    /path/to/my_cd.py
}
/some/directory
$ . my_cd.sh
$ my_cd
... shows list of dirs, but doesn't 'cd' in the interactive shell
Any ideas on how I can get this to work? Is it possible to change my interactive shell's current directory from a python script?
Change your sourced bash code to:
#! /bin/bash
function my_cd() {
    cd `/path/to/my_cd.py`
}
and your Python code to do all of its cosmetic output (messages to the users, menus, etc) on sys.stderr, and, at the end, instead of os.chdir, just print (to sys.stdout) the path to which the directory should be changed.
my_cd.py:
#!/usr/bin/env python
import sys
dirs = ['/usr/bin', '/bin', '~']
for n, dir in enumerate(dirs):
    sys.stderr.write('%d) %s\n' % (n+1, dir))
sys.stderr.write('Choice: ')
n = int(raw_input())
print dirs[n-1]
Usage:
nosklo:/tmp$ alias mcd="cd \$(/path/to/my_cd.py)"
nosklo:/tmp$ mcd
1) /usr/bin
2) /bin
3) ~
Choice: 1
nosklo:/usr/bin$
This can't be done. Changes to the working directory are not visible to parent processes. At best you could have the Python script print the directory to change to, then have the sourced script actually change to that directory.
For what it's worth, since this question is also tagged "bash", here is a simple bash-only solution:
$ cat select_cd
#!/bin/bash
PS3="Number: "
dir_choices="/home/klittle /local_home/oracle"
select CHOICE in $dir_choices; do
break
done
[[ "$CHOICE" != "" ]] && eval 'cd '$CHOICE
Now, this script must be source'd, not executed:
$ pwd
/home/klittle/bin
$ source select_cd
1) /home/klittle
2) /local_home/oracle
Number: 2
$ pwd
/local_home/oracle
So,
$ alias mycd='source /home/klittle/bin/select_cd'
$ mycd
1) /home/klittle
2) /local_home/oracle
Number:
To solve your case, you could have the command the user runs be an alias that sources a bash script, which does the dir selection first, then dives into a python program after the cd has been done.
Contrary to what was said, you can do this by replacing the process image, twice.
In bash, replace your my_cd function with:
function my_cd() {
    exec /path/to/my_cd.py "$BASH" "$0"
}
Then your python script has to finish with:
os.execl(sys.argv[1], sys.argv[2])
Remember to import os, sys at the beginning of the script.
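Putting it together, a minimal sketch of what my_cd.py could look like under this approach (the menu is the same idea as above; the directory list is hypothetical):
#!/usr/bin/env python
import os
import sys
dirs = ['/usr/bin', '/bin', '~']
for i, d in enumerate(dirs):
    sys.stderr.write('%d) %s\n' % (i + 1, d))
sys.stderr.write('Choice: ')
choice = int(raw_input())
# Change directory in this process, then replace it with the shell that
# exec'ed us: argv[1] is "$BASH", argv[2] is "$0" from the my_cd function.
os.chdir(os.path.expanduser(dirs[choice - 1]))
os.execl(sys.argv[1], sys.argv[2])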
But note that this is a borderline hack. Your shell dies, replacing itself with the Python script, running in the same process. The Python script makes its changes (the working directory, in this case) and then replaces itself with the shell again, still in the same process. This means that if you have some other local unsaved and unexported data or environment in the previous shell session, it will not persist to the new one. It also means that rc and profile scripts will run again (not usually a problem).
