I am working on a Slurm cluster where I am running a couple of jobs, and it is hard for me to check the jobs one by one in each directory.
I managed to find out in which directory a job is running using
scontrol show job JOB_ID
This command gives many lines of output. A few of them are listed below:
OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
Command=/home/astha/vt-st/scf-test/303030/49/qsub.job
WorkDir=/home/astha/vt-st/scf-test/303030/49
StdErr=/home/astha/vt-st/scf-test/303030/49/qsub.job.e1205
StdIn=/dev/null
StdOut=/home/astha/vt-st/scf-test/303030/49/qsub.job.o1205
Power=
MailUser=(null) MailType=NONE
where WorkDir from the above output (this is an example; the path will be different for each job) is the directory to which I want to switch. Then:
cd /home/astha/vt-st/scf-test/303030/49
But typing these long commands makes my fingers cry.
I have tried to write a small Python script that prints scontrol show job:
# Try block
try:
    # Take a job number
    job_id = int(input("Job ID: "))
    print("scontrol show job", job_id)
# Exception block
except ValueError:
    # Print error message
    print("Enter a numeric value")
But how should I improve it so that it takes a given job number as input, greps the WorkDir from the output, and changes to that directory?
You will not be able to easily have a Python script change your current working directory, but you can do it simply in Bash like this:
$ cdjob() { cd "$(squeue -h -o%Z -j "$1")" ; }
This creates a Bash function named cdjob that accepts a job ID as a parameter. You can check that it was created correctly with
$ type cdjob
cdjob is a function
cdjob ()
{
    cd "$(squeue -h -o%Z -j "$1")"
}
After you run the above command (which you can place in your startup script .bashrc or .bash_profile if you want it to survive logouts) you will be able to do
$ cdjob 22078365
and this will bring you to the working directory of job 22078365, for instance. Note that rather than trying to parse the output of scontrol, I am using the output formatting options of squeue (%Z prints the job's working directory) to output exactly the needed information.
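If you would still like a Python helper along the lines of your script, here is a minimal sketch; the script name is my own placeholder, and it relies on the same squeue formatting option as the function above. Since a Python process cannot change its parent shell's directory, you would use it as cd "$(python cdjob.py JOB_ID)".
#!/usr/bin/env python
# cdjob.py - hypothetical helper that prints the WorkDir of a Slurm job
import subprocess
import sys

try:
    job_id = int(sys.argv[1])
except (IndexError, ValueError):
    sys.exit("Usage: cdjob.py JOB_ID (numeric)")

# -h suppresses the header; %Z asks squeue for the job's working directory
out = subprocess.check_output(["squeue", "-h", "-o", "%Z", "-j", str(job_id)])
print(out.decode().strip())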
I have a file that I load with . /home/test.sh (the space between the first . and / is intentional) and which contains some environment variables. I need to load this file and then run a .py script. If I run the command manually on the Linux server first and then run the Python script, it generates the required output. However, I want to call . /home/test.sh from within Python to load the profile before running the rest of the code. If this profile is not loaded, the Python script runs and gives 0 as output.
The call
subprocess.call('. /home/test.sh', shell=True)
runs without error, but the profile is not loaded into the Python process's environment, so the rest of the code does not produce the desired output.
Can someone help?
Environment variables set in a child process are not propagated back to the parent process, which is why your simple approach does not work.
If you are trying to pick up environment variables that have been set in your test.sh, then one thing you could do instead is to use env in a sub-shell to write them to stdout after sourcing the script, and then in Python you can parse these and set them locally.
The code below will work provided that test.sh does not write any output itself. (If it does, you could work around that by echoing some separator string after sourcing it and before running env, and then stripping off the separator string and everything before it in the Python code; a sketch of this variant follows the main example.)
import subprocess
import os

p = subprocess.Popen(". /home/test.sh; env -0", shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, _ = p.communicate()

# env -0 separates entries with NUL bytes, so values containing newlines
# survive; the split leaves a trailing empty field, which [:-1] drops
for varspec in out.decode().split("\x00")[:-1]:
    pos = varspec.index("=")
    name = varspec[:pos]
    value = varspec[pos + 1:]
    os.environ[name] = value

# just to test whether it works - output of the following should include
# the variables that were set
os.system("env")
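For completeness, here is a hedged sketch of the separator workaround mentioned above, for the case where test.sh prints output of its own (the marker string is an arbitrary choice of mine):
import subprocess
import os

# any string unlikely to occur in the script's own output will do
MARKER = "---ENV-DUMP-BELOW---"

p = subprocess.Popen(". /home/test.sh; echo '%s'; env -0" % MARKER,
                     shell=True, stdout=subprocess.PIPE)
out, _ = p.communicate()

# keep only what follows the marker line, then parse as before
env_dump = out.decode().split(MARKER + "\n", 1)[1]
for varspec in env_dump.split("\x00")[:-1]:
    name, _, value = varspec.partition("=")
    os.environ[name] = value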
It is also worth considering that if all you want to do is set some environment variables every time before you run any Python code, then one option is simply to source your test.sh from a shell-script wrapper, and not try to set them inside Python at all:
#!/bin/sh
. /home/test.sh
exec /path/to/your/python/script "$@"
Then when you want to run the Python code, you run the wrapper instead.
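For example (the wrapper's filename here is my own placeholder):
$ chmod +x run_with_env.sh      # make the wrapper executable
$ ./run_with_env.sh arg1 arg2   # arguments are forwarded to the Python script via "$@"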
When I issue git followed by Tab, it auto-completes with a list. I want to write a test.py so that when I type test.py followed by Tab, it auto-completes with a given list defined in test.py. Is that possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. It hooks into the GNU Readline library (the same line-editing library the bash shell uses) and is simple to implement. Examples: https://pymotw.com/2/readline/
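A minimal sketch of that approach; note that it completes text typed into the running Python program itself, not the script's arguments on the shell command line:
import readline

options = ["arg1", "arg2", "arg3", "list_arguments"]

def completer(text, state):
    # return the state-th option matching the typed prefix, or None when exhausted
    matches = [o for o in options if o.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
line = input("> ")  # pressing Tab here completes from `options`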
That's not a feature of the git binary itself; it's a bash completion 'hack' and as such has nothing to do with Python per se. But since you've tagged the question with Python, let's add a little twist. Let's say we create a script that is aware of its acceptable arguments - test.py:
#!/usr/bin/env python
import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in real-world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all arguments
            arguments.get(arg, f_invalid)()  # call the mapped or invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add the executable flag to the script (chmod +x test.py) so that its shebang is used for execution instead of the script being passed as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments - it lists all the arguments this script accepts, and we'll use its output in our bash completion script to tell bash how to auto-complete. To do so, create another script; let's call it test-completion.bash:
#!/usr/bin/env bash
SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
What this does is essentially register, via complete, the _complete_script function to be called whenever completion is invoked on test.py. The _complete_script function itself first calls test.py with list_arguments to retrieve its acceptable arguments, and then uses compgen to produce the candidate matches that complete presents to the user.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list by the list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or if you want a more structured solution you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash-completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.
I have a Python script that takes two arguments, and when I run it the script outputs 3 new files, as it is supposed to:
$ python importpymol2.py 65_*.pdb BTB_old.pdb
But when I put it through a shell loop that also changes directories (the script is in each directory):
$ for i in *;do current_dir=$PWD; cd $PWD/*;python importpymol2.py 65_*.pdb BTB_old.pdb;cd $current_dir; done
it runs seemingly fine, except that it doesn't output the files... how can I get it to output the files?
Don't try to cd back. Instead, just run in a subshell:
for i in *; do ( cd "$i" && python ...; ); done
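Spelled out for this case (assuming, as the question says, that the script lives inside each subdirectory):
for d in */; do
    ( cd "$d" && python importpymol2.py 65_*.pdb BTB_old.pdb )
done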
There may be a typo in your shell command; try changing it to:
for i in *;do current_dir=$PWD; cd "$PWD/$i";python ../importpymol2.py 65_*.pdb BTB_old.pdb;cd "$current_dir"; done
(I assume importpymol2.py is located in your $PWD.)
I'm working on a Python GUI application, and at some point I need to defer the execution of big parts of Python code. I have tried using at to do it:
line = 'echo "python ./executor.py ibm ide graph" | at -t 1403211632'
subprocess.Popen(line, shell=True)
This line gives no error and effectively starts the job at the given time.
Now, each option for executor.py is a job it has to do, and each job is protected by a try/except that logs errors. In some cases I catch this error:
14-03-21_17:07:00 starting ibm for Simulations/140321170659
Failed to execute ibm : no display name and no $DISPLAY environment variable
Aborted the whole execution.
I have tried the following, thinking I could provide $DISPLAY to the environment, with no success (same error):
line = 'DISPLAY=:0.0;echo "python ./executor.py Simulations/140321170936 eid defer" | at -t 1403211711'
From man at :
The working directory, the environment (except for the variables BASH_VERSINFO, DISPLAY, EUID, GROUPS, SHELLOPTS, TERM, UID, and _) and the umask are retained from the time of invocation.
Questions:
What could possibly be causing this error?
How do I provide the $DISPLAY variable to at's environment?
Solution:
I actually needed to put export DISPLAY=:0.0 inside the echo so that it is set after at has started its environment:
line = 'echo "export DISPLAY=:0.0; python..." | at...'
subprocess.Popen(line, shell=True)
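Put together, a sketch of the working call, reusing the command and timestamp from the attempt above (treat it as illustrative rather than a drop-in):
import subprocess

line = ('echo "export DISPLAY=:0.0; '
        'python ./executor.py Simulations/140321170936 eid defer" '
        '| at -t 1403211711')
subprocess.Popen(line, shell=True)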
You will need to set DISPLAY in the Python script by taking the current environment, adding the DISPLAY setting, and passing the new environment to the sub-shell created by Popen:
import os
import subprocess

new_env = dict(os.environ)
new_env['DISPLAY'] = ':0.0'
...
...
subprocess.Popen(line, env=new_env, shell=True)
I'm trying to implement my own version of the 'cd' command that presents the user with a list of hard-coded directories to choose from, and the user has to enter a number corresponding to an entry in the list. The program, named my_cd.py for now, should then effectively 'cd' the user to the chosen directory. Example of how this should work:
/some/directory
$ my_cd.py
1) ~
2) /bin/
3) /usr
Enter menu selection, or q to quit: 2
/bin
$
Currently, I'm trying to 'cd' using os.chdir('dir'). However, this doesn't work, probably because my_cd.py is kicked off in its own child process. I tried wrapping the call to my_cd.py in a sourced bash script named my_cd.sh:
#!/bin/bash
function my_cd() {
    /path/to/my_cd.py
}
/some/directory
$ . my_cd.sh
$ my_cd
... shows list of dirs, but doesn't 'cd' in the interactive shell
Any ideas on how I can get this to work? Is it possible to change my interactive shell's current directory from a python script?
Change your sourced bash code to:
#!/bin/bash
function my_cd() {
    cd "$(/path/to/my_cd.py)"
}
and change your Python code to do all of its cosmetic output (messages to the user, menus, etc.) on sys.stderr and, at the end, instead of calling os.chdir, just print (to sys.stdout) the path the directory should be changed to.
my_cd.py:
#!/usr/bin/env python
import sys

dirs = ['/usr/bin', '/bin', '~']
for n, d in enumerate(dirs):
    sys.stderr.write('%d) %s\n' % (n + 1, d))
sys.stderr.write('Choice: ')
n = int(input())
print(dirs[n - 1])
Usage:
nosklo:/tmp$ alias mcd="cd \$(/path/to/my_cd.py)"
nosklo:/tmp$ mcd
1) /usr/bin
2) /bin
3) ~
Choice: 1
nosklo:/usr/bin$
This can't be done. Changes to the working directory are not visible to parent processes. At best you could have the Python script print the directory to change to, then have the sourced script actually change to that directory.
For what it's worth, since this question is also tagged "bash", here is a simple bash-only solution:
$ cat select_cd
#!/bin/bash
PS3="Number: "
dir_choices="/home/klittle /local_home/oracle"
select CHOICE in $dir_choices; do
    break
done
[[ "$CHOICE" != "" ]] && eval 'cd '$CHOICE
Now, this script must be sourced, not executed:
$ pwd
/home/klittle/bin
$ source select_cd
1) /home/klittle
2) /local_home/oracle
Number: 2
$ pwd
/local_home/oracle
So,
$ alias mycd='source /home/klittle/bin/select_cd'
$ mycd
1) /home/klittle
2) /local_home/oracle
Number:
To solve your case, you could have the command the user runs be an alias that sources a bash script which does the dir selection first, then dives into a Python program after the cd has been done.
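A sketch of that combination (the filenames are my own placeholders):
$ cat select_then_run
#!/bin/bash
source /home/klittle/bin/select_cd   # do the directory selection / cd first
python /path/to/your/program.py      # then run the Python program in the chosen dir
$ alias mycd='source /home/klittle/bin/select_then_run'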
Contrary to what was said, you can do this by replacing the process image, twice.
In bash, replace your my_cd function with:
function my_cd() {
    exec /path/to/my_cd.py "$BASH" "$0"
}
Then your python script has to finish with:
os.execl(sys.argv[1], sys.argv[2])
Remember to import os and sys at the beginning of the script.
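For concreteness, a minimal sketch of what my_cd.py could look like under this approach (the directory list and prompt are adapted from the question; treat the details as illustrative):
#!/usr/bin/env python
import os
import sys

dirs = ['~', '/bin/', '/usr']
for n, d in enumerate(dirs, start=1):
    sys.stderr.write('%d) %s\n' % (n, d))
sys.stderr.write('Enter menu selection, or q to quit: ')
choice = input()
if choice != 'q':
    # change this process's directory; the exec below carries it into the new shell
    os.chdir(os.path.expanduser(dirs[int(choice) - 1]))
# replace this process with a fresh shell ($BASH and $0 are passed by the wrapper)
os.execl(sys.argv[1], sys.argv[2])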
But note that this is a borderline hack. Your shell dies, replacing itself with the Python script, running in the same process. The Python script makes changes to the environment and replaces itself with the shell again, still in the same process. This means that if you have other local unsaved or unexported data or environment in the previous shell session, it will not persist into the new one. It also means that rc and profile scripts will run again (not usually a problem).