I'm trying to implement my own version of the 'cd' command that presents the user with a list of hard-coded directories to choose from; the user enters a number corresponding to an entry in the list. The program, named my_cd.py for now, should then effectively 'cd' the user into the chosen directory. Here is an example of how this should work:
/some/directory
$ my_cd.py
1) ~
2) /bin/
3) /usr
Enter menu selection, or q to quit: 2
/bin
$
Currently, I'm trying to 'cd' using os.chdir('dir'). However, this doesn't work, probably because my_cd.py is kicked off in its own child process. I tried wrapping the call to my_cd.py in a sourced bash script named my_cd.sh:
#! /bin/bash
function my_cd() {
    /path/to/my_cd.py
}
/some/directory
$ . my_cd.sh
$ my_cd
... shows list of dirs, but doesn't 'cd' in the interactive shell
Any ideas on how I can get this to work? Is it possible to change my interactive shell's current directory from a python script?
Change your sourced bash code to:
#! /bin/bash
function my_cd() {
    cd `/path/to/my_cd.py`
}
and your Python code to do all of its cosmetic output (messages to the user, menus, etc.) on sys.stderr and, at the end, instead of calling os.chdir, just print (to sys.stdout) the path the directory should be changed to.
my_cd.py:
#!/usr/bin/env python
import sys

dirs = ['/usr/bin', '/bin', '~']

# Menu and prompt go to stderr so the shell's command
# substitution does not capture them.
for n, d in enumerate(dirs):
    sys.stderr.write('%d) %s\n' % (n + 1, d))
sys.stderr.write('Choice: ')

# Only the chosen path is printed to stdout, where cd picks it up.
n = int(raw_input())
print dirs[n - 1]
Usage:
nosklo:/tmp$ alias mcd="cd \$(/path/to/my_cd.py)"
nosklo:/tmp$ mcd
1) /usr/bin
2) /bin
3) ~
Choice: 1
nosklo:/usr/bin$
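On Python 3 the same idea looks like this (a minimal sketch; the directory list is illustrative, and os.path.expanduser handles the ~ entry, since the shell will not tilde-expand the result of a command substitution):
#!/usr/bin/env python3
import os
import sys

dirs = ['/usr/bin', '/bin', '~']  # illustrative hard-coded list

# Menu goes to stderr so the command substitution doesn't capture it.
for n, d in enumerate(dirs, 1):
    print('%d) %s' % (n, d), file=sys.stderr)
print('Choice: ', end='', file=sys.stderr)

choice = int(input())
# Expand ~ here: the shell does not tilde-expand a substituted result.
print(os.path.expanduser(dirs[choice - 1]))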
This can't be done. Changes to the working directory are not visible to parent processes. At best you could have the Python script print the directory to change to, then have the sourced script actually change to that directory.
For what it's worth, since this question is also tagged "bash", here is a simple bash-only solution:
$ cat select_cd
#!/bin/bash
PS3="Number: "
dir_choices="/home/klittle /local_home/oracle"
select CHOICE in $dir_choices; do
    break
done
[[ "$CHOICE" != "" ]] && eval 'cd '$CHOICE
Now, this script must be source'd, not executed:
$ pwd
/home/klittle/bin
$ source select_cd
1) /home/klittle
2) /local_home/oracle
Number: 2
$ pwd
/local_home/oracle
So,
$ alias mycd='source /home/klittle/bin/select_cd'
$ mycd
1) /home/klittle
2) /local_home/oracle
Number:
To solve your case, you could have the command the user runs be an alias that sources a bash script; the script does the directory selection first, then dives into a Python program after the cd has been done.
Contrary to what was said, you can do this by replacing the process image, twice.
In bash, replace your my_cd function with:
function my_cd() {
    exec /path/to/my_cd.py "$BASH" "$0"
}
Then your python script has to finish with:
os.execl(sys.argv[1], sys.argv[2])
Remember to import os, sys at the beginning of the script.
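Putting it together, the Python side might look like this (a sketch with an illustrative directory list; os.chdir must run before os.execl so the re-exec'ed shell starts in the chosen directory):
#!/usr/bin/env python
import os
import sys

dirs = ['/usr/bin', '/bin']  # illustrative hard-coded list

for n, d in enumerate(dirs):
    sys.stderr.write('%d) %s\n' % (n + 1, d))
sys.stderr.write('Choice: ')

choice = int(raw_input())  # use input() on Python 3
os.chdir(dirs[choice - 1])          # change this process's directory...
os.execl(sys.argv[1], sys.argv[2])  # ...then replace ourselves with the shell again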
But note that this is a borderline hack. Your shell dies, replacing itself with the Python script, which runs in the same process; the Python script then makes its changes and replaces itself with the shell again, still in the same process. This means that any local unsaved and unexported data or environment from the previous shell session will not persist to the new one. It also means that rc and profile scripts will run again (not usually a problem).
Related
I am working on a slurm cluster where I am running a couple of jobs. It is hard for me to check the jobs one by one in each directory.
I can check which directory a job is running in using
scontrol show job JOB_ID
This command gives various lines of output; a few of them are listed below:
OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
Command=/home/astha/vt-st/scf-test/303030/49/qsub.job
WorkDir=/home/astha/vt-st/scf-test/303030/49
StdErr=/home/astha/vt-st/scf-test/303030/49/qsub.job.e1205
StdIn=/dev/null
StdOut=/home/astha/vt-st/scf-test/303030/49/qsub.job.o1205
Power=
MailUser=(null) MailType=NONE
WorkDir in the above output (this is an example; the path will be different for each job) is the directory I want to switch to, i.e. run
cd /home/astha/vt-st/scf-test/303030/49
But typing these long commands makes my fingers cry.
I have tried to write a small Python script to print scontrol show job:
# Try block
try:
    # Take a number
    print("scontrol show job")
# Exception block
except ValueError:
    # Print error message
    print("Enter a numeric value")
But how should I improve it so that it takes a given job number as input, greps the WorkDir from the output, and changes the directory?
You will not easily be able to have a Python script change your current working directory, but you can do it simply in Bash like this:
$ cdjob() { cd "$(squeue -h -o%Z -j "$1")" ; }
This will create a Bash function named cdjob that accepts a job ID as a parameter. You can check that it was created correctly with
$ type cdjob
cdjob is a function
cdjob ()
{
    cd "$(squeue -h -o%Z -j "$1")"
}
After you run the above command (which you can place in your startup script .bashrc or .bash_profile if you want it to survive logouts) you will be able to do
$ cdjob 22078365
and this will bring you to the working directory of job 22078365. Note that rather than trying to parse the output of scontrol, I am using the output-formatting options of squeue to print just the needed information.
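If you do want the Python route the question started down, a sketch of a helper that extracts WorkDir from the scontrol output could look like this (get_workdir.py is a hypothetical name, and a WorkDir containing spaces would break the naive split):
#!/usr/bin/env python
# get_workdir.py (hypothetical): print a job's WorkDir,
# for use from a shell function, e.g.  cd $(python get_workdir.py 1205)
import subprocess
import sys

job_id = sys.argv[1]
output = subprocess.check_output(['scontrol', 'show', 'job', job_id])
# scontrol prints whitespace-separated key=value tokens.
for token in output.decode().split():
    if token.startswith('WorkDir='):
        print(token[len('WorkDir='):])
        break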
This question already has answers here:
Is it possible to change the Environment of a parent process in Python?
I have a bash script that looks like this:
python myPythonScript.py
python myOtherScript.py $VarFromFirstScript
and myPythonScript.py looks like this:
print("Running some code...")
VarFromFirstScript = someFunc()
print("Now I do other stuff")
The question is: how do I get the variable VarFromFirstScript back to the bash script that called myPythonScript.py?
I tried os.environ['VarFromFirstScript'] = VarFromFirstScript but this doesn't work (I assume this means that the python environment is a different env from the calling bash script).
You cannot propagate an environment variable to the parent process. But you can print the variable and assign the output back to the variable name in your shell:
VarFromFirstScript=$(python myPythonScript.py)
You must not print anything else to stdout in your code; route other messages to stderr:
import sys

sys.stderr.write("Running some code...\n")
VarFromFirstScript = someFunc()
sys.stdout.write(VarFromFirstScript)
An alternative would be to create a file with the variables to set, and have your shell parse it (you could create a shell script that the parent shell would source):
import shlex

with open("shell_to_source.sh", "w") as f:
    f.write("VarFromFirstScript={}\n".format(shlex.quote(VarFromFirstScript)))
(shlex.quote avoids code injection from Python; courtesy of Charles Duffy)
then after calling python:
source ./shell_to_source.sh
You can only pass environment variables from parent process to child.
When the child process is created the environment block is copied to the child - the child has a copy, so any changes in the child process only affects the child's copy (and any further children which it creates).
To communicate with the parent the simplest way is to use command substitution in bash where we capture stdout:
Bash script:
#!/bin/bash
var=$(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
print("Hollow world!")
Sample run:
$ bash gash.sh
Value in bash: Hollow world!
If you have other print statements in Python, you will need to filter out only the data you require, possibly by marking the data with a well-known prefix.
If you have many print statements in Python then this solution is not scalable, so you might need to use process substitution, like this:
Bash script:
#!/bin/bash
while read -r line
do
    if [[ $line = ++++* ]]
    then
        # Strip out the marker
        var=${line#++++}
    else
        echo "$line"
    fi
done < <(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
def someFunc():
    return "Hollow World"

print("Running some code...")
VarFromFirstScript = someFunc()
# Prefix our data with a well-known marker
print("++++" + VarFromFirstScript)
print("Now I do other stuff")
Sample Run:
$ bash gash.sh
Running some code...
Now I do other stuff
Value in bash: Hollow World
I would source your script; this is the most commonly used method. Sourcing executes the script in the current shell instead of loading another one, and because it uses the same shell, any environment variables you set will still be accessible when it exits. Either . /path/to/script.sh or source /path/to/script.sh will work; . sometimes works where source doesn't.
When I issue git followed by Tab, it auto-completes with a list. I want to write a test.py such that when I type test.py followed by Tab, it auto-completes from a given list defined in test.py. Is that possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. This method hooks into the readline library that handles interactive input in your own Python process (bash's Tab completion of command arguments is a separate mechanism; see the other answer). It's simple to implement. Examples: https://pymotw.com/2/readline/
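A minimal sketch (the command list is illustrative; this completes input read at your own Python program's prompt, not the arguments bash passes to your script):
import readline

COMMANDS = ['add', 'branch', 'commit', 'fetch', 'status']  # illustrative

def completer(text, state):
    # Return the state-th command matching the typed prefix, or None.
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind('tab: complete')

line = input('> ')  # pressing Tab at this prompt completes from COMMANDS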
That's not a feature of the git binary itself, it's a bash completion 'hack', and as such has nothing to do with Python per se. But since you've tagged the question as such, let's add a little twist. Let's say we create a script aware of its acceptable arguments - test.py:
#!/usr/bin/env python

import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in real-world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list,
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all arguments
            arguments.get(arg, f_invalid)()  # call the mapped or invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add an executable flag to the script (chmod +x test.py) so that it is executed via its shebang instead of being passed as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments - it lists all available arguments to this script and we'll use this output in our bash completion script to instruct bash how to auto-complete. To do so, create another script, let's call it test-completion.bash:
#!/usr/bin/env bash

SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
What it does is essentially register the _complete_script function with complete, to be called whenever completion over test.py is invoked. The _complete_script function itself first calls list_arguments on test.py to retrieve its acceptable arguments, and then uses compgen to create the structure complete requires in order to print it out.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list on list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or if you want a more structured solution you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash-completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.
I have a python script that takes two arguments, and when I run it the script outputs 3 new files, as it is supposed to:
>>> python importpymol2.py 65_*.pdb BTB_old.pdb
but when I put it through a shell loop that also changes directories (the script is in each directory):
>>>> for i in *;do current_dir=$PWD; cd $PWD/*;python importpymol2.py 65_*.pdb BTB_old.pdb;cd $current_dir; done
it runs perfectly normally, except that it doesn't output the files. How can I get it to output the files?
Don't try to cd back. Instead, just run in a subshell:
for i in *; do ( cd "$i"; python ...; ); done
There may be a typo in your shell command, try to change it to:
for i in *;do current_dir=$PWD; cd $PWD/$i;python ../importpymol2.py 65_*.pdb BTB_old.pdb;cd $current_dir; done
(I assume importpymol2.py is located in your $PWD.)
I want to run two commands in sequence:
First go to /var/tmp/test folder
Then svn checkout here
In order to do that I wrote this script:
open_folder = "cd /var/tmp/%s" % (folder_name)
cmd = "%s %s/%s/%s && %s %s/%s/%s" % (svn_co, svn_co_directory, fst_product_name, fst_firmware_name, svn_co, svn_co_directory, snd_product_name, snd_firmware_name)
os.system(open_folder)
os.system(cmd)
It creates folder_name, but does not check out into folder_name; it checks out to my current directory instead. Why?
Try os.chdir(path) to change the directory. Or you could use the folder as a prefix in your second command.
This explains why cd won't work.
I would prefer to use subprocess.Popen(...) instead of os.system; it lets you specify a current working directory (the cwd argument) for the command you execute.
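For example, a sketch using the cwd argument (the folder name and repository URL are placeholders):
import subprocess

folder_name = "test"                      # placeholder
svn_url = "http://example.com/svn/trunk"  # placeholder URL

# The checkout runs inside /var/tmp/test; our own working
# directory is untouched, since cwd only affects the child.
proc = subprocess.Popen(["svn", "checkout", svn_url],
                        cwd="/var/tmp/%s" % folder_name)
proc.wait()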