In bash or C, exec terminates the current process and replaces it with something new. Can this functionality be accomplished in Python? I don't wish to simply execute some code and then continue running the Python script (even if only to exit immediately), or to spawn a child process.
My specific situation is the following. I'm developing a command line application (python + curses) to manage data generation/analysis in the context of scientific computing. It will sometimes be necessary for me to terminate the application and go wrangle with the data in a given subdirectory manually. It would be convenient if I could do something like:
# within python script; d=<some directory>
if exit_and_goto_dir:
    exec("pushd {}".format(d))  # C-style exec -- terminate program and execute new command
For example, the following do not work:
# attempt 1
if exit_and_goto_dir:
    os.system("pushd {}".format(d))
    exit(0)  # pushd does not outlast the python script

# attempt 2
if exit_and_goto_dir:
    os.chdir(d)
    exit(0)
This behavior isn't really critical. There are plenty of workarounds (e.g. print the directory I care about to the terminal, then cd manually). Mostly I'm curious whether it's possible. Thanks!
The os module contains Python wrappers for the various exec* functions in the C standard library:
>>> [method for method in dir(os) if method.startswith("exec")]
['execl', 'execle', 'execlp', 'execlpe', 'execv', 'execve', 'execvp', 'execvpe']
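The suffixes encode the calling convention: l takes the argument vector inline, v takes it as a list, p performs a $PATH lookup, and e accepts an explicit environment. A minimal sketch (note that a successful exec never returns):

import os

# v: argv passed as a list; p: "ls" is looked up in $PATH
os.execvp("ls", ["ls", "-l"])
# never reached -- the Python process has been replaced by ls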
However, pushd is not an executable that you can exec but rather a bash builtin (and the same is true for cd).
What you could do instead is change directory inside the Python process and then exec an interactive shell:
import os
os.chdir(d)
os.execvp("bash", ["bash", "-login"])
Python's current directory will be inherited by the shell that you exec. When you later exit from that shell, control will then return to the original login shell from which you invoked python (unless you used that shell's exec command to launch python in the first place).
What you can't do is modify the current directory of the calling shell from inside Python, in order to return to the shell prompt in a different working directory from the one Python was invoked in. (At least there's no straightforward way. There is a hack involving attaching gdb to the shell, described here, but it only worked as root when I tried it on Ubuntu.)
Related
I'm calling a Python script from inside my bash script, and I was wondering if there is a simple way to set my bash variables from within the Python script.
Example:
My bash script:
#!/bin/bash
someVar=""
python3 /some/folder/pythonScript.py
My python script:
anotherVar="HelloWorld"
Is there a way I can set someVar to the value of anotherVar? I was thinking of printing properties to a file from within the Python script and then reading them from my bash script, but maybe there is another way. Also, I don't know whether it makes any difference (I don't think it does), but I could give both variables the same name (someVar/someVar instead of someVar/anotherVar).
No. When you execute python, you start a new process, and every process has access only to its own memory. Imagine what would happen if a process could influence another process's memory! Even for parent/child processes like these, it would be a huge security problem.
You can make python print() something and use that, though:
#!/usr/bin/env python3
print('Hello!')
And in your shell script:
#!/usr/bin/env bash
someVar=$(python3 myscript.py)
echo "$someVar"
There are, of course, many other IPC techniques you could use, such as sockets, pipes, shared memory, etc. But without context, it's difficult to make a specific recommendation.
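For instance, the file-based approach mentioned in the question could look roughly like this (a sketch; vars.sh is a made-up name):

# pythonScript.py -- write a shell-sourceable assignment to a file
import shlex

anotherVar = "HelloWorld"
with open("vars.sh", "w") as f:
    f.write("someVar=%s\n" % shlex.quote(anotherVar))

The bash script would then run . vars.sh after the Python call, at which point $someVar is set.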
shlex.quote() in Python 3, or pipes.quote() in Python 2, can be used to generate code which can be evaled by the calling shell. Thus, if the following script:
#!/usr/bin/env python3
import sys, shlex
print('export foobar=%s' % (shlex.quote(sys.argv[1].upper())))
...is named setFoobar and invoked as:
eval "$(setFoobar argOne)"
...then the calling shell will have an environment variable named foobar with the value ARGONE (the script uppercases its argument).
Assume we are on Linux:
In Perl, the exec function replaces the current process with an external program and never returns, leaving that program running in the same shell session.
A very close answer using Python is https://stackoverflow.com/a/13256908
However, the Python solution using start_new_session=True starts the external program via setsid, which means that solution is suited to making a daemon, not to running an interactive program.
Here is a simple example using Perl:
perl -e '$para=qq(-X --cmd ":vsp");exec "vim $para"'
After vim starts, the original Perl program has exited, and vim is still in the same shell session (it has not been moved to a new session group).
How can I get the same behavior with Python?
Perl is just wrapping the exec* system call functions here. Python has the same wrappers, in the os module, see the os.exec* documentation:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller.
To do the same in Python:
python -c 'import os; os.execlp("vim", "vim", "-X", "--cmd", ":vsp")'
os.execlp looks up the binary in $PATH using its first argument; the remaining arguments become the new program's argv, which is why "vim" is repeated as argv[0].
The subprocess module is only ever suitable for running processes next to the Python process, not for replacing the Python process. On POSIX systems, the subprocess module uses the low-level exec* functions to implement its functionality: a fork of the Python process is replaced with the command you wanted to run.
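For illustration, this is roughly the fork-then-exec dance subprocess performs on POSIX (a simplified sketch with no error handling):

import os

pid = os.fork()
if pid == 0:
    # child: replace this copy of the Python process with vim
    os.execvp("vim", ["vim", "-X", "--cmd", ":vsp"])
else:
    # parent: Python keeps running and waits, as subprocess.call would
    os.waitpid(pid, 0)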
I'm trying to change the terminal directory through a Python script. I've seen this post and others like it, so I know about os.chdir, but it's not working the way I'd like. os.chdir appears to change the directory, but only for the Python script itself. For instance, I have this code:
#! /usr/bin/env python
import os
os.chdir("/home/chekid/work2/")
print os.getcwd()
Unfortunately, after running it I'm still in the directory of the Python script (e.g. /home/chekid) rather than the directory I want to be in. See below.
gandalf(pts/42):~> pwd
/home/chekid
gandalf(pts/42):~> ./changedirectory.py
/home/chekid/work2
gandalf(pts/42):~> pwd
/home/chekid
Any thoughts on what I should do?
Edit: Looks like what I'm trying to do doesn't exist in 'normal' Python. I did find a workaround, although it doesn't look very elegant to me.
cd `./changedirectory.py`
You can't. The shell's current directory belongs to the shell, not to you.
(OK, you could ptrace(2) the shell and make it call chdir(2), but that's probably not a great design, won't work on Windows, and I would not begin to know how to do it in pure Python except that you'd probably have to mess around with ctypes or something similar.)
You could launch a subshell with your current working directory. That might be close enough to what you need:
os.chdir('/path/to/somewhere')
shell = os.environ.get('SHELL', '/bin/sh')
os.execl(shell, shell)
# execl() does not return; it replaces the Python process with a new shell process
The original shell will still be there, so make sure you don't leave it hanging around. If you initially call Python with the exec builtin (e.g. exec python /path/to/script.py), then the original shell will be replaced with the Python process and you won't have to worry about this. But if Python exits without launching the shell, you'll be left with no shell open at all.
You can if you cheat: make a bash script that calls your Python script. The Python script prints the path you want to change directory to, and the bash script does the actual chdir. Of course, you would have to run the bash script in your bash shell using "source".
The current working directory is an attribute of a process. It cannot be changed by another program, such as changing the current working directory in your shell by running a separate Python program. This is why cd is always a shell built-in command.
You can make your Python script print the directory you want to move to, and then call it with cd "$(./python-script.py)", provided your script does not print anything else.
I am trying to compile a C program using Python and want to give it input using the "<" operator, but it's not working as expected.
If I compile the C program and run it with input redirected from a file, it works; for example:
./a.out < inp.txt
But when I try to do the same thing from a Python script, it does not quite work as expected.
For example:
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x"])
and
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x","<inp.txt"])
Both scripts ask for input from the terminal. But I think the second script should read from the file. Why do both programs behave the same?
To complement @Jonathan Leffler's and @alastair's helpful answers:
Assuming you control the string you're passing to the shell for execution, I see nothing wrong with using the shell for convenience. [1]
subprocess.call() has an optional Boolean shell parameter, which causes the command to be passed to the shell, enabling I/O redirection, referencing environment variables, ...:
subprocess.call("./x <inp.txt", shell = True)
Note how the entire command line is passed as a single string rather than an array of arguments.
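If parts of that string come from variables, quote them first so the shell cannot misinterpret them; a sketch using shlex.quote() (Python 3; pipes.quote() in Python 2):

import shlex
import subprocess

input_file = "inp.txt"  # imagine this value came from user input
subprocess.call("./x < " + shlex.quote(input_file), shell = True)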
[1]
Avoid use of the shell in the following cases:
If your Python code must run on platforms other than Unix-like ones, such as Windows.
If performance is paramount.
If you find yourself "outsourcing" tasks better handled on the Python side.
If you're concerned about lack of predictability of the shell environment (as @alastair is):
subprocess.call with shell = True always creates non-interactive non-login instances of /bin/sh - note that it is NOT the user's default shell that is used.
sh does NOT read initialization files for non-interactive non-login shells (neither system-wide nor user-specific ones).
Note that even on platforms where sh is bash in disguise, bash will act this way when invoked as sh.
Every shell instance created with subprocess.call with shell = True is its own world, and its environment is neither influenced by previous shell instances nor does it influence later ones.
However, the shell instances created do inherit the environment of the python process itself:
If you started your Python program from an interactive shell, then that shell's environment is inherited. Note that this only pertains to the current working directory and environment variables, and NOT to aliases, shell functions, and shell variables.
Generally, that's a feature, given that Python (CPython) itself is designed to be controllable via environment variables (for 2.x, see https://docs.python.org/2/using/cmdline.html#environment-variables; for 3.x, see https://docs.python.org/3/using/cmdline.html#environment-variables).
If needed, you can supply your own environment to the shell via the env parameter; note, however, that you'll have to supply the entire environment in that event, potentially including variables such as USER and HOME, if needed; simple example, defining $PATH explicitly:
subprocess.call('echo $PATH', shell = True,
                env = { 'PATH': '/sbin:/bin:/usr/bin' })
The shell does I/O redirection for a process. Based on what you're saying, the subprocess module does not do I/O redirection like that. To demonstrate, run:
subprocess.call(["sh","-c", "./x <inp.txt"])
That runs the shell and should redirect the I/O. With your code, your program ./x is being given an argument <inp.txt which it is ignoring.
NB: the alternative call to subprocess.call is purely for diagnostic purposes, not a recommended solution. The recommended solution involves reading the (Python 2) subprocess module documentation (or the Python 3 documentation for it) to find out how to do the redirection using the module.
import subprocess
i_file = open("inp.txt")
subprocess.call("./x", stdin=i_file)  # the child's stdin now reads from inp.txt
i_file.close()
If your script is about to exit so you don't have to worry about wasted file descriptors, you can compress that to:
import subprocess
subprocess.call("./x", stdin=open("inp.txt"))
By default, the subprocess module does not pass the arguments to the shell. Why? Because running commands via the shell is dangerous; unless they're correctly quoted and escaped (which is complicated), it is often possible to convince programs that do this kind of thing to run unwanted and unexpected shell commands.
Using the shell for this would be wrong anyway. If you want to take input from a particular file, you can use subprocess.Popen, setting the stdin argument to a file descriptor for the file inp.txt (you can get the file descriptor by calling fileno() on a Python file object).
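A minimal sketch of that approach (passing the file object itself also works, since subprocess calls fileno() on it):

import subprocess

with open("inp.txt") as i_file:
    proc = subprocess.Popen(["./x"], stdin=i_file)
    proc.wait()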
I am trying to set an environment variable using Python, and this variable is used in another script.
My code is:
#!/usr/bin/env python
import os
os.environ['VAR'] = '/current_working_directory'
After executing the above Python script, I execute a second script which uses the same variable VAR, but it is not working.
But when I do export VAR='/current_working_directory' and run the second script, it works fine. I tried putenv() as well.
This depends on how the second Python script gets called.
If you have a shell, and the shell first runs the first Python script and then the second, it won't work. The reason is that the first Python script inherits its environment from the shell, but modifying os.environ[] or calling putenv() only modifies that inherited copy --- the first Python script's own environment, not the shell's.
When the shell then runs the second Python script, it again inherits the environment from the shell ... and because the shell's environment was never modified, you cannot see the change the first Python script made.
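You can see the direction of inheritance with a small experiment (a sketch):

import os
import subprocess

os.environ["VAR"] = "visible to children"
# the child shell inherits the modified copy...
subprocess.call('echo "child sees: $VAR"', shell=True)
# ...but the shell that launched this script is unaffected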
One way to achieve your goal is to use a helper file:
#!/bin/bash
rm -f envfile
./first_pythonscript       # expected to write shell variable assignments into "envfile"
test -f envfile && . envfile
rm -f envfile
./second_pythonscript
That code is crude; it won't work if two instances of the shell script run at the same time, so don't use it as-is. But I hope you get the idea.
Yet another way is to make your second_pythonscript not a program but a Python module that first_pythonscript can import. You can also make it a hybrid, a library when imported and a program when run, via the if __name__ == "__main__": construct.
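A minimal sketch of that hybrid (assuming the file is importable as second_pythonscript.py):

# second_pythonscript.py -- library when imported, program when run
import os

def main():
    print(os.environ.get("VAR", "VAR is not set"))

if __name__ == "__main__":
    main()

first_pythonscript can then set os.environ['VAR'] and call second_pythonscript.main() directly, all within a single process.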
And finally, you can use one of the os functions, e.g. os.spawnvpe, which runs a program with an explicitly supplied environment.
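A sketch (P_WAIT makes the call block until the spawned program exits):

import os

env = dict(os.environ, VAR="/current_working_directory")
# the spawned program sees the modified environment
os.spawnvpe(os.P_WAIT, "./second_pythonscript", ["./second_pythonscript"], env)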
This code should provide the required environment to your 2nd script:
#!/usr/bin/env python
import os
os.environ['VAR'] = '/current_working_directory'
execfile("/path/to/your/second/script.py")
A child process cannot change the environment of its parent process.
The general shell programming solution to this is to make your first script print out the value you need, and assign it in the calling process. Then you can do
#!/bin/sh
# Assign VAR to the output from first.py
VAR="$(first.py)"
export VAR
exec second.py
or even more succinctly
#!/bin/sh
VAR="$(first.py)" second.py
Obviously, if the output from first.py is trivial to obtain without invoking Python, that's often a better approach; but for e.g. having two scripts call different functions from a library and/or communicating with a common back end, this is a common pattern.
Using Python for the communication between two pieces of Python code is often more elegant and Pythonic, though.
#!/usr/bin/env python
from yourlib import first, second

value = first()
# maybe putenv VAR here if that's really, really necessary
second(value)