Assume we are using Linux:
In Perl, the exec function executes an external program and immediately exits itself, leaving the external program in the same shell session.
A very close answer using Python is https://stackoverflow.com/a/13256908
However, the Python solution using start_new_session=True starts the external program via setsid, which means that solution is suited to making a daemon, not an interactive program.
Here is a simple example using Perl:
perl -e '$para=qq(-X --cmd ":vsp");exec "vim $para"'
After vim is started, the original Perl program has exited, and vim is still in the same shell session (vim is not moved to a new session group).
How can I get the same behavior with Python?
Perl is just wrapping the exec* system call functions here. Python has the same wrappers, in the os module, see the os.exec* documentation:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller.
To do the same in Python:
python -c 'import os; para="-X --cmd :vsp".split(); os.execlp("vim", "vim", *para)'
os.execlp looks up the binary on $PATH using its first argument; the remaining arguments become the new program's argv, so the program name is repeated as argv[0]. (Note that without a shell there is nothing to strip quotes, so the ":vsp" argument is passed unquoted.)
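For comparison, a minimal equivalent sketch using os.execvp, which takes the whole argv as a single list and likewise looks the binary up on $PATH:

import os

# Replace the current Python process with vim; argv[0] is "vim" itself.
os.execvp("vim", ["vim", "-X", "--cmd", ":vsp"])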
The subprocess module is only ever suitable for running processes next to the Python process, not for replacing the Python process. On POSIX systems, the subprocess module uses the low-level exec* functions to implement its functionality: a fork of the Python process is then replaced with the command you wanted to run with subprocess.
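To illustrate, a minimal sketch of that fork-and-exec pattern (using vim as a stand-in command):

import os

pid = os.fork()
if pid == 0:
    # child: the process image is replaced; on success this line never returns
    os.execvp("vim", ["vim", "-X"])
else:
    # parent: the Python process keeps running and waits for the child
    os.waitpid(pid, 0)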
Related
In bash or C, exec will terminate the current process and replace it with something new. Can this functionality be accomplished in Python? I don't wish to simply execute some code then continue running the python script (even just to immediately exit), or spawn a child process.
My specific situation is the following. I'm developing a command line application (python + curses) to manage data generation/analysis in the context of scientific computing. It will sometimes be necessary for me to terminate the application and go wrangle with the data in a given subdirectory manually. It would be convenient if I could do something like:
# within python script; d=<some directory>
if exit_and_goto_dir:
    exec("pushd {}".format(d))  # C-style exec -- terminate program and execute new command
The following do not work, for example:
# attempt 1
if exit_and_goto_dir:
    os.system("pushd {}".format(d))
    exit(0)  # pushd does not outlast the python script

# attempt 2
if exit_and_goto_dir:
    os.chdir(d)
    exit(0)
This behavior isn't really critical. There are plenty of workarounds (e.g. print the directory I care about to the terminal, then cd manually). Mostly I'm curious whether it's possible. Thanks!
The os module contains Python wrappers for the various exec* functions in the C standard library:
>>> [method for method in dir(os) if method.startswith("exec")]
['execl', 'execle', 'execlp', 'execlpe', 'execv', 'execve', 'execvp', 'execvpe']
However, pushd is not an executable that you can exec but rather a bash builtin (and the same is true for cd).
What you could do would be to change directory inside the python process and then exec an interactive shell:
import os

os.chdir(d)                              # d is the target directory
os.execvp("bash", ["bash", "--login"])   # replace Python with a login shell
Python's current directory will be inherited by the shell that you exec. When you later exit from that shell, control will then return to the original login shell from which you invoked python (unless you used that shell's exec command to launch python in the first place).
What you can't do is modify the current directory of the calling shell from inside Python, in order to return to the shell prompt in a different working directory from the one in which python was invoked. (At least there's no straightforward way. There is a hack involving attaching gdb to the shell, described here, though it only worked as root when I tried it on Ubuntu.)
I am trying to compile and run a C program using Python and want to give it input using the "<" operator, but it's not working as expected.
If I compile the C program and run it, giving input through a file, it works; for example:
./a.out <inp.txt
But if I try to do the same using a Python script, it does not work as expected.
For example:
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x"])
and
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x","<inp.txt"])
Both scripts ask for input through the terminal, but I think the second script should read from the file. Why do both programs behave the same?
To complement @Jonathan Leffler's and @alastair's helpful answers:
Assuming you control the string you're passing to the shell for execution, I see nothing wrong with using the shell for convenience. [1]
subprocess.call() has an optional Boolean shell parameter, which causes the command to be passed to the shell, enabling I/O redirection, referencing environment variables, ...:
subprocess.call("./x <inp.txt", shell = True)
Note how the entire command line is passed as a single string rather than an array of arguments.
[1]
Avoid use of the shell in the following cases:
If your Python code must run on platforms other than Unix-like ones, such as Windows.
If performance is paramount.
If you find yourself "outsourcing" tasks better handled on the Python side.
If you're concerned about lack of predictability of the shell environment (as @alastair is):
subprocess.call with shell=True always creates non-interactive, non-login instances of /bin/sh - note that it is NOT the user's default shell that is used.
sh does NOT read initialization files for non-interactive non-login shells (neither system-wide nor user-specific ones).
Note that even on platforms where sh is bash in disguise, bash will act this way when invoked as sh.
Every shell instance created with subprocess.call with shell=True is its own world, and its environment is neither influenced by previous shell instances nor does it influence later ones.
However, the shell instances created do inherit the environment of the python process itself:
If you started your Python program from an interactive shell, then that shell's environment is inherited. Note that this only pertains to the current working directory and environment variables, and NOT to aliases, shell functions, and shell variables.
Generally, that's a feature, given that Python (CPython) itself is designed to be controllable via environment variables (for 2.x, see https://docs.python.org/2/using/cmdline.html#environment-variables; for 3.x, see https://docs.python.org/3/using/cmdline.html#environment-variables).
If needed, you can supply your own environment to the shell via the env parameter; note, however, that you'll have to supply the entire environment in that event, potentially including variables such as USER and HOME, if needed; simple example, defining $PATH explicitly:
subprocess.call('echo $PATH', shell=True,
                env={'PATH': '/sbin:/bin:/usr/bin'})
The shell does I/O redirection for a process. Based on what you're saying, the subprocess module does not do I/O redirection like that. To demonstrate, run:
subprocess.call(["sh","-c", "./x <inp.txt"])
That runs the shell and should redirect the I/O. With your code, your program ./x is being given an argument <inp.txt which it is ignoring.
NB: the alternative call to subprocess.call is purely for diagnostic purposes, not a recommended solution. The recommended solution involves reading the (Python 2) subprocess module documentation (or the Python 3 documentation for it) to find out how to do the redirection using the module.
import subprocess
i_file = open("inp.txt")
subprocess.call("./x", stdin=i_file)
i_file.close()
If your script is about to exit so you don't have to worry about wasted file descriptors, you can compress that to:
import subprocess
subprocess.call("./x", stdin=open("inp.txt"))
By default, the subprocess module does not pass the arguments to the shell. Why? Because running commands via the shell is dangerous; unless they're correctly quoted and escaped (which is complicated), it is often possible to convince programs that do this kind of thing to run unwanted and unexpected shell commands.
Using the shell for this would be wrong anyway. If you want to take input from a particular file, you can use subprocess.Popen, setting the stdin argument to a file descriptor for the file inp.txt (you can get the file descriptor by calling fileno() on a Python file object).
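For illustration, a minimal sketch of that approach (assuming the compiled binary ./x from the question):

import subprocess

# Feed ./x its stdin from inp.txt via the file's descriptor.
with open("inp.txt") as i_file:
    proc = subprocess.Popen(["./x"], stdin=i_file.fileno())
    proc.wait()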
Recently, I came across the Linux command source and then found this answer on what it does.
My understanding was that source executes the file that is passed to it, and it did work for a simple shell script. Then I tried using source on a Python script, but it did not work.
The Python script has a shebang (e.g. #!/usr/bin/python) and I am able to do a ./python.py, as the script has executable permission. If that is possible, source python.py should also be possible, right? The only difference is ./ executes in a new shell and source executes in the current shell. Why is it not working on a .py script? Am I missing something here?
You're still not quite on-target understanding what source does.
source does indeed execute commands from a file in the current shell process. It does this effectively as if you had typed them directly into your current shell.
The reason this is necessary is that when you run a shell script without sourcing it, it will spawn a subshell (a new process). When this process exits, any changes made within that script are lost as you return to the shell from which it was spawned.
It follows, then, that you cannot source Python into a shell, because the Python interpreter is always a different process from your shell. Running a Python script spawns a brand-new process, and when that process exits, its state is lost.
Of course, if your shell is actually Python (which I would not recommend!), you can still "source" into it, by using import.
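As a tiny illustration of that analogy (the module name settings.py is hypothetical):

# settings.py contains:
#     DATA_DIR = "/tmp/data"

import settings               # "sources" the module into the current process
print(settings.DATA_DIR)      # the module's names now live in this interpreter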
source executes the file and places whatever functions, aliases, and environment variables are created by that script into the shell that called it. It does this by not spawning a new process, but instead executing the script in the current process.
The shebang is used by the shell to indicate what to use to spawn the new process; for source it is ignored, and the file is interpreted as the language of the current process (bash in this case). This is why using source on a Python file failed for you.
I'm using subprocess to start a process and let it run in the background, it's a server application. The process itself is a java program with a thin wrapper (which among other things, means that I can just launch it as an executable without having to call java explicitly).
I'm using Popen to run the process and when I set shell=False, it runs but it spawns two processes instead of one. The first process has init as its parent and when I inspect it via ps, it just displays the raw command. However, the second process displays with the expanded java arguments (-D and -X flags) - this is what I expect to see and how the process looks when I run the command manually.
Interestingly, when I set shell=True, the command fails. The command does have a help message but it doesn't seem to indicate that there's a problem with my argument list (there shouldn't be). Everything is the same except the shell named argument to Popen.
I'm using Python 2.7 on Ubuntu. Not really sure what's going on here, any help is appreciated. I suppose it's possible that the java command is doing an exec/fork and for some reason, the parent process isn't dying when I start it through Python.
I saw this SO question which looked promising but doesn't change the behavior that I'm experiencing.
This is actually more of a question about the wrapper than about Python -- you would get the same behavior running it from any other language.
To get the behavior you want, the wrapper would want to have the line where it invokes the JVM look as follows:
exec java -D... -cp ... main.class.here "$#"
...as opposed to lacking the exec in front:
java -D... -cp ... main.class.here "$#"
In the former case, the process image of the wrapper is replaced with that of the JVM it invokes; in the latter, the wrapper waits for the JVM to exit, and then continues to run.
If the wrapper does any cleanup after JVM exit, using exec will prevent this from happening and would thus be the Wrong Thing -- in this case, you would want the wrapper to still exist while the JVM runs, as otherwise it would be unable to perform cleanup afterwards.
Be aware that if the wrapper is responsible for detaching the subprocess, it needs to be able to close open file handles for this to happen correctly. Consider passing close_fds=True to your Popen call if your parent process has more file descriptors than only stdin, stdout and stderr open.
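A minimal sketch on the Python side (the wrapper name ./server-wrapper is hypothetical):

import subprocess

# Launch the wrapper as a background server; close_fds=True keeps the
# child from inheriting any descriptors beyond stdin/stdout/stderr.
proc = subprocess.Popen(["./server-wrapper"], close_fds=True)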
I've got some strange behavioral differences between Python's subprocess.call() and os.system() that appears to be related to setgid. The difference is causing Perl's taint checks to be invoked when subprocess.call() is used, which creates problems because I do not have the ability to modify all the Perl scripts that would need untaint code added to them.
Example, "process.py"
#!/usr/bin/python
import os, subprocess
print "Python calling os.system"
os.system('perl subprocess.pl true')
print "Python done calling os.system"
print "Python calling subprocess.call"
subprocess.call(['perl', 'subprocess.pl', 'true'])
print "Python done calling subprocess.call"
"subprocess.pl"
#!/usr/bin/perl
print "perl subprocess\n";
`$ARGV[0]`;
print "perl subprocess done\n";
The output - both runs of subprocess.pl should be the same, but the one run with subprocess.call() gets a taint error:
mybox> process.py
Python calling os.system
perl subprocess
perl subprocess done
Python done calling os.system
Python calling subprocess.call
perl subprocess
Insecure dependency in `` while running setgid at subprocess.pl line 4.
Python done calling subprocess.call
mybox>
While using os.system() works, I would really rather be using subprocess.check_call() as it's more forward-compatible and has nice checking behaviors.
Any suggestions or documentation that might explain why these two are different? Is it possible this is some strange setting in my local unix environment that is invoking these behaviors?
I think your error is with perl, or the way it's interacting with your environment.
Your backtick process is calling setgid for some reason. The only way I can replicate this is to set the setgid bit on /usr/bin/perl (-rwxr-sr-x). [EDIT] Having the python binary setgid does this too!
[EDIT] I forgot that os.system is working for you. I think the only relevant difference here is that with os.system the environment is not inherited by the subprocess. Look through the environment of each subprocess, and you may find your culprit.
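One diagnostic sketch for comparing the two environments (diff the files afterwards):

import os, subprocess

# Dump the environment each child actually sees.
os.system("env > /tmp/env_system.txt")
with open("/tmp/env_subprocess.txt", "w") as f:
    subprocess.call(["env"], stdout=f)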
Doesn't happen for me:
$ python proc.py
Python calling os.system
perl subprocess
perl subprocess done
Python done calling os.system
Python calling subprocess.call
perl subprocess
perl subprocess done
Python done calling subprocess.call
$ python --version
Python 2.5.2
$ perl --version
This is perl, v5.8.8 built for i486-linux-gnu-thread-multi
What are your version numbers?
Under what sort of account are you running?
EDIT:
Sorry, I missed the title - I don't have Python 2.6 anywhere easy to access, so I'll have to leave this problem.
EDIT:
So it looks like we worked out the problem - sgid on the python 2.6 binary.
It would also be interesting to see if subprocess with the shell also avoids the problem.
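For instance, a quick check along those lines, reusing the files from the question:

import subprocess

# Does routing the same command through the shell avoid the taint error?
subprocess.call("perl subprocess.pl true", shell=True)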