Subprocess pipes stdin without using files - python

I've got a main process in which I run a subprocess, whose stdin I want to pipe to. I know I can do it using files:
import subprocess
subprocess.call('shell command', stdin=open('somefile', 'r'))
Is there any option to use a custom stdin pipe WITHOUT actual hard-drive files? For example, is there any option to use a list of strings (each list element being a line)?
I know that python subprocess calls .readline() on the pipe object.

First, use subprocess.Popen; call() is just a shortcut for it, and you'll need access to the Popen instance so you can write to the pipe. Then pass subprocess.PIPE as the stdin kwarg. Something like:
import subprocess
proc = subprocess.Popen('shell command', stdin=subprocess.PIPE)
proc.stdin.write(b"my data\n")  # bytes, not str, in Python 3
proc.stdin.close()
proc.wait()
http://docs.python.org/2/library/subprocess.html#subprocess.PIPE
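To feed a list of strings (the original question), a minimal sketch using communicate(), which writes all the data, closes stdin, and waits for the process to exit, avoiding deadlocks from full pipe buffers. Here cat stands in for the real shell command:

```python
import subprocess

lines = ["first line", "second line", "third line"]

# cat is a stand-in for the real command; text=True gives str in/out
# instead of bytes (Python 3.7+).
proc = subprocess.Popen(
    ["cat"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# communicate() writes everything, closes the pipe, and waits,
# so there is no risk of blocking on a full pipe buffer.
out, _ = proc.communicate("\n".join(lines) + "\n")
print(out, end="")
```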

Related

What exactly means that the stdout (or stdin) is set to subprocess.PIPE in Popen?

I have already read the documentation about subprocesses in python, but still cannot quite understand this.
When using Popen, and we set the parameter stdout (or stdin) to subprocess.PIPE, what does that actually mean?
The documentation says
stdin, stdout and stderr specify the executed program’s standard
input, standard output and standard error file handles,
respectively... PIPE indicates that a new pipe to the child should be
created.
what does this mean?
For example, if I have two subprocesses, both with stdout set to PIPE, are the outputs mixed? (I don't think so.)
More importantly, if I have one subprocess with stdout set to PIPE and later another subprocess with stdin set to PIPE, is that the same pipe, so the output of one goes to the other?
Can someone explain that part of the documentation, which seems cryptic to me?
Additional notes:
For example
import os
import signal
import subprocess
import time
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen("sar -u 1 > mylog.log", stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)
# Here another subprocess
subprocess.Popen(some_command, stdin=subprocess.PIPE)
time.sleep(10)
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
Does the output of sar go as input to some_command?
Please see the documentation.
So as you can see, PIPE is a special value: it "indicates that a new pipe to the child should be created." This means stdout=subprocess.PIPE and stderr=subprocess.PIPE result in two different pipes.
And for your example, the answer is no. These are two different pipes.
You can actually print out subprocess.PIPE:
print(subprocess.PIPE)
# -1
print(type(subprocess.PIPE))
# <class 'int'>
# So it is just an integer representing a special case.
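If you do want the output of one child to feed another, you have to wire the pipes together yourself by passing the first process's stdout as the second's stdin. A sketch, with echo and tr as stand-in commands:

```python
import subprocess

# The producer's stdout becomes the consumer's stdin only because we
# pass it explicitly; two PIPEs on two Popen calls are never connected.
producer = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    ["tr", "a-z", "A-Z"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)
producer.stdout.close()  # so the producer sees SIGPIPE if the consumer exits
out, _ = consumer.communicate()
print(out)  # b'HELLO\n'
```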

subprocess Popen stdin will only input and run after the script has finished

Description:
I was trying to make a shell that can be interacted with through a chat application, so I need cmd.exe as a subprocess and to pass strings into the process.
I have this:
from subprocess import Popen
from subprocess import PIPE as p
proc = Popen("cmd", stdout=p, stdin=p, shell=True)
Usually, the way to pass input to the process is proc.stdin.write(), but it seems that the string is only passed in and run after the Python script has finished.
for example, I have
#same thing above
proc.stdin.write("ping 127.0.0.1".encode())
time.sleep(10)
the script will wait for 10 seconds and only then pass and run the ping command, which means it's impossible to get the result with stdout.read() because there is nothing yet.
I have tried to use subprocess.Popen.communicate() but it closes the pipe after one input.
Is there any way to solve the "only run the command after script finish" thing, or make communicate() not close the pipe?
Writes to pipes are buffered; you need to flush the buffer.
proc.stdin.write("ping 127.0.0.1".encode())
proc.stdin.flush()
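A fuller sketch of the same fix, with cat standing in for cmd so the example is portable. Line-buffered text pipes plus an explicit flush get the data to the child immediately, not when the script exits:

```python
import subprocess

# cat stands in for cmd here; bufsize=1 with text mode requests line
# buffering, and the explicit flush guarantees the child sees the line now.
proc = subprocess.Popen(
    ["cat"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,
)
proc.stdin.write("ping 127.0.0.1\n")
proc.stdin.flush()
reply = proc.stdout.readline()  # child has echoed the line back already
proc.stdin.close()
proc.wait()
```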

How to pass subprocess control to regular stdin after using a pipe?

What I'd like to do is to, in Python, programmatically send a few initial commands via stdin to a process, and then pass input to the user to let them control the program afterward. The Python program should simply wait until the subprocess exits due to user input. In essence, what I want to do is something along the lines of:
import subprocess
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
# Send initial commands.
p.stdin.write(b"three\ninitial\ncommands\n")
p.stdin.flush()
# Give over control to the user.
# …Although stdin can't simply be reassigned
# in post like this, it seems.
p.stdin = sys.stdin
# Wait for the subprocess to finish.
p.wait()
How can I pass stdin back to the user (not using raw_input, since I need the user's input to take effect on every keypress, not just after pressing Enter)?
Unfortunately, there is no standard way to splice your own stdin to some other process's stdin for the duration of that process, other than to read from your own stdin and write to that process, once you have chosen to write to that process in the first place.
That is, you can do this:
proc = subprocess.Popen(...) # no stdin=
and the process will inherit your stdin; or you can do this:
proc = subprocess.Popen(..., stdin=subprocess.PIPE, ...)
and then you supply the stdin to that process. But once you have chosen to supply any of its stdin, you supply all of its stdin, even if that means you have to read your own stdin.
Linux offers a splice system call (documentation at man7.org, documentation at linux.die.net, Wikipedia, linux pipe data from file descriptor into a fifo), but your best bet is probably a background thread to copy the data.
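A sketch of that background-thread approach. In the real program, src would be sys.stdin.buffer; a BytesIO stands in here so the example is self-contained, and cat plays the role of the child process:

```python
import io
import subprocess
import threading

def forward(src, proc):
    # Copy bytes from src to the child's stdin until EOF on src,
    # then close the child's stdin so it can exit.
    for chunk in iter(lambda: src.read(1024), b""):
        try:
            proc.stdin.write(chunk)
            proc.stdin.flush()
        except BrokenPipeError:
            break
    proc.stdin.close()

proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE)

# Send the scripted commands first...
proc.stdin.write(b"three\ninitial\ncommands\n")
proc.stdin.flush()

# ...then hand the rest of "stdin" over to the copier thread.
# In real use: threading.Thread(target=forward, args=(sys.stdin.buffer, proc)).
user_input = io.BytesIO(b"typed by the user\n")
t = threading.Thread(target=forward, args=(user_input, proc))
t.start()
t.join()
proc.wait()
```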
Searching for this same thing, I found that, at least in my case, the pexpect library takes care of this:
https://pexpect.readthedocs.io/en/stable/
p = pexpect.spawn("ssh myhost")
p.sendline("some_line")
p.interact()
As its name suggests, you can automate a lot of interaction before handing it over to the user.
Note, in your case you may want an output filter:
Using expect() and interact() simultaneously in pexpect

Need help to read out the output of a subprocess

My python script (python 3.4.3) calls a bash script via subprocess.
OutPST = subprocess.check_output(cmd,shell=True)
It works, but the problem is that I only get half of the data. The subprocess I call itself calls another subprocess, and my guess is that when the "sub-subprocess" sends EOF, my program thinks that's it and ends the check_output.
Has someone an idea how to get all the data?
You should use subprocess.run() unless you really need fine-grained control over talking to the process via its stdin (or need to do something else while the process is running instead of blocking until it finishes). It makes capturing output super easy:
from subprocess import run, PIPE
result = run(cmd, stdout=PIPE, stderr=PIPE)
print(result.stdout)
print(result.stderr)
If you want to merge stdout and stderr (like how you'd see it in your terminal if you didn't do any redirection), you can use the special destination STDOUT for stderr:
from subprocess import STDOUT
result = run(cmd, stdout=PIPE, stderr=STDOUT)
print(result.stdout)

Using file descriptors to communicate between processes

I have the following python code:
import pty
import subprocess
os=subprocess.os
from subprocess import PIPE
import time
import resource
pipe = subprocess.Popen(["cat"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
                        close_fds=True)
skip = [f.fileno() for f in (pipe.stdin, pipe.stdout, pipe.stderr)]
pid, child_fd = pty.fork()
if pid == 0:
    # Child: close every inherited fd except the pipe's, then exec zsh.
    max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
    fd = 3
    while fd < max_fd:
        if fd not in skip:
            try:
                os.close(fd)
            except OSError:
                pass
        fd += 1
    environment = os.environ.copy()
    environment.update({"FD": str(pipe.stdin.fileno())})
    os.execvpe("zsh", ["zsh", "-i", "-s"], environment)  # argv[0] must be included
else:
    os.write(child_fd, "echo a >&$FD\n")
    time.sleep(1)
    print pipe.stdout.read(2)
How can I rewrite it so that it will not use Popen and cat? I need a way to pass data from a shell function running in the interactive shell that will not mix with data created by other functions (so I cannot use stdout or stderr).
Ok, I think I've got a handle on your question now, and see two different approaches you could take.
If you absolutely want to provide the shell in the child process with an already-open file descriptor, then you can replace the Popen() of cat with a call to os.pipe(). That will give you a connected pair of real file descriptors (not Python file objects). Anything written to the second file descriptor can be read from the first, replacing your jury-rigged cat-pipe. (Although "cat-pipe" is fun to say...). A socket pair (socket.socketpair()) can also be used to achieve the same end if you need a bidirectional pair.
Alternatively, you could simplify your life even further by using a named pipe (aka FIFO). If you aren't familiar with the facility, a named pipe is a uni-directional pipe located in the filesystem namespace. The os.mkfifo() function will create the pipe on the filesystem. You can then open the pipe for reading in your primary process and open it for writing / direct output to it from your shell child process. This should simplify your code and open the option of using an existing library like Pexpect to interact with the shell.
