I'm trying to send cmd.exe a series of commands, choosing each one according to the answers it sends back.
I'm getting a runtime error message:
ValueError: I/O operation on closed file
When I run something like this:
import subprocess
process = subprocess.Popen("cmd.exe", stdout=subprocess.PIPE,stdin=subprocess.PIPE)
answer = process.communicate(input="some command\n" + '\n')[0]
"""
choosing another command according to answer
"""
print process.communicate(input=another_command + '\n')[0]
process.kill()
Any idea on how to solve the problem?
Thank you for your help!
Do not send your commands to cmd.exe. Call your commands directly like:
subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE,stdin=subprocess.PIPE)
Perhaps you will not need the pipe for stdin if you use it this way.
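A minimal sketch of that idea, assuming each step can be expressed as a standalone command ("dir" and "echo next" below are just placeholders):
import subprocess
# Sketch: run each command as its own subprocess instead of driving cmd.exe.
# "dir" and "echo next" are placeholders for whatever commands you actually need.
answer = subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE).communicate()[0]
# choose the next command based on answer, then run it the same way
next_command = "echo next"
print subprocess.Popen(next_command, shell=True, stdout=subprocess.PIPE).communicate()[0]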
The error is normal. communicate closes the standard input of the subprocess to indicate that no more input is pending so that the subprocess can flush its output. So you cannot chain multiple communicate calls on one single subprocess.
But if your commands are simple enough (not many kilobytes of input data), and if you do not need to collect and process the output of one command before sending the next one, you should be able to write all the commands in sequence, reading as much output as possible between two of them. After the last command, you can then close the subprocess's standard input and wait for it to terminate, still collecting the output:
process = subprocess.Popen("cmd.exe", stdout=subprocess.PIPE, stdin=subprocess.PIPE)
process.stdin.write("some command\n\n")
partial_answer = process.stdout.read() # all or part of the answer can still be buffered subprocess side
...
process.stdin.write("some other command\n\n")
...
# after last command, time to close subprocess
process.stdin.close()
end_of_answer = ""
retcode = None
while retcode is None:
    end_of_answer += process.stdout.read()
    retcode = process.poll()
I am using subprocess to ping a server, but I want to receive the full response. In the past I have used os to ping, but this only returned 1 or 0.
The code that I am using is:
import subprocess
p = subprocess.Popen(['ping', '8.8.8.8'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print out
I would like to see the response that I would see if I had run it from the terminal. Please note that I am using -c and not -n because I am using Linux as the OS.
I am confused as to why this doesn't work, because when I ran similar code, it printed out the expected response:
import subprocess
p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print out
The code above printed out the list of files and folders in the directory which the python script was saved in, so the code that I am using seems to be correct.
My question is: how can I have the response from pinging a server assigned to a variable that I can then print out, like I can when I run ls?
The default ping is a continuous process; it doesn't stop until you interrupt it, e.g. with ctrl-C. Of course, with subprocess, ctrl-C is not possible.
subprocess.communicate will buffer all the output in memory until the program ends (that is, never); you could in fact use it to create an out-of-memory error ;-).
If you just want a few pings, or even one ping, use the -c option of ping:
p = subprocess.Popen(['ping', '-c1', '8.8.8.8'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
If you'd like continuous polling, you could wrap this in a while loop inside Python.
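For example, a rough sketch of such a polling loop (the one-second delay is an arbitrary choice):
import subprocess
import time
# Rough sketch: poll by running one ping per loop iteration.
while True:
    p = subprocess.Popen(['ping', '-c1', '8.8.8.8'],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print out            # full ping output for this round
    time.sleep(1)        # arbitrary delay between polls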
.communicate() waits for the child process to finish. ping with the given arguments does not exit unless you explicitly stop it, e.g., by sending it Ctrl+C.
Here are various methods to stop reading process output in Python without hanging.
If you want to see the output while the process is still running, see Python: read streaming input from subprocess.communicate().
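As a rough illustration of that last point (not the linked answer itself, just the general idea of reading output while ping runs):
import subprocess
# Rough sketch: read ping's output line by line while it is still running.
p = subprocess.Popen(['ping', '8.8.8.8'], stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    print line.rstrip()   # handle each reply as it arrives
    # break (and p.terminate()) once you have seen enough replies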
I'm trying to handle tcpdump output in Python.
What I need is to run tcpdump (which captures packets and gives me information), read the output, and process it.
The problem is that tcpdump keeps running forever, and I need to read the packet info as soon as it is output and keep doing so.
I tried looking into Python's subprocess module and tried calling tcpdump using Popen and piping stdout, but it doesn't seem to work.
Any directions on how to proceed with this?
import subprocess
def redirect():
    tcpdump = subprocess.Popen("sudo tcpdump...", stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
    while True:
        s = tcpdump.stdout.readline()
        # do something with s
redirect()
You can make tcpdump line-buffered with "-l". Then you can use subprocess to capture the output as it comes out.
import subprocess as sub
p = sub.Popen(('sudo', 'tcpdump', '-l'), stdout=sub.PIPE)
for row in iter(p.stdout.readline, b''):
    print row.rstrip() # process here
By default, pipes are block-buffered and interactive output is line-buffered. It sounds like you need a line-buffered pipe coming from tcpdump in a subprocess.
In the old days, we'd recommend Dan Bernstein's "pty" program for this kind of thing. Today, it appears that pty hasn't been updated in a long time, but there's a newer program called "empty" which is more or less the same idea:
http://empty.sourceforge.net/
You might try running tcpdump under empty in your subprocess to make tcpdump line buffered even though it's writing to a pipe.
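If you prefer to stay inside the standard library, Python's pty module can serve the same purpose; here is a rough, Linux-only sketch (the tcpdump arguments are left to you):
import os
import pty
import subprocess
# Rough sketch: give tcpdump a pseudo-terminal so it writes as if to a
# terminal (line-buffered) instead of to a block-buffered pipe.
master, slave = pty.openpty()
proc = subprocess.Popen(['sudo', 'tcpdump'], stdout=slave, stderr=slave)
os.close(slave)                  # the parent only needs the master end
try:
    while True:
        try:
            data = os.read(master, 4096)   # returns as soon as output is available
        except OSError:                    # raised once the child closes its end
            break
        if not data:
            break
        print data,                        # process the captured output here
finally:
    proc.terminate()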
I am writing a script in which an external system command may sometimes require user input. I am not able to handle that properly. I have tried using os.popen4 and the subprocess module but could not achieve the desired behavior.
The example below shows this problem using the "cp" command ("cp" is only used to illustrate the problem; I am actually calling a different exe which may similarly prompt for a user response in some scenarios). In this example there are two files present on disk, and when the user tries to copy file1 to file2, a confirmation message comes up.
proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,)
stdout_val, stderr_val = proc.communicate()
print stdout_val
b.txt?
proc.communicate("y")
Now in this example, if I read only stdout/stderr and print it, and later try to write "y" or "n" based on the user's input, I get an error that the channel is closed.
Can someone please help me achieve this behavior in Python, so that I can print stdout first, then take user input and write to stdin later on?
I found another solution (threading) in Non-blocking read on a subprocess.PIPE in python, but I am not sure whether it would help. It appears it is printing the question from the cp command; I have modified the code, but I am not sure how to write the input in the threaded code.
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty

ON_POSIX = 'posix' in sys.builtin_module_names

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

p = Popen(['cp', '-i', 'a.txt', 'b.txt'], stdin=PIPE, stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.start()

try:
    line = q.get_nowait()
except Empty:
    print('no output yet')
else:
    pass
Popen.communicate will run the subprocess to completion, so you can't call it more than once. You could use the stdin and stdout attributes directly, although that's risky as you could deadlock if the process uses block buffering or the buffers fill up:
stdout_val = proc.stdout.readline()
print stdout_val
proc.stdin.write('y\n')
As there is a risk of deadlock and because this may not work if the process uses block buffering, you would do well to consider using the pexpect package instead.
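A rough sketch of the pexpect approach (the 'overwrite' prompt fragment is an assumption; match whatever your command really prints):
import pexpect
# Sketch: let pexpect drive the interactive prompt from cp -i.
child = pexpect.spawn('cp -i a.txt b.txt')
index = child.expect(['overwrite', pexpect.EOF])   # assumed prompt fragment
if index == 0:
    answer = raw_input("overwrite b.txt? (y/n) ")  # ask the real user
    child.sendline(answer)
    child.expect(pexpect.EOF)                      # wait for cp to finish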
I don't have a technical answer to this question, more of just a solution. It has something to do with the way the process waits for the input: once you call communicate on the process, even a None input is enough to close its input.
For your cp example, what you can do is check the return code immediately with proc.poll(). If the return value is None, you might assume it is waiting for input, and you can ask your user the question. You can then pass the response to the process via proc.communicate(response). It will then pass the value along and the process will proceed.
Maybe someone else can chime in with a more technical reason why an initial communicate with a None value closes the process.
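A rough sketch of that poll-then-communicate idea, in the same Python 2 style as the question (the prompt text shown to the user is only illustrative):
import subprocess

proc = subprocess.Popen("cp -i a.txt b.txt", shell=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
if proc.poll() is None:
    # still running, so assume it is waiting at the overwrite prompt
    response = raw_input("overwrite b.txt? (y/n) ")
    stdout_val, _ = proc.communicate(response + '\n')
else:
    stdout_val, _ = proc.communicate()
print stdout_val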
I have a large file that needs to be processed before being fed to another command. I could save the processed data as a temporary file, but I would like to avoid that. I wrote a generator that processes one line at a time, and the following script to feed the result to the external command as input. However, I get an "I/O operation on closed file" exception on the second iteration of the loop:
cmd = ['intersectBed', '-a', 'stdin', '-b', bedfile]
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for entry in my_entry_generator: # <- this is my generator
    output = p.communicate(input='\t'.join(entry) + '\n')[0]
    print output
I read another similar question that uses p.stdin.write, but I still had the same problem.
What did I do wrong?
[edit]
I replaced the last two statements with the following (thanks SpliFF):
output = p.communicate(input='\t'.join(entry) + '\n')
if output[1]: print "error:", output[1]
else: print output[0]
to see if there was any error from the external program, but there wasn't.
I still get the same exception at the p.communicate line.
The communicate method of subprocess.Popen objects can only be called once. What it does is send the input you give it to the process while reading all the stdout and stderr output. And by "all", I mean it waits for the process to exit, so that it knows it has all the output. Once communicate returns, the process no longer exists.
If you want to use communicate, you have to either restart the process in the loop, or give it a single string that is all the input from the generator. If you want to do streaming communication, sending data bit by bit, then you have to not use communicate. Instead, you would need to write to p.stdin while reading from p.stdout and p.stderr. Doing this is tricky, because you can't tell which output is caused by which input, and because you can easily run into deadlocks. There are third-party libraries that can help you with this, like Twisted.
If you want to do this interactively, sending some data and then waiting for and processing the result before sending more data, things get even harder. You should probably use a third-party library like pexpect for that.
Of course, if you can get away with just starting the process inside the loop, that would be a lot easier:
cmd = ['intersectBed', '-a', 'stdin', '-b', bedfile]
for entry in my_entry_generator:
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output = p.communicate(input='\t'.join(entry) + '\n')[0]
    print output
Probably your intersectBed application is exiting with an error, but since you aren't printing any stderr data, you can't see it. Try:
result = p.communicate(input='\t'.join(entry) + '\n')
if result[1]:
    print "error:", result[1]
else:
    print result[0]
I am using the subprocess module to call an external program (plink.exe) to log in to a server, but when I call communicate to read the output, it blocks. The code is below:
import subprocess
process = subprocess.Popen('plink.exe hello@10.120.139.170 -pw 123456'.split(), shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print process.communicate() #block here
I know it blocks because plink.exe is still running; but I need to read the output before the subprocess terminates. Is there any way to do that?
The whole purpose of the communicate method is to wait for the process to finish and return all the output. If you don't want to wait, don't call communicate. Instead, read from the stdout or stderr attribute to read the output.
If the process outputs to both stdout and stderr (and you want to read it separately), you will have to be careful to actually read from both without blocking, or you can deadlock. This is fairly hard on Windows, and you may wish to use the pexpect module instead.
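For example, a minimal sketch that reads stdout line by line instead of calling communicate (it assumes plink produces line-oriented output, and it leaves stderr unread, which can itself block if that buffer fills):
import subprocess

process = subprocess.Popen('plink.exe hello@10.120.139.170 -pw 123456'.split(),
                           shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Read stdout incrementally instead of waiting for the process to exit.
for line in iter(process.stdout.readline, b''):
    print line.rstrip()
    # decide here when you have seen enough and break / process.kill()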
Maybe because "plink.exe" needs to take in input arguments, if you don't pass them, it will block until data are given, you could try adding arguments in method communicate(input)
I faced a similar situation where I had to execute a single command lmstat -a and then get the output of the terminal.
If you just need to run a single command and then read the output, you can use the following code:
import subprocess
Username = 'your_username'
Password = 'your_password'
IP = 'IP_of_system'
Connection_type = '-ssh' #can have values -ssh -telnet -rlogin -raw -serial
p = subprocess.Popen(['plink', Connection_type, '-l', Username, '-pw', Password, IP], \
                     shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate('lmstat -a\nexit\n'.encode())
print(out.decode())