Earlier today this code worked as I wanted it to. Now it doesn't. I want it to run airodump-ng and store the output to a CSV file (via the command-line option). After a few seconds, kill the process.
import os
import signal
import subprocess
import time

def find_networks():
    airodump = subprocess.Popen(["airodump-ng", "-w", "airo", "--output-format", "csv", "mon0"],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    time.sleep(10)
    try:
        os.kill(airodump.pid, signal.SIGTERM)
        if not airodump.poll():
            print "shouldn't have had to do this.."
            airodump.kill()
        if not airodump.poll():
            print "shouldn't have had to do this....."
            airodump.terminate()
        if not airodump.poll():
            print "shouldn't have had to do this........"
    except OSError:
        print "?"
    except UnboundLocalError:
        print "??"
    return_code = airodump.wait()
    print return_code
(the output here is:
shouldn't have had to do this..
shouldn't have had to do this.....
-9)
Earlier it would do exactly what I said (same code). The negative 9 is worrisome; earlier I was getting 1, but the process still DOES die, which is all that's important, though not due to the os.kill statement, which is weird. That's not the big problem though. All I need is that .csv file. With this implementation, the csv file is completely empty: it is created but nothing is ever put in it. If I run subprocess without stdout set to PIPE, the csv file IS created and populated, but I can't do it that way; I have to keep the stdout off the screen.
Is stdout=subprocess.PIPE causing the data to be "written" to a nonexistent csv file in PIPE-land???
It's difficult to reproduce what's happening on your machine with your example code. One thing that might be causing some problems though: You should only use stdout=subprocess.PIPE if you are actually intending to read from the pipe. If you don't, the process will block once it generates enough output to fill up the pipe buffer.
If all you want to do is to hide stdout and stderr, you can do this:
airodump = subprocess.Popen(..., stdout=open("/dev/null", "w"), stderr=open("/dev/null", "w"))
Or better yet:
import os
airodump = subprocess.Popen(..., stdout=open(os.devnull, "w"), stderr=open(os.devnull, "w"))
Or if you are using Python 3, you can use subprocess.DEVNULL.
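For example, a minimal sketch of the same call using subprocess.DEVNULL (available from Python 3.3), assuming the same airodump-ng arguments as above:
import subprocess
# Discard both streams without opening os.devnull manually (Python 3.3+).
airodump = subprocess.Popen(["airodump-ng", "-w", "airo", "--output-format", "csv", "mon0"],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)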
Since you send the output to PIPE, you should explicitly read it out if you want it; you can then write it to a local file. Something like:
airodump = subprocess.Popen(["airodump-ng","-w","airo","--output-format","csv","mon0"],stdout=subprocess.PIPE, stderr=subprocess.PIPE)
... # wait for the command to complete
open("the_file_you_want_to_store_output", "w").write(airodump.stdout.read())
I want to get user input from a subprocess in a new terminal.
import subprocess

additionalBuildArguments = "defaultarg1"
proc = subprocess.Popen(["python", "user_input.py", additionalBuildArguments],
                        creationflags=subprocess.CREATE_NEW_CONSOLE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
additionalBuildArguments = outs or additionalBuildArguments
user_input.py:
import sys
arg = sys.argv[1]
user_input = input(f"Additional build arguments [{arg}] (Push <ENTER> to use these settings):\n")
print(user_input)
As long as I don't set the stdout=subprocess.PIPE and/or the stderr=subprocess.PIPE options, I can enter input. But with these options I can't write any input to the console.
Indeed I need these options to redirect the stdout, to have access to the printed user_input in the parent process.
Does anyone know what's the problem here?
Please note that I do not understand why you want to do this, and feel instinctively that you should not. However, it's perfectly possible: just capture only stdout:
import sys
from subprocess import run
print("Type away: ", end="")
sys.stdout.flush()
r = run(["python", "-c", "print(input())"], capture_output=True, encoding="utf8")
print(f"You entered {r.stdout}")
EDIT: Apparently you are using Windows. Per the docs, your flag is set when shell=True. With shell=True this works for me, but I have no idea whether it will for you:
import sys
from subprocess import run
print("Type away: ", end="")
sys.stdout.flush()
r = run("python -c 'print(input())'", capture_output=True, shell=True, encoding="utf8")
print(f"You entered {r.stdout}")
This can be chained to run in yet a third process, which would be needed to print whilst also capturing stdout from a subprocess. But at this point we are in the realm of very horrible hacks.
A better, but still hacky, solution is to rephrase the problem a bit. You want to spawn a terminal, which apparently you can do, and the user can interact with it correctly, and then you want to get output from that terminal in the spawning code. STDOUT is not the proper channel for this communication. Personally I would structure my code like this:
in spawning code:
generate parametrised script to run in spawned terminal and save it as a temp file
spawn subterminal running the generated script
wait for completion
read temp out file and get data
delete both temp script and temp out file
in generated code:
do as much as possible (you have a full python, after all)
dump output as json to a temporary file
This is still hacky, but it only involves spawning one terminal. Note that I still don't understand why you want to do this, but this should at least work.
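A rough, untested sketch of that structure on Windows follows; the file names, the prompt text, and the JSON layout are all assumptions, not part of the original question:
import json
import os
import subprocess
import sys
import tempfile

# Hypothetical paths for the generated script and its result file.
out_path = os.path.join(tempfile.gettempdir(), "build_args.json")
script_path = os.path.join(tempfile.gettempdir(), "ask_user.py")

# Generate a parametrised script that asks the user and dumps the answer as JSON.
script = (
    "import json\n"
    "user_input = input('Additional build arguments [defaultarg1]: ')\n"
    f"with open({out_path!r}, 'w') as f:\n"
    "    json.dump({'args': user_input}, f)\n"
)
with open(script_path, "w") as f:
    f.write(script)

# Spawn the script in its own console and wait for the user to finish.
subprocess.run([sys.executable, script_path],
               creationflags=subprocess.CREATE_NEW_CONSOLE)

# Read the answer back and clean up both temp files.
with open(out_path) as f:
    additionalBuildArguments = json.load(f)["args"] or "defaultarg1"
os.remove(script_path)
os.remove(out_path)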
In my script, I redirect stdout to a file as below:
with open(logFileName, 'w') as fp:
    proc = Popen([myexe], stdout=fp)
    # proc.wait()  # I don't want to block until the process completes.
I understand that the fp would be closed even before the process completes.
Thus my program does not work as expected.
If I do a wait(), it will work but I don't want to block.
I am wondering what the right way to do this is. Is a separate thread the only way? Surprisingly, I could not find an answer through Google, though this requirement should be a very common one.
Update: I see that it was not working for a different reason. It works fine even though the file object would be closed before the process completes. Still not sure if this is the right way to do it.
Use subprocess:
import subprocess

with open(logFileName, 'w') as fp:
    p = subprocess.Popen([myexe], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, error_msg = p.communicate()  # blocks until the process finishes
    fp.write(out)
I would like to run several commands in the same shell. After some research I found that I could keep a shell open by using the process object returned by Popen. I can then write to its stdin and read from its stdout. I tried implementing it as such:
from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
process.stdin.write('ls -al\n')
out = ' '
while not out == '':
    out = process.stdout.readline().rstrip('\n')
    print out
Not only is my solution ugly, it doesn't work. out is never empty because it hangs on the readline(). How can I successfully end the while loop when there is nothing left to read?
Use iter to read data in real time:
for line in iter(process.stdout.readline, ""):
    print line
If you just want to write to stdin and get the output you can use communicate to make the process end:
process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
out, err = process.communicate('ls -al\n')
Or simply get the output use check_output:
from subprocess import check_output
out = check_output(["ls", "-al"])
The command you're running in a subprocess is sh, so the output you're reading is sh's output. Since you didn't indicate to the shell it should quit, it is still alive, thus its stdout is still open.
You can perhaps write exit to its stdin to make it quit, but be aware that in any case, you get to read things you don't need from its stdout, e.g. the prompt.
Bottom line, this approach is flawed to start with...
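For completeness, a minimal sketch of the "write exit to its stdin" idea mentioned above, assuming you only need the output once the shell is done (and bearing in mind the warning that prompts or other noise may be mixed in):
from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
process.stdin.write('ls -al\n')
process.stdin.write('exit\n')   # tell the shell to quit so its stdout reaches EOF
process.stdin.close()
for line in iter(process.stdout.readline, ''):
    print line.rstrip('\n')
process.wait()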
I am writing a script in which the external system command may sometimes require user input, and I am not able to handle that properly.
The example below shows this problem using the "cp" command ("cp" is used only to illustrate the problem; I am actually calling a different exe which may similarly prompt for a user response in some scenarios). In this example there are two files present on disk, and when the user tries to copy file1 to file2, a confirmation message comes up.
proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,)
stdout_val, stderr_val = proc.communicate()
print stdout_val
b.txt?
proc.communicate("y")
Now in this example, if I read only stdout/stderr and print it, and later try to write "y" or "n" based on the user's input, I get an error that the channel is closed.
Can someone please help me achieve this behavior in Python, such that I can print stdout first, then take user input and write to stdin afterwards?
I found another solution (threading) in Non-blocking read on a subprocess.PIPE in python, but I am not sure whether it would help. It does appear to print the question from the cp command; I have modified the code below, but I am not sure how to write to stdin in the threading code.
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
    from Queue import Queue, Empty  # Python 2
except ImportError:
    from queue import Queue, Empty  # Python 3

ON_POSIX = 'posix' in sys.builtin_module_names

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

p = Popen(['cp', '-i', 'a.txt', 'b.txt'], stdin=PIPE, stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.start()

try:
    line = q.get_nowait()
except Empty:
    print('no output yet')
else:
    pass
Popen.communicate will run the subprocess to completion, so you can't call it more than once. You could use the stdin and stdout attributes directly, although that's risky as you could deadlock if the process uses block buffering or the buffers fill up:
stdout_val = proc.stdout.readline()
print stdout_val
proc.stdin.write('y\n')
As there is a risk of deadlock and because this may not work if the process uses block buffering, you would do well to consider using the pexpect package instead.
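A minimal pexpect sketch for the cp example above; note the exact prompt text matched by expect() is an assumption, not something pexpect defines:
import pexpect

child = pexpect.spawn('cp -i a.txt b.txt')
# Wait for either the overwrite prompt or the end of output.
index = child.expect(['overwrite', pexpect.EOF])
if index == 0:
    answer = raw_input("cp is asking for confirmation (y/n): ")
    child.sendline(answer)
    child.expect(pexpect.EOF)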
I don't have a technical answer to this question. More of just a solution. It has something to do with the way the process waits for the input, and once you communicate with the process, a None input is enough to close the process.
For your cp example, what you can do is check the return code immediately with proc.poll(). If the return value is None, you might assume it is trying to wait for input and can ask your user a question. You can then pass the response to the process via proc.communicate(response). It will then pass the value and proceed with the process.
Maybe someone else can chime in with a more technical reason why an initial communicate with a None value closes the process.
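A rough sketch of that poll-then-communicate pattern, using the same cp example; this only illustrates the approach described above and is not a tested fix:
import subprocess

proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if proc.poll() is None:
    # No return code yet, so assume cp is waiting for confirmation.
    response = raw_input("cp is asking for confirmation (y/n): ")
    stdout_val, _ = proc.communicate(response + "\n")
    print stdout_val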
In a script, I want to run a .exe with some command-line parameters such as "-a", and then
redirect the standard output of the program to a file.
How can I implement that?
You can redirect directly to a file using subprocess.
import subprocess
with open('output.txt', 'w') as output_f:
    p = subprocess.Popen('Text/to/execute with-arg',
                         stdout=output_f,
                         stderr=output_f)
Easiest is os.system("the.exe -a >thefile.txt"), but there are many other ways, for example with the subprocess module in the standard library.
You can do something like this
e.g. to read output of ls -l (or any other command)
p = subprocess.Popen(["ls","-l"],stdout=subprocess.PIPE)
print p.stdout.read() # or put it in a file
You can do a similar thing for stderr/stdin.
But as Alex mentioned, if you just want it in a file, just redirect the command's output to a file.
If you just want to run the executable and wait for the results, Anurag's solution is probably the best. I needed to respond to each line of output as it arrived, and found the following worked:
1) Create an object with a write(text) method. Redirect stdout to it (sys.stdout = obj). In your write method, deal with the output as it arrives.
2) Run a method in a separate thread with something like the following code:
p = subprocess.Popen('Text/to/execute with-arg', stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=False)
while p.poll() is None:
    print p.stdout.readline().strip()
Because you've redirected stdout, PIPE will send the output to your write method line by line. If you're not certain you're going to get line breaks, read(amount) works too, I believe.
3) Remember to redirect stdout back to the default: sys.stdout = sys.__stdout__
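A minimal sketch of the object described in step 1; the class name and what it does with each chunk are assumptions, not part of this answer:
import sys

class OutputHandler(object):
    def write(self, text):
        # Called for every chunk printed while sys.stdout is redirected;
        # deal with the output as it arrives.
        if text.strip():
            sys.__stdout__.write("got: " + text)

    def flush(self):
        pass

sys.stdout = OutputHandler()
# ... run the polling loop from step 2 in a separate thread ...
sys.stdout = sys.__stdout__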
Although the title (.exe) sounds like it's a problem on Windows, I have to share that the accepted answer (subprocess.Popen() with stdout/stderr arguments) didn't work for me on Mac OS X (10.8) with Python 2.7.
I had to use subprocess.check_output() (python 2.7 and above) to make it work. Example:
import subprocess

cmd = 'ls -l'
out = subprocess.check_output(cmd, shell=True)
with open('my.log', 'w') as f:
    f.write(out)
Note that this solution writes all the accumulated output out when the program finishes.
If you want to monitor the log file during the run, you may want to try something else.
In my own case, I only cared about the end result.
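One such alternative, as a hedged sketch: stream the child's output line by line and flush it into the log file as it arrives (the command and file name are just examples):
import subprocess

with open('my.log', 'w') as f:
    p = subprocess.Popen('ls -l', shell=True, stdout=subprocess.PIPE)
    for line in iter(p.stdout.readline, ''):
        f.write(line)
        f.flush()   # make each line visible in the log immediately
    p.wait()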