Simulate Ctrl-C keyboard interrupt in Python while working in Linux

I am working on some scripts (in the company I work in) that are loaded/unloaded into hypervisors to fire a piece of code when an event occurs. The only way to actually unload a script is to hit Ctrl-C. I am writing a function in Python that automates the process.
As soon as it sees the string "done" in the output of the program, it should kill the vprobe.
I am using subprocess.Popen to execute the command:
lineList = buff.readlines()
cmd = "vprobe /vprobe/myhello.emt"
p = subprocess.Popen(args=cmd, shell=True, stdout=buff, universal_newlines=True, preexec_fn=os.setsid)
while not re.search("done", lineList[-1]):
    print "waiting"
os.kill(p.pid, signal.CTRL_C_EVENT)
As you can see, I am writing the output to the buff file descriptor, which is opened in read+write mode. I check the last line; if it has 'done', I kill the process. Unfortunately, CTRL_C_EVENT is only valid on Windows.
What can I do for Linux?

I think you can just send the Linux equivalent, signal.SIGINT (the interrupt signal).
(Edit: I used to have something here discouraging the use of this strategy for controlling subprocesses, but on more careful reading it sounds like you've already decided you need control-C in this specific case... So, SIGINT should do it.)
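For example, a minimal sketch (sleep here is a hypothetical stand-in for the vprobe invocation; since the question's Popen call uses preexec_fn=os.setsid, the signal is delivered to the whole process group with os.killpg):
import os
import signal
import subprocess
import time

# Hypothetical long-running child, started in its own process group
# (mirroring the preexec_fn=os.setsid from the question).
p = subprocess.Popen(['sleep', '60'], preexec_fn=os.setsid)
time.sleep(1)  # give it a moment to start

# Deliver SIGINT (the Ctrl-C signal) to the child's process group.
os.killpg(os.getpgid(p.pid), signal.SIGINT)
p.wait()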

In Linux, a Ctrl-C keyboard interrupt can be sent programmatically to a process using the Popen.send_signal(signal.SIGINT) method. For example:
import subprocess
import signal
..
process = subprocess.Popen(..)
..
process.send_signal(signal.SIGINT)
..
Don't use Popen.communicate() for long-running commands; it blocks until the process exits.
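A fuller, runnable version of that sketch (sleep is a hypothetical stand-in for the real command):
import signal
import subprocess
import time

process = subprocess.Popen(['sleep', '60'])
time.sleep(1)                        # let the child get going
process.send_signal(signal.SIGINT)   # the Linux equivalent of Ctrl-C
process.wait()                       # reap the child without blocking on communicate()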

Maybe I misunderstand something, but the way you do it, it is difficult to get the desired result.
Whatever buff is, you query it first, then use it in the context of Popen(), and then you hope that lineList fills itself up by magic.
What you probably want is something like
import os
import re
import signal
import subprocess

logfile = open("mylogfile", "a")
p = subprocess.Popen(['vprobe', '/vprobe/myhello.emt'], stdout=subprocess.PIPE,
                     universal_newlines=True, preexec_fn=os.setsid)
for line in p.stdout:
    logfile.write(line)
    if re.search("done", line):
        break
    print "waiting"
os.kill(p.pid, signal.SIGINT)
This gives you a pipe end fed by your vprobe script which you can read out linewise and act appropriately upon the found output.

Related

How to get live output with subprocess in Python

I am trying to run a python file that prints something, waits 2 seconds, and then prints again. I want to catch these outputs live from my python script to then process them. I tried different things but nothing worked.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
while True:
    output = process.stdout.readline()
    if process.poll() is not None and output == '':
        break
    if output:
        print(output.strip())
I'm at this point, but it doesn't work. It waits until the code finishes and then prints all the output at once.
I just need to run a python file and get live output from it. If you have other ideas for doing it without using the print function, let me know; just know that I have to run the file separately. I thought of the easiest way possible, but from what I'm seeing, it can't be done.
There are three layers of buffering here, and you need to limit all three of them to guarantee you get live data:
1. Use the stdbuf command (on Linux) to wrap the subprocess execution (e.g. run ['stdbuf', '-oL'] + cmd instead of just cmd), or (if you have the ability to do so) alter the program itself to either explicitly change the buffering on stdout (e.g. using setvbuf for C/C++ code to switch stdout globally to line-buffered mode, rather than the default block buffering it uses when outputting to a non-tty) or to insert flush statements after critical output (e.g. fflush(stdout); for C/C++, fileobj.flush() for Python, etc.). Without that, everything is stuck in the user-mode buffers of the sub-process.
2. Add bufsize=0 to the Popen arguments (probably not needed since you don't send anything to stdin, but harmless) so it unbuffers all piped handles. If the Popen is in text=True mode, switch to bufsize=1 (which is line-buffered, rather than unbuffered).
3. Add flush=True to the print arguments (if you're connected to a terminal, the line-buffering will flush it for you, so it's only if stdout is piped to a file that this will matter), or explicitly call sys.stdout.flush().
Between the three of these, you should be able to guarantee no data is stuck waiting in user-mode buffers; if at least one line has been output by the sub-process, it will reach you immediately, and any output triggered by it will also appear immediately. Item #1 is the hardest in most cases (when you can't use stdbuf, or the process reconfigures its own buffering internally and undoes the effect of stdbuf, and you can't modify the process executable to fix it); you have complete control over #2 and #3, but #1 may be outside your control.
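Putting the three layers together, a minimal sketch (./worker is a hypothetical chatty program; stdbuf must be on the PATH, so this is Linux-only):
import subprocess

cmd = ['./worker']  # hypothetical program; for a Python child, 'python3 -u' replaces stdbuf

process = subprocess.Popen(
    ['stdbuf', '-oL'] + cmd,    # layer 1: force line-buffered stdout in the child
    stdout=subprocess.PIPE,
    bufsize=1,                  # layer 2: line-buffered pipe (requires text mode)
    text=True,
)
for line in process.stdout:
    print(line, end='', flush=True)  # layer 3: flush our own stdout as we go
process.wait()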
This is the code I use for that same purpose:
import subprocess

def run_command(command, **kwargs):
    """Run a command while printing the live output"""
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        **kwargs,
    )
    while True:  # Could be more pythonic with := in Python 3.8+
        line = process.stdout.readline()
        if not line and process.poll() is not None:
            break
        print(line.decode(), end='')
An example of usage would be (Path comes from pathlib):
from pathlib import Path

run_command(['git', 'status'], cwd=Path(__file__).parent.absolute())

Keeping a pipe to a process open

I have an app that reads in stuff from stdin and, after a newline, returns results to stdout.
A simple (stupid) example:
$ app
Expand[(x+1)^2]<CR>
x^2 + 2*x + 1
100 - 4<CR>
96
Opening and closing the app requires a lot of initialization and clean-up (it's an interface to a Computer Algebra System), so I want to keep this to a minimum.
I want to open a pipe in Python to this process, write strings to its stdin and read out the results from stdout. Popen.communicate() doesn't work for this, as it closes the file handle, requiring the pipe to be reopened.
I've tried something along the lines of this related question:
Communicate multiple times with a process without breaking the pipe? but I'm not sure how to wait for the output. It is also difficult to know a priori how long the app will take to process the input at hand, so I don't want to make any assumptions. I guess most of my confusion comes from this question: Non-blocking read on a subprocess.PIPE in python, where it is stated that mixing high- and low-level functions is not a good idea.
EDIT:
Sorry that I didn't give any code before, got interrupted. This is what I've tried so far and it seems to work, I'm just worried that something goes wrong unnoticed:
from subprocess import Popen, PIPE

pipe = Popen(["MathPipe"], stdin=PIPE, stdout=PIPE)
expressions = ["Expand[(x+1)^2]", "Integrate[Sin[x], {x,0,2*Pi}]"]  # ...

for expr in expressions:
    pipe.stdin.write(expr)
    while True:
        line = pipe.stdout.readline()
        if line != '':
            print line
        # output of MathPipe is always terminated by ';'
        if ";" in line:
            break
Potential problems with this?
Using subprocess, you can't do this reliably. You might want to look at using the pexpect library. That won't work on Windows - if you're on Windows, try winpexpect.
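For instance, a minimal pexpect sketch against the MathPipe example (the ';' terminator comes from the question; everything else here is an assumption):
import pexpect

child = pexpect.spawn('MathPipe', encoding='utf-8')
for expr in ["Expand[(x+1)^2]", "Integrate[Sin[x], {x,0,2*Pi}]"]:
    child.sendline(expr)
    child.expect(';')           # block until the terminator shows up
    print(child.before + ';')   # everything up to (and including) the ';'
child.close()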
Also, if you're trying to do mathematical stuff in Python, check out SAGE. They do a lot of work on interfacing with other open-source maths software, so there's a chance they've already done what you're trying to do.
Perhaps you could pass stdin=subprocess.PIPE as an argument to subprocess.Popen. This will make the process' stdin available as a general file-like object:
import sys, subprocess
proc = subprocess.Popen(["mathematica <args>"], stdin=subprocess.PIPE,
                        stdout=sys.stdout, shell=True)
proc.stdin.write("Expand[ (x-1)^2 ]") # Write whatever to the process
proc.stdin.flush() # Ensure nothing is left in the buffer
proc.terminate() # Kill the process
This directs the subprocess' output directly to your python process' stdout. If you need to read the output and do some editing first, that is possible as well. Check out http://docs.python.org/library/subprocess.html#popen-objects.

Python: run multiple commands at the same time

Prior to this, I ran two commands in a for loop, like
for x in $set:
    command
In order to save time, I want to run these two commands at the same time, like the parallel method in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise because of the Global Interpreter Lock.
I think the best way to do this is to use the subprocess module and open each command with its own stdout.
import select
import subprocess

processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p

while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print fd.read()
        if process.poll() is not None:
            print "Process {0} returned with code {1}".format(process.pid, process.returncode)
            del processes[fd]
You basically have to use select to see which file descriptors are ready and you have to check their returncode to see if doing a "read" caused them to exit. Processes basically go into a wait state until their stdout is closed. If you would like to do some things while you're waiting, you can put a timeout on select.select() so you'll stop waiting after so long. You can test the length of rfds and if it is 0 then you know that the timeout happened.
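For example, the timeout variant of the loop above might look like this (the 5 seconds is an arbitrary choice):
import select

# Wait at most 5 seconds for any child's stdout to become readable.
rfds, _, _ = select.select(list(processes.keys()), [], [], 5.0)
if len(rfds) == 0:
    pass  # timeout: no child produced output; do other work here, then loop again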
The twisted or select module is probably what you're after.
If all you want to do is run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
might work better. Alternately, a Makefile like you said.
Look at the threading module.

Python - capture Popen stdout AND display on console?

I want to capture stdout from a long-ish running process started via subprocess.Popen(...) so I'm using stdout=PIPE as an arg.
However, because it's a long-running process, I also want to send the output to the console (as if I hadn't piped it) to give the user of the script an idea that it's still working.
Is this at all possible?
Cheers.
The buffering your long-running sub-process is probably performing will make your console output jerky and very bad UX. I suggest you consider instead using pexpect (or, on Windows, wexpect) to defeat such buffering and get smooth, regular output from the sub-process. For example (on just about any unix-y system, after installing pexpect):
>>> import sys, pexpect
>>> child = pexpect.spawn('/bin/bash -c "echo ba; sleep 1; echo bu"', logfile=sys.stdout); x=child.expect(pexpect.EOF); child.close()
ba
bu
>>> child.before
'ba\r\nbu\r\n'
The ba and bu will come with the proper timing (about a second between them). Note the output is not subject to normal terminal processing, so the carriage returns are left in there -- you'll need to post-process the string yourself (just a simple .replace!-) if you need \n as end-of-line markers (the lack of processing is important just in case the sub-process is writing binary data to its stdout -- this ensures all the data's left intact!-).
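For example, that post-processing is just a one-liner (assuming text output, as in the session above):
text = child.before.replace('\r\n', '\n')  # normalize terminal CRLF to plain \n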
S. Lott's comment points to Getting realtime output using subprocess and Real-time intercepting of stdout from another process in Python
I'm curious that Alex's answer here is different from his answer 1085071.
My simple little experiments with the answers in the two other referenced questions have given good results...
I went and looked at wexpect as per Alex's answer above, but I have to say that reading the comments in the code did not leave me with a very good feeling about using it.
I guess the meta-question here is when will pexpect/wexpect be one of the Included Batteries?
Can you simply print it as you read it from the pipe?
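Something along these lines, say (a sketch; cmd is a placeholder for whatever command you are running):
import subprocess
import sys

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
captured = []
for line in proc.stdout:
    sys.stdout.write(line)   # show it on the console as it arrives
    captured.append(line)    # and keep a copy for later processing
proc.wait()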
Inspired by the pty.openpty() suggestion somewhere above; tested on Python 2.6, Linux. Publishing this since it took a while to make it work properly, without buffering...
def call_and_peek_output(cmd, shell=False):
    import os, pty, subprocess, sys
    master, slave = pty.openpty()
    p = subprocess.Popen(cmd, shell=shell, stdin=None, stdout=slave, close_fds=True)
    os.close(slave)
    line = ""
    while True:
        try:
            ch = os.read(master, 1)
        except OSError:
            # We get this exception when the spawned process closes all references
            # to the pty descriptor which we passed it to use for stdout
            # (typically when it and its children exit)
            break
        line += ch
        sys.stdout.write(ch)
        if ch == '\n':
            yield line
            line = ""
    if line:
        yield line

    ret = p.wait()
    if ret:
        raise subprocess.CalledProcessError(ret, cmd)

for l in call_and_peek_output("ls /", shell=True):
    pass
Alternatively, you can pipe your process into tee and capture only one of the streams.
Something along the lines of sh -c 'process interesting stuff' | tee /dev/stderr.
Of course, this only works on Unix-like systems.
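A sketch of that approach from Python (the command string is a placeholder; Unix only, since it relies on the shell and /dev/stderr):
import subprocess

# tee duplicates the stream: one copy goes to stderr (and thus the console),
# the other comes back to us through the stdout pipe.
output = subprocess.check_output("some_command | tee /dev/stderr", shell=True)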

Interact with a Windows console application via Python

I am using Python 2.5 on Windows. I wish to interact with a console process via Popen. I currently have this small snippet of code:
p = Popen( ["console_app.exe"], stdin=PIPE, stdout=PIPE )
# issue command 1...
p.stdin.write( 'command1\n' )
result1 = p.stdout.read() # <---- we never return here
# issue command 2...
p.stdin.write( 'command2\n' )
result2 = p.stdout.read()
I can write to stdin but cannot read from stdout. Have I missed a step? I don't want to use p.communicate("command")[0] as it terminates the process, and I need to interact with the process dynamically over time.
Thanks in advance.
Your problem here is that you are trying to control an interactive application.
stdout.read() will continue reading until it has reached the end of the stream, file or pipe. Unfortunately, in the case of an interactive program, the pipe is only closed when the program exits; which is never, if the command you sent it was anything other than "quit".
You will have to revert to reading the output of the subprocess line-by-line using stdout.readline(), and you'd better have a way to tell when the program is ready to accept a command, and when the command you issued to the program is finished so you can supply a new one. In the case of a program like cmd.exe, even readline() won't suffice, as the line that indicates a new command can be sent is not terminated by a newline, so you will have to analyze the output byte-by-byte. Here's a sample script that runs cmd.exe, looks for the prompt, then issues a dir and then an exit:
from subprocess import *
import re

class InteractiveCommand:
    def __init__(self, process, prompt):
        self.process = process
        self.prompt = prompt
        self.output = ""
        self.wait_for_prompt()

    def wait_for_prompt(self):
        while not self.prompt.search(self.output):
            c = self.process.stdout.read(1)
            if c == "":
                break
            self.output += c

        # Now we're at a prompt; clear the output buffer and return its contents
        tmp = self.output
        self.output = ""
        return tmp

    def command(self, command):
        self.process.stdin.write(command + "\n")
        return self.wait_for_prompt()

p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
prompt = re.compile(r"^C:\\.*>", re.M)
cmd = InteractiveCommand(p, prompt)

listing = cmd.command("dir")
cmd.command("exit")
print listing
If the timing isn't important, and interactivity for a user isn't required, it can be a lot simpler just to batch up the calls:
from subprocess import *
p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
p.stdin.write("dir\n")
p.stdin.write("exit\n")
print p.stdout.read()
Have you tried forcing Windows line endings?
i.e.
p.stdin.write( 'command1\r\n' )
p.stdout.readline()
UPDATE:
I've just checked the solution on Windows cmd.exe, and it works with readline(). But it has one problem: Popen's stdout.readline() blocks. So if the app ever returns something without an end-of-line, your app will be stuck forever.
But there is a workaround for that; check out: http://code.activestate.com/recipes/440554/
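One common shape for such a workaround (a sketch, not necessarily the recipe's exact code) is a reader thread feeding a queue, which the main loop polls with a timeout:
import subprocess
import threading
from Queue import Queue, Empty  # Python 3: from queue import Queue, Empty

def enqueue_output(pipe, queue):
    for line in iter(pipe.readline, ''):
        queue.put(line)
    pipe.close()

p = subprocess.Popen(['some_app'], stdout=subprocess.PIPE)  # some_app is a placeholder
q = Queue()
t = threading.Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True  # the reader thread dies with the program
t.start()

try:
    line = q.get(timeout=0.1)  # returns promptly even if the app printed nothing
except Empty:
    line = None                # no output yet; the main loop can keep going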
I think you might want to try to use readline() instead?
Edit: sorry, misunderstood.
Maybe this question can help you?
Is it possible that the console app is buffering its output in some way so that it is only being sent to stdout when the pipe is closed? If you have access to the code for the console app, maybe sticking a flush after a batch of output data might help?
Alternatively, is it actually writing to stderr instead of stdout for some reason?
Just looked at your code again and thought of something else: I see you're sending in "command\n". Could the console app simply be waiting for a carriage return character instead of a newline? Maybe the console app is waiting for you to submit the command before it produces any output.
Had the exact same problem here. I dug into the DrPython source code and stole the wx.Execute() solution, which is working fine, especially if your script is already using wx. I never found a correct solution on the Windows platform though...
