Redirect the standard streams of a subprocess in Python

I'm building an online coding platform with Python as the backend. I'm using subprocess.Popen
to run the user's code, and now I want to interact with it. One way I know of is using a pipe and dup2,
but the problem is that this changes the standard I/O in the context of my whole backend application.
I want the standard streams to change for the subprocess only.
Are there any args I can pass to subprocess.Popen so that the main Python app can access the standard streams of its subprocess?
Example:
import subprocess
args = ['py', path + 't.py']
p = subprocess.Popen(args)
t.py is a Hello World program. The subprocess will print Hello World in the terminal, but I want to redirect it to another file descriptor so that I can capture the output in a variable and send it over the network. The same goes for input.
UPDATE:
I did
r, w = os.pipe()
p = subprocess.Popen(args, stdout=w)
For this I get the error:
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
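For reference, the raw os.pipe() approach can work if the parent closes its copy of the write end after spawning the child and reads from the read end. A minimal sketch, assuming the same args list as above:
import os
import subprocess

r, w = os.pipe()
p = subprocess.Popen(args, stdout=w)
os.close(w)  # the parent must close its copy of the write end
with os.fdopen(r) as reader:
    output = reader.read()  # returns once the child closes its stdout
p.wait()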

subprocess.Popen provides named parameters for this kind of thing, using PIPE as you mentioned:
from subprocess import Popen, PIPE, STDOUT

args = ['py', path + 't.py']
# stderr=STDOUT merges the child's errors into its stdout stream
p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
# BufferedWriter for input
input_stream = p.stdin
# BufferedReader for output
output_stream = p.stdout
print(output_stream.read())
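Note that output_stream.read() blocks until the child closes its stdout. For a one-shot exchange it is usually safer to let communicate() write stdin and drain stdout in one call. A minimal sketch, assuming t.py reads a line and prints a result:
from subprocess import Popen, PIPE, STDOUT

p = Popen(['py', path + 't.py'], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
out, _ = p.communicate(input=b'some input\n')  # write stdin, read until EOF
result = out.decode()  # captured output, ready to send over the network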

Related

Python subprocess.Popen: redirect `STDERR` only and keep `STDOUT`

Setup
I have a little Runner program that prints some info to sys.stderr (for logs, unhandled exceptions, etc.) and to sys.stdout (some useful info about the program, maybe interaction with the user or the like):
import sys
import time

for i in range(1, 4):
    sys.stdout.write(f"This is text #{i} to STDOUT\n")
    sys.stderr.write(f"This is text #{i} to STDERR\n")
    time.sleep(5)
And I have a Main program that starts Runner in a new window with subprocess.Popen and prints its output:
import subprocess

cmd = "python runner.py"
proc = subprocess.Popen(cmd,
                        stdout=subprocess.PIPE,  # Problem line
                        stderr=subprocess.PIPE,
                        creationflags=subprocess.CREATE_NEW_CONSOLE
                        )
proc.wait()
out, err = proc.communicate()
if out:
    print(f"[{out.decode('utf-8')}]")
if err:
    print(f"[{err.decode('utf-8')}]")
So the resulting output is:
[This is text #1 to STDOUT
This is text #2 to STDOUT
This is text #3 to STDOUT
]
[This is text #1 to STDERR
This is text #2 to STDERR
This is text #3 to STDERR
]
Why Popen?
I need to run several Runners in parallel and wait for them later, but subprocess.check_output or subprocess.run does not allow that (or am I wrong?)
Why new window?
I want to see the prints separately for every Runner, each in its own window
What I want
I want to redirect stderr only and keep stdout in the opened window, so the Main program will only output errors from the subprocess:
[This is text #1 to STDERR
This is text #2 to STDERR
This is text #3 to STDERR
]
That would be very useful for debugging new Runner features...
What I tried
When subprocess.Popen gets the stderr=subprocess.PIPE param and stdout=None (the default), stdout is blocked:
it doesn't show in the Runner window
and proc.communicate returns None
So the stdout prints just disappear... I even tried passing sys.stdout to the stdout= param (for output not in the new window but in the current console), but it throws a Bad file descriptor error:
[Traceback (most recent call last):
  File "C:\Users\kirin\source\repos\python_tests\runner.py", line 5, in <module>
    sys.stdout.write(f"This is text #{i} to STDOUT\n")
OSError: [Errno 9] Bad file descriptor
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1251'>
OSError: [Errno 9] Bad file descriptor
]
(btw, this print was successfully redirected from Runner to Main)
Need help...
Here is a solution that meets the requirements of the 'What I want' section:
main.py:
import subprocess
command = ["python", "runner.py"]
process = subprocess.Popen(command, shell=False, text=True, stderr=subprocess.PIPE, creationflags=subprocess.CREATE_NEW_CONSOLE)
process.wait()
stderr = process.stderr.read()
print(stderr, end="")
runner.py contains the code mentioned in the question.
Argument shell=False is used to run python runner.py directly (i.e. not as a shell command); text=True makes subprocess open process.stderr in text mode (instead of binary mode).
When running this, output from runner.py sent to stdout appears in the new window while output sent to stderr is captured in variable stderr (and also printed in main.py's window).
If runner.py's output is to be processed right away as it is produced (i.e. without waiting for the process to finish first), the following code may be used:
main.py:
import subprocess
command = ["python", "runner.py"]
process = subprocess.Popen(command, shell=False, text=True, bufsize=1, stderr=subprocess.PIPE, creationflags=subprocess.CREATE_NEW_CONSOLE)
stderr = ""
while True:
    line = process.stderr.readline()
    if line == "":
        break  # EOF
    stderr += line
    print(line, end="")
runner.py (modified to illustrate the difference):
import sys
import time

for i in range(1, 4):
    sys.stdout.write(f"This is text #{i} to STDOUT\n")
    sys.stderr.write(f"This is text #{i} to STDERR\n")
    time.sleep(1)
Argument bufsize=1 is used here to get line-buffered output from runner.py's stderr.
Successfully tested on Windows 10 21H2 + Python 3.10.4.
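As for the "Why Popen?" point in the question: several Runners can indeed be started first and waited on later. A minimal sketch, under the same Windows assumptions:
import subprocess

procs = [
    subprocess.Popen(["python", "runner.py"], text=True,
                     stderr=subprocess.PIPE,
                     creationflags=subprocess.CREATE_NEW_CONSOLE)
    for _ in range(3)  # three Runners, each in its own console window
]
for proc in procs:
    proc.wait()
    print(proc.stderr.read(), end="")  # errors only, as above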

How to keep ssh session open after logging in using subprocess.popen?

I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result
This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I am able to do simply as follows:
p = subprocess.Popen("ssh abc#xyz './profile;**<./a.py;etc>**'", stdout=subprocess.PIPE, shell=True)
However, once done, it exits, and the next time I want to execute a script on the above server, I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other way) where the ssh connection remains open after loading the profile, so that users don't have to wait 60-90 seconds whenever anything is to be executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object should keep the session open.
You could send a dummy command like date to prevent the timeout as well.
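For example, with asyncssh a single connection can host one long-lived remote shell, so the profile is loaded once and its environment persists for later commands. A minimal sketch, assuming host xyz and user abc as in the question:
import asyncio
import asyncssh  # pip install asyncssh

async def main():
    async with asyncssh.connect('xyz', username='abc') as conn:
        # one long-lived remote shell: the profile is loaded only once
        async with conn.create_process('bash') as shell:
            shell.stdin.write('. ./profile\n')  # slow one-time setup
            shell.stdin.write('./a.py\n')       # later commands reuse the environment
            shell.stdin.write('exit\n')
            print(await shell.stdout.read())    # read until the shell exits

asyncio.run(main())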
You have to construct an ssh command like ['ssh', '-T', 'host_user_name@host_address'] and then follow the code below.
Code:
from subprocess import Popen, PIPE
ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port, ssh_conn should be as follows:
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']
commands = """
cd Documents/
ls -l
cat test.txt
"""
with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)
    # or, instead of communicate(), write the commands one at a time;
    # each command must end with a newline:
    # p.stdin.write('command_1\n')
    # add as many commands as you want
    # p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: You can add as many commands to the commands string as you want.
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex
shell_command = "ssh user#address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)
# Do python stuff here
proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on
proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which allows you to forget about that newline (and also gain all the benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that a) read is blocking, so if there's no output it'll block forever, and b) it only returns what was available to read from stdout at that time, so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for switching a read to non-blocking and for polling for events, respectively.
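A minimal sketch of the select-based polling approach (POSIX only; on Windows select works only on sockets):
import select
from subprocess import Popen, PIPE

proc = Popen(['ssh', 'user@address'], stdin=PIPE, stdout=PIPE)
proc.stdin.write(b'ls\n')  # commands and output are bytes here
proc.stdin.flush()
# wait up to 1 second for output so the read below cannot hang forever
ready, _, _ = select.select([proc.stdout], [], [], 1.0)
if ready:
    print(proc.stdout.read1(4096))  # read only what is available right now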
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess

# Open a shell prompt
current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Non-blocking stdout and stderr reading
fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread

def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process hasn't exited
        try:
            output = process.stdout.read()  # Read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)
    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))

Thread(target=check_output, args=(current_process,), daemon=True).start()  # Start checking for new text in stdout and stderr
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc@xyz './profile'"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # Write the input to STDIN
    process.stdin.flush()  # Run the command
send_command(current_process, "echo Hello")
EDIT
I looked into the minimum Python requirements for the given examples and found out that the Thread daemon argument might not work on Python 2.7, which you asked about in the tags.
If you are sure to exit the Thread before exiting, you can ignore daemon and use Thread() which works on 2.7. (You could for example use atexit and terminate the process)
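A minimal sketch of that atexit variant, reusing current_process from the example above:
import atexit

# terminate the child at interpreter exit so the reader thread's
# 'while process.poll() is None' loop can finish on its own
atexit.register(current_process.terminate)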
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK

Python subprocess: pipe an image blob to imagemagick shell command

I have an image in memory and I wish to execute the convert command of ImageMagick using Python's subprocess. While this line works well in Ubuntu's terminal:
cat image.png | convert - new_image.jpg
This piece of code doesn't work using Python:
from PIL import Image
from subprocess import Popen, PIPE

jpgfile = Image.open('image.png')
proc = Popen(['convert', '-', 'new_image.jpg'], stdin=PIPE, shell=True)
print proc.communicate(jpgfile.tostring())
I've also tried reading the image as a regular file without using PIL, and I've tried switching between subprocess methods and different ways to write to stdin.
The best part is that nothing happens, but I don't get a real error either. When printing stdout I can see the ImageMagick help text in the terminal, followed by the following:
By default, the image format of `file' is determined by its magic
number. To specify a particular image format, precede the filename
with an image format name and a colon (i.e. ps:image) or specify the
image type as the filename suffix (i.e. image.ps). Specify 'file' as
'-' for standard input or output. (None, None)
Maybe there's a hint in here I'm not getting.
Please point me in the right direction. I'm new to Python, but from my experience with PHP this should be an extremely easy task, or so I hope.
Edit:
This is the solution I eventually used to process a PIL image object without saving a temporary file. Hope it helps someone. (In the example I'm reading the file from the local drive, but the idea is to read an image from a remote location.)
from StringIO import StringIO
from PIL import Image
from subprocess import Popen, PIPE

out = StringIO()
jpgfile = Image.open('image.png')
jpgfile.save(out, 'png', quality=100)
out.seek(0)
proc = Popen(['convert', '-', 'image_new.jpg'], stdin=PIPE)
proc.communicate(out.read())
It is not subprocess that is causing the issue; it is what you are passing to ImageMagick that is incorrect: tostring() returns raw pixel data, not an encoded image file, so convert cannot determine the format. If you actually wanted to replicate the Linux command you could pipe from one process to another:
from subprocess import Popen, PIPE

proc = Popen(['cat', 'image.jpg'], stdout=PIPE)
p2 = Popen(['convert', '-', 'new_image.jpg'], stdin=proc.stdout)
proc.stdout.close()  # allow cat to receive SIGPIPE if convert exits
out, err = p2.communicate()
print(out)
When you pass a list of args you don't need shell=True; if you wanted to use shell=True you would pass a single string:
from subprocess import check_call
check_call('cat image.jpg | convert - new_image.jpg', shell=True)
Generally I would avoid shell=True. This answer outlines what exactly shell=True does.
You can also pass a file object to stdin:
with open('image.jpg', 'rb') as jpgfile:  # binary mode for image data
    proc = Popen(['convert', '-', 'new_image.jpg'], stdin=jpgfile)
    out, err = proc.communicate()
    print(out)
But as there is no output when the code runs successfully, you can use check_call, which will raise a CalledProcessError on a non-zero exit status that you can catch to take the appropriate action:
from subprocess import check_call, CalledProcessError

with open('image.jpg', 'rb') as jpgfile:
    try:
        check_call(['convert', '-', 'new_image.jpg'], stdin=jpgfile)
    except CalledProcessError as e:
        print(e.message)
If you wanted to write to stdin using communicate you could also pass the file contents using .read:
with open('image.jpg', 'rb') as jpgfile:
    proc = Popen(['convert', '-', 'new_image.jpg'], stdin=PIPE)
    proc.communicate(jpgfile.read())
If you don't want to store the image on disk, use a tempfile:
import tempfile
import requests
r = requests.get("http://www.reptileknowledge.com/images/ball-python.jpg")
out = tempfile.TemporaryFile()
out.write(r.content)
out.seek(0)
from subprocess import check_call
check_call(['convert', '-', 'new_image.jpg'], stdin=out)
Using a cStringIO.StringIO object:
import cStringIO
import requests

r = requests.get("http://www.reptileknowledge.com/images/ball-python.jpg")
out = cStringIO.StringIO(r.content)

from subprocess import Popen, PIPE

p = Popen(['convert', '-', 'new_image.jpg'], stdin=PIPE)
p.communicate(out.read())
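On Python 3 the same idea is usually written with io.BytesIO, and subprocess.run can take the bytes directly via its input argument. A minimal sketch, assuming the same image URL:
import io
import requests
from subprocess import run

r = requests.get("http://www.reptileknowledge.com/images/ball-python.jpg")
buf = io.BytesIO(r.content)  # in-memory stand-in for a file on disk

# input feeds the bytes to convert's stdin; check=True raises on failure
run(['convert', '-', 'new_image.jpg'], input=buf.getvalue(), check=True)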

Interaction between Python script and linux shell

I have a Python script that needs to interact with the user via the command line, while logging whatever is output.
I currently have this:
# lots of code
popen = subprocess.Popen(
    args,
    shell=True,
    stdin=sys.stdin,
    stdout=sys.stdout,
    stderr=sys.stdout,
    executable='/bin/bash')
popen.communicate()
# more code
This executes a shell command (e.g. adduser newuser02) just as it would when typing it into a terminal, including interactive behavior. This is good.
Now, I want to log, from within the Python script, everything that appears on the screen. But I can't seem to make that part work.
I've tried various ways of using subprocess.PIPE, but this usually messes up the interactivity, like not outputting prompt strings.
I've also tried various ways to directly change the behavior of sys.stdout, but as subprocess writes to sys.stdout.fileno() directly, this was all to no avail.
Popen might not be very suitable for interactive programs due to buffering issues and because some programs read/write directly to a terminal, e.g., to retrieve a password. See Q: Why not just use a pipe (popen())?.
If you want to emulate script utility then you could use pty.spawn(), see the code example in Duplicating terminal output from a Python subprocess or in log syntax errors and uncaught exceptions for a python subprocess and print them to the terminal:
#!/usr/bin/env python
import os
import pty
import sys

with open('log', 'ab') as file:
    def read(fd):
        data = os.read(fd, 1024)
        file.write(data)
        file.flush()
        return data

    pty.spawn([sys.executable, "test.py"], read)
Or you could use pexpect for more flexibility:
import sys
import pexpect  # $ pip install pexpect

with open('log', 'ab') as fout:
    p = pexpect.spawn("python test.py")
    p.logfile = fout  # or .logfile_read
    p.interact()
If your child process doesn't buffer its output (or it doesn't interfere with the interactivity) and it prints its output to its stdout or stderr then you could try subprocess:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT

with open('log', 'ab') as file:
    p = Popen([sys.executable, '-u', 'test.py'],
              stdout=PIPE, stderr=STDOUT,
              close_fds=True,
              bufsize=0)
    for c in iter(lambda: p.stdout.read(1), ''):
        for f in [sys.stdout, file]:
            f.write(c)
            f.flush()
    p.stdout.close()
    rc = p.wait()
To read both stdout/stderr separately, you could use teed_call() from Python subprocess get children's output to file and terminal?
This should work
import subprocess

f = open('file.txt', 'w')
cmd = ['echo', 'hello', 'world']
subprocess.call(cmd, stdout=f)

Python subprocess.Popen() followed by time.sleep

I want to make a python script that will convert a TEX file to PDF and then open the output file with my document viewer.
I first tried the following:
import subprocess
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
This way, the conversion from TEX to PDF works all right, but, as it takes some time, the second command (opening the file with the document viewer) is executed before the output file is created.
So, I tried to make the program wait some seconds before executing the second command. Here's what I've done:
import subprocess
import time
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(10)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
But when I do so, the output PDF file is not created. I can't understand why; the only change was the time.sleep command. Why does it affect the Popen process?
Could anyone give me some help?
EDIT:
I've followed the advice from Faust and Paulo Bu and in both cases the result is the same.
When I run this command...
subprocess.call('xelatex --output-directory=Alunos/ Alunos/{}_pratica.tex'.format(aluno), shell=True)
... or this...
p = subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
...the xelatex program runs but doesn't perform the conversion.
Strangely, when I run the command directly in the shell...
$ xelatex --output-directory=Alunos/ Alunos/name_pratica.tex
... the conversion works perfectly.
Here's what I get when I run the subprocess.call() command:
$ python my_file.py
Enter name:
name
This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012/Debian)
restricted \write18 enabled.
entering extended mode
(./Alunos/name_pratica.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
)
*
When I write the command directly in the shell, the output is the same, but it is followed automatically by the conversion.
Does anyone know why it happens this way?
PS: sorry for the bad formatting. I don't know how to post the shell output properly.
If you need to wait for the termination of the program and you are not interested in its output, you should use subprocess.call:
import subprocess

subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])
EDIT:
Also it is generally a good thing to use English when you have to name variables or functions.
If the xelatex command works in a shell but fails when you call it from Python, then xelatex might be blocked on output in your Python code: you do not read the pipes despite setting stdout/stderr to PIPE. On my machine the pipe buffer is 64KB, so if xelatex's output is smaller than that, it should not block.
You could redirect the output to os.devnull instead:
import os
import webbrowser
from subprocess import STDOUT, check_call
try:
    from subprocess import DEVNULL  # py3k
except ImportError:
    DEVNULL = open(os.devnull, 'w+b')

basename = aluno + '_pratica'
output_dir = 'Alunos'
root = os.path.join(output_dir, basename)
check_call(['xelatex', '--output-directory', output_dir, root + '.tex'],
           stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
webbrowser.open(root + '.pdf')
check_call is used to wait for xelatex and raise an exception on error.
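On Python 3.3+ the try/except fallback is unnecessary, since subprocess.DEVNULL is always available; a minimal equivalent sketch:
from subprocess import DEVNULL, STDOUT, check_call

check_call(['xelatex', '--output-directory', 'Alunos',
            'Alunos/name_pratica.tex'],
           stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)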
