I am currently using popen to call a Unix command which accepts multiple files as arguments, and instead of using real files I would like to pass the data from memory as a variable/file object. Actual file paths need to be specified on the command line, as the command does not read them from STDIN. I can pass one file by using '/dev/fd/0' as an argument and feeding the file's contents to STDIN via communicate(), but I am looking for a way to pass multiple files.
I believe I need to use file descriptors here in order to achieve this, and from looking around I can see Python 3.2+ has a Popen option called pass_fds, but no such option exists in 2.7.
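For reference, a minimal Python 3 sketch of what pass_fds enables (this assumes each blob fits in the OS pipe buffer, ~64 KiB on Linux; the script path is the same hypothetical one used in the answer below):
import os
from subprocess import Popen, PIPE

contents = [b"The entire\ncontent of\nfile one\n", b"content of file two"]
fds = []
for data in contents:
    r, w = os.pipe()
    os.write(w, data)  # safe while the data fits in the pipe buffer
    os.close(w)
    fds.append(r)

# pass_fds keeps the read ends open in the child at the same fd numbers
p = Popen(['/home/pi/myscript.sh'] + ['/dev/fd/%d' % fd for fd in fds],
          stdout=PIPE, pass_fds=fds)
out, _ = p.communicate()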
Is there any way to do this in Python 2.7? I guess you'd need to use os.pipe, perhaps?
Thanks
I am sure there's a much better way of doing this, but I managed to do what I needed:
from subprocess import PIPE, Popen
import os

fakefiles = []
fd2 = 10  # arbitrary starting fd number

fakefiles.append("""The entire
content of
file one
""")
fakefiles.append("content of file two")

def fd_file_list(fd, maxrange):
    # build the /dev/fd/N path arguments handed to the command
    fdlist = []
    for i in range(0, maxrange):
        fdlist.append('/dev/fd/' + str(fd))
        fd += 1
    return fdlist

def create_fds(fd, files):
    # write each blob into a pipe and dup the read end onto a known fd;
    # the OS pipe buffer (~64 KiB on Linux) caps how much can be written
    # before the child starts reading
    for content in files:
        r, w = os.pipe()
        w = os.fdopen(w, 'w')
        w.write(content)
        w.close()
        os.dup2(r, fd)
        os.close(r)
        fd += 1

fd_files = fd_file_list(fd2, len(fakefiles))
# create the descriptors in the parent before spawning; the child inherits
# them because close_fds defaults to False on Python 2.7
create_fds(fd2, fakefiles)
p2 = Popen(['/home/pi/myscript.sh'] + fd_files,
           stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p2.communicate()
print out
Where the content of /home/pi/myscript.sh is:
#!/bin/bash
((!$#)) && exit
for i; do
    echo -e "\n\nfile is $i"
    cat "$i"
done
I want to assign the output of a command I run using os.system to a variable and prevent it from being output to the screen. But in the code below, the output is sent to the screen and the value printed for var is 0, which I guess signifies whether or not the command ran successfully. Is there any way to assign the command output to the variable and also stop it from being displayed on the screen?
var = os.system("cat /etc/services")
print var #Prints 0
From this question which I asked a long time ago, what you may want to use is popen:
os.popen('cat /etc/services').read()
From the docs for Python 3.6,
This is implemented using subprocess.Popen; see that class’s
documentation for more powerful ways to manage and communicate with
subprocesses.
Here's the corresponding code for subprocess:
import subprocess

# note: no shell=True here; with an argument list the program is executed
# directly (combining a list with shell=True would run only "cat")
proc = subprocess.Popen(["cat", "/etc/services"], stdout=subprocess.PIPE)
(out, err) = proc.communicate()
print("program output:", out)
You might also want to look at the subprocess module, which was built to replace the whole family of Python popen-type calls.
import subprocess
output = subprocess.check_output("cat /etc/services", shell=True)
The advantage it has is that there is a ton of flexibility with how you invoke commands, where the standard in/out/error streams are connected, etc.
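For instance, a sketch of the same capture without a shell, folding stderr into the result (paths as in the question):
import subprocess

# no shell needed when the command is a simple argument list
output = subprocess.check_output(["cat", "/etc/services"],
                                 stderr=subprocess.STDOUT)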
The commands module (Python 2 only; it was removed in Python 3) is a reasonably high-level way to do this:
import commands
status, output = commands.getstatusoutput("cat /etc/services")
status is 0, output is the contents of /etc/services.
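On Python 3 the equivalent lives in subprocess, e.g.:
import subprocess

status, output = subprocess.getstatusoutput("cat /etc/services")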
For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as the return code. Since you are only interested in the output, you can write a utility wrapper like this.
from subprocess import PIPE, run

def out(command):
    result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    return result.stdout

my_output = out("echo hello world")
# Note: a list such as ["echo", "hello world"] only behaves as expected
# with shell=False; with shell=True on POSIX it would run just "echo"
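If you need the return code as well, the CompletedProcess object carries it; a small sketch reusing the imports above:
result = run("echo hello world", stdout=PIPE, stderr=PIPE,
             universal_newlines=True, shell=True)
print(result.returncode)  # 0 on success
print(result.stdout)      # 'hello world\n'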
I know this has already been answered, but I wanted to share a potentially better-looking way to call Popen via the use of from x import x and functions:
from subprocess import PIPE, Popen

def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    return process.communicate()[0]
print cmdline("cat /etc/services")
print cmdline('ls')
print cmdline('rpm -qa | grep "php"')
print cmdline('nslookup google.com')
I do it with os.system and a temp file:
import tempfile, os

def readcmd(cmd):
    # run the command with its output redirected to a named temp file,
    # then read the file back and remove it
    ftmp = tempfile.NamedTemporaryFile(suffix='.out', prefix='tmp', delete=False)
    fpath = ftmp.name
    if os.name == "nt":
        fpath = fpath.replace("/", "\\")  # for Windows
    ftmp.close()
    os.system(cmd + " > " + fpath)
    with open(fpath, 'r') as f:
        data = f.read()
    os.remove(fpath)
    return data
The Python 2.6 and 3 docs warn against combining PIPE for stdout and stderr with wait(), because the child can fill the OS pipe buffer and deadlock.
One safe way is to send the output to a file:
import subprocess

# must create a file object to store the output. Here we are getting
# the SSID we are connected to
outfile = open('/tmp/ssid', 'w')
proc = subprocess.Popen(["iwgetid"], bufsize=0, stdout=outfile)
proc.wait()  # safe: the output goes to a file, not a pipe
outfile.close()
# now operate on the file
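If you do want the output in memory, communicate() is the documented deadlock-safe way to use PIPE; a minimal sketch:
import subprocess

proc = subprocess.Popen(["iwgetid"], stdout=subprocess.PIPE)
out, _ = proc.communicate()  # reads everything, then reaps the process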
from os import system, remove
from uuid import uuid4

def bash_(shell_command: str) -> tuple:
    """
    :param shell_command: your shell command
    :return: (exit status, stdout)
    """
    logfile: str = '/tmp/%s' % uuid4().hex
    # '> file 2>&1' is POSIX-sh compatible; bash's '&>' is not understood
    # by the /bin/sh that os.system invokes
    err: int = system('%s > %s 2>&1' % (shell_command, logfile))
    with open(logfile, 'r') as f:
        out: str = f.read()
    remove(logfile)
    return err, out

# Example:
print(bash_('cat /usr/bin/vi | wc -l'))
>>> (0, '3296\n')
I need to use stream redirection in a Popen call in Python to run a bat file with Wine. I need to make this work:
wine32 cmd < file.bat
It works when I run it manually from the terminal; however, when I try to call it from Python:
proc = Popen('wine32 cmd < file.bat', stdout=PIPE)
I get the error: No such file or directory
How to manage with that?
Thanks
Try this:
import subprocess
import sys
#...
with open('file.bat', 'r') as infile:
    subprocess.Popen(['wine32', 'cmd'],
                     stdin=infile, stdout=sys.stdout, stderr=sys.stderr)
Make sure that each argument to wine32 is a separate list element.
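If you would rather keep the whole command as one string, including the redirection, the shell has to parse it; a sketch under that assumption:
import subprocess
import sys

# shell=True lets the shell handle the '<' redirection
subprocess.Popen('wine32 cmd < file.bat', shell=True,
                 stdout=sys.stdout, stderr=sys.stderr)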
Maybe you can check this thread: https://stackoverflow.com/a/5469427/3445802
from subprocess import Popen
p = Popen("batch.bat", cwd=r"C:\Path\to\batchfolder")
stdout, stderr = p.communicate()
In a shell script, we have the following command:
/script1.pl < input_file | /script2.pl > output_file
I would like to replicate the above stream in Python using the subprocess module. input_file is a large file, and I can't read the whole file at once. As such, I would like to pass each line as an input_string into the pipe stream and get back a string variable output_string, until the whole file has been streamed through.
The following is a first attempt:
process = subprocess.Popen(["/script1.pl | /script2.pl"], stdin = subprocess.PIPE, stdout = subprocess.PIPE, shell = True)
process.stdin.write(input_string)
output_string = process.communicate()[0]
However, using process.communicate()[0] closes the stream. I would like to keep the stream open for future streams. I have tried using process.stdout.readline(), instead, but the program hangs.
To emulate /script1.pl < input_file | /script2.pl > output_file shell command using subprocess module in Python:
#!/usr/bin/env python
from subprocess import check_call

with open('input_file', 'rb') as input_file, \
     open('output_file', 'wb') as output_file:
    check_call("/script1.pl | /script2.pl", shell=True,
               stdin=input_file, stdout=output_file)
You could write it without shell=True (though I don't see a reason here) based on 17.1.4.2. Replacing shell pipeline example from the docs:
#!/usr/bin/env python
from subprocess import Popen, PIPE

with open('input_file', 'rb') as input_file:
    script1 = Popen("/script1.pl", stdin=input_file, stdout=PIPE)
with open("output_file", "wb") as output_file:
    script2 = Popen("/script2.pl", stdin=script1.stdout, stdout=output_file)
script1.stdout.close()  # allow script1 to receive SIGPIPE if script2 exits
script2.wait()
script1.wait()
You could also use plumbum module to get shell-like syntax in Python:
#!/usr/bin/env python
from plumbum import local
script1, script2 = local["/script1.pl"], local["/script2.pl"]
(script1 < "input_file" | script2 > "output_file")()
See also How do I use subprocess.Popen to connect multiple processes by pipes?
If you want to read/write line by line then the answer depends on the concrete scripts that you want to run. In general it is easy to deadlock sending/receiving input/output if you are not careful, e.g., due to buffering issues.
If input doesn't depend on output in your case then a reliable cross-platform approach is to use a separate thread for each stream:
#!/usr/bin/env python
from subprocess import Popen, PIPE
from threading import Thread

def pump_input(pipe):
    try:
        for i in xrange(1000000000):  # generate large input
            print >>pipe, i
    finally:
        pipe.close()

p = Popen("/script1.pl | /script2.pl", shell=True, stdin=PIPE, stdout=PIPE,
          bufsize=1)
Thread(target=pump_input, args=[p.stdin]).start()
try:  # read output line by line as soon as the child flushes its stdout buffer
    for line in iter(p.stdout.readline, b''):
        print line.strip()[::-1]  # print reversed lines
finally:
    p.stdout.close()
    p.wait()
In my Python code, I have
executable_filepath = '/home/user/executable'
input_filepath = '/home/user/file.in'
I want to analyze the output I would get in a shell from the command
/home/user/executable </home/user/file.in
I tried
command = executable_filepath + ' <' + input_filepath
p = subprocess.Popen([command], stdout=subprocess.PIPE)
p.wait()
output = p.stdout.read()
but it doesn't work. The only solution I can think of now is creating another pipe and copying the input file through it, but there must be a simpler way.
from subprocess import check_output

with open("/home/user/file.in", "rb") as infile:
    output = check_output(["/home/user/executable"], stdin=infile)
You need to specify shell=True in the call to Popen. By default, [command] is passed directly to a system call in the exec family, which doesn't understand shell redirection operators.
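A minimal sketch of that variant, reusing the question's variables:
import subprocess

command = executable_filepath + ' < ' + input_filepath
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)  # a string, not a list
output = p.communicate()[0]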
Alternatively, you can let Popen connect the process to the file:
with open(input_filepath, 'r') as input_fh:
    p = subprocess.Popen([executable_filepath], stdout=subprocess.PIPE, stdin=input_fh)
    p.wait()
    output = p.stdout.read()
I am trying to pass a file to a program (MolPro) that I start as a subprocess from Python.
It most commonly takes a file as an argument, like this in the console:
path/molpro filename.ext
Where filename.ext contains the code to execute. Alternatively, a bash script (which is what I'm trying to do, but in Python):
#!/usr/bin/env bash
path/molpro << EOF
# MolPro code
EOF
I'm trying to do the above in Python. I have tried this:
from subprocess import Popen, STDOUT, PIPE
DEVNULL = open('/dev/null', 'w') # I'm using Python 2 so I can't use subprocess.DEVNULL
StdinCommand = '''
MolPro code
'''
# Method 1 (stdout will be a file)
Popen(['path/molpro', StdinCommand], shell = False, stdout = None, stderr = STDOUT, stdin = DEVNULL)
# ERROR: more than 1 input file not allowed
# Method 2
p = Popen(['path/molpro', StdinCommand], shell = False, stdout = None, stderr = STDOUT, stdin = PIPE)
p.communicate(input = StdinCommand)
# ERROR: more than 1 input file not allowed
So I am pretty sure the input doesn't look enough like a file, but even after looking at Python - How do I pass a string into subprocess.Popen (using the stdin argument)? I can't find what I'm doing wrong.
I prefer not to:
Write the string to an actual file
set shell to True
(And I can't change MolPro code)
Thanks a lot for any help!
Update: if anyone is trying to do the same thing, if you don't want to wait for the job to finish (as it doesn't return anything, either way), use p.stdin.write(StdinCommand) instead.
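A sketch of that fire-and-forget variant (path/molpro as in the question):
p = Popen(['path/molpro'], shell=False, stdout=None, stderr=STDOUT, stdin=PIPE)
p.stdin.write(StdinCommand)
p.stdin.close()  # send EOF so MolPro knows the input is complete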
It seems like your second method should work if you remove StdinCommand from the Popen() arguments:
p = Popen(['/vol/thchem/x86_64-linux/bin/molpro'], shell = False, stdout = None, stderr = STDOUT, stdin = PIPE)
p.communicate(input = StdinCommand)