Python: calling Fortran with subprocess and giving commands via communicate

I want to call a Fortran program from Python. I use Popen from subprocess like this:
p = Popen(['./finput'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
I then want to send some file names to the Fortran program. The Fortran program reads them from stdin and then opens the files.
If I use something like:
p_stdout = p.communicate(input='file1.dat\nfile2.dat\n')[0]
everything is fine and the fortran program works as expected.
However I want to give the file names as a variable from within the python program.
So if I use
p_stdout = p.communicate(input=file1+'\n'+file2+'\n')[0]
my Fortran program cannot open the files. The problem is that the string Fortran reads looks like this:
f i l e 1 . d a t
with a blank character as the first character and a strange character in between every correct character. Unfortunately, this only shows up if you print every character of the string individually. If you just print the file name with
print*,file1
you get
file1.dat
So my question is: why is Python putting these strange characters into the communication with the child process, and, more importantly, how do I get rid of them?
many thanks

It sounds like your Fortran program might be getting Unicode. Are you using Python 3? If so, construct the string to be passed and then use str.encode() on it.
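For example, a minimal sketch of that suggestion, with file1 and file2 standing in for the question's variables and assuming ASCII file names:
from subprocess import Popen, PIPE, STDOUT

file1 = 'file1.dat'  # stand-ins for the question's file-name variables
file2 = 'file2.dat'
p = Popen(['./finput'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
# Encode explicitly so the child receives plain single-byte text rather
# than a wide (UTF-16-style) encoding with interleaved extra bytes.
names = file1 + '\n' + file2 + '\n'
p_stdout = p.communicate(input=names.encode('ascii'))[0]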

Related

How to read a printout statement from another program with Python?

I have an algorithm written in C++ that prints debug statements to the terminal window with cout, and I would like to figure out how to read that printout with Python without it being piped/written to a file or returned as a value.
Python organizes how each of the individual C++ algorithms is called, while the data is kept on the heap and not written to disk. Below is an example of a situation with similar output:
+-------------- terminal window-----------------+
(c++)runNewAlgo: Debug printouts on
(c++)runNewAlgo: closing pipes and exiting
(c++)runNewAlgo: There are 5 objects of interest found
( PYTHON LINE READS THE PRINT OUT STATEMENT)
(python)main.py: Starting the next processing node, calling algorithm
(c++)newProcessNode: Node does work
+---------------------------------------------------+
Say the line of interest is "there are 5 objects of interest" and the code will be inserted before the Python call. I've tried to use sys.stdout and subprocess.Popen(), but I'm struggling here.
Your easiest path would probably be to invoke your C++ program from inside your Python script.
More details here: How to call an external program in python and retrieve the output and return code?
You can use stdout from the returned process and read it line-by-line. The key is to pass stdout=subprocess.PIPE so that the output is sent to a pipe instead of being printed to your terminal (via sys.stdout).
Since you're printing human-readable text from your C++ program, you can also pass encoding='utf-8' to automatically decode each line using UTF-8; otherwise, raw bytes will be returned.
import subprocess

proc = subprocess.Popen(['/path/to/your/c++/program'],
                        stdout=subprocess.PIPE, encoding='utf-8')
for line in proc.stdout:
    do_something_with(line)
    print(line, end='')  # if you also want to see each line printed

Running and piping text into a .exe file from Python

I am trying to run a .exe file from Python and pipe a string into it. The .exe itself opens a command box and requires a series of string inputs that can be entered in one go on a series of lines (as below).
In bash the solution would be:
printf "test.dat\nMoreinput\nMoreinput" | ~/Desktop/Median_filt_exes/ascxyz.exe
To recreate this in python I have tried:
from subprocess import Popen, PIPE
p = Popen(r"./ascxyz.exe", stdin=PIPE,text=True)
p.communicate("test.dat\nMoreinput/nMoreinput")
There's no error, however it doesn't seem to be working (the .exe should create a new file when run successfully). Any help figuring out why the .exe isn't running properly would be very appreciated!
The immediate problem is probably that you are not terminating the input with a newline. But you really also don't want to do the Popen plumbing yourself.
from subprocess import run

run(['./ascxyz.exe'], text=True,
    input="test.dat\nMoreInput\nMoreInput\n")
Notice also how we pass in a list as the first argument, to avoid the complications of shell=True.
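If it still doesn't create the file, a quick way to see what the .exe is actually doing is to capture its output and return code; a debugging sketch (not from the original answer):
from subprocess import run, PIPE

result = run(['./ascxyz.exe'], text=True,
             input="test.dat\nMoreInput\nMoreInput\n",
             stdout=PIPE, stderr=PIPE)
print(result.returncode)  # non-zero usually means the program failed
print(result.stdout)
print(result.stderr)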

Making a Block of Text into a list

I am writing a Python script that enumerates all processes running on the computer. My current code does this, but prints the result as one large block of text that is hard to read. How can I improve my script so the output is a vertical list with one entry per process?
import subprocess

print(subprocess.check_output('set', shell=True))
Edit: Here is the output text from the above script.
set is an internal cmd.exe command; in your case it displays environment variables.
To get environment variables in Python, use os.environ instead.
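A minimal sketch of the os.environ approach:
import os

# os.environ is a mapping of the current process's environment variables,
# so no subprocess is needed at all.
for name, value in sorted(os.environ.items()):
    print(f'{name}={value}')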
If you want to get the output of the set command as a list of strings (not tested):
#!/usr/bin/env python3
import os
from subprocess import check_output
lines = check_output('cmd.exe /U /c set').decode('utf-16').split(os.linesep)
set should already print with newlines, so if they're not showing up, something is more wrong than you're telling us. You could always double up the newlines if you want to split the settings apart, e.g.:
import subprocess
print(subprocess.check_output('set', shell=True).replace('\n', '\n\n'))
If the problem is that you're running on Python 3 and the bytes object is a big blob, you can make subprocess decode it to a friendly printable string for you:
print(subprocess.check_output('set', shell=True, universal_newlines=True))
# Yes, the name of the keyword is dumb; it sounds like it handles different
# line ending conventions, but on Python 3, it also decodes from `bytes`
# to `str` for you.
For the general case of line wrapping nicely (though it does nothing for paragraphs of text that are just "too big"), you might want to look at the textwrap module; it splits a block of text up into a list of lines wrapped nicely at word boundaries so you don't have words split across lines.
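A small illustration (the sample string is made up):
import textwrap

text = ("This is one very long line of text that would otherwise run "
        "past the right edge of a narrow terminal window.")
# wrap() returns a list of lines broken at word boundaries.
for line in textwrap.wrap(text, width=30):
    print(line)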
Disclaimer: I have not done what you are doing before but this might work.
import subprocess

processes = subprocess.check_output('set', shell=True)
processes = processes.decode('UTF-8').split('\n')  # convert bytes to unicode strings and split
for process in processes:
    print(process)

Call Perl script from Python constantly returning values

I found a question on this site which showed me how to call a Perl script from Python. I'm currently using the following lines of code to achieve this:
pipe = subprocess.Popen(["perl", "./Perl_Script.pl", param], stdout=subprocess.PIPE)
result = pipe.stdout.read()
This works perfectly, but the only issue is that the Perl script takes a few minutes to run. At the end of the Perl script, I use a simple print statement to print the values I need to return to Python, which get assigned to the result variable in Python.
Is there a way I can include more print statements in my Perl script every few seconds that can get returned to Python continuously (instead of waiting a few minutes and returning a long list at the end)?
Ultimately, what I'm doing is using the Perl script to obtain data points that I then send back to Python to plot an eye diagram. Instead of waiting for minutes to plot the eye diagram when the Perl script is finished running, I'd like to return segments of the data to Python continuously, allowing my plot to update every few seconds.
The default UNIX stdio buffer is at least 8k. If you're writing less than 8k, you'll end up waiting until the program ends before the buffer is flushed.
Tell the Perl program to stop buffering output, and probably tell Python not to buffer input through the pipe. Use
$| = 1;
to un-buffer STDOUT in your Perl program.
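On the Python side, a hedged sketch of the same idea: request line buffering (bufsize=1, which only applies in text mode) and process each line as it arrives. The param variable and the print call are stand-ins:
from subprocess import Popen, PIPE

param = 'some-argument'  # stand-in for the question's param variable
pipe = Popen(['perl', './Perl_Script.pl', param],
             stdout=PIPE, text=True, bufsize=1)
for line in pipe.stdout:
    print(line, end='')  # replace with your own plotting/processing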
You need two pieces: reading a line at a time on the Python side, and emitting a line at a time from Perl. The first can be accomplished with a loop like
while True:
    result = pipe.stdout.readline()
    if not result:
        break
    # do something with result
The readline blocks until a line of text (or EOF) is received from the attached process, then gives you the data it read. So long as each chunk of data is on its own line, that should work.
If you run this code without modifying the Perl script, however, you will not get any output for quite a while, possibly until the Perl script is finished executing. This is because Perl block-buffers output to a pipe by default. You can tell it to flush the buffer more often by changing a global variable in the scope in which you are printing:
use English qw(-no_match_vars);
local $OUTPUT_AUTOFLUSH = 1;
print ...;
See http://perl.plover.com/FAQs/Buffering.html and http://perldoc.perl.org/perlvar.html.
pipe.stdout.read() tries to read the whole stream, so it will block until Perl is finished.
Try this:
line = ' '
while line:
    line = pipe.stdout.readline()
    print line,  # Python 2 print syntax; the trailing comma suppresses the newline

Using a text file as stdin in Python under Windows 7

I'm a win7-user.
I happened to read about redirections (like command1 < infile > outfile) in *nix systems, and then I discovered that something similar can be done in Windows (link). And Python can also do something like this with pipes(?) or stdin/stdout(?).
I do not understand how this happens in Windows, so I have a question.
I use some kind of proprietary windows-program (.exe). This program is able to append data to a file.
For simplicity, let's assume that it is the equivalent of something like
from time import ctime, sleep

while True:
    f = open('textfile.txt', 'a')
    f.write(repr(ctime()) + '\n')
    f.close()
    sleep(100)
The question:
Can I use this file (textfile.txt) as stdin?
I mean that the script (while it runs) should always (not just once) handle all new data, i.e.:
In the "never-ending cycle":
The program (.exe) writes something.
Python script captures the data and processes.
Could you please show how to do this in Python, or maybe in Windows cmd/.bat, or some other way?
This is an insanely cool thing. I want to learn how to do it! :D
If I am reading your question correctly then you want to pipe output from one command to another.
This is normally done as such:
cmd1 | cmd2
However, you say that your program only writes to files. I would double-check the documentation to see if there isn't a way to get the command to write to stdout instead of a file.
If this is not possible then you can create what is known as a named pipe. It appears as a file on your filesystem, but is really just a buffer of data that can be written to and read from (the data is a stream and can only be read once). The program reading it will not finish until the program writing to the pipe stops writing and closes the "file". I don't have experience with named pipes on Windows, so you'll need to ask a new question for that. One downside of pipes is that they have a limited buffer size: if no program is reading data from the pipe, then once the buffer is full the writing program can't continue and will wait indefinitely until a program starts reading from the pipe.
An alternative is that on Unix there is a program called tail which can be set up to continuously monitor a file for changes and output any data as it is appended to the file (with a short delay).
tail --follow=name --retry textfile.txt | mycmd
# wait for data to be appended to the file and send the new data to mycmd
cmd1 >> textfile.txt  # append output to the file
One thing to note about this is that tail won't stop just because the first command has stopped writing to the file. tail will continue to listen to changes on that file forever or until mycmd stops listening to tail, or until tail is killed (or "sigint-ed").
This question has various answers on how to get a version of tail onto a Windows machine.
import sys

sys.stdin = open('textfile.txt', 'r')
for line in sys.stdin:
    process(line)
If the program writes to textfile.txt, you can't change that to redirect to stdin of your Python script unless you recompile the program to do so.
If you were to edit the program, you'd need to make it write to stdout, rather than a file on the filesystem. That way you can use the redirection operators to feed it into your Python script (in your case the | operator).
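For instance (with hypothetical program and script names):
yourprogram.exe | python yourscript.py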
Assuming you can't do that, you could write a program that polls for changes on the text file, and consumes only the newly written data, by keeping track of how much it read the last time it was updated.
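A minimal sketch of that polling idea (the follow name and poll_interval parameter are invented for this example):
import os
import time

def follow(path, poll_interval=1.0):
    # Track the byte offset of what we've already consumed and yield
    # only the newly appended data on each poll.
    offset = 0
    while True:
        size = os.path.getsize(path)
        if size > offset:
            with open(path, 'rb') as f:
                f.seek(offset)
                chunk = f.read()
                offset = f.tell()
            yield chunk.decode('utf-8', errors='replace')
        time.sleep(poll_interval)

for text in follow('textfile.txt'):
    print(text, end='')  # replace with your own processing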
When you use < to direct the contents of a file to a Python script, that script receives the data on its stdin stream.
Simply read from sys.stdin to get that data:
import sys
for line in sys.stdin:
# do something with line
