how to read output from several running processes in python?

Is there any way to read the output of running terminals?
I have processes running in /dev/pts/# and I want to read all of them (and maybe save the output to a file).
This works quite well, but only for the first process:
import subprocess

with open("path_to_file_with_devices_list") as f:
    content = f.read().splitlines()

for x in content:
    subprocess.check_call(['cat', x])
I get the whole output of the first /dev/pts/# in my terminal, and I understand why it gets stuck: the script captures the first /dev/pts/# and that is the only output I can see.
How can I handle this? I mean, how can I capture the output of the other/next terminals in /dev/pts/#?
Should each subsequent process somehow run in another terminal? Or should the script be forced to finish each read and move on to the next device?
Any ideas?
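One way to read several devices at once (a sketch, not from the original post; it assumes each entry in the device list is a readable /dev/pts/# path) is to start one cat per device with subprocess.Popen and drain each one on its own thread, so the first device never blocks the rest:

import subprocess
import threading

def dump(device, logfile):
    # Run one cat per device and append everything it emits to a log file.
    proc = subprocess.Popen(['cat', device], stdout=subprocess.PIPE)
    with open(logfile, 'ab') as out:
        for chunk in proc.stdout:
            out.write(chunk)

with open("path_to_file_with_devices_list") as f:
    devices = f.read().splitlines()

threads = []
for dev in devices:
    t = threading.Thread(target=dump, args=(dev, dev.replace('/', '_') + '.log'))
    t.start()
    threads.append(t)

for t in threads:
    t.join()   # each cat keeps running until its terminal goes away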

Related

python script hangs when writing to /dev/xconsole

My Python script is supposed to write to /dev/xconsole. It works as expected when I am reading from /dev/xconsole, for example with tail -F /dev/xconsole. But if I don't have tail running, my script hangs and waits.
I am opening the file as follows:
xconsole = open('/dev/xconsole', 'w')
and writing to it:
for line in sys.stdin:
    xconsole.write(line)
Why does my script hang, when nobody is reading the output from /dev/xconsole ?
/dev/xconsole is a named pipe (an on-demand FIFO).
When you write to it, the kernel stores the data in a fixed-size in-memory buffer. If no application reads the data in time, the buffer fills up and the writing application blocks.
To avoid this, alternate writes and reads (write > read > write and so on) and make sure the buffer never fills up. On a Linux system it is usually around 64 KB.
@Vishnudev summarized this nicely already and his answer should be accepted as the correct one. I'll just add the following code to resize your FIFO buffer:
import fcntl

F_SETPIPE_SZ = 1031  # Linux-specific fcntl constants
F_GETPIPE_SZ = 1032

fifo_fd = open("/path/to/fifo", "rb")
print(f"fifo buffer size before: {fcntl.fcntl(fifo_fd, F_GETPIPE_SZ)}")
fcntl.fcntl(fifo_fd, F_SETPIPE_SZ, 1000000)
print(f"fifo buffer size after: {fcntl.fcntl(fifo_fd, F_GETPIPE_SZ)}")

Loop an existing script

I'm using a script from a third party I can't modify or show (let's call it original.py) which takes a file and produces some calculations. At the end it outputs a result (using the print statement).
Since I have many files, I decided to make a second script that gets all the wanted files and runs them through original.py:
1st get list of all files to run
2nd run each file through the original.py
3rd obtain results from each file
I have the 1st and 2nd steps working. However, the end result only saves the calculations from the last file it read.
import sys
import original
import glob
import os

fn = str(sys.argv[1])

for filename in sys.argv[1:]:
    print(filename)

ficheiros = [f for f in glob.glob(fn)]
for ficheiro in ficheiros:
    original.file = bytes(ficheiro, 'utf-8')
    original.function()
To summarize:
Knowing I can't change the original script (which reports its result with a print statement), how can I obtain the results for each loop? Is there a better way than using a for loop?
The first script can be invoked with python original.py
It requires the file to be changed manually inside the script in the original.file line.
This script outputs the result in the console and I redirect it with: python original.py > result.txt
At the moment when I try to run my script, it reads all the correct files in the folder but only returns the results for the last file.
(I tried to reformulate the question hopefully it's easier to understand)
The problem is due to a mistake in the `ficheiros = [f for f in glob.glob(fn)]` line: it's only reading one file, hence only outputting one result.
Thanks for the time.sleep() trick in the comments.
Solved:
I changed the initial part to:
fn = str(sys.argv[1])
ficheiros = []
for filename in sys.argv[1:]:
    ficheiros.append(filename)
    # print(filename)
and now it correctly reads all the files and it outputs all the results
Depending on your operating system there are different ways to take what is printed to the console and append it to a file.
For example, on Linux you could run the file that calls original.py for every input file as python yourfile.py >> outputfile.txt, which will effectively save everything that is printed into outputfile.txt.
The syntax is similar for Windows.
I'm not quite sure what you're asking, but you could try one of these:
Either redirecting all output to a file for later use, by running the script like so: python secondscript.py > outfilename.txt
Or, and this might or might not work for you, redefining the print command to a function that outputs the result how you want, eg:
def print(x):
    # Append so repeated calls don't overwrite each other's output.
    with open('outfile.txt', 'a') as f:
        f.write('example: ' + x + '\n')
If you choose the second option, I recommend saving the old print function (oldprint = print) so you can restore and use the regular print later.
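A less intrusive variant of the same idea (a sketch; it assumes, as the question states, that original.function() reads original.file and emits its result via print) is to capture stdout around each call with contextlib.redirect_stdout:

import contextlib
import glob
import io
import sys

import original

results = {}
for ficheiro in glob.glob(sys.argv[1]):
    original.file = bytes(ficheiro, 'utf-8')
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):   # everything original.function() prints lands in buf
        original.function()
    results[ficheiro] = buf.getvalue()

with open('result.txt', 'w') as out:
    for name, text in results.items():
        out.write(f"{name}:\n{text}\n")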
I'm not sure I got exactly what you want: you have a script named original.py which takes some arguments and returns its results via print statements, and you would like to grab those printed results in your own script to do things with them?
If so, a solution could be the subprocess module:
Let's say that this is original.py:
print("Hi, I'm original.py")
print("print me!")
And this is main.py:
import subprocess

script_path = "original.py"
print("Executing ", script_path)

process = subprocess.Popen(["python3", script_path], stdout=subprocess.PIPE)
for line in process.stdout:
    print(line.decode("utf8"))
You can easily pass more arguments in the Popen call by extending the list, e.g. ["python3", script_path, "arg1", "arg2"].
Output:
Executing original.py
Hi, I'm original.py
print me!
and you can grab the lines in the main.py to do what you want with them.

Call Perl script from Python constantly returning values

I found a question on this site which showed me how to call a Perl script from Python. I'm currently using the following lines of code to achieve this:
import subprocess

pipe = subprocess.Popen(["perl", "./Perl_Script.pl", param], stdout=subprocess.PIPE)
result = pipe.stdout.read()
This works perfectly, but the only issue is that the Perl script takes a few minutes to run. At the end of the Perl script, I use a simple print statement to print my values I need to return back to Python, which gets set to the result variable in Python.
Is there a way I can include more print statements in my Perl script every few seconds that can get returned to Python continuously (instead of waiting a few minutes and returning a long list at the end)?
Ultimately, what I'm doing is using the Perl script to obtain data points that I then send back to Python to plot an eye diagram. Instead of waiting for minutes to plot the eye diagram when the Perl script is finished running, I'd like to return segments of the data to Python continuously, allowing my plot to update every few seconds.
The default UNIX stdio buffer is at least 8k. If you're writing less than 8k, you'll end up waiting until the program ends before the buffer is flushed.
Tell the Perl program to stop buffering output, and probably tell python not to buffer input through the pipe.
$| = 1;
to un-buffer STDOUT in your Perl program.
You need two pieces: To read a line at a time in Python space and to emit a line at a time from Perl. The first can be accomplished with a loop like
while True:
    result = pipe.stdout.readline()
    if not result:
        break
    # do something with result
The readline blocks until a line of text (or EOF) is received from the attached process, then gives you the data it read. So long as each chunk of data is on its own line, that should work.
If you run this code without modifying the Perl script, however, you will not get any output for quite a while, possibly until the Perl script is finished executing. This is because Perl block-buffers output to a pipe by default. You can tell it to flush the buffer more often by changing a global variable in the scope in which you are printing:
use English qw(-no_match_vars);
local $OUTPUT_AUTOFLUSH = 1;
print ...;
See http://perl.plover.com/FAQs/Buffering.html and http://perldoc.perl.org/perlvar.html .
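Putting the Python side together (a sketch; the Popen call is the one from the question, with text mode and line buffering added so each line arrives as soon as Perl flushes it):

import subprocess
import sys

param = sys.argv[1]   # stand-in for the argument the question passes to the Perl script

pipe = subprocess.Popen(["perl", "./Perl_Script.pl", param],
                        stdout=subprocess.PIPE,
                        bufsize=1,                 # line-buffered on the Python side
                        universal_newlines=True)   # decode bytes to str

for line in pipe.stdout:       # yields one line at a time as Perl emits it
    print(line, end='')        # replace with code that updates the eye-diagram plot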
pipe.stdout.read() tries to read the whole stream, so it will block until perl is finished.
Try this:
line = ' '
while line:
    line = pipe.stdout.readline()
    print line,

Using textfile as stdin in python under windows 7

I'm a Win7 user.
I accidentally read about redirections (like command1 < infile > outfile) on *nix systems, and then I discovered that something similar can be done in Windows (link). Python can also do something like this with pipes(?) or stdin/stdout(?).
I do not understand how this happens in Windows, so I have a question.
I use some kind of proprietary windows-program (.exe). This program is able to append data to a file.
For simplicity, let's assume that it is the equivalent of something like
from time import ctime, sleep

while True:
    f = open('textfile.txt', 'a')
    f.write(repr(ctime()) + '\n')
    f.close()
    sleep(100)
The question:
Can I use this file (textfile.txt) as stdin?
I mean that the script (while it runs) should handle all new data continuously (not just once), i.e.
In the "never-ending cycle":
The program (.exe) writes something.
Python script captures the data and processes.
Could you please show how to do this in Python, or maybe in Windows cmd/.bat, or some other way?
This is an insanely cool thing and I want to learn how to do it! :D
If I am reading your question correctly then you want to pipe output from one command to another.
This is normally done as such:
cmd1 | cmd2
However, you say that your program only writes to files. I would double check the documentation to see if there isn't a way to get the command to write to stdout instead of a file.
If this is not possible then you can create what is known as a named pipe. It appears as a file on your filesystem, but is really just a buffer of data that can be written to and read from (the data is a stream and can only be read once). Meaning your program reading it will not finish until the program writing to the pipe stops writing and closes the "file". I don't have experience with named pipes on windows so you'll need to ask a new question for that. One down side of pipes is that they have a limited buffer size. So if there isn't a program reading data from the pipe then once the buffer is full the writing program won't be able to continue and just wait indefinitely until a program starts reading from the pipe.
An alternative is that on Unix there is a program called tail which can be set up to continuously monitor a file for changes and output any data as it is appended to the file (with a short delay).
tail --follow=name --retry textfile.txt | mycmd
# wait for data to be appended to the file and output new data to mycmd
cmd1 >> textfile.txt # append output to file
One thing to note about this is that tail won't stop just because the first command has stopped writing to the file. tail will continue to listen to changes on that file forever or until mycmd stops listening to tail, or until tail is killed (or "sigint-ed").
This question has various answers on how to get a version of tail onto a windows machine.
import sys

sys.stdin = open('textfile.txt', 'r')
for line in sys.stdin:
    process(line)
If the program writes to textfile.txt, you can't change that to redirect to stdin of your Python script unless you recompile the program to do so.
If you were to edit the program, you'd need to make it write to stdout, rather than a file on the filesystem. That way you can use the redirection operators to feed it into your Python script (in your case the | operator).
Assuming you can't do that, you could write a program that polls for changes on the text file, and consumes only the newly written data, by keeping track of how much it read the last time it was updated.
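That polling approach can be quite small; here is a minimal sketch (textfile.txt is the file from the question, and the print call stands in for your own processing):

import time

# Poll textfile.txt for newly appended data, like a simple "tail -f".
with open('textfile.txt', 'r') as f:
    f.seek(0, 2)                      # 2 = os.SEEK_END: start at the current end of the file
    while True:
        line = f.readline()
        if line:
            print(line, end='')       # replace with whatever processing you need
        else:
            time.sleep(0.5)           # no new data yet; wait and check again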
When you use < to redirect the contents of a file to a Python script, that script receives the data on its stdin stream.
Simply read from sys.stdin to get that data:
import sys

for line in sys.stdin:
    # do something with line
    pass

tail -f does not seem to work in the shell when file is being populated through file.write()

I am trying to daemonize a python script that currently runs in the foreground. However, I still need to be able to see its output which it currently dumps to stdout.
So I am using the following piece of code which generates a unique file name in /tmp and then it assigns sys.stdout to this new file. All subsequent calls to 'print' are then redirected to this log file.
import sys
import uuid

outfile = open('/tmp/outfile-' + str(uuid.uuid4()), 'w')
outfile.write("Log file for daemon script...\n")
sys.stdout = outfile
# Rest of script uses print statements to dump information into the /tmp file
.
.
.
The problem I am facing is that, when I tail -f the file created in /tmp, I don't see any output. However, once I kill my daemon process, output is visible in the /tmp logfile, because python flushes out the file data.
I want to monitor the /tmp log file in realtime, hence it would be great if somehow, the output can be made visible in realtime.
One solution that I have tried was using unbuffered I/O, but that didn't help either.
Try harder to use unbuffered I/O. The problem is almost certainly that your output is buffered.
Opening the file like this should work:
outfile = open(name, 'w', 0)
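Note that open(name, 'w', 0) only works on Python 2; Python 3 refuses unbuffered text mode. The closest Python 3 equivalent (a sketch, reusing the names from the question) is a line-buffered file, which is enough for tail -f to see each line promptly:

import sys
import uuid

# buffering=1 means line-buffered in text mode: every '\n' is flushed to disk,
# so tail -f on the log file sees each print almost immediately.
outfile = open('/tmp/outfile-' + str(uuid.uuid4()), 'w', buffering=1)
sys.stdout = outfile

print("Log file for daemon script...")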
