sys.stdout.flush() not working properly with Python and Electron

I'm building an Electron application with a Python backend, using the python-shell module. The Electron application is supposed to log any messages my Python script prints to the console, but rather than printing each message as it is produced, it waits until the script has finished executing and then prints everything at once. I've tried using sys.stdout.write("message") followed by sys.stdout.flush(), but it still doesn't work.
The question I'm linking to describes a similar problem to mine, but the answer that worked there didn't work for me in the Electron application. The output is flushing properly on the Python backend; the frontend is what's causing the problem.
Similar question: Python sys.stdout.flush() doesn't work

file.flush() does not necessarily write the file's data to disk! You need to use flush() followed by os.fsync(fd) to ensure this behavior.
See below:
import os, sys
sys.stdout.flush()
os.fsync(sys.stdout.fileno())
os.fsync(fd) documentation, from the Python docs:
Force write of file with file descriptor fd to disk. On Unix, this
calls the native fsync() function; on Windows, the MS _commit()
function.
If you’re starting with a Python file object f, first do f.flush(),
and then do os.fsync(f.fileno()), to ensure that all internal buffers
associated with f are written to disk.
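If the Python side really is flushing (as the question states), the usual remaining suspect is block buffering on the pipe between the two processes. A minimal sketch of a backend script that should stream line by line (the loop and messages are made up):

import sys
import time

# A sketch: print(..., flush=True) pushes each line out of Python's
# buffers immediately. Running the script with `python -u` (or with
# PYTHONUNBUFFERED=1 in the environment) additionally disables block
# buffering when stdout is a pipe, which it is under python-shell.
for i in range(5):
    print("message %d" % i, flush=True)
    time.sleep(1)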

Related

Force a 3rd-party program to flush its output when called through subprocess

I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs work differently if run interactively in a terminal or as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself, and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, by manually applying stdout.flush calls.
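One special case worth noting first: if the third-party tool is itself a Python script, CPython honors the PYTHONUNBUFFERED environment variable, so you can often force real-time output without touching its source. A sketch, using the same ./callee.py as the examples below:

import os
import subprocess

# PYTHONUNBUFFERED=1 makes the child Python process's stdout
# unbuffered, so its prints arrive through the pipe in real time.
env = dict(os.environ, PYTHONUNBUFFERED="1")
p = subprocess.Popen(["./callee.py"], stdout=subprocess.PIPE, env=env)
for line in p.stdout:
    print(line.decode(), end="")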
Another way to print to screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you will not explicitly call subprocess.run (or Popen, or ...); instead you have to use the pty.spawn call:
import os
import pty

def prout(fd):
    # read the child's output from the pty master, 1 KiB at a time
    data = os.read(fd, 1024)
    while data:
        print(data.decode(), end="")
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. Above, I just print it to the terminal, but of course it is possible to do other things with the text as well (such as logging or parsing it).
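For example, to also stream the output to a log file in real time (the log file name is made up), the callback could be extended along these lines:

import os
import pty

# A sketch of the "log it as well" variant; flushing after each
# chunk keeps the log file readable while the child is running.
log = open("callee.log", "a")

def prout(fd):
    data = os.read(fd, 1024)
    while data:
        print(data.decode(), end="")
        log.write(data.decode())
        log.flush()
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
log.close()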
Another way to trick the program is to use an external program called unbuffer. Unbuffer will take your script as input, and make the program think (as with the pty call) that it is called from a terminal. This is arguably simpler if unbuffer is installed, or you are allowed to install it on your system (it is part of the expect package). All you have to do then is to change your subprocess call to
p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")
print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.

Make printf appear in stdout from shared object library

I am currently using PyCUDD, which is a SWIG-generated Python wrapper for the C package CUDD. I'm trying to get CUDD to print some debug information from inside the C code, but any printfs inside the C code don't seem to be producing any output - putting them in the .i files for SWIG produces output; putting them in the C code does not. I'm not sure if this is some property of SWIG specifically or of compiling the C code to a shared object library.
(Particularly frustrating is that I know that I have had this problem and gotten it working before, but I can't seem to find anything searching for the problem now, and I apparently forgot to leave notes on that one thing.)
I think the problem is that the launched program closes stdout (or redirects the original stdout handle/descriptor somewhere else), so the library cannot get a valid handle/descriptor. So the fix I tried is to recover the device and reopen it.
On Linux, the console device is available at /dev/stdout (a symbolic link to whatever device or file was on fd 1 at launch time), so reopen this file and restore the file descriptor:
int fd = open("/dev/stdout", O_WRONLY);  /* O_WRONLY, since we want to write to it */
if (fd != -1)
    dup2(fd, 1);
On Windows, AllocConsole creates a new console if the current process does not have one. This covers the case where PyCUDD (or any host program) uses a custom console. freopen then makes our newly allocated console writable through stdout, so printf will work in the new console:
AllocConsole();
freopen("CONOUT$", "w", stdout);
I still don't remember what my original problem was. However, I have found that what was causing my problem here was a linker issue - the linker was picking up an old version of the library, which did not have my changes.

Can I save a text file in python without closing it?

I am writing a program in which I would like to be able to view a log file before the program is complete. I have noticed that, in Python (2.7 and 3), file.write() does not save the file; file.close() does. I don't want to create a million little log files with unique names, but I would like to be able to view the updated log file before the program is finished. How can I do this?
Now, to be clear I am scripting using Ansys Workbench (trying to batch some CFX runs). Here's a link to a tutorial that shows what I'm talking about. They appear to have wrapped python, and by running the script I can send commands to the various modules. When the script is running there is no console onscreen and it appears to be eating all of the print statements, so the only way I can report what's happening is via a file. Also, I don't want to bring a console window up because eventually I will just run the program in batch mode (no interface). But the simulations take a long time to run and I can't wait for the program to finish before checking on what's happening.
You would need this:
file.flush()
# typically the line above would do; however, os.fsync() is used to
# ensure that the data is actually written to disk
os.fsync(file.fileno())
Check this: http://docs.python.org/2/library/stdtypes.html#file.flush
file.flush()
Flush the internal buffer, like stdio's fflush(). This may be a no-op on some file-like objects.
Note flush() does not necessarily write the file’s data to disk. Use flush() followed by os.fsync() to ensure this behavior.
EDITED: See this question for detailed explanations: what exactly the python's file.flush() is doing?
Does file.flush() after each write help?
Hannu
This will write the file to disk immediately:
file.flush()
os.fsync(file.fileno())
According to the documentation (https://docs.python.org/2/library/os.html#os.fsync):
Force write of file with file descriptor fd to disk. On Unix, this calls the native fsync() function; on Windows, the MS _commit() function.
If you’re starting with a Python file object f, first do f.flush(), and then do os.fsync(f.fileno()), to ensure that all internal buffers associated with f are written to disk.
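Putting the two answers together, a minimal sketch of a log writer that stays readable while the program runs (the file name and message are made up):

import os

log = open("run.log", "a")

def log_line(msg):
    log.write(msg + "\n")
    log.flush()             # move data from Python's buffer to the OS
    os.fsync(log.fileno())  # ask the OS to commit it to disk

log_line("mesh generation complete")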

subprocess.call does not wait for the process to complete

Per the Python documentation, subprocess.call should block and wait for the subprocess to complete. In this code I am trying to convert a few xls files to a new format by calling LibreOffice on the command line. I assumed that the call to subprocess.call is blocking, but it seems I need to add an artificial delay after each call, otherwise I miss a few files in the out directory.
What am I doing wrong, and why do I need the delay?
from subprocess import call

for i in range(0, len(sorted_files)):
    args = ['libreoffice', '-headless', '-convert-to', 'xls',
            "%s/%s.xls" % (sorted_files[i]['filename'], sorted_files[i]['filename']),
            '-outdir', 'out']
    call(args)
    var = raw_input("Enter something: ")  # if I comment this line out I don't get all the files in the out directory
EDIT: It might be hard to find the answer in the comments below. I ended up using unoconv for document conversion, which is blocking and easy to work with from a script.
It's likely that libreoffice is implemented as some sort of daemon/intermediary process. The "daemon" will (effectively¹) parse the command line and then farm the work off to some other process, possibly detaching it so that it can exit immediately. (Based on the -invisible option in the documentation, I strongly suspect that this is indeed the case here.)
If this is the case, then your subprocess.call does do what it is advertised to do -- it waits for the daemon to complete before moving on. However, it doesn't do what you want, which is to wait for all of the work to be completed. The only option you have in that scenario is to look to see if the daemon has a -wait option or similar.
¹ It is likely that we don't have an actual daemon here, only something which behaves similarly. See the comments by abarnert.
The problem is that the soffice command-line tool (which libreoffice is either just a link to, or a further wrapper around) is just a "controller" for the real program, soffice.bin. It finds a running copy of soffice.bin and/or creates one, tells it to do some work, and then quits.
So, call is doing exactly the right thing: it waits for libreoffice to quit.
But you don't want to wait for libreoffice to quit, you want to wait for soffice.bin to finish doing the work that libreoffice asked it to do.
It looks like what you're trying to do isn't possible to do directly. But it's possible to do indirectly.
The docs say that headless mode:
… allows using the application without user interface.
This special mode can be used when the application is controlled by external clients via the API.
In other words, the app doesn't quit after running some UNO strings/doing some conversions/whatever else you specify on the command line; it sits around waiting for more UNO commands from outside, while the launcher just returns as soon as it has sent the appropriate commands to the app.
You probably have to use the above-mentioned external control API (UNO) directly.
See Scripting LibreOffice for the basics (although there's more info there about internal scripting than external), and the API documentation for details and examples.
But there may be an even simpler answer: unoconv is a simple command-line tool written using the UNO API that does exactly what you want. It starts up LibreOffice if necessary, sends it some commands, waits for the results, and then quits. So if you just use unoconv instead of libreoffice, call is all you need.
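For example, the conversion loop from the question might become something like the following (a sketch, assuming unoconv is installed and on the PATH; -f selects the output format and -o the output directory):

from subprocess import call

# unoconv talks to soffice.bin over UNO and only exits when the
# conversion has actually finished, so call() blocks for the real work.
for f in sorted_files:
    path = "%s/%s.xls" % (f['filename'], f['filename'])
    call(["unoconv", "-f", "xls", "-o", "out", path])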
Also notice that unoconv is written in Python, and is designed to be used as a module. If you just import it, you can write your own (simpler, and use-case-specific) code to replace the "Main entrance" code, and not use subprocess at all. (Or, of course, you can tear apart the module and use the relevant code yourself, or just use it as a very nice piece of sample code for using UNO from Python.)
Also, the unoconv page linked above lists a variety of other similar tools, some that work via UNO and some that don't, so if it doesn't work for you, try the others.
If nothing else works, you could consider, e.g., creating a sentinel file and using a filesystem watch, so at least you'll be able to detect exactly when it's finished its work, instead of having to guess at a timeout. But that's a real last-ditch workaround that you shouldn't even consider until eliminating all of the other options.
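As a rough illustration of that workaround (names made up; a simple poll stands in for a proper filesystem watch):

import os
import time

# Wait until the expected output file appears, up to a timeout,
# instead of sleeping for a guessed fixed delay.
def wait_for_file(path, timeout=3600, poll=1.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)
    return False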
If libreoffice is using an intermediary (daemon), as mentioned by @mgilson, then one solution is to find out what program it's invoking, and then invoke that program directly yourself.

Seeing what gets written to stderr in Python web site using FastCGI

I am working on a website, hosted on DreamHost, using Python. For a while, I was using their default setup, which runs Python scripts using CGI. It worked fine, but I was worried that if I get a lot of traffic, it would run slow and use a lot of memory, so I switched it over to FastCGI using this module.
Overall, it still works fine, but there is one major annoyance: I can't seem to be able to see anything that gets written to the standard error stream. If anything goes wrong, my usual source of useful clues for what to do about it no longer works. Before, I used to see stuff sent to standard error in my Apache error log. Now, it just seems to disappear.
I tried making a test script, and writing strings using sys.stderr.write (from various places), and environ["wsgi.errors"].write (from within my app, where environ is the first parameter passed to the app by the WSGI/FastCGI wrapper). Either way, I couldn't find them. Does anyone know why, or how to access this data?
Keep in mind that this is my first time ever using FastCGI, so please let me know if I am making a bad choice by using this fcgi module.
If something in your system is capturing file descriptor two (the "real" stderr), you can assign sys.stderr to any open, writable file object, or to a file-like object (it basically just needs to implement write) -- including a cStringIO.StringIO instance, whose value you can get at any time (before it's closed) with a call to its .getvalue() method.
To capture any uncaught exception just before it terminates your code, assign to sys.excepthook a function of yours in which you get the information and emit it in any way of your choice; or, to get and emit anything that was written to sys.stderr even without an exception (if that's what you want -- I'm not sure, from your question), use atexit to register your grab-info-and-emit-it function.
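A minimal sketch of that suggestion (Python 2, matching the era of the question; the log path is hypothetical):

import atexit
import sys
from cStringIO import StringIO

# Capture everything written to sys.stderr in memory, then dump it
# to a file we can actually read once the FastCGI process exits.
_captured = StringIO()
sys.stderr = _captured

def _dump_stderr():
    with open("/tmp/fcgi-stderr.log", "a") as f:  # hypothetical path
        f.write(_captured.getvalue())

atexit.register(_dump_stderr)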
