Using ctypes to import a DLL. Occasionally, after a function from the DLL is called and I then call print() in Python, I get an OSError: The handle is invalid.
The calls to the DLL succeed, and about 90% of the time the application works without a hitch. Roughly one run in ten this exception is raised, and I can't even catch it properly since I have no way to restore the handle.
I think the DLL is somehow messing with the stdout handle that print() uses; some functions within the DLL still print to stdout. Is there any way to reacquire a valid handle?
Traceback (most recent call last):
File "{PATH}/demo.py", line 13, in <module>
print(" ")
OSError: [WinError 6] The handle is invalid
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
OSError: [WinError 6] The handle is invalid
The issue has been fixed by duplicating the stdout file descriptor with os.dup() before calling into the DLL, then rebuilding sys.stdout from the saved copy afterwards:
# Duplicate stdout before the DLL gets a chance to invalidate it
stdout_copy = os.dup(sys.stdout.fileno())
# ... call into the DLL ...
# Restore stdout from the saved duplicate
sys.stdout = os.fdopen(stdout_copy, "w")
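The save-and-restore dance is easy to demonstrate without the DLL involved: os.dup() takes a private copy of descriptor 1, os.dup2() points fd 1 somewhere else (standing in here for whatever the DLL does to the handle), and a second os.dup2() puts the original back. A minimal sketch; the temporary file only simulates the DLL clobbering stdout:

```python
import os
import tempfile

saved = os.dup(1)                     # private copy of the stdout descriptor (fd 1)

tmp = tempfile.TemporaryFile()
os.dup2(tmp.fileno(), 1)              # simulate the DLL redirecting/clobbering fd 1
os.write(1, b"captured\n")            # lands in the temp file, not the terminal

os.dup2(saved, 1)                     # point fd 1 back at the saved copy
os.close(saved)

tmp.seek(0)
assert tmp.read() == b"captured\n"    # the redirected write went to the file
```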
Related
I was trying to debug an issue with abc.ABCMeta - in particular a subclass check that didn't work as expected - and I wanted to start by simply adding a print to the __subclasscheck__ method (I know there are better ways to debug code, but pretend for the sake of this question that there's no alternative). However, when starting Python afterwards it crashes (like a segmentation fault) and I get this exception:
Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
File "C:\...\lib\io.py", line 84, in <module>
File "C:\...\lib\abc.py", line 158, in register
File "C:\...\lib\abc.py", line 196, in __subclasscheck__
RuntimeError: lost sys.stdout
So it probably wasn't a good idea to put the print in there. But where exactly does the exception come from? I only changed Python code - that shouldn't crash, right?
Does someone know where this exception is coming from and if/how I can avoid it but still put a print in the abc.ABCMeta.__subclasscheck__ method?
I'm using Windows 10, Python-3.5 (just in case it might be important).
This exception stems from the fact that CPython imports io, and, indirectly, abc.py during the initialization of the standard streams:
if (!(iomod = PyImport_ImportModule("io"))) {
goto error;
}
io imports the abc module and registers FileIO as a virtual subclass of RawIOBase, several classes as virtual subclasses of BufferedIOBase, and others of TextIOBase. ABCMeta.register invokes __subclasscheck__ in the process.
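That chain is easy to reproduce in pure Python. The Verbose metaclass below stands in for the print the question added directly inside abc.ABCMeta.__subclasscheck__; register() itself calls issubclass(), so the print fires before anyone explicitly checks a subclass:

```python
import abc

class Verbose(abc.ABCMeta):
    # Stand-in for adding a print() to abc.ABCMeta.__subclasscheck__ itself.
    def __subclasscheck__(cls, subclass):
        print("checking", cls.__name__, "against", subclass.__name__)
        return super().__subclasscheck__(subclass)

class Base(metaclass=Verbose):
    pass

class Impl:
    pass

Base.register(Impl)            # register() internally calls issubclass() -> prints
print(issubclass(Impl, Base))  # True
```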
As you understand, using print in __subclasscheck__ while sys.stdout isn't set up yet is a big no-no; the initialization fails and you get back your error:
if (initstdio() < 0)
Py_FatalError(
"Py_NewInterpreter: can't initialize sys standard streams");
You can get around it by guarding the print with hasattr(sys, 'stdout'); sys has been initialized by this point, while stdout hasn't (and, as such, won't exist in sys during the early initialization phase):
if hasattr(sys, 'stdout'):
print("Debug")
You should get a good amount of output when firing Python up now.
We have an issue using the subprocess library in Python. We tried to pass two messages to the process with communicate(). The first one works correctly, but the second one raises an IOError. I think we are probably using the subprocess functions incorrectly, but we haven't been able to fix it.
Can anyone help us?
Here is the code:
from subprocess import *
video=Popen("omxplayer myvideo.mp4",shell=True,stdout=PIPE,stdin=PIPE)
video.stdin.write('+')
video.stdin.flush()
result=video.stdout.read()
print "Vol +: "+result
video.communicate('p')
print "Pause"
And the error:
Traceback (most recent call last):
File "youtube.py", line 55, in <module>
proc.stdin.write('+')
IOError: [Errno 32] Broken pipe
Thank you
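One likely cause (a guess; omxplayer can't be run here): communicate() writes its input, closes the child's stdin and waits for the process to exit, so it can only be called once per Popen object, and any later write to stdin hits a closed pipe. A minimal sketch of that one-shot behavior, using a throwaway Python child in place of omxplayer:

```python
import sys
from subprocess import Popen, PIPE

# The child echoes stdin back to stdout; communicate() sends the input,
# closes the pipes and waits for the child to exit.
proc = Popen([sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.read())"],
             stdin=PIPE, stdout=PIPE)
out, _ = proc.communicate(b"hello")
print(out)  # b'hello'; a second communicate() or stdin.write() would fail
```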
I used to think that os.fdopen() either eats the file descriptor and returns a file I/O object, or raises an exception.
For example:
fd = os.open("/etc/passwd", os.O_RDONLY)
try: os.fdopen(fd, "w")
except: os.close(fd) # prevent fd leak
However these semantics don't seem to always hold.
Here's an example on OSX:
In [1]: import os
In [2]: os.open("/", os.O_RDONLY, 0660)
Out[2]: 5
In [3]: os.fdopen(5, "rb")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-3-3ca4d619250e> in <module>()
----> 1 os.fdopen(5, "rb")
IOError: [Errno 21] Is a directory: '<fdopen>'
In [4]: os.close(5)
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-4-76713e571514> in <module>()
----> 1 os.close(5)
OSError: [Errno 9] Bad file descriptor
It seems that os.fdopen() both ate my file descriptor 5 and raised an exception...
Is there a safe way to use os.fdopen()?
Did I miss something?
Did I find a bug?
P.S. Python version string Python 2.7.6 (v2.7.6:3a1db0d2747e, Nov 10 2013, 00:42:54) in case someone can't reproduce with theirs.
P.P.S. same problem is present on Py2.7 Linux too.
Py3.3 however doesn't exhibit said problem.
Python 2 checks whether the resulting FILE* refers to a directory only after creating the Python file object and storing the FILE* in it. When the directory check fails, the file object is deref'd (since it won't be returned), which calls its destructor, which closes the file.
I agree that it'd be nice if the docs described what effect it can have on the file descriptor passed in. I'm not sure what you want as a 'safe' way to use fdopen. If you're going to close the file descriptor on failure anyway, what does it matter that it was already closed by Python? Just use
try: os.close(fd)
except: pass
to quelch the secondary exception.
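Putting that together, a defensive wrapper might look like the following sketch (the guarded close is deliberate: on the affected Python 2 versions fdopen may already have closed the descriptor internally):

```python
import os

def safe_fdopen(fd, mode="r"):
    # Wrap os.fdopen() so the descriptor is cleaned up exactly once,
    # even if fdopen already closed it before raising.
    try:
        return os.fdopen(fd, mode)
    except (IOError, OSError):
        try:
            os.close(fd)   # may raise EBADF if fdopen closed it already
        except OSError:
            pass           # already closed; nothing leaked
        raise

f = safe_fdopen(os.open("/etc/passwd", os.O_RDONLY))
print(f.readline().rstrip())
f.close()
```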
fill_file_fields is called by PyFile_FromFile to fill in the members of the file object and it calls the dircheck function after the fields have been populated. This causes fill_file_fields to return NULL so PyFile_FromFile does Py_DECREF(f); where f is the file object. Since this is the last reference, the deallocator file_dealloc is called which invokes close_the_file which (surprise, surprise) closes the file.
In the 3.4 branch, the dircheck is done from fileio_init which uses the flag variable fd_is_own to determine whether the file should be closed on an error condition.
I have control over what goes into /usr/bin/python and can replace it with my script which calls python underneath. However, I do not have control on the programs that are written (cannot mandate a particular convention etc.)
In such a situation, what would be the best way to have python dump the stacktrace into a database in addition to displaying it on stdout? (Have your own script?)
Update:
Clarification: I meant a stacktrace that a program generates upon error:
l = [1,2,3]
l[4]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list index out of range
Solution: I think AKX's solution below works in redirecting stderr to a script which dumps the stacktrace into a NoSQL store. Thanks!
You can define default exception handler:
import sys
import traceback
def my_handler(typ, value, tb):
    error_str = ''.join(traceback.format_exception(typ, value, tb))
    print 'Here you can write the exception to a DB: ', error_str
sys.excepthook = my_handler
print 1 / 0 # here you can execute third party code via execfile/import
Look into documentation about exception handling and generating tracebacks.
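As a concrete variant of the same idea, the hook can write straight into SQLite instead of printing; the database path and table name below are made up for illustration:

```python
import sqlite3
import sys
import traceback

DB_PATH = "errors.db"  # illustrative path

def db_excepthook(typ, value, tb):
    # Persist the formatted traceback, then fall back to the default
    # hook so the error is still shown on stderr.
    text = "".join(traceback.format_exception(typ, value, tb))
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS tracebacks (text TEXT)")
    conn.execute("INSERT INTO tracebacks VALUES (?)", (text,))
    conn.commit()
    conn.close()
    sys.__excepthook__(typ, value, tb)

sys.excepthook = db_excepthook
```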
How do I sort out (distinguish) an error caused by a "disk full" condition from one caused by trying to write to a read-only file system?
I don't want to fill my HD to find out :)
What I want to know is how to catch each exception, so my code can tell the user one thing when they are trying to write to a read-only FS and another when the disk is full.
Once you catch IOError, e.g. with an except IOError, e: clause in Python 2.*, you can examine e.errno to find out exactly what kind of I/O error it was (unfortunately in a way that's not necessarily fully portable among different operating systems).
See the errno module in Python standard library; opening a file for writing on a R/O filesystem (on a sensible OS) should produce errno.EPERM, errno.EACCES or better yet errno.EROFS ("read-only filesystem"); if the filesystem is R/W but there's no space left you should get errno.ENOSPC ("no space left on device"). But you will need to experiment on the OSes you care about (with a small USB key filling it up should be easy;-).
There's no way to use different except clauses depending on errno -- such clauses must be distinguished by the class of exceptions they catch, not by attributes of the exception instance -- so you'll need an if/else or other kind of dispatching within a single except IOError, e: clause.
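A sketch of that if/else dispatch inside the single except clause (the message strings and the path are illustrative):

```python
import errno

def describe_write_failure(e):
    # Map the errno of a failed write to a user-facing message (sketch).
    if e.errno in (errno.EROFS, errno.EACCES, errno.EPERM):
        return "cannot write: file or filesystem is read-only"
    if e.errno == errno.ENOSPC:
        return "cannot write: no space left on device"
    return "unexpected I/O error: %s" % e.strerror

try:
    with open("/some/readonly/mount/file.txt", "w") as f:  # illustrative path
        f.write("data")
except (IOError, OSError) as e:
    print(describe_write_failure(e))
```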
On a read-only filesystem, the files themselves will be marked as read-only. Any attempt to open a read-only file for writing (O_WRONLY or O_RDWR) will fail. On UNIX-like systems, the errno EACCES will be set.
>>> file('/etc/resolv.conf', 'a')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 13] Permission denied: '/etc/resolv.conf'
In contrast, attempts to write to a full filesystem may result in ENOSPC. "May" is critical; the error may be delayed until fsync or close.
>>> file('/dev/full', 'a').write('\n')
close failed in file object destructor:
IOError: [Errno 28] No space left on device
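That delay can be demonstrated with Linux's /dev/full pseudo-device, which accepts open() but fails every flush with ENOSPC; the error handling therefore has to cover the close as well as the write (sketch, guarded so it only runs where /dev/full exists):

```python
import errno
import os

if os.path.exists("/dev/full"):
    caught = None
    try:
        f = open("/dev/full", "w")
        f.write("\n")   # buffered: appears to succeed
        f.close()       # the flush happens here; ENOSPC is raised now
    except (IOError, OSError) as e:
        caught = e
    assert caught is not None and caught.errno == errno.ENOSPC
```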