I am new to Stack Overflow and experimenting with Python, currently just trying tutorial examples. I have experienced a wonderful learning curve but got completely stuck with the following (working under Windows 10):
import shelve
s = shelve.open("test")
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\dbm\dumb.py", line 82, in _create
f = _io.open(self._datfile, 'r', encoding="Latin-1")
FileNotFoundError: [Errno 2] No such file or directory: 'test.dat'
During handling of the above exception, another exception occurred:
It would be great to get some help to resolve this.
In Python 3, by default, shelve.open tries to open an existing shelf for reading. You have to pass an explicit flag to create a new shelf if it doesn't already exist.
s = shelve.open("test", "c")
This is in contrast to Python 2, where the default flag was "c" instead of "r".
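Not from the original answer, just a minimal sketch of how the "c" flag is typically used, reusing the same "test" name from the question:
import shelve
with shelve.open("test", "c") as s:   # "c" creates the shelf files if they don't exist yet
    s["greeting"] = "hello"
with shelve.open("test") as s:        # later opens find the existing shelf
    print(s["greeting"])              # hello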
How to read an error message
In general, error messages will do their best to tell you what's wrong. In the case of Python, you'll typically start at the bottom; here,
No such file or directory: 'test.dat'
tells you exactly why the error's being thrown: test.dat doesn't exist.
Next, you read upward through the stack trace until you get to something that you either understand or wrote recently, and try to make sense of the error message from there.
How to troubleshoot an error
Is the stated problem intelligible?
Yes, we asked the software to do something with a (.dat?) file called "test", so we at least know what the hell the error message is talking about.
Do we agree with the underlying premise of the error?
Specifically, does it make sense that it should matter if test.dat exists or not? Chepner covers this.
Do we agree with the specific problem as stated?
For example, it wouldn't be weird at all to get such an error message when there was in fact such a file. Then we would have a more specific question: "Why can't the software find the file?" That's progress.
(Usually the answer would be either "Because it's looking in the wrong place" or "Because it doesn't have permission to access that file".)
Read the documentation for the tools and functions in question.
How can we validate either our own understanding of the situation, or the situation described in the error message?
Depending on the context, this may involve some trial and error of rewriting our code to (see the sketch after this list):
print out (log) its state during execution
do something similar to, but different from, what it was doing, which we're more certain should work
do something similar to, but different from, what it was doing, which we're more certain should not work
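For the shelve example above, that trial and error might look something like this (only a sketch, using the file names from the question):
import os
import shelve
print("cwd:", os.getcwd())                 # are we looking where we think we are?
print(os.path.exists("test.dat"))          # does the file the error names actually exist?
s = shelve.open("test", "c")               # the variant we're more certain should work
s["probe"] = 1
s.close()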
Ask for help.
It seems shelve sometimes uses dumbdbm as its storage backend. Use the dbm module directly instead:
import dbm
filename = "test"            # the shelf name from the question
with dbm.open(filename, "n") as db:
    # read/write the database here; values come back as bytes
    db["key"] = "value"
    print(db["key"])         # b'value'
Related
Sorry if this is an easy question, but I'm trying to open a Unix executable file using Python, and it doesn't have any file extension attached to it. The file name looks something like 'filename_bib'. I typed this and it worked:
hdulist = open('filename_bib')
But when I next typed hdulist.info() or hdulist.shape(), it didn't give me anything, so I checked all its attributes and tried print(type()) and hdulist.attribute? for each attribute. I didn't really understand any of the explanations, so I actually tried typing all of them to see what they would give me, but at some point it started giving me errors which said:
ValueError: I/O operation on closed file
I think this may have happened when I tried using hdulist.close() or hdulist.closed(), but I don't know (1) if it was a mistake for me to try any of the attributes, (2) if it somehow changed anything in my original file, and (3) how to fix it.
I was told that this file contains bytes and that I should somehow be able to show a picture from it using Python, but this is my first time handling Unix executable files and I have absolutely no idea how to start. I've handled fits and pl files before, but I've never tried to open something like this. I've already looked up a bunch of things online, but I can't find any instructions whatsoever. Please help me out if you know anything about this; I will be very grateful for any help you can give me.
This is what it shows when I open it in Sublime: [screenshot attached in the original post]
The default file access mode in Python is read-only. Since you have not specified any access mode in your call
hdulist = open('filename_bib')
the file is opened only for reading, and nothing should have happened to the opened file.
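Not part of the original answer, but since the file apparently holds raw bytes, here is a small sketch of how you might peek at its first bytes to guess the format (reusing the file name from the question):
# reopen in binary mode and inspect the magic number at the start of the file
with open('filename_bib', 'rb') as f:
    header = f.read(16)
print(header)   # e.g. b'\x89PNG' at the start would suggest it is really a PNG image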
Question:
Have you tried running it in UNIX with
./filename_bib
What was the output?
I just blew a few days tracking down a bug in a Python script (unknown to me, web2py used a different root directory from what I was expecting, leading to a file read failing silently; thus the code was fine when run from the command line, but failed when run from the web).
Having finally tracked down the culprit, I can just fix the silent fail (which is in a library, OpenCV in this case), but a smarter detective would have seen the fail in some sort of system log, if such a thing exists. Then, no matter where the silent fail is, I would still see it and not have to painstakingly track it down.
So - is there some sort of global error logfile for Linux that logs such things as file-read errors?
And yes, I know there is Python-specific error logging, but the question still holds: if I have a complex project with some Python, some C, some whatever, and something somewhere is silently failing, a system-wide error log would be of immense help.
This solution may not be performant enough for your needs, but regardless I wanted to report on some research I did that may lead you in the right direction.
First of all, there is logging in linux systems under /var/log. Of interest are the syslog and messages files, which log all kinds of system events. But file read "errors" are not logged, as explained below.
In the case of opening a file that doesn't exist, we are ultimately looking for an open system call that fails (python's open calls this). But there is no notion of an exception at this low level - if open fails it just returns a negative number. In C, you can open files that don't exist all day long and still have your program return a 0 error code.
This means you have to do some work yourself to track this problem. I took your question to be, "How can I track these errors at a level below python's exceptions?" For this you can use a combination of strace and grep. You attach strace per process and it logs all the system calls that take place.
So imagine we have a C program that looks like this:
#include <stdio.h>

int main()
{
    fopen("nothere.txt", "r");   /* return value ignored -- the failure is silent */
    return 0;
}
By running strace ./test 2>&1 | grep ENOENT, we get:
open("nothere.txt", O_RDONLY) = -1 ENOENT (No such file or directory)
You could of course run strace on a python process to achieve the same results.
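For completeness (not from the original answer), a rough Python counterpart of the C program above:
# nothere.py -- the exception is swallowed, but the failed open()/openat() syscall is still visible to strace
try:
    open("nothere.txt")
except IOError:
    pass
Running strace python nothere.py 2>&1 | grep nothere should surface the ENOENT line even though the script itself prints nothing; grepping for the file name rather than ENOENT helps cut through the noise of Python's own startup syscalls.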
Things to be wary of:
You have to attach this per process. If you don't, we're back to silent errors.
Python generates a lot of system calls. Your log files might get big.
There are a lot of IO errors out there. ENOENT is only one of them.
You will need more complex string parsing to filter out system calls you don't care about.
Perhaps you could post the code that is reading the file? open() failures should always generate an IOError exception:
with open('no-such-file') as f:
    print f.read()
Traceback (most recent call last):
File "app.py", line 1, in <module>
with open('no-such-file') as f:
IOError: [Errno 2] No such file or directory: 'no-such-file'
The cause of your frustration is most likely bad exception handling, as in the following code:
try:
    with open('no-such-file') as f:
        print f.read()
except Exception, e:
    print 'bad exception handling here'
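By contrast (this is not from the original answer, just a sketch), a handler that records the error instead of hiding it might look like:
import logging
logging.basicConfig()

try:
    with open('no-such-file') as f:
        data = f.read()
except IOError:
    logging.exception("failed to read input file")   # logs the full traceback
    raise                                            # and still lets the error propagate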
I am trying to write a function that imports data from a Stata .dta file using the pandas read_stata function. I would like to detect any problems with the read process (for example, file doesn't exist) using something akin to:
try:
    data = read_stata('filename.dta')
except someTypeOfException:
    print "Error"
    exit(0)
so I can print a message and exit gracefully. However, I can't find any information about the exceptions raised by read_stata if there is a problem. I'm new to Python and pandas and I may not be expressing my web searches correctly. Or I may be barking up the wrong tree altogether, of course. Can anyone point me in the right direction please?
Thanks in advance.
I think your question is too broad. There are too many possible exceptions: some of them may be related to read_stata(), some may not be. The one you mentioned, file doesn't exist, would result in an IOError, which is not even read_stata related.
To see all the possible exceptions that may be raised by read_stata(), go check its source code, located in <path to pandas>/io/stata.py. This should give you a good place to start.
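As a rough sketch (not guaranteed to cover every failure mode, and assuming a missing file surfaces as an IOError/OSError, as suggested above):
from pandas import read_stata

try:
    data = read_stata('filename.dta')
except (IOError, OSError) as e:   # covers FileNotFoundError on Python 3, which subclasses OSError
    print("Error: %s" % e)
    raise SystemExit(1)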
I'm working on a large-scale software system written in Python right now, which includes multiple modules. I was wondering what I should do about this, and whether anyone can make sense of this error message that I keep receiving:
File "<string>", line 1, in <module>
NameError: name 'CerealObject' is not defined
The thing that makes it very cryptic is that it seems to not provide an actual file name or a specific module. From a beginner's standpoint this makes it seem impossible to debug.
File "<string>" in an exception stack trace typically means that you're using either exec or eval somewhere. They execute code from a string, hence the lack of an actual file name.
You'll need to look at the following line(s) of your stack trace to determine the source of the problem.
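As a quick illustration (a hypothetical reproduction, not the asker's actual code), exec-ing a string produces exactly this kind of frame:
# the executed code comes from a string, so the traceback has no real file name
exec("obj = CerealObject()")
# Traceback (most recent call last):
#   ...
#   File "<string>", line 1, in <module>
# NameError: name 'CerealObject' is not defined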
I'm trying to import a large LGL file (~2 GB) into igraph using
graph = Graph.Read_Lgl("Biggraph.lgl")
The error it is throwing is
Traceback (most recent call last):
File "graph.py", line8, in <module>
graph = Graph.Read_Lgl("Biggraph.lgl")
igraph.core.InternalError: Error at foreign.c:359: Parse error in LGL file, line 9997 (memory exhausted), Parse Error
I'm unsure as to what exactly is going on here. The memory exhausted error is making me think that the memory allocated to python (or the underlying C) is being used up when trying to read the file, but it almost happens instantly, like it isn't even trying to do much. Maybe it's looking at the file size and saying 'woah, can't do that.'
Seriously though, I have no idea what is happening. My understanding was that igraph can handle extremely large graphs, and I don't think my graph is too large for it.
I did generate the lgl file myself, but I believe I have the syntax correct. This error doesn't really seem like there is a problem with my lgl file, but I could be wrong ("Parse error" kind of scares me).
I just figured I'd try here and see if anyone more familiar with how igraph operates would know how to quickly solve this problem (or extend the memory). Thanks.
For the record, the poster has found a bug in the igraph library and we are working on a fix right now. The problem is caused by a right-recursive rule in the bison parser specification for the LGL format. Once we have an official patch for it in the trunk of the project, I will post the URL of the patch here should others run into the same problem.
Update:
The URLs to the patches are:
http://bazaar.launchpad.net/~igraph/igraph/0.5-main/revision/1696 (for igraph 0.5.x)
http://bazaar.launchpad.net/~igraph/igraph/0.6-main/revision/2543 (for igraph 0.6)