So I just spent a long time processing and writing out files for a project I'm working on. The files contain objects pickled with cPickle. Now I'm trying to load the pickled files and I'm running into the problem: "Can't import module ...". The thing is, I can import the module directly from the Python prompt just fine.
I first noticed that my code had a problem reading the file (an EOF error), and I saw that I was reading it with open('file', 'r'). Others pointed out that I need to specify that it's a binary file. I don't get the EOF error anymore, but now I'm getting this one.
It seems to me that I've screwed up the writing of my files initially by writing out with 'w' and not 'wb'.
My question is: is there a way to process the binary file and undo what 'w' changed? Possibly by searching for line endings and changing them (which I think is the big difference between 'w' and 'wb' on Windows).
Any help doing this would be amazing, as otherwise I will have lost weeks of work. Thanks.
I found the answer here. It talks about a solution to the same problem I was having, but not before outlining the traditional solution in Python 2 (to all those who do this, thank you).
The solution comes down to this:
data = open(filename, "rb").read()
newdata = data.replace("\r\n", "\n")
if newdata != data:
    f = open(filename, "wb")
    f.write(newdata)
    f.close()
Basically, just replace all instances of "\r\n" with "\n". It seems to have worked well; I can now open the file and unpickle it just fine.
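For what it's worth, under Python 3 the same idea needs bytes literals, since a file opened in "rb" mode yields bytes rather than str. A minimal sketch of the same fix:

data = open(filename, "rb").read()
newdata = data.replace(b"\r\n", b"\n")  # bytes literals, since data is bytes under Python 3
if newdata != data:
    with open(filename, "wb") as f:
        f.write(newdata)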
Please do not behead me for my noob question. I have looked up many other questions on Stack Overflow concerning this topic, but haven't found a solution that works as intended.
The Problem:
I have a fairly large .txt file (about 5 MB) that I want to copy via readlines() or any other built-in string-handling function into a new file. For smaller files the following code sure works (only schematically coded here):
f = open('C:/.../old.txt', 'r');
n = open('C:/.../new.txt', 'w');
for line in f:
    print(line, file=n);
However, as I found out here (UnicodeDecodeError: 'charmap' codec can't encode character X at position Y: character maps to undefined), internal restrictions of Windows prohibit this from working on larger files. So far, the only solution I came up with is the following:
f = open('C:/.../old.txt', 'r', encoding='utf8', errors='ignore');
n = open('C:/.../new.txt', 'a');
for line in f:
    print(line, file=sys.stderr) and append(line, file='C:/.../new.txt');
f.close();
n.close();
But this doesn't work. I do get a new.txt file, but it is empty. So, how do I iterate through a long .txt file and write every line into a new .txt file? Is there a way to use sys.stderr as the source for the new file? (I actually don't have any idea what this sys.stderr is.)
I know this is a noob question, but I don't know where to look for an answer anymore.
Thanks in advance!
There is no need to use print(); just write() to the file:
with open('C:/.../old.txt', 'r') as f, open('C:/.../new.txt', 'w') as n:
    n.writelines(f)
However, it sounds like you may have an encoding issue, so make sure that both files are opened with the correct encoding. If you provide the error output perhaps more help can be provided.
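For example, a minimal sketch with explicit encodings (assuming the source file really is UTF-8; adjust to whatever encoding your file actually uses):

with open('C:/.../old.txt', 'r', encoding='utf8') as f, open('C:/.../new.txt', 'w', encoding='utf8') as n:
    n.writelines(f)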
BTW: Python doesn't use ; as a line terminator. It can be used to separate two statements if you want to put them on the same line, but this is generally considered bad form.
You can set standard output to the file, as in my code below. I successfully copied a 6 MB text file with this.
import sys

bigoutput = open("bigcopy.txt", "w")
sys.stdout = bigoutput
with open("big.txt", "r") as biginput:
    for bigline in biginput.readlines():
        print(bigline.replace("\n", ""))
bigoutput.close()
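One caveat with this approach: once sys.stdout is redirected, every print() goes to the file, so it's worth keeping a reference to the original stream and restoring it when the copy is done. A minimal sketch of that idea:

import sys

original_stdout = sys.stdout  # remember the real stdout
with open("bigcopy.txt", "w") as bigoutput:
    sys.stdout = bigoutput  # print() now writes into the file
    with open("big.txt", "r") as biginput:
        for bigline in biginput:
            print(bigline.replace("\n", ""))
sys.stdout = original_stdout  # restore normal printing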
Why don't you just use the shutil module and copy the file?
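For instance, a minimal sketch; shutil.copyfile copies the raw bytes without ever decoding them, which sidesteps the encoding problem entirely:

import shutil

shutil.copyfile('C:/.../old.txt', 'C:/.../new.txt')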
You can try this code; it works for me.
with open("file_path/../large_file.txt") as f:
with open("file_path/../new_file", "wb") as new_f:
new_f.writelines(f.readlines())
new_f.close()
f.close()
I just tried to update a program I wrote and I needed to add another pickle file. So I created the blank .pkl and then used this command to open it (just as I did with all my others):
with open('tryagain.pkl', 'r') as input:
    self.open_multi_clock = pickle.load(input)
Only this time around I keep getting this really weird error for no obvious reason:
cPickle.UnpicklingError: invalid load key, 'Γ'.
The pickle file does contain the necessary information to be loaded; it is an exact match to other blank .pkl's that I have, and they load fine. I don't know what that last key in the error is, but I suspect it could give me some insight if I knew what it meant.
So I have figured out the solution to this problem, and I thought I'd take the time to list some examples of what to do and what not to do when using pickle files. Firstly, the solution was to simply make a plain old .txt file and dump the pickle data to it.
If you are under the impression that you have to actually make a new file and save it with a .pkl ending, you would be wrong. I was creating my .pkl's with Notepad++ and saving them as .pkl's. From my experience this works sometimes and sometimes it doesn't; if you're semi-new to programming this may cause a fair amount of confusion, as it did for me. All that being said, I recommend just using plain old .txt files. It's the information stored inside the file, not necessarily the extension, that is important here.
# Notice the file hasn't been pickled.
# What not to do. No need to name the file .pkl yourself.
with open('tryagain.pkl', 'r') as input:
    self.open_multi_clock = pickle.load(input)
The proper way:
# Pickle your new file
with open(filename, 'wb') as output:
    pickle.dump(obj, output, -1)
# Now open with the original .txt ext. DON'T RENAME.
# The dump above used a binary protocol, so read it back in binary mode ('rb').
with open('tryagain.txt', 'rb') as input:
    self.open_multi_clock = pickle.load(input)
Gonna guess the pickled data is throwing off portability via the characters it outputs. I'd suggest base64-encoding the pickled data before writing it to file. Here's what I ran:
import base64
import pickle

value_p = pickle.dumps("abdfg")
value_p_b64 = base64.b64encode(value_p)
f = open("output.pkl", "w+")
f.write(value_p_b64)
f.close()

readable = ''  # initialize the accumulator before the loop
for line in open("output.pkl", 'r'):
    readable += pickle.loads(base64.b64decode(line))
>>> readable
'abdfg'
I am a beginner, writing a Python script that needs to create a file I can write information to. However, I am having problems getting it to create a new, not previously existing file.
For example, I have:
file = open(coordinates.kml, 'w')
which it proceeds to tell me:
NameError: name 'coordinates' is not defined
Of course it isn't defined, I'm trying to make that file.
Everything I read on creating a new file says to take this route, but it simply will not allow me. What am I doing wrong?
I even tried to flat out define it...
file = coordinates.kml
file_open = open(file, 'w')
... and essentially got the same result.
You need to pass coordinates.kml as a string, so place it in quotes (single or double is fine).
file = open("coordinates.kml", "w")
In addition to the above answer: if you want to create the file in the same path, then there's no problem; otherwise you need to specify the path inside the quotes as well.
But note that opening a file with read permission will throw an error if you are trying to access a nonexistent file.
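For example, the mode makes the difference between an error and a newly created file:

f = open("coordinates.kml", "r")  # raises IOError (Python 2) / FileNotFoundError (Python 3) if the file doesn't exist
f = open("coordinates.kml", "w")  # creates the file if it doesn't exist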
To be future-proof and platform-independent, you can read and write files in binary mode. For example, if this is Python on Windows, the end-of-line characters could be altered. Reading and writing in binary mode, using the "rb" and "wb" switches, avoids this:
file = open("coordinates.kml", "wb")
Also remember to close the file, or else it can throw errors when re-running the script.
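One way to guarantee the file gets closed, even if an error occurs mid-write, is a with block. A minimal sketch (the content written is just a placeholder for illustration):

with open("coordinates.kml", "wb") as file:
    file.write(b"<kml></kml>")  # placeholder content, not real KML
# the file is closed automatically when the block exits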
I'm running into a problem that I haven't seen anyone on Stack Overflow encounter, or even Google, for that matter.
My main goal is to be able to replace occurrences of a string in the file with another string, so I need a way to access all of the lines in the file.
The problem is that when I try to read in a large text file (1-2 gb) of text, python only reads a subset of it.
For example, I'll run a really simple command such as:
newfile = open("newfile.txt","w")
f = open("filename.txt","r")
for line in f:
replaced = line.replace("string1", "string2")
newfile.write(replaced)
And it only writes the first 382 mb of the original file. Has anyone encountered this problem previously?
I tried a few different solutions such as using:
import fileinput
import sys

for i, line in enumerate(fileinput.input("filename.txt", inplace=1)):
    sys.stdout.write(line.replace("string1", "string2"))
But it has the same effect. Nor does reading the file in chunks such as using
f.read(10000)
I've narrowed it down to most likely being a reading problem and not a writing problem, because it happens even when simply printing out lines. I know that there are more lines: when I open the file in a full text editor such as Vim, I can see what the last line should be, and it is not the last line that Python prints.
Can anyone offer any advice or things to try?
I'm currently using a 32-bit version of Windows XP with 3.25 GB of RAM, and running Python 2.7.
Try:
f = open("filename.txt", "rb")
On Windows, rb means open file in binary mode. According to the docs, text mode vs. binary mode only has an impact on end-of-line characters. But (if I remember correctly) I believe opening files in text mode on Windows also does something with EOF (hex 1A).
You can also specify the mode when using fileinput:
fileinput.input("filename.txt", inplace=1, mode="rb")
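Putting the pieces together, a minimal sketch of the in-place replacement with binary mode (this assumes Python 2, where fileinput accepts mode="rb"; with inplace=1, whatever is written to standard output replaces the corresponding line in the file):

import fileinput
import sys

for line in fileinput.input("filename.txt", inplace=1, mode="rb"):
    sys.stdout.write(line.replace("string1", "string2"))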
Are you sure the problem is with reading and not with writing out?
Do you close the file that is written to, either explicitly newfile.close() or using the with construct?
Not closing the output file is often the source of such problems when buffering is going on somewhere. If that's the case in your setting too, closing should fix your initial solutions.
If you use the file like this:
with open("filename.txt") as f:
for line in f:
newfile.write(line.replace("string1", "string2"))
It should only read into memory one line at a time, unless you keep a reference to that line in memory.
After each line is read, it will be up to Python's garbage collector to get rid of it. Give this a try and see if it works for you :)
Found the solution, thanks to Gareth Latty. Using an iterator:
def read_in_chunks(file, chunk_size=1000):
    while True:
        data = file.read(chunk_size)
        if not data:
            break
        yield data
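A minimal usage sketch for the generator (assuming Python 2, as in the question). Note that a fixed chunk size can split "string1" across two chunks, so a line-based or overlap-aware approach is safer if that matters for your data:

with open("filename.txt", "rb") as f, open("newfile.txt", "wb") as newfile:
    for chunk in read_in_chunks(f, chunk_size=1000):
        newfile.write(chunk.replace("string1", "string2"))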
This answer was posted as an edit to the question Python Does Not Read Entire Text File by the OP user1297872 under CC BY-SA 3.0.
So, this is a seemingly simple question, but I'm apparently very very dull. I have a little script that downloads all the .bz2 files from a webpage, but for some reason the decompressing of that file is giving me a MAJOR headache.
I'm quite a Python newbie, so the answer is probably quite obvious, please help me.
In this bit of the script, I already have the file, and I just want to read it out to a variable, then decompress that? Is that right? I've tried all sorts of ways to do this. I usually get a "ValueError: couldn't find end of stream" error on the last line in this snippet. I've tried to open up the zip file and write it out to a string in a zillion different ways. This is the latest.
openZip = open(zipFile, "r")
s = ''
while True:
newLine = openZip.readline()
if(len(newLine)==0):
break
s+=newLine
print s
uncompressedData = bz2.decompress(s)
Hi Alex, I should've listed all the other methods I've tried, as I've tried the read() way.
METHOD A:
print 'decompressing ' + filename
fileHandle = open(zipFile)
uncompressedData = ''
while True:
    s = fileHandle.read(1024)
    if not s:
        break
    print('RAW "%s"', s)
    uncompressedData += bz2.decompress(s)
uncompressedData += bz2.flush()
newFile = open(steamTF2mapdir + filename.split(".bz2")[0], "w")
newFile.write(uncompressedData)
newFile.close()
I get the error:
uncompressedData += bz2.decompress(s)
ValueError: couldn't find end of stream
METHOD B
zipFile = steamTF2mapdir + filename
print 'decompressing ' + filename
fileHandle = open(zipFile)
s = fileHandle.read()
uncompressedData = bz2.decompress(s)
Same error:
uncompressedData = bz2.decompress(s)
ValueError: couldn't find end of stream
Thanks so much for your prompt reply. I'm really banging my head against the wall, feeling inordinately thick for not being able to decompress a simple .bz2 file.
By the by, used 7zip to decompress it manually, to make sure the file isn't wonky or anything, and it decompresses fine.
You're opening and reading the compressed file as if it was a textfile made up of lines. DON'T! It's NOT.
uncompressedData = bz2.BZ2File(zipFile).read()
seems to be closer to what you're angling for.
Edit: the OP has shown a few more things he's tried (though I don't see any notes about having tried the best method -- the one-liner I recommend above!) but they seem to all have one error in common, and I repeat the key bits from above:
opening ... the compressed file as if
it was a textfile ... It's NOT.
open(filename) and even the more explicit open(filename, 'r') open, for reading, a text file -- a compressed file is a binary file, so in order to read it correctly you must open it with open(filename, 'rb'). ((my recommended bz2.BZ2File KNOWS it's dealing with a compressed file, of course, so there's no need to tell it anything more)).
In Python 2.*, on Unix-y systems (i.e. every system except Windows), you could get away with a sloppy use of open (but in Python 3.* you can't, as text is Unicode, while binary is bytes -- different types).
In Windows (and before that in DOS) it's always been indispensable to distinguish, as Windows text files, for historical reasons, are peculiar (they use two bytes rather than one to end lines, and, at least in some cases, take a byte worth '\x1A' as meaning a logical end of file), so the low-level reading and writing code must compensate.
So I suspect the OP is using Windows and is paying the price for not carefully using the 'rb' option ("read binary") to the open built-in. (though bz2.BZ2File is still simpler, whatever platform you're using!-).
openZip = open(zipFile, "r")
If you're running on Windows, you may want to do say openZip = open(zipFile, "rb") here since the file is likely to contain CR/LF combinations, and you don't want them to be translated.
newLine = openZip.readline()
As Alex pointed out, this is very wrong, as the concept of "lines" is foreign to a compressed stream.
s = fileHandle.read(1024)
[...]
uncompressedData += bz2.decompress(s)
This is wrong for the same reason. 1024-byte chunks aren't likely to mean much to the decompressor, since it's going to want to work with its own block size.
s = fileHandle.read()
uncompressedData = bz2.decompress(s)
If that doesn't work, I'd say it's the new-line translation problem I mentioned above.
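In other words, something like this minimal sketch, with the binary flag made explicit:

fileHandle = open(zipFile, "rb")  # binary mode, so no newline translation on Windows
s = fileHandle.read()
uncompressedData = bz2.decompress(s)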
This was very helpful.
44 of 2300 files gave a missing end-of-stream error when opened on Windows.
Adding the b(inary) flag to open fixed the problem.
for line in bz2.BZ2File(filename, 'rb', 10000000):
works well. (The 10M is the buffering size, which works well with the large files involved.)
Thanks!