Python file.write() taking two tries?

Not sure how to explain this; any help will be appreciated!
Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2, pynotify, tempfile, os
>>> opener = urllib2.build_opener()
>>> page = opener.open('http://img.youtube.com/vi/RLGJU_xUVTs/1.jpg')
>>> thumb = page.read()
>>> temp = tempfile.NamedTemporaryFile(suffix='.jpg')
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
0
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
4096
thanks!

If you open the thumb file, you'll see one whole copy plus a partial copy of the data you are writing.
Flush the file instead of writing a second time:
temp.flush()
The data wasn't written to disk the first time because it isn't large enough to fill the buffer. The second write overfills the buffer, so one buffer's worth of data gets written out.
As Cameron points out in his answer, the buffer is automatically flushed when you close the file. If you want to keep it open for some reason (and the fact that this is an issue for you seems to indicate that you do), then you can call flush and the data will be written right away.

You haven't called flush() or close() on the file object before checking its size on disk -- there is an internal buffer that is automatically flushed only after a certain amount of data is written (this avoids too many expensive trips to the disk when doing many writes).
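For instance, a minimal sketch of the session above with an explicit flush() (assuming the same thumbnail URL is still reachable):
import os, tempfile, urllib2

page = urllib2.urlopen('http://img.youtube.com/vi/RLGJU_xUVTs/1.jpg')
thumb = page.read()

temp = tempfile.NamedTemporaryFile(suffix='.jpg')
temp.write(thumb)                   # may still be sitting in the internal buffer here
temp.flush()                        # push the buffered bytes out to the OS
print os.path.getsize(temp.name)    # now reports the full size on the first try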

Related

Read specific bytes of file in python

I want to specify an offset and then read the bytes of a file like
offset = 5
read(5)
and then read the next 6-10 etc. I read about seek but I cannot understand how it works and the examples aren't descriptive enough.
seek(offset,1) returns what?
Thanks
The values for the second parameter of seek are 0, 1, or 2:
0 - offset is relative to start of file
1 - offset is relative to current position
2 - offset is relative to end of file
Remember you can check out the help -
>>> help(file.seek)
Help on method_descriptor:
seek(...)
seek(offset[, whence]) -> None. Move to new file position.
Argument offset is a byte count. Optional argument whence defaults to
0 (offset from start of file, offset should be >= 0); other values are 1
(move relative to current position, positive or negative), and 2 (move
relative to end of file, usually negative, although many platforms allow
seeking beyond the end of a file). If the file is opened in text mode,
only offsets returned by tell() are legal. Use of other offsets causes
undefined behavior.
Note that not all file objects are seekable.
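For the use case in the question (start at offset 5, then keep reading forward), a minimal sketch, assuming a file named data.bin exists:
with open('data.bin', 'rb') as f:
    f.seek(5)             # whence defaults to 0: absolute offset 5 from the start
    first = f.read(5)     # bytes 5-9
    second = f.read(5)    # bytes 10-14; read() continues from the current position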
Just play with Python's REPL to see for yourself:
[...]:/tmp$ cat hello.txt
hello world
[...]:/tmp$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('hello.txt', 'rb')
>>> f.seek(6, 1) # move the file pointer forward 6 bytes (i.e. to the 'w')
>>> f.read() # read the rest of the file from the current file pointer
'world\n'
seek() doesn't return anything useful (in Python 2 it returns None). It simply moves the internal file pointer to the given offset; the next read will start from that position onwards.
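Since seek() returns None here, use tell() if you want to see where the pointer ended up; for example, reading the last five bytes of the hello.txt file from the session above:
f = open('hello.txt', 'rb')
f.seek(-5, 2)        # whence=2: five bytes back from the end of the file
print f.tell()       # 7 (the 12-byte "hello world\n" minus 5)
print f.read()       # 'orld\n'
f.close()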

Python: Save a list and recover it for AI learning application

I have an application that requires saving a list as it grows. This is a step towards doing AI learning stuff. Anyway here is my code:
vocab = ["cheese", "spam", "eggs"]
word = raw_input()
vocab.append(word)
However, every time the code is run, vocab goes back to just being cheese, spam and eggs. How can I make whatever I append to the list stay there permanently, even after the Windows CMD window is closed and I return to editing the code? Is this clear enough?
Thanks
You're looking into the more general problem of object persistence. There are bazillions of ways to do this. If you're just starting out, the easiest way to save/restore data structures in Python is using the pickle module. As you get more advanced, there are different methods and they all have their trade-offs...which you'll learn when you need to.
You could use json and files, something like this:
import json

# write the data to a file
with open("dumpFile", 'w') as outfile:
    json.dump(vocab, outfile)

# read the data back in
with open("dumpFile") as infile:
    newVocab = json.load(infile)
This has the advantage of being a plain text file, so you can view the data stored in it easily.
Look into pickle
With it, you can serialize a Python data structure and then reload it like so:
>>> import pickle
>>> vocab =["cheese", "spam", "eggs"]
>>> outf=open('vocab.pkl','wb')
>>> pickle.dump(vocab,outf)
>>> outf.close()
>>> quit()
The Python interpreter has now exited; restart Python and reload the data structure:
abd-pb:~ andrew$ python
Python 2.7.1 (r271:86882M, Nov 30 2010, 10:35:34)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> pklf=open('vocab.pkl','rb')
>>> l=pickle.load(pklf)
>>> l
['cheese', 'spam', 'eggs']
You could use pickle, but as a technique it's limited to python programs. The normal way to deal with persistent data is to write it to a file. Just read your word list from a regular text file (one word per line), and write out an updated word list later. Later you can learn how to keep this kind of information in a database (more powerful but less flexible than a file).
You can happily program for years without actually needing pickle, but you can't do without file i/o and databases.
PS. Keep it simple: You don't need to mess with json, pickle or any other structured format unless and until you NEED structure.
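For instance, a minimal sketch of the plain-file approach (vocab.txt is just an example name):
# Load the existing word list, one word per line; fall back to the defaults on the first run.
try:
    with open('vocab.txt') as f:
        vocab = [line.strip() for line in f if line.strip()]
except IOError:
    vocab = ["cheese", "spam", "eggs"]

word = raw_input()
vocab.append(word)

# Write the updated list back out so it survives the next run.
with open('vocab.txt', 'w') as f:
    for w in vocab:
        f.write(w + '\n')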

Gzip and subprocess' stdout in python

I'm using python 2.6.4 and discovered that I can't use gzip with subprocess the way I might hope. This illustrates the problem:
May 17 18:05:36> python
Python 2.6.4 (r264:75706, Mar 10 2010, 14:41:19)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gzip
>>> import subprocess
>>> fh = gzip.open("tmp","wb")
>>> subprocess.Popen("echo HI", shell=True, stdout=fh).wait()
0
>>> fh.close()
>>>
[2]+ Stopped python
May 17 18:17:49> file tmp
tmp: data
May 17 18:17:53> less tmp
"tmp" may be a binary file. See it anyway?
May 17 18:17:58> zcat tmp
zcat: tmp: not in gzip format
Here's what it looks like inside less
HI
^_<8B>^H^Hh<C0><F1>K^B<FF>tmp^#^C^#^#^#^#^#^#^#^#^#
which looks like it wrote the stdout as plain text and then appended an empty gzip file. Indeed, if I remove the "HI\n", then I get this:
May 17 18:22:34> file tmp
tmp: gzip compressed data, was "tmp", last modified: Mon May 17 18:17:12 2010, max compression
What is going on here?
UPDATE:
This earlier question is asking the same thing: Can I use an opened gzip file with Popen in Python?
You can't use arbitrary file-like objects with subprocess, only real files. The fileno() method of GzipFile returns the file descriptor of the underlying file, so that is what echo's output gets redirected to, uncompressed. When the GzipFile is later closed, it flushes out its own buffered data, an empty gzip stream, after the echo output.
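A hedged sketch of the fix that follows from this: have the child write to a pipe and let Python push the bytes through GzipFile.write() itself:
import gzip
import subprocess

fh = gzip.open("tmp.gz", "wb")
p = subprocess.Popen("echo HI", shell=True, stdout=subprocess.PIPE)
fh.write(p.stdout.read())   # GzipFile.write() does the compression
p.wait()
fh.close()
For large outputs you would read and write in chunks instead of slurping the whole thing at once.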
just pipe that sucker
from subprocess import Popen,PIPE
GZ = Popen("gzip > outfile.gz",stdin=PIPE,shell=True)
P = Popen("echo HI",stdout=GZ.stdin,shell=True)
# these next three must be in order
P.wait()
GZ.stdin.close()
GZ.wait()
I'm not totally sure why the redirect doesn't work (perhaps the output redirection bypasses Python's write(), which is what gzip hooks into?), but this works:
>>> fh.write(subprocess.Popen("echo Hi", shell=True, stdout=subprocess.PIPE).stdout.read())
You don't need to use subprocess to write to the gzip.GzipFile. Instead, write to it like any other file-like object. The result is automagically gzipped!
import gzip

# note: GzipFile supports the with statement only from Python 2.7 onwards
with gzip.open("tmp.gz", "wb") as fh:
    fh.write('echo HI')

ValueError: insecure string pickle

When I am trying to load something I dumped using cPickle, I get the error message:
ValueError: insecure string pickle
Both the dumping and the loading are done on the same computer, and thus the same OS: Ubuntu 8.04.
How could I solve this problem?
"are much more likely than a never-observed bug in Python itself in a functionality that's used billions of times a day all over the world": it always amazes me how cross people get in these forums.
One easy way to get this problem is by forgetting to close the stream that you're using for dumping the data structure. I just did
>>> out = open('xxx.dmp', 'w')
>>> cPickle.dump(d, out)
>>> k = cPickle.load(open('xxx.dmp', 'r'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: insecure string pickle
Which is why I came here in the first place, because I couldn't see what I'd done wrong.
And then I actually thought about it, rather than just coming here, and realized that I should have done:
>>> out = open('xxx.dmp', 'w')
>>> cPickle.dump(d, out)
>>> out.close() # close it to make sure it's all been written
>>> k = cPickle.load(open('xxx.dmp', 'r'))
Easy to forget. Didn't need people being told that they are idiots.
I got this error in Python 2.7 because of the open mode 'rb':
with open(path_to_file, 'rb') as pickle_file:
    obj = pickle.load(pickle_file)
So, for Python 2 the mode should be 'r'.
Also, I was surprised that Python 3 wouldn't load the Python 2 pickle at all; when you try to load a pickle file created in Python 2 you'll get:
pickle.UnpicklingError: the STRING opcode argument must be quoted
Check this thread. Peter Otten says:
A corrupted pickle. The error is raised if a string in the dump does not both start and end with " or '.
and shows a simple way to reproduce such "corruption". Steve Holden, in the follow-up post, suggests that another way to cause the problem would be to mismatch 'rb' and 'wb' (but in Python 2 and on Linux that particular mistake should pass unnoticed).
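For example, a minimal sketch of such a "corruption" (Python 2, default protocol-0 text pickle):
import cPickle

good = cPickle.dumps("hello")      # protocol 0, roughly "S'hello'\np1\n."
bad = good.replace("'", "", 1)     # strip the opening quote from the STRING argument
cPickle.loads(bad)                 # raises ValueError: insecure string pickle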
What are you doing with the data between dump() and load()? It's quite a common error to store pickled data in a file opened in text mode (on Windows), or in database storage in a way that doesn't handle binary data properly (VARCHAR or TEXT columns in some databases, some key-value stores). Try comparing the pickled data you pass to storage with what you immediately retrieve from it.
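A minimal sketch of that comparison, with a plain file standing in for whatever storage layer you actually use:
import cPickle, hashlib

data = {"rows": ["a\r\n", "b"], "n": 2}
blob = cPickle.dumps(data, cPickle.HIGHEST_PROTOCOL)   # binary pickle

with open("store.bin", "wb") as f:                     # stand-in for your real storage
    f.write(blob)
with open("store.bin", "rb") as f:
    stored = f.read()

# If the two digests differ, the storage layer is mangling the pickled bytes.
print hashlib.md5(blob).hexdigest()
print hashlib.md5(stored).hexdigest()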
If anyone has this error using youtube-dl, this issue has the fix: https://github.com/rg3/youtube-dl/issues/7172#issuecomment-242961695
richiecannizzo commented on Aug 28
brew install libav
should fix it instantly on Mac, or
sudo apt-get install libav
# on Linux
This error may also occur with Python 2 (and early versions of Python 3) if your pickle is large (Python Issue #11564):
Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import cPickle as pickle
>>> string = "X"*(2**31)
>>> pp = pickle.dumps(string)
>>> len(pp)
2147483656
>>> ss = pickle.loads(pp)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: insecure string pickle
This limitation was addressed with the introduction of pickle protocol 4 in Python 3.4 (PEP 3154). Unfortunately, this feature has not been back-ported to Python 2, and probably never will be. If this is your problem and you need to use Python 2's pickle, the best you can do is reduce the size of your pickle, e.g., instead of pickling a list, pickle the elements individually into a list of pickles.
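One way to read that suggestion, as a hedged sketch: dump each element as its own small pickle, back to back in one file, so no single pickled string comes near the 2 GiB limit (the item data here is just a placeholder):
import cPickle as pickle

items = [("row-%d" % i, i * 1.5) for i in range(1000)]   # placeholder payload

# Dump a count, then each element as its own pickle.
with open('big.pkl', 'wb') as f:
    pickle.dump(len(items), f, pickle.HIGHEST_PROTOCOL)
    for item in items:
        pickle.dump(item, f, pickle.HIGHEST_PROTOCOL)

# Load them back in the same order.
with open('big.pkl', 'rb') as f:
    count = pickle.load(f)
    restored = [pickle.load(f) for _ in range(count)]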
I had the same problem with a file that was made with Python on Windows and reloaded with Python on Linux.
Solution: run dos2unix on the file before reading it on Linux; it works like a charm!
I got the Python ValueError: insecure string pickle message in a different way.
For me it happened after base64-encoding a binary file and passing it through urllib2.
Initially I was wrapping up a file like this:
import os, base64, hashlib, urllib2
import cPickle

with open(path_to_binary_file, 'rb') as data_file:
    contents = data_file.read()
filename = os.path.split(path_to_binary_file)[1]
url = 'http://0.0.0.0:8080/upload'
message = {"filename": filename, "contents": contents}
pickled_message = cPickle.dumps(message)
base64_message = base64.b64encode(pickled_message)
the_hash = hashlib.md5(base64_message).hexdigest()
server_response = urllib2.urlopen(url, base64_message)
But on the server the hash kept coming out differently for some binary files:
decoded_message = base64.b64decode(incoming_base64_message)
the_hash = hashlib.md5(decoded_message).hexdigest()
And unpickling gave the insecure string pickle message:
cPickle.loads(decoded_message)
BUT SUCCESS
What worked for me was to use urlsafe_b64encode(), presumably because standard Base64's '+' and '/' characters can get mangled in transit (for example, '+' becomes a space when the POST body is handled as form data), while the URL-safe alphabet uses '-' and '_' instead:
base64_message = base64.urlsafe_b64encode(cPickle.dumps(message))
And decode with:
base64_decoded_message = base64.urlsafe_b64decode(base64_message)
References
http://docs.python.org/2/library/base64.html
https://www.rfc-editor.org/rfc/rfc3548.html#section-3
This is what happened to me; it may only affect a small slice of people, but I want to put it out here for them:
The Python 3 interpreter would have given you an error saying it required the input file stream as bytes, not as a string, so you changed the open mode argument from 'r' to 'rb'; now it is telling you the string is corrupt, and that's why you have come here.
The simplest option for such cases is to install Python 2 (2.7 will do) and run your program in a Python 2.7 environment so it unpickles your file without issue. Basically, I wasted a lot of time scanning my string to see if it was indeed corrupt, when all I had to do was change the file's open mode from 'rb' back to 'r' and then use Python 2 to unpickle it. So I'm just putting this information out there.
I ran into this earlier, found this thread, and assumed that I was immune to the file closing issue mentioned in a couple of these answers since I was using a with statement:
with tempfile.NamedTemporaryFile(mode='wb') as temp_file:
    pickle.dump(foo, temp_file)
    # Push file to another machine
    _send_file(temp_file.name)
However, since I was pushing the temp file from inside the with, the file still wasn't closed, so the file I was pushing was truncated. This resulted in the same insecure string pickle error in the script that read the file on the remote machine.
Two potential fixes to this: Keep the file open and force a flush:
with tempfile.NamedTemporaryFile(mode='wb') as temp_file:
    pickle.dump(foo, temp_file)
    temp_file.flush()
    # Push file to another machine
    _send_file(temp_file.name)
Or make sure the file is closed before doing anything with it:
file_name = ''
with tempfile.NamedTemporaryFile(mode='wb', delete=False) as temp_file:
    file_name = temp_file.name
    pickle.dump(foo, temp_file)
# Push file to another machine
_send_file(file_name)
