I'm trying to use Python's CSV Sniffer, as suggested in many StackOverflow answers, to guess whether a given CSV file is delimited by ; or ,.
It works fine with basic files, but when a value contains the delimiter it is surrounded by double quotes (as the standard requires), and the sniffer throws _csv.Error: Could not determine delimiter.
Has anyone experienced that before?
Here is a minimal failing CSV file:
column1,column2
0,"a, b"
And the proof of concept:
Python 3.5.1 (default, Dec 7 2015, 12:58:09)
[GCC 5.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> f = open("example.csv", "r")
>>> f.seek(0);
0
>>> csv.Sniffer().sniff(f.read(), delimiters=';,')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/csv.py", line 186, in sniff
raise Error("Could not determine delimiter")
_csv.Error: Could not determine delimiter
I have total control over the generation of the input CSV file, but sometimes it is modified by a third party using MS Office and the comma delimiter is replaced by semicolons, so I have to use this guessing approach.
I know I could stop using commas in the input file, but I would like to know if I'm doing something wrong first.
You are giving the sniffer too much input. Your sample file does work if you run:
csv.Sniffer().sniff(f.readline())
which uses only the header row to determine the delimiter character. If you want to understand why the Sniffer heuristics fail for more data, there is no substitute for reading the csv.py library source code.
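For reference, here is a minimal sketch of that approach against the sample file from the question: sniff only the header line, rewind, and then parse the whole file with the detected dialect.

import csv

# Sketch: detect the delimiter from the header line only, then rewind
# and read the whole file with the sniffed dialect.
with open("example.csv", "r", newline="") as f:
    dialect = csv.Sniffer().sniff(f.readline(), delimiters=";,")
    f.seek(0)
    for row in csv.reader(f, dialect):
        print(row)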
When I am trying to load something I dumped using cPickle, I get the error message:
ValueError: insecure string pickle
Both the dumping and loading work are done on the same computer, thus same OS: Ubuntu 8.04.
How could I solve this problem?
"are much more likely than a never-observed bug in Python itself in a functionality that's used billions of times a day all over the world": it always amazes me how cross people get in these forums.
One easy way to get this problem is by forgetting to close the stream that you're using for dumping the data structure. I just did
>>> out = open('xxx.dmp', 'w')
>>> cPickle.dump(d, out)
>>> k = cPickle.load(open('xxx.dmp', 'r'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: insecure string pickle
Which is why I came here in the first place, because I couldn't see what I'd done wrong.
And then I actually thought about it, rather than just coming here, and realized that I should have done:
>>> out = open('xxx.dmp', 'w')
>>> cPickle.dump(d, out)
>>> out.close() # close it to make sure it's all been written
>>> k = cPickle.load(open('xxx.dmp', 'r'))
Easy to forget. Didn't need people being told that they are idiots.
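For what it's worth, a minimal sketch of the same round trip with the closing handled by with blocks, so the dump is fully flushed to disk before it is loaded again (d is whatever object you're dumping, as in the snippet above; binary mode is used since pickle data is binary):

import cPickle

with open('xxx.dmp', 'wb') as out:
    cPickle.dump(d, out)        # file is closed (and flushed) on leaving the block

with open('xxx.dmp', 'rb') as inp:
    k = cPickle.load(inp)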
I got this error in Python 2.7 because of the open mode 'rb':
with open(path_to_file, 'rb') as pickle_file:
    obj = pickle.load(pickle_file)
So, in Python 2 the mode should be 'r'.
Also, I was surprised to find that Python 3 doesn't support the Python 2 pickle format this way; if you try to load a pickle file created in Python 2 you'll get:
pickle.UnpicklingError: the STRING opcode argument must be quoted
Check this thread. Peter Otten says:
A corrupted pickle. The error is raised if a string in the dump does not both start and end with " or '.
and shows a simple way to reproduce such "corruption". Steve Holden, in the follow-up post, suggests another way to cause the problem would be to mismatch 'rb' and 'wb' (but in Python 2 and on Linux that particular mistake should pass unnoticed).
What are you doing with the data between dump() and load()? It's a quite common error to store pickled data in a file opened in text mode (on Windows) or in database storage in a way that doesn't handle binary data properly (VARCHAR or TEXT columns in some databases, some key-value stores). Try comparing the pickled data that you pass to storage with what you immediately retrieve from it.
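For instance, a hypothetical sketch of that comparison (obj, key, save_blob and load_blob are placeholders for your own data and storage layer, not real APIs):

import cPickle as pickle

original = pickle.dumps(obj)       # the exact bytes you hand to storage
save_blob(key, original)           # e.g. an INSERT into a BLOB column
roundtripped = load_blob(key)      # read it straight back out

if roundtripped != original:
    print("storage mangled the pickle: %d vs %d bytes"
          % (len(roundtripped), len(original)))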
If anyone has this error using youtube-dl, this issue has the fix: https://github.com/rg3/youtube-dl/issues/7172#issuecomment-242961695
richiecannizzo commented on Aug 28
brew install libav
Should fix it instantly on mac or
sudo apt-get install libav
#on linux
This error may also occur with Python 2 (and early versions of Python 3) if your pickle is large (Python Issue #11564):
Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import cPickle as pickle
>>> string = "X"*(2**31)
>>> pp = pickle.dumps(string)
>>> len(pp)
2147483656
>>> ss = pickle.loads(pp)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: insecure string pickle
This limitation was addressed with the introduction of pickle protocol 4 in Python 3.4 (PEP 3154). Unfortunately, this feature has not been back-ported to Python 2, and probably never will be. If this is your problem and you need to use Python 2 pickle, the best you can do is reduce the size of your pickle, e.g., instead of pickling a list, pickle the elements individually into a list of pickles.
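A rough sketch of that workaround for Python 2, where big_list stands in for whatever oversized object you were pickling: pickle each element on its own and keep the resulting list of smaller pickle strings, then restore element by element.

import cPickle as pickle

pickled_items = [pickle.dumps(item, protocol=2) for item in big_list]  # no single huge string
restored = [pickle.loads(p) for p in pickled_items]                    # rebuild the original list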
I had the same problem with a file that was made with Python on Windows and reloaded with Python on Linux.
Solution: run dos2unix on the file before reading it on Linux; works like a charm!
I got the Python ValueError: insecure string pickle message in a different way.
For me it happened after base64-encoding a binary file and passing it through urllib2 sockets.
Initially I was wrapping up a file like this
with open(path_to_binary_file) as data_file:
    contents = data_file.read()

filename = os.path.split(path)[1]
url = 'http://0.0.0.0:8080/upload'
message = {"filename" : filename, "contents": contents}
pickled_message = cPickle.dumps(message)
base64_message = base64.b64encode(pickled_message)
the_hash = hashlib.md5(base64_message).hexdigest()
server_response = urllib2.urlopen(url, base64_message)
But on the server the hash kept coming out differently for some binary files
decoded_message = base64.b64decode(incoming_base64_message)
the_hash = hashlib.md5(decoded_message).hexdigest()
And unpickling gave the insecure string pickle message:
cPickle.loads(decoded_message)
BUT SUCCESS
What worked for me was to use urlsafe_b64encode()
base64_message = base64.urlsafe_b64encode(cPickle.dumps(message))
And decode with
base64_decoded_message = base64.urlsafe_b64decode(base64_message)
References
http://docs.python.org/2/library/base64.html
https://www.rfc-editor.org/rfc/rfc3548.html#section-3
This is what happened to me; it might apply to only a small section of the population, but I want to put it out here nevertheless, for them:
The Python 3 interpreter would have given you an error saying it required the input file stream to be bytes, not a string, so you may have changed the open mode argument from 'r' to 'rb', and now it is telling you the string is corrupt, and that's why you have come here.
The simplest option in such cases is to install Python 2 (you can install 2.7) and then run your program in a Python 2.7 environment, so it unpickles your file without issue. I basically wasted a lot of time scanning my string to see whether it was indeed corrupt, when all I had to do was change the file open mode from 'rb' back to 'r' and then use Python 2 to unpickle the file. So I'm just putting this information out there.
I ran into this earlier, found this thread, and assumed that I was immune to the file closing issue mentioned in a couple of these answers since I was using a with statement:
with tempfile.NamedTemporaryFile(mode='wb') as temp_file:
    pickle.dump(foo, temp_file)
    # Push file to another machine
    _send_file(temp_file.name)
However, since I was pushing the temp file from inside the with, the file still wasn't closed, so the file I was pushing was truncated. This resulted in the same insecure string pickle error in the script that read the file on the remote machine.
Two potential fixes to this: Keep the file open and force a flush:
with tempfile.NamedTemporaryFile(mode='wb') as temp_file:
    pickle.dump(foo, temp_file)
    temp_file.flush()
    # Push file to another machine
    _send_file(temp_file.name)
Or make sure the file is closed before doing anything with it:
file_name = ''
with tempfile.NamedTemporaryFile(mode='wb', delete=False) as temp_file:
    file_name = temp_file.name
    pickle.dump(foo, temp_file)
# Push file to another machine
_send_file(file_name)
The below code shows that three files which are on a UNC share hosted on another machine have the same hash. It also shows that local files have different hashes. Why would this be? I feel that there is some UNC consideration that I don't know about.
Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> fn_a = '\\\\some.host.com\\Shares\\folder1\\file_a'
>>> fn_b = '\\\\some.host.com\\Shares\\folder1\\file_b'
>>> fn_c = '\\\\some.host.com\\Shares\\folder2\\file_c'
>>> fn_d = 'E:\\file_d'
>>> fn_e = 'E:\\file_e'
>>> fn_f = 'E:\\folder3\\file_f'
>>> f_a = open(fn_a, 'r')
>>> f_b = open(fn_b, 'r')
>>> f_c = open(fn_c, 'r')
>>> f_d = open(fn_d, 'r')
>>> f_e = open(fn_e, 'r')
>>> f_f = open(fn_f, 'r')
>>> hashlib.md5(f_a.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_b.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_c.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_d.read()).hexdigest()
'd2bf541b1a9d2fc1a985f65590476856'
>>> hashlib.md5(f_e.read()).hexdigest()
'e84be3c598a098f1af9f2a9d6f806ed5'
>>> hashlib.md5(f_f.read()).hexdigest()
'e11f04ed3534cc4784df3875defa0236'
EDIT: To further investigate the problem, I also tested using a file from another host. It appears that changing the host will change the result.
>>> fn_h = '\\\\host\\share\\file'
>>> f_h = open(fn_h, 'r')
>>> hashlib.md5(f_h.read()).hexdigest()
'f23ee2dbbb0040bf2586cfab29a03634'
...but then I tried a different file on the new host, and got a new result!
>>> fn_i = '\\\\host\\share\\different_file'
>>> f_i = open(fn_i, 'r')
>>> hashlib.md5(f_i.read()).hexdigest()
'a8ad771db7af8c96f635bcda8fdce961'
So, now I'm really confused. Could it have something to do with the fact that the original host is a \\host.com format and the new host is a \\host format?
I did some additional research based on the comments and answers everyone provided. I decided I needed to study permutations of these two features of the code:
Whether a raw string literal is used for the path name, i.e.:
A. The file path string is raw, with single backslashes in the path, vs.
B. The file path string is not raw, with doubled backslashes in the path.
(FYI to those who don't know, a raw string is one which is preceded by an "r", like this: r'This is a raw string')
Whether the open function mode is 'r' or 'rb'.
(FYI again to those who don't know, the b in 'rb' mode means the file is read as binary.)
The results demonstrated:
The raw-string / backslash choice makes no difference to whether the hashes of different files differ.
My error was not opening the files in binary mode. When using 'rb' mode in open, I got different hashes for the different files.
Yay! And thanks for the help.
Use seek(0) if you intend to read the same file object again; otherwise the file has already been read to the end, and calling read() again just returns an empty string.
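A small sketch of that point, reusing fn_a from the question: after read() the position is at the end of the file, so a second read() returns an empty string until you seek(0) back to the start.

f_a = open(fn_a, 'rb')
first = f_a.read()
print(len(f_a.read()))      # 0 -- nothing left to read
f_a.seek(0)
print(f_a.read() == first)  # True after rewinding
f_a.close()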
I don't reproduce your problem. I'm using Python 3.4 on Windows 7 here with the following test script which accesses files on a network hard disk:
import sys, hashlib

def main():
    fn0 = r'\\NAS\Public\Software\Backup\Test\Vagrantfile'
    fn1 = r'\\NAS\Public\Software\Backup\Test\z.xml'
    with open(fn0, 'rb') as f:
        h0 = hashlib.md5(f.read())
    print(h0.hexdigest())
    with open(fn1, 'rb') as f:
        h1 = hashlib.md5(f.read())
    print(h1.hexdigest())

if __name__ == '__main__':
    sys.exit(main())
Running this results in two different hash values (as expected):
c:\src\python>python hashtest.py
8af202dffb88739c2dbe188c12291e3d
2ff3db61ff37ca5ceac6a59fd7c1018b
If reading the file contents returns different data for the remote files then passing that data into md5 has to result in different hash values. You might want to print out the first 80 bytes of each file as a check that you are getting what you expect.
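For example, a quick version of that check against the question's UNC paths (assuming the same fn_a, fn_b, fn_c variables): read each file in binary mode and print the first 80 bytes to see whether the remote reads really return identical data.

for fn in (fn_a, fn_b, fn_c):
    with open(fn, 'rb') as f:
        print(fn, repr(f.read(80)))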
I'm trying to solve the extra credit questions for exercise 15 of Zed Shaw's Learn Python the Hard Way, but I've run into a problem. The code is as follows:
from sys import argv
script, filename = argv
txt = open(filename)
print "Here's your file %r:" % filename
print txt.read()
print "I'll also ask you to type it again:"
file_again = raw_input("> ")
txt_again = open(file_again)
print txt_again.read()
print txt_again.read()
I understand all the code that has been used, but extra credit question 7 asks:
Startup python again and use open from the prompt. Notice how you can open files and run read on them right there?
I've tried inputting everything I could think of in the terminal (on a Mac) after first starting up Python with the python command, but I can't get the code to run. What should I be doing to get this piece of code to run from the prompt?
Zed doesn't say to run this particular piece of code from within Python. Obviously, that code is getting the filename value from the parameters you used to invoke the script, and if you're just starting up the Python shell, you haven't used any parameters.
If you did:
filename = 'myfilename.txt'
txt = open(filename)
then it would work.
I just started with open(xyz.txt)
Well, yes, of course that isn't going to work, because you don't have a variable xyz, and even if you did, it wouldn't have an attribute txt. Since it's a file name, you want a string "xyz.txt", which you create by putting it in quotes: 'xyz.txt'. Notice that Python treats single and double quotes more or less the same; unlike in languages like C++ and Java, there is not a separate data type for individual characters - they're just length-1 strings.
Basically, just like in this transcript (I've added blank lines to aid readability):
pax:~$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> xyz = open ("minimal_main.c")
>>> print xyz.read()
int main (void) {
return 0;
}
>>> xyz.close()
>>> <CTRL-D>
pax:~$ _
All it's showing you is that you don't need a script in order to run Python commands, the command line interface can be used in much the same way.
print open('ex15_sample.txt').read()
After running python in the terminal, we use open('filename.txt') to open the file, and with the dot operator we can call read() on it directly.
After running Python in terminal,
abc = open ("ex15_sample.txt")
print abc.read()
That should do.
Not sure how to explain this; any help will be appreciated!
Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2, pynotify, tempfile, os
>>> opener = urllib2.build_opener()
>>> page = opener.open('http://img.youtube.com/vi/RLGJU_xUVTs/1.jpg')
>>> thumb = page.read()
>>> temp = tempfile.NamedTemporaryFile(suffix='.jpg')
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
0
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
4096
thanks!
If you open the thumb file, you'll see that there's one whole copy and a partial copy of the data that you are writing in it.
Flush the file instead of writing a second time:
temp.flush()
The data wasn't written the first time because the contents aren't large enough to fill the buffer. The second write overfills the buffer, so a buffer's worth of data gets written to disk.
As Cameron points out in his answer, the buffer is automatically flushed when you close the file. If you want to keep it open for some reason (and the fact that this is an issue for you seems to indicate that you do), then you can call flush and the data will be written right away.
You haven't called flush() or close() on the file object before checking its size on disk -- there is an internal buffer that is automatically flushed only after a certain amount of data is written (this avoids too many expensive trips to the disk when doing many writes).
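Putting that together with the question's transcript (Python 2), a minimal sketch of the fix is simply to flush (or close) before checking the size on disk:

import urllib2, tempfile, os

thumb = urllib2.urlopen('http://img.youtube.com/vi/RLGJU_xUVTs/1.jpg').read()
temp = tempfile.NamedTemporaryFile(suffix='.jpg')
temp.write(thumb)
temp.flush()                        # push the buffered bytes out to disk
print os.path.getsize(temp.name)    # now reports the real size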