UnicodeDecodeError: invalid start byte - python

I have a quick question about UnicodeDecodeError: invalid start byte.
I think my text contains a non-UTF-8 character somewhere, but the location in the error message is the point where the file is first read, so I have no idea how to fix it.
If you have any suggestions, please let me know.
The following is the error message returned by Python:
for line in fi:
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/codecs.py", line 313, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3131: invalid start byte
Following is my code:
for filename in os.listdir(readDir):
    filename = os.path.join(readDir, filename)
    for keyword in keywords:
        outFileName = os.path.join(sortDir, keyword)
        outFileName = outFileName + '.csv'
        with open(filename, 'r') as fi, open(outFileName, "a") as fo:
            for line in fi:

I had the same issue, and after searching for a while this is what I did:
import sys
# Set the default encoding (Python 2 only; setdefaultencoding is hidden unless sys is reloaded)
reload(sys)
sys.setdefaultencoding("ISO-8859-1")
# Then convert the string to UTF-8
yourString.encode('utf-8').strip()
I hope it will be useful to someone.
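On Python 3 (which the traceback above is from), sys.setdefaultencoding no longer exists. A sketch of an alternative for the original loop, assuming the stray bytes are Latin-1 (or can simply be replaced), is to pass an encoding or an error handler to open(); filename and outFileName are the variables from the question's code:
# Assumption: the input files are Latin-1 / Windows-1252 rather than UTF-8
with open(filename, 'r', encoding='latin-1') as fi, open(outFileName, 'a') as fo:
    for line in fi:
        ...  # process line as before

# Or keep UTF-8 but substitute the occasional undecodable byte instead of failing
with open(filename, 'r', errors='replace') as fi, open(outFileName, 'a') as fo:
    for line in fi:
        ...  # process line as before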

Related

Error when retrieving saved object using pickle

I am working with the MESA agent-based modelling package and using pickle to save the state of my intermediate model. But when retrieving the saved model, execution ends with this error:
File "/home/demonwolf/PycharmProjects/pythonProject1/main.py", line 281, in <module>
empty_model = pickle.load(f)
File "/home/demonwolf/anaconda3/envs/ABM/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
Any help would be appreciated.
Thanks in advance.
The file (the f parameter in pickle.load(f)) should be opened in binary read (rb) mode, not the default text (r) mode.
with open("path/to/your/pickle.bin", "rb") as f:
    empty_model = pickle.load(f)
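For completeness, a small sketch of the matching save step; pickle data is binary, so the file has to be written in "wb" mode as well (the path and the model object here are just placeholders):
import pickle

model_state = {"step": 0}  # placeholder for the MESA model object

# Write the pickle in binary mode so it can later be read back with "rb"
with open("path/to/your/pickle.bin", "wb") as f:
    pickle.dump(model_state, f)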

Want to upload a sqlite.db file to a swift container using python swiftclient and always get a utf-8 error

I am trying to upload a sqlite.db (binary) file to a Swift container using swiftclient in my Python code.
import swiftclient
swift_conn.put_object
File "/usr/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbc in position 43: invalid start byte
The code I am using is:
import swiftclient
bmdatabase = "./logs/test.db"
with open(bmdatabase, 'r') as bmdatabase_file:
    # remote
    correctbmdatabasename = bmdatabase.replace("./logs/", "")
    swift_conn.put_object(container_name, correctbmdatabasename,
                          contents=bmdatabase_file.read())
I finally found it myself: to read a binary file, I have to open it with 'rb', like this:
import swiftclient
bmdatabase = "./logs/test.db"
with open(bmdatabase, 'rb') as bmdatabase_file:
    # remote
    correctbmdatabasename = bmdatabase.replace("./logs/", "")
    swift_conn.put_object(container_name, correctbmdatabasename,
                          contents=bmdatabase_file.read())

how to set proper encoding for json.load

I have been trying to load json this way:
data = json.load(f)
For some reason that JSON has windows-1251 encoding, so trying to open it causes this error:
File "./labelme2voc.py", line 252, in main
data = json.load(f)
File "/home/dex/anaconda3/lib/python3.6/json/__init__.py", line 296, in load
return loads(fp.read(),
File "/home/dex/anaconda3/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe1 in position 81: invalid continuation byte
How can I fix that? json.load doesn't have an option to specify the encoding.
Try this:
import json
filename = ... # specify filename here
with open(filename, encoding='cp1251') as f:
    data = json.loads(f.read())
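Since open() already handles the decoding, the file object can also be passed straight to json.load, which is equivalent:
import json

# open() decodes the cp1251 bytes; json.load reads from the resulting text file object
with open(filename, encoding='cp1251') as f:
    data = json.load(f)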

UnicodeDecodeError, utf-8 invalid continuation byte

I'm trying to extract lines from a log file, using this code:
with open('fichier.01') as f:
    content = f.readlines()
print(content)
but it always produces this error:
Traceback (most recent call last):
File "./parsepy", line 4, in <module>
content = f.readlines()
File "/usr/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2213: invalid continuation byte
How can I fix it?
Try one of the following:
open('fichier.01', 'rb')
open('fichier.01', encoding='utf-8')
open('fichier.01', encoding='ISO-8859-1')
Or you can also use the io module:
import io
io.open('fichier.01')
This is a common error when opening files in Python (or any language, really), and one you will soon learn to catch.
If the file is not encoded as text, you will have to open it in binary mode, e.g.:
with open('fichier.01', 'rb') as f:
    content = f.readlines()
If it's encoded as something other than UTF-8 and it can be opened in text mode, then open takes an encoding argument: https://docs.python.org/3.5/library/functions.html#open
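For example, a minimal sketch assuming the log is actually Latin-1 encoded (0xE9 is 'é' in ISO-8859-1):
# Assumption: the log file is ISO-8859-1 / Latin-1 rather than UTF-8
with open('fichier.01', encoding='ISO-8859-1') as f:
    content = f.readlines()
print(content)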
Try this to solve it:
with open('fichier.01', errors='ignore') as f:
    ###

UnicodeEncodeError when reading a file

I am trying to read from the rockyou wordlist and write all words that are >= 8 characters to a new file.
Here is the code:
def main():
    with open("rockyou.txt", encoding="utf8") as in_file, open('rockout.txt', 'w') as out_file:
        for line in in_file:
            if len(line.rstrip()) < 8:
                continue
            print(line, file=out_file, end='')
    print("done")

if __name__ == '__main__':
    main()
Some words are not utf-8.
Traceback (most recent call last):
File "wpa_rock.py", line 10, in <module>
main()
File "wpa_rock.py", line 6, in main
print(line, file = out_file, end = '')
File "C:\Python\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u0e45' in position 0: character maps to <undefined>
Update
def main():
    with open("rockyou.txt", encoding="utf8") as in_file, open('rockout.txt', 'w', encoding="utf8") as out_file:
        for line in in_file:
            if len(line.rstrip()) < 8:
                continue
            out_file.write(line)
    print("done")

if __name__ == '__main__':
    main()
Traceback (most recent call last):
File "wpa_rock.py", line 10, in <module>
main()
File "wpa_rock.py", line 3, in main
for line in in_file:
File "C:\Python\lib\codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 933: invalid continuation byte
Your UnicodeEncodeError: 'charmap' error occurs while writing to out_file (in print()).
By default, open() uses locale.getpreferredencoding(), which is the ANSI codepage on Windows (such as cp1252). cp1252 is a one-byte encoding that can represent at most 256 different characters, but there are 1,114,112 possible Unicode code points, so it cannot represent them all, and '\u0e45' in particular.
Pass an encoding that can represent all of the desired data, e.g. encoding='utf-8' should work (as @robyschek suggested): if your code reads utf-8 data without any errors, it should be able to write the same data as utf-8 too.
Your UnicodeDecodeError: 'utf-8' error occurs while reading in_file (for line in in_file). Not all byte sequences are valid utf-8, e.g. os.urandom(100).decode('utf-8') may fail. What to do depends on the application.
If you expect the file to be mostly utf-8, you could pass the errors="ignore" parameter to open() to skip the occasional invalid byte sequence, or use some other error handler, depending on your application.
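For instance, a sketch of the updated loop with an error handler, assuming it is acceptable to drop the occasional undecodable byte from rockyou.txt:
def main():
    # errors="ignore" silently drops byte sequences that are not valid UTF-8
    with open("rockyou.txt", encoding="utf8", errors="ignore") as in_file, \
         open("rockout.txt", "w", encoding="utf8") as out_file:
        for line in in_file:
            if len(line.rstrip()) < 8:
                continue
            out_file.write(line)
    print("done")

if __name__ == '__main__':
    main()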
If the actual character encoding used in the file is different, then you should pass that encoding instead. Bytes by themselves do not have an encoding; that metadata has to come from another source (though some encodings are more likely than others: chardet can guess). For example, if the file content is an HTTP body, see A good way to get the charset/encoding of an HTTP response in Python.
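If you do not know the encoding, here is a small sketch using chardet (a third-party package, pip install chardet) to guess it before reopening the file:
import chardet

# Guess the encoding from a sample of raw bytes; chardet.detect() returns a
# dict with 'encoding' and 'confidence' keys.
with open("rockyou.txt", "rb") as f:
    guess = chardet.detect(f.read(100000))
print(guess)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}

with open("rockyou.txt", encoding=guess["encoding"], errors="replace") as f:
    first_line = f.readline()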
Sometimes broken software generates mostly utf-8 byte sequences with a few bytes in a different encoding. bs4.BeautifulSoup can handle some of these special cases. You could also try the ftfy utility/library and see if it helps in your case, e.g. ftfy may fix some utf-8 variations.
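As a one-line illustration of the kind of thing ftfy can repair (fix_text is its main entry point):
import ftfy

# "schÃ¶n" is "schön" that was encoded as UTF-8 and then wrongly decoded as Latin-1;
# ftfy detects and undoes this kind of mojibake.
print(ftfy.fix_text("schÃ¶n"))  # -> "schön"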
Hey, I was having a similar issue. In the case of the rockyou.txt wordlist, I tried a number of the encodings Python has to offer and found that encoding='koi8_u' worked to read the file.
