Trouble reading MARC data using MARCReader() and pymarc - python

So I am trying to teach myself Python and pymarc for a school project I am working on. I have a sample MARC file and I am trying to read it using this simple code:
from pymarc import *
reader = MARCReader(open('dump.mrc', 'rb'), to_unicode=True)
for record in reader:
    print(record)
The for loop is to just print out each record to make sure I am getting the correct data. The only thing is I am getting this error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
I've looked online but could not find an answer to my problem. What does this error mean and how can I go about fixing it? Thanks in advance.

You can set the Python environment's default encoding to UTF-8 and read each record as a dictionary.
Try:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
from pymarc import *
reader = MARCReader(open('dump.mrc', 'rb'), to_unicode=True, force_utf8=True)
for record in reader:
    print record.as_dict()
Note:
If you still get the unicode exception, you can set to_unicode=False and skip force_utf8=True.
Also, please check whether your dump.mrc file is actually encoded as UTF-8. Try:
$ chardetect dump.mrc
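If you'd rather check from Python instead of the command line, the chardet library's detect() function gives a rough guess; this is a minimal sketch, assuming chardet is installed and that reading the whole file into memory is acceptable:
import chardet

# Read the raw bytes and let chardet guess the encoding
with open('dump.mrc', 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)
print(guess)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}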

Related

Python Syntax Non-ASCII character '\xe6' in file (added # -*- coding: utf-8 -*-)

I want to use Python to read a .csv file.
At first I searched for an answer and added
#!/usr/bin/python
#-*-coding:utf-8 -*-
so that I could avoid the encoding problem, but it is still wrong, giving this syntax error:
SyntaxError: Non-ASCII character '\xe6' in file csv1.py on line 2, but no encoding declared:
My code:
#!/usr/bin/python
# -*-coding:utf-8 -*-
import csv
with open('wtr1.csv', 'rb') as f:
    for row in csv.reader(f):
        print row
You've got two different errors here. This answer relates to the with statement warning; the other error is the ASCII encoding error.
You appear to be using a very old version of Python (2.5). The with statement is not enabled by default in Python 2.5. Instead, you have to declare at the top of the file that you wish to use it. Your file should now look like:
#!/usr/bin/python
# -*-coding:utf-8 -*-
from __future__ import with_statement
import csv
with open('wtr1.csv', 'rb') as f:
    for row in csv.reader(f):
        print row
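Once the with statement issue is fixed, keep in mind that Python 2's csv module works on byte strings, not unicode. A minimal sketch of one common workaround, assuming the file really is UTF-8 encoded, is to decode each cell after the row is read:
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import with_statement
import csv

with open('wtr1.csv', 'rb') as f:
    for row in csv.reader(f):
        # csv hands back byte strings; decode each cell to unicode
        unicode_row = [cell.decode('utf-8') for cell in row]
        print unicode_row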

Python: Can't write to file - UnicodeEncodeError

This code should write some text to a file.
When I try to write my text to the console, everything works. But when I try to write the text into the file, I get a UnicodeEncodeError. I know that this is a common problem which can be solved with the proper decode or encode calls, but I tried that and am still getting the same UnicodeEncodeError. What am I doing wrong?
I've attached an example.
print "(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)".decode("utf-8")%(dict.get('name'),dict.get('description'),dict.get('ico'),dict.get('city'),dict.get('ulCislo'),dict.get('psc'),dict.get('weby'),dict.get('telefony'),dict.get('mobily'),dict.get('faxy'),dict.get('emaily'),dict.get('dic'),dict.get('ic_dph'),dict.get('kategorie')[0],dict.get('kategorie')[1],dict.get('kategorie')[2])
(StarBuy s.r.o.,Inzertujte s foto, auto-moto, oblečenie, reality, prácu, zvieratá, starožitnosti, dovolenky, nábytok, všetko pre deti, obuv, stroj....
with open("test.txt","wb") as f:
f.write("(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)".decode("utf-8")%(dict.get('name'),dict.get('description'),dict.get('ico'),dict.get('city'),dict.get('ulCislo'),dict.get('psc'),dict.get('weby'),dict.get('telefony'),dict.get('mobily'),dict.get('faxy'),dict.get('emaily'),dict.get('dic'),dict.get('ic_dph'),dict.get('kategorie')[0],dict.get('kategorie')[1],dict.get('kategorie')[2]))
UnicodeEncodeError: 'ascii' codec can't encode character u'\u010d' in position 50: ordinal not in range(128)
Where could the problem be?
To write Unicode text to a file, you could use the io.open() function:
#!/usr/bin/env python
from io import open
with open('utf8.txt', 'w', encoding='utf-8') as file:
    file.write(u'\u010d')
io.open() is the same as the built-in open() on Python 3.
Note: you should not use the binary file mode ('b') if you want to write text.
The # coding: utf8 comment that defines the source code encoding has nothing to do with it.
If you see sys.setdefaultencoding() outside of site.py or Python tests, assume the code is broken.
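On Python 2, a similar effect can be had with the codecs module's codecs.open(), which also returns a file object that accepts unicode directly; a minimal sketch:
import codecs

# codecs.open() encodes the unicode string for us on write
f = codecs.open('utf8.txt', 'w', encoding='utf-8')
f.write(u'\u010d')
f.close()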
@ned-batchelder is right. You have to declare that the system default encoding is "utf-8". The coding comment # -*- coding: utf-8 -*- doesn't do this.
To declare the system default encoding, you have to import the sys module and call sys.setdefaultencoding('utf-8'). However, sys is imported during startup and site.py removes the setdefaultencoding method afterwards, so you have to reload the module before you can call it.
So you will need to add the following code at the beginning:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
You may need to explicitly declare that Python should use UTF-8 encoding.
The answer to this SO question explains how to do that: Declaring Encoding in Python
For Python 2:
Declare document encoding on top of the file (if not done yet):
# -*- coding: utf-8 -*-
Replace .decode with .encode:
with open("test.txt","wb") as f:
f.write("(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)".encode("utf-8")%(dict.get('name'),dict.get('description'),dict.get('ico'),dict.get('city'),dict.get('ulCislo'),dict.get('psc'),dict.get('weby'),dict.get('telefony'),dict.get('mobily'),dict.get('faxy'),dict.get('emaily'),dict.get('dic'),dict.get('ic_dph'),dict.get('kategorie')[0],dict.get('kategorie')[1],dict.get('kategorie')[2]))

Python webpage source read with special characters

I am reading a page source from a webpage, then parsing a value from that source.
I am facing a problem with special characters.
In my Python controller file I am using # -*- coding: utf-8 -*-.
But I am reading a webpage source which uses charset=iso-8859-1.
So when I read the page content without specifying any encoding, it throws this error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xfc in position 133: invalid start byte
When I use string.decode("iso-8859-1").encode("utf-8") it parses the data without any error, but it displays the value as 'F\u00fcnke' instead of 'Fünke'.
Please let me know how I can solve this issue.
I would greatly appreciate any suggestions.
Encoding is a PITA in Python3 for sure (and 2 in some cases as well).
Try checking these links out, they might help you:
Python - Encoding string - Swedish Letters
Python3 - ascii/utf-8/iso-8859-1 can't decode byte 0xe5 (Swedish characters)
http://docs.python.org/2/library/codecs.html
Also, it would help to see the code behind "So when I read the page content without specifying any encoding". My best guess is that your console doesn't use utf-8 (for instance, on Windows). Your # -*- coding: utf-8 -*- only tells Python what characters to expect within the source code itself, not the encoding of the actual data the code is going to parse or analyze.
For instance, I write:
# -*- coding: iso-8859-1 -*-
import time
# Här skriver jag ut tiden (Translation: Here, I print out the time)
print(time.strftime('%H:%M:%S'))
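To make the decode/encode round trip from the question concrete, here is a minimal sketch, assuming the page bytes really are ISO-8859-1; the page_bytes variable is a hypothetical stand-in for whatever the question's code actually reads:
# -*- coding: utf-8 -*-
# Hypothetical ISO-8859-1 bytes containing "Fünke"
page_bytes = 'F\xfcnke'

# Decode the bytes to a unicode object first...
text = page_bytes.decode('iso-8859-1')

# ...then encode only when writing to a UTF-8 sink such as the console
print text.encode('utf-8')   # prints: Fünke
If you still see the escaped form 'F\u00fcnke' in your output, you are most likely printing the repr of a unicode object (for example by printing a whole list or dict) rather than the string itself.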

lxml unicode output issue

New to Python and lxml, so please bear with me. I am now stuck on what appears to be a unicode issue. I tried .encode and Beautiful Soup's UnicodeDammit with no luck. I searched the forum and the web, but my lack of Python skill kept me from applying the suggested solutions to my particular code. I'd appreciate any help, thanks.
Code:
import requests
import lxml.html
sourceUrl = "http://www.hkex.com.hk/eng/market/sec_tradinfo/stockcode/eisdeqty.htm"
sourceHtml = requests.get(sourceUrl)
htmlTree = lxml.html.fromstring(sourceHtml.text)
for stockCodes in htmlTree.xpath('''/html/body/printfriendly/table/tr/td/table/tr/td/table/tr/table/tr/td'''):
    string = stockCodes.text
    print string
Error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128)
When I run your code like this: python lx.py, I don't get the error. But when I redirect the output to a file with python lx.py > output.txt, it occurs. So try this:
# -*- coding: utf-8 -*-
import requests
import lxml.html
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
This allows you to switch from the default ASCII to UTF-8, which the Python runtime will use whenever it has to decode a string buffer to unicode.
With requests, the text attribute gives you decoded unicode, while the content attribute gives you the raw bytes. You could also try sourceHtml.text.encode('utf-8') or sourceHtml.text.encode('ascii'), but I'm fairly certain the latter will raise that same exception.
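An alternative that avoids the reload(sys) hack is to encode each string explicitly before printing. A minimal sketch of that idea applied to the original loop, assuming UTF-8 output is wanted (the simpler //td XPath is only for illustration):
import requests
import lxml.html

sourceUrl = "http://www.hkex.com.hk/eng/market/sec_tradinfo/stockcode/eisdeqty.htm"
sourceHtml = requests.get(sourceUrl)
htmlTree = lxml.html.fromstring(sourceHtml.text)

for cell in htmlTree.xpath('//td'):  # hypothetical, simplified XPath
    if cell.text is not None:
        # Encode the unicode text to UTF-8 bytes before printing,
        # so redirecting stdout to a file no longer triggers the ASCII error
        print cell.text.encode('utf-8')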

Python: UnicodeEncodeError when reading from stdin

When running a Python program that reads from stdin, I get the following error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 320: ordinal not in range(128)
How can I fix it?
Note: The error occurs inside antlr, and the offending line looks like this:
self.strdata = unicode(data)
Since I don't want to modify the source code,
I'd like to pass in something that is acceptable.
The input code looks like this:
#!/usr/bin/python
import sys
import codecs
import antlr3
import antlr3.tree
from LatexLexer import LatexLexer
from LatexParser import LatexParser
char_stream = antlr3.ANTLRInputStream(codecs.getreader("utf8")(sys.stdin))
lexer = LatexLexer(char_stream)
tokens = antlr3.CommonTokenStream(lexer)
parser = LatexParser(tokens)
r = parser.document()
The problem is that when reading from stdin, Python decodes it using the system default encoding:
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
The input is very likely UTF-8 or Windows CP-1252, so the program chokes on non-ASCII characters.
To convert sys.stdin to a stream with the proper decoder, I used:
import codecs
char_stream = codecs.getreader("utf-8")(sys.stdin)
That fixed the problem.
BTW, this is the method ANTLR's FileStream uses to open a file with a given filename (instead of a given stream):
fp = codecs.open(fileName, 'rb', encoding)
try:
    data = fp.read()
finally:
    fp.close()
BTW #2: For strings I found
a_string.encode(encoding)
useful.
You're not getting this error on input; you're getting it when trying to output the data you read. You should decode the data you read and pass unicode objects around, instead of dealing with byte strings the whole time.
Here is an excellent write-up about how Python handles encodings:
How to use UTF-8 with Python
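As a minimal sketch of that decode-on-input, encode-on-output pattern for a Python 2 script reading stdin, assuming the input is UTF-8:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import codecs

# Wrap stdin so everything read from it is already unicode
utf8_stdin = codecs.getreader('utf-8')(sys.stdin)

for line in utf8_stdin:
    # Work with unicode internally; encode only at the output boundary
    sys.stdout.write(line.upper().encode('utf-8'))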
