Maximum recursion depth exceeded when building a Whoosh index - Python

I am trying to index some documents using Whoosh. However, when I try to add the documents to the Whoosh index, Python eventually gives back the following error:
RecursionError: maximum recursion depth exceeded while calling a Python object
I have tried playing with the limitmb setting of the index writer, as well as changing how often the index is committed to the hard drive. This seemed to change the number of documents that were indexed successfully, but the indexing still stops with the RecursionError after a short while.
My code is the following:
from whoosh.index import create_in
from whoosh.fields import *
from whoosh.qparser import QueryParser
from bs4 import BeautifulSoup
import os

schema = Schema(title=TEXT(stored=True), docID=ID(stored=True), content=TEXT(stored=True))
ix = create_in("index", schema)
writer = ix.writer(limitmb=1024, procs=4, multisegment=True)

for root, dirs, files in os.walk('aquaint'):
    for file in files:
        with open(os.path.join(root, file), "r") as f:
            soup = BeautifulSoup(f.read(), 'html.parser')
            for doc in soup.find_all('doc'):
                try:
                    t = doc.find('headline').string
                except:
                    t = "No title available"
                try:
                    d = doc.find('docno').string
                except:
                    d = "No docID available"
                try:
                    c = doc.find('text').string
                except:
                    c = "No content available"
                writer.add_document(title=t, docID=d, content=c)
writer.commit()
The files I am loading in are from the TREC Robust track (https://trec.nist.gov/data/t14_robust.html) and have the following format (due to licensing I can't share the entire file):
<DOC>
<DOCNO> APW1XXXXXXXXX </DOCNO>
<DOCTYPE> NEWS STORY </DOCTYPE>
<DATE_TIME> 1998-01-06 00:17:00 </DATE_TIME>
<HEADER>
XXXX
</HEADER>
<BODY>
<SLUG> BC-Sports-Motorcycling-Grand Prix-Doohan </SLUG>
<HEADLINE>
Doohan calls for upgrade to 1000cc bikes
</HEADLINE>
<TEXT>
News article text here
</TEXT>
(PROFILE
(WS SL:BC-Sports-Motorcycling-Grand Prix-Doohan; CT:s;
(REG:EURO;)
(REG:BRIT;)
(REG:SCAN;)
(REG:MEST;)
(REG:AFRI;)
(REG:INDI;)
(REG:ENGL;)
(REG:ASIA;)
(LANG:ENGLISH;))
)
</BODY>
<TRAILER>
AP-NY-06-01-98 0017EDT
</TRAILER>
</DOC>
Each file loaded in includes several of these documents, beginning and ending with the <DOC> tags.
I don't understand what is causing this error. Could someone help me out?
Your help is greatly appreciated!

I found what the problem was: I wrongly assumed that BeautifulSoup returns a plain string when calling doc.find('headline').string. Replacing this with str(doc.find('headline').string) seems to have fixed the issue for me, and Whoosh is now indexing correctly.
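For reference, a minimal sketch of how that fix looks inside the indexing loop (the text_or_default helper is my own addition, not part of the original code; the likely culprit is that NavigableString keeps references back into the whole parse tree, which the multiprocess writer then has to pickle):

def text_or_default(tag, default):
    # tag may be missing, or present but empty, so guard both cases
    if tag is not None and tag.string is not None:
        return str(tag.string)   # plain str instead of NavigableString
    return default

for doc in soup.find_all('doc'):
    writer.add_document(
        title=text_or_default(doc.find('headline'), "No title available"),
        docID=text_or_default(doc.find('docno'), "No docID available"),
        content=text_or_default(doc.find('text'), "No content available"),
    )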

Related

parsing invalid xml using xmltodict

I am reading an XML file and converting it to a DataFrame using xmltodict and pandas.
This is what one of the elements in the file looks like:
<net>
    <ref>https://whois.arin.net/rest/v1/net/NET-66-125-37-120-1</ref>
    <endAddress>66.125.37.127</endAddress>
    <handle>NET-66-125-37-120-1</handle>
    <name>SBC066125037120020307</name>
    <netBlocks>
        <netBlock>
            <cidrLenth>29</cidrLenth>
            <endAddress>066.125.037.127</endAddress>
            <type>S</type>
            <startAddress>066.125.037.120</startAddress>
        </netBlock>
    </netBlocks>
    <pocLinks/>
    <orgHandle>C00285134</orgHandle>
    <parentNetHandle>NET-66-120-0-0-1</parentNetHandle>
    <registrationDate>2002-03-08T00:00:00-05:00</registrationDate>
    <startAddress>66.125.37.120</startAddress>
    <updateDate>2002-03-08T07:56:59-05:00</updateDate>
    <version>4</version>
</net>
Since a large number of records like this are being pulled in by an API, some <net> objects at the end of the file can sometimes be partially downloaded, e.g. a tag missing its closing tag.
This is what I wrote to parse the XML:
xml_data = open('/Users/dgoswami/Downloads/net.xml', 'r').read()  # Read data
xml_data = xmltodict.parse(xml_data,
                           process_namespaces=True,
                           namespaces={'http://www.arin.net/bulkwhois/core/v1': None})
When that happens, I get an error like so:
no element found: line 30574438, column 37
I want to be able to parse up to the last valid <net> element.
How can that be done?
You may need to fix your xml beforehand - xmltodict has no ability to do that for you.
You can leverage lxml as described in Python xml - handle unclosed token to fix your xml:
from lxml import etree
def fixme(x):
    p = etree.fromstring(x, parser=etree.XMLParser(recover=True))
    return etree.tostring(p).decode("utf8")
fixed = fixme("""<start><net>
<endAddress>66.125.37.127</endAddress>
<handle>NET-66-125-37-120-1</handle>
</net><net>
<endAddress>66.125.37.227</endAddress>
<handle>NET-66-125-37-220-1</handle>
""")
and then use the fixed xml:
import xmltodict
print(xmltodict.parse(fixed))
to get
OrderedDict([('start',
OrderedDict([('net', [
OrderedDict([('endAddress', '66.125.37.127'), ('handle', 'NET-66-125-37-120-1')]),
OrderedDict([('endAddress', '66.125.37.227'), ('handle', 'NET-66-125-37-220-1')])
])
]))
])
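As a quick check of the recovered structure (my own illustration, based on the output shown above; 'start' is just the wrapping root element used in the example input), you can index into the parsed result the usual xmltodict way:

parsed = xmltodict.parse(fixed)
# each recovered <net> element becomes one dict in the list under 'start'
for net in parsed['start']['net']:
    print(net['handle'], net['endAddress'])
# NET-66-125-37-120-1 66.125.37.127
# NET-66-125-37-220-1 66.125.37.227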

Why can't I scrape that large XML file using Python?

Does anyone know why this code doesn't do the job? It works perfectly when I scrape smaller files with data from a certain date, e.g. only from 2017, but not with this one. Is this file too big or something? There's no error or anything like that. When I run this script with the smaller file mentioned above, it takes about 30 seconds to download everything and save it into a database, so I don't think there are mistakes in the code. After running the script I just get "Process finished with exit code 0" and nothing more.
from bs4 import BeautifulSoup
import urllib.request
from app import db
from models import CveData
from sqlalchemy.exc import IntegrityError

url = "https://cve.mitre.org/data/downloads/allitems.xml"
r = urllib.request.urlopen(url)
xml = BeautifulSoup(r, 'xml')
vuln = xml.findAll('Vulnerability')

for element in vuln:
    note = element.findAll('Notes')
    title = element.find('CVE').text
    for element in note:
        desc = element.find(Type="Description").text
        test_date = element.find(Title="Published")
        if test_date is None:
            pass
        else:
            date = test_date.text
            data = CveData(title, date, desc)
            try:
                db.session.add(data)
                db.session.commit()
                print("adding... " + title)
            # don't stop the stream, ignore the duplicates
            except IntegrityError:
                db.session.rollback()
I downloaded the file that you said didn't work, and the one you said did and ran these two greps with different results:
grep -c "</Vulnerability>" allitems-cvrf-year-2019.xml
21386
grep -c "</Vulnerability>" allitems.xml
0
The program is not hanging while opening the file; it is running to completion. You aren't getting any output because there are no Vulnerability tags in the xml file. (Now my grep is not technically accurate, as I believe there could be spaces in the Vulnerability closing tag, but I doubt that is the case here.)
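If you prefer to do the same check from Python rather than grep, here is a small sketch of my own; it assumes the two files have already been downloaded to disk and mirrors grep's line-by-line counting:

def count_closing_tags(path, closing_tag="</Vulnerability>"):
    # stream the file line by line so the large XML never sits in memory at once
    count = 0
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            count += line.count(closing_tag)
    return count

print(count_closing_tags("allitems-cvrf-year-2019.xml"))  # 21386, per the grep above
print(count_closing_tags("allitems.xml"))                 # 0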

Use Minidom to parse XML But just crashes applet

Having some issues with Minidom for parsing an XML file on a remote server.
This is the code I am trying to parse:
<mod n="1">
    <body>
        Random Body information will be here
    </body>
    <b>1997-01-27</b>
    <d>1460321480</d>
    <l>United Kingdom</l>
    <s>M</s>
    <t>About Denisstoff</t>
</mod>
I'm trying to return the <d> values with Minidom. This is the code I am trying to use to find the value:
expired = True
f = urlreq.urlopen("http://st.chatango.com/profileimg/"+args[:1]+"/"+args[1:2]+"/"+args+"/mod1.xml")
data = f.read().decode("utf-8")
dom = minidom.parseString(data)
itemlist = dom.getElementsByTagName('d')
print(itemlist)
It returns that the values are there, but I followed a way to read the data that I found (below) and it just crashed my Python app. This is the code I tried to fix it with:
for s in itemlist:
    if s.hasAttribute('d'):
        print(s.attributes['d'].value)
This is the crash:
AttributeError: 'NodeList' object has no attribute 'value'
I also tried ElementTree but that didn't return any data at all. I have tested the URL and it's correct for the data I want, but I just can't get it to read the data in the tags. Any and all help is appreciated.
If you want to print the values from this xml, note that d is an element name here, not an attribute, so hasAttribute('d') never matches; read each element's child text node instead:
for s in itemlist:
    if hasattr(s.childNodes[0], "data"):
        print(s.childNodes[0].data)
I hope it helps :D
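Putting the question's fetch code and this fix together, a sketch (my own consolidation; it assumes urlreq is urllib.request, as the question's code implies, and collects every <d> value) would look roughly like:

from urllib import request as urlreq
from xml.dom import minidom

def fetch_d_values(args):
    # same URL pattern as in the question
    url = ("http://st.chatango.com/profileimg/"
           + args[:1] + "/" + args[1:2] + "/" + args + "/mod1.xml")
    data = urlreq.urlopen(url).read().decode("utf-8")
    dom = minidom.parseString(data)
    values = []
    for d in dom.getElementsByTagName('d'):
        # guard against empty elements before touching childNodes[0]
        if d.childNodes and hasattr(d.childNodes[0], "data"):
            values.append(d.childNodes[0].data)
    return values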

ParseError: not well-formed (invalid token) using cElementTree

I receive xml strings from an external source that can contain unsanitized, user-contributed content.
The following xml string gave a ParseError in cElementTree:
>>> print repr(s)
'<Comment>dddddddd\x08\x08\x08\x08\x08\x08_____</Comment>'
>>> import xml.etree.cElementTree as ET
>>> ET.XML(s)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
ET.XML(s)
File "<string>", line 106, in XML
ParseError: not well-formed (invalid token): line 1, column 17
Is there a way to make cElementTree not complain?
It seems to complain about \x08; you will need to escape that.
Edit:
Or you can have the parser ignore the errors using recover
from lxml import etree
parser = etree.XMLParser(recover=True)
etree.fromstring(xmlstring, parser=parser)
I was having the same error (with ElementTree). In my case it was because of encodings, and I was able to solve it without having to use an external library. Hope this helps other people finding this question based on the title. (reference)
import xml.etree.ElementTree as ET
parser = ET.XMLParser(encoding="utf-8")
tree = ET.fromstring(xmlstring, parser=parser)
EDIT: Based on comments, this answer might be outdated. But this did work back when it was answered...
This code snippet worked for me. I had an issue with parsing a batch of XML files; I had to parse them with the 'iso-8859-5' encoding:
import xml.etree.ElementTree as ET
tree = ET.parse(filename, parser = ET.XMLParser(encoding = 'iso-8859-5'))
See this answer to another question and the according part of the XML spec.
The backspace U+0008 is an invalid character in XML documents. It must be represented as the escaped entity &#8; and cannot occur plainly.
If you need to process this XML snippet, you must replace \x08 in s before feeding it into an XML parser.
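For the string in the question, a minimal sketch of that pre-processing (simply dropping the backspaces before parsing) would be:

import xml.etree.ElementTree as ET

s = '<Comment>dddddddd\x08\x08\x08\x08\x08\x08_____</Comment>'
cleaned = s.replace('\x08', '')   # remove the invalid backspace characters
element = ET.XML(cleaned)
print(element.text)               # dddddddd_____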
None of the above fixes worked for me. The only thing that worked was to use BeautifulSoup instead of ElementTree as follows:
from bs4 import BeautifulSoup
with open("data/myfile.xml") as fp:
    soup = BeautifulSoup(fp, 'xml')
Then you can search the tree as:
soup.find_all('mytag')
This is most probably an encoding error. For example, I had an xml file encoded as UTF-8-BOM (checked from the Notepad++ Encoding menu) and got a similar error message.
The workaround (Python 3.6):
import io
from xml.etree import ElementTree as ET
with io.open(file, 'r', encoding='utf-8-sig') as f:
    contents = f.read()
    tree = ET.fromstring(contents)
Check the encoding of your xml file. If it is using different encoding, change the 'utf-8-sig' accordingly.
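If you're unsure whether a file carries a BOM at all, a small check of my own (just peeking at the first bytes; the BOM constants come from the standard library codecs module, and 'myfile.xml' is a placeholder path) looks like this:

import codecs

def detect_bom(path):
    # compare the first bytes against known BOM signatures
    # (UTF-32 must be checked before UTF-16, since their BOMs share a prefix)
    with open(path, 'rb') as f:
        head = f.read(4)
    signatures = [(codecs.BOM_UTF8, 'utf-8-sig'),
                  (codecs.BOM_UTF32_LE, 'utf-32-le'),
                  (codecs.BOM_UTF32_BE, 'utf-32-be'),
                  (codecs.BOM_UTF16_LE, 'utf-16-le'),
                  (codecs.BOM_UTF16_BE, 'utf-16-be')]
    for bom, name in signatures:
        if head.startswith(bom):
            return name
    return None  # no BOM found

print(detect_bom('myfile.xml'))  # e.g. 'utf-8-sig' for a UTF-8-BOM file, None otherwise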
After lots of searching through the entire WWW, I only found out that you have to escape certain characters if you want your XML parser to work! Here's how I did it, and it worked for me:
import re
escape_illegal_xml_characters = lambda x: re.sub(u'[\x00-\x08\x0b\x0c\x0e-\x1F\uD800-\uDFFF\uFFFE\uFFFF]', '', x)
And use it like you'd normally do:
ET.XML(escape_illegal_xml_characters(my_xml_string)) #instead of ET.XML(my_xml_string)
A gotcha that got me, using Python's ElementTree... this has the invalid token error:
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
xml = u"""<?xml version='1.0' encoding='utf8'?>
<osm generator="pycrocosm server" version="0.6"><changeset created_at="2017-09-06T19:26:50.302136+00:00" id="273" max_lat="0.0" max_lon="0.0" min_lat="0.0" min_lon="0.0" open="true" uid="345" user="john"><tag k="test" v="Съешь же ещё этих мягких французских булок да выпей чаю" /><tag k="foo" v="bar" /><discussion><comment data="2015-01-01T18:56:48Z" uid="1841" user="metaodi"><text>Did you verify those street names?</text></comment></discussion></changeset></osm>"""
xmltest = ET.fromstring(xml.encode("utf-8"))
However, it works with the addition of a hyphen in the encoding type:
<?xml version='1.0' encoding='utf-8'?>
Most odd. Someone found this footnote in the python docs:
The encoding string included in XML output should conform to the
appropriate standards. For example, “UTF-8” is valid, but “UTF8” is
not.
I was stuck with a similar problem and finally figured out the root cause in my particular case: if you read data from multiple XML files that sit in the same folder, you will also try to parse the .DS_Store file.
Before parsing, add this condition:
for file in files:
    if file.endswith('.xml'):
        run_your_code...
This trick helped me as well
lxml solved the issue in my case:
from lxml import etree
for _, ele in etree.iterparse(xml_file, tag='tag_i_wanted', encoding='utf-8'):
    print(ele.tag, ele.text)
in another case,
parser = etree.XMLParser(recover=True)
tree = etree.parse(xml_file, parser=parser)
tags_needed = tree.iter('TAG NAME')
Thanks to theeastcoastwest
Python 2.7
In my case I got the same error (using ElementTree).
I had to add these lines:
import xml.etree.ElementTree as ET
from lxml import etree
parser = etree.XMLParser(recover=True,encoding='utf-8')
xml_file = ET.parse(path_xml,parser=parser)
Works in Python 3.10.2.
What helped me with that error was Juan's answer - https://stackoverflow.com/a/20204635/4433222 - but it wasn't enough: after struggling, I found out that the XML file needs to be saved as UTF-8 without a BOM.
The solution wasn't working for "normal" UTF-8.
The only thing that worked for me was to add the mode and encoding while opening the file, like below:
with open(filenames[0], mode='r', encoding='utf-8') as f:
    readFile()
Otherwise it failed every time with the invalid token error if I simply did this:
f = open(filenames[0], 'r')
readFile()
This error can also occur when you pass a link (URL) straight to the parser; first you have to fetch the content of that link:
import requests
from xml.etree import cElementTree

response = requests.get(Link)
root = cElementTree.fromstring(response.content)
I tried the other solutions in the answers here but had no luck. Since I only needed to extract the value from a single xml node, I gave in and wrote my own function to do so:
import re

def ParseXmlTagContents(source, tag, tagContentsRegex):
    openTagString = "<"+tag+">"
    closeTagString = "</"+tag+">"
    found = re.search(openTagString + tagContentsRegex + closeTagString, source)
    if found:
        start = found.regs[0][0]
        end = found.regs[0][1]
        return source[start+len(openTagString):end-len(closeTagString)]
    return ""
Example usage would be:
<?xml version="1.0" encoding="utf-16"?>
<parentNode>
<childNode>123</childNode>
</parentNode>
ParseXmlTagContents(xmlString, "childNode", "[0-9]+")
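To make that example self-contained (my own addition; xmlString just holds the XML shown above as a plain string):

xmlString = """<?xml version="1.0" encoding="utf-16"?>
<parentNode>
<childNode>123</childNode>
</parentNode>"""

print(ParseXmlTagContents(xmlString, "childNode", "[0-9]+"))  # prints: 123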

Extracting parts of a webpage with python

So I have a data retrieval/entry project and I want to extract a certain part of a webpage and store it in a text file. I have a text file of urls and the program is supposed to extract the same part of the page for each url.
Specifically, the program copies the legal statute following "Legal Authority:" on pages such as this. As you can see, there is only one statute listed. However, some of the urls also look like this, meaning that there are multiple separated statutes.
My code works for pages of the first kind:
from sys import argv
from urllib2 import urlopen

script, urlfile, legalfile = argv

input = open(urlfile, "r")
output = open(legalfile, "w")

def get_legal(page):
    # this is where Legal Authority: starts in the code
    start_link = page.find('Legal Authority:')
    start_legal = page.find('">', start_link+1)
    end_link = page.find('<', start_legal+1)
    legal = page[start_legal+2: end_link]
    return legal

for line in input:
    pg = urlopen(line).read()
    statute = get_legal(pg)
    output.write(statute)
This gives me the desired statute name in the "legalfile" output .txt. However, it cannot copy multiple statute names. I've tried something like this:
def get_legal(page):
    # this is where Legal Authority: starts in the code
    end_link = ""
    legal = ""
    start_link = page.find('Legal Authority:')
    while (end_link != '</a> '):
        start_legal = page.find('">', start_link+1)
        end_link = page.find('<', start_legal+1)
        end2 = page.find('</a> ', end_link+1)
        legal += page[start_legal+2: end_link]
        if
            break
    return legal
Since every list of statutes ends with '</a> ' (inspect the source of either of the two links) I thought I could use that fact (having it as the end of the index) to loop through and collect all the statutes in one string. Any ideas?
I would suggest using BeautifulSoup to parse and search your html. This will be much easier than doing basic string searches.
Here's a sample that pulls all the <a> tags found within the <td> tag that contains the <b>Legal Authority:</b> tag. (Note that I'm using the requests library to fetch page content here - this is just a recommended and very easy-to-use alternative to urlopen.)
import requests
from BeautifulSoup import BeautifulSoup

# fetch the content of the page with requests library
url = "http://www.reginfo.gov/public/do/eAgendaViewRule?pubId=200210&RIN=1205-AB16"
response = requests.get(url)

# parse the html
html = BeautifulSoup(response.content)

# find all the <a> tags
a_tags = html.findAll('a', attrs={'class': 'pageSubNavTxt'})

def fetch_parent_tag(tags):
    # fetch the parent <td> tag of the first <a> tag
    # whose "previous sibling" is the <b>Legal Authority:</b> tag.
    for tag in tags:
        sibling = tag.findPreviousSibling()
        if not sibling:
            continue
        if sibling.getText() == 'Legal Authority:':
            return tag.findParent()

# now, just find all the child <a> tags of the parent.
# i.e. finding the parent of one child, find all the children
parent_tag = fetch_parent_tag(a_tags)
tags_you_want = parent_tag.findAll('a')

for tag in tags_you_want:
    print 'statute: ' + tag.getText()
If this isn't exactly what you needed to do, BeautifulSoup is still the tool you likely want to use for sifting through html.
They provide XML data over there, see my comment. If you think you can't download that many files (or the other end could dislike so many HTTP GET requests), I'd recommend asking their admins if they would kindly provide you with a different way of accessing the data.
I have done so twice in the past (with scientific databases). In one instance the sheer size of the dataset prohibited a download; they ran a SQL query of mine and e-mailed the results (but had previously offered to mail a DVD or hard disk). In another case, I could have done some million HTTP requests to a webservice (and they were ok) each fetching about 1k bytes. This would have taken long, and would have been quite inconvenient (requiring some error-handling, since some of these requests would always time out) (and non-atomic due to paging). I was mailed a DVD.
I'd imagine that the Office of Management and Budget could be similarly accommodating.
