use pyquery to filter html - python

I'm trying to use pyquery to parse HTML, and I'm running into a confusing issue. My code is below:
from pyquery import PyQuery as pq
document = pq('<p id="hello">Hello</p><p id="world">World !!</p>')
p = document('p')
print(p.filter("#hello"))
The expected print output is:
<p id="hello">Hello</p>
But the actual output is:
<p id="hello">Hello</p><p id="world">World !!</p></div></html>
If I just want that specific part of the HTML instead of the entire content, how should I write it?
Thanks

You can use the built-in ElementTree library:
import xml.etree.ElementTree as ET
html = '''<html><p id="hello">Hello</p><p id="world">World !!</p></html>'''
root = ET.fromstring(html)
p = root.find('.//p[@id="hello"]')
print(ET.tostring(p))
output
b'<p id="hello">Hello</p>'

Related

How to get value from XML file?

I have this XML file, and I only need to get the value of steamID64 (76561198875082603).
<profile>
<steamID64>76561198875082603</steamID64>
<steamID>...</steamID>
<onlineState>online</onlineState>
<stateMessage>...</stateMessage>
<privacyState>public</privacyState>
<visibilityState>3</visibilityState>
<avatarIcon>...</avatarIcon>
<avatarMedium>...</avatarMedium>
<avatarFull>...</avatarFull>
<vacBanned>0</vacBanned>
<tradeBanState>None</tradeBanState>
<isLimitedAccount>0</isLimitedAccount>
<customURL>...</customURL>
<memberSince>December 8, 2018</memberSince>
<steamRating/>
<hoursPlayed2Wk>0.0</hoursPlayed2Wk>
<headline>...</headline>
<location>...</location>
<realname>
<![CDATA[ THEMakci7m87 ]]>
</realname>
<summary>...</summary>
<mostPlayedGames>...</mostPlayedGames>
<groups>...</groups>
</profile>
So far I only have this code:
xml_url = f'{url}?xml=1'
and I don't know what to do next.
It's fairly simple with lxml:
import lxml.html as lh
steam = """your html above"""
doc = lh.fromstring(steam)
doc.xpath('//steamid64/text()')
Output:
['76561198875082603']
Edit:
With the actual URL, it's clear that the underlying data is XML, so the better way to do it is:
import requests
from lxml import etree
url = 'https://steamcommunity.com/id/themakci7m87/?xml=1'
req = requests.get(url)
doc = etree.XML(req.text.encode())
doc.xpath('//steamID64/text()')
Same output.
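If you need more than one field, the same idea can be wrapped in a small helper. This is just a sketch: fetch_profile_field is a made-up name, and it assumes each tag appears at most once in the profile XML.
import requests
from lxml import etree
def fetch_profile_field(url, tag):
    # return the text of the first <tag> element, or None if it is missing
    doc = etree.XML(requests.get(url).content)  # bytes keep the XML encoding declaration happy
    values = doc.xpath(f'//{tag}/text()')
    return values[0] if values else None
print(fetch_profile_field('https://steamcommunity.com/id/themakci7m87/?xml=1', 'steamID64'))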
You'd be better off using the built-in XML library, ElementTree.
lxml is an external XML library that requires a separate installation.
See below:
import requests
import xml.etree.ElementTree as ET
r = requests.get('https://steamcommunity.com/id/themakci7m87/?xml=1')
if r.status_code == 200:
    root = ET.fromstring(r.text)
    steam_id_64 = root.find('./steamID64').text
    print(steam_id_64)
else:
    print('Failed to read data.')
output:
76561198875082603
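If there is any chance the tag is missing, findtext() with a default avoids an AttributeError; a small variation on the code above (same URL, just a defensive lookup):
import requests
import xml.etree.ElementTree as ET
r = requests.get('https://steamcommunity.com/id/themakci7m87/?xml=1')
root = ET.fromstring(r.text)
# findtext() returns the default instead of raising when the element is absent
print(root.findtext('./steamID64', default='not found'))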

Extracting list or dictionary from xml file

Hello, I have never worked with XML. Can someone help me create a list or dictionary in Python that maps an ID to a specific name (string) from the XML file?
Here is my xml file:
api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=10001&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]
I can show you a snippet:
<Response success="true" start_row="10001" num_rows="9990" total_rows="19991">
<objects>
<object>
<acronym>Hdac4</acronym>
<alias-tags>4932408F19Rik AI047285</alias-tags>
<chromosome-id>34</chromosome-id>
<ensembl-id nil="true"/>
<entrez-id>208727</entrez-id>
<genomic-reference-update-id>491928275</genomic-reference-update-id>
<homologene-id>55946</homologene-id>
<id>84010</id>
<legacy-ensembl-gene-id nil="true"/>
<name>histone deacetylase 4</name>
<organism-id>2</organism-id>
<original-name>histone deacetylase 4</original-name>
<original-symbol>Hdac4</original-symbol>
<reference-genome-id nil="true"/>
<sphinx-id>188143</sphinx-id>
<version-status>no change</version-status>
</object>
<object>
<acronym>Prss54</acronym>
<alias-tags>4931432M23Rik Klkbl4</alias-tags>
<chromosome-id>53</chromosome-id>
<ensembl-id nil="true"/>
<entrez-id>70993</entrez-id>
<genomic-reference-update-id>491928275</genomic-reference-update-id>
<homologene-id>19278</homologene-id>
<id>46834</id>
<legacy-ensembl-gene-id nil="true"/>
<name>protease, serine 54</name>
<organism-id>2</organism-id>
<original-name>protease, serine, 54</original-name>
<original-symbol>Prss54</original-symbol>
<reference-genome-id nil="true"/>
<sphinx-id>65991</sphinx-id>
<version-status>updated</version-status>
</object>
<object>
...
So in the end I want to have a dictionary or list that says:
208727 is Hdac4, and that for everything in my two XML files.
So I need the entrez ID and the original symbol.
I want to get that out of these two XML files:
http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=1&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]
and
http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=10001&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]
Can someone help me with that?
I am not sure in which format I should store it. In the end I want to search by the ID and get the original name.
I saw a question about something close to this with XML, so you can try these libraries.
One option is the Python lxml library (see its docs).
You can start with:
import requests
from lxml import etree, html
# edit: yes, BeautifulSoup works too, as mentioned before
from bs4 import BeautifulSoup
url = "http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=10001&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]"
req = requests.get(url)
doc = req.content  # use bytes: etree.XML() rejects a str that carries an XML encoding declaration
root = etree.XML(doc)  # works with this or ...
soup = BeautifulSoup(doc, 'xml')  # ... works with this
Then you need to read the docs to see how to navigate by tags.
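From there, one way to get what you ultimately want (look up the original symbol by entrez ID) is a dictionary comprehension over the <object> elements. A sketch, assuming each object carries exactly one entrez-id and one original-symbol:
import requests
from lxml import etree
url = "http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=10001&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]"
root = etree.XML(requests.get(url).content)
# map entrez-id -> original-symbol so the ID can be used as the lookup key
id_to_symbol = {
    obj.findtext('entrez-id'): obj.findtext('original-symbol')
    for obj in root.xpath('//object')
}
print(id_to_symbol.get('208727'))  # 'Hdac4', if that ID is in the fetched rows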
If you have the XML stored in a file called results.xml, then using BeautifulSoup is as simple as:
from bs4 import BeautifulSoup
with open('results.xml') as f:
    soup = BeautifulSoup(f.read(), 'xml')
final_dictionary = {}
for object in soup.find_all('object'):
    final_dictionary[object.find('acronym').string] = object.find('entrez-id').string
print(final_dictionary)
If, instead, you want to retrieve the XML from a URL, that is also simple:
import requests
from bs4 import BeautifulSoup
url = "<your_url>"
response = requests.get(url)
soup = BeautifulSoup(response.content, 'xml')
# Once you have the 'soup' variable assigned
# It's the same code as above example from here on
Output
{'Hdac4': '208727', 'Prss54': '70993'}
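Since you ultimately want to search by entrez ID across both files, here is a sketch that loops over both URLs and inverts the mapping (it assumes every <object> contains both tags):
import requests
from bs4 import BeautifulSoup
urls = [
    "http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=1&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]",
    "http://api.brain-map.org/api/v2/data/query.xml?num_rows=10000&start_row=10001&&criteria=model::Gene,rma::criteria,products[abbreviation$eq%27Mouse%27]",
]
id_to_symbol = {}
for url in urls:
    soup = BeautifulSoup(requests.get(url).content, 'xml')
    for obj in soup.find_all('object'):
        # key by entrez-id so you can look up the symbol by ID later
        id_to_symbol[obj.find('entrez-id').string] = obj.find('original-symbol').string
print(id_to_symbol.get('208727'))  # 'Hdac4'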

How Can I Get The Text In Class BeautifulSoup

How Can I Get The Text Of This Line:
<p class="Type__TypeElement-sc-9snywk-0 dHxvMA ProfileSection__value--1bo-L" data-hj-suppress="true" data-qa="Profile Field: Country">SE</p>
I want to get the "SE". I tried a lot of things but none of them worked.
You could do something along the following lines:
soup.find('p').getText()
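If the page has more than one <p> tag, it is safer to match on an attribute than on the generated class names. A sketch, assuming the data-qa attribute shown in your snippet is present in the real page:
from bs4 import BeautifulSoup
html = '<p class="Type__TypeElement-sc-9snywk-0 dHxvMA ProfileSection__value--1bo-L" data-hj-suppress="true" data-qa="Profile Field: Country">SE</p>'
soup = BeautifulSoup(html, 'html.parser')
# match on the data-qa attribute rather than the auto-generated class names
country = soup.find('p', attrs={'data-qa': 'Profile Field: Country'})
print(country.get_text())  # SE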
I recommend a simple library. Here's an example: https://github.com/yiyedata/simplified-scrapy-demo/tree/master/doc_examples
from simplified_scrapy.simplified_doc import SimplifiedDoc
html = '''
<p class="Type__TypeElement-sc-9snywk-0 dHxvMA ProfileSection__value--1bo-L" data-hj-suppress="true" data-qa="Profile Field: Country">SE</p>
'''
doc = SimplifiedDoc(html)
print(doc.p.text)

Parse element's tail with requests-html

I want to parse an HTML document like this with requests-html 0.9.0:
from requests_html import HTML
html = HTML(html='<span><span class="data">important data</span> and some rubbish</span>')
data = html.find('.data', first=True)
print(data.html)
# <span class="data">important data</span> and some rubbish
print(data.text)
# important data and some rubbish
I need to distinguish the text inside the tag (enclosed by it) from the tag's tail (the text that follows the element up to the next tag). This is the behaviour I initially expected:
data.text == 'important data'
data.tail == ' and some rubbish'
But tail is not defined for Elements. Since requests-html provides access to inner lxml objects, we can try to get it from lxml.etree.Element.tail:
from lxml.etree import tostring
print(tostring(data.lxml))
# b'<html><span class="data">important data</span></html>'
print(data.lxml.tail is None)
# True
There's no tail in lxml representation! The tag with its inner text is OK, but the tail seems to be stripped away. How do I extract 'and some rubbish'?
Edit: I discovered that full_text provides the inner text only (so much for “full”). This enables a dirty hack of subtracting full_text from text, although I'm not positive it will work if there are any links.
print(data.full_text)
# important data
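In code, that hack would look roughly like this (untested against markup containing links, as noted above):
from requests_html import HTML
html = HTML(html='<span><span class="data">important data</span> and some rubbish</span>')
data = html.find('.data', first=True)
# strip the inner text off the front of the combined text to recover the tail
tail = data.text[len(data.full_text):]
print(repr(tail))  # ' and some rubbish'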
I'm not sure I've understood your problem, but if you just want to get 'and some rubbish' you can use the code below:
from requests_html import HTML
from lxml.html import fromstring
html = HTML(html='<span><span class="data">important data</span> and some rubbish</span>')
data = fromstring(html.html)
# or without using requests_html.HTML: data = fromstring('<span><span class="data">important data</span> and some rubbish</span>')
print(data.xpath('//span[span[@class="data"]]/text()')[-1]) # " and some rubbish"
NOTE that data = html.find('.data', first=True) returns you <span class="data">important data</span> node which doesn't contain " and some rubbish" - it's a text child node of parent span!
The tail property exists on objects of type lxml.html.HtmlElement.
I think what you are asking for is very easy to implement.
Here is a very simple example using requests_html and lxml:
from requests_html import HTML
html = HTML(html='<span><span class="data">important data</span> and some rubbish</span>')
data = html.find('span')
print (data[0].text) # important data and some rubbish
print (data[-1].text) # important data
print (data[-1].element.tail) # and some rubbish
The element attribute points to the 'lxml.html.HtmlElement' object.
Hope this helps.
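Applied to the original .data lookup from the question, the same idea looks like this (assuming .element keeps the node's place in the full parsed tree, unlike data.lxml, which re-parses the fragment):
from requests_html import HTML
html = HTML(html='<span><span class="data">important data</span> and some rubbish</span>')
data = html.find('.data', first=True)
print(data.element.text)  # important data
print(data.element.tail)  # " and some rubbish"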

Python - use lxml to return value of title.text attrib

I'm trying to figure out how to use lxml to parse the XML from a URL and return the value of the title attribute. Does anyone know what I have wrong, or what would return the title value/text? In the example below I want to return the value of 'Weeds - S05E05 - Van Nuys - HD TV'.
XML from URL:
<?xml version="1.0" encoding="UTF-8"?>
<subsonic-response xmlns="http://subsonic.org/restapi" status="ok" version="1.8.0">
<song id="11345" parent="11287" title="Weeds - S05E05 - Van Nuys - HD TV" album="Season 5" artist="Weeds" isDir="false" created="2009-07-06T22:21:16" duration="1638" bitRate="384" size="782304110" suffix="mkv" contentType="video/x-matroska" isVideo="true" path="Weeds/Season 5/Weeds - S05E05 - Van Nuys - HD TV.mkv" transcodedSuffix="flv" transcodedContentType="video/x-flv"/>
</subsonic-response>
My current Python code:
import lxml
from lxml import html
from urllib2 import urlopen
url = 'https://myurl.com'
tree = html.parse(urlopen(url))
songs = tree.findall('{*}song')
for song in songs:
    print song.attrib['title']
With the above code I get no data returned; any ideas?
Printing tree gives:
<lxml.etree._ElementTree object at 0x0000000003348F48>
and printing songs gives:
[]
First of all, you are not actually using lxml in your code. You import the lxml HTML parser, but otherwise ignore it and just use the standard library xml.etree.ElementTree module instead.
Secondly, you search for data/song but you do not have any data elements in your document, so no matches will be found. And last, but not least, you have a document there that uses namespaces. You'll have to include those when searching for elements, or use a {*} wildcard search.
The following finds songs for you:
from lxml import etree
tree = etree.parse(URL) # lxml can load URLs for you
songs = tree.findall('{*}song')
for song in songs:
    print song.attrib['title']
To use an explicit namespace, you'd have to replace the {*} wildcard with the full namespace URL; the default namespace is available in the .nsmap namespace dict on the tree object:
namespace = tree.nsmap[None]
songs = tree.findall('{%s}song' % namespace)
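To pull just the title in one step from the same parse, something like this should work (URL is the same placeholder as above):
from lxml import etree
tree = etree.parse(URL)
# find() the first song element and read its title attribute directly
print(tree.find('{*}song').get('title'))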
The whole issue is with the fact that the subsonic-response tag has an xmlns attribute indicating that there is an XML namespace in effect. The code below takes that into account and correctly picks up the song tags.
import xml.etree.ElementTree as ET
root = ET.parse('test.xml').getroot()
print root.findall('{http://subsonic.org/restapi}song')
Thanks for the help guys, I used a combination of both of yours to get it working.
import xml.etree.ElementTree as ET
from urllib2 import urlopen
url = 'https://myurl.com'
root = ET.parse(urlopen(url)).getroot()
for song in root:
    print song.attrib['title']
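For anyone on Python 3, where urllib2 no longer exists, a minimal equivalent of the combined version would be:
import urllib.request
import xml.etree.ElementTree as ET
url = 'https://myurl.com'
root = ET.parse(urllib.request.urlopen(url)).getroot()
for song in root:
    print(song.attrib['title'])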
