Parsing a specific website crashes the Python process

Looking to parse an HTML page for images (from http://www.z-img.com), and when I load the page into BeautifulSoup (bs4), Python crashes. The "problem details" dialog shows etree.pyd as the "Fault Module Name", which means it's probably a parsing error, but so far I can't quite nail down the cause.
Here's the simplest code I can boil it down to, on Python 2.7:
import requests, bs4
url = r"http://z-img.com/search.php?&ssg=off&size=large&q=test"
r = requests.get(url)
html = r.content
#or
#import urllib2
#html = urllib2.urlopen(url).read()
soup = bs4.BeautifulSoup(html)
along with a sample of the output on Pastebin (http://pastebin.com/XYT9g4Lb), after passing it through JsBeautifier.com.

This is a bug that was fixed in lxml version 2.3.5. Upgrade to version 2.3.5 or later.
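If you're not sure which version you have, you can check it from Python itself; a quick sketch:
import lxml.etree
# LXML_VERSION is a tuple like (2, 3, 5, 0); anything >= (2, 3, 5) has the fix
print(lxml.etree.LXML_VERSION)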

Oh, there you go, naturally the first thing I try after I submit the question is the solution: the <!DOCTYPE> tag seems to be at the root of it. I created a new HTML file, temp.html:
<!DOCTYPE>
<html>
</html>
and passed that to BeautifulSoup as an HTML string, and that was enough to crash Python again. So I just need to remove that tag before I pass the HTML to BeautifulSoup in the future:
import requests, bs4
url = r"http://z-img.com/search.php?&ssg=off&size=large&q=test"
r = requests.get(url)
html = r.content
#or
#import urllib2
#html = urllib2.urlopen(url).read()
#replace the declaration with nothing, and my problems are solved
html = html.replace(r"<!DOCTYPE>", "")
soup = bs4.BeautifulSoup(html)
Hope this saves someone else some time.
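As an alternative to stripping the doctype, you can also ask BeautifulSoup for a different tree builder so the lxml C extension is never involved; a sketch, assuming the slower stdlib parser is acceptable for your pages:
import bs4
# html.parser is the pure-Python builder from the standard library,
# so a crash in lxml's etree.pyd can't take the whole process down
soup = bs4.BeautifulSoup(html, "html.parser")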

Related

Extracting info from HTML that has no tags

I am using both selenium and BeautifulSoup in order to do some web scraping. I have managed to put together the following piece of code:
from selenium.webdriver import Chrome
from bs4 import BeautifulSoup
driver = Chrome()  # missing in the original snippet; assumes chromedriver is available
url = 'https://www.renfe.com/content/renfe/es/es/cercanias/cercanias-valencia/lineas/jcr:content/root/responsivegrid/rftabdetailline/item_1591014181985.html'
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'lxml')
The output soup produces has the following structure:
<html>
<head>
</head>
<body>
<rf-list-detail line-color="245,150,40" line-number="C2" line-text="Línea C2"
list='[{..., "direction":"Place1"}, ..., {..., "direction":"Place2"}, ...]'>
Note that both the text and the output style have been modified for readability. I attach an image of the actual output just in case it is more convenient.
Does anyone know how could I obtain every PlaceN (in the image, Moixent would be Place1) in a list? Something like
places = [Place1,...,PlaceN]
I have tried parsing it, but as it has no tags (or at least my HTML knowledge, which is close to none, says so) I obtain nothing. I have also tried using a regular expression, which I have just found out were a thing, but I am not sure how to do it properly.
Any thoughts?
Thank you in advance!!
[image: output of soup]
This site responds with a non-HTML structure, so you don't need an HTML parser like BeautifulSoup or lxml for this task.
Here is an example using the requests library. You can install it like this:
pip install requests
import requests
import html
import json
url = 'https://www.renfe.com/content/renfe/es/es/cercanias/cercanias-valencia/lineas/jcr:content/root/responsivegrid/rftabdetailline/item_1591014181985.html'
response = requests.get(url)
data = response.text # get data from site
raw_list = data.split("'")[1] # extract rf-list-detail.list attribute
json_list = html.unescape(raw_list) # decode html symbols
parsed_list = json.loads(json_list) # parse json
print(parsed_list) # printing result
directions = []
for item in parsed_list:
    directions.append(item["direction"])
print(directions)  # extracting directions
# ['Moixent', 'Vallada', 'Montesa', "L'Alcudia de Crespins", 'Xàtiva', "L'Enova-Manuel", 'La Pobla Llarga', 'Carcaixent', 'Alzira', 'Algemesí', 'Benifaió-Almussafes', 'Silla', 'Catarroja', 'Massanassa', 'Alfafar-Benetússer', 'València Nord']
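That said, if you'd rather keep BeautifulSoup, the same data can be read straight off the custom tag's list attribute, since the parser unescapes HTML entities in attribute values for you. A sketch, assuming the tag and attribute names match the soup shown in the question:
import json
import requests
from bs4 import BeautifulSoup
url = 'https://www.renfe.com/content/renfe/es/es/cercanias/cercanias-valencia/lineas/jcr:content/root/responsivegrid/rftabdetailline/item_1591014181985.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')
tag = soup.find('rf-list-detail')  # the custom element from the question
places = [item['direction'] for item in json.loads(tag['list'])]
print(places)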

python crawling text from <em></em>

Hi, I want to get the text (the number 18) from the em tag, as shown in the picture above.
When I ran my code, it did not work and gave me only an empty list. Can anyone help me? Thank you~
Here is my code.
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = 'https://blog.naver.com/kwoohyun761/221945923725'
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')
likes = soup.find_all('em', class_='u_cnt _count')
print(likes)
When you disable JavaScript you'll see that the like count is loaded dynamically, so you have to use a service that renders the website; then you can parse the content.
You can use an API: https://www.scraperapi.com/
Or run your own renderer, for example Splash: https://github.com/scrapinghub/splash
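For example, with a Splash instance running locally (it listens on port 8050 by default), you can fetch the rendered page over its HTTP API; a sketch:
import requests
# assumes a local Splash instance, e.g. started with:
#   docker run -p 8050:8050 scrapinghub/splash
rendered = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'https://blog.naver.com/kwoohyun761/221945923725', 'wait': 2},
)
html = rendered.text  # rendered HTML, ready for BeautifulSoup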
EDIT:
First of all, I missed that you were using urlopen incorrectly; the correct way is described here: https://docs.python.org/3/howto/urllib2.html (assuming you are using Python 3, which seems to be the case judging by the print statement).
Furthermore, looking at the issue again, it is a bit more complicated. When you look at the source code of the page, it actually loads an iframe, and in that iframe you have the actual content. Hit Ctrl+U to see the source code of the original URL, since the site seems to block the browser context menu.
So in order to achieve your crawling objective you have to first grab the initial page and then grab the page you are interested in:
from urllib.request import urlopen
from bs4 import BeautifulSoup
# original url
url = "https://blog.naver.com/kwoohyun761/221945923725"
with urlopen(url) as response:
    html = response.read()
soup = BeautifulSoup(html, 'lxml')
iframe = soup.find('iframe')
# iframe grabbed, construct real url
print(iframe['src'])
real_url = "https://blog.naver.com" + iframe['src']
# do your crawling
with urlopen(real_url) as response:
    html = response.read()
soup = BeautifulSoup(html, 'lxml')
likes = soup.find_all('em', class_='u_cnt _count')
print(likes)
You might be able to avoid one round trip by analyzing the original URL and the URL in the iframe; at first glance it looks like the iframe URL can be constructed from the original one.
You'll still need a rendered version of the iframe URL to grab your desired value, as sketched below.
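A sketch of that last step with selenium, assuming a local chromedriver (the selector comes from your own code):
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get(real_url)         # the iframe URL constructed above
soup = BeautifulSoup(driver.page_source, 'lxml')
print(soup.find_all('em', class_='u_cnt _count'))
driver.quit()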
I don't know what this site is about, but it seems they do not want to be crawled, so maybe you should respect that.

Extract the source code of this page using Python (https://mobile.twitter.com/i/bookmarks)

How do I extract the source code of this page using Python (https://mobile.twitter.com/i/bookmarks)?
The problem is that the actual page code does not appear.
import mechanicalsoup as ms
Browser = ms.StatefulBrowser()
Browser.open("https://mobile.twitter.com/login")
Browser.select_form('form[action="/sessions"]')
Browser["session[username_or_email]"] = 'email'
Browser["session[password]"] = 'password'
Browser.submit_selected()
Browser.open("https://mobile.twitter.com/i/bookmarks")
html = Browser.get_current_page()
print(html)
Use BeautifulSoup.
from urllib import request
from bs4 import BeautifulSoup
url_1 = "http://www.google.com"
page = request.urlopen(url_1)
soup = BeautifulSoup(page, 'html.parser')
print(soup.prettify())
From this answer:
https://stackoverflow.com/a/43290890/11034096
Edit:
It looks like the issue is that Twitter is trying to use a JS redirect to load the next page. JS isn't supported by mechanicalsoup, so you'll need to try something like selenium.
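A rough sketch with selenium, assuming a local chromedriver and that the login form's field names still match the mechanicalsoup code above (Twitter changes its markup often, so treat the selectors as placeholders):
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get("https://mobile.twitter.com/login")
driver.find_element(By.NAME, "session[username_or_email]").send_keys("email")
driver.find_element(By.NAME, "session[password]").send_keys("password")
driver.find_element(By.NAME, "session[password]").submit()
driver.get("https://mobile.twitter.com/i/bookmarks")
print(driver.page_source)  # now rendered by a real browser
driver.quit()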
The html variable that you are returning is actually a BeautifulSoup object, not the HTML text. I would try using:
print(html.text)
to see if that will print the HTML directly.
Alternatively, from the BeautifulSoup documentation you should be able to use the non-pretty printing of:
str(html)
or
unicode(html)

bs4 the second comment <!-- > is missing

I am doing Python Challenge level 9 with BeautifulSoup.
url = "http://www.pythonchallenge.com/pc/return/good.html"
bs4.__version__ == '4.3.2'
There are two comments in the page source, and the output of soup should contain both. However, when BeautifulSoup is applied, the second comment is missing.
It seems kinda weird. Any hint? Thanks!
import requests
from bs4 import BeautifulSoup
url = "http://www.pythonchallenge.com/pc/return/good.html"
page = requests.get(url, auth = ("huge", "file")).text
print page
soup = BeautifulSoup(page)
print soup
Beautiful Soup is a wrapper around an HTML parser. The default parser is very strict, and when it encounters malformed HTML it silently drops the elements it had trouble with.
You should instead install the 'html5lib' package and use that as your parser, like so:
soup = BeautifulSoup(page, 'html5lib')
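You can see the difference for yourself by feeding the same malformed snippet to both parsers; a small sketch (html5lib installs with pip install html5lib):
from bs4 import BeautifulSoup
snippet = "<html><body><!-- first --><ul><li>a<li>b<!-- second -->"
print(BeautifulSoup(snippet, "lxml"))      # how the strict parser rebuilds the tree
print(BeautifulSoup(snippet, "html5lib"))  # how a browser-grade parser rebuilds it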

get all links from html even with show more link

I am using Python and BeautifulSoup for HTML parsing, with the following code:
from BeautifulSoup import BeautifulSoup
import urllib2
import re
url = "http://www.wikipathways.org//index.php?query=signal+transduction+pathway&species=Homo+sapiens&title=Special%3ASearchPathways&doSearch=1&ids=&codes=&type=query"
main_url = urllib2.urlopen(url)
content = url.read()
soup = BeautifulSoup(content)
for a in soup.findAll('a', href=True):
    print a[href]
But I am not getting output links like:
http://www.wikipathways.org/index.php/Pathway:WP26
Also, an important thing is that there are 107 pathways, but I will not get all of the links, since the rest depend on the "show more" link at the bottom of the page.
So, how can I get all 107 links from that URL?
Your problem is line 8, content = url.read(). You're not actually reading the webpage there; you're just doing nothing (if anything, you should be getting an error).
main_url is what you want to read, so change line 8 to:
content = main_url.read()
You also have another error, print a[href]. href should be a string, so it should be:
print a['href']
I would suggest using lxml; it's faster and better for parsing HTML, and worth investing the time to learn.
from lxml.html import parse
dom = parse('http://www.wikipathways.org//index.php?query=signal+transduction+pathway&species=Homo+sapiens&title=Special%3ASearchPathways&doSearch=1&ids=&codes=&type=query').getroot()
links = dom.cssselect('a')
That should get you going.
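From there, collecting the actual URLs is one more line; a sketch:
# pull the href attribute off every matched anchor, skipping anchors without one
# (note: cssselect is a separate package on newer lxml versions: pip install cssselect)
hrefs = [a.get('href') for a in links if a.get('href')]
print(hrefs)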
