I use urllib.request and regex to parse HTML, but when I write the text to a JSON file there are double backslashes in it. How can I replace each double backslash with a single one?
I have looked at many solutions but none of them have worked.
import io
import json
import re
from urllib.request import Request, urlopen

headers = {}
headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'
req = Request('https://www.manga-tr.com/manga-list.html', headers=headers)
response = urlopen(req).read()
a = re.findall(r'<b><a[^>]* href="([^"]*)"',str(response))
sub_req = Request('https://www.manga-tr.com/'+a[3], headers=headers)
sub_response = urlopen(sub_req).read()
manga = {}
manga['manga'] = []
manga_subject = re.findall(r'<h3>Tan.xc4.xb1t.xc4.xb1m<.h3>[^<]*.n.t([^<]*).t',str(sub_response))
manga['manga'].append({'msubject': manga_subject })
with io.open('allmanga.json', 'w', encoding='utf-8-sig') as outfile:
    outfile.write(json.dumps(manga, indent=4))
This is my JSON file:
{
    "manga": [
        {
            "msubject": [
                " Minami Ria 16 ya\\xc5\\x9f\\xc4\\xb1ndad\\xc4\\xb1r. \\xc4\\xb0lk erkek arkada\\xc5\\x9f\\xc4\\xb1 sakatani jirou(16) ile yakla\\xc5\\x9f\\xc4\\xb1k 6 ayd\\xc4\\xb1r beraberdir. Herkes taraf\\xc4\\xb1ndan \\xc3\\xa7ifte kumru olarak g\\xc3\\xb6r\\xc3\\xbclmelerine ra\\xc4\\x9fmen ili\\xc5\\x9fkilerinde %1\\'lik bir eksiklik vard\\xc4\\xb1r. Bu eksikli\\xc4\\x9fi tamamlayabilecekler mi?"
            ]
        }
    ]
}
Why Is This Happening?
The error arises when str is used to convert a bytes object to a str; this does not do the conversion in the desired way.
a = re.findall(r'<b><a[^>]* href="([^"]*)"',str(response))
# ^^^
For example, if the response is the word "Tanıtım", it would be expressed in UTF-8 as b'Tan\xc4\xb1t\xc4\xb1m'. If you then use str on that, you get:
In [1]: response = b'Tan\xc4\xb1t\xc4\xb1m'
In [2]: str(response)
Out[2]: "b'Tan\\xc4\\xb1t\\xc4\\xb1m'"
If you convert this to JSON, you'll see double backslashes (which are really just ordinary backslashes, encoded as JSON).
In [3]: import json
In [4]: print(json.dumps(str(response)))
"b'Tan\\xc4\\xb1t\\xc4\\xb1m'"
The correct way to convert a bytes object back to a str is by using the decode method, with the appropriate encoding:
In [5]: response.decode('UTF-8')
Out[5]: 'Tanıtım'
Note that the response is not valid UTF-8, unfortunately. The website operators appear to be serving corrupted data.
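This is also why the fix below passes 'replace' as the error handler: invalid bytes become the U+FFFD replacement character instead of raising an exception. A quick illustration (the trailing \xff is just an invented example of an invalid byte):
In [6]: b'Tan\xc4\xb1t\xc4\xb1m \xff'.decode('UTF-8', 'replace')
Out[6]: 'Tanıtım �'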
Quick Fix
Replace every call to str(response) with response.decode('UTF-8', 'replace') and update the regular expressions to match.
a = re.findall(
    # "r" prefix to string is unnecessary
    '<b><a[^>]* href="([^"]*)"',
    response.decode('UTF-8', 'replace'))

sub_req = Request('https://www.manga-tr.com/'+a[3],
                  headers=headers)
sub_response = urlopen(sub_req).read()

manga = {}
manga['manga'] = []

manga_subject = re.findall(
    # "r" prefix to string is unnecessary
    '<h3>Tanıtım</h3>([^<]*)',
    sub_response.decode('UTF-8', 'replace'))
manga['manga'].append({'msubject': manga_subject })

# io.open is the same as open
with open('allmanga.json', 'w', encoding='utf-8-sig') as fp:
    # json.dumps is unnecessary
    json.dump(manga, fp, indent=4)
Better Fix
Use "Requests"
The Requests library is much easier to use than urlopen. You will have to install it (with pip, apt, dnf, etc., whatever you use); it does not come with Python. It will look like this:
response = requests.get(
    'https://www.manga-tr.com/manga-list.html')
response.text then contains the decoded string; you don't need to decode it yourself. Easier!
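If you want to keep sending the same User-Agent header as before, requests accepts a headers dict as well; a minimal sketch (the header value is the one from the original code):
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
response = requests.get('https://www.manga-tr.com/manga-list.html', headers=headers)

response.content   # the raw bytes, if you ever need them
response.text      # the decoded str -- no manual .decode() required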
Use BeautifulSoup
The Beautiful Soup library can search through HTML documents, and it is more reliable and easier to use than regular expressions. It also needs to be installed. You might use it like this, for example, to find the summary on a manga page:
soup = BeautifulSoup(response.text, 'html.parser')
subject = soup.find('h3', text='Tanıtım').next_sibling.string
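Putting requests and BeautifulSoup together, a rough end-to-end sketch (the URL and the 'Tanıtım' heading come from the code above; the exact page structure is an assumption, so treat this as a starting point rather than a finished scraper):
import json
import requests
from bs4 import BeautifulSoup

BASE = 'https://www.manga-tr.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}

listing = requests.get(BASE + 'manga-list.html', headers=headers)
soup = BeautifulSoup(listing.text, 'html.parser')

# Collect the manga page links (assumes they sit inside <b><a href="..."> as in the regex above).
links = [a['href'] for b in soup.find_all('b') for a in b.find_all('a', href=True)]

manga = {'manga': []}
page = requests.get(BASE + links[3], headers=headers)
page_soup = BeautifulSoup(page.text, 'html.parser')
heading = page_soup.find('h3', text='Tanıtım')
if heading is not None:
    manga['manga'].append({'msubject': heading.next_sibling.string})

with open('allmanga.json', 'w', encoding='utf-8-sig') as fp:
    json.dump(manga, fp, indent=4, ensure_ascii=False)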
Summary
Here is a Gist containing a more complete example of what the scraper might look like.
Keep in mind that scraping a website can be a bit difficult, just because you might scrape 100 pages and then suddenly discover that something is wrong with your scraper, or you are hitting the website too hard, or something crashes and fails and you need to start over. So scraping well often involves rate-limiting, saving progress and caching responses, and (ideally) parsing robots.txt.
But Requests + BeautifulSoup will at least get you started. Again, see the Gist.
Related
I use Python's Requests library to access (public) ads.txt files:
import requests
r = requests.get('https://www.sicurauto.it/ads.txt')
print(r.text)
This works fine in most cases, but the text from the URL above begins with some strange symbols:
> ï»¿google.com, [...]
If I open the URL in my browser, I do not see these three symbols; the text begins with google.com, [...] I am a beginner when it comes to encodings and web protocols ... where might these odd symbols come from?
You need to specify the encoding (via r.encoding) before accessing r.text:
import requests
r = requests.get('https://www.sicurauto.it/ads.txt')
r.encoding = 'utf-8-sig' # specify UTF-8-sig encoding
print(r.text)
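The utf-8-sig codec strips the UTF-8 byte-order mark (the three bytes EF BB BF, which is exactly what those three odd characters are) while decoding. An alternative sketch, decoding the raw bytes yourself instead of setting r.encoding:
import requests

r = requests.get('https://www.sicurauto.it/ads.txt')
text = r.content.decode('utf-8-sig')  # r.content is the raw bytes; utf-8-sig drops the BOM
print(text)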
I am working on a Python web scraper to extract data from this webpage. It contains Latin characters like ą, č, ę, ė, į, š, ų, ū, ž. I use BeautifulSoup (UnicodeDammit) to detect the encoding:
from bs4 import UnicodeDammit

def decode_html(html_string):
    converted = UnicodeDammit(html_string)
    print(converted.original_encoding)
    if not converted.unicode_markup:
        raise UnicodeDecodeError(
            "Failed to detect encoding, tried [%s]",
            ', '.join(converted.tried_encodings))
    return converted.unicode_markup
The encoding that it always seems to use is "windows-1252". However, this turns characters like ė into ë and ų into ø when printing to file or console. I use the lxml library to scrape the data. So I would think that it uses the wrong encoding, but what's odd is that if I use lxml.html.open_in_browser(decoded_html), all the characters are back to normal. How do I print the characters to a file without all the mojibake?
This is what I am using for output:
def write(filename, obj):
    with open(filename, "w", encoding="utf-8") as output:
        json.dump(obj, output, cls=CustomEncoder, ensure_ascii=False)
    return
From the HTTP headers set on the specific webpage you tried to load:
Content-Type:text/html; charset=windows-1257
so decoding as Windows-1252 produces invalid results. BeautifulSoup made a guess (based on statistical models) and guessed wrong. As you noticed, using CP1252 instead leads to incorrect codepoints:
>>> 'ė'.encode('cp1257').decode('cp1252')
'ë'
>>> 'ų'.encode('cp1257').decode('cp1252')
'ø'
CP1252 is the fallback for the base character-set detection implementation in BeautifulSoup. You can improve the success rate of BeautifulSoup's character detection by installing an external library; both chardet and cchardet are supported. For this page, those two libraries guess MacCyrillic and ISO-8859-13, respectively (both wrong, but cchardet got pretty close, perhaps close enough).
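If you want to see what the detectors report for yourself, a small sketch (assuming chardet is installed; cchardet exposes a similar detect() function):
import chardet
import requests

resp = requests.get(url)             # url: the page from the question
print(chardet.detect(resp.content))  # e.g. {'encoding': 'MacCyrillic', 'confidence': ..., ...}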
In this specific case, you can make use of the HTTP headers instead. In requests, I generally use:
import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector
resp = requests.get(url)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
The above only uses the encoding from the response headers if the server explicitly set one, and only when the HTML itself does not declare an encoding. For text/* mime types, HTTP specifies that the response should be considered to use Latin-1, which requests adheres to, but that default would be incorrect for most HTML data.
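For this particular page the server does declare the charset (see the header quoted above), so requests on its own would already pick it up; a quick check, as a sketch:
import requests

resp = requests.get(url)                   # the page from the question
print(resp.headers.get('content-type'))    # 'text/html; charset=windows-1257' per the header above
print(resp.encoding)                       # requests derives 'windows-1257' from that header
print(resp.text[:200])                     # decoded with the declared charset, so ė and ų survive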
I wrote a program with urllib that gets all article titles from a webpage (in this case nytimes.com). There is only one problem: some titles contain an apostrophe, which shows up as an ugly "There\xe2\x80\x99s" when printed. So I tried to replace the \xe2\x80\x99 with a ' but it does not seem to work. I think there is a problem with the tuples; unfortunately, creating strings from the tuple results in the same problem.
import urllib.request
import urllib.parse
import re

url = 'https://www.nytimes.com/'
headers = {}
headers['User-Agent'] = 'Mozilla/5.0 (X11; Linux i686)'
req = urllib.request.Request(url, headers = headers)
resp = urllib.request.urlopen(req)
resp_data = resp.read()

par = re.findall(r'story-heading">(.*?)',str(resp_data))

for n in par:
    print(n[1])
    print(n[1].replace("\xe2\x80\x99","'"))
I tried to create string variables from the tuple but nothing is working. I know there is another solution to this with BeautifulSoup but I thought I'd try to find my own way.
You have to change this one line:
resp_data = resp.read()
to:
resp_data = resp.read().decode("utf8")
And the work will be done.
Explanation:
The website is using UTF-8 encoding (as I'm guessing), so you have to decode the returned bytes into a str; the text is then represented the way you intended.
PS: You can call resp.read().decode() without an argument; the decode() method then simply defaults to UTF-8.
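A small before/after on the string from the question, to make the difference concrete:
raw = b"There\xe2\x80\x99s"    # what resp.read() gives you: bytes
print(str(raw))                 # b'There\xe2\x80\x99s'  -> the escape sequences leak into your text
print(raw.decode("utf8"))       # There’s                -> \xe2\x80\x99 is the UTF-8 encoding of ’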
You are seeing the repr() of the string, hence the funny characters. If you want, coerce this to a string. See my results:
>>> print repr(n[1])
'There\xe2\x80\x99s'
>>> print str(n[1])
There’s
In Summary: wrap your n[1] in str()
I am using Python 3.x. While using urllib.request to download the webpage, I am getting a lot of \n in between. I am trying to remove them using the methods given in other threads of the forum, but I am not able to do so. I have used the strip() function and the replace() function... but no luck! I am running this code on Eclipse. Here is my code:
import urllib.request
#Downloading entire Web Document
def download_page(a):
    opener = urllib.request.FancyURLopener({})
    try:
        open_url = opener.open(a)
        page = str(open_url.read())
        return page
    except:
        return ""
raw_html = download_page("http://www.zseries.in")
print("Raw HTML = " + raw_html)
#Remove line breaks
raw_html2 = raw_html.replace('\n', '')
print("Raw HTML2 = " + raw_html2)
I am not able to figure out why there are so many \n in the raw_html variable.
Your download_page() function corrupts the HTML (the str() call); that is why you see \n (two characters, \ and n) in the output. Don't use .replace() or other similar workarounds; fix the download_page() function instead:
from urllib.request import urlopen
with urlopen("http://www.zseries.in") as response:
html_content = response.read()
At this point html_content contains a bytes object. To get it as text, you need to know its character encoding, e.g., by taking it from the Content-Type HTTP header:
encoding = response.headers.get_content_charset('utf-8')
html_text = html_content.decode(encoding)
See A good way to get the charset/encoding of an HTTP response in Python.
If the server doesn't pass charset in the Content-Type header, then there are complex rules for figuring out the character encoding of an HTML5 document; e.g., it may be specified inside the HTML document itself, as <meta charset="utf-8"> (you would need an HTML parser to get at it).
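One way to pick up such an in-document declaration, assuming BeautifulSoup is installed, is bs4's EncodingDetector helper (also used in one of the answers above); a sketch:
from urllib.request import urlopen
from bs4.dammit import EncodingDetector

with urlopen("http://www.zseries.in") as response:
    content = response.read()
    http_charset = response.headers.get_content_charset()

declared = EncodingDetector.find_declared_encoding(content, is_html=True)
html_text = content.decode(declared or http_charset or 'utf-8')  # last resort: assume UTF-8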
If you read the HTML correctly, you shouldn't see the literal characters \n in the page.
If you look at the source you've downloaded, the \n escape sequences you're trying to replace() are actually escaped themselves: \\n. Try this instead:
import urllib.request

def download_page(a):
    opener = urllib.request.FancyURLopener({})
    open_url = opener.open(a)
    page = str(open_url.read()).replace('\\n', '')
    return page
I removed the try/except clause because generic except statements without targeting a specific exception (or class of exceptions) are generally bad. If it fails, you have no idea why.
It seems like they are literal \n characters, so I suggest you do this:
raw_html2 = raw_html.replace('\\n', '')
I have the following code for urllib and BeautifulSoup:
getSite = urllib.urlopen(pageName) # open current site
getSitesoup = BeautifulSoup(getSite.read()) # reading the site content
print getSitesoup.originalEncoding
for value in getSitesoup.find_all('link'): # extract all <link> tags
    defLinks.append(value.get('href'))
The result of it:
/usr/lib/python2.6/site-packages/bs4/dammit.py:231: UnicodeWarning: Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.
"Some characters could not be decoded, and were "
And when i try to read the site i get:
�7�e����0*"I߷�G�H����F������9-������;��E�YÞBs���������㔶?�4i���)�����^W�����`w�Ke��%��*9�.'OQB���V��#�����]���(P��^��q�$�S5���tT*�Z
The page is in UTF-8, but the server is sending it to you in a compressed format:
>>> print getSite.headers['content-encoding']
gzip
You'll need to decompress the data before running it through Beautiful Soup. I got an error using zlib.decompress() on the data, but writing the data to a file and using gzip.open() to read from it worked fine--I'm not sure why.
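For reference, decompressing in memory does work; plain zlib.decompress() fails on gzip data because of the gzip header (probably the error mentioned above), but either of the following sketches handles it (getSite is the response object from the question):
import gzip
import io
import zlib

compressed = getSite.read()        # raw gzip-compressed bytes from the server

# Option 1: wrap the bytes in a file-like object and let gzip parse the header
html = gzip.GzipFile(fileobj=io.BytesIO(compressed)).read()

# Option 2: tell zlib to expect a gzip header (wbits = 16 + MAX_WBITS)
html = zlib.decompress(compressed, 16 + zlib.MAX_WBITS)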
BeautifulSoup works with Unicode internally; it'll try and decode non-unicode responses from UTF-8 by default.
It looks like the site you are trying to load is using a different encoding; for example, it could be UTF-16 instead:
>>> print u"""�7�e����0*"I߷�G�H����F������9-������;��E�YÞBs���������㔶?�4i���)�����^W�����`w�Ke��%��*9�.'OQB���V��#�����]���(P��^��q�$�S5���tT*�Z""".encode('utf-8').decode('utf-16-le')
뿯㞽뿯施뿯붿뿯붿⨰䤢럟뿯䞽뿯䢽뿯붿뿯붿붿뿯붿뿯붿뿯㦽붿뿯붿뿯붿뿯㮽뿯붿붿썙䊞붿뿯붿뿯붿뿯붿뿯붿铣㾶뿯㒽붿뿯붿붿뿯붿뿯붿坞뿯붿뿯붿뿯悽붿敋뿯붿붿뿯⪽붿✮兏붿뿯붿붿뿯䂽뿯붿뿯붿뿯嶽뿯붿뿯⢽붿뿯庽뿯붿붿붿㕓뿯붿뿯璽⩔뿯媽
It could be mac_cyrillic too:
>>> print u"""�7�e����0*"I߷�G�H����F������9-������;��E�YÞBs���������㔶?�4i���)�����^W�����`w�Ke��%��*9�.'OQB���V��#�����]���(P��^��q�$�S5���tT*�Z""".encode('utf-8').decode('mac_cyrillic')
пњљ7пњљeпњљпњљпњљпњљ0*"IяЈпњљGпњљHпњљпњљпњљпњљFпњљпњљпњљпњљпњљпњљ9-пњљпњљпњљпњљпњљпњљ;пњљпњљEпњљY√ЮBsпњљпњљпњљпњљпњљпњљпњљпњљпњљгФґ?пњљ4iпњљпњљпњљ)пњљпњљпњљпњљпњљ^Wпњљпњљпњљпњљпњљ`wпњљKeпњљпњљ%пњљпњљ*9пњљ.'OQBпњљпњљпњљVпњљпњљ#пњљпњљпњљпњљпњљ]пњљпњљпњљ(Pпњљпњљ^пњљпњљqпњљ$пњљS5пњљпњљпњљtT*пњљZ
But I have way too little information about what kind of site you are trying to load, and I can't read the output of either encoding. :-)
You'll need to decode the data before passing it to BeautifulSoup:
getSite = urllib.urlopen(pageName).read().decode('utf-16')
Generally, a website will declare in its headers what encoding was used, in the form of a Content-Type header (probably text/html; charset=utf-16 or similar).
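A rough sketch of reading that charset and decoding before handing the markup to BeautifulSoup, in this thread's Python 2 style (the string parsing of the header is mine; getSite, getSitesoup and pageName are names from the question):
import urllib
from bs4 import BeautifulSoup

getSite = urllib.urlopen(pageName)               # pageName as in the question
content_type = getSite.headers['content-type']   # e.g. 'text/html; charset=utf-8'

charset = 'utf-8'                                # fallback assumption if no charset is declared
if 'charset=' in content_type:
    charset = content_type.split('charset=')[-1].split(';')[0].strip()

getSitesoup = BeautifulSoup(getSite.read().decode(charset))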
I ran into the same problem, and as Leonard mentioned, it was due to a compressed format.
This link solved it for me; it says to add ('Accept-Encoding', 'gzip,deflate') to the request headers. For example:
# The snippet below is a fragment from inside a page-fetching helper; the
# fetch_page wrapper and its parameter names are illustrative.
def fetch_page(url, referer, uagent):
    opener = urllib2.build_opener()
    opener.addheaders = [('Referer', referer),
                         ('User-Agent', uagent),
                         ('Accept-Encoding', 'gzip,deflate')]
    usock = opener.open(url)
    url = usock.geturl()
    data = decode(usock)
    usock.close()
    return data
Where the decode() function is defined by:
import gzip
import zlib
import StringIO   # Python 2, matching the urllib2 code above

def decode(page):
    encoding = page.info().get("Content-Encoding")
    if encoding in ('gzip', 'x-gzip', 'deflate'):
        content = page.read()
        if encoding == 'deflate':
            data = StringIO.StringIO(zlib.decompress(content))
        else:
            data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(content))
        page = data.read()
    return page