Urllib request takes too long to respond [duplicate] - python

I am trying to open a URL with Python 3:
import urllib.request
fp = urllib.request.urlopen("http://lebed.com/")
mybytes = fp.read()
mystr = mybytes.decode("utf8")
fp.close()
print(mystr)
But it hangs on the second line.
What is the reason for this problem, and how can I fix it?

I suspect the server is rejecting requests that look like they come from a robot. You need to fake a browser visit by sending browser-like headers along with your request:
import urllib.request
url = "http://lebed.com/"
req = urllib.request.Request(
    url,
    data=None,
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
    }
)
f = urllib.request.urlopen(req)
I tried this on my system and it works.

I agree with Arpit Solanki. Below is the output for a failed request vs. a successful one.
Failed
GET / HTTP/1.1
Accept-Encoding: identity
Host: www.lebed.com
Connection: close
User-Agent: Python-urllib/3.5
Success
GET / HTTP/1.1
Accept-Encoding: identity
Host: www.lebed.com
Connection: close
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36
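As a side note on the original symptom (the call hanging indefinitely): urlopen accepts a timeout argument, so the request can fail fast instead of blocking. A minimal sketch combining it with the spoofed User-Agent from the answer above:

import urllib.request

req = urllib.request.Request(
    "http://lebed.com/",
    headers={'User-Agent': 'Mozilla/5.0'}  # any browser-like UA string
)
try:
    # timeout is in seconds; both URLError and socket.timeout derive from OSError
    with urllib.request.urlopen(req, timeout=10) as fp:
        mystr = fp.read().decode("utf8")
    print(mystr[:200])
except OSError as e:
    print("request failed:", e)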

Related

Python Requests Get not Working

I have a simple GET request I'd like to make using Python's Requests library.
import requests
HEADERS = {'user-agent': ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)'
                          'AppleWebKit/537.36 (KHTML, like Gecko)'
                          'Chrome/45.0.2454.101 Safari/537.36'),
           'referer': 'http://stats.nba.com/scores/'}
url = 'http://stats.nba.com/stats/playbyplayv2?EndPeriod=10&EndRange=55800&GameID=0021500281&RangeType=2&Season=2016-17&SeasonType=Regular+Season&StartPeriod=1&StartRange=0'
response = requests.get(url, timeout=5, headers=HEADERS)
However, when I make the requests.get call, I get the error requests.exceptions.ReadTimeout: HTTPConnectionPool(host='stats.nba.com', port=80): Read timed out. (read timeout=5). But I am able to copy/paste that URL into my browser and view the resulting JSON. Why is requests not able to get the result?
Your HEADERS format is wrong: the implicit string concatenation joins the fragments without spaces, so the User-Agent value comes out mangled. I tried with this code and it worked without any issues:
import requests
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36',
}
url = 'http://stats.nba.com/stats/playbyplayv2?EndPeriod=10&EndRange=55800&GameID=0021500281&RangeType=2&Season=2016-17&SeasonType=Regular+Season&StartPeriod=1&StartRange=0'
response = requests.get(url, timeout=5, headers=HEADERS)
print(response.text)
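For what it's worth, the original multi-line layout also works once each fragment ends in a space: Python's implicit string concatenation joins adjacent literals verbatim, so without those spaces the User-Agent comes out as one mangled token. A sketch of the corrected dict (note the trailing spaces):

HEADERS = {'user-agent': ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) '
                          'AppleWebKit/537.36 (KHTML, like Gecko) '
                          'Chrome/45.0.2454.101 Safari/537.36'),
           'referer': 'http://stats.nba.com/scores/'}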

Able to see image in browser, but urllib.urlretrieve() fails to download it. How can I download it?

Image path --> http://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg
Code I am using
import urllib
urllib.urlretrieve("https://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg" , "photu.jpg")
What it returns (returns same thing for successful or unsuccessful attempts)
('photu.jpg', <httplib.HTTPMessage instance at 0x7fe3cfb27d88>)
Can someone help?
You need to fake the User-Agent to bypass this restriction imposed by the web server.
I used Python 3 and the requests library, and managed to get the picture:
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
url = 'https://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg'
res = requests.get(url, headers=headers)
with open('photo.jpg', 'wb') as W:
    W.write(res.content)
This might help.
import urllib
f = open('photu.jpg','wb')
f.write(urllib.urlopen('https://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg').read())
f.close()
Since you're sending a raw HTTP request without a browser-like User-Agent header, the server is not letting the request through. You can spoof a User-Agent in the headers and it will work just as it does in the browser.
url = "https://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg"
req = urllib.request.Request(
url,
data=None,
headers={
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
}
)
with open('image.jpg', 'wb') as img_file:
img_file.write(urllib.request.urlopen(req).read())
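If you specifically want to keep urlretrieve, one option (a sketch, not the only approach) is to install a global opener carrying the browser-like User-Agent; urlretrieve then uses it for its requests, since it takes no headers argument of its own:

import urllib.request

# the installed opener's headers apply to all subsequent urllib calls,
# including urlretrieve
opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36')]
urllib.request.install_opener(opener)

urllib.request.urlretrieve(
    "https://markinternational.info/data/out/366/221983609-black-hd-desktop-wallpaper.jpg",
    "photu.jpg")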

python user-agent xpath amazon

I am trying to scan the HTML in two requests. The first request works, but on the second one the HTML elements I am trying to locate are not visible; it gets the wrong page.
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
page = requests.get('https://www.amazon.com/gp/aw/ol/B00DZKQSRQ/ref=mw_dp_olp?ie=UTF8&condition=new', headers=headers)
newHeader = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36'}
pagePrice = requests.get('https://www.amazon.com/gp/aw/ol/B01EQJU8AW/ref=mw_dp_olp?ie=UTF8&condition=new', headers=newHeader)
The first request works fine and gets me the good HTML.
The second request gives bad HTML.
I saw this package, but had no success with it:
https://pypi.python.org/pypi/fake-useragent
And I saw this topic, which went unanswered:
Double user-agent tag, "user-agent: user-agent: Mozilla/"
Thank you very much!
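For reference, a minimal sketch of how the fake-useragent package mentioned above is typically used (assuming pip install fake-useragent; whether a rotated User-Agent is enough to get the right Amazon page back is a separate question):

import requests
from fake_useragent import UserAgent

ua = UserAgent()
# ua.random returns a randomly chosen real-world User-Agent string
headers = {'User-Agent': ua.random}
page = requests.get('https://www.amazon.com/gp/aw/ol/B00DZKQSRQ/ref=mw_dp_olp?ie=UTF8&condition=new',
                    headers=headers)
print(page.status_code)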

Website blocking curl even with a real browser's headers

I noticed that http://www.momondo.com.cn/ is using some magic technology:
curl doesn't work on it. The URL displays fine in a web browser, but curl always returns a timeout, even when I add all of the headers like a web browser would.
I also tried Python requests and urllib2, but they didn't work either.
C:\Users\Administrator>curl -v -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.89 Safari/537.36" -H "Connection: Keep-Alive" -H "Accept-Encoding:gzip, deflate, sdch" -H "Cache-Control:no-cache" -H "Upgrade-Insecure-Requests:1" -H "Accept-Language:zh-CN,zh;q=0.8" -H "Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" http://www.momondo.com.cn/
* About to connect() to www.momondo.com.cn port 80 (#0)
* Trying 184.50.91.106...
* connected
* Connected to www.momondo.com.cn (184.50.91.106) port 80 (#0)
> GET / HTTP/1.1
> Host: www.momondo.com.cn
> User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.89 Safari/537.36
> Connection: Keep-Alive
> Accept-Encoding:gzip, deflate, sdch
> Cache-Control:no-cache
> Upgrade-Insecure-Requests:1
> Accept-Language:zh-CN,zh;q=0.8
> Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
>
Why and how does this happen? How does Momondo evade curl?
How are you setting up the request? If you are using requests, you should use a Session object and set the headers there so they can be reused. They don't seem to be doing anything special: connecting to the site directly with telnet (i.e. telnet www.momondo.com.cn 80) and sending the headers generated by the browser (captured via tcpdump, just to be sure) returned content rather than hanging until a timeout. It also pays to check what CDN (content delivery network) the site sits behind; in this case the address resolves to a subdomain of akamaiedge.net, so it may be worth investigating why they might have blocked you.
Anyway, using the headers you have supplied with a requests.Session object, a response was generated.
>>> from requests import Session
>>> session = Session()
>>> session.headers # check the default headers
{'User-Agent': 'python-requests/2.12.5', 'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'}
>>> session.headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
>>> session.headers['Accept-Language'] = 'en-GB,en-US;q=0.8,en;q=0.6,zh-TW;q=0.4'
>>> session.headers['Cache-Control'] = 'max-age=0'
>>> session.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.89 Safari/537.36'
>>> response = session.get('http://www.momondo.com.cn/')
>>> response
<Response [200]>
Doesn't seem to be anything magic at all.
I figured out the reason: momondo uses the following checks to block non-browser clients.
1. It inspects the User-Agent header; it cannot be curl's default UA.
2. It inspects the Connection header; it must be "keep-alive" rather than the "Keep-Alive" I used in my initial test.
3. It inspects the Accept-Encoding header; it cannot be empty, but any value works.
Finally, I can use curl to get the content now:
curl -v -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X
10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.89
Safari/537.36" -H "Connection: keep-alive" -H "Accept-Encoding:
nothing" http://www.momondo.com.cn/
BTW, I have been doing web scraping for about seven years, and this is the first time I have come across a website using this anti-scraping method. Worth noting.
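For reference, a rough requests equivalent of that working curl invocation (header values copied from the command above; per the findings, 'nothing' is just an arbitrary non-empty Accept-Encoding value):

import requests

headers = {
    'User-Agent': ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/44.0.2403.89 Safari/537.36'),
    'Connection': 'keep-alive',    # lowercase variant, per finding 2 above
    'Accept-Encoding': 'nothing',  # must be non-empty; the value itself is arbitrary
}
response = requests.get('http://www.momondo.com.cn/', headers=headers, timeout=10)
print(response.status_code)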

HTTP Error 406: Not Acceptable Python urllib2

I get the following error with the code below.
HTTP Error 406: Not Acceptable Python urllib2
This is my first step before I use BeautifulSoup to parse the page.
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
url = "http://www.choicemoney.us/retail.php"
response = opener.open(url)
All help greatly appreciated.
The resource identified by the request is only capable of generating
response entities which have content characteristics not acceptable
according to the accept headers sent in the request. [RFC2616]
Based on the code and what the RFC describes, I assume that you need to set both the key and the value of the User-Agent header correctly.
These are correct examples:
Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11
Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A
Just replace your header line with the following.
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A')]
I believe @ipinak's answer is correct.
urllib2 actually provides a default User-Agent that works here, so if you delete opener.addheaders = [('User-agent', 'Mozilla/5.0')] the response should have status code 200.
I recommend the popular requests library for such jobs as its API is much easier to use.
url = "http://www.choicemoney.us/retail.php"
resp = requests.get(url)
print resp.status_code # 200
print resp.content # can be used in your beautifulsoup.
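Since a 406 is, per the RFC excerpt quoted above, a content-negotiation failure, it may also help to send an explicit Accept header alongside the full User-Agent. A sketch in the question's urllib2 style (untested against this particular site):

import urllib2

opener = urllib2.build_opener()
opener.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A'),
    ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
]
response = opener.open("http://www.choicemoney.us/retail.php")
print response.code  # expect 200 once the headers are acceptable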
