Understanding the parameters in google.search() - python

I am attempting to get the top 5 URLs for specific book abbreviations. I have the parameter num set to 5, which I assumed would return the top 5 results, and stop=1, which I interpreted to mean that after the 5 results were returned, no more HTTP requests would be sent. For some reason, when I set num=5 and stop=1, I only get 3 results, and I get the same 3 results for the subsequent titles searched (which obviously should be different). Additionally, I am hitting HTTP Error 503 while testing, despite sleeping inside the loop, which others on this site suggested would prevent that error. My code is as follows...
import random
import time
import google  # missing from the pasted snippet; the traceback shows the "google" search package

count = 0
my_file = open('sometextfile.txt', 'r')
for aline in my_file:
    print("******************************")
    print(aline)
    count += 1
    record_list = aline.split("\t")
    if "." in record_list[1]:
        search_results = google.search(record_list[2], num=5, stop=1, pause=3.)
        for result in search_results:
            print(result)
    time.sleep(random.randrange(0, 3))
and has the following output...
4 Environmental and Behaviour ['0143-005X']
******************************
4 Sustainable Cities and Society ['0143-005X']
******************************
4 Chicago to LA: Making sense of urban theory ['0272-4944']
******************************
4 As adopted by the International Health Conference ['0272-4944']
******************************
5 J. Wetl. ['1442-9985']
https://www.ncbi.nlm.nih.gov/nlmcatalog?term=1442-9985%5BISSN%5D
http://www.wiley.com/bw/journal.asp?ref=1442-9985
http://www.wiley.com/WileyCDA/WileyTitle/productCd-AEC.html
******************************
5 Curr. Opin. Environ. Sustain. ['1442-9985']
https://www.ncbi.nlm.nih.gov/nlmcatalog?term=1442-9985%5BISSN%5D
http://www.wiley.com/bw/journal.asp?ref=1442-9985
http://www.wiley.com/WileyCDA/WileyTitle/productCd-AEC.html
******************************
5 For. Policy Econ. ['1442-9985']
https://www.ncbi.nlm.nih.gov/nlmcatalog?term=1442-9985%5BISSN%5D
http://www.wiley.com/bw/journal.asp?ref=1442-9985
http://www.wiley.com/WileyCDA/WileyTitle/productCd-AEC.html
******************************
5 For. Policy Econ. ['1442-9985']
https://www.ncbi.nlm.nih.gov/nlmcatalog?term=1442-9985%5BISSN%5D
http://www.wiley.com/bw/journal.asp?ref=1442-9985
http://www.wiley.com/WileyCDA/WileyTitle/productCd-AEC.html
******************************
5 Asia. World Dev. ['1442-9985']
Traceback (most recent call last):
File "C:/Users/Peter/Desktop/Programming/Ibata Arens Project/google_search.py", line 27, in <module>
for result in search_results:
File "C:\Users\Peter\Anaconda3\lib\site-packages\google\__init__.py", line 304, in search
html = get_page(url)
File "C:\Users\Peter\Anaconda3\lib\site-packages\google\__init__.py", line 121, in get_page
response = urlopen(request)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 163, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 472, in open
response = meth(req, response)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 504, in error
result = self._call_chain(*args)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 444, in _call_chain
result = func(*args)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 696, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 472, in open
response = meth(req, response)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 510, in error
return self._call_chain(*args)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 444, in _call_chain
result = func(*args)
File "C:\Users\Peter\Anaconda3\lib\urllib\request.py", line 590, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 503: Service Unavailable
I am also wondering if it would be better to simply use urllib and scrape the returned HTML instead, as my goal is simply to retrieve the ISSNs for each abbreviated book title.
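A common way to soften intermittent 503s like the one above (an assumption on my part; Google may still rate-limit the scraping package regardless) is to retry with exponential backoff and jitter instead of a fixed sleep. A minimal sketch, where `fetch` stands in for whatever call raises the error:

```python
import random
import time
from urllib.error import HTTPError

def fetch_with_backoff(fetch, max_retries=4, base_delay=5.0):
    """Call fetch(), retrying on HTTP 503 with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except HTTPError as err:
            if err.code != 503 or attempt == max_retries - 1:
                raise  # not a 503, or out of retries: propagate the error
            # Wait longer after each failure (5s, 10s, 20s, ...) plus jitter,
            # so repeated requests don't land in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 2)
            time.sleep(delay)
```

You would wrap each `google.search(...)` iteration in a small function and pass it to `fetch_with_backoff`.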

Related

Issue with BingX Swap Api Python HTTP Error 405: Method Not Allowed

I'm using urllib.request.
I'm trying to use the BingX Swap Api with Python and I'm getting this error:
Traceback (most recent call last):
File "Desktop\MyBot\MyApi.py", line 118, in <module>
main()
File "Desktop\MyBot\MyApi.py", line 103, in main
print("getLatestPrice:", getLatestPrice())
^^^^^^^^^^^^^^^^
File "Desktop\MyBot\MyApi.py", line 70, in getLatestPrice
return post(url, paramsStr)
^^^^^^^^^^^^^^^^^^^^
File "Desktop\MyBot\MyApi.py", line 19, in post
return urllib.request.urlopen(req).read()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "Python\Python311\Lib\urllib\request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 405: Method Not Allowed
I don't know what I'm doing wrong; I think my code is OK.
This is my code:
import urllib.request
import urllib.parse  # used below via urllib.parse.quote, but never imported in the original
import json
import base64
import hmac
import time

APIURL = "https://api-swap-rest.bingbon.pro"
APIKEY = "xxxxxxxxxxxxx"
SECRETKEY = "xxxxxxxxxxxxx"

def genSignature(path, method, paramsMap):
    sortedKeys = sorted(paramsMap)
    paramsStr = "&".join(["%s=%s" % (x, paramsMap[x]) for x in sortedKeys])
    paramsStr = method + path + paramsStr
    return hmac.new(SECRETKEY.encode("utf-8"), paramsStr.encode("utf-8"), digestmod="sha256").digest()

def post(url, body):
    req = urllib.request.Request(url, data=body.encode("utf-8"), headers={'User-Agent': 'Mozilla/5.0'})
    return urllib.request.urlopen(req).read()

def getLatestPrice():
    paramsMap = {
        "symbol": "BTC-USDT",
    }
    sortedKeys = sorted(paramsMap)
    paramsStr = "&".join(["%s=%s" % (x, paramsMap[x]) for x in sortedKeys])
    paramsStr += "&sign=" + urllib.parse.quote(base64.b64encode(genSignature("/api/v1/market/getLatestPrice", "GET", paramsMap)))
    url = "%s/api/v1/market/getLatestPrice" % APIURL
    return post(url, paramsStr)

def main():
    print("getLatestPrice:", getLatestPrice())

if __name__ == "__main__":
    main()
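One thing worth checking (an assumption, not confirmed against the BingX docs): the signature is computed for method "GET", but passing data= to urllib.request.Request makes urlopen issue a POST, and a POST against a GET-only endpoint is exactly what produces 405 Method Not Allowed. A sketch of the difference, and of sending the signed parameters in the URL instead of the body:

```python
import urllib.request

def get(url, params_str):
    # Appending the signed query string to the URL (and passing no data=)
    # makes urlopen issue a GET rather than a POST.
    req = urllib.request.Request(url + "?" + params_str,
                                 headers={'User-Agent': 'Mozilla/5.0'})
    return urllib.request.urlopen(req).read()

# The method urllib will use is determined by whether data= is present:
plain = urllib.request.Request("https://example.com/api?a=1")
with_body = urllib.request.Request("https://example.com/api", data=b"a=1")
print(plain.get_method())      # GET
print(with_body.get_method())  # POST
```

If the endpoint really is GET-only, replacing the `post(url, paramsStr)` call with a `get`-style request like the one above should make the request method match the signature.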

I have this error, what is the solution for it? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 1 year ago.
I have an error and I would like help with it, please.
The error:
Traceback (most recent call last):
File "c:/porgrammer/pythons/New folder/Main.py", line 6, in <module>
yt = YouTube(url)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\__main__.py", line 91, in __init__
self.prefetch()
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\__main__.py", line 181, in prefetch
self.vid_info_raw = request.get(self.vid_info_url)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\request.py", line 36, in get
return _execute_request(url).read().decode("utf-8")
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\request.py", line 24, in _execute_request
return urlopen(request) # nosec
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open
response = meth(req, response)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 564, in error
result = self._call_chain(*args)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 756, in http_error_302
return self.parent.open(new, timeout=req.timeout)
response = meth(req, response)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 570, in error
return self._call_chain(*args)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
PS C:\porgrammer\pythons\New folder> & C:/Users/belba/AppData/Local/Programs/Python/Python36-32/python.exe "c:/porgrammer/pythons/New folder/Main.py"
Enter The Link Of The Video: https://www.youtube.com/watch?v=UoHpvfgj3WA
The Link Is : https://www.youtube.com/watch?v=UoHpvfgj3WA
Traceback (most recent call last):
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1392, in connect
super().connect()
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\socket.py", line 704, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/porgrammer/pythons/New folder/Main.py", line 6, in <module>
yt = YouTube(url)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\__main__.py", line 91, in __init__
self.prefetch()
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\__main__.py", line 162, in prefetch
self.watch_html = request.get(url=self.watch_url)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\request.py", line 36, in get
return _execute_request(url).read().decode("utf-8")
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pytube\request.py", line 24, in _execute_request
return urlopen(request) # nosec
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 526, in open
response = self._open(req, data)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 544, in _open
'_open', req)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Users\username\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 11001] getaddrinfo failed>
The code:
from pytube import YouTube

url = input("Enter The Link Of The Video: ")
print("The Link Is : " + url)
print("Copyright Srj")
yt = YouTube(url)
title = yt.title * 60
print("The Title Of The Video Is : " + title)
print("The Views Of The Video Is :" + yt.views)
print("The Lenght Of The Video Is : " + yt.length)
print("The Channel Who Post The Video Is : " + yt.rating)
print("The Description Of The Video Is : " + yt.description)
print("Copyright Srj")
YN = input("Are You Sure You Want To Install The Video?")
if YN == "yes":
    ys = yt.streams.get_highest_resolution()
    print("Downloading.......")
    ys.download("C:\YT_DOWNLOADS")
    print("Downlaod Completed!!")
    print("Copyright Srj")
elif YN == "no":
    print("No Proplem")
    print("Copyright Srj")
else:
    print("Please Retry Again Because It Is Not yes Or no")
    print("Copyright Srj")
I modified some things in the code but never got the same error as you.
This code works for me, even with the exact link of your video:
from pytube import YouTube

url = input("Enter The Link Of The Video: ")
print("The Link Is : " + url)
print("Copyright Srj")
yt = YouTube(url)
title = yt.title
print("The Title Of The Video Is : " + title)
print(f"The Views Of The Video Is : {yt.views}")
print(f"The Lenght Of The Video Is : {yt.length}")
print(f"The Channel Who Post The Video Is : {yt.rating}")
print("The Description Of The Video Is : " + yt.description)
print("Copyright Srj")
while True:
    YN = str(input("Are You Sure You Want To Install The Video ? (yes / no)\n"))
    if YN.lower() == "yes":
        ys = yt.streams.get_highest_resolution()
        print("Downloading.......")
        ys.download("C:\YT_DOWNLOADS")
        print("Downlaod Completed!!")
        print("Copyright Srj")
        break
    elif YN.lower() == "no":
        print("No Proplem")
        print("Copyright Srj")
        break
    else:
        print("Please Retry Again Because It Is Not yes Or no")
        print("Copyright Srj")
Hope this works for you :)

Raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 400: Bad Request

So I was trying to follow this tutorial https://www.youtube.com/watch?v=Lg4T9iJkwhE&t=155s to achieve image detection with YOLO.
When I try to run this code to auto-download images of cell phones, it doesn't work:
import os
import urllib.request as ulib
from bs4 import BeautifulSoup as Soup
import json

url_a = 'https://www.google.com/search?ei=1m7NWePfFYaGmQG51q7IBg&hl=en&q={}'
url_b = '\&tbm=isch&ved=0ahUKEwjjovnD7sjWAhUGQyYKHTmrC2kQuT0I7gEoAQ&start={}'
url_c = '\&yv=2&vet=10ahUKEwjjovnD7sjWAhUGQyYKHTmrC2kQuT0I7gEoAQ.1m7NWePfFYaGmQG51q7IBg'
url_d = '\.i&ijn=1&asearch=ichunk&async=_id:rg_s,_pms:s'
url_base = ''.join((url_a, url_b, url_c, url_d))
headers = {'User-Agent': 'Chrome/41.0.2228.0 Safari/537.36'}

def get_links(search_name):
    search_name = search_name.replace(' ', '+')
    url = url_base.format(search_name, 0)
    request = ulib.Request(url, None, headers)
    json_string = ulib.urlopen(request).read()
    page = json.loads(json_string)
    new_soup = Soup(page[1][1], 'lxml')
    images = new_soup.find_all('img')
    links = [image['src'] for image in images]
    return links

def save_images(links, search_name):
    directory = search_name.replace(' ', '_')
    if not os.path.isdir(directory):
        os.mkdir(directory)
    for i, link in enumerate(links):
        savepath = os.path.join(directory, '{:06}.png'.format(i))
        ulib.urlretrieve(link, savepath)

if __name__ == '__main__':
    search_name = 'cell phones'
    links = get_links(search_name)
    save_images(links, search_name)
I got a bunch of errors like this:
C:\Python36\python.exe "C:/dark/darkflow-master/new_model_data/part5 - get_images.py"
Traceback (most recent call last):
File "C:/dark/darkflow-master/new_model_data/part5 - get_images.py", line 39, in <module>
links = get_links(search_name)
File "C:/dark/darkflow-master/new_model_data/part5 - get_images.py", line 19, in get_links
json_string = ulib.urlopen(request).read()
File "C:\Python36\lib\urllib\request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "C:\Python36\lib\urllib\request.py", line 532, in open
response = meth(req, response)
File "C:\Python36\lib\urllib\request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python36\lib\urllib\request.py", line 570, in error
return self._call_chain(*args)
File "C:\Python36\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "C:\Python36\lib\urllib\request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
Process finished with exit code 1
Someone please help me fix this mess, thanks.
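A 400 Bad Request often means the URL itself is malformed; the hand-built URL above contains stray backslashes ('\&', '\.') and relies on manual '+' substitution. A safer pattern (a sketch, not the tutorial's code; the parameter names mirror the ones in the question) is to build the query string with urllib.parse.urlencode, which percent-escapes everything for you:

```python
from urllib.parse import urlencode

def build_search_url(base, query, start=0):
    # urlencode escapes spaces and special characters, so the resulting
    # URL is always well-formed; no manual replace(' ', '+') needed.
    params = {'q': query, 'tbm': 'isch', 'start': start, 'hl': 'en'}
    return base + '?' + urlencode(params)

print(build_search_url('https://www.google.com/search', 'cell phones'))
```

Note that even with a clean URL, Google's internal image-search endpoints change frequently, so the tutorial's response format may no longer parse.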

Using Python Script to post data to web server

I am using Python 2.7.3 and I am trying to post data to my local web server. The data I am posting is temperature readings from my Raspberry Pi. I know the URL is right because if I use the Postman Chrome plugin the data is successfully posted and I get a return message. In Postman, however, I can only use form-data and NOT x-www-form-urlencoded, which is how my Python script has the content type set up. Can I change it to form-data?
Python Code:
import os
import glob
import time
import threading
import urllib
import urllib2

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'

def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines

def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        temperature = {'tempf': temp_f, 'tempc': temp_c}
        return temperature

def post():
    threading.Timer(1800.0, post).start()
    temperature = read_temp()
    data = temperature
    data['room'] = 'Server Room'
    print(data)
    data = urllib.urlencode(data)
    path = 'http://client.pathtophppage'  # the url you want to POST to
    req = urllib2.Request(path, data)
    req.add_header("Content-type", "application/x-www-form-urlencoded")
    page = urllib2.urlopen(req).read()

post()
And the Error:
pi#raspberrypi ~/Documents $ python Temperature.py
{'tempc': 22.0, 'tempf': 71.6, 'room': 'Server Room'}
Traceback (most recent call last):
File "Temperature.py", line 49, in <module>
post()
File "Temperature.py", line 45, in post
page=urllib2.urlopen(req).read()
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 439, in error
result = self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 626, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error
Save your time and use the requests library for HTTP requests. A simple example:
import requests

url = 'http://url.com'
query = {'field': value}
res = requests.post(url, data=query)
print(res.text)
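To answer the form-data part of the question: requests encodes the body as multipart/form-data when you pass the fields via files= instead of data=. A sketch (the URL is a placeholder; the field names are taken from the question's payload):

```python
import requests

url = 'http://client.example/endpoint'  # placeholder URL
payload = {'tempf': '71.6', 'tempc': '22.0', 'room': 'Server Room'}

# (None, value) tuples tell requests to encode each field as a plain
# multipart part with no filename, i.e. an ordinary form-data field.
files = {name: (None, value) for name, value in payload.items()}

# Preparing the request (without sending it) shows the content type:
prepared = requests.Request('POST', url, files=files).prepare()
print(prepared.headers['Content-Type'])  # multipart/form-data; boundary=...
# requests.post(url, files=files) would actually send this body.
```

This matches what Postman's "form-data" mode sends, so it should reproduce the working request from the browser plugin.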

urllib2.HTTPError: HTTP Error 500: Internal Server Error

if data.find('!exploits') != -1:
    nick = data.split('!')[0].replace(':', '')
    results = api.exploitdb.search(arg)
    sck.send('PRIVMSG ' + chan + " :" + ' Results found: %s' % results['total'] + '\r\n')
    for exploit in results['matches'][:5]:
        sck.send('PRIVMSG ' + chan + "" + '%s:' % (exploit['description'] + '\r\n'))
This little script searches exploit-db for known exploits. It seems not to work when I use it within IRC, but it's fine when I run it alone.
By alone, I mean just this:
from shodan import WebAPI

SHODAN_API_KEY = "MY API KEY"
api = WebAPI(SHODAN_API_KEY)
results = api.exploitdb.search('PHP')
print 'Results found: %s' % results['total']
for exploit in results['matches'][:5]:
    print '%s:' % (exploit['description'])
That one works perfectly, but when I try to use it with IRC I get this error:
Traceback (most recent call last):
File "C:\Users\Rabia\Documents\scripts\client.py", line 232, in <module>
results = api.exploitdb.search(arg)
File "C:\Python26\lib\site-packages\shodan\api.py", line 63, in search
return self.parent._request('exploitdb/search', dict(q=query, **kwargs))
File "C:\Python26\lib\site-packages\shodan\api.py", line 116, in _request
data = urlopen(self.base_url + function + '?' + urlencode(params)).read()
File "C:\Python26\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python26\lib\urllib2.py", line 397, in open
response = meth(req, response)
File "C:\Python26\lib\urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python26\lib\urllib2.py", line 435, in error
return self._call_chain(*args)
File "C:\Python26\lib\urllib2.py", line 369, in _call_chain
result = func(*args)
File "C:\Python26\lib\urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error
Your code should be OK; this looks like a server-side problem. A 500 Internal Server Error means the server could not fulfill the request because of an unexpected condition. Check the returned status body to see what the server reports.
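One way to check what the server reports (a sketch; it assumes the server includes a useful message in the error body, which not all servers do): the HTTPError object raised by urlopen is itself a response, so you can read its body for the server's own explanation.

```python
import io
from urllib.error import HTTPError
from urllib.request import urlopen

def fetch(url):
    try:
        return urlopen(url).read()
    except HTTPError as err:
        # HTTPError doubles as a file-like response object: its body often
        # carries the server's own description of what went wrong.
        detail = err.read().decode('utf-8', errors='replace')
        print('HTTP %d from server, body: %s' % (err.code, detail[:200]))
        raise

# HTTPError can be inspected the same way when constructed directly:
err = HTTPError('http://example.com', 500, 'Internal Server Error',
                None, io.BytesIO(b'server said no'))
print(err.code, err.read())
```

In the IRC case, the body may reveal whether the query string (built from `arg`) is what the exploitdb endpoint expects.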
