Please check this question: Python oauth2 - making a request. I am working on Vimeo integration in my web application.
Initially I got an oauth_signature and had no problems (no errors). I tried the same steps again from the beginning, and now I'm getting ValueError: need more than 1 value to unpack when making this request:
>>> r = requests.get(url, headers=headers)
You can check out my code here https://gist.github.com/2949182
The error is
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/api.py", line 51, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/api.py", line 39, in request
return s.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/sessions.py", line 159, in request
headers[k] = header_expand(v)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/utils.py", line 152, in header_expand
for i, (value, params) in enumerate(headers):
ValueError: need more than 1 value to unpack
Thanks!
UPDATE
>>> headers
{'Authorization': u'oauth_body_hash=XXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_nonce=3454768,oauth_timestamp=1340035585,oauth_consumer_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_signature_method=HMAC-SHA1,oauth_version=1.0,oauth_signature=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_callback=http://127.0.0.1:8000/information/vimeo'}
To unpack a dictionary like that, the code would have to call .items(), i.e.:
for i, (value, params) in enumerate(headers.items()):
Since that is library code you can't change, what the error is telling you is that headers should not be a dictionary but a list of tuples. If you pass the header like this:
headers = [("Authorization", "Values")]
it should work.
EDIT: This doesn't work anymore. The dictionary version {"Authorization": "Values"} now works for me; maybe updating requests will help.
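For reference, here is a minimal sketch of assembling the comma-separated Authorization value as a single string so it can be passed in a plain str-to-str dict, which is what requests expects. All parameter values below are placeholders, not real credentials:

```python
# Build an OAuth 1.0-style Authorization header value from a dict of
# parameters. Every key/value here is an illustrative placeholder.
oauth_params = {
    "oauth_consumer_key": "CONSUMER_KEY",
    "oauth_nonce": "3454768",
    "oauth_signature": "SIGNATURE",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1340035585",
    "oauth_version": "1.0",
}

# requests wants headers as a mapping of str -> str, so join the
# parameters into one comma-separated string.
auth_value = ",".join("%s=%s" % (k, oauth_params[k]) for k in sorted(oauth_params))
headers = {"Authorization": auth_value}
```

The resulting dict can then be passed as `requests.get(url, headers=headers)`.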
Related
I have defined the following function to get the redirected URL using the Requests library. However, I get the error KeyError: 'location':
def get_redirected_url(r_url):
    r = requests.get(r_url, allow_redirects=False)
    url = r.headers['Location']
    return url
Calling the function
get_redirected_url('http://omgili.com/ri/.wHSUbtEfZQujfav8g98PjRMi_ogV.5EwBTfg476RyS2Gqya3tDAwNIv8Yi8wQ9AK4.U2mxeyq2_xbUjqsOx8NYY8r0qgxD.4Bm2SrouZKnrg1jqRxEfVmGbtTaKTaaDJtOjtS46fYr6A5UJoh9BYxVtDGJIsbSfgshRXR3FVr4-')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in get_redirected_url
File "/home/user/PycharmProjects/untitled/venv/lib/python3.6/site-packages/requests/structures.py", line 54, in __getitem__
return self._store[key.lower()][1]
KeyError: 'location'
Is it failing because the redirection waits for 5 seconds? If so, how do we incorporate that as well?
I have tried other answers like this and this, but I am unable to crack it.
It is simple: r.headers doesn't have a 'Location' key. The response may not be a redirect at all, or you may have used the wrong key.
Edit: the site you want to browse with requests is protected.
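A defensive variant is to look the header up with .get() so a missing Location falls back to the original URL instead of raising. This is only a sketch: the extract_location helper is a name made up here, and requests is imported lazily inside the network-touching function:

```python
def extract_location(headers, fallback):
    # dict-style .get() returns the fallback instead of raising
    # KeyError when the server did not send a Location header.
    return headers.get("Location", fallback)

def get_redirected_url(r_url):
    import requests  # imported lazily; only needed for the network call
    r = requests.get(r_url, allow_redirects=False)
    return extract_location(r.headers, r_url)
```

With this version, a non-redirecting (or bot-protected) page simply returns its own URL rather than crashing.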
def i(bot, update, args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/").json()
    coinId = infoCall['categories']
    update.message.reply_text(coinId)
I would like to append the args declared in coin=args to the end of the API request so that it retrieves the info my user requests, but this is the error I get:
coinId = infoCall ['categories']
KeyError: 'categories'
My guess is that it's not formatting the request correctly, so the API is returning a 404 instead of the info being requested.
def i(bot, update, args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/").json()
    infoCall = json.loads(infoCall) + str(coins)
    coinId = infoCall['categories']
    update.message.reply_text(str(coinId))
After adding this, this is the new error I get:
Traceback (most recent call last):
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\site-packages\telegram\ext\dispatcher.py", line 279, in process_update
handler.handle_update(update, self)
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\site-packages\telegram\ext\commandhandler.py", line 173, in handle_update
return self.callback(dispatcher.bot, update, **optional_args)
File "C:/Users/Matthew/Desktop/coding_crap/CryptoBotBetav2.py", line 78, in i
infoCall = json.loads(infoCall)+str(coins)
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\json\__init__.py", line 341, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not list
Basically, you are not appending the args param to the API endpoint; that's why you were getting the error. You need to append 'bitcoin' to the endpoint before you make the request, rather than manipulating the output.
A typical example would be as follows. I have removed update and the other unused variables; you can add them back as you need.
import requests

def i(args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/" + args).json()
    coinId = infoCall['categories']
    print(coinId)
    # update.message.reply_text(coinId)

i('bitcoin')
Output:
['Cryptocurrency']
I am trying to scrape an API that accepts some header values only in float form. When I send them as strings, it returns 400 Bad Request, and when I try to send the headers as floats, Scrapy gives an error like this:
self.headers = Headers(headers or {}, encoding=encoding)
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 12, in __init__
super(Headers, self).__init__(seq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 193, in __init__
self.update(seq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 229, in update
super(CaselessDict, self).update(iseq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 228, in <genexpr>
iseq = ((self.normkey(k), self.normvalue(v)) for k, v in seq)
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 27, in normvalue
return [self._tobytes(x) for x in value]
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 40, in _tobytes
raise TypeError('Unsupported value type: {}'.format(type(x)))
TypeError: Unsupported value type: <type 'float'>
None
Has anyone found a solution or faced a similar kind of problem?
First of all, headers are always sent as strings. There are no data types for headers like int, bool, or float.
I might send an API a header X-RELOAD-TIME: 2.0003355, but that doesn't mean I need to send 2.0003355 as a float, and that is what the library is complaining about.
So in your headers, make sure:
headers["Name-Of-Float-Header"] = str(float_value)
and then the call should be able to go through.
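A minimal sketch of doing that up front, before the mapping ever reaches Scrapy (all header names and values here are made-up placeholders):

```python
# Scrapy's header machinery only accepts str/bytes values (or lists
# of them), so convert any numeric values to strings first.
raw_headers = {
    "X-Latitude": 40.7128,        # a float would raise TypeError in Scrapy
    "X-Request-Count": 3,         # ints need converting too
    "User-Agent": "my-spider",    # already a string, str() is a no-op
}

headers = {k: str(v) for k, v in raw_headers.items()}
```

The resulting dict can be passed as the headers argument of a Scrapy Request.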
The title pretty much says it all. Here's my code:
from urllib2 import urlopen as getpage
print = getpage("www.radioreference.com/apps/audio/?ctid=5586")
and here's the traceback error I get:
Traceback (most recent call last):
File "C:/Users/**/Dropbox/Dev/ComServ/citetest.py", line 2, in <module>
contents = getpage("www.radioreference.com/apps/audio/?ctid=5586")
File "C:\Python25\lib\urllib2.py", line 121, in urlopen
return _opener.open(url, data)
File "C:\Python25\lib\urllib2.py", line 366, in open
protocol = req.get_type()
File "C:\Python25\lib\urllib2.py", line 241, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: www.radioreference.com/apps/audio/?ctid=5586
My best guess is that urllib can't retrieve data from untidy PHP URLs. If this is the case, is there a workaround? If not, what am I doing wrong?
You should first try adding 'http://' in front of the URL. Also, do not store the result in print, as that rebinds the name to another (non-callable) object.
So this line should be:
page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
This returns a file-like object. To read its contents you need to use file manipulation methods, like this:
for line in page_contents.readlines():
    print line
You need to pass a full URL, i.e. it must begin with http://.
Simply use http://www.radioreference.com/apps/audio/?ctid=5586 and it'll work fine.
In [24]: from urllib2 import urlopen as getpage
In [26]: print getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
<addinfourl at 173987116 whose fp = <socket._fileobject object at 0xa5eb6ac>>
I am getting the following error:
InvalidURLError: ApplicationError: 1
I checked my code and logged various things, and the URLs causing this error look pretty normal. They are quoted with urllib.quote, and visiting them in a browser gives a normal result.
The error is happening with many URLs, not just one. The URL points to an API service and is constructed within the app.
Btw, here's a link to the google.appengine.api.urlfetch source code: http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/urlfetch.py?r=56.
The docstrings say that the error should happen when: "InvalidURLError if the url was invalid." and "If the URL is an empty string or obviously invalid, we throw an urlfetch.InvalidURLError"
Just to make it simple for those who would like to test this:
url = 'http://api.embed.ly/1/oembed?key=REMOVEDKEY&maxwidth=400&urls=http%3A//V.interesting.As,http%3A//abcn.ws/z26G9a,http%3A//apne.ws/z37VyP,http%3A//bambuser.com/channel/baba-omer/broadcast/2348417,http%3A//bambuser.com/channel/baba-omer/broadcast/2348417,http%3A//bambuser.com/channel/baba-omer/broadcast/2348417,http%3A//bbc.in/xFx3rc,http%3A//bbc.in/zkkLJq,http%3A//billingsgazette.com/news/local/former-president-bush-to-speak-at-billings-fundraiser-in-may/article_f7ef425a-349c-56a9-a399-606b48033f35.html,http%3A//billingsgazette.com/news/local/former-president-bush-to-speak-at-billings-fundraiser-in-may/article_f7ef425a-349c-56a9-a399-606b48033f35.html,http%3A//billingsgazette.com/news/local/friday-forecast-calls-for-cloudy-windy-day-nighttime-snow-possible/article_d3eb3159-68b0-5559-8255-03fce56eaedd.html,http%3A//billingsgazette.com/news/local/gallery-toy-run/collection_f5042a31-bfd4-5f63-a901-2a8c3e8fb26a.html%230,http%3A//billingsgazette.com/news/local/gas-prices-continue-to-drop-in-billings/article_4e8fd07e-0e1e-5c0e-b551-4162b60c4b60.html,http%3A//billingsgazette.com/news/local/gas-prices-continue-to-drop-in-billings/article_713a0c32-32c9-59f1-9aeb-67b8462bbe88.html,http%3A//billingsgazette.com/news/local/gas-prices-continue-to-fall-in-billings-area/article_2bdebf4b-242c-569e-b414-f388a48f4a14.html,http%3A//billingsgazette.com/news/local/gas-prices-dip-below-a-gallon-at-some-billings-stations/article_c7f4d373-dc2b-55c0-b457-10346c0274a6.html,http%3A//billingsgazette.com/news/local/gas-prices-keep-dropping-in-billings-area/article_3666cf9c-4552-5108-9d5c-de2bba12fa3f.html,http%3A//billingsgazette.com/news/local/government-and-politics/city-picks-st-vincent-as-care-provider-for-health-insurance/article_a899f885-15e1-5b98-b899-75acc01e8feb.html,http%3A//billingsgazette.com/news/local/government-and-politics/linder-settles-in-after-first-year-as-sheriff/article_55a9836e-2196-546d-80f0-48bdef717fa3.html,http%3A//billingsgazette.com/news/local/government-and-pol
itics/new-council-members-city-judge-sworn-in/article_bb7ac948-1d45-579c-a057-1323fb2e643d.html'
from google.appengine.api import urlfetch
result = urlfetch.fetch(url=url)
Here's the traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/urlfetch.py", line 263, in fetch return rpc.get_result()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/urlfetch.py", line 359, in _get_fetch_result
raise InvalidURLError(str(err))
InvalidURLError: ApplicationError: 1
I wonder if it's something very simple that I'm missing from all of this. Would appreciate your comments and ideas. Thanks!
Your URL is too long; there is a limit on the length of URLs that urlfetch will accept.
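One way around it is batching: split the comma-separated urls parameter across several requests so that each request URL stays under a cap. The sketch below is written for Python 3, and the base URL, key, and the 2000-character cap are assumptions for illustration, not the documented urlfetch limit:

```python
from urllib.parse import quote

BASE = "http://api.embed.ly/1/oembed?key=YOUR_KEY&maxwidth=400&urls="
MAX_LEN = 2000  # conservative cap; check the real limit for your platform

def build(urls):
    # Percent-encode each target URL and join them with commas,
    # mirroring the embedly-style batched "urls" parameter.
    return BASE + ",".join(quote(u, safe="") for u in urls)

def batch_requests(urls):
    # Greedily pack URLs into batches whose request URL stays short.
    batches, current = [], []
    for u in urls:
        if current and len(build(current + [u])) > MAX_LEN:
            batches.append(build(current))
            current = [u]
        else:
            current.append(u)
    if current:
        batches.append(build(current))
    return batches
```

Each string returned by batch_requests can then be fetched separately (e.g. with urlfetch.fetch) and the results merged.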