def i(bot, update, args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/").json()
    coinId = infoCall['categories']
    update.message.reply_text(coinId)
I would like to append the args declared in coin = args to the end of the API request so that it retrieves the info my user requests, but this is the error I get:
coinId = infoCall ['categories']
KeyError: 'categories'
My guess is that it's not formatting the request correctly, so the API is returning a 404 instead of the info being requested.
def i(bot, update, args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/").json()
    infoCall = json.loads(infoCall) + str(coins)
    coinId = infoCall['categories']
    update.message.reply_text(str(coinId))
After adding this, this is the new error I get:
Traceback (most recent call last):
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\site-packages\telegram\ext\dispatcher.py", line 279, in process_update
handler.handle_update(update, self)
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\site-packages\telegram\ext\commandhandler.py", line 173, in handle_update
return self.callback(dispatcher.bot, update, **optional_args)
File "C:/Users/Matthew/Desktop/coding_crap/CryptoBotBetav2.py", line 78, in i
infoCall = json.loads(infoCall)+str(coins)
File "C:\Users\Matthew\AppData\Local\Programs\Python\Python37-32\lib\json\__init__.py", line 341, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not list
Basically, you are not appending the args param to the API endpoint; that's why you were getting the error. You need to append 'bitcoin' to the API endpoint before you make the request, rather than to the output.
A typical example would be as follows. I have removed update and the other unused variables; you can put them back as you need.
import requests

def i(args):
    coin = args
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/" + args).json()
    coinId = infoCall['categories']
    print(coinId)
    # update.message.reply_text(coinId)
i('bitcoin')
Output:
['Cryptocurrency']
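If you need this back inside the Telegram handler, here is a rough sketch (assuming the pre-v12 python-telegram-bot style from your code, where pass_args=True makes args a list of strings, so you have to pick or join the entries before building the URL; the 'bitcoin' fallback is just an illustrative default):

import requests

def i(bot, update, args):
    # args arrives as a list of strings, e.g. ['bitcoin'];
    # fall back to 'bitcoin' if the user passed nothing (assumed default).
    coin = args[0] if args else 'bitcoin'
    infoCall = requests.get("https://api.coingecko.com/api/v3/coins/" + coin).json()
    update.message.reply_text(str(infoCall['categories']))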
I'm querying a database in Python 3.8.2.
I need the urlencoded results to be:
data = {"where":{"date":"03/30/20"}}
needed_results = ?where=%7B%22date%22%3A%20%2203%2F30%2F20%22%7D
I've tried the following:
import urllib.parse
data = {"where":{"date":"03/30/20"}}
print(urllib.parse.quote_plus(data))
When I do that, I get the following:
Traceback (most recent call last):
File "C:\Users\Johnathan\Desktop\Python Snippets\test_func.py", line 17, in <module>
print(urllib.parse.quote_plus(data))
File "C:\Users\Johnathan\AppData\Local\Programs\Python\Python38-32\lib\urllib\parse.py", line 855, in quote_plus
string = quote(string, safe + space, encoding, errors)
File "C:\Users\Johnathan\AppData\Local\Programs\Python\Python38-32\lib\urllib\parse.py", line 839, in quote
return quote_from_bytes(string, safe)
File "C:\Users\Johnathan\AppData\Local\Programs\Python\Python38-32\lib\urllib\parse.py", line 864, in quote_from_bytes
raise TypeError("quote_from_bytes() expected bytes")
TypeError: quote_from_bytes() expected bytes
I've tried a couple of other methods and received: ?where=%7B%27date%27%3A+%2703%2F30%2F20%27%7D
Long story short, I need to URL-encode the following:
data = {"where":{"date":"03/30/20"}}
needed_encoded_data = ?where=%7B%22date%22%3A%20%2203%2F30%2F20%22%7D
Thanks
where is a dictionary, and a dictionary can't be URL-encoded: quote_plus expects a string or bytes object, so you need to turn it into one first.
You can do that with json.dumps:
import json
import urllib.parse
data = {"where":{"date":"03/30/20"}}
print(urllib.parse.quote_plus(json.dumps(data)))
Output:
%7B%22where%22%3A+%7B%22date%22%3A+%2203%2F30%2F20%22%7D%7D
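If you also need the full ?where=... query string rather than just the encoded value, here is a sketch using urlencode; passing quote_via=urllib.parse.quote keeps spaces as %20 instead of +, so the output matches your needed_results exactly:

import json
import urllib.parse

data = {"where": {"date": "03/30/20"}}

# Encode only the inner dict as the value of the "where" parameter.
query = urllib.parse.urlencode(
    {"where": json.dumps(data["where"])},
    quote_via=urllib.parse.quote,
)
print("?" + query)
# ?where=%7B%22date%22%3A%20%2203%2F30%2F20%22%7D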
I am trying to scrape an API that accepts some header values only in float form. When I send them as strings, it gives a 400 Bad Request, and when I try to send the headers as floats, Scrapy gives an error like this:
self.headers = Headers(headers or {}, encoding=encoding)
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 12, in __init__
super(Headers, self).__init__(seq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 193, in __init__
self.update(seq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 229, in update
super(CaselessDict, self).update(iseq)
File "C:\Python27\lib\site-packages\scrapy\utils\datatypes.py", line 228, in <genexpr>
iseq = ((self.normkey(k), self.normvalue(v)) for k, v in seq)
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 27, in normvalue
return [self._tobytes(x) for x in value]
File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 40, in _tobytes
raise TypeError('Unsupported value type: {}'.format(type(x)))
TypeError: Unsupported value type: <type 'float'>
None
Does anyone have a solution or has anyone faced a similar kind of problem?
First of all, headers are always sent as strings. There are no data types for headers like int, bool, or float.
I might send an API a header X-RELOAD-TIME 2.0003355, but that doesn't mean I need to send 2.0003355 as a float, and that is what the library is complaining about.
So in your headers, make sure:
headers["Name-Of-Float-Header"] = str(float_value)
and then the call should be able to go through.
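For completeness, a minimal Scrapy sketch showing the float being converted before the request is built; the header name and URL here are placeholders, not taken from your code:

import scrapy

class ApiSpider(scrapy.Spider):
    name = "api_spider"

    def start_requests(self):
        float_value = 2.0003355  # whatever float value the API expects
        headers = {
            # Header values must be strings (or bytes); convert the float first.
            "X-Reload-Time": str(float_value),
        }
        yield scrapy.Request(
            "https://example.com/api",  # placeholder endpoint
            headers=headers,
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info(response.status)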
I am having trouble with the following code:
import praw
import argparse

# argument handling was here

def main():
    r = praw.Reddit(user_agent='Python Reddit Image Grabber v0.1')
    for i in range(len(args.subreddits)):
        try:
            r.get_subreddit(args.subreddits[i])  # test to see if the subreddit is valid
        except:
            print "Invalid subreddit"
        else:
            submissions = r.get_subreddit(args.subreddits[i]).get_hot(limit=100)
            print [str(x) for x in submissions]

if __name__ == '__main__':
    main()
Subreddit names are taken as arguments to the program.
When an invalid args.subreddits entry is passed to get_subreddit, it should throw an exception that gets caught in the above code.
When a valid subreddit name is given as an argument, the program runs fine.
But when an invalid name is given, the exception is not caught, and instead the following uncaught exception is produced:
Traceback (most recent call last):
File "./pyrig.py", line 33, in <module>
main()
File "./pyrig.py", line 30, in main
print [str(x) for x in submissions]
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 434, in get_content
page_data = self.request_json(url, params=params)
File "/usr/local/lib/python2.7/dist-packages/praw/decorators.py", line 95, in wrapped
return_value = function(reddit_session, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 469, in request_json
response = self._request(url, params, data)
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 342, in _request
response = handle_redirect()
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 316, in handle_redirect
url = _raise_redirect_exceptions(response)
File "/usr/local/lib/python2.7/dist-packages/praw/internal.py", line 165, in _raise_redirect_exceptions
.format(subreddit))
praw.errors.InvalidSubreddit: `soccersdsd` is not a valid subreddit
I can't tell what I am doing wrong. I have also tried rewriting the exception code as:
except praw.errors.InvalidSubreddit:
which also does not work.
EDIT: exception info for Praw can be found here
File "./pyrig.py", line 30, in main
print [str(x) for x in submissions]
The problem, as your traceback indicates, is that the exception doesn't occur when you call get_subreddit. In fact, it also doesn't occur when you call get_hot. The first is a lazy invocation that just creates a dummy Subreddit object but doesn't do anything with it. The second is a generator that doesn't make any requests until you actually try to iterate over it.
Thus you need to move the exception handling code around your print statement (line 30), which is where the request that results in the exception is actually made.
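A sketch of one way to restructure the loop (using the same old PRAW API as in your question) so the handler wraps the point where the request is actually made:

def main():
    r = praw.Reddit(user_agent='Python Reddit Image Grabber v0.1')
    for name in args.subreddits:
        # Both calls below are lazy; no HTTP request happens yet.
        submissions = r.get_subreddit(name).get_hot(limit=100)
        try:
            # Iterating the generator triggers the request, so catch here.
            print [str(x) for x in submissions]
        except praw.errors.InvalidSubreddit:
            print "Invalid subreddit:", name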
The title pretty much says it all. Here's my code:
from urllib2 import urlopen as getpage
print = getpage("www.radioreference.com/apps/audio/?ctid=5586")
and here's the traceback error I get:
Traceback (most recent call last):
File "C:/Users/**/Dropbox/Dev/ComServ/citetest.py", line 2, in <module>
contents = getpage("www.radioreference.com/apps/audio/?ctid=5586")
File "C:\Python25\lib\urllib2.py", line 121, in urlopen
return _opener.open(url, data)
File "C:\Python25\lib\urllib2.py", line 366, in open
protocol = req.get_type()
File "C:\Python25\lib\urllib2.py", line 241, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: www.radioreference.com/apps/audio/?ctid=5586
My best guess is that urllib can't retrieve data from untidy PHP URLs. If this is the case, is there a workaround? If not, what am I doing wrong?
You should first try adding 'http://' in front of the URL. Also, do not store the result in print, as that rebinds the name to another (non-callable) object.
So this line should be:
page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
This returns a file-like object. To read its contents you need to use file manipulation methods, like this:
for line in page_contents.readlines():
    print line
You need to pass a full URL: i.e. it must begin with http://.
Simply use http://www.radioreference.com/apps/audio/?ctid=5586 and it'll work fine.
In [24]: from urllib2 import urlopen as getpage
In [26]: print getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
<addinfourl at 173987116 whose fp = <socket._fileobject object at 0xa5eb6ac>>
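To get at the page body itself rather than the addinfourl repr, a quick sketch (urlopen returns a file-like object, so read() gives the HTML as a string):

from urllib2 import urlopen as getpage

page = getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
html = page.read()   # whole response body as a string
print html[:200]     # first 200 characters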
Please check this question: Python oauth2 - making request. I am working on Vimeo integration in my web application.
Initially I got an oauth_signature and had no problems (no errors). I tried those things once again from the start, and now I'm getting ValueError: need more than 1 value to unpack while making this request:
>>> r = request.get(url, headers=headers)
You can check out my code here: https://gist.github.com/2949182
The error is:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/api.py", line 51, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/api.py", line 39, in request
return s.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/sessions.py", line 159, in request
headers[k] = header_expand(v)
File "/usr/local/lib/python2.7/dist-packages/requests-0.10.1-py2.7.egg/requests/utils.py", line 152, in header_expand
for i, (value, params) in enumerate(headers):
ValueError: need more than 1 value to unpack
Thanks!
UPDATE
>>> headers
{'Authorization': u'oauth_body_hash=XXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_nonce=3454768,oauth_timestamp=1340035585,oauth_consumer_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_signature_method=HMAC-SHA1,oauth_version=1.0,oauth_signature=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX,oauth_callback=http://127.0.0.1:8000/information/vimeo'}
To be able to unpack a dictionary you would have to use .items(), so the code would be like this:
for i, (value, params) in enumerate(headers.items()):
Now, since that is not your code and you can't change it, what the error is telling you is that the headers should not be a dictionary but a tuple (or a list). If you pass the headers like this:
headers = [("Authorization", "Values")]
it should work.
EDIT: This doesn't work. Now the dictionary version {"Authorization": "Values"} works for me; maybe updating requests will help.
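A minimal sketch of the dict form with a current version of requests; the URL and the Authorization value below are placeholders, so build the header value from your real OAuth parameters:

import requests

# Header values must be plain strings; join the OAuth parameters yourself.
headers = {
    "Authorization": ("oauth_consumer_key=XXXX,oauth_nonce=3454768,"
                      "oauth_signature_method=HMAC-SHA1,oauth_signature=XXXX")
}

r = requests.get("http://vimeo.com/api/rest/v2", headers=headers)
print(r.status_code)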