I'm trying to write a Reddit bot; I decided to start with a simple one, to make sure I was doing things properly, and I got a RequestException.
my code (bot.py):
import praw
for s in praw.Reddit('bot1').subreddit("learnpython").hot(limit=5):
    print s.title
my praw.ini file:
# The URL prefix for OAuth-related requests.
oauth_url=https://oauth.reddit.com
# The URL prefix for regular requests.
reddit_url=https://www.reddit.com
# The URL prefix for short URLs.
short_url=https://redd.it
[bot1]
client_id=HIDDEN
client_secret=HIDDEN
password=HIDDEN
username=HIDDEN
user_agent=ILovePythonBot0.1
(where HIDDEN replaces the actual id, secret, password and username.)
my Traceback:
Traceback (most recent call last):
File "bot.py", line 3, in <module>
for s in praw.Reddit('bot1').subreddit("learnpython").hot(limit=5):
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 79, in next
return self.__next__()
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 52, in __next__
self._next_batch()
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 62, in _next_batch
self._listing = self._reddit.get(self.url, params=self.params)
File "/usr/local/lib/python2.7/dist-packages/praw/reddit.py", line 322, in get
data = self.request('GET', path, params=params)
File "/usr/local/lib/python2.7/dist-packages/praw/reddit.py", line 406, in request
params=params)
File "/usr/local/lib/python2.7/dist-packages/prawcore/sessions.py", line 131, in request
params=params, url=url)
File "/usr/local/lib/python2.7/dist-packages/prawcore/sessions.py", line 70, in _request_with_retries
params=params)
File "/usr/local/lib/python2.7/dist-packages/prawcore/rate_limit.py", line 28, in call
response = request_function(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/prawcore/requestor.py", line 48, in request
raise RequestException(exc, args, kwargs)
prawcore.exceptions.RequestException: error with request request() got an unexpected keyword argument 'json'
Any help would be appreciated. P.S. I am using Python 2.7 on Ubuntu 14.04. Please ask me for any other information you may need.
The way I see it, you have a problem with your request to the Reddit API. Maybe try changing the user_agent in your praw.ini configuration. According to PRAW's basic configuration options, the user_agent should follow the format <platform>:<app ID>:<version string> (by /u/<reddit username>). Try that and see what happens.
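For example, a user_agent line following that format might look like this (the platform, app ID, version, and username here are placeholders, not values from the question):
user_agent=linux:ILovePythonBot:v0.1 (by /u/your_reddit_username)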
Related
I have a config.ini file that looks like this
[REDDIT]
client_id = 'myclientid23jd934g'
client_secret = 'myclientsecretjf30gj5g'
password = 'mypassword'
user_agent = 'myuseragent'
username = 'myusername'
When I try to use Reddit's API via praw like this:
import configparser
import praw

class redditImageScraper:
    def __init__(self, sub, limit):
        config = configparser.ConfigParser()
        config.read('config.ini')
        self.sub = sub
        self.limit = limit
        self.reddit = praw.Reddit(client_id=config.get('REDDIT','client_id'),
                                  client_secret=config.get('REDDIT','client_secret'),
                                  password=config.get('REDDIT','password'),
                                  user_agent=config.get('REDDIT','user_agent'),
                                  username=config.get('REDDIT','username'))

    def get_content(self):
        submissions = self.reddit.subreddit(self.sub).hot(limit=self.limit)
        for submission in submissions:
            print(submission.id)

def main():
    scraper = redditImageScraper('aww', 25)
    scraper.get_content()

if __name__ == '__main__':
    main()
I get this traceback
Traceback (most recent call last):
File "config.py", line 30, in <module>
main()
File "config.py", line 27, in main
scraper.get_content()
File "config.py", line 22, in get_content
for submission in submissions:
File "C:\Users\Evan\Anaconda3\lib\site-packages\praw\models\listing\generator.py", line 61, in __next__
self._next_batch()
File "C:\Users\Evan\Anaconda3\lib\site-packages\praw\models\listing\generator.py", line 71, in _next_batch
self._listing = self._reddit.get(self.url, params=self.params)
File "C:\Users\Evan\Anaconda3\lib\site-packages\praw\reddit.py", line 454, in get
data = self.request("GET", path, params=params)
File "C:\Users\Evan\Anaconda3\lib\site-packages\praw\reddit.py", line 627, in request
method, path, data=data, files=files, params=params
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\sessions.py", line 185, in request
params=params, url=url)
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\sessions.py", line 116, in _request_with_retries
data, files, json, method, params, retries, url)
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\sessions.py", line 101, in _make_request
params=params)
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\rate_limit.py", line 35, in call
kwargs['headers'] = set_header_callback()
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\sessions.py", line 145, in _set_header_callback
self._authorizer.refresh()
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\auth.py", line 328, in refresh
password=self._password)
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\auth.py", line 138, in _request_token
response = self._authenticator._post(url, **data)
File "C:\Users\Evan\Anaconda3\lib\site-packages\prawcore\auth.py", line 31, in _post
raise ResponseException(response)
prawcore.exceptions.ResponseException: received 401 HTTP response
However, when I manually insert the credentials, my code runs exactly as expected. Also, if I run the line
print(config.get('REDDIT', 'client_id'))
I get the output 'myclientid23jd934g' as expected.
Is there some reason that praw won't allow me to pass my credentials using configparser?
Double-check what your inputs to praw.Reddit are:
kwargs = dict(client_id=config.get('REDDIT','client_id'),
              client_secret=config.get('REDDIT','client_secret'),
              password=config.get('REDDIT','password'),
              user_agent=config.get('REDDIT','user_agent'),
              username=config.get('REDDIT','username'))
print(kwargs)
praw.Reddit(**kwargs)
Printing kwargs should show that the quotation marks from config.ini are still part of each value (configparser does not strip them), which is why Reddit rejects the credentials with a 401.
You're overcomplicating configuration here — PRAW will take care of this for you.
If you rename config.ini to praw.ini, you can replace your whole initialization with just
self.reddit = praw.Reddit('REDDIT')
This is because PRAW will look for a praw.ini file and parse it for you. If you want to give the section a more descriptive name, make sure to update it in the praw.ini as well as in the single parameter passed to Reddit (which specifies the section of the file to use).
See https://praw.readthedocs.io/en/latest/getting_started/configuration/prawini.html.
As this page notes, values like username and password should not have quotation marks around them. For example,
password=mypassword
is correct, but
password="mypassword"
is incorrect.
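Putting both points together, a minimal praw.ini (using the placeholder values from the question, with the quotation marks removed) might look like this:
[REDDIT]
client_id=myclientid23jd934g
client_secret=myclientsecretjf30gj5g
password=mypassword
user_agent=myuseragent
username=myusername
With that file next to the script, praw.Reddit('REDDIT') picks up the whole section automatically.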
I am trying to make an app that downloads comics, but whenever I try to download an image, it says no host supplied.
I really searched and found nothing.
This is the code:
import requests,bs4
url='https://www.marvel.com/comics/issue/71314/edge_of_spider-geddon_2018_1'
res=requests.get(url,stream=True)
res.raise_for_status()
soup=bs4.BeautifulSoup(res.text)
elem=soup.select('div[class="row-item-image"] img')#.viewer-cnt .row .col-xs-12 #ppp img')
#print(elem)
comicurl='https:'+elem[0].get('src')
res=requests.get(comicurl,stream=True,allow_redirects=True)
res.raise_for_status()
with open(comicurl[comicurl.rfind('/')+1:],'wb') as i:
    for chunk in res.iter_content(100000):
        i.write(chunk)
I expect it to download the image but it gives me this error:
Traceback (most recent call last):
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\comicdownloader.py", line 10, in <module>
res=requests.get(comicurl,stream=True,allow_redirects=True)
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 519, in request
prep = self.prepare_request(req)
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 462, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\models.py", line 313, in prepare
self.prepare_url(url, params)
File "C:\Users\Islam\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\models.py", line 390, in prepare_url
raise InvalidURL("Invalid URL %r: No host supplied" % url)
requests.exceptions.InvalidURL: Invalid URL 'https:https://i.annihil.us/u/prod/marvel/i/mg/6/b0/5b6c5e4154f75/portrait_uncanny.jpg': No host supplied
And I get this error whenever I try it on any website.
It looks like elem[0].get('src') evaluates to https://i.annihil.us/u/prod/marvel/i/mg/6/b0/5b6c5e4154f75/portrait_uncanny.jpg.
So on the line comicurl='https:'+elem[0].get('src') you add 'https:' in front of an already well-formed URL, making it invalid.
Can't argue with this: Invalid URL 'https:https://i.annihil.us/u/prod -- the URL really is invalid. You should probably get rid of the 'https:' prefix in the following statement:
comicurl='https:'+elem[0].get('src')
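A minimal sketch of that fix, reusing the variable names from the question and assuming the src attribute may be either protocol-relative or absolute:
src = elem[0].get('src')
# Only prepend the scheme when the URL is protocol-relative (starts with //);
# an already absolute URL is used unchanged.
if src.startswith('//'):
    comicurl = 'https:' + src
else:
    comicurl = src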
I want to be able to access my Mega account using Python. I looked at the following link, but I have issues with the login process.
https://github.com/richardasaurus/mega.py
With the examples given, it looks quite easy
from mega import Mega

mega = Mega()
m = mega.login(email, password)
But when I do, it gives me the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mega\mega.py", line 26, in login
instance.login_user(email, password)
File "mega\mega.py", line 32, in login_user
resp = self.api_request({'a': 'us', 'user': email, 'uh': uh})
File "mega\mega.py", line 86, in api_request
timeout=self.timeout)
File "requests\api.py", line 84, in post
return request('post', url, data=data, **kwargs)
File "requests\api.py", line 39, in request
return s.request(method=method, url=url, **kwargs)
File "requests\sessions.py", line 200, in request
r.send(prefetch=prefetch)
File "requests\models.py", line 489, in send
cert_loc = __import__('certifi').where()
ImportError: No module named certifi
I guess I did not install mega.py correctly, but I can't make it work.
Thanks
As seen in comments, the solution is just to install https://pypi.python.org/pypi/certifi
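Assuming pip is available on the system, installing the missing dependency should be enough:
pip install certifi
After that, re-running mega.login(email, password) should get past the ImportError.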
I'm using the manual at https://www.dropbox.com/developers/core/start/python .
I have made everything match the manual, including creating my app in my account, copy-pasting the app keys, and allowing the app to be used with the key (in fact, I open the link in my browser, click allow, and copy the confirmation code).
After this, I want to finish the authorization, but I get this error:
>>> access_token, user_id = flow.finish(code)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/dropbox/client.py", line 1233, in finish
return self._finish(code, None)
File "/usr/local/lib/python2.7/dist-packages/dropbox/client.py", line 1101, in _finish
response = self.rest_client.POST(url, params=params)
File "/usr/local/lib/python2.7/dist-packages/dropbox/rest.py", line 316, in POST
return cls.IMPL.POST(*n, **kw)
File "/usr/local/lib/python2.7/dist-packages/dropbox/rest.py", line 254, in POST
post_params=params, headers=headers, raw_response=raw_response)
File "/usr/local/lib/python2.7/dist-packages/dropbox/rest.py", line 218, in request
preload_content=False
File "/usr/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 112, in urlopen
conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
File "/usr/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 84, in connection_from_host
pool = pool_cls(host, port, **self.connection_pool_kw)
TypeError: __init__() got an unexpected keyword argument 'ssl_version'
P.S. Flow object is alive => http://screencloud.net/v/nDi0
It seems you are using urllib3 version 1.5 or older. Upgrade it to 1.6 or 1.7.
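To check which version is installed, a quick sketch (run in the same Python environment the Dropbox code uses):
import urllib3
print(urllib3.__version__)  # per the note above, versions before 1.6 do not accept ssl_version
If it reports 1.5 or older, upgrading, for example with pip install --upgrade urllib3, should fix it.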
I'm following the tutorial at https://github.com/Shopify/shopify_python_api, but at step 4 I always get a "500 Internal Server Error".
I'm not sure whether I am following the steps correctly.
After step 3, I visit the URL in permission_url in my browser, click "Install", and then copy the data from the URL I get redirected to into a Python dict called params.
On executing step 4 I get:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File ".../lib/python2.7/site-packages/shopify/session.py", line 53, in __init__
self.token = self.request_token(params['code'])
File ".../lib/python2.7/site-packages/shopify/session.py", line 90, in request_token
response = connection.post(access_token_path, ShopifyResource.headers)
File ".../lib/python2.7/site-packages/pyactiveresource/connection.py", line 313, in post
return self._open('POST', path, headers=headers, data=data)
File ".../lib/python2.7/site-packages/shopify/base.py", line 18, in _open
self.response = super(ShopifyConnection, self)._open(*args, **kwargs)
File ".../lib/python2.7/site-packages/pyactiveresource/connection.py", line 258, in _open
response = Response.from_httpresponse(self._handle_error(err))
File ".../lib/python2.7/site-packages/pyactiveresource/connection.py", line 367, in _handle_error
raise ServerError(err)
ServerError: HTTP Error 500: Internal Server Error
For a private application you do not need to go through the authorization steps to get a token; the token is simply the private application's password. So activating a session just requires doing:
session = shopify.Session(SHOP_URL)
session.token = PRIVATE_APPLICATION_PASSWORD
shopify.ShopifyResource.activate_session(session)
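A quick way to confirm the session works, shown here only as an assumed follow-up check (the call is taken from the shopify_python_api README):
shop = shopify.Shop.current()  # fetches the shop the token is authorized for
print(shop.name)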