Capture access_token from Facebook's server-side authentication in Django - Python

I want to capture the access_token returned by this URL (below):
https://graph.facebook.com/oauth/access_token?client_id=YOUR_APP_ID&redirect_uri=YOUR_REDIRECT_URI&client_secret=YOUR_APP_SECRET&code=CODE_GENERATED_BY_FACEBOOK
But if I use HttpResponseRedirect, it takes me to a blank page with the access_token and expiry seconds printed. I want to capture the returned access_token and use it later. Below is my code:
def fb_return(request):
    code = request.GET.get('code')
    fb_id = settings.FB_ID
    fb_s = settings.FB_SECRET
    url = 'https://graph.facebook.com/oauth/access_token?client_id=%(id)s&redirect_uri=http://127.0.0.1:8000/facebook/return&client_secret=%(secret)s&code=%(code)s' % {'id': fb_id, 'secret': fb_s, 'code': code}
    return HttpResponseRedirect(url)

You can use urllib2 to perform the request yourself instead of redirecting:
import urllib2
url = 'https://graph.facebook.com/oauth/access_token?client_id=%(id)s&redirect_uri=http://127.0.0.1:8000/facebook/return&client_secret=%(secret)s&code=%(code)s'%{'id':fb_id,'secret':fb_s,'code':code}
response = urllib2.urlopen(url)
html = response.read()
If the response is JSON, you can decode it like so:
import simplejson
json_string = response.read()
data = simplejson.loads(json_string)
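In this particular case the /oauth/access_token endpoint returns a URL-encoded string (the access_token and expiry seconds the question saw printed on the blank page) rather than JSON, so one possible shape for the whole Django view, reusing the question's settings names, is the sketch below; treat it as untested:
import urllib2
import urlparse

from django.conf import settings
from django.http import HttpResponse

def fb_return(request):
    code = request.GET.get('code')
    url = ('https://graph.facebook.com/oauth/access_token'
           '?client_id=%(id)s'
           '&redirect_uri=http://127.0.0.1:8000/facebook/return'
           '&client_secret=%(secret)s&code=%(code)s'
           % {'id': settings.FB_ID, 'secret': settings.FB_SECRET, 'code': code})
    body = urllib2.urlopen(url).read()
    # The blank page showed "access_token=...&expires=...", so parse that
    # query string instead of redirecting the user to it.
    access_token = urlparse.parse_qs(body).get('access_token', [None])[0]
    request.session['fb_access_token'] = access_token  # keep it for later use
    return HttpResponse('Logged in')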
Here's a similar question dealing with this.
Depending on what you are trying to do, there are probably easier ways to interact with Facebook:
If you only need to do client-side things, you can use the Facebook JavaScript SDK.
If you are creating a canvas app, you can use django-fandjango.
If you are creating a website with Facebook login, you can use django-social-auth.
If you want server-side interaction with the Graph API, you can use facepy.

Related

screen-scraping iTunes Connect: Getting through login page

To screen-scrape iTunes Connect data, I am trying to get past its login using Python, Requests, and BeautifulSoup.
This is the iTunes Connect login page:
https://itunesconnect.apple.com/itc/static/login
Typically, to begin screen-scraping other websites, I can get through the login by grabbing a token from the page's hidden form fields and then posting my credentials along with that token, so the website treats the login request as coming from a valid browser.
For example, these are my rough steps for performing the login, in pseudo-Python:
session = requests.Session()
response = session.get(URL_LOGIN, ...)
soup = BeautifulSoup(response.text, 'html.parser')
token_tag = soup.find_all(...)
TOKEN = token_tag.get(...)
response = session.post(URL_LOGIN, data=CREDENTIALS_PLUS_TOKEN)
login_html = response.text
login_soup = BeautifulSoup(login_html, 'html.parser')
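A more concrete version of those steps might look like the sketch below; the URL, the form-field names, and the credentials are all placeholders, not the actual iTunes Connect ones:
import requests
from bs4 import BeautifulSoup

LOGIN_URL = 'https://example.com/login'  # placeholder, not the real login URL

session = requests.Session()

# Step 1: fetch the login page and pull the hidden token out of the form
response = session.get(LOGIN_URL)
soup = BeautifulSoup(response.text, 'html.parser')
token_tag = soup.find('input', attrs={'name': 'authenticity_token'})  # assumed field name
token = token_tag.get('value') if token_tag else ''

# Step 2: post the credentials plus the token back to the same URL
payload = {
    'username': 'me@example.com',  # placeholder credentials
    'password': 'secret',
    'authenticity_token': token,
}
response = session.post(LOGIN_URL, data=payload)
login_soup = BeautifulSoup(response.text, 'html.parser')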
However, I am having difficulty with iTunes Connect's login.
Have others tried, and what is the trick?
Thanks
I'm spit-balling here, but the problem is likely a lack of base64 encoding on the login credentials and token as they are passed through in the POST.
Your request should look something along the lines of:
import requests
import base64
r = requests.post(<url login>,
                  headers={"Authorization": "Basic " + base64.b64encode(b'username:password')},
                  data=payload)

How to use Python requests to log in to a website

I'm trying to log in to and scrape a job site, and send myself a notification whenever certain keywords are found. I think I have correctly traced the XPath for the value of the field "login[iovation]", but I cannot extract the value. Here is what I have done so far to log in:
import requests
from lxml import html

header = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}
login_url = 'https://www.upwork.com/ab/account-security/login'
session_requests = requests.session()

#get csrf token and iovation value from the login page
result = session_requests.get(login_url, headers=header)
tree = html.fromstring(result.text)
auth_token = list(set(tree.xpath('//*[@name="login[_token]"]/@value')))
auth_iovat = list(set(tree.xpath('//*[@name="login[iovation]"]/@value')))

# create payload
payload = {
    "login[username]": "myemail@gmail.com",
    "login[password]": "pa$$w0rD",
    "login[_token]": auth_token[0] if auth_token else '',
    "login[iovation]": auth_iovat[0] if auth_iovat else '',
    "login[redir]": "/home"
}

#perform login, sending the user agent and referer headers along
scrapeurl = 'https://www.upwork.com/ab/find-work/'
result = session_requests.post(login_url, data=payload, headers=dict(header, referer=login_url))

#test the result
print result.text
This is a screenshot of the form data from when I log in successfully.
This is because Upwork uses something called iOvation (https://www.iovation.com/) to reduce fraud. iOvation uses a digital fingerprint of your device/browser, which is sent via the login[iovation] parameter.
If you look at the JavaScript loaded on the site, you will find two scripts being loaded from the iesnare.com domain. This domain and many others are owned by iOvation to drop third-party JavaScript that identifies your device/browser.
I think if you copy the string from the successful login and send it over along with all the HTTP headers as-is, including the browser user agent, in your Python code, you should be okay.
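A rough sketch of that suggestion: scrape the CSRF token as before, but hard-code the login[iovation] value captured from the browser's developer tools during a successful login, and reuse the browser's headers. The header values and the placeholder strings are assumptions, not something tested against Upwork:
import requests
from lxml import html

login_url = 'https://www.upwork.com/ab/account-security/login'
session = requests.Session()

headers = {
    #use the same user agent the successful browser login used
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Referer': login_url,
}

#scrape a fresh CSRF token as before
page = session.get(login_url, headers=headers)
tree = html.fromstring(page.text)
token = tree.xpath('//*[@name="login[_token]"]/@value')

payload = {
    'login[username]': 'myemail@gmail.com',
    'login[password]': 'pa$$w0rD',
    'login[_token]': token[0] if token else '',
    #paste the login[iovation] value captured from the successful browser login
    'login[iovation]': 'IOVATION_STRING_COPIED_FROM_BROWSER',
    'login[redir]': '/home',
}

result = session.post(login_url, data=payload, headers=headers)
print result.status_code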
Are you sure that result is fetching a 2XX code?
When I run the line result = session_requests.get(login_url), it fetches me a 403 status code, which means I am not even getting to login_url itself.
They have an official API now, no need for scraping, just register for API keys.

How to accept data from a form in Python without reloading the page

So I have been trying to use the Bit.ly API to make a small link-shortening section, but I don't want the page to refresh to show me the short link. I am new to Python and web dev, and this is what I have. I tried looking on Google, and here as well, but I don't really know what to search for. Thank you for your help in advance.
#!/Python27/python
# Import modules for CGI handling
import cgi, cgitb
import requests
import json

# Header
print "Content-type:text/html\r\n\r\n"
print ""

# Create instance of FieldStorage
form = cgi.FieldStorage()
#print "form created</br>"

# Get data from fields
long_url = form.getvalue('longurl')
#print str(long_url) + "long url saved to the form field"

# Process the link with bit.ly
query_params = {'access_token': 'API_KEY',
                'login': 'Yakumanification',
                'longUrl': long_url}
endpoint = 'https://api-ssl.bitly.com/v3/shorten'
response = requests.get(endpoint, params=query_params, verify=True)
data = json.loads(response.content)
print data['data']['url']
Use AJAX. You can use jQuery if you want to do a POST or GET via AJAX; search for "jquery ajax" for more detail.
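Keeping to Python for the server side, one hedged way to make that work is to have the CGI script answer with JSON instead of an HTML page, so a jQuery $.ajax call can display the short link without reloading; the script name and the short_url key below are made up for illustration:
#!/Python27/python
# shorten.py (hypothetical name) - same Bitly call as above, but the script
# answers with JSON so an AJAX request can consume it without a page reload.
import cgi
import json
import requests

form = cgi.FieldStorage()
long_url = form.getvalue('longurl')

query_params = {'access_token': 'API_KEY',
                'login': 'Yakumanification',
                'longUrl': long_url}
response = requests.get('https://api-ssl.bitly.com/v3/shorten',
                        params=query_params, verify=True)
data = json.loads(response.content)

# JSON header instead of text/html, then a small JSON body
print "Content-type: application/json\r\n\r\n"
print json.dumps({'short_url': data['data']['url']})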

GitHub API v3 access via the Python oauth2 library - Redirect issue

Environment: Python 2.7.3, web.py.
I'm trying a simple three-legged OAuth authentication for GitHub using Python web.py. Per the basic OAuth guide on GitHub, I'm doing something like this:
import web, requests
import oauth2, pymongo, json
from oauth2client.client import OAuth2WebServerFlow

urls = ('/', 'githublogin',
        '/session', 'session',
        '/githubcallback', 'githubCallback')

class githublogin:
    def GET(self):
        new_url = 'https://github.com/login/oauth/authorize'
        pay_load = {'client_id': '',
                    'client_secret': '',
                    'scope': 'gist'}
        headers = {'content-type': 'application/json'}
        r = requests.get(new_url, params=pay_load, headers=headers)
        return r.content
This is sending me to the GitHub login page. Once I sign in, GitHub is not redirecting me to the callback. The redirect_uri parameter is configured in the GitHub application; I've double-checked to make sure that's correct.
class githubCallback:
    def POST(self):
        data = web.data()
        print data

    def GET(self):
        print "callback called"
Instead in the browser I see
http://<hostname>:8080/session
and a 404 message, because I haven't configured the session URL. That's problem no. 1. Problem no. 2: if I configure the session URL and print out the POST data
class session:
    def POST(self):
        data = web.data()
        print data

    def GET(self):
        print "callback called"
I can see some data posted to the URL with something called 'authenticity_token'.
I've tried to use the python_oauth2 library but can't get past the authorization_url call, so I've tried the much simpler requests library. Can someone please point out what's going wrong here?
So here's how I solved this. Thanks to @Ivanzuzak for the requestb.in tip.
I'm using Python web.py.
import web, requests
import oauth2, json

urls = ('/', 'githublogin',
        '/githubcallback', 'githubCallback')

render = web.template.render('templates/')

class githublogin:
    def GET(self):
        client_id = ''
        url_string = "https://github.com/login/oauth/authorize?client_id=" + client_id
        return render.index(url_string)

class githubCallback:
    def GET(self):
        data = json.loads(json.dumps(web.input()))
        print data['code']
        headers = {'content-type': 'application/json'}
        pay_load = {'client_id': '',
                    'client_secret': '',
                    'code': data['code']}
        r = requests.post('https://github.com/login/oauth/access_token',
                          data=json.dumps(pay_load), headers=headers)
        token_temp = r.text.split('&')
        token = token_temp[0].split('=')
        access_token = token[1]
        repo_url = 'https://api.github.com/user?access_token=' + access_token
        response = requests.get(repo_url)
        final_data = response.content
        print final_data

app = web.application(urls, globals())

if __name__ == "__main__":
    app.run()
I was not using an HTML file before, but sending the request directly from the githublogin class. That didn't work. Here I'm using an HTML page to direct the user to the place where they'll log in to GitHub. For this I added an HTML template and rendered it using web.py's templator:
$def with (parameter)
<html>
<head>
</head>
<body>
<p>Well, hello there!</p>
<p>We're going to now talk to the GitHub API. Ready? <a href="$parameter">Click here</a> to begin!</p>
<p>If that link doesn't work, remember to provide your own Client ID!</p>
</body>
</html>
This file is taken straight from the dev guide, with just the client_id parameter changed.
Another point to note: in the requests.post call, passing pay_load directly doesn't work; it has to be serialized using json.dumps.
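A minimal sketch of that point, using the same pay_load and headers as above (the remark about the json= keyword assumes a reasonably recent version of requests):
import json
import requests

headers = {'content-type': 'application/json'}
pay_load = {'client_id': '', 'client_secret': '', 'code': ''}

# This works: the dict is serialized into a JSON string, matching the header.
r = requests.post('https://github.com/login/oauth/access_token',
                  data=json.dumps(pay_load), headers=headers)

# data=pay_load (without json.dumps) would be form-encoded by requests,
# contradicting the declared application/json content type.
# Recent versions of requests also accept json=pay_load, which both
# serializes the body and sets the header for you.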
I'm not sure what the problem is at your end, but try reproducing the flow below, first manually using the browser and then using your Python library. It will help you debug the issue.
Create a request bin on http://requestb.in/. A request bin is basically a service that logs all HTTP requests sent to it. You will use this instead of the callback, to log what is being sent to the callback. Copy the URL of the request bin, which is something like http://requestb.in/123a546b
Go to your OAuth application setup on GitHub (https://github.com/settings/applications), enter the setup of your specific application, and set the Callback URL to the URL of the request bin you just created.
Make a request to the GitHub OAuth page, with the client_id defined. Just enter this URL below into your browser, but change the YOUR_CLIENT_ID_HERE to be the client id of your OAuth application:
https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID_HERE
Enter your username and password and click Authorize. The GitHub app will then redirect you to the request bin service you created, and the URL in the browser should be something like (notice the code query parameter):
http://requestb.in/YOUR_REQUEST_BIN_ID?code=GITHUB_CODE
(for example, http://requestb.in/abc1def2?code=123a456b789cdef)
Also, the content of the page in the browser should be "ok" (this is the content returned by the request bin service).
Go to the request bin page that you created and refresh it. You will now see a log entry for the HTTP GET request that the GitHub OAuth server sent you, together with all the HTTP headers. Basically, you will see there the same code parameter that is present in the URL that you were redirected to. If you get this parameter, you are now ready to make a POST request with this code and your client secret, as described in step 2 of the guide you are using: http://developer.github.com/v3/oauth/#web-application-flow
Let me know if any of these steps are causing problems for you.

How to mirror a reddit moderator page with python

I'm trying to create a mirror of specific moderator pages (i.e. restricted) of a subreddit on my own server, for transparency purposes. Unfortunately my Python-fu is weak, and after struggling a bit with the reddit API, its Python wrapper, and even some answers on here, I'm no closer to having a working solution.
So what I need to do is log in to reddit with a specific user, access a moderator-only page, and copy its HTML to a file on my own server for others to access.
The problem I'm running into is that the API and its wrapper are not very well documented, so I haven't found whether there's a way to retrieve a reddit page after logging in. If I can do that, then I could theoretically copy the result to a simple HTML page on my server.
When trying to do it outside the Python API, I can't figure out how to use Python's built-in modules to log in and then read a restricted page.
Any help appreciated.
I don't use PRAW so I'm not sure about that, but if I were to do what you wanted to do, I'd do something like: log in, save the modhash, and grab the HTML from the URL of the page you want to go to.
It also looks like it's missing some CSS or something when I save it, but it's recognizable enough as it is. You'll need the requests module, along with pprint and json:
import requests, json
from pprint import pprint as pp2

#----------------------------------------------------------------------
def login(username, password):
    """logs into reddit, saves cookie"""
    print 'begin log in'
    #username and password
    UP = {'user': username, 'passwd': password, 'api_type': 'json'}
    headers = {'user-agent': '/u/STACKOVERFLOW\'s API python bot'}
    #POST with user/pwd
    client = requests.session()
    r = client.post('http://www.reddit.com/api/login', data=UP, headers=headers)
    #if you want to see what you've got so far
    #print r.text
    #print r.cookies
    #gets and saves the modhash
    j = json.loads(r.text)
    client.modhash = j['json']['data']['modhash']
    print '{USER}\'s modhash is: {mh}'.format(USER=username, mh=client.modhash)
    #pp2(j)
    return client

#USER and PASSWORD are your reddit credentials
client = login(USER, PASSWORD)

#mod mail url
url = r'http://www.reddit.com/r/mod/about/message/inbox/'
r = client.get(url)

#here's the HTML of the page
pp2(r.text)
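Since the original goal was to mirror the page on your own server, a possible last step is simply writing the fetched HTML to a file your web server already serves; the path below is only an example:
#write the fetched page to a file the web server can serve as-is
with open('/var/www/html/modmail-mirror.html', 'w') as f:
    f.write(r.text.encode('utf-8'))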
