Python - Facebook connect without browser based-redirect - python

The title might be a bit of a misnomer.
I have two subdomains set up in a web application: one for the application frontend at www. (which uses a loose PHP router coupled with Backbone, Require, and jQuery for the UI) and a data tier at data. (built entirely in Python using Werkzeug).
We decided on this kind of architecture because at some point mobile apps will enter the equation, and they can just as easily make HTTP requests to the data subdomain. The data subdomain renders all its responses as JSON, which the JS frontend expects universally, be it a valid query response or an error.
This feels like the most organized arrangement I've ever had on a project, until Facebook connect appeared on the manifesto.
Ideally, I'd like to keep as much of the 'action' of the site behind the scenes on the data subdomain, especially the authentication stuff, so that the logic is available to phones and mobile devices down the line if need be. Looking into the Facebook docs, however, it appears that the browser and its session object are critical components of their OAuth flow.
User authentication and app authorization are handled at the same time
by redirecting the user to our OAuth Dialog. When invoking this
dialog, you must pass in your app id that is generated when you create
your application in our Developer App (the client_id parameter) and
the URL that the user's browser will be redirected back to once app
authorization is completed (the redirect_uri parameter). The
redirect_uri must be in the path of the Site URL you specify in
Website section of the Summary tab in the Developer App. Note, your
redirect_uri can not be a redirector.
I've done this before in PHP and am familiar with the process; it's a trivial matter of about 3 header redirects when the logic in question has direct access to the browser. My problem is that the way our new app is structured, all the business logic is sequestered on the other subdomain. What are my options for simulating the redirect? Can I have the Python subdomain send the HTTP data, receiving it from the www domain via cURL like all the other requests? Can I return a JSON object to the user's browser that instructs it to do the redirects? Will Facebook even accept requests from a data subdomain whose redirect_uri is at subdomain www? The last sentence in the quoted section probably rules that out, but I've looked through their docs and they don't explicitly say this would be treated as a violation.
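The JSON-redirect option can be sketched like this: the data tier builds the OAuth Dialog URL described in the quoted docs and returns it in a JSON payload, and the frontend JS performs the actual redirect with window.location. The app id and callback URL below are placeholders, and the dialog endpoint should be double-checked against Facebook's current documentation:

```python
import json
from urllib.parse import urlencode

# Placeholder values -- substitute your real app id and Site URL.
FB_APP_ID = "YOUR_APP_ID"
REDIRECT_URI = "http://www.example.com/fb-callback"

def oauth_dialog_payload() -> str:
    """Build a JSON payload telling the frontend where to redirect the
    browser (the OAuth Dialog, per the docs quoted above)."""
    params = urlencode({
        "client_id": FB_APP_ID,       # app id from the Developer App
        "redirect_uri": REDIRECT_URI, # must be under the registered Site URL
    })
    return json.dumps({"redirect": "https://www.facebook.com/dialog/oauth?" + params})

# The frontend would then do: window.location = response.redirect;
payload = json.loads(oauth_dialog_payload())
```

Note the redirect_uri points at www, not data, which sidesteps the "redirector" restriction: the browser itself lands back on the frontend, which can then hand the code to the data tier.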
Has anyone had any experience setting up something like this with Facebook with a similar architecture? Perhaps I should just fold and put the FB login logic directly into PHP and handle it there.
Thanks!


How do I secure a Python API with in-the-open authorization tokens?

I have setup a simple REST API server (using Django REST framework) that responds to POST requests by doing some processing on an image uploaded to the server in the request. Previously I used it to power my own frontend (http://lips.kyleingraham.com) as a demonstration but I would now like to open the API to other users.
I would like for an end-user to be able to sign up and, from a dashboard, generate a token based on their credentials that could then be hard-coded into their web app. The sign-up part I believe I can handle but I am unclear on how to restrict a generated token to a user's web app domain. I know that the code for a web app is easily inspected so any API token I provide would need to be policed on my backend.
How can I restrict an authorization token to a user's web app domain so that even if the token were leaked, another user would not be able to use it?
If you hard-code the token into the user's web app, you can't guarantee that someone who gets hold of the token won't be able to use it.
The only real mitigation is to set a time limit on each token.
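The time-limit idea can be implemented without storing tokens server-side by embedding the expiry in the token and signing it with HMAC; a minimal sketch (the secret key is a placeholder, and in production an established library such as itsdangerous or a JWT implementation would be preferable):

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # placeholder -- keep this private

def make_token(user_id: str, ttl_seconds: int = 3600) -> str:
    """Issue a token that embeds its own expiry time, signed with HMAC."""
    expires = str(int(time.time()) + ttl_seconds)
    message = f"{user_id}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(message + b":" + sig.encode()).decode()

def verify_token(token: str) -> bool:
    """Check the signature and the embedded expiry time."""
    try:
        user_id, expires, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 2)
    except Exception:
        return False  # not valid base64, or wrong shape
    message = f"{user_id}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)
```

This only limits the damage window; as the answer says, it cannot stop a leaked, still-valid token from being used.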

Understanding the Python requests module

So I'm currently learning the python requests module but I'm a bit confused and was wondering if someone could steer me in the right direction. I've seen some people post headers when they want to log into the website, but where do they get these headers from and when do you need them? I've also seen some people say you need an authentication token, but I've seen some other solutions not even use headers or an authentication token at all. This is supposedly the authentication token but I'm not sure where to go from here after I post my username and password.
<input type="hidden" name="lt" value="LT-970332-9KawhPFuLomjRV3UQOBWs7NMUQAQX7" />
Although your question is a bit vague, I'll try to help you.
Authentication
A web browser (client) can authenticate on the target server by providing data, usually a login/password pair, which should be sent over an encrypted connection (HTTPS) for security reasons.
This data can be passed from client to server using the following parts of HTTP request:
URL parameters (http://httpbin.org/get?foo=bar)
headers
body (this is where POST parameters from HTML forms usually go)
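With the requests library, those three options map onto the params, headers, and data arguments; preparing a request lets you inspect the result without actually sending anything (the credentials and token here are dummies):

```python
import requests

# URL parameters -- end up in the query string
r1 = requests.Request("GET", "http://httpbin.org/get",
                      params={"foo": "bar"}).prepare()

# Headers -- e.g. a token the server expects
r2 = requests.Request("GET", "http://httpbin.org/get",
                      headers={"Authorization": "Bearer dummy-token"}).prepare()

# Body -- form fields, as an HTML login form would submit them
r3 = requests.Request("POST", "http://httpbin.org/post",
                      data={"login": "alice", "password": "secret"}).prepare()

print(r1.url)   # http://httpbin.org/get?foo=bar
print(r3.body)  # login=alice&password=secret
```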
Tokens
After successful authentication, the server generates a unique token and sends it to the client. If the server wants the client to store the token as a cookie, it includes a Set-Cookie header in its response.
A token usually represents a unique identifier of a user session. In most cases a token has an expiration date for security reasons.
Web browsers usually store tokens as cookies in their internal cookie storage and send them in all subsequent requests to the corresponding website. A single website can use multiple tokens and other cookies for a single user.
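requests mirrors this browser behaviour with Session objects, which keep a cookie jar and attach matching cookies to every subsequent request. Here the cookie is planted manually (as if the server had sent Set-Cookie) so the mechanism is visible without a network call; the domain is a dummy:

```python
import requests

s = requests.Session()
# Pretend the server answered with: Set-Cookie: session_token=abc123
s.cookies.set("session_token", "abc123", domain="example.com", path="/")

# Any later request to that domain automatically carries the cookie.
req = requests.Request("GET", "http://example.com/dashboard")
prepped = s.prepare_request(req)
print(prepped.headers.get("Cookie"))  # session_token=abc123
```

In real use you never set the cookie yourself: a successful login response fills the session's jar for you.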
Research
Every website has its own authentication format, rules, and restrictions, so the first thing to do is a little research on the target website. You need to find out how the client sends auth information to the server, what the server replies with, and where the session data is stored (usually you can find it in the client's request headers).
To do that, you can use a proxy (Burp, for example) to intercept browser traffic. It helps you see the data passed from client to server and back.
Try to authenticate and then browse some pages on the target site using your web browser with the proxy. Then, using the proxy, examine which parts of the HTTP request/response the client and server use to store information about sessions and authentication.
After that you can finally use python and requests to do what you want.
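As a concrete example of that research paying off: the hidden lt field shown in the question is a form token that must be extracted from the login page's HTML and posted back along with the credentials. The login URL and field names below are placeholders for whatever the target site actually uses:

```python
import re
import requests

def extract_hidden_field(html: str, name: str) -> str:
    """Pull the value of a hidden <input> out of a login page."""
    match = re.search(r'name="%s"\s+value="([^"]*)"' % re.escape(name), html)
    if not match:
        raise ValueError(f"hidden field {name!r} not found")
    return match.group(1)

def login(session: requests.Session, login_url: str, user: str, password: str):
    """Fetch the login form, extract the form token, and post it back.
    Field names ('username', 'password', 'lt') are site-specific guesses."""
    page = session.get(login_url)
    token = extract_hidden_field(page.text, "lt")
    return session.post(login_url,
                        data={"username": user, "password": password, "lt": token})

# The parsing works on the snippet from the question:
sample = '<input type="hidden" name="lt" value="LT-970332-9KawhPFuLomjRV3UQOBWs7NMUQAQX7" />'
print(extract_hidden_field(sample, "lt"))
```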

How To Reflect The Website Authentication Done With Python Script Into The Browser?

My question may sound stupid, but I just want to know whether it is possible to browse web pages that need authentication after doing the authentication with the Python requests library.
I have a script to log into the application, which successfully authenticates the user, but is there a way to reflect that in a browser like Chrome, so that the user could directly access the authenticated page without having to fill the form and log in? It's happening inside my application, so I'm not breaching any privacy policy or such things.
Any suggestions would be great.
For example, if I authenticated myself into http://example.com/login through the script, I want to be able to directly browse http://example.com/user/home in the browser. How could this be possible?
The simplest way is to have your Python application log into the web application and then have it request a token. The script then opens the browser, passing along the token. Your web application takes that token and uses it in lieu of the login form to authenticate your user and create their session.
Take a look at OAuth, I think it has specific workflows for this kind of scenario. Otherwise you can craft your own.
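A hand-crafted version of that workflow might look like the sketch below: mint an unguessable single-use token, open the browser at a URL that redeems it, and have the redeeming endpoint create the session. The endpoint name and the in-memory store are hypothetical; a real app would keep the store server-side with an expiry:

```python
import secrets
import webbrowser
from urllib.parse import urlencode

# Hypothetical server-side store of one-time tokens -> user ids.
_one_time_tokens = {}

def issue_token(user_id: str) -> str:
    """Server side: mint an unguessable single-use login token."""
    token = secrets.token_urlsafe(32)
    _one_time_tokens[token] = user_id
    return token

def redeem_token(token: str):
    """Server side: consume the token -- it only works once."""
    return _one_time_tokens.pop(token, None)

def open_authenticated_browser(user_id: str):
    """Script side: open the browser at a URL that logs the user in.
    The /token-login endpoint is hypothetical; it would redeem the
    token and set a normal session cookie."""
    url = "http://example.com/token-login?" + urlencode({"token": issue_token(user_id)})
    webbrowser.open(url)
```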

Scraping data from external site with username and password

I have an application with many users, some of these users have an account on an external website with data I want to scrape.
This external site has a members area protected with a email/password form. This sets some cookies when submitted (a couple of ASP ones). You can then pull up the needed page and grab the data the external site holds for the user that just logged in.
The external site has no API.
I envisage my application asking users for their credentials to the external site, logging in on their behalf and grabbing the data we want.
How would I go about this in Python, i.e. do I need to run a GUI web browser on the server that Python prods to handle the cookies (I'd rather not)?
Find the call the page makes to the backend by inspecting the format of the login request in your browser's inspector.
Make the same request after using either getpass to get user credentials from the terminal or via a GUI. You can use urllib2 to make the requests.
Save all the cookies from the response in a cookiejar.
Reuse the cookies in subsequent requests and fetch data.
Then, profit.
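Those steps can be sketched in Python 3, where urllib2 became urllib.request; the login URL and form field names are placeholders for whatever the external site's form actually submits:

```python
import getpass
import http.cookiejar
import urllib.parse
import urllib.request

# Step 3: a jar that will capture the ASP session cookies from the response.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

def login_and_fetch(login_url: str, members_url: str) -> bytes:
    """Steps 1-4: post credentials, let the jar capture the cookies,
    then reuse them on the members-area request."""
    email = input("Email: ")
    password = getpass.getpass("Password: ")  # step 2: no echo on the terminal
    form = urllib.parse.urlencode({"email": email, "password": password}).encode()
    opener.open(login_url, data=form)       # cookies land in `jar`
    return opener.open(members_url).read()  # same opener -> cookies sent back
```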
Usually, this is done with a session.
I recommend using the requests library (http://docs.python-requests.org/en/latest/) for this.
You can use its Session feature (http://docs.python-requests.org/en/latest/user/advanced/#session-objects). Simply perform an authentication HTTP request (the URL and parameters depend on the website you want to query), and then perform a request for the resource you want to scrape.
Without further information, we cannot help you more.

Twitter Oauth problem - I need a website to register an app

I need to use OAuth for a personal Twitter script I am making. It's not commercial or anything like that. To register it here: https://dev.twitter.com/apps/new I need a website, even though it is a client. It won't let me register my app without a website.
Is there anything I can do? If I just created a blog that explains the concept behind the script I am using - would they accept that and let me register the "app" (just a script I use?).
If it is just for personal use, you could put pretty much any url in that field. As far as I know it isn't double checked or subjected to approval.
Choose Client when registering. In place of Application Website you can put any link; it's just the link-back URL.
As mentioned above, the callback URL can be anything, but I would choose Browser, not Client, because Twitter lets you override the value of Application Website with the parameter oauth_callback. This lets you automate the final step of the OAuth flow.
Usually, since you are running a script, you would need to set oauth_callback=oob and put the user through PIN authentication, which sucks. Here is an alternative:
Choose Browser and set Application Website to http://www.whatever.com (doesn't matter).
Register your script with your operating system to handle a custom scheme, e.g.: myscript://
Pass oauth_callback=myscript://anystring during the OAuth flow.
The result is that once the user is authenticated, Twitter calls myscript://anystring from the web browser with the last two parameters you need for the final authentication step, and the OAuth flow completes without user interaction.
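When Twitter hits the custom scheme, the registered handler receives something like myscript://anystring?oauth_token=...&oauth_verifier=..., so the final step is just parsing those two parameters out of the URL (the values below are dummies):

```python
from urllib.parse import parse_qs, urlparse

def parse_callback(url: str) -> dict:
    """Extract the oauth_token and oauth_verifier from the callback URL."""
    query = parse_qs(urlparse(url).query)
    return {key: query[key][0] for key in ("oauth_token", "oauth_verifier")}

# Dummy callback as the OS would hand it to the registered handler:
callback = "myscript://anystring?oauth_token=dummy-token&oauth_verifier=dummy-verifier"
print(parse_callback(callback))
```

The verifier is then exchanged for the access token in the usual OAuth 1.0a fashion.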
