Python: unable to authenticate using requests module

I'm having some trouble authenticating myself into the PJ website using the Python requests module. My code is as follows:
import requests

with requests.Session() as s:
    s.auth = ("my_email", "my_password")
    r = s.get("https://www.example.com/")
However, I receive a response which indicates that I haven't logged in. Is it simply impossible to do this? Am I missing something (e.g. CSRF token)?
EDIT: I did some poking around to see what happens when I manually log in. You can see a screengrab of the request my browser sends here: PJ Login Request

Figured it out, following Jon's guidance:
with requests.Session() as s:
    payload = {'user': 'my_email', 'pass': 'my_password', 'target': '/order/menu'}
    r = s.post("https://www.example.com/order/signin", data=payload)
I had already figured out the correct payload from the request (using the screengrab in my edit above), but I was sending it to the wrong location (i.e. the home page instead of the user login page).
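For completeness, a minimal sketch of the full flow, using the signin path and payload keys from the edit above (the follow-up menu URL is only an assumption):

import requests

payload = {'user': 'my_email', 'pass': 'my_password', 'target': '/order/menu'}

with requests.Session() as s:
    # POST the credentials to the signin endpoint, not to the home page
    r = s.post("https://www.example.com/order/signin", data=payload)
    r.raise_for_status()

    # the session now carries the auth cookies, so later requests are logged in
    menu = s.get("https://www.example.com/order/menu")  # assumed follow-up URL
    print(menu.status_code)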

Try the POST method instead. (On transport security: over plain HTTP neither method is protected, while with HTTPS both GET and POST are sent encrypted, not in plaintext.)
import requests

with requests.Session() as s:
    s.auth = ("my_email", "my_password")
    r = s.post("https://www.papajohns.com/")

>>> r
<Response [200]>

Related

How can I add a cookie to the headers?

I want to build an automated testing tool that uses an API.
First, I log in to the site and get a cookie.
My code is Python 3:
import urllib3
from bs4 import BeautifulSoup

url = 'http://ip:port/api/login'
http = urllib3.PoolManager()

# log in and parse the response
r = http.request('POST', url, fields={'userName': 'id', 'password': 'password'})
soup = BeautifulSoup(r.data.decode('utf-8'), 'lxml')

# pull the JSESSIONID cookie out of the Set-Cookie response header
str1 = r.getheaders().get('Set-Cookie')
str2 = 'JSESSIONID' + str1.split('JSESSIONID')[1]
str2 = str2[0:-2]
print(str2)
-- JSESSIONID=df0010cf-1273-4add-9158-70d817a182f7; Path=/; HttpOnly
Then I add the cookie to the headers for a call to another API endpoint on the site, but it is not working!
url2 = 'http://ip:port/api/notebook/job/paragraph'
r2 = http.request('POST', url2)
r2.headers['Set-Cookie'] = str2
r2.headers['Cookie'] = str2
http.request('POST', url2, headers=r2.headers)
Why is this not working? It shows a different cookie.
If you understand this situation, please explain it to me.
The error contents are:
HTTP ERROR 500
Problem accessing /api/login;JSESSIONID=b8f6d236-494b-4646-8723-ccd0d7ef832f.
Reason: Server Error
Caused by: javax.servlet.ServletException: Filtered request failed.
ProtocolError: ('Connection aborted.', BadStatusLine('<html>\n',))
Thanks a lot!
Use the requests module in Python 3.x. You have to create a session, which you are not doing now; that's why you are facing problems.
import requests

s = requests.Session()
url = 'http://ip:port/api/login'
r = s.get(url)
dct = s.cookies.get_dict()  # returns any cookies set so far as a dict
Take whichever cookie the server wants, along with all the required headers, and pass them in the request headers:
jid = dct["JSESSIONID"]
head = {'Cookie': 'JSESSIONID=' + jid}  # add any other required headers here
payload = {'userName': 'id', 'password': 'password'}
r = s.post(url, data=payload, headers=head)
r = s.get('whatever url after login')
To find out which specific headers you have to pass and which parameters the POST requires:
Open the link in Google Chrome.
Open the Developer Console (Fn + F12).
Look for the login request in the Network tab (if you cannot find it, enter wrong details and submit).
You will get info about the request headers and the POST parameters there.
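Note also that since requests.Session keeps cookies between calls, you usually do not need to copy the JSESSIONID by hand at all. A minimal sketch, assuming the same two endpoints as above:

import requests

s = requests.Session()

# the session stores any cookies set by the login response automatically
login = s.post('http://ip:port/api/login',
               data={'userName': 'id', 'password': 'password'})

# later requests on the same session send those cookies back by themselves
r2 = s.post('http://ip:port/api/notebook/job/paragraph')
print(r2.status_code)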

Trouble getting data from REST http service using requests package

This works fine, I can get data returned:
r = urllib2.Request("http://myServer.com:12345/myAction")
data = json.dumps(q) #q is a python dict
r.add_data(data)
r=urllib2.urlopen(r)
But doing the same with requests package fails:
r=requests.get("http://myServer.com:12345/myAction", data=q)
r.text #This will return a message that says method is not allowed.
It works if I make it a post request: r=requests.post("http://myServer.com:12345/myAction", data=json.dumps(q))
But why?
According to the urllib2.urlopen documentation:
the HTTP request will be a POST instead of a GET when the data parameter is provided.
This way, r=urllib2.urlopen(r) is also making a POST request. That is why your requests.get does not work, but requests.post does.
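For reference, a minimal self-contained sketch of the POST with requests; passing json=q lets requests serialize the dict and set the Content-Type header itself (the payload here is only a placeholder):

import requests

q = {"key": "value"}  # placeholder; use your real dict

# a body implies POST, mirroring what urllib2.urlopen did with add_data
r = requests.post("http://myServer.com:12345/myAction", json=q)
print(r.status_code)
print(r.text)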
Set up a session:
import requests

session = requests.Session()
r = session.get("http://myServer.com:12345/myAction", data=q)
print r.content  # or use r.raw

How to log in to a website through Python using requests?

I have gone through all the questions on Stack Overflow related to this, but I can't solve my problem. When I run the following code, it completes without any error, but nothing gets printed.
import requests

payload = {'username': 'user', 'password': 'pass'}
with requests.Session() as s:
    p = s.post('http://file-transfers.in/login.php?rid=worlddomains', params=payload)
    r = s.get('http://file-transfers.in/member_arean.php')
    print r.text
There is a whole chapter about Authentication in the Python requests docs.
Basic Authentication
Many web services that require authentication accept HTTP Basic Auth. This is the simplest kind, and Requests supports it straight out of the box.
Making requests with HTTP Basic Auth is very simple:
>>> from requests.auth import HTTPBasicAuth
>>> requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))
<Response [200]>
In fact, HTTP Basic Auth is so common that Requests provides a handy shorthand for using it:
>>> requests.get('https://api.github.com/user', auth=('user', 'pass'))
<Response [200]>
Providing the credentials in a tuple like this is exactly the same as the HTTPBasicAuth example above.
The website in question is expecting you to send your username/pass as POST data, not as URL params so:
payload = {'username': 'user', 'password': 'pass'}
with requests.Session() as s:
    p = s.post('http://file-transfers.in/login.php?rid=worlddomains', data=payload)
    r = s.get('http://file-transfers.in/member_arean.php')
    print(r.text)
There might be more to it once the login passes, but without an account we cannot check what's going on.
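If you want a quick way to confirm the login actually succeeded, a crude sanity check on the follow-up page can help; the 'logout' marker below is only an assumption about what a logged-in page might contain:

import requests

payload = {'username': 'user', 'password': 'pass'}
with requests.Session() as s:
    s.post('http://file-transfers.in/login.php?rid=worlddomains', data=payload)
    r = s.get('http://file-transfers.in/member_arean.php')

print(r.status_code)
# a logged-in members page usually exposes something like a logout link
print('logout' in r.text.lower())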

Python - login to website using requests

I have to admit I am completely clueless about this: I need to log in to this site https://segreteriaonline.unisi.it/Home.do and then perform some actions. The problem is I cannot find the form to use in the source of the webpage, and I have basically never tried to log in to a website via Python.
This is the simple code I wrote.
import requests
url_from = 'https://segreteriaonline.unisi.it/Home.do'
url_in = 'https://segreteriaonline.unisi.it/auth/Logon.do'
data = {'form':'1', 'username':'myUser', 'password':'myPass'}
s = requests.session()
s.get(url_from)
r = s.post(url_in, data)
print r
Obviously, what i get is:
<Response [401]>
Any suggestions?
Thanks in advance.
You need to use the requests authentication header.
Please check here:
>>> from requests.auth import HTTPBasicAuth
>>> requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))
<Response [200]>
That site appears to not have a login form, but instead uses HTTP Basic auth (causing the browser to request the username and password). requests supports that via the auth argument to get - so you should be able to do something like this:
s.get(url_in, auth=('myUser', 'myPass'))
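Putting that together, a minimal sketch of the whole session with Basic auth (the credentials and the success check are placeholders, not verified against the site):

import requests

url_in = 'https://segreteriaonline.unisi.it/auth/Logon.do'

with requests.Session() as s:
    # Basic auth sends the credentials with the request itself,
    # so there is no login form to POST to
    r = s.get(url_in, auth=('myUser', 'myPass'))
    print(r.status_code)  # expect 200 instead of 401 if the credentials are accepted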

Session error? Using POST with requests or urllib2

I'm trying to authenticate to a website and fill out a form using the requests lib:
import requests
payload = {"name":"someone", "password":"somepass", "submit":"Submit"}
s = requests.Session()
s.post("https://someurl.com", data=payload)
next_payload = {"item1":"something", "item2":"something", "submit":"Submit"}
r = s.post("https://someurl.com", data=next_payload)
print r.text
Authentication works, and I verified that I can post to forms, but the one I'm having a problem with responds with: The action could not be completed, perhaps because your session had expired. Please try again.
I attempted this with urllib2 and got the same result, so I don't think it's an issue with a cookie.
I am wondering if JavaScript on this page has something to do with the session error? The other form pages don't have any JavaScript.
Thanks for your input...
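One thing worth ruling out: many forms embed hidden inputs (a per-session token, for example) that must be echoed back with the POST, and a page that sets them via JavaScript cannot be handled by requests alone. A hedged sketch that copies any hidden fields present in the HTML into the payload; the selector and field names are assumptions, not taken from the site in question:

import requests
from bs4 import BeautifulSoup

s = requests.Session()
page = s.get("https://someurl.com")
soup = BeautifulSoup(page.text, "html.parser")

payload = {"item1": "something", "item2": "something", "submit": "Submit"}

# copy every hidden input from the form into the payload so any
# server-generated token is sent back with the submission
for hidden in soup.select("form input[type=hidden]"):
    if hidden.get("name"):
        payload[hidden["name"]] = hidden.get("value", "")

r = s.post("https://someurl.com", data=payload)
print(r.text)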
