How do I go about two-factor authentication using the requests library? - python

I have a POST and a GET request to a website that uses 2FA (Duo Mobile). When I execute the code, I get a push notification on my phone asking me to accept or reject the login request. I don't know what to do after that point. I know how 2FA works, but I'm new to requests.
url = "https://www.something.edu/apps/account/cas/login?service=https%3A%2F%2Frf.something.something.edu"
r = session.get(url, headers = headers)
soup = BeautifulSoup(r.content, 'html5lib')
login_data['lt'] = soup.find('input', attrs={'name' : 'lt' })['value']
r = session.post(url, data = login_data, headers = headers, )
print(r.content)```
It does not print anything, and the program keeps running.

I solved it by setting allow_redirects=False when making the POST request. Apparently, the POST request got stuck in an infinite loop of redirects. The POST request now becomes:
r = session.post(url, data = login_data, headers = headers, allow_redirects=False)
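With redirects disabled, you can also walk the redirect chain by hand to see where the loop occurs. A minimal sketch, reusing url, login_data, and headers from above:

from urllib.parse import urljoin

r = session.post(url, data=login_data, headers=headers, allow_redirects=False)
hops = 0
while r.is_redirect and hops < 10:  # cap the hops so a loop can't run forever
    next_url = urljoin(r.url, r.headers['Location'])  # Location may be relative
    print("redirected to:", next_url)
    r = session.get(next_url, allow_redirects=False)
    hops += 1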

Related

How can I make post requests with querystrings using sessions in Python?

I'm using the requests module, and I'd like to send a POST request with a querystring using a Session. How can I do that? I haven't found anything relating requests.Session to querystrings.
With a Session (it returns an HTTP 500 response code):
response = self.session.post(self.url, data = payload, headers = self.headers, params = querystring)
Without a Session (it works fine):
response = requests.request("POST", self.url, json=payload, headers=self.headers, params=querystring)
Maybe you can provide the URL and a little more code.
In the Session example you pass data=payload.
In the second example, json=payload.
Did you create the session correctly?
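For what it's worth, a Session accepts the same keyword arguments as the module-level functions, so the working call can be mirrored through a session. A minimal sketch with a placeholder URL and payload:

import requests

session = requests.Session()
querystring = {'key': 'value'}  # placeholder querystring
payload = {'field': 'data'}     # placeholder body

# json= serializes the body as JSON and sets Content-Type automatically,
# matching the non-Session call that worked
response = session.post('https://example.com/endpoint',
                        json=payload, params=querystring)
print(response.status_code)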

Web Scraping using Requests - Python

I am trying to get data using the Requests library, but I'm doing something wrong. To explain, here is the manual search:
URL - https://www9.sabesp.com.br/agenciavirtual/pages/template/siteexterno.iface?idFuncao=18
I fill in the "Informe o RGI" field and click the Prosseguir button (like Next), which returns the result page.
Before coding, I did the manual search and checked the Form Data in the browser's developer tools.
And then I tried it with this code:
import requests
data = { "frmhome:rgi1": "0963489410"}
url = "https://www9.sabesp.com.br/agenciavirtual/block/send-receive-updates"
res = requests.post(url, data=data)
print(res.text)
My output is:
<session-expired/>
What am I doing wrong?
Many thanks.
When you go to the site in a browser, a session is created and stored in a cookie on your machine. When you then make a POST request, those cookies are sent along with it. You receive a session-expired error because you're not sending any session data with your request.
Try this code. It requests the entry page first and stores the cookies. The cookies are then sent with the POST request.
import requests

session = requests.Session()  # start a session

# GET the entry page first so the server sets its session cookies
response = session.get('https://www9.sabesp.com.br/agenciavirtual/pages/home/paginainicial.iface', timeout=30)
cks = session.cookies  # the session's cookie jar
print(session.cookies.get_dict())

data = {"frmhome:rgi1": "0963489410"}
url = "https://www9.sabesp.com.br/agenciavirtual/block/send-receive-updates"
res = requests.post(url, data=data, cookies=cks)  # send the saved cookies with the request
print(res.text)
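Since a Session is already in play, a slightly simpler variant is to POST through the session itself, which attaches its cookie jar automatically:

res = session.post(url, data=data)  # the session sends its own cookies
print(res.text)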

Making a successful Python HTTP POST Request

I am trying to write a Python script that makes a request to a desktop application listening on a local port. Below is the code I use to make the request.
import requests

payload = {"url": "abcdefghiklmnopqrstuvwxyz=",
           "password": "qertyuioplkjhgfdsazxvnm=",
           "token": "abcdefghijklmn1254786=="}
headers = {'Content-Type': 'application/json'}

r = requests.post('http://localhost:9015/login', params=payload, headers=headers)
response = requests.get("http://localhost:9015/login")
print(r.status_code)
After making the request, I get a response code of 401.
However, when I try the same thing using the Postman app, I get a successful response. These are the details I use in Postman:
URL: http://localhost:9015/login
METHOD : POST
Headers: Content-Type:application/json
Body: {"url":"abcdefghiklmnopqrstuvwxyz=",
"password":"qertyuioplkjhgfdsazxvnm=",
"token":"abcdefghijklmn1254786=="}
Can I get some suggestions on where I'm going wrong with my Python script?
You pass params when you should pass data, or, even better, json, which sets the Content-Type header automatically. So it should be:
import json
r = requests.post('http://localhost:9015/login', data=json.dumps(payload), headers=headers)
or
r = requests.post('http://localhost:9015/login', json=payload)
(params adds key-value pairs to the query string of the URL.)
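To see the difference concretely, httpbin.org echoes back both the query string and the parsed body, so a quick sketch shows where each keyword ends up:

import requests

payload = {'token': 'abc'}

# params: key-value pairs end up in the URL's query string
r = requests.post('https://httpbin.org/post', params=payload)
print(r.json()['args'])  # {'token': 'abc'}
print(r.json()['url'])   # https://httpbin.org/post?token=abc

# json: the payload is serialized into the request body as JSON
r = requests.post('https://httpbin.org/post', json=payload)
print(r.json()['json'])  # {'token': 'abc'}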

Failing to log in to a website with requests

First, here is my code:
import requests

payload = {'name': 'loginname',
           'password': 'loginpassword'}

requests.post('http://myurl/auth', data=payload, verify=False)
rl = requests.get('http://myurl/dashboard', verify=False)
print(rl.text)
My problem:
I get status code 200, which means the login was successful.
But when I try to visit the protected page http://myurl/dashboard, the output doesn't match: it shows me the login page again, and I don't get why.
I know there are many questions like this, but I've studied every answer and the docs and I still don't get it.
Any help would be very much appreciated. It's driving me crazy. Thanks in advance!
You need to use a Session object so that requests keeps track of your cookies, such as the login cookie that gets set after your requests.post login action. Quoting the first example from the Session documentation:
s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('http://httpbin.org/cookies')
print(r.text)
# '{"cookies": {"sessioncookie": "123456789"}}'
So in your case:
import requests

payload = {'name': 'loginname',
           'password': 'loginpassword'}

s = requests.Session()
r1 = s.post('http://myurl/auth', data=payload, verify=False)
if r1.status_code == 200:  # not necessary if you are sure the login succeeds
    r2 = s.get('http://myurl/dashboard', verify=False)
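One caveat: many login endpoints return 200 even when the credentials are rejected, so the status code alone doesn't prove the login worked. A more robust check is to look for content that only appears when logged in; 'Logout' below is a hypothetical marker:

r2 = s.get('http://myurl/dashboard', verify=False)
# 'Logout' is a placeholder; use any text unique to the logged-in page
print('login succeeded' if 'Logout' in r2.text else 'still seeing the login page')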

Passing a CSRF token into requests.post()

I saw this post: Passing csrftoken with python Requests.
I've been working through it, trying to make it work for Greenhouse. I'm trying to build a script that will automate profile creation.
I can fetch data using GET and cookies, but I think I'm getting stuck on the X-CSRF token. I downloaded the Live HTTP Headers plugin for Mozilla to get the CSRF token, but I'm unsure how to pass it in.
So far what I have:
import requests

csrf = 'some_csrf_token'
cookie = 'some_cookie_id'
data = {'person_first_name': 'Morgan'}  # this submits my name on the form
url = 'https://app.greenhouse.io/people/new?hiring_plan_id=24047'  # submission form page
headers = {'Cookie': cookie}
r = requests.post(url, data=data, headers=headers)
Any thoughts on how I should construct my requests.post?
If you want requests to handle the cookies for you, you should use a Session:
session = requests.session()

logindata = {'authenticity_token': 'whatevertokenis',
             'user[email]': 'your@loginemail.com',
             'user[password]': 'yourpassword',
             'user[remember_me]': '0'}

# this should log you in; I don't have an account there to test
login = session.post('https://app.greenhouse.io/users/sign_in', data=logindata)

data = {'person_first_name': 'Morgan'}
url = 'https://app.greenhouse.io/people/new?hiring_plan_id=24047'
# unless you need to set a user agent or referrer, you may not need extra headers
r = session.post(url, data=data)
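The authenticity_token is typically embedded in the sign-in page as a hidden form input, so instead of hard-coding it you can scrape it first. A sketch using BeautifulSoup, assuming a Rails-style hidden field named authenticity_token (not verified against Greenhouse's current markup):

import requests
from bs4 import BeautifulSoup

session = requests.Session()
# GET the sign-in page and pull the hidden CSRF field out of the form
page = session.get('https://app.greenhouse.io/users/sign_in')
soup = BeautifulSoup(page.text, 'html.parser')
token = soup.find('input', attrs={'name': 'authenticity_token'})['value']  # assumed field name
logindata = {'authenticity_token': token,
             'user[email]': 'your@loginemail.com',
             'user[password]': 'yourpassword'}
login = session.post('https://app.greenhouse.io/users/sign_in', data=logindata)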
