Trying to make an HTTP POST, passing two parameters for authentication: user and password.
import requests

url = 'http://10.10.13.3:8000/api/login'
payload = {'user': 'admin', 'password': 'admin'}
response = requests.post(url, data=payload)
print(response.url)
print(response.text)
What's weird is that the response I get back is the same whether I log in with the right or the wrong user/password, yet logging in through the website itself works. Is this the right way to POST for authentication?
You should replace data with json, like this:
import requests

url = 'http://10.10.13.3:8000/api/login'
payload = {'user': 'admin', 'password': 'admin'}
response = requests.post(url, json=payload)
print(response.url)
print(response.text)
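The reason this matters: data= sends a form-encoded body, while json= serializes the dict to JSON and sets the Content-Type header to application/json, which is what most APIs expect. A sketch of the two body formats the server would receive, built with only the standard library:

```python
import json
from urllib.parse import urlencode

payload = {'user': 'admin', 'password': 'admin'}

# What requests sends with data=payload (application/x-www-form-urlencoded):
form_body = urlencode(payload)
print(form_body)   # user=admin&password=admin

# What requests sends with json=payload (application/json):
json_body = json.dumps(payload)
print(json_body)   # {"user": "admin", "password": "admin"}
```

If the API returns the same response for good and bad credentials, it often means it never understood the body format in the first place.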
I am trying to login to a website using python.
The login URL is :
https://login.flash.co.za/apex/f?p=pwfone:login
and the 'form action' url is shown as :
https://login.flash.co.za/apex/wwv_flow.accept
When I use 'inspect element' in Chrome while logging in manually, these are the form posts that show up (p_t02 = password):
There are a few hidden items that I'm not sure how to add into the Python code below.
When I use this code, the login page is returned:
import requests

url = 'https://login.flash.co.za/apex/wwv_flow.accept'
values = {
    'p_flow_id': '1500',
    'p_flow_step_id': '101',
    'p_page_submission_id': '3169092211412',
    'p_request': 'LOGIN',
    'p_t01': 'solar',
    'p_t02': 'password',
    'p_checksum': ''
}
r = requests.post(url, data=values)
print(r.content)
How can I adjust this code to perform a login?
Chrome network:
This is more or less how your script should look. Use a session to handle the cookies automatically. Fill in the username and password fields manually.
import requests
from bs4 import BeautifulSoup

logurl = "https://login.flash.co.za/apex/f?p=pwfone:login"
posturl = 'https://login.flash.co.za/apex/wwv_flow.accept'

with requests.Session() as s:
    s.headers = {"User-Agent": "Mozilla/5.0"}
    res = s.get(logurl)
    soup = BeautifulSoup(res.text, "lxml")
    values = {
        'p_flow_id': soup.select_one("[name='p_flow_id']")['value'],
        'p_flow_step_id': soup.select_one("[name='p_flow_step_id']")['value'],
        'p_instance': soup.select_one("[name='p_instance']")['value'],
        'p_page_submission_id': soup.select_one("[name='p_page_submission_id']")['value'],
        'p_request': 'LOGIN',
        'p_arg_names': soup.select_one("[name='p_arg_names']")['value'],
        'p_t01': 'username',
        'p_t02': 'password',
        'p_md5_checksum': soup.select_one("[name='p_md5_checksum']")['value'],
        'p_page_checksum': soup.select_one("[name='p_page_checksum']")['value']
    }
    r = s.post(posturl, data=values)
    print(r.content)
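Instead of selecting each hidden field by hand, you can also collect every hidden input in one pass. The answer above does this with BeautifulSoup; here is the same idea sketched with only the standard library's HTML parser, run against a hypothetical login-page fragment for illustration:

```python
from html.parser import HTMLParser

class HiddenInputCollector(HTMLParser):
    """Collects the name/value pairs of every <input type="hidden"> tag."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != 'input':
            return
        a = dict(attrs)
        if a.get('type') == 'hidden' and 'name' in a:
            self.fields[a['name']] = a.get('value', '')

# Hypothetical fragment of the rendered login page:
html = '''
<form>
  <input type="hidden" name="p_flow_id" value="1500">
  <input type="hidden" name="p_instance" value="123456">
  <input type="text" name="p_t01">
</form>
'''
parser = HiddenInputCollector()
parser.feed(html)
print(parser.fields)   # {'p_flow_id': '1500', 'p_instance': '123456'}
```

The resulting dict can then be merged with the visible fields (username, password) before POSTing.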
Since I cannot recreate your case I can't tell you exactly what to change, but when I was doing such things I used Postman to intercept all the requests my browser sends. I'd install it along with the browser extension and then perform the login. You can then inspect the request and the response it received in Postman; what's more, it generates Python code for the request, so you can simply copy and use it.
In short: use Postman, perform the login, and clone its request.
I am trying to send a simple POST to an api.
import requests
import json

url = "http://someapi/v1/auth"
payload = {'username': '', 'password': ''}
s1 = requests.post(url, headers={"content-type": "application/x-www-form-urlencoded"}, data=json.dumps(payload))
print(s1.status_code)
I keep getting status code 401.
The same steps work fine in Postman.
Any ideas/pointers?
Post the data in raw form-encoded format (note the separator is & rather than a semicolon):
payload = "username=&password="
s1 = requests.post(
    url,
    headers={"content-type": "application/x-www-form-urlencoded"},
    data=payload)
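Rather than writing the raw string by hand, you can let the standard library build it, which also takes care of percent-encoding. The credentials here are placeholders:

```python
from urllib.parse import urlencode

# Placeholder credentials for illustration only
raw = urlencode({'username': 'myuser', 'password': 'mypass'})
print(raw)   # username=myuser&password=mypass
```

This produces exactly the body shape an application/x-www-form-urlencoded endpoint expects, which is also what requests builds for you when you pass a dict to data=.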
FWIW, you can click on Code below the Save button on the top right corner of Postman to view code in a couple of languages for your request.
That will only work if the API also accepts a JSON body.
Otherwise you can use Oluwafemi Sule's answer.
import requests
import json

url = "http://someapi/v1/auth"
payload = {'username': '', 'password': ''}
s1 = requests.post(url, headers={"content-type": "application/json"}, data=json.dumps(payload))
print(s1.status_code)
This code worked for me.
import requests
from requests_ntlm import HttpNtlmAuth

payload = "username=&password="
s = requests.post(
    "http://someapi/v1/auth",
    headers={"content-type": "application/x-www-form-urlencoded"},
    data=payload,
    auth=HttpNtlmAuth('', ''))
print(s.status_code)
I need to log in to a website with requests, but everything I have tried doesn't work:
from bs4 import BeautifulSoup as bs
import requests

s = requests.session()
url = 'https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5'

def authenticate():
    headers = {'username': 'myuser', 'password': 'mypasss', '_Id': 'submit'}
    page = s.get(url)
    soup = bs(page.content)
    value = soup.form.find_all('input')[2]['value']
    headers.update({'value_name': value})
    auth = s.post(url, params=headers, cookies=page.cookies)

authenticate()
or:
import requests

payload = {
    'inUserName': 'user',
    'inUserPass': 'pass'
}

with requests.Session() as s:
    p = s.post('https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5', data=payload)
    print(p.text)
    print(p.status_code)
    r = s.get('A protected web page url')
    print(r.text)
When I check the .status_code, it returns 200, but I want 401 or 403 so I can write something like 'if login'...
I have also found this, but I think it works in Python 2; I use Python 3 and don't know how to convert it:
import requests
import sys

payload = {
    'username': 'sopier',
    'password': 'somepassword'
}

with requests.Session(config={'verbose': sys.stderr}) as c:
    c.post('http://m.kaskus.co.id/user/login', data=payload)
    r = c.get('http://m.kaskus.co/id/myform')
    print 'sopier' in r.content
Does anybody know how to do this? I have tested every script I found and none of them work...
When you submit the logon, the POST request is sent to https://www.ent-place.fr/CookieAuth.dll?Logon not https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5 -- You get redirected to that URL afterwards.
When I tested this, the post request contains the following parameters:
curl:Z2F
flags:0
forcedownlevel:0
formdir:5
username:username
password:password
SubmitCreds.x:69
SubmitCreds.y:9
SubmitCreds:Ouvrir une session
So, you'll likely need to supply those additional parameters as well.
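Putting those observations together, the form payload for the POST would look something like this. The credentials are placeholders, and in a real run the coordinates and any per-page values should be taken from the actual rendered form:

```python
# Sketch of the full Logon form payload; username/password are
# placeholders you would fill in yourself.
values = {
    'curl': 'Z2F',
    'flags': '0',
    'forcedownlevel': '0',
    'formdir': '5',
    'username': 'myuser',   # placeholder
    'password': 'mypass',   # placeholder
    'SubmitCreds.x': '69',
    'SubmitCreds.y': '9',
    'SubmitCreds': 'Ouvrir une session',
}
print(sorted(values))
```

This dict would then be passed as data= in the POST to the Logon URL.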
Also, the line s.post(url, params=headers, cookies=page.cookies) is not correct. You should pass headers into the keyword argument data, not params -- params encodes into the request URL, but you need to send it as form data. (And I'm assuming you really mean payload when you say headers.)
s.post(url, data=headers, cookies=page.cookies)
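To see why the distinction matters: params are appended to the URL as a query string, while data becomes the request body. A stdlib sketch of where each one ends up (placeholder credentials, hypothetical base URL shape):

```python
from urllib.parse import urlencode

base = 'https://www.ent-place.fr/CookieAuth.dll'
fields = {'username': 'myuser', 'password': 'mypass'}

# params=... -> the fields leak into the URL (wrong for a login form):
as_query = base + '?' + urlencode(fields)

# data=... -> the fields travel in the POST body (what the server expects):
as_body = urlencode(fields)

print(as_query)
print(as_body)
```

A login endpoint reads the body, so with params= it simply never sees the credentials.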
The site you're trying to login to has an onClick JavaScript when you process the login form. requests won't be able to execute JavaScript for you. This may cause issues with the site functionality.
I'm coding a crawler for www.researchgate.net, but it seems that I'll be stuck on the login page forever.
Here's my code:
import requests
from bs4 import BeautifulSoup

session = requests.Session()
params = {'login': 'my_email', 'password': 'my_password'}
session.post("https://www.researchgate.net/application.Login.html", data=params)
s = session.get("https://www.researchgate.net/search.Search.html?type=researcher&query=zhang")
print(BeautifulSoup(s.text).title)
Can anybody find anything wrong with my code? Why does s redirect to the login page every time?
There are hidden fields in the login form that probably need to be supplied (I can't test - I don't have a login there).
One is request_token which is set to a long base64 encoded string. Others are invalidPasswordCount and loginCookie which might also be required.
Further to that there is a session cookie that you might need to send with the login credentials.
To make this work will require an initial GET to get the request_token, which you need to extract somehow - e.g. with BeautifulSoup. If you use your requests session then the cookie will be presented in the following POST, so you shouldn't need to worry about that.
import requests
from bs4 import BeautifulSoup

session = requests.Session()

# initial GET to retrieve token and set cookies
r = session.get('https://www.researchgate.net/application.Login.html')
soup = BeautifulSoup(r.text)
request_token = soup.find('input', attrs={'name': 'request_token'})['value']

params = {'login': 'my_email', 'password': 'my_password', 'request_token': request_token, 'invalidPasswordCount': 0, 'loginCookie': 'yes'}
session.post("https://www.researchgate.net/application.Login.html", data=params)

s = session.get("https://www.researchgate.net/search.Search.html?type=researcher&query=zhang")
print(BeautifulSoup(s.text).title)
Thanks to mhawke; I modified my original code as he suggested and finally logged in successfully.
Here's my new code:
import requests
from bs4 import BeautifulSoup

session = requests.Session()
loginpage = session.get("https://www.researchgate.net/application.Login.html")
request_token = BeautifulSoup(loginpage.text).form.find("input", {"name": "request_token"}).attrs["value"]
print(request_token)
params = {
    "request_token": request_token,
    "invalidPasswordCount": "0",
    'login': 'my_email',
    'password': 'my_password',
    "setLoginCookie": "yes"
}
session.post("https://www.researchgate.net/application.Login.html", data=params)
#print(s.cookies.get_dict())
s = session.get("https://www.researchgate.net/search.Search.html?type=researcher&query=zhang")
print(BeautifulSoup(s.text).title)
I know my question may not be a very good one, but as someone who is new to Python I have a question:
I wrote code that logs me in to my page:
import urllib, urllib2, cookielib
email = 'myuser'
password = 'mypass'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'email' : email, 'password' : password})
opener.open('http://test.com/signin', login_data)
resp = opener.open('http://test.com/dashboard')
print resp.read()
and now I am connected to my page....
This is my tamper data from when I send a message to the site:
How can I send hello with python now?
could you possibly complete the code and tell me how it is done?
UPDATE
I changed my code like so:
import requests

url1 = 'http://test.com/signin'
data1 = {
    'email': 'user',
    'password': 'pass',
}
requests.post(url1, data=data1)

url2 = 'http://test.com/dashboard'
data2 = {
    'post_temp_id': '61jm5by188',
    'message': 'hello',
}
requests.post(url2, data=data2)
But no result
Thank you
Although you could start off using urllib, you'll be happier using requests. How to use the POST method:
import requests
resp = requests.post('http://test.com/dashboard', data={'post_temp_id': '61jm5by188', 'message': 'hello'})
Pretty simple, right? Dictionaries can be used to define headers, cookies, and whatever else you'd want to include in your request. Most requests will only need a single line of code.
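For instance, headers and cookies are just dictionaries attached to the request. Here is the same idea shown with the standard library's urllib so it runs without a live server (the URL and cookie value are placeholders; nothing is actually sent until urlopen() is called):

```python
import urllib.request

# Build a POST request carrying a form body, a custom header, and a cookie.
req = urllib.request.Request(
    'http://test.com/dashboard',                     # placeholder URL
    data=b'post_temp_id=61jm5by188&message=hello',   # form-encoded body
    headers={'User-Agent': 'Mozilla/5.0', 'Cookie': 'sessionid=abc123'},
)
print(req.get_method())               # POST (implied by the data argument)
print(req.get_header('User-agent'))   # Mozilla/5.0
```

With requests the equivalent dictionaries go into the headers= and cookies= keyword arguments of requests.post.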
EDIT1: I don't have a test.com account, but you may try using this script to test out the POST method. This website will echo what you submit in the form, and the script should get you the same response:
import requests

resp = requests.post('http://hroch486.icpf.cas.cz/cgi-bin/echo.pl',
                     data={'your_name': 'myname',
                           'fruit': ['Banana', 'Lemon', 'Plum']})
idx1 = resp.text.index('Parsed values')
idx2 = resp.text.index('No cookies')
print(resp.text[idx1:idx2])
From the HTML you received, here's what you should see:
Parsed values</H2>
<UL>
<LI>fruit:
<UL compact type=square>
<LI>Banana
<LI>Lemon
<LI>Plum
</UL>
<LI>your_name = myname
</UL>
<H2>
EDIT2: How to use a session object:
from requests import Session
s = Session()
# Don't just copy this; set your data accordingly...
url1 = url2 = data1 = data2 = ...
resp1 = s.post(url1, data=data1)
resp2 = s.post(url2, data=data2)
The advantage of a session object is that it stores any cookies and headers from previous responses.
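To see why that matters for a login flow, here is a toy model of the bookkeeping a Session does for you: cookies handed back by one response are replayed on every later request. This is a simplified illustration, not how requests is actually implemented:

```python
class MiniSession:
    """Toy model of a requests.Session: remembers cookies between calls."""
    def __init__(self):
        self.cookies = {}

    def handle_response(self, set_cookies):
        # A real session parses Set-Cookie headers; here we take a dict.
        self.cookies.update(set_cookies)

    def request_headers(self):
        # Cookies from earlier responses ride along on every new request.
        return {'Cookie': '; '.join(f'{k}={v}' for k, v in self.cookies.items())}

s = MiniSession()
s.handle_response({'sessionid': 'abc123'})   # pretend the login POST set this
print(s.request_headers())   # {'Cookie': 'sessionid=abc123'}
```

This is exactly why the two separate requests.post calls in the question never work: the second request starts with an empty cookie jar, so the server sees it as an anonymous visitor.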