CSRF Token Missing When Posting Request To DVWA Using Python Requests Library - python

I'm trying to make a program that will allow me to submit a username and password on a website. For this, I am using DVWA (Damn Vulnerable Web Application), which is running on localhost:8080.
But whenever I try to send the POST request, it always returns an error:
CSRF token is incorrect
Here's my code:
import requests
url = 'http://192.168.43.1:8080/login.php'
data_dict = {"username": "admin", "password": "password", "Login": "Login"}
response = requests.post(url, data_dict)
print(response.text)

You need to make a GET request to that URL first and parse the correct CSRF value from the response (in this case, user_token). In the response HTML you can find the hidden value:
<input type="hidden" name="user_token" value="28e01134ddf00ec2ea4ce48bcaf0fc55">
Also, it seems that you need to include the cookies from the first GET request in the following request; this can be done automatically by using a requests.Session() object. You can inspect the cookies with, for example, print(resp.cookies) on the first response.
Here is the modified code. I'm using the BeautifulSoup library to parse the HTML: it finds the correct input field and gets the value from it.
The POST afterwards sends this value in the user_token parameter.
from bs4 import BeautifulSoup
import requests

with requests.Session() as s:
    url = 'http://192.168.43.1:8080/login.php'
    resp = s.get(url)
    parsed_html = BeautifulSoup(resp.content, features="html.parser")
    input_value = parsed_html.body.find('input', attrs={'name': 'user_token'}).get("value")
    data_dict = {"username": "admin", "password": "password", "Login": "Login", "user_token": input_value}
    response = s.post(url, data_dict)
    print(response.content)
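To confirm the login worked, one rough check is to look at the final URL (an assumption about DVWA's behaviour: a successful login redirects to index.php, while a failure comes back to login.php):

# rough success check; assumes DVWA redirects away from login.php on success
if "login.php" not in response.url:
    print("Logged in:", response.url)
else:
    print("Login failed")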

Related

Module 'requests' doesn't go through with the login

I am trying to get information from a website by using the requests module. To get to the information you have to be logged in, and only then can you access the page. I looked at the input tags and noticed that they are called login_username and login_password, but for some reason the POST doesn't go through. I also read here that someone solved it by waiting a few seconds before requesting the next page, but that didn't help either.
Here is my code:
import requests
import time
#This URL will be the URL that your login form points to with the "action" tag.
loginurl = 'https://jadepanel.nephrite.ro/login'
#This URL is the page you actually want to pull down with requests.
requesturl = 'https://jadepanel.nephrite.ro/clan/view/123'
payload = {
    'login_username': 'username',
    'login_password': 'password'
}

with requests.Session() as session:
    post = session.post(loginurl, data=payload)
    time.sleep(3)
    r = session.get(requesturl)
    print(r.text)
login_username and login_password are not the only necessary parameters. If you look at the /login POST request in the browser developer tools, you will see that there is also a _token being sent.
This is something you would need to parse out of the login HTML. So the flow would be the following:
get the https://jadepanel.nephrite.ro/login page
parse the HTML and extract the _token value
make a POST request with the login, password and token
use the logged-in session to navigate the site
For the HTML parsing we could use BeautifulSoup (there are other options, of course):
from bs4 import BeautifulSoup

login_html = session.get(loginurl).text
soup = BeautifulSoup(login_html, "html.parser")
token = soup.find("input", {"name": "_token"})["value"]
payload = {
    'login_username': 'username',
    'login_password': 'password',
    '_token': token
}
Complete code:
import time
import requests
from bs4 import BeautifulSoup

# This URL will be the URL that your login form points to with the "action" tag.
loginurl = 'https://jadepanel.nephrite.ro/login'
# This URL is the page you actually want to pull down with requests.
requesturl = 'https://jadepanel.nephrite.ro/clan/view/123'

with requests.Session() as session:
    login_html = session.get(loginurl).text
    soup = BeautifulSoup(login_html, "html.parser")
    token = soup.find("input", {"name": "_token"})["value"]
    payload = {
        'login_username': 'username',
        'login_password': 'password',
        '_token': token
    }
    post = session.post(loginurl, data=payload)
    time.sleep(3)
    r = session.get(requesturl)
    print(r.text)
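As a quick sanity check (an assumption, since the site's exact behaviour isn't shown here: most login forms redirect away from /login on success), you can inspect where the POST ended up:

# hypothetical check: a successful login usually redirects away from /login
if "/login" not in post.url:
    print("login appears successful")
else:
    print("still on the login page; check the credentials and _token")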

Python web scraping login

I am trying to log in to a website using Python.
The login URL is :
https://login.flash.co.za/apex/f?p=pwfone:login
and the 'form action' url is shown as :
https://login.flash.co.za/apex/wwv_flow.accept
When I use 'inspect element' in Chrome while logging in manually, these are the form fields that get posted (p_t02 = password).
There are a few hidden items that I'm not sure how to add into the Python code below.
When I use this code, the login page is returned:
import requests

url = 'https://login.flash.co.za/apex/wwv_flow.accept'
values = {'p_flow_id': '1500',
          'p_flow_step_id': '101',
          'p_page_submission_id': '3169092211412',
          'p_request': 'LOGIN',
          'p_t01': 'solar',
          'p_t02': 'password',
          'p_checksum': ''}
r = requests.post(url, data=values)
print(r.content)
How can I adjust this code to perform a login?
Chrome network tab: (screenshot omitted)
This is more or less how your script should look. Use a session to handle the cookies automatically, and fill in the username and password fields manually.
import requests
from bs4 import BeautifulSoup

logurl = "https://login.flash.co.za/apex/f?p=pwfone:login"
posturl = 'https://login.flash.co.za/apex/wwv_flow.accept'

with requests.Session() as s:
    s.headers = {"User-Agent": "Mozilla/5.0"}
    res = s.get(logurl)
    soup = BeautifulSoup(res.text, "lxml")
    # note: a dict can hold each key only once, so p_arg_names appears once
    # here; if the form repeats it, use a list of (name, value) tuples instead
    values = {
        'p_flow_id': soup.select_one("[name='p_flow_id']")['value'],
        'p_flow_step_id': soup.select_one("[name='p_flow_step_id']")['value'],
        'p_instance': soup.select_one("[name='p_instance']")['value'],
        'p_page_submission_id': soup.select_one("[name='p_page_submission_id']")['value'],
        'p_request': 'LOGIN',
        'p_arg_names': soup.select_one("[name='p_arg_names']")['value'],
        'p_t01': 'username',
        'p_t02': 'password',
        'p_md5_checksum': soup.select_one("[name='p_md5_checksum']")['value'],
        'p_page_checksum': soup.select_one("[name='p_page_checksum']")['value']
    }
    r = s.post(posturl, data=values)
    print(r.content)
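Since APEX login pages carry many hidden fields and the exact set can vary, a less brittle variant is to collect every hidden input generically instead of naming each one. A sketch, assuming all hidden fields should simply be echoed back unchanged:

def hidden_fields(soup):
    """Collect all <input type="hidden"> name/value pairs from a parsed page."""
    return {tag["name"]: tag.get("value", "")
            for tag in soup.select("input[type=hidden]")
            if tag.get("name")}

values = hidden_fields(soup)
# overlay the fields the user actually fills in
values.update({'p_request': 'LOGIN', 'p_t01': 'username', 'p_t02': 'password'})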
Since I cannot recreate your case, I can't tell you exactly what to change, but when I was doing such things I used Postman to intercept all the requests my browser sends. I'd install that, along with the browser extension, and then perform the login. Then you can view the request in Postman, along with the response it received; what's more, it provides you with Python code for the request too, so you can simply copy and use it.
In short: use Postman, perform the login, clone the request.
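For reference, the Python code Postman generates for a captured login POST looks roughly like this (a hypothetical example: the URL, headers, and field values below are placeholders, not the real site's values):

import requests

url = "https://example.com/login"
headers = {
    "User-Agent": "Mozilla/5.0",
    "Content-Type": "application/x-www-form-urlencoded",
}
# the token value is whatever Postman captured during the manual login
data = {"login_username": "user", "login_password": "pass", "_token": "<captured value>"}

response = requests.post(url, headers=headers, data=data)
print(response.status_code)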

Logging in to a website with requests

I need to log myself in to a website with requests, but everything I have tried doesn't work:
from bs4 import BeautifulSoup as bs
import requests

s = requests.session()
url = 'https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5'

def authenticate():
    headers = {'username': 'myuser', 'password': 'mypasss', '_Id': 'submit'}
    page = s.get(url)
    soup = bs(page.content)
    value = soup.form.find_all('input')[2]['value']
    headers.update({'value_name': value})
    auth = s.post(url, params=headers, cookies=page.cookies)

authenticate()
or:

import requests

payload = {
    'inUserName': 'user',
    'inUserPass': 'pass'
}

with requests.Session() as s:
    p = s.post('https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5', data=payload)
    print(p.text)
    print(p.status_code)
    r = s.get('A protected web page url')
    print(r.text)
When I check this with .status_code, it returns 200, but I want 401 or 403 so I can write something like 'if logged in'...
I have also found this, but I think it works in Python 2; I use Python 3 and I don't know how to convert it:
import requests
import sys

payload = {
    'username': 'sopier',
    'password': 'somepassword'
}

with requests.Session(config={'verbose': sys.stderr}) as c:
    c.post('http://m.kaskus.co.id/user/login', data=payload)
    r = c.get('http://m.kaskus.co/id/myform')
    print 'sopier' in r.content
Does anybody know how to do this? I have tested every script I have found and none of them work...
When you submit the logon, the POST request is sent to https://www.ent-place.fr/CookieAuth.dll?Logon, not https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5; you get redirected to that URL afterwards.
When I tested this, the post request contains the following parameters:
curl:Z2F
flags:0
forcedownlevel:0
formdir:5
username:username
password:password
SubmitCreds.x:69
SubmitCreds.y:9
SubmitCreds:Ouvrir une session
So, you'll likely need to supply those additional parameters as well.
Also, the line s.post(url, params=headers, cookies=page.cookies) is not correct. You should pass headers via the keyword argument data, not params: params is encoded into the request URL, but this belongs in the form data. (I'm also assuming you really mean payload where you say headers.)
s.post(url, data=headers, cookies=page.cookies)
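Putting it together, a minimal sketch of the corrected flow (assuming the parameter names and values captured above; the SubmitCreds.x/y values are just the click coordinates on the button and likely don't need to match exactly):

import requests

logon_url = 'https://www.ent-place.fr/CookieAuth.dll?Logon'  # the real form target

payload = {
    'curl': 'Z2F',
    'flags': '0',
    'forcedownlevel': '0',
    'formdir': '5',
    'username': 'myuser',
    'password': 'mypass',
    'SubmitCreds.x': '69',
    'SubmitCreds.y': '9',
    'SubmitCreds': 'Ouvrir une session',
}

with requests.Session() as s:
    r = s.post(logon_url, data=payload)
    # a redirect away from the logon page suggests the login was accepted
    print(r.status_code, r.url)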
The site you're trying to log in to runs an onClick JavaScript handler when you submit the login form. requests won't be able to execute JavaScript for you, which may cause issues with the site's functionality.

How to send cookies inside a POST request

I am trying to send a POST request with the cookies obtained on my PC from a GET request.
#!/usr/bin/python
import re  # regex
import urllib
import urllib2

# GET request
x = urllib2.urlopen("http://www.example.com")  # GET request
cookies = x.headers['set-cookie']  # the cookies from the GET request

url = 'http://example'  # to know the values, type any password to see the cookies
values = {"username": "admin",
          "passwd": password,
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw"}

data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
result = response.read()
cookies = response.headers['set-cookie']  # the last cookies, from the POST request, end up in this variable
Then I searched Google for how to send cookies inside the same POST request and found:

opener = urllib2.build_opener()                # send the cookies
opener.addheaders.append(('Cookie', cookies))  # send the cookies
f = opener.open("http://example")

but I don't know exactly where to put it in my code.
What I need to do exactly is:
send a GET request, put the cookies from that request in a variable, then make a POST request that sends the cookies I got from the GET request.
If anyone knows the answer, I need an edit to my code.
Just create an HTTP opener with a cookiejar handler. Cookies will then be retrieved and passed along to the next request automatically. See:
import urllib2 as net
import cookielib
import urllib

cookiejar = cookielib.CookieJar()
cookiejar.clear_session_cookies()
opener = net.build_opener(net.HTTPCookieProcessor(cookiejar))

data = urllib.urlencode(values)   # values is the dict of form fields
request = net.Request(url, data)  # note: encode once, not twice
response = opener.open(request)
As the opener is a shared handler, just make any request and the cookies set by the previous request will be included in the next request (POST or GET) automatically.
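A short usage sketch of that opener across two requests (in the same Python 2 style as the code above; the URLs are placeholders):

# first request: any Set-Cookie headers land in the cookiejar
opener.open("http://example/login-page")

# second request: the jar's cookies are attached automatically
response = opener.open(net.Request(url, urllib.urlencode(values)))
print response.read()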
You should really look into the requests library Python has to offer. All you need to do is make a dictionary for your cookie key/value pairs and pass it in as an argument.
Your entire code could be replaced by:
import requests

url = 'http://example'  # to know the values, type any password to see the cookies
values = {"username": "admin",
          "passwd": password,  # password defined elsewhere
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw"}

session = requests.Session()
response = session.get(url)  # initial GET to receive the cookies
cookies = session.cookies.get_dict()
response = requests.post(url, data=values, cookies=cookies)
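In fact, since a Session sends its stored cookies back automatically, extracting them by hand isn't strictly needed; an even shorter sketch of the same flow:

with requests.Session() as session:
    session.get(url)  # the GET stores any Set-Cookie values in the session
    response = session.post(url, data=values)  # those cookies are sent back automatically
    print(response.status_code)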
The second piece of code is probably what you want, but it depends on the format of the response.

python-requests and complicated forms

I'm trying to make a web scraper for my university website, but I can't get past the login page.
import requests

URL = "https://login.ull.es/cas-1/login?service=https%3A%2F%2Fcampusvirtual.ull.es%2Flogin%2Findex.php%3FauthCAS%3DCAS"
USER = "myuser"
PASS = "mypassword"

payload = {
    "username": USER,
    "password": PASS,
    "warn": "false",
    "lt": "LT-2455188-fQ7b5JcHghCg1cLYvIMzpjpSEd0rlu",
    "execution": "e1s1",
    "_eventId": "submit",
    "submit": "submit"
}

with requests.Session() as s:
    r = s.post(URL, data=payload)
    #r = s.get(r"http://campusvirtual.ull.es/my/index.php")
    with open("test.html", "w") as f:
        f.write(r.text)
That code is obviously not working and I don't know where the mistake is. I tried putting only the username and the password in the payload (the other values come from fields marked as hidden in the page source), but that also fails.
Can anyone point me in the right direction? Thanks. (Sorry for my English.)
The "lt": "LT-2455188-fQ7b5JcHghCg1cLYvIMzpjpSEd0rlu" is a session ID or some sort of anti-CSRF protection or similar (wild guess: hmac-ed random id number). What matters is that it is not a constant value, you will have to read it from the same URL by issuing a GET request.
In the GET response you have something like:
<input type="hidden" name="lt" value="LT-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />
Additionally, there is a JSESSIONID cookie that might be important.
This should be your flow:
GET the URL
extract the lt parameter and the JSESSIONID cookie from the response
fill the payload['lt'] field
set cookie header
POST the same URL.
Extracting the cookie is very simple; see the requests documentation. Extracting the lt param is a bit more difficult, but you can do it using the BeautifulSoup package. Assuming that you have the response body in a variable named text, you can use:
from bs4 import BeautifulSoup as soup

payload['lt'] = soup(text, 'html.parser').find('input', {'name': 'lt', 'type': 'hidden'}).get('value')
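A minimal end-to-end sketch combining those steps with a requests.Session, which stores the JSESSIONID cookie for you (field names follow the form shown above):

import requests
from bs4 import BeautifulSoup

with requests.Session() as s:
    text = s.get(URL).text  # step 1: GET; JSESSIONID lands in the session
    page = BeautifulSoup(text, 'html.parser')
    payload['lt'] = page.find('input', {'name': 'lt', 'type': 'hidden'}).get('value')
    r = s.post(URL, data=payload)  # POST the same URL with the fresh lt value
    print(r.status_code)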
