Trying to send a POST request with the cookies obtained from a GET request
#! /usr/bin/python
import re #regex
import urllib
import urllib2
# GET request
x = urllib2.urlopen("http://www.example.com")
cookies = x.headers['set-cookie']  # the cookies from the GET request
url = 'http://example'  # to see which values the form sends, submit any password and inspect the request
values = {"username": "admin",
          "passwd": password,
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw"}
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
result = response.read()
cookies = response.headers['set-cookie']  # the cookies from the POST response
Then I searched Google for how to send cookies inside the same POST request and found:
opener = urllib2.build_opener() # send the cookies
opener.addheaders.append(('Cookie', cookies)) # send the cookies
f = opener.open("http://example")
but I don't know exactly where this should go in my code.
What I need to do, exactly, is:
send a GET request, put the cookies from that request in a variable, then make a POST request that sends the cookies I got from the GET request.
If anyone knows the answer, I would appreciate an edit to my code.
Just create an HTTP opener with a cookie-jar handler; cookies will then be retrieved and passed along to the next request automatically. See:
import urllib2 as net
import cookielib
import urllib
cookiejar = cookielib.CookieJar()
cookiejar.clear_session_cookies()
opener = net.build_opener(net.HTTPCookieProcessor(cookiejar))
data = urllib.urlencode(values)
request = net.Request(url, data)  # data is already URL-encoded, so don't encode it twice
response = opener.open(request)
Because the opener holds the cookie jar, any request made through it (GET or POST) automatically carries the cookies set by previous responses.
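For example, applied to the flow from the question (a sketch; the URL and form values are placeholders):
import urllib
import urllib2 as net
import cookielib

# placeholders standing in for the question's url and values
url = 'http://example'
values = {"username": "admin", "passwd": "secret"}

# one cookie jar shared by every request made through this opener
cookiejar = cookielib.CookieJar()
opener = net.build_opener(net.HTTPCookieProcessor(cookiejar))

# GET: any Set-Cookie headers in the response are stored in the jar
opener.open("http://www.example.com")

# POST: the stored cookies are sent back automatically
data = urllib.urlencode(values)
response = opener.open(net.Request(url, data))
print response.read()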
You should really look into the requests library Python has to offer. All you need to do is make a dictionary for your cookie key/value pairs and pass it in as an argument. Your entire code could be replaced by:
import requests
url = 'http://example'
values = {"username": "admin",
          "passwd": password,
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw"}
session = requests.Session()
response = session.get(url)  # a plain GET is enough to receive the cookies
cookies = session.cookies.get_dict()
response = requests.post(url, data=values, cookies=cookies)
The second piece of code is probably what you want, but it depends on the format of the response.
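Since the Session keeps cookies between calls, you can usually skip pulling them out by hand; a minimal sketch of the same flow, reusing url and values from above:
import requests

session = requests.Session()
session.get(url)                           # cookies from the GET land in session.cookies
response = session.post(url, data=values)  # and are sent back automatically here
print(response.status_code)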
Related
I'm trying to make a program that will let me submit a username and password on a website. For this, I am using DVWA (Damn Vulnerable Web Application), which is running on localhost:8080.
But whenever I try to send the POST request, it always returns an error:
CSRF token is incorrect
Here's my code:
import requests
url = 'http://192.168.43.1:8080/login.php'
data_dict = {"username": "admin", "password": "password", "Login": "Login"}
response = requests.post(url, data_dict)
print(response.text)
You need to make a GET request to that URL first and parse the correct "CSRF" value from the response (in this case user_token). In the response HTML you can find the hidden value:
<input type="hidden" name="user_token" value="28e01134ddf00ec2ea4ce48bcaf0fc55">
Also, it seems that you need to include the cookies from the first GET request in the following request; this can be done automatically by using a requests.Session() object. You can inspect the cookies with, for example, print(resp.cookies) on the first response.
Here is the modified code. I'm using the BeautifulSoup library for parsing the HTML; it finds the correct input field and gets the value from it.
The POST afterwards uses this value in the user_token parameter.
from bs4 import BeautifulSoup
import requests
with requests.Session() as s:
    url = 'http://192.168.43.1:8080/login.php'
    resp = s.get(url)
    parsed_html = BeautifulSoup(resp.content, features="html.parser")
    input_value = parsed_html.body.find('input', attrs={'name': 'user_token'}).get("value")
    data_dict = {"username": "admin", "password": "password", "Login": "Login", "user_token": input_value}
    response = s.post(url, data_dict)
    print(response.content)
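To check whether the login actually worked, one option is to look at where you end up after the redirect; continuing from the response above (this assumes DVWA redirects away from login.php on success, which may differ by version):
# assumption: DVWA leaves login.php behind on a successful login
if "login.php" not in response.url:
    print("logged in")
else:
    print("still on the login page")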
I am trying to make a POST request in Python 2, using urllib2. My code is currently as follows:
import urllib
import urllib2
from collections import OrderedDict

url = 'http://' + server_url + '/playlists/upload?'
data = urllib.urlencode(OrderedDict([("sectionID", section_id), ("path", current_playlist), ("X-Plex-Token", plex_token)]))
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
d = response.read()
print(d)
'url' and 'data' come out correctly formatted with the variables; I know this because I can copy their output into Postman and the POST works fine there (see the example URL below):
http://192.168.1.96:32400/playlists/upload?sectionID=11&path=D%3A%5CMedia%5CPPP%5Ctmp%5Cplex%5CAmbient.m3u&X-Plex-Token=XXXXXXXXX
When I run my Python code I get a 401 error back, presumably meaning the X-Plex-Token parameter was not sent correctly, so I am not allowed access.
Can anyone tell me where I'm going wrong? Help is greatly appreciated.
Have you tried removing the question mark and not using OrderedDict (no idea why you would need that)?
url = 'http://' + server_url + '/playlists/upload'
data = urllib.urlencode({"sectionID": section_id, "path": current_playlist, "X-Plex-Token": plex_token})
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
d = response.read()
print(d)
Of course you should be using requests instead anyway:
import requests
r = requests.post('http://{}/playlists/upload'.format(server_url), data={"sectionID": section_id, "path": current_playlist, "X-Plex-Token": plex_token})
print r.url
print r.text
print r.json()  # json is a method, so it has to be called
I've ended up switching to Python 3, as I didn't realise I already had the requests module available there. Still no idea why the above wasn't working, but maybe it was something to do with the lack of headers:
headers = {'cache-control': "no-cache"}
edit:
This is what I'm using now; as mentioned above, I probably don't need OrderedDict.
import urllib.parse
from collections import OrderedDict
import requests

url = 'http://' + server_url + '/playlists/upload'
headers = {'cache-control': "no-cache"}
querystring = urllib.parse.urlencode(OrderedDict([("sectionID", section_id), ("path", current_playlist), ("X-Plex-Token", plex_token)]))
response = requests.request("POST", url, data="", headers=headers, params=querystring)
print(response.text)
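For what it's worth, requests will do the URL encoding itself if you give params a plain dict, so urllib.parse.urlencode and OrderedDict can be dropped entirely (a sketch using the same variables as above):
import requests

url = 'http://' + server_url + '/playlists/upload'
headers = {'cache-control': "no-cache"}
params = {
    "sectionID": section_id,
    "path": current_playlist,
    "X-Plex-Token": plex_token,
}

# requests percent-encodes the params and appends them to the URL
response = requests.post(url, headers=headers, params=params)
print(response.text)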
I need to log in to a website with requests, but everything I have tried doesn't work:
from bs4 import BeautifulSoup as bs
import requests
s = requests.session()
url = 'https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5'
def authenticate():
    headers = {'username': 'myuser', 'password': 'mypasss', '_Id': 'submit'}
    page = s.get(url)
    soup = bs(page.content)
    value = soup.form.find_all('input')[2]['value']
    headers.update({'value_name': value})
    auth = s.post(url, params=headers, cookies=page.cookies)

authenticate()
or:
import requests
payload = {
    'inUserName': 'user',
    'inUserPass': 'pass'
}

with requests.Session() as s:
    p = s.post('https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5', data=payload)
    print(p.text)
    print(p.status_code)
    r = s.get('A protected web page url')
    print(r.text)
When I check this with .status_code, it returns 200, but I want 401 or 403 so that I can write something like 'if logged in'...
I have found the code below, but I think it works in Python 2; I use Python 3 and I don't know how to convert it:
import requests
import sys
payload = {
    'username': 'sopier',
    'password': 'somepassword'
}

with requests.Session(config={'verbose': sys.stderr}) as c:
    c.post('http://m.kaskus.co.id/user/login', data=payload)
    r = c.get('http://m.kaskus.co/id/myform')
    print 'sopier' in r.content
Does somebody know how to do this?
I have tested every script I could find and none of them work...
When you submit the logon, the POST request is sent to https://www.ent-place.fr/CookieAuth.dll?Logon, not https://www.ent-place.fr/CookieAuth.dll?GetLogon?curl=Z2F&reason=0&formdir=5 -- you get redirected to that URL afterwards.
When I tested this, the POST request contained the following parameters:
curl:Z2F
flags:0
forcedownlevel:0
formdir:5
username:username
password:password
SubmitCreds.x:69
SubmitCreds.y:9
SubmitCreds:Ouvrir une session
So, you'll likely need to supply those additional parameters as well.
Also, the line s.post(url, params=headers, cookies=page.cookies) is not correct. You should pass headers into the keyword argument data, not params: params is encoded into the request URL, but form fields belong in the request body. (And I'm assuming you really mean payload where you say headers.)
s.post(url, data=headers, cookies=page.cookies)
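Putting both fixes together, a sketch of the corrected login (the field values are the ones captured above; whether the server needs every one of them is an assumption you should verify):
import requests

login_url = 'https://www.ent-place.fr/CookieAuth.dll?Logon'
payload = {
    'curl': 'Z2F',
    'flags': '0',
    'forcedownlevel': '0',
    'formdir': '5',
    'username': 'myuser',
    'password': 'mypass',
    'SubmitCreds.x': '69',
    'SubmitCreds.y': '9',
    'SubmitCreds': 'Ouvrir une session',
}

with requests.Session() as s:
    # form fields belong in data (the request body), not params (the URL)
    resp = s.post(login_url, data=payload)
    print(resp.status_code)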
The site you're trying to log in to runs an onClick JavaScript handler when you submit the login form. requests won't execute JavaScript for you, which may cause issues with the site's functionality.
I am trying to send a GET request through a proxy with authentication.
I have the following existing code:
import httplib
username = 'myname'
password = '1234'
proxyserver = "136.137.138.139"
url = "http://google.com"
c = httplib.HTTPConnection(proxyserver, 83, timeout = 30)
c.connect()
c.request("GET", url)
resp = c.getresponse()
data = resp.read()
print data
When running this code, I get an answer from the proxy saying that I must provide authentication, which is correct: in my code, I don't use the login and password. My problem is that I don't know how to use them!
Any ideas?
You can refer to this code if you specifically want to use httplib:
https://gist.github.com/beugley/13dd4cba88a19169bcb0
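If you do want to stay with httplib, proxy authentication usually comes down to adding a Proxy-Authorization header to each request; a minimal sketch, assuming the proxy accepts HTTP Basic auth:
import base64
import httplib

username = 'myname'
password = '1234'
proxyserver = "136.137.138.139"
url = "http://google.com"

# Basic auth: base64-encode "user:pass" and send it with the request
auth = base64.b64encode('%s:%s' % (username, password))
headers = {'Proxy-Authorization': 'Basic ' + auth}

c = httplib.HTTPConnection(proxyserver, 83, timeout=30)
c.request("GET", url, headers=headers)
resp = c.getresponse()
print resp.read()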
But you could also use the easier requests module.
import requests
proxies = {
    "http": "http://username:password@proxyserver:port/",
    # "https": "https://username:password@proxyserver:port/",
}
url = 'http://google.com'
response = requests.get(url, proxies=proxies)
print(response.text)
I'm trying to write a simple script to log into Wikipedia and perform some actions on my user page, using the MediaWiki API. However, I never seem to get past the first login request (from this page: https://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Logging_in). I don't think the session cookie that I set is being sent. This is my code so far:
import Cookie, urllib, urllib2, xml.etree.ElementTree
url = 'https://en.wikipedia.org/w/api.php?action=login&format=xml'
username = 'user'
password = 'password'
user_data = [('lgname', username), ('lgpassword', password)]
#Login step 1
#Make the POST request
request = urllib2.Request(url)
data = urllib.urlencode(user_data)
login_raw_data1 = urllib2.urlopen(request, data).read()
#Parse the XML for the login information
login_data1 = xml.etree.ElementTree.fromstring(login_raw_data1)
login_tag = login_data1.find('login')
token = login_tag.attrib['token']
cookieprefix = login_tag.attrib['cookieprefix']
sessionid = login_tag.attrib['sessionid']
#Set the cookies
cookie = Cookie.SimpleCookie()
cookie[cookieprefix + '_session'] = sessionid
#Login step 2
request = urllib2.Request(url)
session_cookie_header = cookieprefix+'_session='+sessionid+'; path=/; domain=.wikipedia.org; HttpOnly'
request.add_header('Set-Cookie', session_cookie_header)
user_data.append(('lgtoken', token))
data = urllib.urlencode(user_data)
login_raw_data2 = urllib2.urlopen(request, data).read()
I think the problem is somewhere in the request.add_header('Set-Cookie', session_cookie_header) line, but I don't know for sure. How do I use these Python libraries to send cookies in the header with every request (which is necessary for a lot of API functions)?
The latest version of requests has support for sessions (as well as being really simple to use and generally great):
import requests

with requests.session() as s:
    s.post(url, data=user_data)
    r = s.get(url_2)
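Applied to the MediaWiki login from the question, the two-step token dance collapses to something like this (a sketch, assuming the same legacy action=login API the question uses):
import xml.etree.ElementTree
import requests

url = 'https://en.wikipedia.org/w/api.php?action=login&format=xml'
user_data = {'lgname': 'user', 'lgpassword': 'password'}

with requests.session() as s:
    # step 1: request a login token; the session cookie is stored automatically
    resp = s.post(url, data=user_data)
    login_tag = xml.etree.ElementTree.fromstring(resp.content).find('login')
    user_data['lgtoken'] = login_tag.attrib['token']

    # step 2: resend the credentials plus the token; the cookie from
    # step 1 is attached to this request automatically
    resp = s.post(url, data=user_data)
    print(resp.text)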