How can I send an HTTP request as a string with Python? Something like this:
r = """GET /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.stackoverflow.com
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive"""
answer = send(r)
print(answer)  # gives me the response as a string
Assuming Python 3, it is recommended that you use urllib.request.
But since you specifically ask about providing the HTTP message as a string,
I assume you want to do low-level stuff, so you can also use http.client:
import http.client

connection = http.client.HTTPConnection('www.python.org')
connection.request('GET', '/')        # method and path; standard headers are added for you
response = connection.getresponse()
print(response.status)                # numeric status code, e.g. 200 or 301
print(response.msg)                   # the response headers
answer = response.read()              # the body, as bytes
print(answer)
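If you really do want to hand the server the message as one literal string, which http.client will not do for you, a minimal sketch with the standard socket module looks like this; it assumes plain HTTP on port 80, CRLF line endings, and Connection: close so the read loop terminates:
import socket

# The request text is sent verbatim; note the blank line that ends the headers.
request = (
    "GET /hello.htm HTTP/1.1\r\n"
    "Host: www.stackoverflow.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("www.stackoverflow.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

answer = b"".join(chunks).decode("iso-8859-1")
print(answer)  # status line, headers and body as one string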
Related
I'm trying to provide an "mfa-code" parameter to a POST request using Python requests, but the response I'm getting says the parameter "mfa-code" is missing, even though I try to provide it via requests.post(url, data={"mfa-code": "0000"}) and have also tried requests.post(url, json={"mfa-code": "0000"}).
Here is what I'm trying to send.
POST /login2 HTTP/1.1
Host: redacted.net
Cookie: session=qexMWyQnLtSlBI8B005qnVW4OYvEwEV2; verify=wiener
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://redacted.net/login2
Content-Type: application/x-www-form-urlencoded
Content-Length: 13
Origin: https://redacted.net
Upgrade-Insecure-Requests: 1
Dnt: 1
Sec-Gpc: 1
Te: trailers
Connection: close
mfa-code=0000
And this is the request I'm sending with my Python script:
import requests
url = "https://redacted.net/login2"  # full endpoint from the capture above
data = {"mfa-code": "0000"}
r = requests.post(url, data=data)
print(r.text)
This results in a response only stating:
"Missing parameter 'mfa-code'"
I noticed that in the response mfa-code is surrounded with single quotes, so I went to Burp Repeater, put single quotes around mfa-code, and sure enough received the same error.
I then tried other options like json=json.dumps(data), but with the same result, since the request needs a POST body parameter of the form variable=data and not a JSON object.
What am I missing here?
Or is this something python requests cannot do?
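One visible difference between the capture and the script is that the capture targets the full https://redacted.net/login2 URL and carries the session cookie; a sketch that reproduces those parts (cookie values copied from the capture above, so they may have expired) would be:
import requests

url = "https://redacted.net/login2"
cookies = {
    "session": "qexMWyQnLtSlBI8B005qnVW4OYvEwEV2",  # from the captured request
    "verify": "wiener",
}
data = {"mfa-code": "0000"}  # form-encoded, exactly like mfa-code=0000 in the capture

r = requests.post(url, data=data, cookies=cookies)
print(r.status_code)
print(r.text)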
I am using CakePHP 2.4.5. I want to send an HTTP POST with URL parameters. I am using the Python 2.7 requests module to send the HTTP POST. Please assume the payload is structured correctly, as I have tested that part.
URL_post = "http://127.0.0.1/webroot/TestFunc?identity_number=S111A/post"
r = requests.post(URL_post, payload)
On the CakePHP side, the controller looks something like this:
public function TestFunc($id = null)
{
    $identity_number = $this->request->query['identity_number'];
    $this->request->data['Model']['associated_id'] = $identity_number;
    $this->Model->saveAll($this->request->data, array('deep' => true));
}
I have confirmed that the query string is not received correctly. However, if I do not use HTTP POST and instead just request a normal URL, the query is received correctly.
What have I done wrong?
The query part of the URL is sent correctly:
import requests
requests.post('http://localhost/webroot/TestFunc?identity_number=S111A/post',
              {'Model': 'data'})
The Request
POST /webroot/TestFunc?identity_number=S111A/post HTTP/1.1
Host: localhost
User-Agent: python-requests/2.2.1 CPython/3.4 Linux/3.2
Accept: */*
Accept-Encoding: gzip, deflate, compress
Content-Type: application/x-www-form-urlencoded
Content-Length: 10
Model=data
You could also make the request using params:
requests.post('http://localhost/webroot/TestFunc',
              data={'Model': 'data'},
              params={'identity_number': 'S111A/post'})
The only difference is that S111A/post is sent as S111A%2Fpost (the same URL in the end).
Look at http://docs.python-requests.org/en/latest/user/quickstart/#passing-parameters-in-urls.
payload = {"identity_number": "S111A/post"}
URL_post = "http://127.0.0.1/webroot/TestFunc"
req = requests.post(URL_post, params=payload)
print(req.status_code)
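To check what actually goes on the wire in either variant, you can build and inspect a prepared request (a small sketch):
import requests

req = requests.Request(
    "POST",
    "http://127.0.0.1/webroot/TestFunc",
    params={"identity_number": "S111A/post"},
    data={"Model": "data"},
)
prepared = req.prepare()
print(prepared.url)   # http://127.0.0.1/webroot/TestFunc?identity_number=S111A%2Fpost
print(prepared.body)  # Model=data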
I would like to modify the following code to use the 'requests' module.
I have the following code which is working on a website:
import pycurl
import cStringIO  # Python 2

def post(url, message, key, sign):
    curl = pycurl.Curl()
    curl.setopt(pycurl.URL, url)
    curl.setopt(pycurl.SSL_VERIFYPEER, 0)
    curl.setopt(pycurl.SSL_VERIFYHOST, 0)
    buf = cStringIO.StringIO()
    curl.setopt(pycurl.WRITEFUNCTION, buf.write)
    curl.setopt(pycurl.POSTFIELDS, message)
    curl.setopt(pycurl.HTTPHEADER, ['Key:' + key,
                                    'Sign:' + sign])
    curl.perform()
    response = buf.getvalue()
    buf.close()
    return response
I tried accessing the website with requests, but got rejected with invalid request values when using the following code:
def post(url, message, key, sign):
    import requests
    session = requests.session()
    session.headers = {'Key': key, 'Sign': sign}
    response = session.post(url, message)
    return response
What am I doing wrong that these methods don't behave the same?
Thank you.
Using Pycurl:
POST /post HTTP/1.1
User-Agent: PycURL/7.32.0
Host: 127.0.0.1:4000
Accept: */*
Key:key
Sign:sign
Content-Length: 3
Content-Type: application/x-www-form-urlencoded
foo
With requests:
POST /post HTTP/1.1
Host: 127.0.0.1:4000
Accept-Encoding: identity
Content-Length: 3
Key: key
Sign: sign
foo
There are several differences which could lead to your error:
Missing User-Agent and Accept headers. This is because you overwrite the session.headers attribute which contains those default headers. Try this instead:
session.headers.update({'Key': key, 'Sign': sign})
Missing Content-Type header. I think you passed a string as the message parameter.
Requests doesn't know that this is application/x-www-form-urlencoded and therefore doesn't set the relevant header.
Either:
Set the header yourself
Better: pass requests a dictionary of your parameters. They will be encoded and declared correctly
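Putting both points together, a sketch of the adjusted helper; it assumes message can be passed as a dict of form fields, and verify=False mirrors the SSL_VERIFYPEER/SSL_VERIFYHOST settings of the pycurl version:
import requests

def post(url, message, key, sign):
    # message should be a dict of form fields, e.g. {'foo': 'bar'};
    # requests then form-encodes it and sets Content-Type itself.
    session = requests.session()
    session.headers.update({'Key': key, 'Sign': sign})  # keeps User-Agent and Accept
    response = session.post(url, data=message, verify=False)
    return response.text  # the pycurl version returned the body as a string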
So I have made a request to a server with Python's requests library. The code looks like this (it uses an adapter, so it needs to match a certain pattern):
def getRequest(self, url, header):
    """
    implementation of a get request
    """
    conn = requests.get(url, headers=header)
    newBody = conn.content
    newHeader = conn.headers
    newHeader['status'] = conn.status_code
    response = {"Headers": newHeader, "Body": newBody.decode('utf-8')}
    self._huddleErrors.handleResponseError(response)
    return response
The header parameter I am passing in is this:
{'Authorization': 'OAuth2 handsOffMyToken', 'Accept': 'application/vnd.huddle.data+json'}
However, I am getting an XML response back from the server. After checking Fiddler, I see the request being sent is:
Accept-Encoding: identity
Accept: */*
Host: api.huddle.dev
Authorization: OAuth2 HandsOffMyToken
Accept: application/vnd.huddle.data+json
Accept-Encoding: gzip, deflate, compress
User-Agent: python-requests/1.2.3 CPython/3.3.2 Windows/2008ServerR2
As we can see, there are two Accept headers! The requests library is adding in this Accept: */* header, which is throwing off the server. Does anyone know how I can stop this?
As stated in the comments, this seems to be a problem with the requests library on Python 3.3. In requests there are default headers (which can be found in the utils module). When you don't specify your own headers, these default headers are used. However, if you do specify your own headers, requests tries to merge them with the defaults to make sure you have all the headers you need.
The problem shows itself in the request() method in sessions.py. Instead of merging all the headers, it puts in its own headers and then chucks in yours. For now I have just done the dirty hack of removing the Accept header from the default headers found in utils.
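A workaround that avoids patching requests' utils, and may sidestep the buggy merge on affected versions, is to overwrite the defaults on a Session so only one Accept header exists before the request is built; a sketch (the URL is illustrative):
import requests

session = requests.Session()
# Replace the session-level defaults in place; no per-request headers are
# passed, so there is nothing for the faulty merge in sessions.py to duplicate.
session.headers['Authorization'] = 'OAuth2 handsOffMyToken'
session.headers['Accept'] = 'application/vnd.huddle.data+json'

conn = session.get('https://api.huddle.dev/')  # illustrative URL
print(conn.status_code)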
I have a question about Python regex. I don't have much experience with Python regex. I am working with HTTP request messages and parsing them with regex. As you know, HTTP GET messages are in this format:
GET / HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: 10.2.0.12
Connection: Keep-Alive
I want to parse the URI, method, user-agent, and the host areas of the message. My regex for this job is:
re.compile(r'^({0})\s+(\S+)\s+[^\n]*$\n.*^User-Agent:\s*(\S+)[^\n]*$\n.*^Host:\s*(\S+)[^\n]*$\n'.format('|'.join(methods)), re.MULTILINE|re.DOTALL)
But when a message comes in like this:
GET / HTTP/1.0
Host: 10.2.0.12
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Connection: Keep-Alive
I cannot catch them, because the positions of Host and User-Agent have changed. So I need a generic regex that will catch all of them, even if the positions of Host, method, and URI change in the message.
Readability Counts (The Zen of Python)
Use findall() for each subexpression you want to find. This way your regex will be short, readable, and independent of the location of the subexpression.
Define a simple, readable regex:
>>> import re
>>> user = re.compile("User-Agent: (.*?)\n")
Test it with two different HTTP headers:
>>> s1='''GET / HTTP/1.0
Host: 10.2.0.12
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Connection: Keep-Alive'''
>>> s2='''GET / HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: 10.2.0.12
Connection: Keep-Alive'''
>>> user.findall(s1)
['Wget/1.12 (linux-gnu)']
>>> user.findall(s2)
['Wget/1.12 (linux-gnu)']
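The same approach extends to the other fields; for example, reusing s1 from above (the method list here is just a sketch and can be extended):
>>> method_uri = re.compile(r"^(GET|POST|HEAD|PUT|DELETE)\s+(\S+)\s+HTTP", re.MULTILINE)
>>> host = re.compile(r"^Host:\s*(\S+)", re.MULTILINE)
>>> method_uri.findall(s1)
[('GET', '/')]
>>> host.findall(s1)
['10.2.0.12']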
Parse the whole headers into a dictionary like so?
headers = """GET / HTTP/1.0
Host: 10.2.0.12
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Connection: Keep-Alive"""
headers = headers.splitlines()
firstLine = headers.pop(0)
(verb, url, version) = firstLine.split()
d = {'verb' : verb, 'url' : url, 'version' : version}
for h in headers:
    h = h.split(': ')
    if len(h) < 2:
        continue
    field = h[0]
    value = h[1]
    d[field] = value
print d
print d['User-Agent']
print d['url']
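Because each header ends up keyed by its name, the resulting dictionary does not depend on the order of the header lines, so the same code handles both message layouts from the question.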