How to parse the "request body" using Python CGI?

I just need to write a simple python CGI script to parse the contents of a POST request containing JSON. This is only test code so that I can test a client application until the actual server is ready (written by someone else).
I can read cgi.FieldStorage() and dump its keys(), but the request body containing the JSON is nowhere to be found.
I can also dump os.environ, which provides lots of info, except that I do not see a variable containing the request body.
Any input appreciated.
Chris

If you're using CGI, just read data from stdin:
import sys
data = sys.stdin.read()

Note that if you call cgi.FieldStorage() earlier in your code, you can no longer get the body data from stdin, because it can only be read once.
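Putting it together, here is a minimal sketch of a test handler (assuming Python 2.6+ so the json module is available). The CONTENT_LENGTH environment variable tells you how many bytes to read:

#!/usr/bin/env python
import json
import os
import sys

# Read exactly CONTENT_LENGTH bytes of the POST body; reading to EOF
# instead can block with some servers.
length = int(os.environ.get("CONTENT_LENGTH", 0))
body = sys.stdin.read(length)
data = json.loads(body)

print "Content-Type: text/plain"
print
print "Received keys: %s" % ", ".join(data.keys())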

Related

How to read all HTTP headers in Python CGI script?

I have a Python CGI script that receives a POST request containing a specific HTTP header. How do you read and parse the headers received? I am not using BaseHTTPRequestHandler or HTTPServer. I receive the body of the POST with sys.stdin.read(). Thanks.
It is possible to get a custom request header's value in an Apache CGI script with Python.
Apache's mod_cgi sets an environment variable for each HTTP request header received. The variables set in this manner all have an HTTP_ prefix, so for example x-client-version: 1.2.3 will be available as the variable HTTP_X_CLIENT_VERSION.
So, to read the above custom header, just call os.environ["HTTP_X_CLIENT_VERSION"].
The below script will print all HTTP_* headers and values:
#!/usr/bin/env python
import os

print "Content-Type: text/html"
print "Cache-Control: no-cache"
print
print "<html><body>"
# Request headers appear in the environment with an HTTP_ prefix.
for headername, headervalue in os.environ.iteritems():
    if headername.startswith("HTTP_"):
        print "<p>{0} = {1}</p>".format(headername, headervalue)
print "</body></html>"
You might want to look at the cgi module included with Python's standard library. Its cgi.parse_header(string) function can help with parsing header values.
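For instance, parse_header splits a header value into its main value and a dict of parameters:

import cgi

# Split a Content-Type header value into the main value and parameters.
value, params = cgi.parse_header("text/html; charset=utf-8")
# value  -> "text/html"
# params -> {"charset": "utf-8"}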

How do I allow a JSON response in a mechanize test?

I have a web service that returns JSON responses when successful. Unfortunately, when I try to test this service via multi-mechanize, I get an error - "not viewing HTML". Obviously it's not viewing HTML; it's getting content clearly marked as JSON. How do I get mechanize to ignore this error and accept the JSON it's getting back?
It turns out mechanize isn't set up to accept JSON responses out of the box. For a quick and dirty solution to this, update mechanize's _headersutil.py file (check /usr/local/lib/python2.7/dist-packages/mechanize).
In the is_html() method, change the line:
html_types = ["text/html"]
to read:
html_types = ["text/html", "application/json"]
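After that change, decoding the body is straightforward. A rough sketch (the URL is a placeholder):

import json
import mechanize

br = mechanize.Browser()
response = br.open("http://example.com/service.json")  # placeholder URL
data = json.loads(response.read())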

Python: Postfix stdin

I want to make Postfix send all emails to a Python script that will scan them.
However, how do I pipe the output from Postfix to Python?
What is stdin for Python?
Can you give a code example?
Rather than calling sys.stdin.readlines() then looping and passing the lines to email.FeedParser.FeedParser().feed() as suggested by Michael, you should instead pass the file object directly to the email parser.
The standard library provides a convenience function, email.message_from_file(fp), for this purpose. Thus your code becomes much simpler:
import email
import sys

msg = email.message_from_file(sys.stdin)
To push mail from Postfix to a Python script, add a line like this to your Postfix alias file:
# send to emailname@example.com
emailname: "|/path/to/script.py"
The Python email.FeedParser module can construct an object representing a MIME email message from stdin, by doing something like this:
import email.FeedParser
import sys

# Read from STDIN into an array of lines.
email_input = sys.stdin.readlines()

# FeedParser.feed() expects to receive lines one at a time;
# close() returns the complete email Message object.
parser = email.FeedParser.FeedParser()
for msg_line in email_input:
    parser.feed(msg_line)
msg = parser.close()
From here, you need to iterate over the MIME parts of msg and act on them accordingly; refer to the documentation on email.Message objects for the methods you'll need. For example, email.Message.get("Header") returns the value of the Header header.
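As a rough sketch of that iteration (the per-type handling is just a placeholder):

# walk() yields the message itself and every MIME subpart.
for part in msg.walk():
    if part.is_multipart():
        continue  # multipart containers carry no payload of their own
    content_type = part.get_content_type()
    payload = part.get_payload(decode=True)  # undo base64/quoted-printable
    if content_type == "text/plain":
        print "Body text:", payload
    else:
        print "Part of type %s, %d bytes" % (content_type, len(payload))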

Facing issue while writing data into a file in python script

I have to write 7231 bytes into a file using a Python script. In a client-server program, my Python script acts as the client, and it received 7231 bytes from the server. If I check in a TCP dump, it shows the complete data. But when I try to write it into a file, I am missing content.
My script:
def SendOnce(self, req='/gpsData=1', method="GET"):
    conn = httplib.HTTPConnection(self.proxy)
    self.Logresponse("\nConnection Open\n<br />")
    conn.request(method, req)
    Log = "\nRequest Send: %s\n<br \>\n" % req
    self.Logresponse(Log)
    response = conn.getresponse()
    Log = "\nResponse Code: %s\n<br \>\n" % response.status
    self.Logresponse(Log)
    Log = "\nSarav -- Get Header: %s \n version= %s <br \>\n" % (response.msg, response.version)
    self.Logresponse(Log)
    if (response.status == 200):
        Log = response.read()
        self.Logresponse(Log)
    conn.close()
    self.Logresponse("\nConnection Close\n<br \>")
    return response
this "self.Logresponse(Log)" is writing into file. If i receive 1023 bytes, its writing full content into that. Please help me out how to write complete data.
Note: I am writing Hexa Format data.
First of all, 7231 bytes is not exactly huge...
With the limited info you gave, I would guess that you might have forgotten to take the OS's write buffer into account. You probably try to read the file before all the content was written to it.
Python generally uses the system's standard buffer (you can change that). You can decrease that buffer, or force a flush yourself.
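For example, if Logresponse() keeps its own file handle open, something along these lines would force the buffered data out (the file path is a placeholder):

import os

f = open("/tmp/response.log", "a")  # placeholder path
f.write(Log)
f.flush()              # empty Python's own buffer
os.fsync(f.fileno())   # ask the OS to flush its buffers to disk
f.close()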
I'm just guessing, but it might be that the .read() function doesn't return all the data in one chunk; can you try modifying the inner part like this:
if (response.status == 200):
    while 1:
        Log = response.read()
        if not Log:
            break
        self.Logresponse(Log)

Python: Downloading a large file to a local path and setting custom http headers

I am looking to download a file from an HTTP URL to a local file. The file is large enough that I want to download it and save it in chunks rather than read() and write() the whole file as a single giant string.
The interface of urllib.urlretrieve is essentially what I want. However, I cannot see a way to set request headers when downloading via urllib.urlretrieve, which is something I need to do.
If I use urllib2, I can set request headers via its Request object. However, I don't see an API in urllib2 to download a file directly to a path on disk like urlretrieve. It seems that instead I will have to use a loop to iterate over the returned data in chunks, writing them to a file myself and checking when we are done.
What would be the best way to build a function that works like urllib.urlretrieve but allows request headers to be passed in?
What is the harm in writing your own function using urllib2?
import urllib2

def urlretrieve(urlfile, fpath):
    # Read the response in fixed-size chunks, writing each one to disk.
    chunk = 4096
    f = open(fpath, "w")
    while 1:
        data = urlfile.read(chunk)
        if not data:
            print "done."
            break
        f.write(data)
        print "Read %s bytes" % len(data)
    f.close()
and use a Request object to set the headers:
request = urllib2.Request("http://www.google.com")
request.add_header('User-agent', 'Chrome XXX')
urlretrieve(urllib2.urlopen(request), "/tmp/del.html")
If you want to use urllib and urlretrieve, subclass urllib.URLopener and use its addheader() method to adjust the headers (e.g. addheader('Accept', 'sound/basic'), which comes from the docstring for urllib.addheader).
To install your URLopener for use by urllib, see the example in the urllib._urlopener section of the docs (note the underscore):
import urllib

class MyURLopener(urllib.URLopener):
    pass  # your override here, perhaps to __init__

urllib._urlopener = MyURLopener()  # install an instance for urllib to use
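A sketch of that approach (URL, path, and header value are placeholders):

import urllib

# addheader() appends a (name, value) pair that the opener sends
# with every request; urlretrieve then uses the installed opener.
opener = urllib.URLopener()
opener.addheader("Accept", "sound/basic")
urllib._urlopener = opener
urllib.urlretrieve("http://www.example.com/file.au", "/tmp/file.au")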
However, you'll be pleased to hear, regarding your comment on the question: reading an empty string from read() is indeed the signal to stop. This is how urlretrieve knows when to stop, for example. TCP/IP and sockets abstract the reading process, blocking while waiting for additional data until the connection on the other end reaches EOF and is closed, in which case reading from the connection returns an empty string. An empty string means no more data is trickling in; you don't have to worry about ordered packet re-assembly, as that has all been handled for you. If that's your concern about urllib2, I think you can safely use it.
