Python Bottle multiple responses for one route

I'm looking for a way to send multiple responses for one route.
The problem is that, from what I've read, I have to return the content data.
For example:
@route('/events')
def positions():
    for i in xrange(5):
        response.content_type = 'text/event-stream'
        response.set_header('Cache-Control', 'no-cache')
        now = datetime.datetime.now().time().replace(microsecond=0)
        return "data: %s\n\n" % now
Is there a way to replace the last line with some function call, so I can send all the responses and then exit the route?
Thanks,
Omer.

I'm not 100% sure I understand what you're asking, so I might not be answering correctly, but would this do what you want?
@route('/events')
def positions():
    output = ''
    for i in xrange(5):
        now = datetime.datetime.now().time().replace(microsecond=0)
        output += "%s\n\n" % now
    response.content_type = 'text/event-stream'
    response.set_header('Cache-Control', 'no-cache')
    return "data: " + output

Related

How to pass a parameter as a string which includes a special character in a Flask app route

I want to pass "top3/1" as a parameter in the URL, but it returns a 404 error.
Here is my code:
@APP.route('/get_agent_details_on_voucher_code/<string:code>', methods=['GET'])
def get_agent_details_on_voucher_code(code):
    result = []
    if code == 'TOP3':
        voucher_earning_list = [{'voucher_code': 'TOP3/1', 'agent_id': 12345, 'voucher_id': 8}]
        if len(voucher_earning_list) > 0:
            for item in voucher_earning_list:
                get_details = requests.get('http://192.168.1.55:5000/get_agent_details/' + str(item['agent_id']))
                get_result = get_details.json()
                if len(get_result) > 0:
                    get_ag_mobileno = get_result[0]['mobile']
                    result.append({'mobile_no': get_ag_mobileno, 'agent_id': item['agent_id'], 'voucher_id': item['voucher_id']})
                    response = jsonify(result)
                else:
                    response = jsonify(result)
        else:
            response = jsonify(result)
    else:
        response = jsonify(result)
    return response
That's a tricky question: generally with Flask you have to specify routes down to the final endpoint.
If you want to match all the subroutes, you can use the converter type path instead of string, so you just need to change the route to:
@APP.route('/get_agent_details_on_voucher_code/<path:code>', methods=['GET'])
This way you will match all the subroutes of /get_agent_details_on_voucher_code/. Keep in mind that code will still be a string; you have to parse it to extract the information you need.
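A minimal sketch of the converter plus the parsing step (the split logic here is only illustrative; the original lookup code stays the same):

from flask import Flask, jsonify

APP = Flask(__name__)

@APP.route('/get_agent_details_on_voucher_code/<path:code>', methods=['GET'])
def get_agent_details_on_voucher_code(code):
    # For /get_agent_details_on_voucher_code/top3/1, code == 'top3/1'.
    parts = code.split('/', 1)
    voucher = parts[0].upper()                     # 'TOP3'
    suffix = parts[1] if len(parts) > 1 else None  # '1'
    return jsonify({'voucher': voucher, 'suffix': suffix})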

Script to serve from a URL, for requests matching a regular expression

I am a complete n00b in Python and am trying to figure out a stub for mitmproxy.
I have tried the documentation, but it assumes we know Python, so I am at a standstill.
I've been working with a script:
original_url = 'http://production.domain.com/1/2/3'
new_content_path = '/home/andrepadez/proj/main.js'
body = open(new_content_path, 'r').read()

def response(context, flow):
    url = flow.request.get_url()
    if url == original_url:
        flow.response.content = body
As you can predict, the proxy takes every request to 'http://production.domain.com/1/2/3' and serves the content of my file.
I need this to be more dynamic:
for every request to 'http://production.domain.com/*', I need to serve the corresponding URL, for example:
http://production.domain.com/1/4/3 -> http://develop.domain.com/1/4/3
I know I have to use a regular expression so I can capture and map it correctly, but I don't know how to serve the contents of the develop URL as "flow.response.content".
Any help will be welcome.
You would have to do something like this:
import re

# In order not to re-read the original file every time, we maintain
# a cache of already-read bodies.
bodies = {}

def response(context, flow):
    # Intercept all URLs
    url = flow.request.get_url()
    # Check if this URL is one of "ours" (check out Python regexps)
    m = re.search(r'REGEXP_FOR_ORIGINAL_URL/(\d+)/(\d+)/(\d+)', url)
    if m is not None:
        # It is, and m will contain this information.
        # The three numbers are in m.group(1), (2), (3), as strings.
        key = "%s.%s.%s" % (m.group(1), m.group(2), m.group(3))
        try:
            body = bodies[key]
        except KeyError:
            # We do not yet have this body; read it from wherever it
            # lives, e.g. a file named after the key
            body = open("%s.txt" % key, 'r').read()
            bodies[key] = body
        flow.response.content = body
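For the dynamic mapping the question actually asks for, a rough sketch (same old mitmproxy hook signature as above; plain urllib2 fetches the replacement body, and the caching from the answer could be layered on top):

import re
import urllib2

def response(context, flow):
    url = flow.request.get_url()
    m = re.match(r'http://production\.domain\.com/(.*)', url)
    if m is not None:
        # Fetch the same path from the develop host and serve its
        # body in place of the production response.
        develop_url = 'http://develop.domain.com/' + m.group(1)
        flow.response.content = urllib2.urlopen(develop_url).read()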

Python TCPServer rfile.read blocks

I am writing a simple SocketServer.TCPServer request handler (StreamRequestHandler) that will capture the request, along with the headers and the message body. This is for faking out an HTTP server that we can use for testing.
I have no trouble grabbing the request line or the headers.
If I try to grab more from the rfile than exists, the code blocks. How can I grab all of the request body without knowing its size? In other words, I don't have a Content-Length header.
Here's a snippet of what I have now:
def _read_request_line(self):
    server.request_line = self.rfile.readline().rstrip('\r\n')

def _read_headers(self):
    headers = []
    for line in self.rfile:
        line = line.rstrip('\r\n')
        if not line:
            break
        parts = line.split(':', 1)
        header = (parts[0].strip(), parts[1].strip())
        headers.append(header)
    server.request_headers = headers

def _read_content(self):
    server.request_content = self.rfile.read()  # blocks
Keith's comment is correct. Here's what it looks like:
length = int(self.headers.getheader('content-length'))
data = self.rfile.read(length)
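Wired into the handler from the question (using its module-level server object and the header tuples built by _read_headers), a sketch might look like:

def _read_content(self):
    # Read exactly Content-Length bytes; a bare rfile.read() blocks
    # because the client keeps the connection open.
    headers = dict((k.lower(), v) for k, v in server.request_headers)
    length = int(headers.get('content-length', 0))
    server.request_content = self.rfile.read(length) if length else ''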

How to abstract from pycurl the correct way

I would like to write a small module that issues a GET request that keeps on going (much like the one used to consume the Twitter Streaming API), without requiring callers to supply all the parameters every time they want to make that same GET request.
In my module.py I have
class viewResults():
    def __init__(self, username, password, keyname, consume):
        self.buffer = ""
        self.consume = consume
        self.conn = pycurl.Curl()
        self.conn.setopt(pycurl.USERPWD, "%s:%s" % (username, password))
        self.conn.setopt(pycurl.URL, "http://crowdprocess.no.de/" + keyname + "/results")
        self.conn.setopt(pycurl.WRITEFUNCTION, self.on_receive)
        # self.conn.setopt(pycurl.VERBOSE, 1)
        # self.conn.setopt(pycurl.DEBUGFUNCTION, self.debug)
        self.conn.perform()

    # def debug(self, debug_type, debug_message):
    #     print 'type: ' + str(debug_type) + ' message' + str(debug_message)

    def on_receive(self, data):
        self.buffer += data
        if data.endswith("\r\n") and self.buffer.strip():
            content = json.loads(self.buffer)
            self.consume(content)
            self.buffer = ""
And on index.py I have
from module import viewResults

def consume(content):
    print content

viewResults('username', 'password', 'keyname', consume)
So I wanted to pass only the parameters username, password, keyname and the "consume" function that should be called when the buffer is full of valid JSON data...
What's happening is that the request is actually made; if VERBOSE is on I can see all the data arriving, but that "higher level" consume function gets nothing...
How can I achieve this ?
Thanks.
As I understand it, you want to capture debug data?
Create your own debug function to store the data: custom_debug(debug_type, debug_msg)
>>> import human_curl as hurl
>>> import json
>>> r = hurl.get("http://crowdprocess.no.de/" + keyname + "/results",
...              debug=custom_debug, auth=('username', 'password'))
>>> consume(json.loads(r.content))
I could not see anywhere in your posted code that on_receive(self, data) prints anything.
Add sys.stderr.write("%s\n" % data) to it:
def on_receive(self, data):
    # -- print data to stderr --
    import sys
    sys.stderr.write("%s\n" % data)
    # -- end --
    self.buffer += data
    if data.endswith("\r\n") and self.buffer.strip():
        content = json.loads(self.buffer)
        self.consume(content)
        self.buffer = ""
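If the pycurl plumbing itself is the obstacle, the same abstraction is arguably simpler with the requests library (a swapped-in alternative, not part of the original code), assuming the stream is newline-delimited JSON:

import json
import requests

def view_results(username, password, keyname, consume):
    # Hypothetical wrapper with the same parameters as viewResults.
    url = 'http://crowdprocess.no.de/' + keyname + '/results'
    r = requests.get(url, auth=(username, password), stream=True)
    for line in r.iter_lines():
        if line:  # skip keep-alive blank lines
            consume(json.loads(line))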

How to deal with deflated response by urllib2? [duplicate]

This question already has answers here:
Python: Inflate and Deflate implementations
I currently use the following code to decompress a gzipped response from urllib2:
opener = urllib2.build_opener()
response = opener.open(req)
data = response.read()

if response.headers.get('content-encoding', '') == 'gzip':
    data = StringIO.StringIO(data)
    gzipper = gzip.GzipFile(fileobj=data)
    html = gzipper.read()
Does it handle a deflated response too, or do I need to write separate code to handle a deflated response?
You can try:
if response.headers.get('content-encoding', '') == 'deflate':
    html = zlib.decompress(response.read())
If that fails, here is another way, which I found in the requests source code:
if response.headers.get('content-encoding', '') == 'deflate':
    html = zlib.decompressobj(-zlib.MAX_WBITS).decompress(response.read())
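The two attempts combine naturally into one helper, a sketch that tries zlib-wrapped (RFC 1950) deflate first and falls back to raw (RFC 1951) deflate, which some servers send despite the spec:

import zlib

def decompress_deflate(data):
    try:
        # Most servers wrap deflate data in a zlib container.
        return zlib.decompress(data)
    except zlib.error:
        # Fall back to raw deflate with no zlib header or checksum.
        return zlib.decompressobj(-zlib.MAX_WBITS).decompress(data)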
There is a better way outlined at:
http://rationalpie.wordpress.com/2010/06/02/python-streaming-gzip-decompression/
The author explains how to decompress chunk by chunk, rather than all at once in memory. This is the preferred method when larger files are involved.
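The idea from that post, sketched with zlib's incremental API (the chunk size here is arbitrary):

import zlib

def stream_gunzip(response, chunk_size=8192):
    # 16 + MAX_WBITS tells zlib to expect a gzip header and trailer
    # instead of a zlib one; chunks are decompressed as they arrive.
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)
    while True:
        chunk = response.read(chunk_size)
        if not chunk:
            break
        yield d.decompress(chunk)
    yield d.flush()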
Also found this helpful site for testing:
http://carsten.codimi.de/gzip.yaws/
To answer the comment above, the HTTP spec (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3) says:
If no Accept-Encoding field is present in a request, the server MAY assume that the client will accept any content coding. In this case, if "identity" is one of the available content-codings, then the server SHOULD use the "identity" content-coding, unless it has additional information that a different content-coding is meaningful to the client.
I take that to mean it should use identity. I've never seen a server that doesn't.
You can see how urllib3 handles it:
class DeflateDecoder(object):
    def __init__(self):
        self._first_try = True
        self._data = binary_type()  # six.binary_type: str on Python 2
        self._obj = zlib.decompressobj()

    def __getattr__(self, name):
        return getattr(self._obj, name)

    def decompress(self, data):
        if not data:
            return data
        if not self._first_try:
            return self._obj.decompress(data)
        # First chunk: assume zlib-wrapped deflate, fall back to raw
        # deflate if that raises.
        self._data += data
        try:
            return self._obj.decompress(data)
        except zlib.error:
            self._first_try = False
            self._obj = zlib.decompressobj(-zlib.MAX_WBITS)
            try:
                return self.decompress(self._data)
            finally:
                self._data = None

class GzipDecoder(object):
    def __init__(self):
        self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)

    def __getattr__(self, name):
        return getattr(self._obj, name)

    def decompress(self, data):
        if not data:
            return data
        return self._obj.decompress(data)
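Used against a urllib2 response, the DeflateDecoder above handles both deflate variants transparently (flush() is proxied to the underlying zlib object via __getattr__):

decoder = DeflateDecoder()
html = decoder.decompress(response.read()) + decoder.flush()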
