Maybe it's a stupid question, but I cannot figure out how to set an HTTP status code in web.py.
In the documentation I can see a list of types for the main status codes, but is there a generic function to set the status code?
I'm trying to implement an unAPI server, which is required to reply with a 300 Multiple Choices to a request that carries only an identifier. More info here
Thanks!
EDIT: I just discovered that I can set it through web.ctx by doing
web.ctx.status = '300 Multiple Choices'
Is this the best solution?
The way web.py does this for 301 and other redirect types is by subclassing web.HTTPError (which in turn sets web.ctx.status). For example:
class MultipleChoices(web.HTTPError):
    def __init__(self, choices):
        status = '300 Multiple Choices'
        headers = {'Content-Type': 'text/html'}
        data = '<h1>Multiple Choices</h1>\n<ul>\n'
        data += ''.join('<li>{0}</li>\n'.format(c)
                        for c in choices)
        data += '</ul>'
        web.HTTPError.__init__(self, status, headers, data)
Then to output this status code you raise MultipleChoices in your handler:
class MyHandler:
    def GET(self):
        raise MultipleChoices(['http://example.com/', 'http://www.google.com/'])
It'll need tuning for your particular unAPI application, of course.
See also the source for web.HTTPError in webapi.py.
Related
I have a report service in a Tornado application.
I would like to re-use the function that creates reports from a report JSON.
Meaning, in the new handler that "regenerates" an existing report, I would like to reuse the existing handler that knows how to create reports from a JSON.
server.py:
def create_server():
    return tornado.web.Application([
        (r"/task", generator.GenHandler),
        (r"/task/(.+)", generator.GenHandler),
        url(r"/regenerate_task", generator.GenHandler, name="regenerate_task"),
        url(r"/regenerate_task/(.+)", generator.GenHandler, name="regenerate_task"),
        (r"/report_status/regenerate", report_status.Regenerate),
    ])
GenHandler class:
class GenHandler(tornado.web.RequestHandler):
    async def post(self):
        try:
            LOGGER.info(str(self.request.body))
            gen_args = self.parsed_body
            # create here report using the parsed body
and this is the handler I am trying to create.
It will take a saved JSON from the DB and create a completely new report with the original report logic.
class Regenerate(tornado.web.RequestHandler):
    async def post(self):
        rep_id = self.request.arguments.get('rep_id')[0].decode("utf-8") \
            if self.request.arguments.get('rep_id') else 0
        try:
            report = db_handler.get_report_by_id(rep_id)
            if *REPORT IS VALID*:
                return self.reverse_url("regenerate_task", report)
            else:
                report = dict(success=True, report_id=rep_id, report=[])
        except Exception as ex:
            report = dict(success=False, report_id=rep_id, report=[], error=str(ex))
        finally:
            self.write(report)
Right now, nothing happens. I just get the JSON I asked for back, but GenHandler is never entered and no report is regenerated.
reverse_url returns the URL for the specified alias, but doesn't invoke it.
You only face this problem of having to invoke another handler because of poor code organisation. Storing report-generation code (i.e. business logic) in a handler is bad practice; move it to a separate class (usually called a Controller in the MVC pattern, where the handler is the View), or at least a separate method, and then reuse it in your Regenerate handler.
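For instance, here is a minimal sketch of that refactoring; the controller name, method names, and db_handler interface are all hypothetical stand-ins, not part of the original code:

```python
# A hypothetical controller holding the report-generation logic,
# so both GenHandler and Regenerate can reuse it without one handler
# having to invoke the other.
class ReportController(object):
    def __init__(self, db_handler):
        self.db_handler = db_handler

    def create_report(self, gen_args):
        # ... the logic currently living in GenHandler.post() ...
        return {'success': True, 'report': gen_args}

    def regenerate(self, rep_id):
        # fetch the saved JSON and run it through the same logic
        saved_args = self.db_handler.get_report_by_id(rep_id)
        return self.create_report(saved_args)

# The handlers then stay thin (sketch, shown as comments since it
# needs a running tornado app):
#   class Regenerate(tornado.web.RequestHandler):
#       async def post(self):
#           self.write(controller.regenerate(rep_id))
```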
I am using scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware to cache scrapy requests. I'd like it to only cache if status is 200. Is that the default behavior? Or do I need to specify HTTPCACHE_IGNORE_HTTP_CODES to be everything except 200?
The answer is no, you do not need to do that.
You should write a CachePolicy and update settings.py to enable your policy
I put the CachePolicy class in the middlewares.py
from scrapy.extensions.httpcache import DummyPolicy

class CachePolicy(DummyPolicy):
    def should_cache_response(self, response, request):
        return response.status == 200
and then update settings.py, appending the following line:
HTTPCACHE_POLICY = 'yourproject.middlewares.CachePolicy'
Yes, by default HttpCacheMiddleware runs a DummyPolicy for the requests. It pretty much does nothing special on its own, so you would need to set HTTPCACHE_IGNORE_HTTP_CODES to everything except 200.
Here's the source for the DummyPolicy
And these are the lines that actually matter:
class DummyPolicy(object):
    def __init__(self, settings):
        self.ignore_http_codes = [int(x) for x in settings.getlist('HTTPCACHE_IGNORE_HTTP_CODES')]

    def should_cache_response(self, response, request):
        return response.status not in self.ignore_http_codes
So in reality you can also extend this and override should_cache_response() with something that checks for 200 explicitly, i.e. return response.status == 200, and then set it as your cache policy via the HTTPCACHE_POLICY setting.
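To see why the two approaches end up equivalent, here is a stdlib-only sketch of both decision rules, using a stand-in Response object rather than scrapy's real classes:

```python
from collections import namedtuple

# Stand-in for scrapy's Response: only the status attribute matters here.
Response = namedtuple('Response', ['status'])

def ignore_list_policy(response, ignore_http_codes):
    # DummyPolicy logic: cache unless the status is on the ignore list
    return response.status not in ignore_http_codes

def only_200_policy(response):
    # Custom policy logic: cache only successful responses
    return response.status == 200

# With an exhaustive ignore list (every code except 200), both rules agree:
ignore = [code for code in range(100, 600) if code != 200]
for status in (200, 301, 404, 500):
    r = Response(status)
    assert ignore_list_policy(r, ignore) == only_200_policy(r)
```

The custom policy is just the shorter way to express the same behaviour.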
I'm not sure if I have used the correct terminology in the question.
Currently, I am trying to make a wrapper/interface around Google's Blogger API (Blog service).
[I know it has been done already, but I am using this as a project to learn OOP/python.]
I have made a method that gets a set of 25 posts from a blog:
def get_posts(self, **kwargs):
    """ Makes an API request. Returns list of posts. """
    api_url = '/blogs/{id}/posts'.format(id=self.id)
    return self._send_request(api_url, kwargs)

def _send_request(self, url, parameters={}):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response as a dict. """
    url = '{base}{url}?'.format(base=self.base, url=url)
    # Requests formats the parameters into the URL for me
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
The problem is, I have to specify the API key every time I call the get_posts function:
someblog = BloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts(key=self.key)
Every API call requires that the key be sent as a parameter on the URL.
Then, what is the best way to do that?
I'm thinking a possible way (but probably not the best way?), is to add the key to the kwargs dictionary in the _send_request():
def _send_request(self, url, parameters={}):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response. """
    # Format the full API URL:
    url = '{base}{url}?'.format(base=self.base, url=url)
    # The api key will always be added:
    parameters['key'] = self.key
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
I can't really get my head around what is the best way (or most pythonic way) to do it.
You could store it in a named constant.
If this code doesn't need to be secure, simply
API_KEY = '1ih3f2ihf2f'
If it's going to be hosted on a server somewhere or needs to be more secure, you could store the value in an environment variable.
In your terminal:
export API_KEY='1ih3f2ihf2f'
then in your python script:
import os
API_KEY = os.environ.get('API_KEY')
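Combining the two, a small sketch that prefers the environment variable but still runs locally without it (the fallback value here is a made-up placeholder, not a real key):

```python
import os

# Prefer the environment variable; fall back to a dev-only placeholder.
# Never put a real production key in the fallback.
API_KEY = os.environ.get('API_KEY', 'dev-placeholder-key')
```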
The problem is, I have to specify the API key every time I call the get_posts function:
If it really is just this one method, the obvious idea is to write a wrapper:
def get_posts(blog, key, *args, **kwargs):
    return blog.get_posts(*args, key=key, **kwargs)
Or, better, wrap up the class to do it for you:
class KeyRememberingBloggerClient(BloggerClient):
    def __init__(self, *args, **kwargs):
        self.key = kwargs.pop('key')
        super(KeyRememberingBloggerClient, self).__init__(*args, **kwargs)

    def get_posts(self, *args, **kwargs):
        return super(KeyRememberingBloggerClient, self).get_posts(
            *args, key=self.key, **kwargs)
So now:
someblog = KeyRememberingBloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts()
Yes, you can override or monkeypatch the _send_request method that all of the other methods use. But if there are only 1 or 2 methods that need to be fixed, why delve into the undocumented internals of the class and fork the body of one of its methods just so you can change it in a way you clearly weren't expected to, instead of doing it cleanly?
Of course, if there are 90 different methods scattered across 4 different classes, you might want to consider building these wrappers programmatically (and/or monkeypatching the classes)… or just patching the one private method, as you're doing. That seems reasonable.
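If many methods really did need the key, the programmatic approach could look roughly like this; the DemoClient class and the method list are hypothetical stand-ins for the real BloggerClient:

```python
import functools

def remember_key(cls, method_names):
    """Wrap the named methods so they always pass key=self.key by default."""
    for name in method_names:
        original = getattr(cls, name)

        def make_wrapper(fn):
            @functools.wraps(fn)
            def wrapper(self, *args, **kwargs):
                # only inject the key if the caller didn't supply one
                kwargs.setdefault('key', self.key)
                return fn(self, *args, **kwargs)
            return wrapper

        setattr(cls, name, make_wrapper(original))
    return cls

# Hypothetical stand-in client to demonstrate the idea:
class DemoClient(object):
    def __init__(self, key):
        self.key = key

    def get_posts(self, **kwargs):
        # just echoes the key so the injection is observable
        return kwargs.get('key')

remember_key(DemoClient, ['get_posts'])
```

With this in place, DemoClient('0123').get_posts() passes the stored key automatically, while an explicit key argument still wins.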
I'm designing a REST API, and I have an endpoint with a relatively flexible input.
Basically, it would be ideal to have a 48x48 array, but so long as it is an array, we can resize it to the correct size in a relatively intelligent way.
The resize operation isn't very costly, but I feel the user should know that the input given is non-ideal; I want this warning to be noninvasive.
I think this should still have an HTTP code of 200, but I could be persuaded otherwise.
Is there any accepted way of including metadata with a REST response?
I haven't found anything like this, but I feel like it can't be that strange a request.
For reference, using flask, and example code is below:
class Function(MethodView):
    def post(self):
        post_array = np.array(json.loads(request.form['data']))
        if post_array.shape != (48, 48):
            post_array = post_array.resize((48, 48))  # Add some warning
        return process(post_array)
As Jonathon Reinhart points out, including the warning as part of the response would be ideal. It should return 200 status for sure.
from flask import Flask, request, jsonify  # jsonify helps ensure a JSON response

class Function(MethodView):
    def post(self):
        # create a response body dict with an empty warnings list
        response_body = {
            'warnings': [],
            'process_result': None
        }
        # guard in case the POST body is missing the expected field
        if 'data' in request.form:
            post_array = np.array(json.loads(request.form['data']))
            if post_array.shape != (48, 48):
                # np.resize returns the resized array
                # (ndarray.resize resizes in place and returns None)
                post_array = np.resize(post_array, (48, 48))
                # add a warning to the response body
                warning = {'data_shape_warning': 'the supplied data was not 48 x 48'}
                response_body['warnings'].append(warning)
            # compute the process and add the result to the response body
            result = process(post_array)
            response_body['process_result'] = result
            return jsonify(response_body), 200
        # action to take if the POST body had no data field
        else:
            warning = {'data_format_warning': 'POST request contained no data field'}
            response_body['warnings'].append(warning)
            return jsonify(response_body), 200
I am struggling to get a REST API POST to work with a vendor API and hope someone can give me a pointer.
The intent is to feed a CLI command into the POST body and pass it to a device, which returns the output.
The call looks like this (this works for all other calls, but this one is different because it posts to the body):
def __init__(self, host, username, password, sid, method, http_meth):
    self.host = host
    self.username = username
    self.password = password
    self.sid = sid
    self.method = method
    self.http_meth = http_meth

def __str__(self):
    self.url = 'http://' + self.host + '/rest/'
    self.authparams = urllib.urlencode({
        "session_id": self.sid,
        "method": self.method,
        "username": self.username,
        "password": self.password,
    })
    call = urllib2.urlopen(self.url, self.authparams).read()
    return call
No matter how I have tried this, I can't get it to work properly. Here is an excerpt from the API docs that explains how to use this method:
To process these APIs, place your CLI commands in the HTTP post buffer, and then place the
method name, session ID, and other parameters in the URL.
Can anyone give me an idea of how to do this properly? I am not a developer and am trying to learn this correctly. For example, how would I send the command "help" in the POST body?
Thanks for any guidance
OK, this was ridiculously simple and I was over-thinking it. I find that I can sometimes approach a problem from a much higher level than it really needs and waste time. Anyway, this is how it should work:
def cli(self, method):
    self.url = ("http://" + str(self.host) + "/rest//?method=" + str(method) +
                "&username=" + str(self.username) + "&password=" + str(self.password) +
                "&enable_password=test&session_id=" + str(self.session_id))
    data = "show ver"
    try:
        req = urllib2.Request(self.url)
        response = urllib2.urlopen(req, data)
        result = response.read()
        print result
    except urllib2.URLError, e:
        print e.reason
The CLI commands are just placed in the buffer and not encoded....
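In Python 3 the same pattern can be sketched with urllib.parse, building the query string with urlencode instead of hand-concatenating the credentials. This is an assumption-laden sketch: the parameter names (enable_password, session_id, etc.) simply mirror the snippet above, and the vendor's actual API may differ:

```python
from urllib.parse import urlencode

def build_cli_request(host, method, username, password, session_id, command):
    """Return (url, body) for the CLI-over-POST pattern described above:
    method and credentials go in the query string, the raw CLI command
    goes in the POST body, unencoded."""
    params = urlencode({
        'method': method,
        'username': username,
        'password': password,
        'enable_password': 'test',   # as in the snippet above
        'session_id': session_id,
    })
    url = 'http://{0}/rest/?{1}'.format(host, params)
    body = command.encode('ascii')   # raw CLI text, no form encoding
    return url, body

# To actually send it (not executed here, needs a reachable device):
#   import urllib.request
#   url, body = build_cli_request('10.0.0.1', 'cli', 'admin', 'pw', 'sid1', 'help')
#   req = urllib.request.Request(url, data=body)
#   print(urllib.request.urlopen(req).read())
```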