I'm trying to create a self-documenting Python web service.
My Situation:
I've got an object accessible via a RESTful Python web service.
http://foo.com/api/monkey
And what I'd like to do is: if
there's an error during a call to http://foo.com/api/monkey (for example, http://foo.com/api/monkey/get is called without "&monkey_id={some number}")
*or*
the web service call is made specifically to http://foo.com/api/monkey/help
then I want it to return the formatted HTML epydoc output for that object (dynamically).
I've reviewed Cornice, but it's a pain because I don't want to use Pyramid. I really don't want to be coupled to any particular web framework to be able to do this.
My question is this: Is what I want to do, with epydoc, possible?
The first use case ("there's an error during a call") is poorly defined. 404 errors, for example, don't result in help pages; they're perfectly ordinary.
A request to http://foo.com/api/bad/path/get gives the server no way to figure out which help page to send, since it isn't a monkey request at all.
Also, putting /get on your path is not really RESTful at all. /monkey/get/monkey_id={some number} is considered bad form; you should consider /monkey/{some number}/ instead, which is RESTful.
There are very, very few situations where you'll want to show help.
However, there may be some kinds of error handling where you do want to show help.
For these you should provide a 301 redirect to http://foo.com/api/monkey/help instead of some other error page.
Your ordinary http://foo.com/api/monkey/help URLs should be handled by Apache (or nginx or lighttpd or whatever your web server is) and redirected to the static epydoc-produced HTML files.
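Since the asker wants to stay decoupled from any particular framework, one way to sketch the redirect-on-error idea is plain WSGI middleware. This is only an illustration: the trigger status (400) and the help path are assumptions, not anything from the original service.

```python
# A framework-agnostic sketch: WSGI middleware that turns a 400 Bad
# Request from the wrapped app into a 301 redirect to the static
# epydoc help page. Trigger status and help path are assumptions.
def help_redirect_middleware(app, help_url="/api/monkey/help"):
    def middleware(environ, start_response):
        state = {"redirect": False}

        def capture(status, headers, exc_info=None):
            if status.startswith("400"):
                state["redirect"] = True
                return start_response(
                    "301 Moved Permanently",
                    [("Location", help_url), ("Content-Length", "0")],
                    exc_info,
                )
            return start_response(status, headers, exc_info)

        body = app(environ, capture)
        # Drop the original error body; the redirect carries no content.
        return [b""] if state["redirect"] else body

    return middleware
```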
I'm working with a networking appliance that has vague API documentation. I'm able to execute PATCH and GET requests fine, but POST isn't working: I receive an HTTP 422 error response saying I'm missing a field in the JSON request, even though I am providing the required fields as specified in the documentation. I have tried the Python Requests module and the PyCurl module from the vendor-provided sample code, but encountered the same error with both.
Does the REST API have a debug method that returns the required fields, and their value types, for a specific POST? I'm asking about what the template is configured to expect in the request (such as JSON {str(ServerName): int(ServerID)}), not what the API developer may have created.
No, this does not exist in general. Some services support an OPTIONS request to the route in question, which should return documentation about the route. If you are lucky this is machine-generated from the same source code that implements the route, so it is more accurate than static documentation. However, it may just return a very simple summary, such as which HTTP verbs are supported, which you already know.
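For example, you can probe a route like this with the Requests library; the host and path here are hypothetical stand-ins for the real appliance:

```python
# Probe a route for supported verbs and any returned documentation.
# The host and path are hypothetical stand-ins for the real appliance.
import requests

resp = requests.options("https://appliance.example.com/api/servers")
print(resp.status_code)
print(resp.headers.get("Allow"))  # often just a verb list, e.g. "GET, POST"
print(resp.text)                  # a few services return route docs here
```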
Even better, some services may support a machine description of the API using WSDL or WADL, although you probably will only find that if the service also supports XML. This can be better because you will be able to find a library that can parse the description and generate a local object model of the service to use to interact with the API.
However, even with an OPTIONS response or a WADL file, the kind of error you are facing could still happen. If the documents are not helping, you probably need to contact the service's support team with a demonstration of your problem and request assistance.
I have zero experience with website development, but I am working on a group project and am wondering whether it would be possible to create an interaction between a simple HTML/CSS website and my Python function.
Required functionality:
I have to take a simple string input from a text box on the website and pass it into my Python function, which gives me a single list of strings as output. This list of strings is then passed back to the website. I would just like a basic tutorial showing how to achieve this. Please do not give me a link to the CGI Python website, as I have already read it and would like a more basic and descriptive explanation of how this is done. I would really appreciate your help.
First you will need to understand HTTP. It is a text-based protocol.
I assume that by "web site" you mean a User-Agent, like Firefox.
Now, you're talking about an input box; that means you've already handled an HTTP request for your content. In most web applications this would have been several requests (one for the dynamically generated application HTML, and more for the static CSS and JS files).
CGI is the most basic way to programmatically inspect already parsed HTTP requests and create HTTP responses from objects you've set.
Your application is simple enough that you could probably do all the HTTP parsing yourself to gain a basic understanding of what's going on, but you would still need to understand how to write a server that listens on a socket.
To avoid all that, just use a Python application server that has already implemented all of the above and much more. There are many Python application servers to choose from; for something as simple as this, pick one with a small learning curve. Some in this genre are labeled "micro-frameworks".
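As an illustration, here is a minimal sketch with Flask, one such micro-framework. my_function and the "text" form field are placeholders for the asker's real code:

```python
# A minimal sketch with Flask, one of the "micro-frameworks" mentioned
# above. my_function and the "text" field are illustrative placeholders.
from flask import Flask, request

app = Flask(__name__)

def my_function(text):
    # Stand-in for the real function: returns a list of strings.
    return text.split()

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        items = my_function(request.form["text"])
        return "<ul>" + "".join("<li>%s</li>" % i for i in items) + "</ul>"
    return '<form method="post"><input name="text"><button>Send</button></form>'

if __name__ == "__main__":
    app.run()
```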
Have you considered creating an app on Google App Engine (https://developers.google.com/appengine/)?
Their Handling Forms tutorial seems to describe your problem:
https://developers.google.com/appengine/docs/python/gettingstartedpython27/handlingforms
I have a REST (or almost-REST) web API.
I want API users to be able to use all of the API even if, for some reason, they can only make GET calls. The plan is to accept a URL parameter (query string) like request_method that can be GET (default) or POST, PUT, DELETE, and to route requests accordingly.
My question: other than the standard request handler overrides, and checking in each RequestHandler's get(self) method whether the request is meant to be a POST, PUT, or DELETE and calling the appropriate function, is there a way to do this "routing" in a more general way, such as in the URL patterns in the application definition, or by overriding a routing function, or something similar?
To make it clear, these requests are all coming over GET with a parameter for example like ?request_method=POST
Any suggestions are appreciated.
Possible Solutions:
Only have a ".*" URL pattern and handle all the routing in a single RequestHandler. This should work fine, except that I won't be taking advantage of Tornado's URL pattern matching features.
Add an if to all the get(self) methods in all the request handlers, checking whether the request should be handled by get; if not, call the relevant method (a generalized sketch of this option follows below).
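The second option can at least be hoisted into a base class so each handler doesn't repeat the check. This is illustration only; the answer below explains why tunneling unsafe verbs over GET is dangerous:

```python
# A sketch of the second option hoisted into a base class: get()
# dispatches on the ?request_method=... argument. Illustration only;
# the answer below explains why this tunneling is dangerous.
import tornado.web

class TunnelingHandler(tornado.web.RequestHandler):
    def get(self, *args, **kwargs):
        method = self.get_argument("request_method", "GET").lower()
        if method == "get":
            return self.handle_get(*args, **kwargs)
        if method not in ("post", "put", "delete"):
            raise tornado.web.HTTPError(405)
        return getattr(self, method)(*args, **kwargs)

    def handle_get(self, *args, **kwargs):
        # Subclasses override handle_get() instead of get().
        raise tornado.web.HTTPError(405)
```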
This would be a very foolish thing to do. Both Chrome and Firefox, along with many other web user agents, will speculatively fetch (GET) some or all of the links on a page, including your request_method=DELETE URLs. You will find your database has been emptied out just because someone was looking around. Do not deliberately break HTTP. GET is defined to be a "safe" method, meaning it's okay to GET any URL you like and nothing bad will happen.
EDIT for others in similar situations:
The OP says he is using JSONP and is in control of both the API server and the client web app. In such a case the ideal solution is Cross-Origin Resource Sharing (CORS), although this technology requires IE8+, Firefox 3.5+, Safari 4+ or Chrome 3+. If you need to target earlier browsers, and you control both domains, I would recommend merging the content of the two domains, at least for your own client web app. The API domain can remain for external clients, but they would be restricted by the CORS browser requirements.
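Since the question is about Tornado, a minimal sketch of enabling CORS there might look like the following; the allowed origin is an illustrative assumption:

```python
# A minimal CORS sketch for Tornado, so the client web app can make
# real POST/PUT/DELETE calls cross-origin instead of tunneling them
# over GET. The allowed origin is an illustrative assumption.
import tornado.web

class CorsHandler(tornado.web.RequestHandler):
    def set_default_headers(self):
        self.set_header("Access-Control-Allow-Origin", "https://app.example.com")
        self.set_header("Access-Control-Allow-Methods",
                        "GET, POST, PUT, DELETE, OPTIONS")
        self.set_header("Access-Control-Allow-Headers", "Content-Type")

    def options(self, *args, **kwargs):
        # Answer the browser's preflight request with no body.
        self.set_status(204)
        self.finish()
```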
I am currently working on a project to create a simple file-uploader site that will update the user on the progress of an upload.
I've been attempting this in pure Python (with CGI) on the server side, but to get the progress of the file I obviously need to send requests to the server continually. I was looking at using AJAX to do this, but I was wondering how hard it would be, instead of changing to some other framework (web.py for instance), to just write my own web server for receiving the XMLHttpRequests?
My main problem is that sending the request is done from HTML and JavaScript, so it all seems like magic trickery at the moment.
Can anyone advise me as to the best way to go about receiving these requests on the server?
EDIT: It seems that a framework would be the way to go. Would web.py be a good route to take?
I would recommend using a microframework like Sinatra for Ruby. There are some equivalents for Python; see "What Python equivalent of Sinatra would you recommend?".
Such a framework allows you to simply map a single method to a route.
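For instance, with Bottle (one Python framework in the Sinatra mold) the mapping is a single decorator; the route and the hard-coded progress value below are placeholders:

```python
# A sketch with Bottle, a Python microframework in the Sinatra mold:
# one function mapped to one route. The progress value is a placeholder.
from bottle import route, run

@route("/progress/<upload_id>")
def progress(upload_id):
    # Bottle serializes a returned dict to JSON automatically.
    return {"upload_id": upload_id, "percent": 42}

if __name__ == "__main__":
    run(host="localhost", port=8080)
```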
Writing a very basic HTTP server won't be very hard (see http://docs.python.org/library/simplehttpserver.html for an example), but you will be missing many features that are provided by real servers and web frameworks.
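To see how little is involved, here is a bare-bones sketch using only the standard library (http.server is the Python 3 home of the SimpleHTTPServer module linked above); the hard-coded progress value is a placeholder:

```python
# A bare-bones endpoint sketch using only the standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProgressHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Illustrative only: report a hard-coded progress value as JSON.
        body = b'{"percent": 42}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ProgressHandler).serve_forever()
```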
For your project, I suggest you pick one of the many Python web frameworks and run your application behind Apache/mod_wsgi.
There's absolutely no need to write your own web server. Plenty of options exist, including lightweight ones like nginx.
You should use one of those, and either your own custom WSGI code to receive the request, or (better) one of the microframeworks like Flask or Bottle.
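The "custom WSGI code" option really is just a callable; a minimal sketch:

```python
# A minimal WSGI application: this is all the "custom WSGI code"
# option amounts to. Any WSGI server behind nginx can run it.
def application(environ, start_response):
    body = b"upload progress would go here"  # placeholder response
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```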
I've had a look at many tutorials regarding cookiejar, but my problem is that the webpage I want to scrape creates the cookie using JavaScript, and I can't seem to retrieve the cookie. Does anybody have a solution to this problem?
If all pages have the same JavaScript then maybe you could parse the HTML to find that piece of code, and from that get the value the cookie would be set to?
That would make your scraping quite vulnerable to changes in the third party website, but that's most often the case while scraping. (Please bear in mind that the third-party website owner may not like that you're getting the content this way.)
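A rough sketch of that approach, assuming the page sets the cookie with a literal document.cookie assignment; the URL and the regex are placeholders to adapt to the real page:

```python
# A rough sketch of the parsing approach. Assumes the page sets the
# cookie with a literal assignment like: document.cookie = "name=value";
# The URL and regex are placeholders to adapt to the real page.
import re
from urllib.request import urlopen

html = urlopen("http://example.com/page").read().decode("utf-8", "replace")
match = re.search(r'document\.cookie\s*=\s*"([^=]+)=([^";]+)', html)
if match:
    name, value = match.groups()
    print(name, value)
```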
I responded to your other question as well: take a look at mechanize. It's probably the most fully featured scraping module I know: if the cookie is sent, then I'm sure you can get to it with this module.
Maybe you can execute the JavaScript code in a JavaScript engine with Python bindings (like python-spidermonkey or pyv8) and then retrieve the cookie. Or, since the JavaScript code is executed client-side anyway, you may be able to convert the cookie-generating code to Python.
You could access the page using a real browser, via PAMIE, win32com or similar, then the JavaScript will be running in its native environment.