Calling a function with CherryPy - Python

So I'm doing a bit of web development, and due to some restrictions set by my employer I need to use Cheetah and CherryPy. I have a form that runs a function on submit, and from that function I call another one via HTTPRedirect. What I want is to call it without redirecting. Here is an example:
@cherrypy.expose
def onSubmit(**kwargs):
    # Do something
    # Do something
    # Do something
    raise cherrypy.HTTPRedirect("/some_other_location/doSomethingElse?arg1=x&arg2=y")
Now I want to do more stuff after running the second function, but I can't, because the redirect means the code ends there. So my question is: is there a way to run that other function without redirecting, but still over HTTP? In JavaScript I would use AJAX, pass it the URL, and store the output in a variable, but I'm not sure how to do this with CherryPy.

Instead of doing the redirect, use one of the standard Python libraries for fetching HTTP data:
http://docs.python.org/library/urllib.html
http://docs.python.org/library/urllib2.html
or other arguably nicer third-party ones:
http://docs.python-requests.org/
http://code.google.com/p/httplib2/
Also, don't forget to convert the relative URL to an absolute URL, even if it's localhost.
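For instance, a minimal sketch of building that absolute URL inside a handler, assuming cherrypy.request is in scope (cherrypy.request.base is the scheme-and-host prefix of the current request):
import urlparse
import cherrypy

# Join the handler-relative path onto the current request's base URL,
# e.g. "http://localhost:8080" + "/some_other_location/...".
relative = "/some_other_location/doSomethingElse?arg1=x&arg2=y"
url = urlparse.urljoin(cherrypy.request.base, relative)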
To help you get started, here's an untested code snippet derived from your example, using urllib2:
import urllib2

@cherrypy.expose
def onSubmit(**kwargs):
    # Do something
    # Do something
    # Do something
    url = "http://localhost/some_other_location/doSomethingElse?arg1=x&arg2=y"
    try:
        data = urllib2.urlopen(url).read()
    except urllib2.HTTPError as e:
        raise cherrypy.HTTPError(500, "HTTP error: %d" % e.code)
    except urllib2.URLError as e:
        raise cherrypy.HTTPError(500, "Network error: %s" % e.reason)
    # Do something with the data
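If installing requests is an option, the same call is shorter. An equally untested sketch along the same lines:
import requests

url = "http://localhost/some_other_location/doSomethingElse"
try:
    # requests builds the query string from the params dict.
    r = requests.get(url, params={"arg1": "x", "arg2": "y"}, timeout=10)
    r.raise_for_status()  # turn 4xx/5xx responses into exceptions
except requests.RequestException as e:
    raise cherrypy.HTTPError(500, "HTTP error: %s" % e)
data = r.text
# Do something with the data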

Related

Python requests inside a Django view hangs and needs a timeout

I'm seeking help with this rather strange behaviour.
I have a Django view that gets called after a button click in a Django template:
@require_http_methods(['GET', 'POST'])
@login_required
@transaction.atomic
def create_key(request, slug):
    # some unrelated code
    try:
        r = requests.post(
            some_url,
            data={
                # some_data
            },
            auth=(client_id, client_secret),
            timeout=(req_to, res_to)
        )
        if r.status_code == 200:
            return True
        else:
            return False
    except ReadTimeout as to:
        # handle exception
        return True
    except Exception as e:
        # handle exception
        return False
    # some unrelated code
that basically calls an API endpoint to create a key.
Now the request works fine from Postman, and taking that Python snippet out and running it alone also works, but when it's put inside this Django view it hangs until it reaches the response timeout.
Does anybody have any idea or a pointer to where the problem might be?
Thank you in advance!
EDIT: I've found similar questions, but while they share the same structure as mine, the problem there was somewhere else:
Why Python requests library failing to get response?
LiveServerTestCase hangs at python-requests post call in django view
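One generic way to see how far the request gets before it stalls (purely a debugging sketch, not a confirmed fix) is to turn on urllib3's wire-level logging before the view runs:
import logging

# requests uses urllib3 under the hood; at DEBUG level urllib3 logs each
# connection attempt and response status, so the last line printed before
# the hang shows whether it is stuck connecting or waiting for a response.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)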

How can I print all incoming data in Python's Tornado?

I'm trying to get a basic grasp of what the communication between an AJAX request and Tornado looks like, but I can't find any function that gives me something I can pass to print().
I've checked the API (http://www.tornadoweb.org/en/stable/web.html) and every function with the word "get" in it seems to require that I first know the name of the thing I'm trying to get.
I'm not quite there yet with my understanding, and would like to start by just printing everything there is to print: all the headers and all the data, going in and out.
How do I do this?
# pseudo code
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        everything = self.getIncomingHeaders + self.getDataSentByAjaxCall
        print(everything)
Do this:
def get(self):
    print("%r %s" % (self.request, self.request.body.decode()))
For a "get" there is no request body, but you can put the same code in a "put" or "post" method and see the full request body along with headers, path, and so on.

Identify if a website is taking too long to respond

I need to find out whether a website is taking too long to respond.
For example, I need to identify this website as problematic: http://www.lowcostbet.com/
I am trying something like this:
print urllib.urlopen("http://www.lowcostbet.com/").getcode()
but I am getting "Connection timed out".
My objective is just to create a routine that identifies which websites are taking too long to load (e.g. more than 4 seconds) and cancels the request.
urlopen from the urllib2 package has a timeout param.
You can use something like this:
from urllib2 import urlopen

TO = 4
website = "http://www.lowcostbet.com/"

try:
    response = urlopen(website, timeout=TO)
except:
    mark_as_not_responsive(website)
UPD:
Please note that using my snippet as-is is a bad idea, because the bare except will catch all kinds of exceptions, not just timeouts. And you probably want to make several tries before marking a website as non-responsive.
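A sketch of a stricter variant that treats only timeouts as unresponsive (mark_as_not_responsive is assumed, as above, to be defined elsewhere):
import socket
import urllib2

def check(website, timeout=4):
    # A connect timeout surfaces as URLError with reason=socket.timeout;
    # a read timeout surfaces as a bare socket.timeout.
    try:
        urllib2.urlopen(website, timeout=timeout)
    except urllib2.URLError as e:
        if isinstance(e.reason, socket.timeout):
            mark_as_not_responsive(website)
            return
        raise  # some other network problem; let the caller see it
    except socket.timeout:
        mark_as_not_responsive(website)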
Also, requests.get has a timeout kwarg you can pass in.
From the docs:
requests.get('http://github.com', timeout=0.001)
This will raise an exception, so you probably want to handle that.
http://docs.python-requests.org/en/latest/user/quickstart/
The timeout value will be applied to both the connect and the read timeouts. Specify a tuple if you would like to set the values separately:
import requests

try:
    r = requests.get('https://github.com', timeout=(6.05, 27))
except requests.Timeout:
    ...
except requests.ConnectionError:
    ...
except requests.HTTPError:
    ...
except requests.RequestException:
    ...
else:
    print(r.text)
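And since deciding after a single attempt is fragile, here is a sketch of adding automatic retries with a requests Session (the retry count and backoff are arbitrary choices):
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry up to 3 times, with a short, growing delay between attempts.
adapter = HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.5))
session.mount('http://', adapter)
session.mount('https://', adapter)

try:
    r = session.get('http://www.lowcostbet.com/', timeout=4)
except requests.RequestException:
    print('not responsive')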

Why does 'url' not work as a variable here?

I originally had the variable cpanel named url, and the code would not return anything. Any idea why? It doesn't seem to be used by anything else, but there's got to be something I'm overlooking.
import urllib2

cpanel = 'http://www.tas-tech.com/cpanel'
req = urllib2.Request(cpanel)
try:
    handle = urllib2.urlopen(req)
except IOError as e:
    if hasattr(e, 'code'):
        if e.code != 401:
            print 'We got another error'
            print e.code
        else:
            print e.headers
            print e.headers['www-authenticate']
Note that urllib2.Request has a parameter named url, but that really shouldn't be the source of the problem; it works as expected:
>>> import urllib2
>>> url = "http://www.google.com"
>>> req = urllib2.Request(url)
>>> urllib2.urlopen(req).code
200
Note that your code above behaves identically when you swap cpanel for url, so the problem must have been elsewhere.
I'm pretty sure that /cpanel (if it is the hosting control panel) actually redirects (302) you to http://www.tas-tech.com:2082/ or something like that. You should update your code to deal with the redirect (or, better yet, just send the request to the real address).
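A quick, untested sketch of checking that theory: urllib2 follows redirects automatically, so comparing the final URL with the one you requested exposes a 302:
import urllib2

response = urllib2.urlopen('http://www.tas-tech.com/cpanel')
# geturl() returns the URL that was actually retrieved, after redirects.
print response.geturl()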

urllib ignore authentication requests

I'm having a little trouble creating a script that works with URLs. I'm using urllib.urlopen() to get the content of a desired URL, but some of these URLs require authentication, and urlopen prompts me to type in my username and then my password.
What I need is to ignore every URL that requires authentication, i.e. just skip it and continue. Is there a way to do this?
I was wondering about catching the HTTPError exception, but the exception is handled inside urlopen() itself, so that doesn't work.
Thanks for every reply.
You are right about the urllib2.HTTPError exception:
exception urllib2.HTTPError
Though being an exception (a subclass of URLError), an HTTPError can also function as a non-exceptional file-like return value (the same thing that urlopen() returns). This is useful when handling exotic HTTP errors, such as requests for authentication.
code
An HTTP status code as defined in RFC 2616. This numeric value corresponds to a value found in the dictionary of codes as found in BaseHTTPServer.BaseHTTPRequestHandler.responses.
The code attribute of the exception can be used to verify that authentication is required: code 401.
>>> import urllib2
>>> try:
...     conn = urllib2.urlopen('http://www.example.com/admin')
...     # read conn and process data
... except urllib2.HTTPError as x:
...     print 'Ignoring', x.code
...
Ignoring 401
>>>
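Putting that together, a sketch of the skip-and-continue loop the question asks for; note it uses urllib2.urlopen rather than urllib.urlopen, since urllib's opener is the one that prompts for credentials:
import urllib2

urls = ['http://www.example.com/', 'http://www.example.com/admin']
for url in urls:
    try:
        data = urllib2.urlopen(url).read()
    except urllib2.HTTPError as e:
        if e.code == 401:
            continue  # requires authentication, skip it
        raise
    print url, len(data)  # process the data here instead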
