I've got a bottle-based HTTP server that mostly shuffles JSON data around. When I run this in Python 2.7 it works perfectly, and in my route handlers I can access the JSON data via bottle.request.json. However, when I run it under Python 3.4 bottle.request.json is None.
I've examined the HTTP traffic, and in both cases it is exactly the same (as would be expected, since that's under the control of the non-Python-dependent client).
I also see that the JSON data is reaching bottle in both cases. If I print out bottle.request.params.keys(), I see the string-ified JSON as the only entry in the list in both cases. And the strings are identical in both cases. For some reason, however, the Python 2 version is recognizing the JSON data while the Python 3 version isn't.
Strangely, this used to work, but some recent change either in my code or bottle (or both) has broken things. Looking over my code, though, I can't see what I might have done to create the problem.
Does anyone know what's going on? Is this something I'm doing wrong at the client end, at the bottle configuration end, or is this a bottle defect? I searched for this problem both on google and the bottle issue tracker, but to no avail.
This turns out to have nothing to do with bottle. The ultimate cause of the problem is that the client request has two Content-Type headers due to a defect in an emacs lisp HTTP library. Embarrassingly, I've known about this defect for quite some time, but I thought I'd properly worked around it.
I'm not 100% sure why I see the variance between Python 2 and 3, but my guess right now is that it has to do with otherwise benign changes in the WSGI machinery between the versions.
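For anyone who hits the same symptom: bottle only populates request.json when the Content-Type header identifies the body as JSON, so a duplicated or mangled header leaves it as None. Below is a minimal, hedged sketch of a defensive fallback that parses the body manually; the route and field names are illustrative, not from my actual app.

import json
from bottle import post, request

@post('/data')  # illustrative route
def receive_data():
    payload = request.json  # None unless Content-Type says application/json
    if payload is None:
        # Fall back to parsing the raw body ourselves, assuming it really is JSON.
        payload = json.loads(request.body.read().decode('utf-8'))
    return {'received': payload}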
Does Brython have a recommended method for using the same rendering code on the server side?
Currently I'm doing this with my own browser module that emulates Brython's on the server side for HTML rendering (https://github.com/yairchu/vote_tool/blob/master/browser.py), but I wonder if there's a better way.
The portion of code you are using is short, elegant, and quite specific:
it is plain Python that runs on the server side, as you have noted, and it won't be easy to find another piece of rendering code that is this concise and still
works on Brython's client side (since Brython does not yet achieve 100% compatibility with Python).
That said, I think it is more than OK to reuse this code on the server side of your project.
Note that by carefully laying out the files in your project directories, it is possible
to have some files be imported on both the server and the client side. Done correctly,
this can simplify a great deal of work.
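As a rough illustration (not from the original project; all names below are made up), a pure-Python module with no browser-only imports can be imported by the CPython server and also served to Brython on the client:

# Hypothetical layout:
#   project/
#     shared/render.py   <- pure-Python rendering helpers, no browser-only imports
#     server/app.py      <- regular CPython: from shared.render import render_votes
#     static/client.py   <- Brython script:  from shared.render import render_votes

# shared/render.py
def render_votes(votes):
    """Build an HTML fragment from a list of vote labels."""
    items = "".join("<li>%s</li>" % v for v in votes)
    return "<ul>%s</ul>" % items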
Over the last couple of days I installed Python 2.7.3 and Neo4j community edition 1.8.M01. I managed to get the embedded Python bindings to work, but as I need the py2neo REST bindings I installed them as described at http://py2neo.org/. Moreover, I can't download directly from git due to a "Permission denied (publickey)" error, so I took the available py2neo-1.2.6.tar.gz from the download section.
While the installation itself was not the problem, I can't get the example to work: on calling neo4j.GraphDatabaseService('http://localhost:7474/db/data'), Python crashes without any error message - Win7 64-bit only pops up a message that the application does not respond. Java, Python and Neo4j are all running as 64-bit, and the server is accessible at http://localhost:7474. I even tried to force an output as described here: Catching a python app before it exits - but still no stack trace or error log.
I've installed everything from scratch or via the executables provided at www.lfd.uci.edu/~gohlke/pythonlibs/ several times now, but nothing has managed to get this example to work.
I have installed both tornado 2.2.1 and pycurl 7.23.1. pycurl.version_info() reveals: (3, '7.23.1', 464641, 'Windows', 28, 'OpenSSL/0.9.8s', 0, '1.2.5', ('gopher', 'http', 'https', 'imap', 'imaps', 'pop3', 'pop3s', 'rtsp', 'smtp', 'smtps'), None, 0, None). Moreover,
c = pycurl.Curl()
c.setopt(c.URL, 'http://stackoverflow.com')
c.perform()
returns the content of the start page.
I've followed the code path via print statements into tornado.IOLoop.start() and from there into _run_callback(), where it actually executes callback() and crashes. I'm not sure whether the callback function defined inside tornado.HTTPClient.fetch() should be the one called here - printing the callback gives <tornado.stack_context._StackContextWrapper object at ...>.
Any suggestion on how to fix this issue?
Thanks in advance,
Roman
edit: corrected the port, which was a typo
edit2: after a longer debugging session that narrowed down the point of failure a bit, Nigel provided me with a way to deal with my issue by exchanging
self._http = http or httpclient.HTTPClient(curl_httpclient.CurlAsyncHTTPClient)
with
self._http = http or httpclient.HTTPClient()
in line 55 of rest.py. This is a workaround and does not solve the underlying problem in tornado/pycurl. The Windows management console names pycurl.pyd as the cause of the crash, and since some of the nodes do get stored in Neo4j (after an occasionally successful initialization of the GraphDatabaseService) and the debug output below is no longer shown, the crash must occur between sending the request and returning to the main application. I currently believe that either the select-based polling that tornado falls back to on Windows is the reason for the crash, or that the curl handle gets shared between different threads - which should not happen (http://curl.haxx.se/libcurl/c/libcurl-tutorial.html) - and that seems the most plausible explanation to me.
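To help narrow this down further, here is a hedged sketch (not from the original post) that exercises the two tornado client back ends in isolation, outside of py2neo, against the database root URI. It assumes tornado 2.x, where HTTPClient can be handed the async client class to wrap, just as rest.py does:

from tornado import httpclient, simple_httpclient, curl_httpclient

# Database root URI from the question; adjust as needed.
URL = 'http://localhost:7474/db/data/'

for client_class in (simple_httpclient.SimpleAsyncHTTPClient,
                     curl_httpclient.CurlAsyncHTTPClient):
    client = httpclient.HTTPClient(client_class)  # same pattern as rest.py line 55
    try:
        response = client.fetch(URL)
        print('%s -> %s' % (client_class.__name__, response.code))
    except httpclient.HTTPError as e:
        print('%s -> HTTP error: %s' % (client_class.__name__, e))
    finally:
        client.close()

If only the curl-backed run takes the process down, that points squarely at pycurl on this platform rather than at py2neo.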
Sorry to hear that you're having issues with py2neo. I haven't carried out any testing under Windows, since I only run Linux, so I'm unsure whether there are any general incompatibilities there. I am also aware that error reporting is not as good as it should be, something that has been limited by the amount of time I've had to work on the project.
That said, I notice that you are running on port 4747 instead of the default 7474 - or is this a typo? Have you tried your short cURL test against the root database URI directly?
You seem to have covered all the bases looking at the layers involved so I'm unsure what else to look at here. I have considered adding an option to be able to switch between the curl_httpclient and the simple_httpclient - this may give an alternative to try. I will try to get something put up over the next few days.
Nige
I'm running a high-traffic SSL website with Apache/mod_wsgi/Python. Very occasionally (around 10 times in 3 months) I've seen some extra garbage characters in POST data.
Usually it's been in the form of an extra character at the end:
('access.uid', 'allow\xba')
('checksum', 'b219d6a006ebd95691d0d7b468a94510496c5dd8\xff')
Once though it was in the middle of someone's password. Something like:
('login_password', 'samplepass\xe7word')
I've tried to reconstruct the request with all the same headers but I haven't been able to duplicate the error. Anyone have any ideas about what could be causing this or any ideas of how I could go about reproducing and fixing this problem?
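One way to catch the next occurrence in the wild rather than trying to reproduce it by hand: a small WSGI wrapper that logs the raw body and a few headers whenever non-ASCII bytes turn up in a POST. This is only a hedged sketch; the function and logger names are made up and it assumes the usual case of a Content-Length header.

import io
import logging

log = logging.getLogger('post_audit')

def audit_post_bodies(app):
    def wrapper(environ, start_response):
        if environ.get('REQUEST_METHOD') == 'POST':
            length = int(environ.get('CONTENT_LENGTH') or 0)
            body = environ['wsgi.input'].read(length)
            if any(b > 0x7f for b in bytearray(body)):
                log.warning('non-ASCII POST body %r; CT=%r TE=%r UA=%r',
                            body,
                            environ.get('CONTENT_TYPE'),
                            environ.get('HTTP_TRANSFER_ENCODING'),
                            environ.get('HTTP_USER_AGENT'))
            # Hand the app a fresh stream, since we consumed the original.
            environ['wsgi.input'] = io.BytesIO(body)
        return app(environ, start_response)
    return wrapper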
(Copied from below):
I'm using apache-2.2.17_1 – Peter Mar 15 at 18:09
I'm using mod_wsgi-3.3_1 on one machine and mod_wsgi-2.8_1 on another. I've seen this error on both.
What version of Apache are you using? From memory, somewhere around Apache 2.2.12-2.2.15 there were various SSL fixes. You might want to ensure you are using Apache 2.2.15 or later.
What happens if you print eval("u'%s'" % garbled_text)? Does the output look plausible? (I understand that you may not be able to post sensitive data.)
It looks to me like somewhere it's assuming you're reading ASCII even though you've told it to read utf-8.
Can we see the code that reads this POST data into python, or where it is specified and from what input form?
Since you said all errors occurred in IE 7 or 8, I'm starting to suspect the error occurs client-side in the browser. I've never heard of anything like this error, and I have no clue what else could cause it server-side except for hardware failure (though that seems odd too, since only one character is added). Perhaps you should suggest that your users upgrade to a decent browser?
This looks very much like chunked HTTP/1.1.
Use an appropriate handler to un-chunk it prior to parsing. See [1], [2].
Another option is to only accept HTTP/1.0 which doesn't have chunking at all, but this may have downsides.
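To make this concrete, here is a hedged illustration of why stray bytes can appear when a chunked body is treated as a plain one: each chunk is framed by a hex size line and CRLFs, so naive parsing can pick up framing bytes as data. This is a toy decoder for illustration only, not the handler referred to in [1] or [2].

def dechunk(raw):
    """Decode an HTTP/1.1 chunked body supplied as bytes."""
    body = b''
    while raw:
        size_line, _, raw = raw.partition(b'\r\n')
        size = int(size_line.split(b';')[0], 16)  # chunk extensions may follow ';'
        if size == 0:
            break  # terminating zero-length chunk
        body += raw[:size]
        raw = raw[size + 2:]  # skip the chunk data and its trailing CRLF
    return body

# Two chunks spelling "allow"; the hex sizes and CRLFs are the framing bytes.
print(dechunk(b'3\r\nall\r\n2\r\now\r\n0\r\n\r\n'))  # -> b'allow'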
I am working on a website, hosted on DreamHost, using Python. For a while, I was using their default setup, which runs Python scripts using CGI. It worked fine, but I was worried that if I get a lot of traffic, it would run slow and use a lot of memory, so I switched it over to FastCGI using this module.
Overall, it still works fine, but there is one major annoyance: I can't seem to be able to see anything that gets written to the standard error stream. If anything goes wrong, my usual source of useful clues for what to do about it no longer works. Before, I used to see stuff sent to standard error in my Apache error log. Now, it just seems to disappear.
I tried making a test script, and writing strings using sys.stderr.write (from various places), and environ["wsgi.errors"].write (from within my app, where environ is the first parameter passed to the app by the WSGI/FastCGI wrapper). Either way, I couldn't find them. Does anyone know why, or how to access this data?
Keep in mind that this is my first time ever using FastCGI, so please let me know if I am making a bad choice by using this fcgi module.
If something in your system is capturing file descriptor two (the "real" stderr), you can assign sys.stderr to any open, writable file object, or to a file-like object (it basically just needs to implement write) -- including a cStringIO.StringIO instance, whose value you can get at any time (before it's closed) with a call to its .getvalue() method.
To capture any uncaught exception just before it terminates your code, assign to sys.excepthook a function of yours that collects the information and emits it in any way of your choice; or, to get and emit anything that was written to sys.stderr even without an exception (if that's what you want -- I'm not sure from your question), use atexit to
register your grab-info-and-emit-it function.
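A minimal sketch of that approach, assuming you simply want everything funnelled into a file you can read later; the log path is hypothetical, not something the fcgi module provides:

import atexit
import sys
import traceback

LOG_PATH = '/home/username/logs/fastcgi_errors.log'  # hypothetical location

_errlog = open(LOG_PATH, 'a', 1)  # line-buffered so entries appear promptly
sys.stderr = _errlog              # anything written to sys.stderr now lands here

def _log_uncaught(exc_type, exc_value, exc_tb):
    # Record uncaught exceptions that would otherwise vanish under FastCGI.
    traceback.print_exception(exc_type, exc_value, exc_tb, file=_errlog)

sys.excepthook = _log_uncaught

def _close_errlog():
    _errlog.flush()
    _errlog.close()

atexit.register(_close_errlog)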
I can't seem to get the wsgiref module to work at all under Python 3.0. It works fine under 2.5 for me, however. Even when I try the example in the docs, it fails. It fails so hard that even if I have a print call above where I do "from wsgiref.simple_server import make_server", it never gets printed for some reason. It doesn't throw any errors when run; it just displays a blank page in the browser and doesn't log any sort of request.
Does anybody know what the problem may be? Thanks!
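For reference, a minimal app roughly along the lines of the docs example; note that under Python 3 the iterable returned by the app must yield bytes, not str, which is a common stumbling block when moving WSGI code to 3.x:

from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI!']  # bytes, not str, under Python 3

httpd = make_server('', 8000, app)
print('Serving on http://localhost:8000/ ...')
httpd.serve_forever()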
Issue 4718: wsgiref package totally broken. Sorry about that.
You're in uncharted territory with WSGI on Python 3.0 I'm afraid.
WEB-SIG knew long ago that wsgiref was broken going into 3.0, but chose to do nothing about it. The spec hasn't been updated to cope with 3.0; pushing WSGI forward, even for the things everyone pretty much agrees on, is just agonisingly slow. It's depressing and senseless.
So yeah, it's easy to fix the obvious error with header unpacking in simple_server, but you'll still be running on a server that has been converted from Python 2 to 3 automatically and not really tested, with no de jure standard to say exactly what it should do... never mind framework compatibility.
Python 3.0 for web scripting: needs some work.