I am running socket.io on an Apache server through Python Flask. We're integrating it into an iOS app (using the Socket.IO-Client-Swift library) and we're having a weird issue.
From the client-side code in the app (written in Swift), I can view the actual connection log (client-side, in Xcode) and see the connection established from the client's IP and the requests being made. But the client never receives any information back from the socket server, not even when using a global event response handler.
I wrote a very simple test script in JavaScript on an HTML page, sent requests that way, and received the proper responses back. With that said, it seems likely to be an issue with iOS. I've found these articles, but none of them helped fix the problem:
https://github.com/nuclearace/Socket.IO-Client-Swift/issues/95
https://github.com/socketio/socket.io-client-swift/issues/359
My next thought is to extend the logging of socket.io to find out exactly what data is being POSTed to the socket namespace. Is there a way to log exactly what data is coming into the server? (Bear in mind that the 'on' hook I've set up on the server side is not getting any data; I've tried to log from there, but it doesn't appear to even get that far.)
I found Apache's mod_dumpio to log all POST requests, but I'm not sure how well it will play with multi-threading and a socket server.
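For reference, enabling mod_dumpio on Apache 2.4 looks something like this (the module path is illustrative and depends on the distribution):

LoadModule dumpio_module modules/mod_dumpio.so

# Dump all request input, bodies included, into the error log.
DumpIOInput On
# mod_dumpio logs at trace levels, so raise the level for this module only.
LogLevel dumpio:trace7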
Any ideas on how to get the exact data being posted so we can at least troubleshoot the syntax and make sure the data isn't being malformed when it's sent to the server?
Thanks!
Update
When testing locally, we got it working (it was a setting in the Swift code where the namespace wasn't being declared properly). This works fine now on localhost but we are having the exact same issues when emitting to the Apache server.
We are not using mod_wsgi (as far as I know; I'm relatively new to mod_wsgi, so apologies for any ignorance). We used to have a .wsgi file that called the main app script, but we had to change that because mod_wsgi is not compatible with Flask-SocketIO (as stated in the uWSGI Web Server section here). The way I am running the script now is with supervisord running the .py file as a daemon (using that specifically so it will autostart in the event of a server crash).
Locally, it worked great once we installed the eventlet module through pip. When I ran pip freeze in my virtual environment on the server, eventlet was installed. I uninstalled and reinstalled it just to see if that cleared anything up, but that did nothing. None of the other Python modules on my local copy look like they would affect this.
One other thing to keep in mind is that in the function that initializes the app, we change the port to port 80:
socketio.run(app, host='0.0.0.0', port=80)
because we have other API functions that run through a domain that is pointing to the server in this app. I'm not sure if that would affect anything but it doesn't seem to matter on the local version.
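For context, here is roughly what our entry point looks like, simplified (assuming Flask-SocketIO with the eventlet async mode; module names are illustrative):

import eventlet
eventlet.monkey_patch()  # patch the standard library before anything else touches sockets

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode='eventlet')

if __name__ == '__main__':
    # Binding to port 80 requires root (or a capabilities/authbind workaround).
    socketio.run(app, host='0.0.0.0', port=80)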
I'm at a dead end again and am trying to find anything that could help. Thanks for your assistance!
Another Update
I'm not exactly sure what was happening yet, but we went ahead and rewrote some of the code, paying extra special attention to the namespace declarations within each socket event 'on' function. It's working fine now. As I get more details, I will post them here, since I figure this will be useful for others who have the same problem. This thread also has some really valuable information on how to go about debugging/logging these types of issues, although we never fully figured out the answer to the original question.
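For anyone hitting the same problem, here is a simplified sketch of what declaring the namespace on every handler looks like on the Flask-SocketIO side (the '/ios' namespace and event names are made up for illustration):

from flask_socketio import emit

# Declare the namespace explicitly on every handler, not just on connect.
@socketio.on('connect', namespace='/ios')
def on_connect():
    emit('status', {'connected': True}, namespace='/ios')

@socketio.on('request_data', namespace='/ios')
def on_request_data(payload):
    # If the client emits to '/ios' but this decorator omits the namespace,
    # the handler never fires and the client never gets a reply.
    emit('response_data', {'echo': payload}, namespace='/ios')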
I assume you have verified that Apache does get the POST requests. That should be your first test: if Apache does not log the POST requests coming from iOS, then you have a different kind of problem.
If you do get the POST requests, then you can add some custom code in the middleware used by Flask-SocketIO and print the request data forwarded by Apache's mod_wsgi. This code is in the file flask_socketio/__init__.py. The relevant portion is this:
class _SocketIOMiddleware(socketio.Middleware):
    # ...
    def __call__(self, environ, start_response):
        # log what you need from environ here
        environ['flask.app'] = self.flask_app
        return super(_SocketIOMiddleware, self).__call__(environ, start_response)
You can find out what's in environ in the WSGI specification. In particular, the body of the request is available in environ['wsgi.input'], which is a file-like object you read from.
Keep in mind that once you read the payload, this file will be consumed, so the WSGI server will not be able to read from it again. Seeking the file back to the position it was before the read may work on some WSGI implementations. A safer hack I've seen people do to avoid this problem is to read the whole payload into a buffer, then replace environ['wsgi.input'] with a brand new StringIO or BytesIO object.
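As a concrete illustration of that buffer-and-replace hack, here is a minimal sketch of such a wrapper (the class name is made up and error handling is omitted):

import io

class BodyLoggingMiddleware(object):
    """Log each request body, then restore it so downstream code can read it."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        print('POST body: %r' % body)
        # wsgi.input is now consumed; hand the server a fresh stream instead.
        environ['wsgi.input'] = io.BytesIO(body)
        return self.app(environ, start_response)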
Are you using flask-socketio on the server side? If you are, there is a lot of debugging available in the constructor.
socketio = SocketIO(app, async_mode=async_mode, logger=True, engineio_logger=True)
Related
I have a bottle server running on port 8080, using the "gevent" server. I use this server to support some simple "server sent events".
My question is probably related to not knowing exactly how my set up is working. I hope someone can take the time to elaborate on this.
All routes and file serving from the server work great, but I have an issue when accessing a specific route, "/get_data". This gathers data from the web as well as from some internal data sources. The gathering takes about 30 minutes. While this process is running, I am not able to access any routes on the server, i.e. "/" or "/login". Once the process finishes, everything works again and the database is updated with the gathered information.
I tried replacing the gathering algorithms by a simple time.sleep(60), and while the timer was active, I was still able to access other routes just fine.
This leads to my two questions:
Why am I not able to access the server while this process is running? Is it the port that is blocked (from reading web information), or does it have something to do with threading?
What would be the best way to run a demanding / long process on my server? Preferably I would like to access it from my web app, but I have thought about just putting it in a separate Python file and running it locally on the server, in a separate instance of Python. This process runs at most once per day, maybe as seldom as once per week.
This happens because WSGI handles requests/responses synchronously.
You can use gunicorn to run your application; it will handle multiple requests concurrently. Or you can use one of the other methods described on the bottle website:
Primer to Asynchronous Applications
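As a rough sketch (module and app names are illustrative), running the bottle app under gunicorn with gevent workers would look something like this:

# myapp.py -- assuming the routes are attached to a bottle.Bottle() instance
import bottle

app = bottle.Bottle()

@app.route('/')
def index():
    return 'hello'

# Started from the shell with gevent workers, so one slow request
# no longer blocks the others:
#
#   gunicorn -k gevent -w 4 myapp:app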
I have a bunch of Google alerts set up as rss feeds that update in real time. What I want is to be able to store the new data the rss feed is sending out in a database.
After looking around I found that Google and Superfeedr both offer hubs that do most of the work for you; however, they both require a callback URL (obviously). I do have an Apache server running on the machine I'm working on, and it already has Python enabled so I can run Python scripts on my server. However, at the moment it's only accessible from within my LAN.
My real question is: what do I do next? I know that in PHP you would just have a callback file that handles requests, but I'm lost as to what to do in Python. Would I write a script and give the Google/Superfeedr services a URL to that script? What would be in the script? Any specific imports needed?
Also, I just read that if you use XMPP you don't need a callback url. How does that work?
For the local LAN problem, the most commonly used solution is to use tunneling solutions like Passageway. They will temporarily expose a local port of your machine to the "outer" web.
Now, as for implementation, it's fairly easy to set things up. Python is similar to PHP in the sense that you'll have to write a script that listens for network connections and then handles the HTTP requests you're getting from Superfeedr or Google. (It looks like you're not familiar with Python; why not stick to PHP then?)
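A minimal sketch of such a callback script in Python, using Flask (the route and port are made up; PubSubHubbub-style hubs verify subscriptions with a GET carrying hub.challenge):

from flask import Flask, request

app = Flask(__name__)

@app.route('/callback', methods=['GET', 'POST'])
def callback():
    if request.method == 'GET':
        # Subscription verification: echo the challenge back to the hub.
        return request.args.get('hub.challenge', '')
    # New feed entries arrive in the POST body; store them in your database here.
    print(request.data)
    return '', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)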
Finally, XMPP is a feature that only we (Superfeedr) offer. It solves the problem of exposing local ports because it works from behind the firewall.
I'm trying to debug an internal server error in my django app, running on heroku. I'm completely new to all of this web server stuff so I really have no idea what to do.
It seems like the stdout output is sometimes getting logged in heroku logs and sometimes not. I was reasonably sure that the program was reaching a certain line but the prints at that point are simply not showing up.
I am seeing the 500 error in my heroku logs, but there is no stack trace or anything else in there. I am trying to create a web server to respond to GET and POST requests from various applications I have running, meaning I don't know how to debug this in a web browser, if that's even applicable. The current error is on a POST request sent to the web server. I can't replicate this locally because the HTTP module I am using (http://www.python-requests.org/en/latest/) seems to be unable to connect to a local IP address.
I have done some extensive googling for the last hour and I haven't found any help. Do I need to enable logging or something somewhere in heroku? I am completely new to this, so please be explicit in your explanations. I have heard mention of a way to get stack traces emailed to you, but I haven't seen an explanation of how to do that. Is that possible?
Thanks!
I would recommend 2 things in this case:
First: use Python's logging facility rather than print statements (http://docs.python.org/2/howto/logging-cookbook.html). This gives you much more control over where your statements end up and allows you to filter them; a minimal setup sketch follows these two suggestions.
Second: use a logging add-on. This vastly increases the amount of logging you can store (loggly keeps all your logs for 24 hours even in the "free" size), so you don't have to worry about the relevant information falling out before you get around to looking at it.
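On the first point, a minimal sketch of a logging setup that writes to stdout, which Heroku captures (the handler and format are just one reasonable choice):

import logging
import sys

# Send everything to stdout so `heroku logs` picks it up.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
)

logger = logging.getLogger(__name__)
logger.debug('reached the POST handler')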
I'm running a script in Python that takes a long time to process. The problem is that if the function takes too long to run, nginx hits a timeout in its configuration, which raises errors and prevents the function from running to completion.
I just want to know where I can increase the timeout value, because I've tried some directives in the nginx conf file, such as:
uwsgi_connect_timeout 75;
uwsgi_send_timeout 75;
uwsgi_read_timeout 75;
keepalive_timeout 650;
but none of these worked.
Thanks in advance
The problem with just extending the timeout is that no matter how much longer you set it, you will run into limitations somewhere along the line: either with the web server, the browser, or your geocode calls. If it is something that routinely fails within a request, then you can't really make any guarantees.
So rather than having the client request hang on a long-running process (and by extension risking a server timeout), why not use something like Celery to run those geocode tasks, and on the client side submit the request via JavaScript and poll the server for the answer via AJAX until it gets a response?
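A hedged sketch of that split, assuming Celery with a Redis broker (the names and URLs are illustrative):

from celery import Celery

celery = Celery('tasks',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

@celery.task
def geocode_task(address):
    # ... the long-running geocode calls go here ...
    return {'address': address, 'lat': 0.0, 'lng': 0.0}

# In the web view: start the task and return its id right away.
#   result = geocode_task.delay(address)
#
# In the polling view the client hits via AJAX:
#   async_result = celery.AsyncResult(task_id)
#   if async_result.ready():
#       return json.dumps(async_result.get())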
I also had a Bad Gateway error in an NGINX + uWSGI configuration, and for the sake of people who google this question: it might be a missing uWSGI Python plugin. Please see: uWSGI configuration issue: uwsgi fails without any error message.
I tried everything written in the above response as well as other places but they did not work.
My solution was changing my socket in both the uwsgi.conf and nginx.conf files.
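For anyone in the same spot, the core idea is that the socket uWSGI binds to has to match the one nginx proxies to; something along these lines (the paths are illustrative):

# uwsgi.conf / uwsgi.ini
socket = /tmp/myapp.sock
chmod-socket = 666

# nginx.conf, inside the relevant location block
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/myapp.sock;
}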
I am trying to configure the Python mini-framework CherryPy with FastCGI (actually fcgid) on Apache. I am on a shared host, so I don't have access to httpd.conf, just htaccess. I have followed these tutorials to no avail:
http://tools.cherrypy.org/wiki/FastCGIWSGI
http://tools.cherrypy.org/wiki/BluehostDeployment
I keep getting 500 errors, with the Apache logs saying "Premature end of script headers". I have tried everything (permissions/shebangs/full paths/daemonized/not-daemonized). I know Apache is correctly executing my .fcgi, because I am able to print to the error log from Python, but that's it. Has anyone out there successfully installed CherryPy or any other framework on a shared host before? Your help would be greatly appreciated. Thanks.
Apache + Bluehost + fastcgi + cherrypy + wsgi is unfortunately a lot of pieces. I wish I had a year to write the Definitive Guide for you, but alas. You might gain some insight from the rather long mailing list thread which resulted in those links you posted.
An idea: make sure your .fcgi file has a reference to the correct python executable in the initial line:
#!/usr/bin/python
I had to get Django running with fcgi on Bluehost, and Apache using the wrong Python environment was my problem (it worked from the shell, but not from the web/Apache).
Other than that, if you can print to the error log from your code, can you confirm that your code is correctly executed, without any exceptions, when you access the web page (not when running from the shell)?
The Bluehost article has been the best resource, but I didn't carefully read the part about getting the latest patches (the beginning of step 3). At the time of the article, and even now with CherryPy version 3.1.2, you can't do 'dynamic mode' fcgi (when Apache spawns the process); more here. Dynamic mode is basically essential if you are on a shared host.
I have checked out the trunk (3.2.0rc1), and after jumping through some hoops, got it to work. I followed step 5, method C in the bluehost article. Here was the stuff in the main of my cherryd.fcgi:
if __name__ == '__main__':
    cherrypy.config.update({
        'server.socket_port': None,
        'server.socket_host': None,
        'server.socket_file': None
    })
    start(daemonize=False, fastcgi=True, imports=["hello"])
Also, in cherrypy/process/servers.py, I had to change the following line:
# from this
# if not hasattr(socket.socket, 'fromfd'):
# to this
if not hasattr(socket, 'fromfd'):
So, it is possible to get it to work, but it feels kind of hacky. You should wait for the final release of version 3.2.0, or do what I did and check out Web.py. I was able to get it working with my shared host very easily (docs explain fastcgi/htaccess well).
Your web server's error log should actually show the output that confused it. Are you sure you're looking in the error log as well as the access log?