I'm using a Django server together with an Orbited/STOMP server to write something like a chat application. Assume that some users are connected to Orbited. When one of them disconnects from Orbited, how can I notify the rest? I mean, I've tried the following code (JavaScript on the client side; maybe this is already wrong, the server should do the push, right?):
function end()
{
    stomp.send('user killed', '/channel');
}
together with
stomp.onclose = end;
but this doesn't work at all. Then I used
window.onbeforeunload = end;
but again no visible effect. I also replaced end() with a different function, which just does an AJAX POST to the Django server. But then stomp.onclose again does nothing, and window.onbeforeunload gives me a broken pipe.
So these were my attempts to implement the "client leaves a message before quitting" idea. But that failed.
I'm not even sure whether I'm doing this right. Is there a way to notify orbited/stomp users about leaving of a user? All ideas would be appreciated.
EDIT: Maybe there's another way. I've read that it is possible to configure the Orbited server to make an HTTP callback to the application with the user's key when someone's connection closes. Unfortunately there was no explanation of how to do that. Does anyone know the answer?
It seems that Orbited is not suited for this kind of thing (I talked with the Orbited creator). I switched to Hookbox and it works fine.
First important point for me: I want to implement WebSockets. I do not need the fallback options of Socket.IO.
I would like "my clients" to implement whatever they want, as long as they stick to the WebSocket protocol. Namely something like: var ws = new WebSocket.
So, if the server is Flask-SocketIO, will a simple JS WebSocket work?
ADDITIONAL NOTES:
Python first!
I am trying to set up a server which will respond only (actually only send) to websockets, no web page associated. (Yes, I am fine with WS and I do not need WSS, in case you ask ;) ).
I had a try on the server side with flask-sockets
https://github.com/kennethreitz/flask-sockets
but it is giving me some problems, like closing the connection immediately, and despite many reports of similar problems on the web I could not find a solution. It's hard to debug, too. So before I start developing a new server...
Sadly no, you cannot use a Socket.IO server with plain WebSocket clients. Sorry, that's not what Flask-SocketIO was made for.
(in case this isn't clear, this is the author of Flask-SocketIO speaking)
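For the plain-WebSocket setup the question asks about, one route is to skip Socket.IO entirely and build a push-only server on the third-party websockets package. This is only a sketch, not anything from Flask-SocketIO: the handler name, port, and messages are made up, and recent versions of websockets pass just the connection object to the handler:

```python
import asyncio

async def push_updates(websocket):
    # The server only sends; a plain browser "new WebSocket(...)" client
    # can receive these messages with an onmessage handler.
    for i in range(3):
        await websocket.send("update %d" % i)
        await asyncio.sleep(1)

# To actually serve (requires: pip install websockets):
#
# import websockets
#
# async def main():
#     async with websockets.serve(push_updates, "localhost", 8765):
#         await asyncio.Future()  # run forever
#
# asyncio.run(main())
```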
I'm trying to debug an internal server error in my Django app, running on Heroku. I'm completely new to all of this web server stuff, so I really have no idea what to do.
It seems like the stdout output is sometimes getting logged in heroku logs and sometimes not. I was reasonably sure that the program was reaching a certain line but the prints at that point are simply not showing up.
I am seeing the 500 error in my Heroku logs, but there is no stack trace or anything else in there. I am trying to create a web server to respond to GET and POST requests from various applications I have running, so I don't know how to debug this in a web browser, if that's even applicable. The current error is on a POST request sent to the web server. I can't replicate this locally because the HTTP module I am using, Requests (http://www.python-requests.org/en/latest/), seems to be unable to connect to a local IP address.
I have done some extensive googling for the last hour and I haven't found any help. Do I need to enable logging or something somewhere in Heroku? I am completely new to this, so please be explicit in your explanations. I have heard mention of a way to get stack traces emailed to you, but I haven't seen an explanation of how to do that. Is that possible?
Thanks!
I would recommend 2 things in this case:
First: use Python's logging facility rather than print statements (http://docs.python.org/2/howto/logging-cookbook.html). This gives you much more control over where your log output ends up, and allows you to filter it.
Second: use a logging add-on. This vastly increases the amount of logging you can store (Loggly, for example, keeps all your logs for 24 hours even on the free plan), so you don't have to worry about the relevant information rotating out before you get around to looking at it.
I occasionally get this error when my server (call it Server A) makes requests to a resource on another one of my servers (call it Server B):
ConnectionError: HTTPConnectionPool(host='some_ip', port=some_port): Max retries exceeded with url: /some_url/ (Caused by : [Errno 111] Connection refused)
The message in the exception is
message : None: Max retries exceeded with url: /some_url/ (Caused by redirect)
which I include because it has that extra piece of information (caused by redirect).
As I said, I control both servers involved in this request, so I can make changes to either and/or both. Also, the error appears to be intermittent, in that it doesn't happen every time.
Potentially relevant information: Server A is a Python server running Apache, and Server B is a Node.js server. I am not exactly a web server wizard, so beyond that, I'm not sure what information would be relevant.
Does anyone know exactly what this error means, or how to go about investigating a fix? Or, does anyone know which server is likely to be the problem, the one making the request, or the one receiving it?
Edit: The error has begun happening with our calls to external web resources also.
You are getting a Connection Refused on "some_ip" and port. That's likely caused by one of:
- no server actually listening on that port/IP combination;
- firewall settings that send Connection Refused (less likely a cause!);
- a misconfigured (more likely) or busy server that cannot handle requests.
I believe you are getting that error when Server A tries to connect to Server B. Assuming it's Linux and/or some Unix derivative, what does netstat -lnt show on the server? (man netstat to understand the flags; what we are doing here is trying to find which programs are listening on which ports.) If that indeed shows your Server B listening, then iptables -L -n will show the firewall rules. If nothing's wrong there, it's most probably a bad configuration of the listen queue (http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/023/2333/2333s2.html), or google for "listen backlog".
This is most likely a bad configuration issue on your Server B. (Note: a redirect loop, as someone mentioned above, could end up making the server busy if not handled correctly, so solving that could solve your problem as well.)
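Separately, since the error is intermittent, Server A can be made more forgiving while you track down the root cause. Assuming it uses the Requests library (as the exception text suggests), a retry policy with backoff is one stopgap; the URL below is made up:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient connection failures a few times, backing off between tries.
session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5,
                status_forcelist=[502, 503, 504])
session.mount("http://", HTTPAdapter(max_retries=retries))

# Hypothetical call to Server B:
# response = session.post("http://server-b.example/some_url/", data={"key": "value"})
```

This only papers over occasional refusals; if Server B's listen backlog is genuinely overflowing, fixing that is still the real answer.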
If you're using gevent on your python server, you might need to upgrade the version. It looks like there's just some bug with gevent's DNS resolution.
This is a discussion from the requests library: https://github.com/kennethreitz/requests/issues/1202#issuecomment-13881265
This looks like a redirect loop on the Node side.
You mention Server B is the Node server; you can accidentally create a redirect loop if you set up the routes incorrectly. For example, if you are using Express on Server B, you might have two routes, and assuming you keep your route logic in a separate module:
var routes = require(__dirname + '/routes/router')(app);
//... express setup stuff like app.use & app.configure
app.post('/apicall1', routes.apicall1);
app.post('/apicall2', routes.apicall2);
Then your routes/router.js might look like:
module.exports = Routes;

function Routes(app){
    var self = this;
    if (!(self instanceof Routes)) return new Routes(app);
    //... do stuff with app if you like
}

Routes.prototype.apicall1 = function(req, res){
    res.redirect('/apicall2');
};

Routes.prototype.apicall2 = function(req, res){
    res.redirect('/apicall1');
};
That example is obvious, but you might have a redirect loop hidden in a bunch of conditions in some of those routes. I'd start with the edge cases: what happens at the end of the conditionals within the routes in question, what is the default behavior if the call doesn't have the right parameters, and what is the exception behavior?
As an aside, you can use something like node-validator (https://github.com/chriso/node-validator) to help determine and handle incorrect request or post parameters
// Inside routes/router.js:
var check = require('validator').check;

function Routes(app){ /* setup stuff */ }

Routes.prototype.apicall1 = function(req, res){
    try{
        check(req.params.csrftoken, 'Invalid CSRF').len(6, 255);
        // Handle it here, invoke appropriate business logic or model,
        // or redirect, but be careful! res.redirect('/secure/apicall2');
    }catch(e){
        // You could log the error here, but don't accidentally create a
        // redirect loop; send an appropriate response instead.
        res.send(401);
    }
};
To help determine whether it is a redirect loop, you can do one of several things: use curl to hit the URL with the same POST parameters (assuming it is a POST; otherwise you can just use Chrome, which will error out in the console if it notices a redirect loop), or write to stdout on the Node server or syslog inside the offending route(s).
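If you can enumerate which route redirects where, the loop check itself is simple. Here is a sketch over a hypothetical route-to-redirect map (just the idea, not Express code; the route names mirror the example above):

```python
def find_redirect_loop(routes, start, limit=10):
    """Follow a route -> redirect-target map from 'start'.

    Returns the looping path if one is found, or None if the chain
    terminates (or exceeds 'limit' hops without repeating).
    """
    seen = []
    current = start
    while current in routes and len(seen) < limit:
        if current in seen:
            # We've been here before: everything from the first visit on loops.
            return seen[seen.index(current):] + [current]
        seen.append(current)
        current = routes[current]
    return None

# The two mutually redirecting routes from the Express example:
routes = {"/apicall1": "/apicall2", "/apicall2": "/apicall1"}
print(find_redirect_loop(routes, "/apicall1"))
```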
Hope that helps. Good thing you mentioned the "caused by redirect" part; that, I think, is the problem.
The example above uses Express to describe the situation, but of course the problem can exist using just Connect, other frameworks, or your own handler code if you aren't using any frameworks or libraries at all. Either way, I'd make it a habit to put in good parameter checking and always test your edge cases; I've run into exactly this problem when I've been in a hurry in the past.
So I'm implementing a log server with Twisted (python-loggingserver) and I added simple authentication to the server. If the authentication fails, I want to close the connection to the client. The class in the log server code already has a function called handle_quit(). Is that the right way to close the connection? Here's a code snippet:
if password != log_password:
    self._logger.warning("Authentication failed. Connection closed.")
    self.handle_quit()
If the handle_quit message you're referring to is this one, then that should work fine. The only thing the method does is self.transport.loseConnection(), which closes the connection. You could also just do self.transport.loseConnection() yourself, which will accomplish the same thing (since it is, of course, the same thing). I would select between these two options by thinking about whether failed authentication should just close the connection or if it should always be treated the same way a quit command is treated. In the current code this makes no difference, but you might imagine the quit command having extra processing at some future point (cleaning up some resources or something).
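To make the two options concrete without pulling in Twisted itself, here is a sketch with a stub transport standing in for Twisted's (the class and attribute names are illustrative, not taken from python-loggingserver):

```python
class StubTransport:
    """Stands in for a Twisted transport in this sketch."""
    def __init__(self):
        self.connected = True

    def loseConnection(self):
        # Twisted's real loseConnection() closes the connection cleanly.
        self.connected = False

class LogProtocol:
    def __init__(self, log_password):
        self.transport = StubTransport()
        self._log_password = log_password

    def handle_quit(self):
        # Today this only drops the connection, but a future version might
        # also clean up resources here; routing auth failures through it
        # means they would pick up that cleanup automatically.
        self.transport.loseConnection()

    def authenticate(self, password):
        if password != self._log_password:
            # Failed auth: treat it the same way as a quit command.
            self.handle_quit()
            return False
        return True

proto = LogProtocol("secret")
proto.authenticate("wrong")  # closes the stub connection
```

Calling self.transport.loseConnection() directly at the auth-failure site would behave identically today; the difference is only which future changes the failure path inherits.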
Need some direction on this.
I'm writing a chat room browser-application, however there is a subtle difference.
These are collaboration chats where one person types and the other person can see, live, every keystroke the other person enters as they type.
Also, the chat space is not a single line but a textarea, like the one used here on SO to enter a question.
All keystrokes, including tabs/spaces/enter, should be visible live to the other person. And only one person can type at a time (I guess locking should be trivial).
I haven't written a multi-chatroom application before. A simple client/server where both are communicating over a port is something I've written.
So here are the questions
1.) How is a multi-chatroom application written? Is it also port-based?
2.) Showing the other person's every keystroke as they type is, I guess, possible through AJAX. Is there any other mechanism available?
Note: I'm going to use a Python framework (web2py), but I don't think the framework matters here.
Any suggestions are welcome, thanks!
The Wikipedia entry for Comet (programming) has a pretty good overview of different approaches you can take on the client (assuming that your client's a web browser), and those approaches suggest the proper design for the server (assuming that the server's a web server).
One thing that's not mentioned on that page, but that you're almost certainly going to want to think about, is buffering input on the client. I don't think it's premature optimization to consider that a multi-user application in which every user's keystroke hits the server is going to scale poorly. I'd consider having user keystrokes go into a client-side buffer, and only sending them to the server when the user hasn't typed anything for 500 milliseconds or so.
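The buffering idea sketched above, written in Python for illustration (in the browser this logic would live in JavaScript; the 500 ms threshold and the names are arbitrary):

```python
class KeystrokeBuffer:
    """Collects keystrokes and releases them only after a quiet period."""

    def __init__(self, idle_seconds=0.5):
        self.idle_seconds = idle_seconds
        self.pending = []
        self.last_key_time = None

    def key(self, char, now):
        # 'now' is a timestamp in seconds, passed in to keep the sketch testable;
        # a real client would use its clock directly.
        self.pending.append(char)
        self.last_key_time = now

    def flush_if_idle(self, now):
        """Return the buffered text if the user has paused, else None."""
        if self.pending and now - self.last_key_time >= self.idle_seconds:
            text, self.pending = "".join(self.pending), []
            return text
        return None

buf = KeystrokeBuffer()
buf.key("h", 0.0)
buf.key("i", 0.1)
assert buf.flush_if_idle(0.2) is None   # still typing: nothing sent
assert buf.flush_if_idle(0.7) == "hi"   # 600 ms of quiet: send to server
```

A burst of typing then becomes one request instead of dozens, at the cost of the other user seeing the text arrive in small chunks rather than keystroke by keystroke.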
You absolutely don't want to use ports for this. That's putting application-layer information in the transport layer, and it pushes application-level concerns (the application's going to create a new chat room) into transport-level concerns (a new port needs to be opened on the firewall).
Besides, a port's just a 16-bit field in the packet header. You can do the same thing in the design of your application's messages: put a room ID and a user ID at the start of each message, and have the server sort it all out.
The thing that strikes me as a pain about this is figuring out, when a client requests an update, what should be sent. The naive solution is to retain a buffer for each user in a room, and maintain an index into each other user's buffer as part of the user state; that way, when user A requests an update, the server can send down everything that users B, C, and D have typed since A's last request. This raises all kinds of issues about memory usage and persistence that don't have obvious simple solutions.
The right answers to the problems I've discussed here are going to depend on your requirements. Make sure those requirements are defined in great detail. You don't want to find yourself asking questions like "should I batch together keystrokes?" while you're building this thing.
You could try doing something like IRC, where the current "room" is sent from the client to the server "before" the text (/PRIVMSG #room-name Hello World), delimited by a space. For example, you could send ROOMNAME Sample text from the browser to the server.
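On the server side, that framing is just a split on the first space; a sketch (the room name and message text are made up):

```python
def parse_room_message(raw):
    """Split 'ROOMNAME Sample text' into (room, text)."""
    room, _, text = raw.partition(" ")
    return room, text

print(parse_room_message("lobby Hello World"))  # ('lobby', 'Hello World')
```

The server then uses the room part to decide which connected clients should receive the text part, the same way IRC dispatches on the channel in a PRIVMSG.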
Using AJAX would be the most reasonable option. I've never used web2py, but I'm guessing you could just use JSON to pass the data between the browser and the server, if you wanted to be fancy.