I wrote a server in Python and now I would like to configure the Apache web server to support WebSockets.
My server returns information when a client sends queries to these addresses:
def make_app():
    return tornado.web.Application([
        (r"/playgame", EmptyGame),
        (r"/playgame/", EmptyGame),
        (r"/playgame/(.*)", PlayerGameWebsocket)
    ])
How do I configure the server to support regular user traffic, but also enable WebSockets when the client establishes such a connection?
I use an Apache 2.4 server.
OK, it turned out that the solution is trivial. If anyone is ever looking for an answer: just add a simple proxy rule to the virtual host configuration that forwards to the application listening on localhost:
ProxyPassMatch "/playgame/(.*)" "ws://127.0.0.1:8888/playgame/$1"
ProxyPassReverse "/playgame/(.*)" "ws://127.0.0.1:8888/playgame/$1"
With this syntax we can even pass additional data, e.g. "/playgame/123".
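Note that proxying a ws:// target like this requires mod_proxy and mod_proxy_wstunnel to be enabled in Apache 2.4. For completeness, here is a minimal sketch of how the Tornado application might be started so it listens where the proxy rules above point, i.e. on 127.0.0.1:8888 (the address and port are taken from the ProxyPassMatch target; make_app and the handler classes are the ones from the question):

import tornado.ioloop

if __name__ == "__main__":
    # Bind only to the loopback address; Apache forwards the WebSocket traffic here.
    app = make_app()
    app.listen(8888, address="127.0.0.1")
    tornado.ioloop.IOLoop.current().start()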
We connect from the client without specifying the port:
var adr = "ws://serverip/playgame/" + gameid;
var ws = new WebSocket(adr);
I have uWSGI 2.0.19 on Linux running with the Python plugin. I serve HTTP(S) traffic with different applications, each for a specific record of my managed domain, using this kind of configuration to register them with the front uWSGI servers:
subscribe2 = server=x.x.x.x:4443,key=domain.com,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=x.x.x.x:4443,key=domain.com:443,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=y.y.y.y:4443,key=domain.com,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=y.y.y.y:4443,key=domain.com:443,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
Now when I reach one of the front servers to access a non-existent host, I receive this error (the TCP connection is closed, I assume):
curl: (52) Empty reply from server
I would like to have a default/catch-all key for such cases that returns an HTTP status 404, as I would do in Apache with the _default_ vhost. Is that possible?
To implement this, you need to define a fallback application using the http-subscription-fallback-key option on the front uWSGI server:
http-subscription-fallback-key=default
default is a standard application registered on the front uWSGI server like any other application:
subscribe2 = server=x.x.x.x:4443,key=default
subscribe2 = server=x.x.x.x:4443,key=default:80
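The fallback application itself can be as small as a catch-all 404 responder. A minimal sketch of what such a WSGI app could look like (the file name and how exactly you register it under the default key depend on your setup):

def application(environ, start_response):
    # Answer every request for an unknown host with a plain 404
    body = b'404 Not Found'
    start_response('404 Not Found', [('Content-Type', 'text/plain'),
                                     ('Content-Length', str(len(body)))])
    return [body]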
I am quite new to MQTT and brokers, but I am having an issue with VerneMQ not sending offline messages to clients. Here is my setup. I have a backend written in Python which uses the Eclipse Paho MQTT library's single() method to send messages to a connected client. The client, running in a virtual machine on my development station, is written in Go and uses paho.mqtt.golang to connect to the broker and subscribe.
The call to single() on the backend looks like this:
def send_message(device_id, payload):
    token = get_jwt('my_token').decode()
    mqtt.single(
        f'commands/{device_id}',
        payload=payload,
        qos=2,
        hostname=MESSAGING_HOST,
        port=8080,
        client_id='client_id',
        auth={'username': 'username', 'password': f'Bearer {token}'},
        transport='websockets'
    )
On the client, the session is established with the following options:
func startListenerRun(cmd *cobra.Command, args []string) {
    //mqtt.DEBUG = log.New(os.Stdout, "", 0)
    mqtt.ERROR = log.New(os.Stdout, "", 0)
    opts := mqtt.NewClientOptions().AddBroker(utils.GetMessagingHost()).SetClientID(utils.GetClientId())
    opts.SetKeepAlive(20 * time.Second)
    opts.SetDefaultPublishHandler(f)
    opts.SetPingTimeout(5 * time.Second)
    opts.SetCredentialsProvider(credentialsProvider)
    opts.SetConnectRetry(false)
    opts.SetAutoReconnect(true)
    opts.WillQos = 2
    opts.SetCleanSession(false)
I am not showing all the code, but hopefully enough to illustrate how the session is being set up.
I am running VerneMQ as a docker container. We are using the following environment variables to change configuration defaults in the Dockerfile:
ENV DOCKER_VERNEMQ_PLUGINS.vmq_diversity on
ENV DOCKER_VERNEMQ_VMQ_DIVERSITY.myscript1.file /etc/vernemq/authentication.lua
ENV DOCKER_VERNEMQ_VMQ_ACL.acl_file /etc/vernemq/vmq.acl
ENV DOCKER_VERNEMQ_PLUGINS.vmq_acl on
ENV DOCKER_VERNEMQ_RETRY_INTERVAL=3000
As long as the client has an active connection to the broker, the server's published messages arrive seamlessly. However, if I manually close the client's connection to the broker, and then publish a message on the backend to that client, when the client's connection reopens, the message is not resent by the broker. As I said, I am new to MQTT, so I may need to configure additional options, but so far I've yet to determine which. Can anyone shed any light on what might be happening on my setup that would cause offline messages to not be sent? Thanks for any information.
As thrashed out in the comments
Messages will only be queued for an offline client that has subscribed at greater than QOS 0
More details can be found here
You need to set the QoS to 1 or 2, depending on your requirement, and you can also use the retain flag, which is quite useful: a retained message makes sure the last message on the topic is delivered irrespective of any failure, so you can always know the last status of the device. Check this: http://www.steves-internet-guide.com/mqtt-retained-messages-example/
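A minimal Python sketch of the subscriber side (not the asker's Go client) showing the two settings this answer relies on with paho-mqtt 1.x: a persistent session (clean_session=False) plus a subscription at QoS 1, so the broker queues messages while the client is offline. The host, port, topic and client id are placeholders:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe at QoS 1; QoS 0 subscriptions are never queued for offline clients
    client.subscribe('commands/my-device', qos=1)

client = mqtt.Client(client_id='my-device', clean_session=False)
client.on_connect = on_connect
client.connect('broker.example.com', 1883)
client.loop_forever()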
I'm trying to connect a JavaScript client to a Python websocket server through an Apache2 proxy.
The client is dead simple:
const socket = io({
    transports: ['websocket']
});
I have a NodeJS websocket server and a working Apache2 reverse proxy setup.
Now I want to replace the NodeJS server with a Python server - but none of the example implementations from socket.io work. With each of them, my client reports an "error 400" when setting up the websocket connection.
The Python server examples come from here:
https://github.com/miguelgrinberg/python-socketio/tree/master/examples/server
Error 400 stands for "Bad Request" - but I know that my requests are fine because my NodeJS server understands them.
When not running behind a proxy, all the Python examples work fine.
What could be the problem?
I found the solution - all the Python socket.io server examples that I referred to are not configured to run behind a reverse proxy. The reason is that the socket.io server manages a list of allowed request origins, and the automatic creation of this list fails in the reverse-proxy situation.
This function creates the automatic list of allowed origins (engineio/asyncio_server.py):
def _cors_allowed_origins(self, environ):
    default_origins = []
    if 'wsgi.url_scheme' in environ and 'HTTP_HOST' in environ:
        default_origins.append('{scheme}://{host}'.format(
            scheme=environ['wsgi.url_scheme'], host=environ['HTTP_HOST']))
        if 'HTTP_X_FORWARDED_HOST' in environ:
            scheme = environ.get(
                'HTTP_X_FORWARDED_PROTO',
                environ['wsgi.url_scheme']).split(',')[0].strip()
            default_origins.append('{scheme}://{host}'.format(
                scheme=scheme, host=environ['HTTP_X_FORWARDED_HOST'].split(
                    ',')[0].strip()))
As you can see, it only adds URLs with {scheme} as the protocol. When behind a reverse proxy, {scheme} will always be "http", so if the initial request was made over HTTPS, its origin will not be in the list of allowed origins.
The solution to this problem is very simple: when creating the socket.io server, you either have to tell it to allow all origins or specify your origin explicitly:
import socketio
sio = socketio.AsyncServer(cors_allowed_origins="*") # allow all
# or
sio = socketio.AsyncServer(cors_allowed_origins="https://example.com") # allow specific
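To round the example off, here is a minimal sketch of wiring that server into an aiohttp application behind the proxy (host, port and origin are placeholders; this only mirrors the structure of the linked examples rather than copying any of them):

import socketio
from aiohttp import web

sio = socketio.AsyncServer(async_mode='aiohttp',
                           cors_allowed_origins="https://example.com")
app = web.Application()
sio.attach(app)  # register the socket.io/engine.io routes on the aiohttp app

if __name__ == '__main__':
    # Listen on localhost only; Apache proxies the public traffic here
    web.run_app(app, host='127.0.0.1', port=8080)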
I have managed to install Flask and run the hello-world script:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
I was impressed by how easy it was. Then I wanted to make my web server visible externally. As recommended, I passed host='0.0.0.0' as the argument of the run function. Then I found my IP address (via Google) and put it in the address bar of my web browser (while the hello-world script was running).
As a result I got: "A username and password are being requested" and a dialogue box where I need to put a user name and password. I am not sure but I think it comes from my wireless network. Is there a way to change this behaviour?
How are you trying to run your application? If you run Flask with app.run(), Flask creates its own WSGI server on your host (by default 127.0.0.1) and port (by default 5000); ports below 1024 need extra permissions. If you run Flask behind nginx + WSGI or similar, the server determines the host and port.
Right now it looks like you are requesting the application on a port that is handled by your server, like nginx or Apache. Try opening the Flask application at http://your-server-host-or-ip:5000 with the default port, or set the port explicitly, e.g. app.run('0.0.0.0', 8080), and open http://your-server-host-or-ip:8080.
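For example, a minimal sketch of the explicit form (port 8080 is just an example):

if __name__ == '__main__':
    # Listen on all interfaces; ports >= 1024 need no special permissions
    app.run(host='0.0.0.0', port=8080)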
By the way, you can always get your IP address using command-line tools, e.g. ifconfig on Unix-like systems or ipconfig /all on Windows.
To elaborate a little bit on what @tbicr said, that password prompt indicates that you're trying to connect to your IP on port 80, which is most likely hosting an administration page for your router/modem. You want to connect to your IP on port 5000, the default port for Flask apps run with app.run().
I have a Linux box (Ubuntu 10.10 server edition) in EC2. I have written a web service using the CherryPy framework. Let's say this is the code that I have written:
import sys
sys.path.insert(0, 'cherrypy.zip')
import cherrypy
from cherrypy import expose

class Service:
    @expose
    def index(self):
        return 'Hello World'

cherrypy.quickstart(Service())
I have copied this file and cherrypy.zip to /var/www on my EC2 instance. [I should mention that I created the www directory manually, as it wasn't there.] Then I ran
python webservice.py
and got the message
[01/Apr/2011:13:50:04] ENGINE Bus STARTED
However, when I try to open
(I have masked my public IP)
ec2-1**-2**-1**-**.ap-southeast-1.compute.amazonaws.com/
in my browser, I get "connection failed". Can anyone tell me where I have gone wrong, or what I should do?
EDIT:
Okay, here is something interesting that I found. When I do
python webservice.py
I see
ENGINE Serving on 127.0.0.1:8080
This means the web service will run only for the local machine. How do I set the service to 0.0.0.0 (that is, to serve any IP address)?
Hope this detail is sufficient for understanding the problem I'm facing. Help, please :)
EDIT 2:
Oh well, found the solution :-) Have to add this before cherrypy.quickstart() call
cherrypy.config.update({'server.socket_host': '0.0.0.0',
                        'server.socket_port': 80,
                        })
The cherrypy.quickstart function takes a config argument, which can be a dict, an open configuration file, or a path to a configuration file. I favor using a path to a configuration file because that minimizes the hardcoding of settings that you might prefer to control from a startup script.
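For instance, a minimal sketch of that approach (the file name server.conf and its contents are just an example):

# server.conf would contain something like:
#   [global]
#   server.socket_host = '0.0.0.0'
#   server.socket_port = 80
cherrypy.quickstart(Service(), config='server.conf')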
In addition, since you control the server, you could configure a reverse proxy to route requests to the CherryPy application. This gives you quite a bit of flexibility. For example, if you wanted to, you could run multiple instances of the CherryPy application in parallel, each configured to listen on a different port.
Here's a sample configuration file for nginx, instructing it to forward requests to a single instance of your CherryPy application:
server {
    server_name your.hostname.com;
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}
And here's a sample configuration file instructing nginx to load-balance across two instances of your application, which are listening on the loopback address at ports 33334 and 33335:
upstream myapps {
    server 127.0.0.1:33334;
    server 127.0.0.1:33335;
}

server {
    server_name your.hostname.com;
    location / {
        proxy_pass http://myapps;
    }
}
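To match that upstream block, each instance of the application just needs to be started on its own loopback port. A minimal sketch of a hypothetical launcher (the command-line argument handling and the ports are examples; Service is the class from the question):

import sys

import cherrypy

# Run as `python webservice.py 33334` and `python webservice.py 33335`
# to start the two instances the nginx upstream block expects.
port = int(sys.argv[1])
cherrypy.config.update({'server.socket_host': '127.0.0.1',
                        'server.socket_port': port})
cherrypy.quickstart(Service())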