I have a web application running Django and a separate websocket server. Both use a single Django model. When Django makes changes to the model, I want it to notify the websocket server about those changes.
Obviously, simply connecting to the websocket server and sending one message per change looks bad: it increases server load by connecting and disconnecting a client on every user form submit, and it goes against the websocket concept in general.
I've heard about solutions using an AMQP server for similar purposes. The question is: is that a good idea, or are there better solutions in my case?
Have a look at https://github.com/jrief/django-websocket-redis
Documentation: http://django-websocket-redis.readthedocs.org/en/latest/
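If you'd rather see the underlying pattern than adopt the whole library, here is a minimal sketch of the Redis pub/sub approach it builds on, assuming the plain redis-py client; the model, channel name, and payload shape are made up for illustration:

    # Django side: publish a JSON message whenever the shared model is saved.
    import json

    import redis
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    from myapp.models import MyModel  # hypothetical shared model

    redis_client = redis.Redis()

    @receiver(post_save, sender=MyModel)
    def notify_websocket_server(sender, instance, created, **kwargs):
        # The websocket server subscribes to this channel and pushes
        # each message out to its connected clients.
        redis_client.publish("model-updates", json.dumps({
            "id": instance.pk,
            "created": created,
        }))

On the websocket server side, a pubsub() subscription on the same channel receives these messages, so Django never has to open a websocket connection itself.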
I’ve got a standard client-server set-up with ReScript (ReasonML) on the front-end and a Python server on the back-end.
The user is running a separate process on localhost:2000 that I’m connecting to from the browser (UI). I can send requests to their server and receive responses.
Now I need to issue those requests from my back-end server, but cannot do so directly. I’m assuming I need some way of doing it through the browser, which can talk to localhost on the user’s computer.
What are some conceptual ways to implement this (ideally with GraphQL)? Do I need to have a subscription or web sockets or something else?
Are there any specific libraries you can recommend for this (perhaps as examples from other programming languages)?
I think the easiest solution with GraphQL would indeed be to use Subscriptions. The most common ReScript GraphQL clients already have such a feature; at least ReasonRelay, Reason Apollo Hooks, and Reason-URQL have it.
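On the Python back-end, a subscription resolver is essentially an async generator the server pushes values through. Here is a minimal sketch using Strawberry (just one possible Python GraphQL library; the question doesn't say which one is in use, and the event stream below is a placeholder for whatever commands your back-end wants the browser to forward to localhost:2000):

    import asyncio
    from typing import AsyncGenerator

    import strawberry

    @strawberry.type
    class Query:
        # A schema must define at least one query field.
        @strawberry.field
        def ping(self) -> str:
            return "pong"

    @strawberry.type
    class Subscription:
        @strawberry.subscription
        async def backend_commands(self) -> AsyncGenerator[str, None]:
            # Placeholder event source: in practice, yield the requests
            # your back-end wants relayed to the user's local process.
            while True:
                yield "poll-device"
                await asyncio.sleep(1)

    schema = strawberry.Schema(query=Query, subscription=Subscription)

The browser subscribes, forwards each command to localhost:2000, and returns the result to the back-end through an ordinary mutation.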
I'm currently trying to serve multiple bots (running different models) and to allow users to interact with them on a website. I've had a look at the following: http://www.rasa.com/docs/nlu/http/, http://www.rasa.com/docs/core/http/ and http://www.rasa.com/docs/nlu/python/, but I'm still having trouble figuring out how it can be done.
Some of the solutions I've considered:
Serve each bot on an HTTP server and have my website interact with the Rasa HTTP server
Create the website with the Django framework (or a REST API), and run Rasa Core and NLU on the backend.
What would be the best way to go about doing this? And, could anyone please briefly explain how this can be done (with multiple bot models and instances running)?
Any help would be greatly appreciated!
For anyone else searching for an answer, I ended up using Flask as the server, along with Flask-SocketIO for real-time communication. The server exposes an API that lets clients communicate with it via SocketIO; it determines which bot to interact with, gets the response, and sends it back to the client.
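A minimal sketch of that setup, assuming hypothetical bot objects with a respond() method (the actual Rasa agent loading is omitted):

    from flask import Flask
    from flask_socketio import SocketIO, emit

    app = Flask(__name__)
    socketio = SocketIO(app)

    # Hypothetical registry mapping bot names to loaded model objects,
    # e.g. {"sales": sales_agent, "support": support_agent}.
    bots = {}

    @socketio.on("user_message")
    def handle_user_message(data):
        # The client sends {"bot": "<name>", "text": "<message>"}.
        bot = bots.get(data.get("bot"))
        if bot is None:
            emit("bot_response", {"error": "unknown bot"})
            return
        reply = bot.respond(data["text"])  # assumed bot interface
        emit("bot_response", {"text": reply})

    if __name__ == "__main__":
        socketio.run(app, port=5000)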
I built a binary classification machine learning model in Python.
It works on my laptop (e.g. as a command-line tool). Now I want to deploy it to production on a separate server in my company. It has to take inputs from another server (a C# application), make some calculations, and return outputs back to it.
My question is: what are the best practices for doing such a thing in production? As far as I know, it can be done through a TCP/IP connection.
I am new to this field and I don't know the terms used here.
So can anybody guide me?
Thanks.
I would say it depends on your infrastructure and on how the other application (C#) can communicate.
The easiest way, in my opinion, would be through a REST API (HTTP requests). There are tools in many languages both to create REST endpoints easily and to call them.
For example, in Python, you can request the content of a URL like this:
What is the quickest way to HTTP GET in Python?
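For instance, using the requests library (the URL here is a placeholder; the standard library's urllib works too):

    import requests

    # GET a (hypothetical) prediction endpoint and decode the JSON body.
    response = requests.get("http://localhost:8000/predict", params={"x": 1})
    print(response.status_code, response.json())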
But it depends on what you have on the C# side. Can you update the C# code?
Here is a range of solutions:
REST API: you need to expose REST endpoints on the communicating "service" (see the sketch after this list).
in C#: https://learn.microsoft.com/en-us/aspnet/web-api/overview/older-versions/build-restful-apis-with-aspnet-web-api
in Python, I would recommend the Django framework if you need to create a full server (but if the Python side only sends requests and doesn't act as a server, you may not need it)
A message queue like RabbitMQ or ZeroMQ, though a broker such as RabbitMQ requires running an external service to manage queues and messages
A raw TCP/IP socket, as you suggested, but then you have to manage those connections yourself
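Here is what the REST option might look like on the Python side: a minimal sketch using Flask and a scikit-learn-style model loaded with joblib (the file name, route, and payload shape are assumptions):

    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)
    model = joblib.load("model.pkl")  # hypothetical path to the trained classifier

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
        features = request.get_json()["features"]
        prediction = model.predict(features).tolist()
        return jsonify({"prediction": prediction})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)

The C# application can then call this endpoint with HttpClient and deserialize the JSON response.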
I have a Django project in which I expose a few API endpoints (API endpoint = answers GET/POST and returns a JSON response; correct me if I'm wrong in my definition). Those endpoints are used by me on the front end, for example to update counts, get updated content, or a myriad of other things. I handle the presentation logic on the server side, in templates, and in some cases send a template rendered to a string to the client.
So here are the questions I'm trying to answer:
Do I need to have some kind of authentication between the clients and the server?
Is Django's cross-origin protection enough?
Where, in this picture, do packages like django-oauth-toolkit fit? And django-rest-framework?
If I don't add any authentication between clients and server, am I leaving my server open to attacks?
Furthermore, what about server-to-server connections, where both servers are under my control?
I would strongly recommend using django-tastypie for server-to-client communication.
I have used it in numerous applications, both server to server and server to client.
It allows you to apply Django's security as well as some more logic around the authorization process.
It also offers, out of the box:
throttling
serialization in JSON, XML, and other formats
authentication (basic, API key, custom, and others)
validation
authorization
pagination
caching
So, as an overall overview, I would suggest building on such a framework: it will make your internal API more interoperable for future extensions, and more secure.
To now answer your question specifically: I would never expose any server API without at least some basic authentication/authorization.
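As a rough illustration, a minimal tastypie resource protected by API-key authentication might look like this (Entry is a placeholder for one of your own models):

    from tastypie.authentication import ApiKeyAuthentication
    from tastypie.authorization import DjangoAuthorization
    from tastypie.resources import ModelResource

    from myapp.models import Entry  # hypothetical app/model

    class EntryResource(ModelResource):
        class Meta:
            queryset = Entry.objects.all()
            resource_name = "entry"
            # Clients must send a valid username/API key pair;
            # authorization then applies Django's model permissions.
            authentication = ApiKeyAuthentication()
            authorization = DjangoAuthorization()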
Hopefully this answers your questions on how you can address all of the worries above with a framework.
The django-rest-framework you ask about is also really advanced and easy to use, but I prefer tastypie for the reasons explained above.
I hope I helped a bit!
I use https://github.com/mrjoes/sockjs-tornado for a Django app. I can send messages from the JavaScript console very easily. But I want to create a signal in Django and send a JSON string once the signal fires.
Could anyone give me a way to send a certain message from Python to the sockjs-tornado socket server?
There are a few options for handling it:
Create a simple REST API in your Tornado server and post your updates from Django using this API;
Use Redis. Tornado can subscribe to an updates channel, and Django can publish to that channel when something happens;
Use ZeroMQ (AMQP, etc.) to send updates from Django to the Tornado backend (a variation of options 1 and 2).
In most cases it is either the first or the second option. Some people prefer the third option, though. A sketch of the first option follows.
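Here the Tornado process exposes a plain HTTP endpoint that Django POSTs to, and forwards the body to every connected SockJS client (the port and URLs are made up):

    import tornado.ioloop
    import tornado.web
    from sockjs.tornado import SockJSConnection, SockJSRouter

    class BroadcastConnection(SockJSConnection):
        participants = set()

        def on_open(self, info):
            self.participants.add(self)

        def on_close(self):
            self.participants.remove(self)

    class NotifyHandler(tornado.web.RequestHandler):
        def post(self):
            # Forward the posted JSON string to every connected client.
            message = self.request.body.decode("utf-8")
            for client in BroadcastConnection.participants:
                client.send(message)

    router = SockJSRouter(BroadcastConnection, "/sockjs")
    app = tornado.web.Application([(r"/notify", NotifyHandler)] + router.urls)

    if __name__ == "__main__":
        app.listen(8888)
        tornado.ioloop.IOLoop.instance().start()

Your Django signal handler then only needs requests.post("http://localhost:8888/notify", data=json.dumps(payload)).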
I've written djazator, a simple and easy-to-use Django plugin. It uses ZeroMQ to deliver messages from Django to sockjs-tornado. Additionally, it can send messages to a subset of authenticated Django users.
I just put this up: https://github.com/amagee/sockjs-client, for talking directly to a SockJS server from Python (using XHR streaming).