I'm new to Python and Django and I'm trying to implement websockets in Django.
What I'm doing is following the steps described in the websockets documentation.
The problem is that the server-side command described there has to be run in a console. When I run it from the console it works, but I want to run it inside a Django view, asynchronously, on a GET request. When I try that, the server raises an exception like this: RuntimeError: There is no current event loop in thread 'Thread-2'.
To be more specific, I want to use this technology to show real-time logs. For example, an Oracle procedure performs an insert and the server pushes it to a page over websockets.
Am I on the wrong path for implementing this, or can anyone suggest a right/better solution?
I'm on Django 1.9 (tried on both Django's development server and uWSGI behind Nginx), Python 3.5.2, on Red Hat Enterprise Server release 6.7.
UPDATE
This is the exact code from the above URL, which I put in the view.
import asyncio
import datetime
import random

import websockets
from django.shortcuts import render

def ws(request):
    async def time(websocket, path):
        while True:
            now = datetime.datetime.utcnow().isoformat() + 'Z'
            await websocket.send(now)
            await asyncio.sleep(random.random() * 3)

    start_server = websockets.serve(time, '192.168.4.177', 9876)
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()  # blocks forever, so render() is never reached
    return render(request, "ws.html")
When the URL is handled by this view, the above-mentioned error occurs.
My ws.html is an exact copy of the example from the above-mentioned websockets documentation.
Django's request/response cycle is strictly synchronous. What you are trying to do is not possible in a normal Django view.
You might be interested in Django Channels, a project that aims to remove this limitation.
You can't really do this. I can't say exactly why you're getting that error, but a GET request to a Django view needs to return a response after some finite time rather than run forever; otherwise the browser (or intermediaries like Nginx) will treat the non-response as a timeout. If you want to run a websocket server, do it in a separate process outside of Django's.
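For example, you could take the server loop from the websockets docs and run it as its own script in a separate process alongside Django (a minimal sketch reusing the question's host and port; the filename is arbitrary):

# ws_server.py -- run separately, e.g. `python ws_server.py`, not inside a Django view
import asyncio
import datetime
import random

import websockets

async def time(websocket, path):
    while True:
        now = datetime.datetime.utcnow().isoformat() + 'Z'
        await websocket.send(now)
        await asyncio.sleep(random.random() * 3)

start_server = websockets.serve(time, '192.168.4.177', 9876)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()  # blocking here is fine in a dedicated process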
There is much ongoing work to add async functionality and websockets to Django, in the form of Channels -- I think the docs at http://channels.readthedocs.io/en/latest/ describe the latest version of the code you can already use; hopefully it will be part of Django 1.10. The current version is usable as a Django app that allows you to serve websockets from Django, but it's not as easy as your attempt above.
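To give a rough idea of the shape of it, a consumer from the Channels 1.x era looks something like the sketch below (based on my reading of the Channels 1.x docs; check the documentation linked above for the current API):

# consumers.py
def ws_message(message):
    # echo received websocket text back to the client
    message.reply_channel.send({'text': message.content['text']})

# routing.py
from channels.routing import route
from .consumers import ws_message

channel_routing = [
    route('websocket.receive', ws_message),
]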
Related
I have an existing Django app. I would like to add a system that sends data from my Django application to another Python application hosted on another server, so that the Python application receives data from the Django app, ideally in JSON format.
So, for example, I would need to create a view that every few seconds sends the data from a DB table to this application, or that sends the data to this external application when a form is submitted.
How can I do this? Is there an example for this particular use case? I don't know what tools I'd need to build this system; I only know that I would need Celery to perform asynchronous tasks, but nothing else. Should I use webhooks, maybe? Or Django Channels?
Edit: adding some more context:
I have my Django client. Then I have one or two Python applications running on another server. On my Django client I have some forms. Once a form is submitted, the data is saved in the DB, but I also want this data to be sent instantly to my Python applications. The Python applications should receive the data from Django in JSON format and perform some tasks according to the values submitted by users. Then the application should send a response back to Django.
Let's go! I'll call your Django app here "DjangoApp" and your Python apps, in Flask or another framework, "OtherApp".
First, as you predicted, you will need a framework that is capable of performing background tasks. The new **Django 3.0 allows this, but I haven't used it yet ... I will pass on to you something that I am using and that is fully functional with Django 2.2 and Python 3.8**.
On your DjangoApp server you will need to structure the communication well with Celery; let's leave the tasks to it. You can read the Celery docs and this post; it's a perfectly fine way to build this architecture.
Regardless of how your form or Django app looks, when you want it to trigger a task in Celery, it is basically a function call that transmits the data, but in the background.
from .tasks import send_data
...
form.save()
# Create a function within the form to get the data the way you want it
# or do it the way you want.
values = form.new_function_serializedata()
send_data.delay(values)  # call the Celery task asynchronously
...
Also read the Celery docs on calling tasks.
In all your other applications you will need a POST route to receive and deserialize this data; do this with a lightweight framework like Pyramid.
This way, every time a form is submitted, this data will be sent to the other server by the send_data function.
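A minimal sketch of what the send_data task itself might look like on the DjangoApp side (the OtherApp endpoint URL is an assumption for illustration):

# tasks.py
import requests
from celery import shared_task

@shared_task
def send_data(values):
    # POST the serialized form data to OtherApp as JSON
    response = requests.post('http://otherapp.example.com/receive', json=values)
    response.raise_for_status()
    return response.json()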
In my experience, though not knowing much about your problem, I would use a similar architecture but with Celery Beat.
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'send_data': {
        'task': 'your_app.tasks.send_data',
        'schedule': crontab(),  # CONFIGURE YOUR CRON
    },
}
It's not just a matter of adding the code above, but it's along those lines.
Within your models I would create a boolean field for sent. And every 2 seconds, 10 seconds... whatever interval I wished, I would filter all objects with sent=False and pass all of those objects to the send_data task, as sketched below.
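A minimal sketch of such a periodic task (the Submission model, its as_dict() serializer, and the OtherApp URL are hypothetical placeholders):

# tasks.py
import requests
from celery import shared_task

from .models import Submission  # hypothetical model with a boolean `sent` field

@shared_task
def send_data():
    # push everything that hasn't been sent to OtherApp yet
    for obj in Submission.objects.filter(sent=False):
        # as_dict() is a hypothetical serializer method on the model
        requests.post('http://otherapp.example.com/receive', json=obj.as_dict())
        obj.sent = True
        obj.save(update_fields=['sent'])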
I don't know if I've confused you; that's a lot to explain. But I hope I can help and answer your questions.
import requests
from django import http

def view(request):
    url = 'python.app.com'  # replace with the other python app's url or ip
    request_data = {'key': 'value'}  # replace with data to be sent to the other app
    response = requests.post(url, json=request_data)
    response_data = response.json()  # data returned by the other app
    return http.JsonResponse(response_data)
This is an example of a function-based view that uses the requests library to hit an external service. The requests library takes care of encoding/decoding your data to/from JSON.
Yeah, a webhook would be one of the options, but there are other options available too.
-> You can use REST APIs to send data from one app to another, but in that case you need to think about synchronization. That depends on your requirements: if you don't need the data delivered synchronously, then you may use RabbitMQ or other async tools. Just push your REST API request into RabbitMQ and RabbitMQ will handle it.
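For instance, pushing the payload into RabbitMQ with the pika client could look roughly like this (a sketch; the queue name and broker host are assumptions, and a consumer on the other server would read from the same queue):

import json
import pika

# connect to a local RabbitMQ broker and declare a queue for outgoing data
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='outgoing_data', durable=True)

# publish the JSON payload; a consumer on the other server picks it up later
channel.basic_publish(exchange='',
                      routing_key='outgoing_data',
                      body=json.dumps({'key': 'value'}))
connection.close()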
If there is someone out there who has already worked with Solr and a Python library to index/query Solr, would you be able to try and answer the following question?
I am using the mysolr Python library, but there are others out there (like pysolr) and I don't think the problem is related to the library itself.
I have a default multicore Solr setup, so no authentication is normally required. I don't need it to access the admin page at http://localhost:8080/solr/testcore/admin/ either.
from mysolr import Solr
solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print("response")
print(response)
This code used to work, but now I get a 401 reply from Solr ... just like that; no changes have been made to the Python virtual env containing mysolr or to the Solr setup. Still... something must have changed somewhere, but I'm out of clues.
What could be the causes of a Solr 401 response?
Additional info: this script and more advanced scripts do work on another PC, just not on the one I am working on. Also, appending "/select?q=*:*" to the URL in the browser does return the correct results. So Solr is set up correctly; it probably has something to do with my computer itself. Could Windows settings (of any kind) have an impact on how Solr responds to requests from Python? The Python env itself has been reinstalled several times to no avail.
Thanks in advance!
The problem was: a proxy.
If this exact situation ever occurs to someone and you are behind a proxy, check whether your HTTP_PROXY and HTTPS_PROXY environment variables are set. If they are... this might cause the Python session to try using the proxy when it shouldn't (connecting to localhost via the proxy).
It didn't cause any trouble for months, but out of the blue it did, so whether you encounter this or not might depend on how your IT department set up your proxy or made some other changes... somewhere.
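A quick way to check for this from Python (a small sketch; requests honors these variables, and NO_PROXY can exempt localhost):

import os

# see whether a proxy is configured in the environment
for var in ('HTTP_PROXY', 'HTTPS_PROXY', 'http_proxy', 'https_proxy'):
    print(var, '=', os.environ.get(var))

# exempt localhost from proxying for this process before creating connections
os.environ['NO_PROXY'] = 'localhost,127.0.0.1'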
thank you everyone!
I'm a novice developing a simple multi-user game (think Minesweeper) using Flask for the API backend and AngularJS for the frontend. I've followed tutorials to structure the Angular/Flask app and I've coded up a RESTful API using Flask-Restless.
Now I'd like to push events to all the clients when game data is changed in the database (as it is by a POST to one of the Restless endpoints). I was looking at using the SQLAlchemy event.listen API to call the Flask-SocketIO emit function to broadcast the data to clients. Is this an appropriate way to accomplish what I'm trying to do? Are there drawbacks to this approach?
@CESCO's reply works great if all of your computation is being done in the same process. You could also use this syntax (see the full source code here):
@sa.models_committed.connect_via(app)
def on_models_committed(sender, changes):
    for obj, change in changes:
        print('SQLALCHEMY - %s %s' % (change, obj))
Read on if you're interested in subscribing to all updates to a database...
That won't work if your database is being updated from another process, however:
models_committed only works in the same process where the commit comes from (it's not a DB-level notification, it's SQLAlchemy after committing to the DB)
https://github.com/mitsuhiko/flask-sqlalchemy/issues/369#issuecomment-170272020
I wrote a little demo app showing how to use any of Redis, ZeroMQ or socketIO_client to communicate real-time updates to your server. It might be helpful for anyone else trying to deal with outside database access.
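As a taste of the Redis option, the process that writes to the database publishes a message and the server process subscribes to it (a minimal sketch; the channel name is an assumption):

import redis

# in the process that updates the database:
r = redis.Redis()
r.publish('db-updates', 'user 42 changed')

# in the server process, e.g. a background thread or greenlet:
p = redis.Redis().pubsub()
p.subscribe('db-updates')
for message in p.listen():
    if message['type'] == 'message':
        print('got update:', message['data'])  # e.g. call socketio.emit(...) here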
Also, you could look into subscribing to postgres events:
https://blog.andyet.com/2015/04/06/postgres-pubsub-with-json/
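Consuming Postgres LISTEN/NOTIFY from Python looks roughly like this (a sketch using psycopg2; the channel name and connection string are assumptions, and something in the DB, e.g. a trigger, must issue the NOTIFY):

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect('dbname=mydb')  # placeholder DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

curs = conn.cursor()
curs.execute('LISTEN db_updates;')  # a trigger in the DB would NOTIFY this channel

while True:
    # block until the connection has something to read, with a 5s timeout
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print('NOTIFY:', notify.channel, notify.payload)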
That's the barebones version of the code you want. I would start testing from there. You will have to figure out what you mean by "change", so you can attach the right SQLAlchemy event to the type of change you are making.
Listen for the after_insert event on the User model:
from sqlalchemy import event
from app import socketio
from app.models import User  # wherever your User model lives

def after_insert_listener(mapper, connection, target):
    # `target` is the User instance that was just inserted
    socketio.emit('namespace response', {'data': target.id_user},
                  namespace='/namespace')
    print(target.id_user)

event.listen(User, 'after_insert', after_insert_listener)
See the Flask-SocketIO documentation for the emit API.
First time here, it says to be specific... So here goes.
I'm doing a small project to connect Salesforce to my Raspberry Pi. The goal is to make a light (think a beacon or siren-like light) flash when a high-priority case comes in from a client in Salesforce. At the moment, clients usually send an email to a certain address, and this creates a case. It goes to the 'Unassigned Queue' and emails the team that this case is there waiting to be assigned.
Salesforce uses REST, so I need to be able to get the Pi to accept JSON so it can easily understand what Salesforce wants it to do.
Currently, I guess I have won half of the battle. I have a web server (Lighttpd) running on the Pi, which hosts an index page and a Python script. I am also using a Python wrapper, which allows me to easily run a command from a Telldus program I have installed. This program controls a USB RF Transmitter that I have connected, it is paired to a RF Socket, which is connected to the mains power supply with a light connected to it.
So the Python script is called power.py, and it can be controlled with URL variables, so if I go to power.py?device=1&power=on&time=10&password=hunter2 that turns on device 1 for 10 seconds. I also created a POST form on the index page, which just POSTs to the Python script and runs it in the same way as using the URL variables. That all works great.
So all I need to do, is connect it to Salesforce. I would like to use REST and JSON, so that if I ever move away from Salesforce to another CRM program, it will easily be able to adapt and receive instructions from new places.
I have posted the Python script I am using here:
https://github.com/7ewis/RemotePiControl/blob/master/power.py
The Pi isn't currently allowed out of the local network, so I will need to develop a way to send JSON commands, and then receive and convert them to work with the correct variables etc. I'm not a programmer; I've just had exposure to languages from hacking things around and exploring, hence why I need some guidance with this.
I have never used REST or JSON before, so what would I need to do to achieve this?
Seems like adding Flask (http://flask.pocoo.org) to your Raspberry Pi web server would be a good move. It allows server-side Python to be run in response to jQuery AJAX (and regular) requests. Check out a couple of examples here:
http://flask.pocoo.org/docs/patterns/jquery/
and this stack overflow question: how can I use data posted from ajax in flask?
Flask is pretty straightforward to get up and running, and is happy working with a number of servers, including Lighttpd. Writing RESTful flask is also a perfectly reasonable proposition, see: http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask
Additionally, lots of people have used flask on the raspberry pi already-- so that could help get you up and running smoothly: http://mattrichardson.com/Raspberry-Pi-Flask/
Good luck!
Firstly, don't use a Python script that prints results directly to CGI. You will be forever debugging it.
Use a lightweight framework like Flask. You can do something as simple as:
from flask import Flask, request

application = Flask(__name__)

@application.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        # use Flask's built-in json decoder
        the_data = request.get_json()
        # then do something with the data
        return "This was a POST request, how interesting..."
    else:
        # request was GET rather than POST, so do something else
        return "Hello World!"
See how to configure Flask to run with Lighttpd here: http://flask.pocoo.org/docs/deploying/fastcgi/
If you want to test this, you can either write another Python script to send JSON data to your server (I recommend looking at the Python Requests library for this: http://www.python-requests.org/en/latest/), or do it manually using an HTTP request builder, such as HTTPRequester for Firefox (https://addons.mozilla.org/en-US/firefox/addon/httprequester/).
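Such a test script could be as small as this (a sketch; the address and payload are placeholders for your Pi's URL and whatever fields your Flask route expects):

import requests

# send a JSON command to the Flask app on the Pi
payload = {'device': 1, 'power': 'on', 'time': 10}
response = requests.post('http://192.168.0.10/', json=payload)  # placeholder address
print(response.status_code, response.text)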
I'm working on a DB design tool (Python, gevent-socketio). In this tool multiple users can discuss one DB model, receiving changes in real time. To support this feature, I'm using socket.io. I'd like to be able to easily scale out the number of servers that handle socket.io connections. The simplest way to do it is to set up nginx to choose a server based on the model ID.
I'd like a modulo approach, where the model ID is taken modulo the number of servers. So if I have 3 nodes, model 1 will be handled on the first, 2 on the second, 3 on the third, 4 on the first again, etc.
My request for model loading looks like /models/<model_id>, so no problem there - the argument can be parsed to find the server to handle it. But after the model page is loaded, JS tries to establish a connection:
var socket = io.connect('/models', {
    'reconnection limit': 4000
});
It accesses the default endpoint, so the server receives requests like the following:
http://example.com/socket.io/1/xhr-polling/111111?=1111111
To handle it, I create the application this way:
SocketIOServer((app.config['HOST'], app.config['PORT']), app, resource='socket.io', transports=transports).serve_forever()
and then
@bp.route('/<path:remaining>')
def socketio(remaining):
    app = current_app._get_current_object()
    try:
        # Hack: pass app instead of request to make it available in the namespace.
        socketio_manage(request.environ, {'/models': ModelsNamespace}, app)
    except Exception:
        app.logger.error("Exception while handling socket.io connection", exc_info=True)
    return Response()
I'd like to change it to:
http://example.com/socket.io/<model_id>/1/xhr-polling/111111?=1111111
so that I can choose the right server in nginx. How can I do that?
UPDATE
I'd also like to check user permissions when a client tries to establish a connection. I'd like to do it in the socketio(remaining) method, but, again, I need to know which model the user is trying to access.
UPDATE 2
I implemented a permission validator, taking the model_id from HTTP_REFERER. It seems that's the only part of the request that contains an identifier of the model (example value: http://example.com/models/1/).
The first idea is to tell the client side which servers are available at the current time.
Furthermore, you can generate the server list for the client side by priority; just put the servers into a JavaScript-generated array in order.
This approach assumes that any of your servers can answer for any model; you can control server load by changing the server ordering in the list generated for new clients.
I think this is the more flexible way. But if you want, you can parse the query string in nginx and route the request to any underlying server - just keep a table of "model id - server port" relations.
Update: I was just thinking about your task and found another solution. When you generate the client web page, you can inline the server count in the JS somewhere. Then, when requesting model updates, just add another parameter computed as
serverId = modelId % serversCount;
which will be the server identifier for routing in nginx.
Then in the nginx config you can parse the query string and route the request to the server identified by the serverId parameter.
in "metalanguage" it will be
get parameter serverId to var $servPortSuffix
route request to localhost:80$servPortSuffix
or another routing idea.
You can add additional GET parameters to socket.io via
io.connect(url, {query: "foo=bar"})
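On the server side, that extra parameter shows up in the WSGI query string, so the handshake view can recover the model ID, e.g. for the permission check mentioned in the question (a sketch; the parameter name and user_can_access_model are hypothetical):

from urllib.parse import parse_qs  # urlparse.parse_qs on Python 2

@bp.route('/<path:remaining>')
def socketio(remaining):
    app = current_app._get_current_object()
    # e.g. the client connected with io.connect('/models', {query: 'model_id=1'})
    params = parse_qs(request.environ.get('QUERY_STRING', ''))
    model_id = params.get('model_id', [None])[0]
    if not user_can_access_model(request, model_id):  # hypothetical permission check
        return Response(status=403)
    socketio_manage(request.environ, {'/models': ModelsNamespace}, app)
    return Response()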