Send data from Django to another server - python

I have an existing Django app. I would like to add a system that sends data from my Django application to another Python application hosted on another server, so that the Python application receives the data from the Django app, ideally in JSON format.
So, for example, I would need to create a view that sends the data from a DB table to this application every few seconds, or one that sends the data to the external application whenever a form is submitted.
How can I do this? Is there an example for this particular use case? I don't know what tools I'd need to use to build such a system; I only know that I would need Celery to perform asynchronous tasks, but nothing else. Should I use webhooks, maybe? Or Django Channels?
Edit: adding some more context:
I have my Django client. Then I have one or two Python applications running on another server. On my Django client I have some forms. Once a form is submitted, the data is saved to the db, but I also want this data to be sent instantly to my Python applications. The Python applications should receive the data from Django in JSON format and perform some tasks according to the values submitted by users. Then the application should send a response back to Django.

Come on! I'll call your Django app "DjangoApp" here, and your Python apps (in Flask or another framework) "OtherApp".
First, as you predicted, you will need a framework capable of performing background tasks. The new **Django 3.0 allows this, but I haven't used it yet ... I will pass on to you something that I am using and that is fully functional with Django 2.8 and Python 3.8**.
On your DjangoApp server you will need to structure the communication properly with Celery; let's leave the tasks to it. You can read the Celery docs and this post; they are a good guide for building this architecture.
Regardless of how your form or Django app looks, when you want it to trigger a Celery task, it is basically the same function that transmits the data, just run in the background.
from .tasks import send_data
...
form.save()
# Create a method on the form that serializes the data the way you want it,
# or extract the data however you prefer.
values = form.new_function_serializedata()
send_data.delay(values)  # hand the values off to the Celery task
...
See also the Celery documentation on calling tasks.
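The answer never shows the task itself, so here is a minimal sketch of what tasks.py could look like, assuming the OtherApp exposes a hypothetical /receive endpoint and that the requests library is installed:

# tasks.py -- a minimal sketch; the OtherApp URL and /receive endpoint are
# illustrative assumptions, not part of the original answer.
import requests
from celery import shared_task

@shared_task
def send_data(values):
    # POST the serialized form data as JSON to the other application
    response = requests.post('https://otherapp.example.com/receive', json=values)
    response.raise_for_status()
    return response.json()  # the other app's JSON reply, if you want it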
In all your other applications you will need a POST route to receive and deserialize this data; you can do this with a lightweight framework like Pyramid (see the sketch below).
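As a rough illustration of that receiving side, here is a minimal Pyramid app with a single POST route; the /receive path, the port, and the response payload are assumptions:

# OtherApp: a minimal Pyramid receiver sketch; the route, port, and payload
# are illustrative assumptions.
from wsgiref.simple_server import make_server
from pyramid.config import Configurator

def receive(request):
    values = request.json_body  # the JSON sent by the Celery task
    # ... perform whatever work the submitted values call for ...
    return {'status': 'ok'}  # serialized to JSON by the renderer

if __name__ == '__main__':
    with Configurator() as config:
        config.add_route('receive', '/receive')
        config.add_view(receive, route_name='receive',
                        request_method='POST', renderer='json')
        app = config.make_wsgi_app()
    make_server('0.0.0.0', 6543, app).serve_forever()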
This way, every time a form is submitted, the data is sent to the other server by the send_data task.
In my experience, without knowing much about your problem, I would use a similar architecture, but with Celery Beat.
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'send_data': {
        'task': 'your_app.tasks.send_data',
        'schedule': crontab(),  # CONFIGURE YOUR CRON
    },
}
The code above isn't everything you need to add, but it is roughly what the configuration looks like.
Within your models I would create a sent field. Then, every 2 seconds, 10 seconds, or however often you like, I would filter for all objects with sent=False and pass them to the send_data task (a sketch follows below).
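A minimal sketch of that periodic variant of the task, assuming a hypothetical Record model with a boolean sent field (both names, and the target URL, are made up for illustration):

# tasks.py -- periodic flush sketch; the Record model, its `sent` field, and
# the target URL are hypothetical names used only for illustration.
import requests
from celery import shared_task

from .models import Record

@shared_task
def send_data():
    # Beat calls this with no arguments; it drains everything not yet sent.
    for record in Record.objects.filter(sent=False):
        payload = {'id': record.pk}  # serialize whatever fields you need
        response = requests.post('https://otherapp.example.com/receive',
                                 json=payload)
        if response.ok:
            record.sent = True
            record.save(update_fields=['sent'])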
I don't know if this is confusing; it's a lot to explain. But I hope it helps and answers your questions.

import requests
from django import http

def view(request):
    url = 'https://python.app.com'  # replace with the other python app's url or ip
    request_data = {'key': 'value'}  # replace with the data to be sent to the other app
    response = requests.post(url, json=request_data)
    response_data = response.json()  # data returned by the other app
    return http.JsonResponse(response_data)
This is an example of a function-based view that uses the requests library to hit an external service. The requests library takes care of encoding/decoding your data to/from JSON (note that the URL needs an explicit scheme such as https://, or requests will raise a MissingSchema error).

Yeah, a webhook would be one option, but there are other options available too.
You can use REST APIs to send data from one app to another, but in that case you need to think about synchronization. It depends on your requirements: if you don't need the data delivered synchronously, you can use RabbitMQ or other async tools. Just push your REST API request into RabbitMQ and RabbitMQ will handle it (a sketch follows below).
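As a rough sketch of that idea using the pika client (the queue name and payload are illustrative assumptions):

# Producer-side sketch: push the form data onto a RabbitMQ queue instead of
# calling the other app synchronously. The queue name is a made-up example.
import json
import pika

def publish_form_data(values):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='form_data', durable=True)
    channel.basic_publish(
        exchange='',
        routing_key='form_data',
        body=json.dumps(values),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()

A consumer on the other server would then read from the same queue at its own pace.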

Related

How/where should I store data in a Django app that is not connected to any HTTP request?

I have created a Django app with the usual components: applications, models, views, templates, etc.
The architecture of a Django app is such that it basically just sits there and does nothing until you call one of the views by hitting a REST endpoint. Then it serves a page (or in my case some JSON) and waits for the next REST request.
I would like to insert into this app some automated tweeting. For this purpose I will be using the python-twitter library. My tweets will contain a URL. Twitter's website says that any URLs inserted into tweets get shortened to 23 characters by Twitter itself. So the remaining characters are available for the non-URL portion of the tweet. But the 23-character size may change. So Twitter recommends checking the current size of shortened URLs when the application is loaded but no more than once daily. This is how I can check the current shortened-URL size using python-twitter:
>>> import twitter
>>> twitter_keys = {
...     "CONSUMER_KEY": "BlahBlahBlah1",
...     "CONSUMER_SECRET": "BlahBlahBlah2",
...     "ACCESS_TOKEN_KEY": "BlahBlahBlah3",
...     "ACCESS_TOKEN_SECRET": "BlahBlahBlah4",
... }
>>> api = twitter.Api(
...     consumer_key=twitter_keys['CONSUMER_KEY'],
...     consumer_secret=twitter_keys['CONSUMER_SECRET'],
...     access_token_key=twitter_keys['ACCESS_TOKEN_KEY'],
...     access_token_secret=twitter_keys['ACCESS_TOKEN_SECRET'],
... )
>>> api.GetShortUrlLength()
23
Where and how should I save this value 23 such that it is retrieved from Twitter only once at the start of the application, but available to my Django models all the time during the execution of my application? Should I put it in the settings.py file? Or somewhere else? Please include a code sample if necessary to make it absolutely clear and explicit.
Lots of different ways, and it's primarily a matter of opinion. The simplest, of course, would be to keep that data in the source file of the module that connects to Twitter, which looks like your existing setup. That works fine as long as this is not an app being committed to a public VCS repository.
If the code gets into a public repository you have two choices: use an 'app_settings' file or save it in the db. Both approaches are described here: https://stackoverflow.com/a/37266007/267540
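For the fetch-once-at-startup part of the question, one hedged sketch is to cache the value on an AppConfig when the application loads, falling back to 23 if Twitter is unreachable (the app label and the TWITTER_KEYS setting are made-up names):

# myapp/apps.py -- a sketch; the app label, the TWITTER_KEYS setting, and
# the fallback value are illustrative assumptions.
import twitter
from django.apps import AppConfig
from django.conf import settings

class MyAppConfig(AppConfig):
    name = 'myapp'
    short_url_length = 23  # sensible fallback if the API call fails

    def ready(self):
        try:
            api = twitter.Api(
                consumer_key=settings.TWITTER_KEYS['CONSUMER_KEY'],
                consumer_secret=settings.TWITTER_KEYS['CONSUMER_SECRET'],
                access_token_key=settings.TWITTER_KEYS['ACCESS_TOKEN_KEY'],
                access_token_secret=settings.TWITTER_KEYS['ACCESS_TOKEN_SECRET'],
            )
            self.short_url_length = api.GetShortUrlLength()
        except Exception:
            pass  # keep the fallback

Models can then read it at any time with apps.get_app_config('myapp').short_url_length.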

Can I persist an http connection (or other data) across Flask requests?

I'm working on a Flask app which retrieves the user's XML from the myanimelist.net API (sample), processes it, and returns some data. The data returned can be different depending on the Flask page being viewed by the user, but the initial process (retrieve the XML, create a User object, etc.) done before each request is always the same.
Currently, retrieving the XML from myanimelist.net is the bottleneck for my app's performance and adds on a good 500-1000ms to each request. Since all of the app's requests are to the myanimelist server, I'd like to know if there's a way to persist the http connection so that once the first request is made, subsequent requests will not take as long to load. I don't want to cache the entire XML because the data is subject to frequent change.
Here's the general overview of my app:
from flask import Flask
from functools import wraps
import requests

app = Flask(__name__)

def get_xml(f):
    @wraps(f)
    def wrap():
        # Get the XML before each app function
        r = requests.get('page_from_MAL')  # Current bottleneck
        user = User(data_from_r)  # User object
        response = f(user)
        return response
    return wrap

@app.route('/one')
@get_xml
def page_one(user_object):
    return 'some data from user_object'

@app.route('/two')
@get_xml
def page_two(user_object):
    return 'some other data from user_object'

if __name__ == '__main__':
    app.run()
So is there a way to persist the connection like I mentioned? Please let me know if I'm approaching this from the right direction.
I think you aren't approaching this from the right direction, because you are treating your app too much as a proxy for myanimelist.net.
What happens when you have 2000 users? Your app ends up making tons of requests to myanimelist.net, and a malicious user could definitely DoS your app (or use it to DoS myanimelist.net).
This is a much cleaner way, IMHO:
Server side:
Create a websocket server (ex: https://github.com/aaugustin/websockets/blob/master/example/server.py)
When a user connects to the websocket server, add the client to a list; remove it from the list on disconnect.
For every connected user, check myanimelist.net frequently to get the associated XML (maybe lowering the frequency as more users come online).
For every XML document, compute a diff against your server's local version and send that diff to the client over the websocket channel (assuming there is a diff).
Client side:
On receiving a diff: update the local XML with the differences.
Disconnect from the websocket after n seconds of inactivity, and once disconnected, show a button on the interface to reconnect.
I doubt you can do anything much better assuming myanimelist.net doesn't provide a "push" API.
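As a rough sketch of that server side with the websockets library (recent versions, 10.1+; the polling interval and the diff logic are placeholders):

# Server-side sketch with the `websockets` library (10.1+ handler signature).
# Only the connect/disconnect bookkeeping and the push loop are illustrated;
# compute_diff is a placeholder for the real XML-diffing logic.
import asyncio
import websockets

connected = set()

def compute_diff():
    # placeholder: fetch fresh XML from myanimelist.net, diff it against the
    # locally cached copy, and return the diff string (or None if unchanged)
    return None

async def handler(websocket):
    connected.add(websocket)           # register the client on connect
    try:
        await websocket.wait_closed()  # keep the connection open
    finally:
        connected.remove(websocket)    # deregister on disconnect

async def poll_and_push():
    while True:
        diff = compute_diff()
        if diff and connected:
            websockets.broadcast(connected, diff)
        await asyncio.sleep(10)  # lower the frequency as users grow

async def main():
    async with websockets.serve(handler, 'localhost', 8765):
        await poll_and_push()

if __name__ == '__main__':
    asyncio.run(main())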

Testing Flask REST server

I have a tiny Flask server that is supposed to load data from a file and run a function on it. This function will return a DataFrame and I return the json version of it. Much to my surprise this all works nicely. However, how would I test this? I have included some attempts below but I don't understand Flask (nor REST) well enough yet:
#!/home/thomas/python
from flask import Flask
from flask.ext.restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class UniverseAPI(Resource):
    def get(self):
        import pandas as pd
        frame = pd.read_csv("//datasrv10//data$//AQ//test.csv", index_col=0, header=0)
        return frame.to_json()

api.add_resource(UniverseAPI, '/data/universe')
I am happy to include a few of my attempts here... I appreciate any hints. I have read the official documentation.
I should specify what I mean by testing. I can run this on my Linux server and extract all the required information with the requests package. However, I want to create a unit test that doesn't require starting the server on localhost. I think I have managed that with the Flask test client. However, the problem now is that the requests response object and the Flask response object treat the underlying JSON strings rather differently. So I guess my problem is more related to JSON string issues than to Flask. Thanks for all your helpful feedback, though.
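For reference, here is a minimal sketch of such a test with the Flask test client, assuming the server above is importable as a module named server (an assumption). Note that because the resource returns frame.to_json(), which is already a string, flask-restful JSON-encodes it a second time, so the body has to be decoded twice:

# test_universe.py -- a sketch; the `server` module name is an assumption.
import json
import unittest

from server import app  # the Flask app defined above

class UniverseTest(unittest.TestCase):
    def setUp(self):
        self.client = app.test_client()

    def test_universe_returns_json(self):
        resp = self.client.get('/data/universe')
        self.assertEqual(resp.status_code, 200)
        # to_json() returned a string, so flask-restful wrapped it in another
        # layer of JSON: decode twice to get back to the actual data.
        inner = json.loads(resp.get_data(as_text=True))
        data = json.loads(inner)
        self.assertIsInstance(data, dict)

if __name__ == '__main__':
    unittest.main()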
Well, the basics of writing a REST API are essentially a set of design principles. My understanding of it is based on this article by Miguel Grinberg: http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask
In it, he talks about how a REST API is:
"Stateless" - All interactions with the service can happen using the information from one request.
Built upon accessing "resources" from URIs using HTTP requests like GET, PUT, and POST. A resource could be an order in a store, a task in a web app, or whatever you like.
There's also a bunch of stuff about how the server should standardize all forms of communication between itself and the client, indicate whether it can do caching, and other things like that. From an initial design standpoint, though, this is "the point" as he put it:
"The task of designing a web service or API that adheres to the REST guidelines then becomes an exercise in identifying the resources that will be exposed and how they will be affected by the different request methods."
If you're looking for an interesting example of a REST API that might be suited to your interests (I know it is to mine), reddit's is open source. It's a relatable example to see how they try and structure the interactions behind requests: http://www.reddit.com/dev/api

Socket.io connections distribution between several servers

I'm working on a DB design tool (Python, gevent-socketio). In this tool multiple users can discuss one DB model, receiving changes at runtime. To support this feature, I'm using socket.io. I'd like to be able to easily scale out the number of servers that handle socket.io connections. The simplest way to do this is to set up nginx to choose a server based on the model ID.
I'd like a modulo approach, where the model ID is divided by the number of servers. So if I have 3 nodes, model 1 will be handled on the first, model 2 on the second, model 3 on the third, model 4 on the first again, etc.
My request for model loading looks like /models/, so no problem here: the argument can be parsed to find the server to handle it. But after the model page is loaded, the JS tries to establish a connection:
var socket = io.connect('/models', {
    'reconnection limit': 4000
});
It accesses the default endpoint, so the server receives requests like the following:
http://example.com/socket.io/1/xhr-polling/111111?=1111111
To handle them, I create the application this way:
SocketIOServer((app.config['HOST'], app.config['PORT']), app, resource='socket.io', transports=transports).serve_forever()
and then
@bp.route('/<path:remaining>')
def socketio(remaining):
    app = current_app._get_current_object()
    try:
        # Hack: set app instead of request to make it available in the namespace.
        socketio_manage(request.environ, {'/models': ModelsNamespace}, app)
    except Exception:
        app.logger.error("Exception while handling socket.io connection", exc_info=True)
    return Response()
I'd like to change it to
http://example.com/socket.io/<model_id>/1/xhr-polling/111111?=1111111
to be able to choose the right server in nginx. How can I do that?
UPDATE
I'd also like to check user permissions when a client tries to establish a connection. I'd like to do it in the socketio(remaining) method, but, again, I need to know which model the user is trying to access.
UPDATE 2
I implemented a permission validator that takes the model_id from HTTP_REFERER. It seems that's the only part of the request that contains the model's identifier (example value: http://example.com/models/1/).
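A minimal sketch of that referer-based extraction (the URL pattern simply mirrors the example value above):

# Sketch: pull the model id out of HTTP_REFERER, matching URLs like
# http://example.com/models/1/ from the example above.
import re

MODEL_URL = re.compile(r'/models/(\d+)/')

def model_id_from_referer(environ):
    referer = environ.get('HTTP_REFERER', '')
    match = MODEL_URL.search(referer)
    return int(match.group(1)) if match else None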
The first idea is to tell the client side which servers are currently available.
Furthermore, you can generate the server list for the client side by priority: just put the servers into a JavaScript-generated array, in order.
This approach assumes that any of your servers can answer for any model; you can control server load by changing the server ordering in the list generated for new clients.
I think this is the more flexible way. But if you want, you can parse the query string in nginx and route the request to any underlying server; just keep a table of "model id to server port" relations.
Upd: I was just thinking about your task and found another solution. When you generate the client web page, you can inline the server count in the JS somewhere. Then, when requesting model updates, just compute another parameter as
serverId = modelId % serversCount;
which will be the server identifier used for routing in nginx.
Then, in the nginx config, you can simply parse the query string and route the request to the server identified by the serverId parameter.
In "metalanguage" it would be:
get parameter serverId to var $servPortSuffix
route request to localhost:80$servPortSuffix
or another routing idea.
You can add additional GET parameters to socket.io via
io.connect(url, {query: "foo=bar"})

Testing Django Responses to Stripe Webhooks

I'm trying to figure out an effective way to test how my server handles webhooks from Stripe. I'm setting up a system to add multiple subscriptions to a customer's credit card, which is described on Stripe's website:
https://support.stripe.com/questions/can-customers-have-multiple-subscriptions
The issue I'm having is figuring out how to effectively test that my server is executing the scripts correctly (i.e., adding the correct subscriptions to the invoice, recording the events in my database, etc.). I'm not too concerned about automating the test right now, I'm just struggling to effectively run any good test on the script. Has anyone done this with Django previously? What resources and tools did you use to run these tests?
Thanks!
I did not use any tools to run the tests. In fact, Stripe has a FULL API REFERENCE which displays the information you have sent to them, and also displays any errors. Stripe is very easy to set up, cheap, and fully documented.
Here's what I did:
First, I created a Stripe account. With that account, they give you:
TEST_SECRET_KEY: used for sending payment information to Stripe (for testing)
TEST_PUBS_KEY: identifies your website when communicating with Stripe (for testing)
LIVE_SECRET_KEY: used for sending payment information to Stripe (for live)
LIVE_PUBS_KEY: identifies your website when communicating with Stripe (for live)
API_VERSION: "2012-11-07" // this is the version for testing only
When you log in you will see Documentation at the top. Click it and you get step-by-step tutorials on how to create a form, how to create a subscription, how to handle errors, and much more.
To check whether your script is executing and connecting to Stripe, click FULL API REFERENCE and then choose Python. On that page you will see the information you have sent and any errors you have encountered.
What I really like is that if Stripe detects an error, the system will point it out and give you a solution. The solution is on the left side, and the information you sent can be checked on the right side.
Stripe is divided into two worlds: test mode and live mode. In test mode you can create new customers, add new invoices, set up your subscriptions, and much more. Whatever you do in test mode works the same way once your Stripe account is live.
I really love that Stripe provides logs for the webhooks; however, it is difficult to view the error responses from them, so I set up a script using the Requests library. First, I went to the Stripe dashboard and copied one of the requests they were sending.
Events & Webhooks --> click on one of the requests --> copy the entire request
import requests
data = """ PASTE COPIED JSON REQUEST HERE """
# insert the appropriate url/endpoint below
res = requests.post("http://localhost:8000/stripe_hook/", data=data).text
output = open("hook_result.html", "w")
output.write(res)
output.close()
Now I could open hook_result.html and see any django errors that may have come up (given DEBUG=True in django).
In django-stripe-payments I have a test suite that, while far from comprehensive, is meant to be a start at getting there. What I do is copy a real webhook's data, scrub it of sensitive data, and add it as data to the test (a sketch follows below).
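A minimal sketch of that pattern with Django's test client; the endpoint path and the event fields are placeholders, not the django-stripe-payments API:

# tests.py -- sketch of posting a scrubbed webhook payload; the endpoint
# path and event fields are placeholders.
import json
from django.test import TestCase

class StripeWebhookTest(TestCase):
    def test_scrubbed_event_is_accepted(self):
        event = {
            "id": "evt_00000000000000",  # scrubbed id
            "type": "customer.subscription.updated",
            "livemode": False,
            "data": {"object": {}},  # paste the scrubbed webhook data here
        }
        response = self.client.post(
            "/stripe_hook/",
            data=json.dumps(event),
            content_type="application/json",
        )
        self.assertEqual(response.status_code, 200)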
Testing Stripe webhooks is a pain. I don't use Django, so my answer will be more general.
My PHP webhook handler parses the webhook data and dispatches handler functions accordingly. In my handler class, I set up class properties with legitimate data for all the ids that the test webhooks mangle. Then I have a condition in each of my handler functions that tests for livemode. If false, I replace the mangled ids with legit test ids.
I also have another class property called $fakeLivemode, which I set to true when I'm testing. This allows me to force the code to process as though in live mode.
So, for example, when testing the customer.subscription.updated event, the plan id and customer id get botched. So in that handler I would do this:
if ($event->livemode === true || $this->fakeLivemode)
{
    if ($this->fakeLivemode)
    {
        // override botched data returned by test webhook
        $event->data->object->plan->id = $this->testPlanId;
        $event->data->object->customer = $this->testCustomerId;
    }
    // process webhook
}
Does that help?
