stripe-python, should it be async? - python

I need to integrate Stripe into a Django project and I noticed that there's a stripe-python package. This runs entirely synchronously, though. Is it a bad idea to make these types of calls from the main web server? Since it makes external calls, the web server will presumably be blocked while we wait for a response, which seems bad.
So, should I be running this from something like Celery? Or is it fine to run on the main thread? Anyone have experience with this?

Based on a previous project, I think using it synchronously is much better from a design perspective. With most payments, you want to keep the user on the page until the payment goes through, so they know for certain that there was no issue with the payment, and you can handle any problems right there rather than pulling the task from a queue and handling it later. If you think about most payments you have done online, they all happen in the main request for this reason.

Related

Canceling a Flask backend process from a Web-based client

I have a Flask application running at backend, and delivering some data to a client Web application through some endpoints. E.g.
@app.route('/extract_entities_from_matching_docs', methods=['POST'])
def extract_entities_from_matching_docs():
    data = request.form
    entities = storage.get_entities_in_docs_by_keywords(data["keywords"])
    return jsonify(entities)
This is just an example, but the thing is, sometimes these kinds of methods take too long to process, and the user may want to cancel the processing from the client (e.g. imagine you have a "Cancel" button on the client).
My question is: how can I cancel a running process at the backend from the frontend? I thought I could include a flag in the loops, so that if the flag is set to true it continues looping and processing, and otherwise it just returns. But the problem with such a simple alternative is: what if the client just closes the browser's tab? The process would keep running at the backend for no reason. Maybe the client could post to another flag every x minutes, so the backend knows there is still a client waiting for the response. But maybe there is a more elegant solution that I'm missing.
I'm not really a backend person, but I need to solve this. So, can you suggest any other alternative? Something to read? Or do you know of any good practice for handling this?
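The in-loop flag idea from the question can be sketched with a threading.Event; the job function, the step count, and the per-step sleep below are hypothetical stand-ins for the real processing loop:

```python
import threading
import time

def long_running_job(cancel_event, steps):
    # hypothetical work loop; checks the cancellation "flag" on every iteration
    done = 0
    for _ in range(steps):
        if cancel_event.is_set():
            break  # the client asked to cancel: stop early
        time.sleep(0.01)  # stand-in for one unit of real work
        done += 1
    return done

cancel_event = threading.Event()
cancel_event.set()  # simulate the client having clicked "Cancel" already
print(long_running_job(cancel_event, 100))  # returns 0: no steps were run
```

The open problem the question raises (a closed browser tab never sets the flag) is exactly why the heartbeat idea, or the Celery approach below, tends to be needed in practice.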
What you want is not easily possible. Generally the solution is to not bother and keep the request running in the background and just live with it. In the end most requests will probably not be "cancelled" and thus the overhead for developing this kind of solution outweighs the minor benefits.
If your tasks are really long-running (and with that I mean minutes or even much more), then you should off-load them to something like Celery tasks. In that case, you can keep a reference to the task ID and revoke it (passing terminate=True should deal with an already-running task as well).

Can I have Python code to continue executing after I call Flask app.run?

I have just started with Python, although I have been programming in other languages over the past 30 years. I wanted to keep my first application simple, so I started out with a little home automation project hosted on a Raspberry Pi.
I got my code to work fine (controlling a valve, reading a flow sensor and showing some data on a display), but when I wanted to add some web interactivity it came to a sudden halt.
Most articles I have found on the subject suggest to use the Flask framework to compose dynamic web pages. I have tried, and understood, the basics of Flask, but I just can't get around the issue that Flask is blocking once I call the "app.run" function. The rest of my python code waits for Flask to return, which never happens. I.e. no more water flow measurement, valve motor steering or display updating.
So, my basic question would be: What tool should I use in order to serve a simple dynamic web page (with very low load, like 1 request / week), in parallel to my applications main tasks (GPIO/Pulse counting)? All this in the resource constrained environment of a Raspberry Pi (3).
If you still suggest Flask (because it seems very close to target), how should I arrange my code to keep handling the real-world events, such as mentioned above?
(This last part might be tough answering without seeing the actual code, but maybe it's possible answering it in a "generic" way? Or pointing to existing examples that I might have missed while searching.)
You're on the right track with multithreading. If your monitoring code runs in a loop, you could define a function like
def monitoring_loop():
    while True:
        # do the monitoring
Then, before you call app.run(), start a thread that runs that function:
import threading
from wherever import monitoring_loop
monitoring_thread = threading.Thread(target=monitoring_loop)
monitoring_thread.start()
# app.run() and whatever else you want to do
Don't join the thread - you want it to keep running in parallel to your Flask app. If you joined it, it would block the main execution thread until it finished, which would be never, since it's running a while True loop.
To communicate between the monitoring thread and the rest of the program, you could use a queue to pass messages in a thread-safe way between them.
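For example, the monitoring thread can push readings onto a queue.Queue and the Flask handlers can drain it. The sensor values and field names below are made up for illustration, and the loop is given a fixed sample count (and joined) only so the demo terminates; the real monitoring loop would run forever and never be joined:

```python
import queue
import threading
import time

measurements = queue.Queue()  # thread-safe channel between the two sides

def monitoring_loop(q, n_samples):
    # a real loop would be `while True:`; a fixed count keeps the demo finite
    for i in range(n_samples):
        q.put({"sample": i, "flow_lpm": 1.5 * i})  # hypothetical flow reading
        time.sleep(0.01)

t = threading.Thread(target=monitoring_loop, args=(measurements, 3))
t.start()
t.join()  # demo only: lets us read everything below; don't join in the real app

# a Flask handler could drain pending readings like this:
readings = []
while not measurements.empty():
    readings.append(measurements.get())
print(readings)
```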
The way I would probably handle this is to split your program into two distinct separately running programs.
One program handles the GPIO monitoring and communication, and the other program is your small Flask server. Since they run as separate processes, they won't block each other.
You can have the two processes communicate through a small database. The GPIO interface can periodically record flow measurements or other relevant data to a table in the database. It can also monitor another table in the database that might serve as a queue for requests.
Your Flask instance can query that same database to get the current statistics to return to the user, and can submit entries to the requests queue based on user input. (If the GPIO process updates that requests queue with the current status, the Flask process can report that back out.)
And as far as what kind of database to use on a little Raspberry Pi, consider sqlite3, which is a very small, lightweight, file-based database supported in Python's standard library. (It doesn't require running a full "database server" process.)
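A minimal sketch of that pattern with sqlite3 follows. An in-memory database and a made-up timestamp/value are used here so the example is self-contained; the two real processes would each open the same file path instead:

```python
import sqlite3

# real processes would share a file, e.g. sqlite3.connect("/home/pi/flow.db")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS flow_log (ts REAL, litres_per_min REAL)")

# the GPIO process periodically records a measurement:
conn.execute("INSERT INTO flow_log VALUES (?, ?)", (1700000000.0, 2.5))
conn.commit()

# the Flask process queries the latest reading for its web page:
row = conn.execute(
    "SELECT litres_per_min FROM flow_log ORDER BY ts DESC LIMIT 1"
).fetchone()
print(row[0])
```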
Good luck with your project, it sounds like fun!
Hi, I was trying the connection with dronekit_sitl and I got the same issue: after 30 seconds the connection was closed. To get rid of that, there are two solutions:
You use the decorator before_request: with this one, you define a method that will handle the connection before each request.
You use the decorator before_first_request: in this case the connection will be made once, when the first request is called, and then you can handle the object in the other routes using a global variable.
For more information https://pythonise.com/series/learning-flask/python-before-after-request

Google App Engine - run task on publish

I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to).
In concept I am trying to have my app register itself with my server and to do this I'd like for it to run once on publish and never be ran again.
Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.
Any help will be greatly appreciated.
Thank you.
Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the post request there (after a successful deploy). If you use google's command line tools, you could wrap that in a script. If you use a 3rd party tool for something like continuous integration, they probably have deploy hooks you could use for this purpose.
The main question will be how to ensure it only runs once for a particular version.
Here is an outline on how you might approach it.
You create a HasRun model, which you use to store each deployed version of the app; the presence of an entity indicates that the one-time code has been run for that version.
Then make sure you increment your version, when ever you deploy your new code.
In your warmup handler or appengine_config.py, grab the deployed version,
then in a transaction try and fetch the new HasRun entity by Key (version number).
If you get the Entity then don't run the one time code.
If you can not find it then create it and run the one time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front facing request.
Now you will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent some other instance from trying to do the same thing.
Alternately if you want to use the task queue, consider naming the task the version number, you can only submit a task with a particular name once.
It still needs to be idempotent (again could be scheduled to retry) but there will only ever be one task scheduled for that version - at least for a few weeks.
Or a combination/variation of all of the above.

What is the best way to update the UI when a celery task completes in Django?

I want the user to be able to click a button to generate a report, show him a generating report animation and then once the report finishes generating, display the word success on the page.
I am thinking of creating a celery task when the generate report button is clicked. What is the best way for me to update the UI once the task is over? Should I constantly be checking via AJAX calls if the task has been completed? Is there a better way or third party notification kind of app in Django that helps with this process?
Thanks!
Edit: I did more research and the only thing I could find is three way data bindings with django-angular and django-websocket-redis. Seems like a little bit of an overkill just for this small feature. I guess without web sockets, the only possible way is going to be constantly polling the backend every x seconds to check if the task has completed. Any more ideas?
Note that polling means you'll be keeping requests and connections open. On web applications with a large number of hits, this will waste a significant amount of resources. However, on smaller websites the open connections may not be such a big deal. Pick a strategy that's easiest to implement now and that will allow you to change it later when you actually have performance issues.
Polling is a good and simple solution for this. Avoid adding unnecessary overhead to your site for simple features.
result = generate_report.delay()  # Celery AsyncResult for the report task

while result.state == u'PENDING':
    # do your stuff (e.g. sleep briefly between checks)
    time.sleep(1)

if result.state == u'SUCCESS':
    pass  # finished: read result.get() and update the UI
else:
    pass  # something went wrong

How to create a polling script in Python?

I was trying to create a polling script in python that starts when another python script starts and then keeps supplying data back to this script.
I can obviously write an infinite loop, but is that the right way to go about it? I might lose control over how the functions work and how many times a function should be called in an hour.
Edit:
What I am trying to accomplish is to poll the REST API of twitter and get new mentions and people who follow me. I obviously can't keep polling because I will run out of API requests per hour. Thus, the issue. This poller, will send the new mention and follower id/user to the main script that would be listening to any such update.
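The rate-limited loop described above can be sketched in plain Python before reaching for a framework. Here fetch is a hypothetical stand-in for the Twitter API call, on_new stands in for notifying the listening main script, and the interval would be chosen to stay under the hourly request limit:

```python
import time

def poll_for_new(fetch, on_new, interval_seconds, max_polls):
    # calls fetch() at most once per interval and hands only
    # previously-unseen items to on_new (the listening main script)
    seen = set()
    for _ in range(max_polls):
        for item in fetch():
            if item not in seen:
                seen.add(item)
                on_new(item)
        time.sleep(interval_seconds)

# demo with canned batches standing in for two API responses
batches = [["alice", "bob"], ["bob", "carol"]]
received = []
poll_for_new(lambda: batches.pop(0), received.append, 0.01, 2)
print(received)  # each mention is delivered exactly once
```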
I highly suggest looking into Twisted, one of the most popular async frameworks using the reactor pattern.
The "infinite loop" you are looking for is really an application pattern that Twisted implements to respond to events asynchronously, and it almost never makes sense to roll your own.
Twisted is largely used for networking requirements, but it has a LoopingCall interface to set up the kind of functionality you require. Using the core Twisted deferred as your request model allows you to set up a long-polling server that can perform the kind of conditional network test you need. It can initially be a little intimidating, but once you understand the core components (Factories, Reactors, Protocols etc.) that you need to inherit, it becomes much easier to visualize your problem.
This also might be a good tutorial to start looking at the basics of the "push" model:
http://carloscarrasco.com/simple-http-pubsub-server-with-twisted.html
