So, I want to write Python code that assesses the data in incoming requests and acts accordingly. I have many different endpoints, and I'd like a way to access the request data for all requests without manually creating every possible endpoint.
Is there a way to do this with Flask/Bottle? A proxy of sorts?
You can register a function to be called before every request with the @app.before_request decorator:
@app.before_request
def handle_every_request():
    # ...
If this function returns anything other than None, it'll be used as the response and no views will be called. This would let you create any number of routing options.
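For instance, a minimal sketch (the header name and check here are hypothetical) that inspects every request and short-circuits when it fails a check:
from flask import Flask, request

app = Flask(__name__)

@app.before_request
def handle_every_request():
    # Runs before every view, for every registered route.
    if request.headers.get('X-Api-Key') != 'expected-key':
        # Returning a non-None value becomes the response;
        # no view function is called.
        return 'Forbidden', 403
    # Returning None lets normal routing continue.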
Another option is to use the request_started signal; see the Signals documentation:
from flask import request_started

@request_started.connect_via(app)
def handle_every_request(sender, **kwargs):
    # ...
The above listens to requests for the specific Flask app instance. Use request_started.connect if you want to listen to all requests for all apps. Note that signals just listen; they don't route.
In general, both Flask and Bottle are WSGI applications. WSGI supports wrapping such applications in WSGI middleware, letting you inspect every incoming and outgoing byte of a request-response pair. This gives you access to the data at a lower level. You could use this to rewrite the paths being requested, for example.
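As a minimal sketch (the prefix to strip is an assumption), middleware that rewrites request paths before the wrapped app sees them could look like this:
class PathRewriteMiddleware:
    def __init__(self, app, prefix='/api'):
        self.app = app
        self.prefix = prefix

    def __call__(self, environ, start_response):
        # Strip the prefix from PATH_INFO so the inner app never sees it.
        path = environ.get('PATH_INFO', '')
        if path.startswith(self.prefix):
            environ['PATH_INFO'] = path[len(self.prefix):] or '/'
        return self.app(environ, start_response)

# Both frameworks expose WSGI apps, so wrapping works the same way:
# app.wsgi_app = PathRewriteMiddleware(app.wsgi_app)  # Flask
# app = PathRewriteMiddleware(app)                    # Bottle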
I have a Flask route like this:
@app.route('/product/<string:slug>')
def product(slug):
    # some code...
    return render_template('product.html', product=product)
Different clients use the project (different websites, same infrastructure), and every customer wants the product URL to be different. For example:
asite.com/product-nike-shoe-323
bsite.com/nike-shoe
csite.com/product/nike-shoue
and so on.
How do I set the URL structure to come from the database?
like:
url_config = "product-{product_name}-{product_id}"
or
url_config = "product-{product_id}"
Note: please, no redirects.
I’m not 100% clear on what you refer to when you say “database” here. From context I infer you may be talking about the Flask Config object. If that’s the case, you can simply register your view function right after setting up the app configuration. Just call app.add_url_rule() to register the URL pattern from the configuration to point to your view function of choice.
If, however, you are talking about a SQL or NoSQL database and you have built a web UI to register routes, then don't despair. Flask routes can be registered with the app object at any point. There is no point in the Flask app lifecycle after which you can no longer register a route!
All that registering a route does is create a mapping between a URL template and an endpoint name, an opaque string. Most of the time you also register a function to be called to handle the specific endpoint, and most of the time Flask infers the endpoint name from the function. Once registered in the mapping, any subsequent incoming request can be routed to the function for the given endpoint.
So, Flask keeps two maps:
from URL rule -> endpoint name: Flask.url_map
from endpoint name -> view function: Flask.view_functions
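A quick illustration of how a registration populates both maps (the view name here is hypothetical):
from flask import Flask

app = Flask(__name__)

def hello():
    return 'Hello!'

# Adds '/hello' -> 'hello' to app.url_map,
# and 'hello' -> hello to app.view_functions.
app.add_url_rule('/hello', 'hello', hello)

print(app.url_map)         # Map([<Rule '/hello' ... -> hello>, ...])
print(app.view_functions)  # {..., 'hello': <function hello at ...>}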
That said, there is no API for removing or changing URL registrations (other than restarting your server, of course). You can't change the URL route, the endpoint name for a given route, or what endpoint maps to what function. The intention of the framework is that you register your routes early on when first starting your server, via code that runs directly when imported or when bound to the app (Blueprints and Flask extensions do the latter). The majority of Flask apps will create their Flask instance, register all their routes and extensions, then pass the instance to the WSGI server for request dispatch, and that's it. But there is nothing in the implementation stopping you from registering more routes after this point.
If you want to register URL routes from database information, you have to take care of at least the following two things:
Register existing routes at start-up. Once you have a connection to your database established, retrieve the existing routes and register them.
If a new entry is added to the database, register a new route.
First of all: if I were to implement something like this I’d use one view function. You can always figure out what url rule was matched and what endpoint name this mapped to by looking at request.url_rule and request.endpoint, respectively.
Next, I’d explicitly generate endpoint names for each url rule from the database. Use the primary key in the name; you want to be able to find the database row from the endpoint name and vice versa. How you do this is up to you; let’s assume you know how to do this, and you have two functions for this named pk_from_endpoint() and endpoint_from_pk().
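For illustration, one possible (entirely hypothetical) encoding that embeds the primary key in the endpoint name:
PREFIX = 'product_'

def endpoint_from_pk(pk):
    return f'{PREFIX}{pk}'

def pk_from_endpoint(endpoint):
    return int(endpoint[len(PREFIX):])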
Your view function can then look like this:
from flask import request

def product_request(**kwargs):
    key = pk_from_endpoint(request.endpoint)
    row = database_query(key)
    # … process request
You register a route for a given database row with:
app.add_url_rule(row.url_config, endpoint_from_pk(row.id), product_request)
As mentioned, you can’t change URL registrations. But as long as changes to these URLs are infrequent, you could always add new registrations and, for any old entries, use abort(404) to return a 404 Not Found response.
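Putting the start-up and new-entry steps from earlier together, a minimal sketch (database_all_rows() is a placeholder query helper; the row attributes follow the url_config example above):
def register_product_route(app, row):
    app.add_url_rule(row.url_config, endpoint_from_pk(row.id), product_request)

# 1. At start-up, register routes for all existing database rows.
for row in database_all_rows():
    register_product_route(app, row)

# 2. Whenever a new entry is added to the database, register it too.
def on_new_entry(row):
    register_product_route(app, row)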
That's not possible with Flask's routing system. The URL map is supposed to be defined at startup and not change after that.
However, if you have some specific path where you need the dynamic parts (e.g. /product/WHATEVER), then you can register a route for /product/<slug> and query the database within your view function.
That said, if you REALLY want URL rules in a DB and don't mind connecting to your database during startup (usually that's ugly), then nothing stops you from querying the database at startup time and defining the URL rules based on data from the DB. Quite ugly, but doable.
Example:
with app.app_context():
    url_map = {u.endpoint: u.rule for u in URLRules.query}

@app.route(url_map['foo'])
def foo():
    ...
Of course, doing so makes it harder to structure your app nicely, unless you use app.add_url_rule() for all the endpoints in a single place instead of the @app.route() decorators. The same applies to blueprints, of course.
I have an existing Django app. I would like to add a system that sends data from my Django application to another Python application hosted on another server, so that the Python application receives data from the Django app, ideally in JSON format.
So, for example, I would need to create a view that sends the data from a DB table to this application every N seconds, or that sends the data to this external application whenever a form is submitted.
How can I do this? Is there an example for this particular case? I don't know what tools I'd need to create this system; I only know that I would need Celery to perform asynchronous tasks, but nothing else. Should I use webhooks, maybe? Or Django Channels?
Edit: adding some more context:
I have my Django client. Then I have one or two Python applications running on another server. On my Django client I have some forms. Once a form is submitted, the data is saved to the DB, but I also want this data to be sent instantly to my Python applications. The Python applications should receive the data from Django in JSON format and perform some tasks according to the values submitted by users. Then the application should send a response back to Django.
Let's go! I'll call your Django app "DjangoApp" here, and your Python apps, in Flask or another framework, "OtherApp".
First, as you predicted, you will need a way to run background tasks. The new Django 3.0 allows this, but I haven't used it yet, so I'll pass on an approach I'm using that is fully functional with Django 2.x and Python 3.8.
On your DjangoApp server you will need to structure the communication well with Celery; let's leave the tasks to it. You can read the Celery docs and this post; it's a good fit for this architecture.
Regardless of what your form or Django app looks like, when you want it to trigger a Celery task, you basically call the function that transmits the data, but in the background.
from .tasks import send_data

...
form.save()
# Create a function within the form to get the data the way you want it,
# or do it however you prefer.
values = form.new_function_serializedata()
send_data.delay(values)  # call the Celery task
...
See also the Celery documentation on calling tasks.
In each of your other applications you will need a POST route to receive and deserialize this data; you can do this with a lightweight framework like Pyramid.
This way, every time a form is submitted, this data is sent to the other server by the send_data function.
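A minimal sketch of what that task could look like in tasks.py (the OtherApp URL is an assumption):
import requests
from celery import shared_task

@shared_task
def send_data(values):
    # Forward the serialized form data to the other application as JSON.
    response = requests.post('http://other-app.example.com/receive', json=values)
    return response.status_code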
In my experience, without knowing much about your problem, I would use a similar architecture, but with Celery Beat.
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'send_data': {
        'task': 'your_app.tasks.send_data',
        'schedule': crontab(),  # configure your cron schedule
    },
}
The code above isn't everything you need to add, but it's roughly what it looks like.
Within your models I would create a field for sent. Then, every 2 seconds, 10 seconds, or whatever interval you wish, I would filter all objects with sent=False and pass them to the send_data task.
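A sketch of that idea (the Submission model and push_to_other_app() helper are hypothetical):
from celery import shared_task
from .models import Submission

@shared_task
def send_data():
    # Pick up everything not yet delivered and forward it.
    for obj in Submission.objects.filter(sent=False):
        push_to_other_app(obj)  # placeholder for the actual HTTP call
        obj.sent = True
        obj.save(update_fields=['sent'])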
I know that's a lot to explain, and I hope I haven't confused you. I hope this helps answer your questions.
import requests
from django import http

def view(request):
    url = 'https://python.app.com'  # replace with the other Python app's URL or IP
    request_data = {'key': 'value'}  # replace with the data to be sent to the other app
    response = requests.post(url, json=request_data)
    response_data = response.json()  # data returned by the other app
    return http.JsonResponse(response_data)
This is an example of a function-based view that uses the requests library to hit an external service. The requests library takes care of encoding/decoding your data to/from JSON.
Yeah, a webhook would be one of the options, but there are other options available too.
You can use REST APIs to send data from one app to another, but in that case you need to think about synchronization. It depends on your requirements: if you don't need the data delivered synchronously, you can use RabbitMQ or other async tools. Just push your REST API request into RabbitMQ and let RabbitMQ handle it.
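For example, a minimal sketch of pushing a payload into RabbitMQ with pika (the queue name and payload are hypothetical):
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='outgoing_requests', durable=True)

# Publish the request payload; a consumer on the other side
# picks it up and makes the actual REST call.
channel.basic_publish(
    exchange='',
    routing_key='outgoing_requests',
    body=json.dumps({'key': 'value'}),
)
connection.close()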
I've created a Django-rest-framework app. It exposes some API which does some get/set operations in the MySQL DB.
I have a requirement of making an HTTP request to another server and piggyback this response along with the usual response. I'm trying to use a self-made HTTP connection pool to make HTTP requests instead of making new connections on each request.
What is the most appropriate place to keep this app level HTTP connection pool object?
I've looked around, and there are multiple solutions, each with some cons. Here are a few:
To make a singleton class for the pool in a different file; but this is not a Pythonic way to do things. There are various discussions of why not to use the singleton design pattern.
Also, I don't know how intelligent it would be to pool a pooler? (:P)
To keep it in the __init__.py of the app directory. The issues with that are as follows:
It should only contain imports and related setup.
It will be difficult to unit test the code, because the import would happen before mocking and it would actually try to hit the API.
To use sessions; but I guess that makes more sense for something user-session-specific, like a per-user number, etc.
Also, the object needs to be serializable, and I don't know how an HTTP connection pool could be serialized.
To keep it global in views.py, but that is also discouraged.
What is the best place to store such app/global level variables?
This thread is a bit old but can still be googled. Generally, if you want a component to be accessible across several apps in your Django project, you can put it in a general or core app as a util, or whatever you like.
In terms of reusability and app-specificity, you can use a factory with a caching mechanism, something like:
from dataclasses import dataclass, field

class ConnectionPool:
    pass

@dataclass
class ConnectionPoolFactory:
    connection_pool_cache: dict[str, ConnectionPool] = field(default_factory=dict)

    def get_connection(self, app_name: str) -> ConnectionPool:
        if self.connection_pool_cache.get(app_name) is None:
            self.connection_pool_cache[app_name] = ConnectionPool()
        return self.connection_pool_cache[app_name]
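Usage would then look something like this:
factory = ConnectionPoolFactory()
pool = factory.get_connection('my_app')       # creates and caches a pool
same_pool = factory.get_connection('my_app')  # returns the cached pool
assert pool is same_pool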
A possible solution is to implement custom Django middleware, as described in https://docs.djangoproject.com/ja/1.9/topics/http/middleware/.
You could initialize the HTTP connection pool in the middleware's __init__ method, which is only called once, at the first request. Then start the HTTP request during process_request, and in process_response check that it has finished (or wait for it) and append that response to the internal one.
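A minimal sketch of that middleware in the old (pre-1.10) style those docs describe, using urllib3 for the pool (the external URL is an assumption, and the request here is made synchronously for simplicity):
import urllib3

class ConnectionPoolMiddleware(object):
    def __init__(self):
        # Old-style middleware: called once, so create the pool here.
        self.pool = urllib3.PoolManager(num_pools=4)

    def process_request(self, request):
        # Start the external call and stash the result on the request.
        request.external_response = self.pool.request(
            'GET', 'http://other-server.example.com/data')
        return None  # continue normal processing

    def process_response(self, request, response):
        external = getattr(request, 'external_response', None)
        if external is not None:
            # Piggyback something from the external response onto ours.
            response['X-External-Status'] = str(external.status)
        return response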
I'm working on a Flask app which retrieves the user's XML from the myanimelist.net API (sample), processes it, and returns some data. The data returned can be different depending on the Flask page being viewed by the user, but the initial process (retrieve the XML, create a User object, etc.) done before each request is always the same.
Currently, retrieving the XML from myanimelist.net is the bottleneck for my app's performance and adds on a good 500-1000ms to each request. Since all of the app's requests are to the myanimelist server, I'd like to know if there's a way to persist the http connection so that once the first request is made, subsequent requests will not take as long to load. I don't want to cache the entire XML because the data is subject to frequent change.
Here's the general overview of my app:
from flask import Flask
from functools import wraps
import requests

app = Flask(__name__)

def get_xml(f):
    @wraps(f)
    def wrap():
        # Get the XML before each app function
        r = requests.get('page_from_MAL')  # Current bottleneck
        user = User(data_from_r)  # User object
        response = f(user)
        return response
    return wrap

@app.route('/one')
@get_xml
def page_one(user_object):
    return 'some data from user_object'

@app.route('/two')
@get_xml
def page_two(user_object):
    return 'some other data from user_object'

if __name__ == '__main__':
    app.run()
So is there a way to persist the connection like I mentioned? Please let me know if I'm approaching this from the right direction.
I think you aren't approaching this from the right direction, because your app acts too much as a proxy for myanimelist.net.
What happens when you have 2000 users? Your app ends up making tons of requests to myanimelist.net, and a malicious user could definitely DoS your app (or use it to DoS myanimelist.net).
This is a much cleaner way, IMHO (a minimal server-side sketch follows the lists below):
Server side:
Create a websocket server (e.g. https://github.com/aaugustin/websockets/blob/master/example/server.py).
When a user connects to the websocket server, add the client to a list; remove it from the list on disconnect.
For every connected user, frequently check myanimelist.net to get the associated XML (perhaps lowering the frequency as more users come online).
For every XML document, make a diff against your local server version, and send that diff to the client over the websocket channel (assuming there is a diff).
Client side:
On receiving a diff, update the local XML with the differences.
Disconnect from the websocket after n seconds of inactivity, and when disconnected, add a button to the interface to reconnect.
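A minimal sketch of the server side (assuming a recent version of the websockets library; fetch_xml() and make_diff() are placeholders you would implement yourself):
import asyncio
import websockets

CONNECTED = set()

async def handler(websocket):
    # Add the client on connect, remove it on disconnect.
    CONNECTED.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CONNECTED.remove(websocket)

async def poll_upstream():
    local_xml = None
    while True:
        new_xml = await fetch_xml()           # placeholder: fetch from myanimelist.net
        diff = make_diff(local_xml, new_xml)  # placeholder: compute the diff
        if diff:
            websockets.broadcast(CONNECTED, diff)
            local_xml = new_xml
        await asyncio.sleep(10)  # lower the frequency as user count grows

async def main():
    async with websockets.serve(handler, 'localhost', 8765):
        await poll_upstream()

asyncio.run(main())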
I doubt you can do anything much better assuming myanimelist.net doesn't provide a "push" API.
I have a bottle.py application which has a number of routes already built. I would like to create a new get route which, when accessed, passes the request along to another HTTP server and relays the result back.
What is the simplest way to get that done?
In principle, all you need is to install the wsgiproxy module and do this:
import bottle
from wsgiproxy.app import WSGIProxyApp

root = bottle.Bottle()
proxy_app = WSGIProxyApp("http://localhost/")
root.mount("/proxytest", proxy_app)
Running this app will then proxy all requests under /proxytest to the server running on localhost:80. In practice, I found this didn't work without taking extra steps to remove hop-by-hop headers. I took the code in this gist and stripped it down to make a simple app that successfully proxies the request.
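For reference, a stripped-down sketch along those lines (not the gist itself), forwarding requests with the requests library and dropping hop-by-hop headers:
import bottle
import requests

HOP_BY_HOP = {
    'connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization',
    'te', 'trailers', 'transfer-encoding', 'upgrade',
}

app = bottle.Bottle()

@app.route('/proxytest/<path:path>', method=['GET', 'POST'])
def proxy(path):
    upstream = 'http://localhost/' + path  # assumed upstream server
    resp = requests.request(
        bottle.request.method, upstream,
        headers={k: v for k, v in bottle.request.headers.items()
                 if k.lower() not in HOP_BY_HOP},
        data=bottle.request.body.read(),
    )
    # Copy the upstream response, again dropping hop-by-hop headers
    # (plus length/encoding, since requests already decoded the body).
    for k, v in resp.headers.items():
        if k.lower() not in HOP_BY_HOP | {'content-length', 'content-encoding'}:
            bottle.response.set_header(k, v)
    bottle.response.status = resp.status_code
    return resp.content

app.run(host='localhost', port=8080)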