Create object on startup and share across requests - python

I would like to use my Django application as a relay for a session-based online service and share this session among all users. For this, I've configured a python-requests Session object.
I would like to initialise this Session when Django starts up and keep it alive forever. My idea is to have all requests to my Django application share the session object by allowing the view handling a particular request to access it.
In Flask setting this up (for experimental purposes) is fairly easy:
from flask import Flask, jsonify
from requests import Session

app = Flask(__name__)
session = Session()
session.post()  # Setup Session by logging in

@app.route("/")
def use_session():
    reply = session.get()  # Get resource from web service
    return jsonify(reply)
Here session is created at startup and can be accessed by use_session().
I struggle to set up the same in Django, though. What would be the preferred place to create the session?

The equivalent of your Flask code in Django would be to put the same logic in a views.py file:
# yourapp/views.py
from django.http import HttpResponse
from requests import Session

session = Session()
session.post('https://httpbin.org/post')  # Setup Session by logging in

def use_session(request):
    reply = session.get('https://example.com')  # Get resource from web service
    return HttpResponse(reply.content, status=reply.status_code)
# yourproject/urls.py
from django.urls import path
from yourapp.views import use_session

urlpatterns = [
    path('', use_session),
]
The object will get created as you start the server.
One problem with this approach is that in a real-world deployment you normally run multiple copies of your app (whether it's Flask, Django, or any other framework) to avoid blocking, to use multiple CPU cores, or to spread load across several servers. Each copy then ends up with its own session, which might work fine, but might also be problematic.
In a situation like this, you can share a Python object1 across multiple processes by serializing it with the pickle module and storing the byte data in a database. A good place for the data would be something like Redis or Memcached, but Django's ORM can be used as well:
# yourapp/models.py
from django.db import models

class Session(models.Model):
    pickled = models.BinaryField()
# yourapp/views.py
import pickle

from django.http import HttpResponse
from requests import Session

from . import models

def use_session(request):
    # Load the pickled session from the database
    session_db_obj = models.Session.objects.first()
    if session_db_obj:
        # Un-pickle the session
        session = pickle.loads(session_db_obj.pickled)
    else:
        # Create a new session
        session = Session()
        session.post('https://httpbin.org/post')  # Setup Session by logging in
        # Create the database object, pickle the session and save it
        session_db_obj = models.Session()
        session_db_obj.pickled = pickle.dumps(session)
        session_db_obj.save()
    reply = session.get('https://example.com')  # Get resource from web service
    return HttpResponse(reply.content, status=reply.status_code)
1: Not every object can be pickled and unpickled reliably, so be careful!
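If you would rather not handle the pickling yourself, Django's cache framework stores any picklable value for you, so the same idea works with a shared cache backend such as Redis or Memcached configured in CACHES. A minimal sketch, with the key name relay_session made up for illustration (requests.Session objects are generally picklable, subject to the same caveat as above):
# yourapp/views.py (cache-based variant)
from django.core.cache import cache
from django.http import HttpResponse
from requests import Session

def use_session(request):
    # cache.get returns None if the key is missing or has expired
    session = cache.get('relay_session')
    if session is None:
        session = Session()
        session.post('https://httpbin.org/post')  # Setup Session by logging in
        # timeout=None keeps the entry until it is evicted or overwritten
        cache.set('relay_session', session, timeout=None)
    reply = session.get('https://example.com')  # Get resource from web service
    return HttpResponse(reply.content, status=reply.status_code)
Note that the default local-memory cache is per-process, so this only shares the session if the cache backend itself is shared.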

Probably the best place to set this up is settings.py, since it is loaded before the application is initialized and you can easily import your code from there.
That said, you may want to look into connection pooling, or put a wrapper around your session that can recreate it in case of failure. Networks are not as reliable as they seem, and the longer you plan to keep the session alive, the higher the chance that it will be dropped at some point.
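As for the wrapper idea, here is a minimal sketch rather than a production implementation; the module name, login URL handling, and single-retry policy are assumptions for illustration:
# yourapp/relay.py (hypothetical module)
from requests import Session, RequestException

class RelaySession:
    """Wraps a requests.Session and recreates it when a call fails."""

    def __init__(self, login_url):
        self.login_url = login_url
        self._session = self._create()

    def _create(self):
        session = Session()
        session.post(self.login_url)  # Setup Session by logging in
        return session

    def get(self, url, **kwargs):
        try:
            return self._session.get(url, **kwargs)
        except RequestException:
            # Recreate the session once and retry the request
            self._session = self._create()
            return self._session.get(url, **kwargs)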

Related

FastAPI - How to get app instance inside a router?

I want to get the app instance in my router file. What should I do?
My main.py is as follows:
# ...
app = FastAPI()
app.machine_learning_model = joblib.load(some_path)
app.include_router(some_router)
# ...
Now I want to use app.machine_learning_model in some_router's file. What should I do?
Since FastAPI is actually Starlette underneath, you could store the model on the app instance using the generic app.state attribute, as described in Starlette's documentation (see State class implementation too). Example:
app.state.ml_model = joblib.load(some_path)
As for accessing the app instance (and subsequently, the model) from outside the main file, you can use the Request object. As per Starlette's documentation, where a request is available (i.e., endpoints and middleware), the app is available on request.app. Example:
from fastapi import Request

@router.get('/')
def some_router_function(request: Request):
    model = request.app.state.ml_model
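Putting the two snippets together, a minimal sketch of how this could look across files; the module names, route, and model path are illustrative, not from the original question:
# main.py
import joblib
from fastapi import FastAPI

from routers import predictions  # hypothetical router module

app = FastAPI()
app.state.ml_model = joblib.load('model.joblib')  # illustrative path
app.include_router(predictions.router)

# routers/predictions.py
from fastapi import APIRouter, Request

router = APIRouter()

@router.get('/predict')
def predict(request: Request):
    model = request.app.state.ml_model  # same object loaded at startup
    return {'model_type': type(model).__name__}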

How can I share a python Class between applications without installing it?

I have a Flask application that accesses a DB via a class that encapsulates DB access. I need to use this same class outside of the Flask application for some regular jobs that access the same DB.
/databases/database.db
/website/application/myblueprint/views.py
/website/application/myblueprint/db_class.py
/scripts/log_reading.py
Both views.py and log_reading.py need to use db_class.py, but you can't import from above your own package.
I could make db_class.py its own package and install it in each venv, but then every time I edit it I have to reinstall it in each place. Plus there's the overhead of the packaging setup for a single module.
I could put the file on the Python site path, either by moving it or by adding to the path, but that feels wrong and I'm not sure it would work with venvs.
I could symlink it, but that also feels wrong.
I'm not using flask models for the DB, but I don't think that would solve my problem anyway.
If you want to use db or the session, you have to create an application context; outside of an application context you can't use them.
from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
import os

APP_SETTINGS = os.getenv("APP_SETTINGS", "your config path")

def _create_db_session(connection_string):
    engine = create_engine(connection_string)
    session_factory = sessionmaker(bind=engine)
    session = scoped_session(session_factory)
    return session

app = Flask(__name__)
app.config.from_object(APP_SETTINGS)

db_url = app.config.get("SQLALCHEMY_DATABASE_URI")
session = _create_db_session(db_url)

# Then you can use:
# session.query(...) to query the database
# session.commit(), session.remove()
# the same operations you would normally do with a DB session
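Outside of Flask, for example in scripts/log_reading.py, you don't need an application context at all, as long as you build the session yourself and supply the connection string directly. A rough sketch, where the environment variable name is an assumption:
# scripts/log_reading.py
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

def _create_db_session(connection_string):
    engine = create_engine(connection_string)
    return scoped_session(sessionmaker(bind=engine))

session = _create_db_session(os.environ["SQLALCHEMY_DATABASE_URI"])
# use session.query(...), session.commit() as usual, then clean up:
session.remove()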

Using Flask SQLAlchemy from worker threads

I have a python app that uses Flask RESTful as well as Flask SQLAlchemy. Part of the API I'm writing has the side effect of spinning off Timer objects. When a Timer expires, it executes some database queries. I'm seeing an issue in which code that is supposed to update rows in the database (a sqlite backend) is actually not issuing any UPDATE statements. I have verified this by turning the SQLALCHEMY_ECHO flag on to log the SQL statements. Whether or not the code works seems to be random. About half the time it fails to issue the UPDATE statement. See full example below.
My guess here is that Flask-SQLAlchemy does not work properly when called from a worker thread. I think part of the point of Flask-SQLAlchemy is to manage the SQLAlchemy sessions for you per API request. Obviously, since there are no API requests going on when the Timer expires, I can see how things might not work properly.
Just to test this, I went ahead and wrote a simple data access layer using python's sqlite3 interface and it seems to solve the problem.
I'd really rather not have to rewrite a bunch of data access code though. Is there a way to get Flask SQLAlchemy to work properly in this case?
Sample code
Here's where I set up the flask app and save off the SQLAlchemy db object:
from flask import Flask
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
import db_conn
flask_app = Flask(__name__)
flask_app.config.from_object('config')
CORS(flask_app)
api = Api(flask_app)
db_conn.db = SQLAlchemy(flask_app)
api.add_resource(SomeClass, '/abc/<some_id>/def')
Here's how I create the ORM models:
import db_conn

db = db_conn.db

class MyTable(db.Model):
    __tablename__ = 'my_table'

    id = db.Column(db.Integer, primary_key=True)
    phase = db.Column(db.Integer, nullable=False, default=0)

    def set_phase(self, phase):
        self.phase = phase
        db.session.commit()
Here's the API handler with timer and the database call that is failing:
from flask_restful import Resource
from threading import Timer

from models import MyTable
import db_conn
import global_store

class SomeClass(Resource):
    def put(self, some_id):
        global_store.saved_id = some_id
        self.timer = Timer(60, self.callback)
        self.timer.start()
        return '', 204

    def callback(self):
        row = MyTable.query.filter_by(id=global_store.saved_id).one()
        # sometimes this works, sometimes it doesn't
        row.set_phase(1)
        db_conn.db.session.commit()
I'm guessing that in your callback you aren't actually changing the value of the object. SQLAlchemy won't issue DB UPDATE calls if the session state is not dirty. So if the phase is already 1 for some reason, there is nothing to do.
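One way to check that guess is to look at whether SQLAlchemy actually considers the row modified before committing. A purely diagnostic sketch of the callback, assuming the same db_conn module as in the question:
def callback(self):
    row = MyTable.query.filter_by(id=global_store.saved_id).one()
    row.phase = 1  # assign directly so we can inspect the session before committing
    # False here suggests the value was already 1 and no UPDATE will be emitted
    print("row modified:", db_conn.db.session.is_modified(row))
    db_conn.db.session.commit()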

Django RabbitMQ consumer

I am building a Django application which will be contacted by multiple external applications. Django application is supposed to provide UI and populate the database with the data received from external applications.
My first idea was to use django-rest-framework, but this seemed like it would create a tightly coupled system, because every external app would have to contact the Django app via a REST call.
My other idea is best described with a picture: http://imgur.com/vakZvQs Several publishers would create messages on a RabbitMQ queue and my Django app would consume those and create the appropriate models in the DB.
Is something like this possible? I've used async examples from pika library for publisher and consumer and the messages are flowing as expected. Throwing Django in the mix produces errors such as:
RuntimeError: Model class django.contrib.contenttypes.models.ContentType doesn't declare an explicit app_label
django.core.exceptions.ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Code excerpts:
# pika consumer
def on_message(self, unused_channel, basic_deliver, properties, body):
    # invoking view function
    from myapp.views import create_one_foo
    create_one_foo()
    self.acknowledge_message(basic_deliver.delivery_tag)

# views.py
from .models import Foo

def create_one_foo():
    foo = Foo()
    foo.bar = "bar"
    foo.save()
I had similar issues; they were solved by calling these two lines before importing the models.
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "admin.settings")
django.setup()
and then
from .models import Foo
I am still learning Django; if I find a detailed explanation I will edit my answer.
Use this article for creating a consumer:
import json
from os import environ
from sys import path

import pika
import django

# Path to the project directory (the one containing the Likes package with settings.py)
path.append('/home/john/Dev/SECTION/Likes')
environ.setdefault('DJANGO_SETTINGS_MODULE', 'Likes.settings')
django.setup()

from likes.models import Quote

connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost', heartbeat=600, blocked_connection_timeout=300))
channel = connection.channel()
channel.queue_declare(queue='likes')

def callback(ch, method, properties, body):
    print("Received in likes...")
    print(body)
    data = json.loads(body)
    print(data)
    if properties.content_type == 'quote_created':
        quote = Quote.objects.create(id=data['id'], title=data['title'])
        quote.save()
        print("quote created")
    elif properties.content_type == 'quote_updated':
        quote = Quote.objects.get(id=data['id'])
        quote.title = data['title']
        quote.save()
        print("quote updated")
    elif properties.content_type == 'quote_deleted':
        quote = Quote.objects.get(id=data)
        quote.delete()
        print("quote deleted")

channel.basic_consume(queue='likes', on_message_callback=callback, auto_ack=True)
print("Started Consuming...")
channel.start_consuming()
Look at Celery: http://www.celeryproject.org. It's a framework that helps you create RabbitMQ-based workers.
Run a Celery worker service on the host where your Django app lives. If you need to change the state of the Django DB, just import the Django models and write the data to the database from the worker. Otherwise you can run Celery workers inside the Django app, as sketched below.
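A minimal sketch of that setup with Celery; the project, app, and task names are made up for illustration, and Celery's Django integration usually lives in a celery.py module next to settings.py:
# yourproject/celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yourproject.settings')
app = Celery('yourproject', broker='amqp://localhost')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

# yourapp/tasks.py
from celery import shared_task
from .models import Foo

@shared_task
def create_one_foo(bar):
    # Runs in the worker process, where Django is already set up
    Foo.objects.create(bar=bar)
External applications (or the Django app itself) would then enqueue work with create_one_foo.delay('bar') instead of publishing raw pika messages.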

How to structure a CherryPy app into a MVC architecture?

I was planning for my structure to be something similar to this:
appname/libs/user
/football
/tax
Where the library user will have the models for the User, the controller which would expose the REST JSON API, and the views.
The problems that I'm facing currently can be divided into two major questions, and they mainly stem from having used Django for a while. I'm fairly new to CherryPy and SQLAlchemy.
How to define models in each of these libraries? The problem I face is that I have to inherit the declarative Base and engine in every model and run it as a standalone app for its models to be generated. Is there a mechanism where I can plug in the libraries and have all the models pulled in and created in the database? (Something that Django does.)
How to define routes/APIs? (a urls.py equivalent)
How about defining the declarative base (SQLAlchemy) in appname/db/__init__.py and, for each of the libs, importing the base from appname in appname/libs/NAME/models.py:
import appname.db

Base = appname.db.Base

class MyUser(Base):
    ...
To get a database session, just use a scoped session. For example, this could be the basic content of appname/db/__init__.py (or just db.py if you don't want to define additional base models in appname/db/models.py):
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy import engine_from_config

__all__ = ['session', 'ses']

ses = session = scoped_session(sessionmaker())

def load_engine(config):
    engine = engine_from_config(config, prefix='')
    session.configure(bind=engine)
Then set a tool to remove the session from the thread locals when the request has ended:
import cherrypy
from appname import db

def remove_db_session():
    db.session.remove()

cherrypy.tools.removedbs = cherrypy.Tool('on_end_request', remove_db_session)
From that point forward, just use the session as normal from any part of your application, for example:
from appname import db
from appname.libs.user import models

class User:
    exposed = True

    def GET(self, id):
        db.ses.query(models.User).filter_by(id=id)
        # and let the tool remove the session
        # when the request finishes
By the way, to enable that removedbs tool, just make sure you execute the cherrypy.tools.removedbs = ... assignment somewhere. I usually put that in appname/cptools/tool and then, in the config dictionary or file, set tools.removedbs.on to True.
Working with CherryPy means that you build the application tree yourself; there is no extra magic, but you do need a central place to build the full tree if you want to use the MethodDispatcher or the DefaultDispatcher.
In this case I recommend the MethodDispatcher, and this post can probably give you a little more perspective; this one, from my blog, emulates the GitHub API without any base handler.
There is an alternative to get more Django-like routes with the RoutesDispatcher, but you will lose a lot of functionality from the tools; a rough sketch follows below.
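For completeness, a rough sketch of the RoutesDispatcher variant; it needs the Routes package installed, and the controller, route, and names here are purely illustrative:
import cherrypy

class UserController:
    def show(self, id):
        return 'user %s' % id

# Map URLs to controller methods instead of building an object tree
dispatcher = cherrypy.dispatch.RoutesDispatcher()
dispatcher.connect('user', '/user/{id}', controller=UserController(), action='show')

config = {'/': {'request.dispatch': dispatcher}}
cherrypy.quickstart(None, '/', config)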
To show an example with the MethodDispatcher and building your own object tree: from the current structure you can have a build function in appname/builder.py and do something like this:
from appname.views import Main
from appname.libs import user, football

appmap = {'user': user,
          'football': football}

def build_app(*apps):
    root = Main()
    for app in apps:
        appmodule = appmap[app]
        appmodule.attach(root)
    return root
And inside the appname/libs/user/__init__.py
from appname.libs.user import views

def build_tree():
    root = views.Main()
    root.management = views.Management()
    return root

def attach(root):
    root.user = build_tree()
That's just one way to structure the application; there is also a repository of CherryPy recipes which is pretty helpful.
