From an Alembic migration version script, I added the line below to suppress an event trigger.
My question is: do I need to re-register it at the end of the script? Or does the migration use a totally different session from the rest of the application, so it won't matter?
from alembic import op
from sqlalchemy.orm.session import Session
from sqlalchemy import event

def upgrade():
    session = Session(bind=op.get_bind())
    event.remove(SomeModel, 'after_insert', after_insert_handle)
    ...
    event.listen(SomeModel, 'after_insert', after_insert_handle)  # is this line necessary?
    session.commit()
I created a simple Django app. Inside this app I have a single checkbox. I save this checkbox's state to the database: if it's checked, the database holds True; if the checkbox is unchecked, it holds False. There is no problem with this part. Now I've created a function that prints this checkbox's state value from the database every 10 seconds, all the time.
I put the function into the views.py file, and it looks like this:
def get_value():
    while True:
        value_change = TurnOnOff.objects.first()
        if value_change.turnOnOff:
            print("true")
        else:
            print("false")
        time.sleep(10)
The point is that the function should work all the time. For example, if in models.py I have checkbox = models.BooleanField(default=False), then after I run python manage.py runserver it should give me output like:
Performing system checks...
System check identified no issues (0 silenced).
January 04, 2019 - 09:19:47
Django version 2.1.3, using settings 'CMS.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
true
true
true
true
Then, if I visit the website and change the state, it should print false; that part is obvious. But as you can see, the problem is how to start this method. It should run all the time, even if I haven't visited the website yet. And this part confuses me. How do I do this properly?
I should admit that I tried some solutions:
putting this function at the end of the manage.py file,
putting this function into def ready(self),
creating a middleware class and putting the method there (example code below).
But these solutions don't work.
The middleware class:
class SimpleMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response
        get_value()
You can achieve this by using the AppConfig.ready() hook and combining it with a sub-process/thread.
Here is an example apps.py file (based on the tutorial Polls app):
import time
from multiprocessing import Process

from django.apps import AppConfig
from django import db


class TurnOnOffMonitor(Process):
    def __init__(self):
        super().__init__()
        self.daemon = True

    def run(self):
        # This import needs to be delayed. It needs to happen after apps are
        # loaded, so we put it into the method here (it won't work as a
        # top-level import).
        from .models import TurnOnOff

        # Because this is a subprocess, we must ensure that we get new
        # connections dedicated to this process to avoid interfering with the
        # main connections. Closing any existing connection *should* ensure
        # this.
        db.connections.close_all()

        # We can do an endless loop here because we flagged the process as
        # being a "daemon". This ensures it will exit when the parent exits.
        while True:
            value_change = TurnOnOff.objects.first()
            if value_change.turnOnOff:
                print("true")
            else:
                print("false")
            time.sleep(10)


class PollsConfig(AppConfig):
    name = 'polls'

    def ready(self):
        monitor = TurnOnOffMonitor()
        monitor.start()
Celery is the thing that best suits your needs from what you've described.
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready).
You need to create a task, run it periodically, and call it manually if you want to trigger it (in some view/controller).
NOTE: do not use time.sleep(10)
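As a rough illustration (not part of the original answer), a periodic task for the question's TurnOnOff model could look like the sketch below; the module paths, task name, and Redis broker URL are assumptions for the example:

# tasks.py -- a hedged sketch, not a definitive implementation
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')  # assumed broker URL

@app.task(name='check_state')
def check_state():
    # Import the model lazily so Django's apps are fully loaded first
    # ("myapp" is a hypothetical app name standing in for the real one).
    from myapp.models import TurnOnOff
    value_change = TurnOnOff.objects.first()
    print("true" if value_change.turnOnOff else "false")

# Let Celery beat dispatch the task every 10 seconds instead of sleeping.
app.conf.beat_schedule = {
    'check-state-every-10s': {
        'task': 'check_state',
        'schedule': 10.0,
    },
}

A development worker with an embedded beat scheduler (for example, celery -A tasks worker --beat) would then run the task on that schedule, and a view can still call check_state.delay() to trigger it manually.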
My Django app requires a background thread which executes every 60 seconds, reads entries from a model called "rawdata", then runs some calculations based on that data and generates a new record in another model called "results".
After doing some research I realize this can be done with Celery; however, I feel a full async task framework is a bit of overkill for what I need. I am new to Django and am trying to determine whether my implementation below is safe.
My app launches the background process using the "ready" hook of Django like this:
import os

from django.apps import AppConfig


class Myapp(AppConfig):
    name = 'Myapp'
    ready_done = False

    def ready(self):
        if os.environ.get('RUN_MAIN') and not Myapp.ready_done:
            from scripts.recordgen import RecordGenerator
            Myapp.ready_done = True
            Myapp.record_generator = RecordGenerator()
In the app directory I have a /scripts/recordgen.py which spawns a basic independent thread adding new data to the database every 60 seconds. Something like this:
import threading
import time

from Myapp.models import results, rawdata


class RecordGenerator(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.start()

    def run(self):
        while True:
            time.sleep(60)
            # Query the rawdata here and do some calculations.
            # Just filling with dummy data for this example.
            dummydata = "dummy data"
            new_record = results()
            new_record.data = dummydata
            new_record.save()
If records are only added to the results table in this thread, and are only read, updated, and deleted through Django views, will this be a thread-safe implementation? The database I'm using is PostgreSQL.
(SQLAlchemy 1.0.11, Python 2.7)
Column definition in model:
public_data = Column(MutableDict.as_mutable(JSON), nullable=False)
Event listener in same model file:
def __listener(target, value, oldvalue, initiator):
    ...  # do some stuff

event.listen(User.public_data, 'set', __listener)
Change that should trigger the set event:
# this doesn't work
user.public_data['address'] = ''
# but this works
user.public_data = {}
The event is never triggered when only a key inside the JSON attribute is modified. I stepped through the SQLAlchemy code and found that after the above line is executed, the model's changed() method is called, which I assume should be responsible for firing the event. Am I doing something wrong, or is this not supported?
I'm using Celery (3.1.8) in my Flask application. This is my configuration with the Flask application:
celery.py
from __future__ import absolute_import

from celery import Celery
from cuewords.settings import CELERY_BROKER_URL, CELERY_RESULT_BACKEND

app = Celery('proj',
             broker=CELERY_BROKER_URL,
             backend=CELERY_RESULT_BACKEND)
app.conf.update(CELERY_TASK_RESULT_EXPIRES=3600)

if __name__ == '__main__':
    app.start()
settings.py
CELERY_BROKER_URL='redis://localhost:6379/0'
CELERY_RESULT_BACKEND='redis://localhost:6379/0'
BROKER_TRANSPORT = 'redis'
api.py
class Webcontent(Resource):
    def post(self, session=session):
        args = self.parser.parse_args()
        site_url = args["url"]
        url_present = Websitecontent.site_url_present(session, site_url)
        if site_url.strip() != "" and not url_present:
            try:
                # add data and commit
                session.commit()
                websitecontent = Websitecontent(params*)
                websitecontent.update_url(id, session)
            except:
                session.rollback()
                raise
            finally:
                session.close()
        else:
            return "No data created / data already present"
And in my model I'm adding a method as a task:
model.py
from cuewords.celery import app


class Websitecontent(Base):

    @app.task(name='update_url')
    def update_url(self, id, session):
        ...  # code goes here
And this is how I run Celery from the command prompt:
celery -A cuewords.celery worker
I'm also using Flower to monitor the tasks. I can see a worker running, but I can't see any tasks; it's empty. Any idea what I'm missing or doing wrong?
Thanks
The problem is that your tasks never get imported into the Python runtime when running the worker(s). The celery command is your entry point, and you're telling Celery to import your cuewords.celery module because that's where your app instance resides. However, this is where the chain of events ends, and no further Python code is imported.
Now, the most common mistake is to import the tasks into the same module as the Celery app instance. Unfortunately, this leaves two modules trying to import things from each other and results in a circular import error. This is no good.
To get around this one could import the task functions into the Celery app module and register them without using the decorator style. For example:
from celery import Celery
from models import my_task
app = Celery()
app.task(name='my_task')(my_task)
This would remove the need to import the app instance in your model module.
However, you're using method tasks. Method tasks need to be treated differently than function tasks, as noted here: http://docs.celeryproject.org/en/latest/reference/celery.contrib.methods.html. Method tasks are different from function tasks because they are associated with an instance of an object; in other words, the task is a method bound to a class instance. So to use the previous style of registering tasks, you'd need an instance of the class first. To get around this, you should consider making your tasks functions instead of methods.
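For instance, applied to the code in the question, one possible arrangement (a sketch under assumed module paths, not the original code) would be to pull update_url out of the model class and register it in the Celery module:

# cuewords/tasks.py -- hypothetical module: update_url as a plain function
def update_url(website_id, session):
    # ...the code that used to live in Websitecontent.update_url goes here...
    pass

# cuewords/celery.py -- import the function and register it without the decorator
from celery import Celery
from cuewords.settings import CELERY_BROKER_URL, CELERY_RESULT_BACKEND
from cuewords.tasks import update_url

app = Celery('proj', broker=CELERY_BROKER_URL, backend=CELERY_RESULT_BACKEND)
app.task(name='update_url')(update_url)

With that layout, the worker started via celery -A cuewords.celery worker imports cuewords.celery, which in turn imports and registers update_url, so the task shows up in Flower, and the model module no longer needs to import the Celery app.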
I don't understand why, but this code never calls after_flush/before_flush/after_flush_postexec:
# -*- coding: utf-8 -*-
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension


class AfterFlushExtension(SessionExtension):
    def before_commit(self, session):
        print "> before_commit"

    def after_commit(self, session):
        print "> after_commit"

    def before_flush(self, session, flush_context, instances):
        print '> before_flush'

    def after_flush(self, session, flush_context):
        print '> after_flush'

    def after_flush_postexec(self, session, flush_context):
        print '> after_flush_postexec'


session = scoped_session(sessionmaker(extension=AfterFlushExtension()))
session.flush()
session.commit()
And the result:
$ python ~/Dropbox/playground/python/sqlalchemy_hook_test/main.py
> before_commit
> after_commit
Michael Bayer answered on SQLAlchemy's mailing list (https://groups.google.com/d/msg/sqlalchemy/GrMZGtJ-yc8/mCviGB6g9HYJ):
The flush events only fire off if there's actually something to be flushed. It would be inefficient for the events to be emitted for every flush(), as flush is in fact called a great number of times, on every query, assuming autoflush is enabled. For this reason a flush() with a session that has no change events of any kind quickly checks some flags and returns. Think of before_flush() really being called before_flush_on_pending_changes() if that helps.
I'll check the docstrings to see if any clarification is needed.
Thanks, Michael
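To see the behaviour Michael describes, here is a small self-contained sketch (not from the thread) using the event API: the before_flush hook stays silent for an empty flush() and fires only once the session has pending work. The Thing model and the in-memory SQLite engine are invented for the example.

from sqlalchemy import create_engine, Column, Integer, event
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thing(Base):
    # A throwaway model, just so there is something to flush.
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

@event.listens_for(session, 'before_flush')
def before_flush(session, flush_context, instances):
    print('> before_flush')

session.flush()       # nothing pending: the hook does not fire
session.add(Thing())  # now there is a pending change
session.flush()       # the hook fires once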