(SQLAlchemy 1.0.11, Python 2.7)
Column definition in model:
public_data = Column(MutableDict.as_mutable(JSON), nullable=False)
Event listener in same model file:
def __listener(target, value, oldvalue, initiator):
    ...  # do some stuff

event.listen(User.public_data, 'set', __listener)
Change that should trigger set event:
# this doesn't work
user.public_data['address'] = ''
# but this works
user.public_data = {}
The event is never triggered when only a key inside the JSON value is modified. I stepped through the SQLAlchemy code and found that after the first line above is executed, the MutableDict's changed() method is called, which I assumed would be responsible for firing the event. Am I doing something wrong, or is this not supported?
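(For context: the 'set' event only fires on attribute assignment, i.e. user.public_data = {}. In-place mutations go through MutableDict, whose changed() method flags the parent attribute as dirty for the unit of work but does not emit any attribute event. A minimal sketch of one possible workaround, assuming a MutableDict subclass is acceptable, is to hook changed() itself; CallbackMutableDict and its callback attribute are hypothetical names, not SQLAlchemy API:)

from sqlalchemy.ext.mutable import MutableDict

class CallbackMutableDict(MutableDict):
    """MutableDict that also invokes a user hook on every in-place change (sketch)."""
    callback = None  # hypothetical hook: set to a callable taking the dict instance

    def changed(self):
        if CallbackMutableDict.callback is not None:
            CallbackMutableDict.callback(self)
        super(CallbackMutableDict, self).changed()

# then, in the model:
# public_data = Column(CallbackMutableDict.as_mutable(JSON), nullable=False)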
Related
In an Alembic migration version script, I added the line below to suppress an event trigger.
My question is: do I need to re-register the listener at the end of the script? Or does the migration use a totally different session from the rest of the application, so it won't matter?
from alembic import op
from sqlalchemy.orm.session import Session
from sqlalchemy import event

def upgrade():
    session = Session(bind=op.get_bind())
    event.remove(SomeModel, 'after_insert', after_insert_handle)
    ...
    event.listen(SomeModel, 'after_insert', after_insert_handle)  # is this line necessary?
    session.commit()
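For what it's worth, event.listen() and event.remove() on a model class are process-global: the registration lives on the class/mapper, not on any Session, so removing the listener affects every session in the process running the migration. If Alembic runs as its own process, nothing carries over to the application either way; but if you run migrations in-process, a defensive sketch is to restore the listener in a finally block:

def upgrade():
    session = Session(bind=op.get_bind())
    event.remove(SomeModel, 'after_insert', after_insert_handle)
    try:
        ...  # migration work that must not fire the handler
        session.commit()
    finally:
        # re-register so later code in this process sees the listener again
        event.listen(SomeModel, 'after_insert', after_insert_handle)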
I have a large Python 3.6 system where multiple processes and threads interact with each other and the user. Simplified, there is a Scheduler instance (subclasses threading.Thread) and a Worker instance (subclasses multiprocessing.Process). Both objects run for the entire duration of the program.
The user interacts with the Scheduler by adding Task instances and the Scheduler passes the task to the Worker at the correct moment in time. The worker uses the information contained in the task to do its thing.
Below is some stripped-down and simplified code from the project:
import threading
import multiprocessing

class Task:
    def __init__(self, name: str):
        self.name = name
        self.state = 'idle'

class Scheduler(threading.Thread):
    def __init__(self, worker: 'Worker'):
        super().__init__()
        self.worker = worker
        self.start()

    def run(self):
        while True:
            # Do stuff until the user schedules a new task
            task = Task('example')  # <-- in reality the Task instance is not created here; the thread gets it from elsewhere
            task.state = 'scheduled'
            self.worker.change_task(task)
            # Do stuff until task.state == 'finished'

class Worker(multiprocessing.Process):
    def __init__(self):
        super().__init__()
        self.current_task = None
        self.start()

    def change_task(self, new_task: Task):
        self.current_task = new_task
        self.current_task.state = 'accepted-idle'

    def run(self):
        while True:
            # Do stuff until the current task is updated
            self.current_task.state = 'accepted-running'
            # Task is running
            self.current_task.state = 'finished'
The system used to be structured so that the task contained multiple multiprocessing.Event objects, one for each of its possible states. Back then, not the whole Task instance was passed to the worker, but each of the task's attributes individually. As they were all multiprocessing-safe, this worked, with a caveat: the events that worker.run changed had to be created in worker.run and passed back to the task object for it to work. Not only is this a less than ideal solution, it no longer works with some changes I am making to the project.
Back to the current state of the project, as described by the Python code above. As is, this will never work because nothing makes it multiprocessing-safe at the moment. So I implemented a Proxy/BaseManager structure so that when a new Task is needed, the system gets it from the multiprocessing manager. I use this structure in a slightly different way elsewhere in the project as well. The issue is that worker.run never notices that self.current_task has been updated; it remains None. I expected this to be fixed by using the proxy, but clearly I am mistaken.
import types
import typing
from multiprocessing.managers import BaseManager, NamespaceProxy

def Proxy(target: typing.Type) -> typing.Type:
    """
    Normally a Manager only exposes object methods. A NamespaceProxy can be used when registering the object with
    the manager to expose all of its attributes. This also works for attributes created at runtime.
    https://stackoverflow.com/a/68123850/8353475
    1. Instead of exposing all the attributes manually, we effectively override __getattr__ to do it dynamically.
    2. Instead of defining a class that subclasses NamespaceProxy for each specific object class that needs to be
    proxied, this method is used to do it dynamically. The target parameter should be the class of the object you
    want to generate the proxy for. The generated proxy class will be returned.
    Example usage: FooProxy = Proxy(Foo)
    :param target: The class of the object to build the proxy class for
    :return: The generated proxy class
    """
    # __getattr__ is called when an attribute 'bar' is requested from 'foo' and it is not found, e.g. 'foo.bar'.
    # 'bar' can be a class method as well as a variable. The call gets rerouted from the base object to this proxy,
    # where it is processed.
    def __getattr__(self, key):
        result = self._callmethod('__getattribute__', (key,))
        # If the attribute call was for a method we need some further processing
        if isinstance(result, types.MethodType):
            # A wrapper around the method that passes the arguments, actually calls the method and returns the
            # result. Note that at this point wrapper() does not get called, just defined.
            def wrapper(*args, **kwargs):
                # Call the method and pass the return value along
                return self._callmethod(key, args, kwargs)

            # Return the wrapper method (not the result, but the method itself)
            return wrapper
        else:
            # If the attribute call was for a variable it can be returned as is
            return result

    dic = {'types': types, '__getattr__': __getattr__}
    proxy_name = target.__name__ + "Proxy"
    ProxyType = type(proxy_name, (NamespaceProxy,), dic)
    # This is a tuple of all the attributes that are/will be exposed. We copy all of them from the base class.
    ProxyType._exposed_ = tuple(dir(target))
    return ProxyType

class TaskManager(BaseManager):
    pass

TaskProxy = Proxy(Task)
TaskManager.register('get_task', callable=Task, proxytype=TaskProxy)
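For what it's worth, here is a minimal usage sketch of the manager above (the 'demo' task name and the show() helper are made up for illustration). It shows the part that does work: state set through the proxy is visible from another process. The part that fails in the design above is different: self.current_task = new_task runs in the parent process, and since the Worker object was pickled into the child at start(), the child's copy of current_task is never rebound and stays None. The task's contents are shared through the proxy, but the reference to it still has to reach the child through something multiprocessing-aware, such as a queue.

from multiprocessing import Process

def show(t):
    # runs in a child process; every read goes through the proxy to the manager
    print(t.name, t.state)

if __name__ == '__main__':
    manager = TaskManager()
    manager.start()

    task = manager.get_task('demo')  # returns a TaskProxy, not a plain Task
    task.state = 'scheduled'         # visible to every process holding the proxy

    p = Process(target=show, args=(task,))
    p.start()
    p.join()                         # prints: demo scheduled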
I have a problem with Mongo. When my automation tests end, I need to trash all the data and objects they created, so I wrote a script. In this script I delete rows from a few tables. But when I run it, the class never runs. Where is my problem?
There is no message in the console, no value at all.
from pymongo import MongoClient

class deleteMongo():
    def deleteFirst(self):
        client = MongoClient('databaseaddress')
        db = client.TableData
        db.Employe.delete_one({"name": "EmployeOne"})

    def deleteSecond(self):
        client = MongoClient('databaseaddress')
        db = client.PersonData
        db.Person.delete_one({"name": "PersonOne"})

    def deleteThird(self):
        client = MongoClient('databaseaddress')
        db = client.AutoData
        db.Auto.delete_one({"name": "AutoThird"})
If I am understanding your question correctly, you are trying to run the script above and it's not doing anything?
If this is your complete module, you are never calling the class; you are only defining it.
Also, the parentheses in class deleteMongo(): are redundant, since in Python 3 every class implicitly inherits from object. With the current setup, you should either use plain functions (def) instead, or set up the class to initialize objects shared by its methods.
Based on your current code, try this:
from pymongo import MongoClient

class deleteMongo:
    def __init__(self, databaseAddress):
        # if the databaseAddress is always the same, you can remove it
        # from the arguments and hardcode it here
        self.client = MongoClient(databaseAddress)

    def deleteFirst(self):
        db = self.client.TableData
        db.Employe.delete_one({"name": "EmployeOne"})

    def deleteSecond(self):
        db = self.client.PersonData
        db.Person.delete_one({"name": "PersonOne"})

    def deleteThird(self):
        db = self.client.AutoData
        db.Auto.delete_one({"name": "AutoThird"})
and then when you need to call one of the class functions, call it like this:
deleteMongo(databaseAddress='someaddress').deleteFirst()
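As a side note on the design: if you run several of the deletes in one go, instantiating the class once lets all the calls reuse a single MongoClient connection instead of opening one per call, e.g.:

deleter = deleteMongo(databaseAddress='someaddress')
deleter.deleteFirst()
deleter.deleteSecond()
deleter.deleteThird()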
I have a cache handler function that processes functions placed in a queue by threads.
The cache handler is called when the console is idle. I need to be able to know if a function is being processed by the cache handler, or if it's executing outside of the cache handler loop.
Some logic like so:
if the cache handler is in the calling function's stack, return True.
Here's the cache handler code:
# Processing all console items in queue.
def process_console_queue():
    log = StandardLogger(logger_name='console_queue_handler')
    if not CONSOLE_CH.CONSOLE_QUEUE:
        return

    set_console_lock()
    CONSOLE_CH.PROCESSING_CONSOLE_QUEUE.acquire()
    print('\nOutputs held during your last input operation: ')
    while CONSOLE_CH.CONSOLE_QUEUE:
        q = CONSOLE_CH.CONSOLE_QUEUE[0]
        remove_from_console_queue()
        q[0](*q[1], **q[2])
    CONSOLE_CH.PROCESSING_CONSOLE_QUEUE.release()
    release_console_lock()
    return
If that code calls a function which calls a function which calls a function... then anywhere in that chain (i.e. whenever process_console_queue is somewhere up the call stack) the called function should be able to return True.
How is that done?
How about using a global threading.local object with an attribute, in_cache_handler?
Have the cache handler set the attribute to True on entry, and set it to False on exit. Then any function that examines the attribute can tell whether the cache handler is somewhere below on the stack.
import threading

thread_local_object = threading.local()
thread_local_object.in_cache_handler = False

def cache_handler():
    try:
        thread_local_object.in_cache_handler = True
        ...  # process the queue
    finally:
        thread_local_object.in_cache_handler = False

def some_random_function():
    # getattr() guards against threads that never entered cache_handler
    # and therefore never set the attribute on their view of the local
    if getattr(thread_local_object, 'in_cache_handler', False):
        ...
    else:
        ...
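For completeness, the literal stack check the question asks for can be sketched with the standard inspect module; it needs no bookkeeping, but it is slower and more brittle than the thread-local flag, since it matches on the handler's function name:

import inspect

def called_from_cache_handler():
    # walk the current call stack looking for the handler's frame
    return any(frame.function == 'process_console_queue'
               for frame in inspect.stack())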
I bind two keys to call two methods of my class. Is it possible to call the same method and know which key was pressed?
def initGui(self):
    self.keyAction = QAction("Test Plugin", self.iface.mainWindow())
    self.iface.registerMainWindowAction(self.keyAction, self.toggle_key_1)
    self.iface.addPluginToMenu("&Test plugins", self.keyAction)
    QObject.connect(self.keyAction, SIGNAL("triggered()"), self.toogle_layer_1)

    self.keyAction = QAction("Test Plugin", self.iface.mainWindow())
    self.iface.registerMainWindowAction(self.keyAction, self.toggle_key_2)
    self.iface.addPluginToMenu("&Test plugins", self.keyAction)
    QObject.connect(self.keyAction, SIGNAL("triggered()"), self.toogle_layer_2)
Yes, you can find out which object triggered the signal from within your slot (function) by using the QObject::sender() method. As the Qt docs say:
Returns a pointer to the object that sent the signal, if called in a
slot activated by a signal; otherwise it returns 0. The pointer is
valid only during the execution of the slot that calls this function
from this object's thread context.
Update:
For example, in your slot you can write:
def toogle_layer(self):
    # sender() is an instance method; call it on self (the receiving QObject)
    action = self.sender()
    if action == self.action1:
        pass  # do something
    elif action == self.action2:
        pass  # do something else
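Alternatively, a sketch using the new-style signal syntax (rather than the SIGNAL() macro from the question): bake the key's identity into each connection so the slot receives it directly. Note that keyAction1 and keyAction2 are hypothetical names here, since the question reuses a single self.keyAction for both actions:

# in initGui, connect both actions to one slot with a tag
self.keyAction1.triggered.connect(lambda checked=False: self.toogle_layer(1))
self.keyAction2.triggered.connect(lambda checked=False: self.toogle_layer(2))

def toogle_layer(self, key):
    if key == 1:
        pass  # do something
    elif key == 2:
        pass  # do something else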