Prefacing my question with the fact that I'm new to Celery and this (1) may have been answered somewhere else (if so, I couldn't find the answer) or (2) there may be a better way to accomplish my objective than what I'm directly asking.
Also, I am aware of celery.contrib.methods, but task_method does not quite accomplish what I am looking for.
My Objective
I would like to create a class mixin that turns a whole class into a Celery task. For example, a mixin represented by something like the code below (which right now does not run):
from celery import Task


class ClassTaskMixin(Task):

    @classmethod
    def enqueue(cls, *args, **kwargs):
        cls.delay(*args, **kwargs)

    def run(self, *args, **kwargs):
        Obj = type(self.name, (), {})
        Obj(*args, **kwargs).run_now()

    def run_now(self):
        raise NotImplementedError()
Unlike when using task_method, I do not want to fully instantiate the class before the task is queued and .delay() is called. Rather, I want to simply hand off the class name along with any relevant initialization parameters to the async process. The async process would then fully instantiate the class using the class name and the given initialization parameters, and then call some method (say .run_now(), for example) on the instantiated object.
Example Use Case
Constructing and sending email asynchronously would be an example use for the mixin I need.
class WelcomeEmail(EmailBase, ClassTaskMixin):

    def __init__(self, recipient_address, template_name, template_context):
        self.recipient_address = recipient_address
        self.template_name = template_name
        self.template_context = template_context

    def send(self):
        self.render_templates()
        self.construct_mime()
        self.archive_to_db()
        self.send_smtp_email()

    def run_now(self):
        self.send()
The above code would send an email in an async Celery process by calling WelcomeEmail.enqueue(recipient_address, template_name, template_context). Sending the email synchronously in-process would be accomplished by calling WelcomeEmail(recipient_address, template_name, template_context).send().
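To make the intended hand-off concrete, the flow I have in mind looks roughly like this (illustrative values only; the key point is that no WelcomeEmail instance exists until the worker picks up the task):

# Caller side (web process): only the task name and the init parameters
# are serialized onto the queue; the class is not instantiated here.
WelcomeEmail.enqueue('user@example.com', 'welcome.html', {'name': 'Ada'})

# Worker side (async process): the task name is resolved back to the
# WelcomeEmail class, which is then instantiated and run, i.e. roughly:
WelcomeEmail('user@example.com', 'welcome.html', {'name': 'Ada'}).run_now()

# Synchronous, in-process sending stays as simple as:
WelcomeEmail('user@example.com', 'welcome.html', {'name': 'Ada'}).send()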
Questions
Is there any reason that what I'm trying to do is very, very wrong within the Celery framework?
Is there a better way to structure the mixin to make it more Celery-onic than what I've proposed (better attribute names, different method structure, etc.)?
What am I missing to make the mixin functional in a use case as I've described?
Apparently this issue isn't hugely interesting to a lot of people, but... I've accomplished what I set out to do.
See pull request https://github.com/celery/celery/pull/1897 for details.
Related
There's a design issue with a problem I'm currently working on that requires a certain observable to emit values whenever it reaches a computation milestone, so I need values emitted intermediately. I discovered that Django implements something similar with signals, which is just an implementation of the observer pattern, but I really wanted to implement it the way Django signals are implemented, because observers subscribing to an observable (e.g. after saving a model to the db, i.e. the post_save signal) are very much decoupled, in the sense that the observing function only needs to be annotated with the @receiver decorator.
So I took a look at Django's source code and tried creating a simple version of the signal mechanism. I created a Signal class in one module, along with the decorator:
class Signal:
    def __init__(self):
        self.observers = []

    def connect(self, receiver):
        self.observers.append(receiver.__name__)

    def send(self, sender, **named):
        return [
            (receiver, mem_dec(signal=self, sender=sender, **named))
            for receiver in self.observers
        ]


class Child(Signal):
    def __init__(self):
        super().__init__()


def mem_dec(signal, **kwargs):
    def wrapper(func):
        signal.connect(func, **kwargs)
        return func
    return wrapper


if __name__ == "__main__":
    send_signal = Child()
    send_signal.send(sender="class_child", a="henry")
and I placed the decorated function in another module to serve as the observer:
from .signals import mem_dec
from .signals import Child

@mem_dec(Child, sender=Child)
def hello(a, **kwargs):
    print(f'hey {a}')
I noticed that my implementation doesn't work, which I expected, but the issue is that this is all I could reproduce from the source code. I'm very confused about how they implemented their send() and connect() mechanism. All I want to achieve is for the signal subclass to call all decorated functions. I made the Child class the signal and the sender at the same time because I'm just trying to reproduce the simplest example.
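For comparison, here is a minimal working sketch of the same idea, written as my own guess at the mechanism rather than Django's actual implementation: connect() stores the callables themselves (not their names), send() calls each of them, and the decorator only needs the signal instance.

class Signal:
    def __init__(self):
        self.observers = []

    def connect(self, receiver):
        # store the callable itself so send() can invoke it later
        self.observers.append(receiver)

    def send(self, sender, **named):
        # call every connected receiver and collect (receiver, result) pairs
        return [(receiver, receiver(sender=sender, **named))
                for receiver in self.observers]


def mem_dec(signal, **kwargs):
    def wrapper(func):
        signal.connect(func, **kwargs)
        return func
    return wrapper


child_signal = Signal()

@mem_dec(child_signal)
def hello(a, **kwargs):
    print(f'hey {a}')

child_signal.send(sender="class_child", a="henry")  # prints "hey henry"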
I'm developing a Django project using DRF. I also have used drf-yasg for documentation purposes.
Long story short: I'm using class-based views, and I have some APIs that are so similar that I decided to make a super-class and implement the common parts of the APIs in it! To make it clearer:
class MySuperApiView(APIView):
    permission_classes = [<some permission classes>]

    def uncommon(self):
        pass  # to be overridden in subclasses

    @swagger_auto_schema(request_body=request_body, responses=api_responses)
    def post(self, *args, **kwargs):
        # do some common stuff here
        self.uncommon()
        # do some other common stuff here
And I just override the uncommon method in child classes:
class SomeCustomApi(MySuperApiView):
    def uncommon(self):
        # do specific things here
        pass
It works fine, but I have a little problem: every API has its own api_responses, which is initialized in the swagger_auto_schema decorator in the super-class, and it's not possible to change it!
What do you recommend for such a situation? I really want to do OOP and observe the DRY principle.
I finally found the best way to do such a thing in Django! (So yes, I don't know how to deal with such a problem in other frameworks or languages!)
I moved the swagger_auto_schema decorator to the child class, using a class decorator named method_decorator. So first of all I had to import it:
from django.utils.decorators import method_decorator
And then I changed the super-class and the child class like this:
class MySuperApiView(APIView):
    permission_classes = [<some permission classes>]

    def uncommon(self):
        pass  # to be overridden in subclasses

    # <I removed the decorator from here!>
    def post(self, *args, **kwargs):
        # do some common stuff here
        self.uncommon()
        # do some other common stuff here
api_responses = ...  # responses which belong to "SomeCustomApi"

@method_decorator(name='post',
                  decorator=swagger_auto_schema(request_body=request_body,
                                                responses=api_responses))
class SomeCustomApi(MySuperApiView):
    def uncommon(self):
        # do specific things here
        pass
It works totally fine :) However, I would prefer a solution in which I don't have to repeat the decorator and instead just initialize the responses parameter. If you face such a problem in other languages and you have the answer for another language, please post your answer.
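One possible way to avoid repeating the decorator, sketched here as an idea rather than a documented drf-yasg pattern, is to let each child class declare only its own api_responses attribute and apply swagger_auto_schema once, in __init_subclass__ of the super-class (request_body is assumed to be defined elsewhere, as in the question):

from django.utils.decorators import method_decorator
from drf_yasg.utils import swagger_auto_schema
from rest_framework.views import APIView

request_body = ...  # shared request body, assumed defined as in the question


class MySuperApiView(APIView):
    api_responses = None  # child classes override this attribute

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.api_responses is not None:
            # decorate this subclass's post() with its own responses
            method_decorator(
                name='post',
                decorator=swagger_auto_schema(request_body=request_body,
                                              responses=cls.api_responses),
            )(cls)

    def uncommon(self):
        pass  # to be overridden in subclasses

    def post(self, *args, **kwargs):
        # common stuff here
        self.uncommon()


class SomeCustomApi(MySuperApiView):
    api_responses = {}  # only this API's responses need to be declared

    def uncommon(self):
        pass  # specific things here

This keeps the common post() flow in one place, while each API only supplies the data that varies.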
I am a beginner in Python, so please be... kind?
Anyway, I need to use a static method to call another method, which requires the use of "self" (and thus is a normal method, I believe). I am working with Telethon, a Python implementation of Telegram. I have tried other questions on SO, but I just can't seem to find a solution to my problem.
An overview of the program (please correct me if I'm wrong):
1) interactive_telegram_client is a child class of telegram_client, and it creates an instance.
# interactive_telegram_client.py
class InteractiveTelegramClient(TelegramClient):
    def __init__(self, session_user_id, api_id, api_hash, proxy=None):
        super().__init__(session_user_id, api_id, api_hash, proxy)
2) When the InteractiveTelegramClient runs, it adds an update handler, self.add_update_handler(self.update_handler), to constantly check for messages received/sent and print them to the screen.
# telegram_client.py
def add_update_handler(self, handler):
    """Adds an update handler (a function which takes a TLObject,
    an update, as its parameter) and listens for updates"""
    if not self.sender:
        raise RuntimeError(
            "You should connect at least once to add update handlers.")
    self.sender.add_update_handler(handler)
# interactive_telegram_client.py
@staticmethod
def update_handler(update_object):
    try:
        if type(update_object) is UpdateShortMessage:
            if update_object.out:
                print('You sent {} to user #{}'.format(update_object.message,
                                                       update_object.user_id))
            else:
                print('[User #{} sent {}]'.format(update_object.user_id,
                                                  update_object.message))
Now, my aim here is to send back an auto-reply message upon receiving a message. Thus, I think that adding a call to method InteractiveTelegramClient.send_ack(update_object) in the update_handler method would serve my needs.
# interactive_telegram_client.py
def send_ack(self, update_object):
    entity = update_object.user_id
    message = update_object.message
    msg, entities = parse_message_entities(message)
    msg_id = utils.generate_random_long()
    self.invoke(SendMessageRequest(peer=get_input_peer(entity),
                                   message=msg, random_id=msg_id,
                                   entities=entities, no_webpage=False))
However, as you can see, I require self to invoke this function (based on the readme, where I assume client refers to the same thing as self). Since the method update_handler is a static one, self is not passed in, and so I cannot make that call.
My possible strategies which have failed include:
1) Instantiating a new client for the auto-reply
- Creating a new client/conversation for each reply...
2) Making all the methods non-static
- Involves a tremendous amount of work, since other methods would have to be modified as well
3) Observer pattern (sounds like a good idea; I tried, but due to a lack of skill, did not succeed)
I was wondering if there's any other way to tackle this problem? Or perhaps it's actually easy, just that I have some misconception somewhere?
Forgot to mention that due to some restrictions on my project, I can only use Telethon, as opposed to looking at other alternatives. Adopting another library (like an existing auto-reply one) is allowed, though I did not really look into that since merging that and Telethon may be too difficult for me...
based on the readme, where I assume client to refer to the same thing as self
Correct, since the InteractiveTelegramClient subclasses the TelegramClient and hence, self is an instance of the extended client.
Instantiating a new client for the auto-reply - Creating a new client/conversation for each reply
This would require you to create another authorization and send another code request to login, because you can't work with the same *.session at the same time.
Making all the methods non-static - Involves a tremendous amount of work since other methods modified as well
It doesn't require such amount of work. Consider the following example:
class Example:
    def __init__(self, a):
        self.a = a

    def do_something(self):
        Example.other_method()

    @staticmethod
    def other_method():
        print('hello, world!')
Is equivalent to:
class Example:
    def __init__(self, a):
        self.a = a

    def do_something(self):
        self.other_method()

    @staticmethod
    def other_method():
        print('hello, world!')
It doesn't matter whether you use self. or the class name to refer to a static method from within the class. Since the InteractiveTelegramClient already uses self., all you would have to do is change:
@staticmethod
def update_handler(update_object):
for
def update_handler(self, update_object):
For more on the @staticmethod decorator, you can refer to the docs.
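With that change, the handler has access to instance state, so the auto-reply from the question becomes straightforward. A minimal sketch (the message check is simplified, imports are as in the question's snippets, and send_ack is the method defined in the question):

# interactive_telegram_client.py
class InteractiveTelegramClient(TelegramClient):

    def update_handler(self, update_object):  # instance method, no @staticmethod
        if isinstance(update_object, UpdateShortMessage) and not update_object.out:
            print('[User #{} sent {}]'.format(update_object.user_id,
                                              update_object.message))
            self.send_ack(update_object)  # self is available here now

Registering it with self.add_update_handler(self.update_handler) keeps working, since that now passes a bound method.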
Using the Django signals receiver decorator, I have the following function.
@receiver(post_save)
def action_signal(sender, instance, created, **kwargs):
    pass
Is it possible to use the receiver decorator on a class instead of on a function? The reason for this is that I would like to have an __init__ method, etc.
i.e. How can I get something like this to work...
class ActionSignals:
    def __init__(self):
        self.stuff

    @receiver(post_save)
    def action_signal(sender, instance, created, **kwargs):
        print(self.stuff)
Using the receiver decorator on a class method doesn't really make sense. When do you expect the object to be instantiated and the __init__ method to run?
You can use the manual method for connecting signals, instead of the receiver decorator.
First instantiate the object:
action_signal = ActionSignals()
Then you can use the connect method to connect the signal:
post_save.connect(action_signal.action_signal)
Adding a class-based signal handler
You can do this:
class ActionSignals(object):
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff = ...

    def __call__(self, *args, **kwargs):
        print(self.stuff)
Then to connect the signal handler:
handler = ActionSignals()
post_save.connect(handler)
This makes use of Python's "magic" __call__ method, which allows you to use an instance of a class as a function.
Avoiding duplicates
Be careful about where you are adding the handlers in your code, as you might create duplicates. For example if you were to put the second bit of code in the module root, it would add a handler every time the module is imported.
To avoid this you can do the following:
post_save.connect(handler, dispatch_uid="my_unique_identifier")
As @Alasdair pointed out, you can add handlers in AppConfig.ready() (and this is the recommended place to do it), although generally it can be done anywhere, as long as you take care not to create undesired duplicates.
See "Where should this code live?" in this doc.
I'm a beginner-intermediate, self-taught Python developer.
In most of the projects I have completed, I can see that the following procedure repeats. I don't have any professional coding experience outside home projects, and I think the code below is not very professional, since it is not reusable and doesn't fit together in one container; it's just loosely coupled functions spread across different modules.
def get_query():
    # returns the query string
    pass

def make_request(query):
    # makes and returns the request with query
    pass

def make_api_call(request):
    # calls the api and returns response
    pass

def process_response(response):
    # process the response and returns the details
    pass

def populate_database(details):
    # populates the database with the details and returns the status of population
    pass

def log_status(status):
    # logs the status so that developer knows whats happening
    pass
query = get_query()
request = make_request(query)
response = make_api_call(request)
details = process_response(response)
status = populate_database(details)
log_status(status)
How do I design this procedure as a class based design?
If I understand correctly, you want this group of functions to be reusable. A good approach would be to create an abstract base class with these methods, as shown below:
from abc import ABCMeta

class Generic(metaclass=ABCMeta):

    def get_query(self):
        # returns the query string
        pass

    def make_request(self, query):
        # makes and returns the request with query
        pass

    def make_api_call(self, request):
        # calls the api and returns the response
        pass

    def process_response(self, response):
        # processes the response and returns the details
        pass

    def populate_database(self, details):
        # populates the database with the details and returns the status of population
        pass

    def log_status(self, status):
        # logs the status so that the developer knows what's happening
        pass
Now whenever you need to use any of these methods in your project, inherit your class from this abstract class.
class SampleUsage(Generic):
    def process_data(self):
        # In any of your methods you can call these generic functions
        self.get_query()
And then you can create an object to actually get the results you want.
obj = SampleUsage()
obj.process_data()
You may have several classes here. To name a few: Query, Request, Response, Database, Logger.
Some of your functions may map as follows:
get_query -> Query.make() constructor or class method
make_request -> Request.make(query) constructor or class method
make_api_call -> Request.make_api_call()
process_response -> Response.process()
populate_database -> Database.populate()
log_status -> Logger.log() (consider using the logging module)
You have to think about your application and design it as an interaction of cooperating objects. This is just a starting point in order for you to partition the functionality of the application between the classes.
Some of these classes may be singletons, meaning they are instantiated only once at the beginning of the application and accessed everywhere else. Database and Logger fit that role.
Here are some skeleton definitions:
class Query(object):
    @classmethod
    def make(cls, *args, **kwargs):
        pass

class Request(object):
    @classmethod
    def make(cls, query):
        pass

    def make_api_call(self, *args, **kwargs):
        # possibly return Response
        pass

class Response(object):
    def process_response(self):
        pass

class Database(object):
    _the_db = None

    @classmethod
    def get_db(cls):
        # Simple man's singleton
        if not cls._the_db:
            cls._the_db = Database()
        return cls._the_db

    def populate(self):
        pass

class Logger(object):
    def log(self):
        # consider using the logging module
        pass
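Put together, the original top-level procedure might then read something like the sketch below, assuming the skeleton methods are fleshed out to accept and return the relevant values:

# Sketch: the original procedure, re-expressed in terms of the classes above.
def run():
    query = Query.make()
    request = Request.make(query)
    response = request.make_api_call()
    details = response.process_response()
    status = Database.get_db().populate(details)
    Logger().log(status)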
I think what your question lacks is a sense of purpose. You don't switch perfectly fine procedural code to object-oriented code without a reason. Depending on the reason, there are several ways to do it. As this problem is quite a common one, there are some common techniques that are known to work well for some common reasons.
So, let's assume you encapsulate the main procedure in an object. What are your needs?
Allow re-using the procedure, possibly overriding some parts? See the template method pattern below.
Allow dynamically altering the behavior of the procedure at runtime depending on external factors? Look into the Strategy pattern.
Allow dynamically altering the behavior of the procedure at runtime depending on internal factors? For example, if some request may switch the procedure into "maintenance mode"? Look into the State pattern.
I'll just describe the template method pattern, which looks the closest to Marty's concerns. I cut down the example to 3 steps so it's easier to explain, but I made you a fully working example gist.
The template method
You want to provide a way to re-use the procedure, while allowing to override some well-defined parts? Let's create an empty, fill-in-the-blanks-style template:
class BaseRequestProcessor(object):

    def get_query(self):
        raise NotImplementedError()

    def process_query(self, query):
        raise NotImplementedError()

    def log_status(self, status):
        raise NotImplementedError()

    def process(self):  # main procedure
        query = self.get_query()
        status = self.process_query(query)
        self.log_status(status)

    __call__ = process  # allow "calling" the request processor
We have our basic template. Let's create some template fillers:
class DemoQueryReader(object):
    def get_query(self):
        return 'this is a query'

class HelloQueryProcessor(object):
    def process_query(self, query):
        return 'Hello World, {}!'.format(query)

class StdoutLogProcessor(object):
    def log_status(self, status):
        print(status)
Now build a full request processor from the bits we want. This is where the pieces come together:
class DemonstrationProcessor(DemoQueryReader, HelloQueryProcessor,
                             StdoutLogProcessor, BaseRequestProcessor):
    pass
Demonstrating in the console:
>>> from marty_example import DemonstrationProcessor
>>> processor = DemonstrationProcessor()
>>> processor()
Hello World, this is a query!
This is the most pedantic example you can build. You could supply default implementations when that makes sense (doing nothing, mostly). And you can group together overrides, should that make sense.
The point is, you made your process a template, allowing easy override of chosen details, while still being in control of the overall workflow. This is a form of inversion of control.
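As an illustration of supplying a default implementation, here is a hedged sketch (NullLogProcessor and QuietProcessor are made-up names): a do-nothing logger can be mixed in so subclasses only override the parts they care about.

class NullLogProcessor(object):
    def log_status(self, status):
        pass  # default: log nowhere

class QuietProcessor(DemoQueryReader, HelloQueryProcessor,
                     NullLogProcessor, BaseRequestProcessor):
    pass  # same flow as DemonstrationProcessor, but without output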
You can also save each class in its own Python file named after the class, or you can create separate modules of functions, organizing them into modules depending on what they do. Some modules will contain only one function; others will contain many.