I have a POST method which calls a few tasklets. These tasklets contain yields, and I also have some x.put_async() calls in my code, so I don't want the method to return before all the async work is done. I therefore decorated all my tasklets, which are just small functions, with @ndb.tasklet. On top of my POST method, I also have:
@ndb.toplevel
def post(self):
However, in the documentation it states:
But if a handler method uses yield, that method still needs to be
wrapped in another decorator, @ndb.synctasklet; otherwise, it will
stop executing at the yield and not finish.
Indeed my method has a yield. It's already wrapped in @ndb.tasklet. Do I replace this with @ndb.synctasklet, or do I use both (and if so, how)?
Also, see this thread, which has some relevance. I too noticed an issue where my request would return without any output, but it is not reliably reproducible; it happens every 15 minutes or so of constant use. I originally had only app = ndb.toplevel(webapp2.WSGIApplication([..])), and I have since added @ndb.toplevel to the main POST methods, but the issue still persists.
Should I put @ndb.tasklet on top of methods that have just put_async()'s too? (Should I put it on top of every method just to be safe? What are the downsides to this?)
Regarding the handler and using @ndb.toplevel and @ndb.synctasklet:
The way I understood it, you need both @ndb.synctasklet and @ndb.toplevel on the handler, while the sub-tasklets only need the @ndb.tasklet decorator. For example:
class Foo(ndb.Model):
    name = ndb.StringProperty()

    @ndb.tasklet
    def my_async(self):
        ...
        # do something else that yields
        raise ndb.Return("some result")

@ndb.toplevel
@ndb.synctasklet
def post(self):
    foo = Foo(name="baz")
    yield foo.put_async()
    yield foo.my_async()
    ...
However, looking at the source, it appears that @ndb.toplevel is actually a synctasklet anyway:
def toplevel(func):
    """A sync tasklet that sets a fresh default Context.

    Use this for toplevel view functions such as
    webapp.RequestHandler.get() or Django view functions.
    """
Running a small test with yields in the handler, decorated only with @ndb.toplevel, still seems to work, so it appears you can remove @ndb.synctasklet from the handler.
Regarding whether you should include @ndb.tasklet on methods that call put_async():
If you're not yielding on the put_async(), then you don't need @ndb.tasklet on the surrounding method; @ndb.toplevel will handle getting the results from the put_async() before the request returns.
My goal is to implement a modular framework that meets the following requirements:
There is a single instance that manages all of the modular components; I will refer to it as the Engine, and the modular components as Models. The Engine must provide the following services:
Direct information to each Model's subscribed functions; I will refer to these as callbacks.
Run functions belonging to an instance of a Model, if the Model has requested that the function be run; I will refer to these as updates.
A Model can be defined and instantiated without an Engine. The Model has no dependencies on the Engine (although without an Engine, the Model is mostly useless).
There should be some attribute applied to each callback and update in a Model so that, when added to the Engine, the Engine can determine the purpose of each function in the Model.
Each update is defined with update criteria, consisting of Boolean logic and a priority. An update will only run successfully if the update criteria are True. If the update criteria are False, the update will raise an Exception.
Updates are called by the Engine, highest priority first.
Information can only be passed to an Engine by returning a value in an update. Information passed to the Engine is distributed immediately to callbacks.
Each callback is defined with a topic. The callback will only be called by the Engine if the information topic matches the callback topic.
It is my current understanding that decorators would be effective in implementing the desired behavior. I think we could do the following:
Create a decorator named callback that takes a parameter tag, which is set as an attribute on the decorated function. tag denotes the type of information the function should receive.
Create a decorator named update that takes parameters logic and priority, which are set as attributes on the decorated function. logic is a callable that returns a Boolean; priority denotes the priority of the function.
Using decorators would allow me to define Models without Engine dependency. The Engine can utilize inspect to get functions in each Model that have the relevant attributes.
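The plan above can be sketched roughly as follows. Note that the attribute names (_cb_tag, _upd_logic, _upd_priority) and the Model/Engine details here are my own invention for illustration, not from any library:

```python
import inspect

def callback(tag):
    """Mark a method as a callback for the given information tag."""
    def decorate(func):
        func._cb_tag = tag  # hypothetical attribute name
        return func
    return decorate

def update(logic, priority):
    """Mark a method as an update with run criteria and a priority."""
    def decorate(func):
        func._upd_logic = logic      # callable returning a Boolean
        func._upd_priority = priority
        return func
    return decorate

class Model:  # defined with no Engine dependency
    def __init__(self):
        self.seen = []

    @callback(tag="news")
    def on_news(self, info):
        self.seen.append(info)

    @update(logic=lambda: True, priority=10)
    def tick(self):
        return ("news", "hello")

class Engine:
    def __init__(self):
        self.models = []

    def add(self, model):
        self.models.append(model)

    def run(self):
        # collect tagged updates from all models, highest priority first
        updates = []
        for m in self.models:
            for _, meth in inspect.getmembers(m, inspect.ismethod):
                if hasattr(meth, "_upd_logic"):
                    updates.append(meth)
        updates.sort(key=lambda f: f._upd_priority, reverse=True)
        for upd in updates:
            if not upd._upd_logic():
                raise RuntimeError("update criteria not met")
            result = upd()
            if result is not None:
                self.dispatch(*result)

    def dispatch(self, tag, info):
        # deliver information immediately to callbacks with a matching tag
        for m in self.models:
            for _, meth in inspect.getmembers(m, inspect.ismethod):
                if getattr(meth, "_cb_tag", None) == tag:
                    meth(info)
```

Because the decorators only attach attributes rather than wrapping the call, self poses no problem: it is supplied as usual when the Engine invokes the bound methods at run time.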
After several implementation attempts, I have the following concerns:
Decorators are applied at class definition time, and as a result the decorator itself cannot accept self as an argument (self does not exist until the method is called on an instance).
I'm still unclear about the source of the arguments, why you are attaching flags to functions, and where this exception should be used...
This seems more like an XY-problem than anything else.
Nonetheless, decorators are cool, so I wrote the following, which might point you in the right direction.
import typing as T
from datetime import date

def testmonday() -> bool:
    return date.today().weekday() == 0  # when I wrote this it was Monday

def testfalse() -> bool:
    return False

def runonany(*tests: T.Callable[[], bool]):
    def wrapper(func):
        if any(t() for t in tests):
            return func
        else:
            return lambda: None
    return wrapper

@runonany(testmonday, testfalse, lambda: False)
def runfunc():
    return "running"

@runonany(testfalse, lambda: False)
def dontrunfunc():
    return "how did you get this ?"

# Or if flags are a group of booleans
FLAGS = [True, False, True, False]

def runonanyflag(func):
    if any(FLAGS):
        return func
    else:
        return lambda: None

@runonanyflag
def runfuncflag():
    return "also running"

print(runfunc())
print(dontrunfunc())  # None
print(runfuncflag())
It seems that your Model is nothing but a set of callbacks, somehow tagged, with the only intention of being called by the Engine.
In that case it is enough to have a simple abstract class defining all known accepted callbacks by their rightful names. It's easy to know whether a given callback is implemented: one dead-simple option is to catch NotImplementedError on the Engine side.
Furthermore, your Engine could expose a way to register expected callbacks upfront, in which case the design approaches the time-proven Observer pattern.
I would suggest taking a careful look at the so-called "reactive extensions", especially in JavaScript (RxJS), and their core interfaces: Observer and Observable.
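A minimal sketch of the abstract-class approach, with made-up model and topic names: the base class raises NotImplementedError for every known callback, and the Engine silently skips models that don't implement one.

```python
class BaseModel:
    """Abstract base naming every callback the Engine knows about."""
    def on_news(self, info):
        raise NotImplementedError

    def on_price(self, info):
        raise NotImplementedError

class NewsModel(BaseModel):
    def __init__(self):
        self.received = []

    def on_news(self, info):  # implements only the callback it cares about
        self.received.append(info)

class Engine:
    def __init__(self):
        self.models = []

    def register(self, model):
        self.models.append(model)

    def publish(self, topic, info):
        # call the callback named after the topic on every registered model
        for model in self.models:
            try:
                getattr(model, "on_" + topic)(info)
            except NotImplementedError:
                pass  # this model does not handle this topic
```

Registering expected callbacks upfront, as suggested, would replace the try/except with an explicit subscription list, which is essentially the Observer pattern.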
I am a beginner in Python, so please be... kind?
Anyway, I need to use a static method to call another method which requires the use of self (and thus, I believe, a normal method). I am working with Telethon, a Python implementation of Telegram. I have tried other questions on SO, but I just can't seem to find a solution to my problem.
An overview of the program (please correct me if I'm wrong):
1) interactive_telegram_client is a child class of telegram_client, and it creates an instance.
# interactive_telegram_client.py
class InteractiveTelegramClient(TelegramClient):
    def __init__(self, session_user_id, api_id, api_hash, proxy):
        super().__init__(session_user_id, api_id, api_hash, proxy)
2) When the InteractiveTelegramClient runs, it adds an update handler (self.add_update_handler(self.update_handler)) to constantly check for messages received/sent and print them to screen:
# telegram_client.py
def add_update_handler(self, handler):
    """Adds an update handler (a function which takes a TLObject,
    an update, as its parameter) and listens for updates"""
    if not self.sender:
        raise RuntimeError(
            "You should connect at least once to add update handlers.")
    self.sender.add_update_handler(handler)
# interactive_telegram_client.py
@staticmethod
def update_handler(update_object):
    try:
        if type(update_object) is UpdateShortMessage:
            if update_object.out:
                print('You sent {} to user #{}'.format(update_object.message,
                                                       update_object.user_id))
            else:
                print('[User #{} sent {}]'.format(update_object.user_id,
                                                  update_object.message))
Now, my aim here is to send back an auto-reply message upon receiving a message. Thus, I think that adding a call to method InteractiveTelegramClient.send_ack(update_object) in the update_handler method would serve my needs.
# interactive_telegram_client.py
def send_ack(self, update_object):
    entity = update_object.user_id
    message = update_object.message
    msg, entities = parse_message_entities(message)
    msg_id = utils.generate_random_long()
    self.invoke(SendMessageRequest(peer=get_input_peer(entity),
                                   message=msg, random_id=msg_id,
                                   entities=entities, no_webpage=False))
However, as you can see, I need self to invoke this function (based on the readme, where I assume client refers to the same thing as self). Since the method update_handler is static, self is not passed through, and as such I cannot make this call.
My possible strategies which have failed include:
1) Instantiating a new client for the auto-reply
- Creating a new client/conversation for each reply...
2) Making all the methods non-static
- Involves a tremendous amount of work, since other methods would have to be modified as well
3) Observer pattern (sounds like a good idea, I tried, but due to a lack of skills, not succeeded)
I was wondering if there's any other way to tackle this problem? Or perhaps it's actually easy, just that I have some misconception somewhere?
Forgot to mention that due to some restrictions on my project, I can only use Telethon, as opposed to looking at other alternatives. Adopting another library (like an existing auto-reply one) is allowed, though I did not really look into that since merging that and Telethon may be too difficult for me...
based on the readme, where I assume client to refer to the same thing as self
Correct, since the InteractiveTelegramClient subclasses the TelegramClient and hence, self is an instance of the extended client.
Instantiating a new client for the auto-reply - Creating a new client/conversation for each reply
This would require you to create another authorization and send another code request to login, because you can't work with the same *.session at the same time.
Making all the methods non-static - Involves a tremendous amount of work, since other methods would have to be modified as well
It doesn't require such amount of work. Consider the following example:
class Example:
    def __init__(self, a):
        self.a = a

    def do_something(self):
        Example.other_method()

    @staticmethod
    def other_method():
        print('hello, world!')
Is equivalent to:
class Example:
    def __init__(self, a):
        self.a = a

    def do_something(self):
        self.other_method()

    @staticmethod
    def other_method():
        print('hello, world!')
It doesn't matter whether you use self. or the class name to refer to a static method from within the class. Since the InteractiveClientExample already uses self., all you would have to do is change:
@staticmethod
def update_handler(update_object):
to
def update_handler(self, update_object):
For more on the @staticmethod decorator, you can refer to the docs.
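Why this is enough can be seen with a tiny stand-in (FakeClient and its methods are invented for illustration; this is not Telethon code): registering a bound method means self travels with the handler automatically.

```python
class FakeClient:
    """Stand-in for TelegramClient; only mimics handler registration."""
    def __init__(self):
        self.handlers = []
        self.sent = []

    def add_update_handler(self, handler):
        self.handlers.append(handler)

    def send_ack(self, update_object):
        self.sent.append("ack:" + update_object)

    def update_handler(self, update_object):
        # a normal method: self is available, so we can call send_ack
        self.send_ack(update_object)

client = FakeClient()
client.add_update_handler(client.update_handler)  # bound method carries self
client.handlers[0]("hello")   # the caller never needs to pass self
print(client.sent)            # ['ack:hello']
```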
I am using tornado and I declared a RequestHandler with a single parameter like this:
class StuffHandler(RequestHandler):
    def get(self, stuff_name):
        ...

app = Application([
    (r'/stuff/(.*)/public', StuffHandler),
])
Now I added another handler for '/stuff/(.*)/private', which requires the user to be authenticated:
class PrivateStuffHandler(RequestHandler):
    @tornado.web.authenticated
    def get(self, stuff_name):
        ...
This of course will cause get_current_user() to be called before get(). The problem is that, in order for get_current_user() to run, I need to know the stuff_name parameter.
So I thought that I may use the prepare() or the initialize() method, which is called before get_current_user(). However, I can't seem to access stuff_name from those methods. I tried putting stuff_name as a parameter but it didn't work, then I tried calling self.get_argument("stuff_name") but it didn't work either.
How do I access a URL parameter from the prepare() method?
I think you can use self.request.path to get the full path, then extract the value you need from it.
In the end, I asked the Tornado developers directly, and a helpful user pointed out that self.path_args and self.path_kwargs are available from anywhere in the class.
So, from the prepare() method (or even the get_current_user() method), I can do:
stuff_name = self.path_args[0]
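Tornado populates self.path_args with the positional capture groups of the URL regex, so the mechanism can be sketched with the re module alone (the concrete URL below is just an example):

```python
import re

# the same pattern used when registering the handler
url_pattern = re.compile(r'/stuff/(.*)/private')

match = url_pattern.match('/stuff/widget/private')
path_args = match.groups()   # what tornado exposes as self.path_args
stuff_name = path_args[0]
print(stuff_name)            # widget
```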
I'm using Flask and I'd like to protect everything under /admin path.
How can I do that? I'm sure there's a better way than checking for the session in every function.
The most straightforward way to do this, I think, is to use a Blueprint similar to how it is described in this snippet. Then you can have some code that will run before each request when the URL starts with /admin, and within that code you can do your authentication.
The obvious way would be to write a decorator that tests the session and redirects to another page if the authentication fails. I don't know how much Python you know, but a decorator is a function that takes a function and returns another function; basically, it's a function modifier. Here's a decorator that should show you the way to writing your own session check:
import functools

def check_session(view):
    @functools.wraps(view)
    def inner(*args, **kwargs):
        if <test for auth>:
            return view(*args, **kwargs)
        else:
            return flask.redirect("/")
    return inner
As you can see, we have a function that takes the view function and defines a new function called inner, which checks for authentication and, if it succeeds, calls the original view. The line @functools.wraps(view) is itself an example of using a decorator; the functools.wraps decorator gives the function it wraps the properties of the function passed as its first argument. To use this decorator, apply it to your views as such:
@app.route("/admin")
@check_session
def admin_view():
    return "Top secret"
Any user who fails the authentication check will now be redirected away, and users who pass it will see the view as usual. Decorators are applied in bottom-to-top order, so make sure you put it after any other decorators that do function registration (i.e. @app.route).
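Here is a runnable variant of the same decorator with the Flask-specific parts replaced by stand-ins (the AUTHENTICATED flag and the "redirect:/" string substitute for a real session test and flask.redirect):

```python
import functools

AUTHENTICATED = False  # stand-in for a real session check

def check_session(view):
    @functools.wraps(view)
    def inner(*args, **kwargs):
        if AUTHENTICATED:
            return view(*args, **kwargs)
        return "redirect:/"  # stand-in for flask.redirect("/")
    return inner

@check_session
def admin_view():
    return "Top secret"

print(admin_view())         # redirect:/  (blocked while unauthenticated)
print(admin_view.__name__)  # admin_view  (preserved by functools.wraps)
```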
I am trying to simplify my web application handlers by using Python decorators.
Essentially I want to use decorators to abstract code that checks for authenticated sessions and the other that checks to see if the cache provider (Memcache in this instance) has a suitable response.
Consider this method definition with the decorators:
@auth.login_required
@cache.clear
def post(self, facility_type_id=None):
auth.login_required checks to see if the user is logged in, otherwise returns an appropriate error message, or executes the original function.
cache.clear would check to see if the cache has a particular key and drop it, before executing the calling method.
Both auth.login_required and cache.clear would want to eventually execute the calling method (post).
From what I've read, doing what I am doing now would execute the calling method (post) twice.
My question: how do I chain decorators that each end up executing the calling method, while ensuring it's only called once?
Appreciate any pointers and thanks for your time.
Each successive decorator receives the previously wrapped function, so the function itself only gets called once at the end of the chain. Here's a simple example:
def dec1(f):
    def wrapped():
        print('dec1')
        return f()
    return wrapped

def dec2(f):
    def wrapped():
        print('dec2')
        return f()
    return wrapped

@dec2
@dec1
def spam():
    print('spam')
>>> spam()
dec2
dec1
spam
You have misunderstood how decorators work.
The two decorators are already "sequenced": the outer one receives, as the function to act on, the already-decorated function. The inner function is not going to be called twice.
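You can verify the single-call behavior by counting invocations (a small sketch with invented decorator names, not from the original post):

```python
calls = {"post": 0}

def login_required(f):
    def wrapped(*args, **kwargs):
        # pretend to check authentication, then fall through
        return f(*args, **kwargs)
    return wrapped

def cache_clear(f):
    def wrapped(*args, **kwargs):
        # pretend to drop a cache key, then fall through
        return f(*args, **kwargs)
    return wrapped

@login_required
@cache_clear
def post():
    calls["post"] += 1
    return "handled"

print(post())          # handled
print(calls["post"])   # 1 -- the inner method ran exactly once
```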