Callback inside inlineCallbacks function - python

Let's say I have a function like this:
def display(this, that):
    print(this, that)
and a class:
class Runner(object):
    def __init__(self, callback):
        self.callback = callback
        self.loop = twisted.internet.task.LoopingCall(self.repeat)
        self.loop.start(0)

    @defer.inlineCallbacks
    def repeat(self):
        this = yield do_this()
        that = yield do_that()
        if this and that:
            # now I want to call the callback function
            yield self.callback(this, that)  # makes sense?

runner = Runner(display)
reactor.run()
Basically what I want to do is I want to create a Runner class which will do some specific tasks and every time it gets a result, it will call the given callback function. Instead of creating a new function which does a specific thing, I want to create a generic class which does only one thing. E.g:
class TwitterReader(object):
    def __init__(self, callback):
        ...
        ...

    @defer.inlineCallbacks
    def get_messages(self):
        ...
        ...
        yield callback(messages)

class MessageFilter(object):
    def __init__(self):
        self.bad_messages = open('bad_messages.txt', 'w')
        self.twitter = TwitterReader(self.message_received)

    def message_received(self, messages):
        for message in messages:
            for bad_word in BAD_WORDS:
                if bad_word in message:
                    self.bad_messages.write(message)
                    break
I'm new to twisted. So, I'm not sure if this is the right way to do it. Is it?
Thanks

Your problem is that callback inside get_messages should instead be self.callback.
Other than that your example should work exactly as written.

You'd only need to yield self.callback if it returned a deferred and you wanted to wait for the result before exiting the repeat function. In your example, your callback is a normal function (which effectively returns None), so there is no advantage to yielding - however it is allowed to yield non-deferred values so no harm is done. From the inlineCallbacks docs:
Things that are not Deferreds may also be yielded, and your generator
will be resumed with the same object sent back. This means yield
performs an operation roughly equivalent to maybeDeferred.
If your callback did return a deferred (e.g., if it was also an inlineCallbacks-decorated function) then yielding would pause execution of repeat until the deferred completed. This may or may not be desirable in your application.
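For instance, a callback that is itself an inlineCallbacks function is waited on before repeat resumes. A minimal sketch (not from the original post; the succeed() calls are stand-ins for real asynchronous work):
from twisted.internet import defer

@defer.inlineCallbacks
def display(this, that):
    # pretend we do something asynchronous before printing
    yield defer.succeed(None)
    print(this, that)

@defer.inlineCallbacks
def repeat(callback):
    this = yield defer.succeed("this")
    that = yield defer.succeed("that")
    # pauses here until display()'s Deferred fires, because callback returns a Deferred
    yield callback(this, that)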

Related

Python Multiprocessing JoinableQueue: clear queue and discard all unfinished tasks

I have two processes, and in order to do some cleanup in case of fatal errors (instead of leaving the processes running), I want to remove all remaining tasks and empty the queue (so that join() can proceed). How can I achieve that? Preferably the code should be applicable in both processes, but my code allows the child process to signal the main process of its failure state and instruct the main process to do the cleanup as well.
I was trying to understand it by inspecting the source at:
https://github.com/python/cpython/blob/main/Lib/multiprocessing/queues.py
But I got a little bit lost with code like:
...
self._unfinished_tasks._semlock._is_zero():
...
def __init__(self, maxsize=0, *, ctx):
    Queue.__init__(self, maxsize, ctx=ctx)
    self._unfinished_tasks = ctx.Semaphore(0)
...
(Also, where does the _semlock property come from?)
For example, what is ctx? It appears not to be required, since I did not use it when creating my object. Digging further, it may have something to do with (still a little too mysterious for me)
mp.get_context('spawn')
or
@asynccontextmanager
async def ctx():
    yield
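(As far as I can tell, ctx is a multiprocessing context object; the public constructors supply it for you, so you never pass it yourself. A small sketch of my understanding:)
import multiprocessing as mp

ctx = mp.get_context('spawn')   # or 'fork' / 'forkserver'
q = ctx.JoinableQueue()         # internally calls JoinableQueue(maxsize, ctx=ctx)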
I need something like the approach mentioned here by V.E.O (which is quite understandable, but as far as I understand it only covers a single process):
Clear all items from the queue
I came up with the following code (to be tested):
def clearAndDiscardQueue(self):
    try:  # cleanup, preferably in the process that is adding to the queue
        while True:
            self.task_queue.get_nowait()
    except Empty:
        pass
    except ValueError:  # in case the queue is already closed
        pass
    self.task_queue.close()
    # Theoretically a new item could be placed by the other process by the
    # time the interpreter is on this line, therefore the part above should
    # run in the process that fills (puts into) the queue when it is in its
    # failure state (when the main process fails, it should tell the child
    # process to raise an exception and run the cleanup so the main
    # process' join() will work).
    try:  # could be run in either of the processes
        while True:
            self.task_queue.task_done()
    except ValueError:
        # called too many times; we do not care, since all remaining items
        # will not be processed due to the failure state
        pass
Otherwise I would need to try to understand code like the following. I think messing with it (to get something analogous to calling queue.clear() on a single-process queue) would have serious consequences in terms of race conditions if I tried to clear the buffer/pipe myself somehow.
class Queue(object):
    def __init__(self, maxsize=0, *, ctx):
        …
        self._reader, self._writer = connection.Pipe(duplex=False)
        …

    def put(self, obj, block=True, timeout=None):
        …
        self._buffer.append(obj)  # in case of close() the background thread
                                  # will quit once it has flushed all buffered data to the pipe
        …

    def get(self, block=True, timeout=None):
        …
        res = self._recv_bytes()
        …
        return _ForkingPickler.loads(res)
    …

class JoinableQueue(Queue):
    def __init__(self, maxsize=0, *, ctx):
        …
        self._unfinished_tasks = ctx.Semaphore(0)
        …

    def task_done(self):
        …
        if not self._unfinished_tasks._semlock._is_zero():
            …
in which _is_zero() is somehow externally defined (see synchronize.py), as mentioned here:
Why doesn't Python's _multiprocessing.SemLock have 'name'?
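A simpler variant of my cleanup, pairing each removed item with a task_done() call, might be (to be tested as well; it assumes task_queue is a multiprocessing.JoinableQueue shared by both processes):
from queue import Empty

def drain_joinable_queue(q):
    while True:
        try:
            q.get_nowait()
        except Empty:
            break
        q.task_done()  # balance the unfinished-task counter so join() can return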

Use Python generator-like syntax while returning control back to the user

Use Python generator-like syntax while returning control back to the user (create a stateful API)
I have a sort of API-like function that the user sends requests to; below is my code structure:
def api_function(parameters, user: DBUser):
    # I have a global variable with a list (obj_list) of WorkingChildClass objects.
    # Assume obj is the WorkingChildClass object specifically for this user.
    # I have very few users, so I'm storing all the objects in the list.
    return obj.handle_user_input(parameters)
Definition of the function that handles user input. We store next_stage so that the next time the user makes a request we continue from there instead of starting from the beginning.
class WorkingClass:
    def __init__(self) -> None:
        pass

    def handle_user_input(self, parameters):
        while True:
            is_skill_completed, response = self.next_stage(parameters)
            if is_skill_completed:
                return is_skill_completed, response
            if response:
                return is_skill_completed, response
The child class (there are many child classes for different functionality):
class WorkingChildClass(WorkingClass):
    def __init__(self, jarvis_obj, intent_part) -> None:
        self.next_stage = self.stage1

    def stage1(self, parameters):
        # Do some work.
        self.next_stage = self.stage2
        return False, response
        # If response is truthy it is returned to the user; otherwise we continue to stage2.
        # The first element says whether the functionality is completed. If it is True,
        # we start from stage1 again (the main file handles this by recreating the object).

    def stage2(self, parameters):
        # some other stuff
        pass
What I'm looking for is this: instead of writing stage1, stage2, etc. as functions, I would like to use a generator-like (or similar) syntax:
class WorkingChildClass:
    def __init__(self):
        pass

    def working_function(self, parameters):
        # run stage 1
        if need_to_return_to_user:
            yield response
        # run stage 2
        # and so on
        return response
        # When we return, it means we need to restart working_function from the top next time.
I want a similar syntax that is less error-prone than writing stage1, stage2, etc. as functions and setting the next_stage variable, and that increases readability.
I don't know if this is possible or not, but maybe some kind of middle layer could store the state: a middle function like handle_user_input, a decorator, class attributes, or a combination of these.
It turned out to be a bit simpler than I was thinking. I just changed the handle_user_input function a little bit.
class WorkingClass:
    def __init__(self) -> None:
        self.generator = self.working_function()

    def handle_user_input(self, parameters):
        self.parameters = parameters
        while True:
            print("calling next stage.")
            is_skill_completed, responses = next(self.generator)
            print("next stage completed. is_completed: {}, responses: {}".format(is_skill_completed, responses))
            if is_skill_completed:
                return is_skill_completed, responses
            if responses:
                return is_skill_completed, responses
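A hypothetical working_function that this loop can drive might look like this (a sketch only; the stage helpers and responses are made-up names):
class WorkingChildClass(WorkingClass):
    def working_function(self):
        # stage 1: do some work, then hand control back to the user
        response = self.do_stage1(self.parameters)
        yield False, response   # not completed yet
        # stage 2 resumes here on the next user request
        response = self.do_stage2(self.parameters)
        yield True, response    # completed; the caller recreates the object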
And it works!! Thanks!

modify a function of a class from another class

In the pymodbus library, in server.sync, socketserver.BaseRequestHandler is used; ModbusBaseRequestHandler is defined as follows:
class ModbusBaseRequestHandler(socketserver.BaseRequestHandler):
    """ Implements the modbus server protocol

    This uses the socketserver.BaseRequestHandler to implement
    the client handler.
    """
    running = False
    framer = None

    def setup(self):
        """ Callback for when a client connects
        """
        _logger.debug("Client Connected [%s:%s]" % self.client_address)
        self.running = True
        self.framer = self.server.framer(self.server.decoder, client=None)
        self.server.threads.append(self)

    def finish(self):
        """ Callback for when a client disconnects
        """
        _logger.debug("Client Disconnected [%s:%s]" % self.client_address)
        self.server.threads.remove(self)

    def execute(self, request):
        """ The callback to call with the resulting message

        :param request: The decoded request message
        """
        try:
            context = self.server.context[request.unit_id]
            response = request.execute(context)
        except NoSuchSlaveException as ex:
            _logger.debug("requested slave does not exist: %s" % request.unit_id)
            if self.server.ignore_missing_slaves:
                return  # the client will simply timeout waiting for a response
            response = request.doException(merror.GatewayNoResponse)
        except Exception as ex:
            _logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc())
            response = request.doException(merror.SlaveFailure)
        response.transaction_id = request.transaction_id
        response.unit_id = request.unit_id
        self.send(response)

    # ----------------------------------------------------------------------- #
    # Base class implementations
    # ----------------------------------------------------------------------- #
    def handle(self):
        """ Callback when we receive any data
        """
        raise NotImplementedException("Method not implemented by derived class")

    def send(self, message):
        """ Send a request (string) to the network

        :param message: The unencoded modbus response
        """
        raise NotImplementedException("Method not implemented by derived class")
setup() is called when a client connects to the server, and finish() is called when a client disconnects. I want to manipulate these methods (setup() and finish()) from another class, in another file that uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library itself, since that may cause strange behavior in specific situations.
--- Edited ---
To clarify: I want the setup function in the ModbusBaseRequestHandler class to keep working as before and remain untouched, but I want to add something else to it, and this modification should be done in my code, not in the library.
The simplest, and usually best, thing to do is to not manipulate the methods of ModbusBaseRequestHandler, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:
class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler):
    def setup(self):
        # do different stuff
        # call super().setup() if you want
        # or call socketserver.BaseRequestHandler.setup() to skip over it
        # or call neither
        pass
Notice that a class statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want setup to do until runtime, that's not a problem.
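For example, a runtime-built subclass might look like this (make_handler and extra_setup are hypothetical names, not part of pymodbus):
def make_handler(extra_setup):
    class CustomHandler(ModbusBaseRequestHandler):
        def setup(self):
            super().setup()     # keep the library behaviour
            extra_setup(self)   # then add whatever was decided at runtime
    return CustomHandler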
If you actually need to monkeypatch the class, that isn't very hard—although it is easy to screw things up if you aren't careful.
def setup(self):
    # do different stuff
    pass

ModbusBaseRequestHandler.setup = setup
If you want to be able to call the normal implementation, you have to stash it somewhere:
_setup = ModbusBaseRequestHandler.setup

def setup(self):
    # do different stuff
    # call _setup whenever you want
    pass

ModbusBaseRequestHandler.setup = setup
If you want to make sure you copy over the name, docstring, etc., you can use functools.wraps:
@functools.wraps(ModbusBaseRequestHandler.setup)
def setup(self):
    # do different stuff
    pass

ModbusBaseRequestHandler.setup = setup
Again, you can do this anywhere in your code, even in the middle of a method.
If you need to monkeypatch one instance of ModbusBaseRequestHandler while leaving any other instances untouched, you can even do that. You just have to manually bind the method:
def setup(self):
    # do different stuff
    pass

myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)
If you want to call the original method, or wrap it, or do this in the middle of some other method, etc., it's otherwise basically the same as the last version.
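Putting those pieces together, one way to patch a single instance while still calling the original might look like this (a sketch; myModbusBaseRequestHandler stands for whatever handler instance you have in hand):
_orig_setup = myModbusBaseRequestHandler.setup   # bound method, keeps the original behaviour

def setup():
    _orig_setup()                                # run the library's setup first
    print("extra per-instance setup")            # then do the different stuff

myModbusBaseRequestHandler.setup = setup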
It can also be done with an interceptor (a wrapping decorator):
from functools import wraps

def interceptor(func):
    print('this is executed at function definition time (def my_func)')

    @wraps(func)
    def wrapper(*args, **kwargs):
        print('this is executed before function call')
        result = func(*args, **kwargs)
        print('this is executed after function call')
        return result
    return wrapper

@interceptor
def my_func(n):
    print('this is my_func')
    print('n =', n)

my_func(4)
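Applied to the question, the same wrapper can be put around the library method (a sketch reusing the interceptor above):
# self is passed through *args, so the wrapped method still works on instances
ModbusBaseRequestHandler.setup = interceptor(ModbusBaseRequestHandler.setup)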
more explanation can be found here

Does a callback execute when another function is running?

I have a conceptual doubt.
If I pass a class method as a callback function (to another program running on another thread) and I get stuck in some other class method (not the callback method), e.g. in a while True loop, will the callback ever execute?
class Bicycle(object):
    def __init__(self, name):
        self.name = name
        self.f = 0

    def callback(self, push_force):
        # Go ahead
        self.f = push_force

    def balance(self):
        while True:
            # Balance the bicycle
            pass

def main():
    B1 = Bicycle("Red")
    external(callback=B1.callback)
    while True:
        B1.balance()
Not my answer, but @Bakuriu's, which is correct:
If the callback is passed to another thread then yes, it can execute while your balance method is running. In CPython the two threads will interleave due to the GIL, but they will still be executed concurrently. In other Python implementations they might be executed in parallel.
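As a rough illustration with the standard threading module (external is replaced here by a threading.Timer; this is just a sketch, not the original setup):
import threading
import time

class Bicycle(object):
    def __init__(self, name):
        self.name = name
        self.f = 0

    def callback(self, push_force):
        self.f = push_force
        print("callback ran on", threading.current_thread().name, "f =", self.f)

    def balance(self):
        while True:          # the main thread is stuck here
            time.sleep(0.1)

b1 = Bicycle("Red")
# stand-in for `external`: run the callback on a worker thread after one second
threading.Timer(1.0, b1.callback, args=(5,)).start()
b1.balance()                 # the callback still fires while this loop runs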

Why does the order of asynchronous and gen.coroutine matter in Tornado?

I have a piece of code as follows:
@tornado.web.stream_request_body
class DownloadHandler(SecureHandler):
    executor = ThreadPoolExecutor(50)

    @tornado.web.authenticated
    @tornado.gen.coroutine
    @tornado.web.asynchronous
    def post(self):
        # ...
        path = yield self.down_load(fname)
        self.set_header("Content-Type", "application/octet-stream")
        self.set_header("Content-Disposition", "attachment;filename=%s" % fname)
        self.generator = self.read_file(path)
        tornado.ioloop.IOLoop.instance().add_callback(self.loop)

    @run_on_executor
    def down_load(self, fname):
        # download a file named `fname` from another website
        # store it in a temp file at `path`
        # ...
        return path

    def loop(self):
        try:
            data = next(self.generator)
            self.write(data)
            self.flush()
            tornado.ioloop.IOLoop.instance().add_callback(self.loop)
        except Exception as e:
            traceback.print_exc()
            self.finish()

    def read_file(self, fname):
        with open(fname, 'rb') as f:
            while True:
                data = f.read(1024 * 1024 * 8)
                if not data:
                    break
                yield data
If the order of asynchronous and gen.coroutine is as shown in my code, it works fine.
But if I switch their order, the client side only receives 8 MB of data, and traceback.print_exc() prints: finish() called twice. May be caused by using async operations without the @asynchronous decorator.
So my question is: why does the order of these two decorators matter, and what are the rules for choosing the order?
Order matters because @asynchronous looks at the Future returned by @gen.coroutine, and calls finish for you when the coroutine returns. Since Tornado 3.1, the combination of @asynchronous and @gen.coroutine has been unnecessary and discouraged; in most cases you should use @gen.coroutine alone.
However, the example you've shown here is kind of odd - it mixes the coroutine and callback styles in ways that don't really work well. The coroutine returns before it is finished and leaves the remaining work to a chain of callbacks. This is actually similar in spirit to what the @asynchronous decorator does with non-coroutine functions, although it doesn't interact well with the combination of the two decorators. The best solution here is to make loop() a coroutine too:
@gen.coroutine
def loop(self):
    for data in self.generator:
        self.write(data)
        yield self.flush()
Then you can call it with yield self.loop() and the coroutine will work normally. You no longer need to call finish() explicitly, or use the @asynchronous decorator.
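For illustration, the post() body from the question would then reduce to something like this (a sketch using the same names as above):
@tornado.web.authenticated
@tornado.gen.coroutine
def post(self):
    path = yield self.down_load(fname)
    self.set_header("Content-Type", "application/octet-stream")
    self.set_header("Content-Disposition", "attachment;filename=%s" % fname)
    self.generator = self.read_file(path)
    yield self.loop()   # drives the generator; finish() happens automatically when post() returns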
