Calling a D-Bus method without expecting a reply - Python

I have a D-Bus client written in Python that calls the exposed D-Bus methods. The code is as follows:
bus = dbus.SessionBus()
service = bus.get_object(PANEL_BUS_NAME, PANEL_BUS_OBJECT)
__panelInterface = dbus.Interface(service, PANEL_BUS_INTERFACE)
__panelInterface.SetBTConnected()
The problem is that the first time the method is called, it takes a while for the exposed method to get executed. My understanding is that D-Bus expects a reply from the method's process but times out. What I fail to understand, however, is that the method gets executed immediately if called again; in other words, the block occurs only the first time. Can somebody recommend a remedy for this behavior and help me understand it?

You might find it useful to debug this with a D-Bus analysis tool like Bustle or dbus-monitor. They will show you when the messages and replies are sent, whether any errors are returned, and where the time is spent.
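If you genuinely don't need the method's return value, dbus-python lets you fire and forget by passing ignore_reply=True to the proxy method call, so the client doesn't block waiting for the reply. Below is a minimal sketch assuming the same PANEL_BUS_* constants as in the question; the import is deferred into the function so the snippet stands alone without a running session bus:

```python
def set_bt_connected_no_reply(panel_bus_name, panel_bus_object, panel_bus_interface):
    """Call SetBTConnected without waiting for the method return."""
    import dbus  # requires the dbus-python package

    bus = dbus.SessionBus()
    service = bus.get_object(panel_bus_name, panel_bus_object)
    panel = dbus.Interface(service, panel_bus_interface)
    # ignore_reply=True tells dbus-python not to wait for a method return,
    # so the call does not block even if the service is slow to respond.
    panel.SetBTConnected(ignore_reply=True)
```

Alternatively, supplying reply_handler and error_handler keywords makes the call asynchronous while still letting you observe the outcome later.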

Related

How to open a new pyghmi Session via pyghmi.ipmi.command.Command after the previous one has timed out?

I'm having some issues with the pyghmi python library, which is used for sending IPMI commands with python scripts. My goal is to implement an HTTP API to send IPMI commands through HTTP requests.
I am already able to create a Session and send a few commands with the library, but if the Session remains idle for 30 seconds, it logs itself out.
When the Session is logged out, I can't create a new one: I either get a "Session is logged out" error or a deadlock.
What can I do if I want a server that is always up and creates a Session whenever it receives a request, given that I can't create a new Session once the previous one is logged out?
What I've tried :
from pyghmi.ipmi import command
ipmi = command.Command(ip, user, passwd)
res = ipmi.get_power()
print(res)
# wait 30 seconds
res2 = ipmi.get_power() # get "Session logged out" error
ipmi2 = command.Command(ip, user, passwd) # Deadlock if wait < 30 seconds, else no error
res3 = ipmi2.get_power() # get "Session logged out" error
# Impossible to create new command.Command() Session, every command will give "logged out" error
The other problem is that I can't use the asynchronous approach of passing an "onlogon" callback to the command.Command() call, because I need the callback's return value in the caller, and that's not possible with this sort of thread behavior.
Edit: I already tried some examples provided here, but they are always one-shot scripts, whereas I'm looking for something that can stay up forever.
So I finally achieved a sort of solution. I emailed Pyghmi's main contributor, who said that this lib is not suited to a multi-session, reusable-Session implementation (there is currently an open "Session reuse" issue on the Pyghmi repository).
First "solution": use processes
My goal was to create an HTTP API. To avoid the Session timeout issue, I create a new Process (not Thread) for every new request. That works fine, but I did not keep this solution because it is too heavy and consumes too many sockets. It seems that by creating processes, the memory used by Pyghmi is not shared between processes (that is the point of processes), so every Session use is a creation rather than a reuse.
Second "solution" : use Confluent
Confluent is a tool developed by Lenovo that allows controlling hardware via HTTP. It uses a sort of patched version of Pyghmi as the backend for IPMI calls. Confluent documentation here.
Once installed and configured on a server, Confluent worked well to control IPMI devices via HTTP. I packaged it in a Docker image along with an ipmi_simulator for testing purposes: confluent dockerized.
The solution today is to run Command.eventloop() after creating the connection. It is documented in ipmi/command.py, which has a very trivial Housekeeper class that, in the current version 1.5.53, is just a renamed Thread class with no additional features; it merely runs the event loop.
The implementation looks like this. One of the mentioned housekeeping tasks is sending keepalive messages; this is enabled by default and can be influenced by supplying keepalive=True at Command instantiation:
class Housekeeper(threading.Thread):
    """A Maintenance thread for housekeeping
    Long lived use of pyghmi may warrant some recurring asynchronous behavior.
    This stock thread provides a simple minimal context for these housekeeping
    tasks to run in. To use, do 'pyghmi.ipmi.command.Maintenance().start()'
    and from that point forward, pyghmi should execute any needed ongoing
    tasks automatically as needed. This is an alternative to calling
    wait_for_rsp or eventloop in a thread of the callers design.
    """

    def run(self):
        Command.eventloop()
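Putting that together, a minimal sketch of a long-lived session would create the Command with keepalive enabled and run the shared event loop in a daemon thread, which is all Housekeeper does. This assumes the pyghmi ~1.5.x API described above; the import is deferred into the function so the snippet stands alone:

```python
import threading

def start_ipmi_session(ip, user, passwd):
    """Create a pyghmi Command and keep its session alive in the background."""
    from pyghmi.ipmi import command  # requires the pyghmi package

    # keepalive=True (the default) makes the event loop send keepalive
    # messages, so the session does not log itself out after 30s idle.
    ipmi = command.Command(ip, user, passwd, keepalive=True)
    # Run the shared event loop in a background thread, as Housekeeper does.
    housekeeper = threading.Thread(target=command.Command.eventloop, daemon=True)
    housekeeper.start()
    return ipmi
```

With the event loop running, the same Command instance can then be reused across HTTP requests instead of being recreated each time.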

multi-thread application with queue, best approach to deliver each reply to the right caller?

Consider a multi-thread application, in which different pieces of code send commands to a background thread/service through a command queue, and consequently the service puts the replies in a reply queue. Is there a commonly accepted “strategy” for ensuring that a specific reply gets delivered to the rightful caller?
Coming to my specific case (a program in Python 3), I was thinking about setting both the command and reply queues to maxsize=1, so that each caller can just put a command and wait for the reply (which will surely be its own), but this could hurt the performance of the application. Alternatively, I could send a unique code (a hash or similar) with each command and have the background service include that same code in the reply, so that a caller can go through the replies looking for its own and put the others back in the queue. Honestly, I don't like either of them. Is there something else that could be done?
I’m asking this because I’ve spent a fair amount of hours investigating online about threading, and reading through the official documentation, but I couldn’t make up my mind on this. I’m unsure which could be the right/best approach and most importantly I’d like to know if there is a mainstream approach to achieve this.
I don’t provide any code because the question deals with general application design.
Associating a unique identifier with each request is basically the standard solution to this problem.
This is the solution employed by protocols from various eras, from DNS to HTTP/2.
You can build whatever abstractions you like on top of it. Consider this semi-example using Twisted's Deferred:
def request(args):
    uid = next(id_generator)
    request_queue.put((uid, args))
    result = waiting[uid] = Deferred()
    return result

def process_responses():
    uid, response = response_queue.get()
    result = waiting.pop(uid)
    result.callback(response)

@inlineCallbacks
def foo_doer():
    foo = yield request(...)
    # foo is the response from the response queue.
The basic mechanism is nothing more than unique-id-tagged items in the two queues. But the user isn't forced to track these UIDs. Instead, they get an easy-to-use abstraction that just gives them the result they want.
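The same tagging mechanism works without Twisted. Here is a self-contained stdlib sketch using concurrent.futures.Future in place of Deferred: the caller gets back a Future keyed by a unique id, and the background service resolves it (the "service" here is hypothetical and simply doubles each request):

```python
import itertools
import queue
import threading
from concurrent.futures import Future

request_queue = queue.Queue()
waiting = {}                      # uid -> Future awaiting its reply
id_generator = itertools.count()

def request(args):
    # Tag the command with a unique id and hand back a Future the
    # caller can block on (or attach callbacks to).
    uid = next(id_generator)
    fut = waiting[uid] = Future()
    request_queue.put((uid, args))
    return fut

def service():
    # Background service: resolve each command's Future by its uid.
    while True:
        uid, args = request_queue.get()
        if args is None:          # sentinel to shut the service down
            break
        waiting.pop(uid).set_result(args * 2)

worker = threading.Thread(target=service, daemon=True)
worker.start()

f = request(21)
print(f.result(timeout=5))        # → 42
request_queue.put((None, None))   # stop the service
```

Because each reply is routed by uid, no caller can ever receive another caller's result, and neither queue needs maxsize=1.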

How can I speed this up? (urllib2, requests)

Problem: I am trying to validate a captcha that can be anything from 0000-9999. Using the normal requests module, it takes around 45 minutes to go through all of them (0000-9999). How can I multithread this or speed it up? It would be really helpful if I could get the HTTP status code from the site to see whether the code is correct (200 = correct, 400 = incorrect). If I could get two examples (GET and POST) of this, that would be fantastic!
I have been searching for quite some time, most of the modules I look at are outdated (I have been using grequests recently)
example url = https://www.google.com/
example params = captcha=0001
example post data = {"captcha":0001}
Thank you!
You really shouldn't be trying to bypass a captcha programmatically!
You could use several threads to make simultaneous requests, but at that point the service you're attacking will most likely ban your IP. At the very least, they've probably got throttling on the service; there's a reason it's supposed to take 45 minutes.
Threading in Python is usually achieved by creating a thread object with a run() method containing your long running code. In your case, you might want to create a thread object which takes a number range to poll. Once instantiated, you'd call the .start() method to have that thread begin working. If any thread should get a success message it would return a message to the main thread, halt itself, and the main thread could then tell all the other threads in the thread pool to stop.
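As a generic illustration of that pattern only: here each thread scans a slice of the 0000-9999 space, and a shared Event halts the others once one succeeds. The check() function is a stand-in for the real HTTP request (e.g. testing for a 200 status), and the matching code "0421" is hypothetical:

```python
import threading

found = threading.Event()
result = []

def check(code):
    # Stand-in for the real HTTP request; here we just match a fixed value.
    return code == "0421"

def scan(start, stop):
    for n in range(start, stop):
        if found.is_set():
            return            # another thread already succeeded: halt
        code = f"{n:04d}"
        if check(code):
            result.append(code)
            found.set()       # signal the other threads to stop

# Four threads, each polling a quarter of the number range.
threads = [threading.Thread(target=scan, args=(i, i + 2500))
           for i in range(0, 10000, 2500)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result)  # → ['0421']
```

The main thread simply joins the workers; in a real program it would read the success message from the shared state once the Event fires.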

Understanding defer.DeferredQueue() in a proxy example

I am trying to understand a simple python proxy example using Twisted located here. The proxy instantiates a Server Class, which in turn instantiates a client class. defer.DeferredQueue() is used to pass data from client class to server class.
I am now trying to understand how defer.DeferredQueue() works in this example. For example what is the significance of this statement:
self.srv_queue.get().addCallback(self.clientDataReceived)
and its analogous statement:
self.cli_queue.get().addCallback(self.serverDataReceived)
What happens when self.cli_queue.put(False) or self.cli_queue = None is executed?
Just trying to get into grips with Twisted now, so things seems pretty daunting. A small explanation of how things are connected would make it far more easy to get into grips with this.
According to the documentation, DeferredQueue has a normal put method to add object to queue and a deferred get method.
The get method returns a Deferred object. You add a callback method (e.g. serverDataReceived) to that object. Whenever an object becomes available in the queue, the Deferred invokes the callback with the object passed as its argument. If the queue is empty, or the serverDataReceived method hasn't finished executing, your program still continues to execute the next statements; when a new object becomes available, the callback is called regardless of where your program is in its execution.
In other words, it is an asynchronous flow, as opposed to a synchronous flow model in which you might have a blocking queue, i.e., your program waits until the next object is available in the queue before continuing.
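To make the contrast concrete, here is a toy, single-threaded stand-in for that behavior (this mimics the semantics described above; it is not Twisted's implementation): get() registers a callback instead of blocking, and put() fires a parked callback immediately.

```python
class CallbackQueue:
    """Minimal stand-in for Twisted's DeferredQueue: get() registers a
    callback instead of blocking; put() fires a waiting callback at once."""

    def __init__(self):
        self._items = []     # objects put before anyone asked for them
        self._waiting = []   # callbacks parked while the queue was empty

    def get(self, callback):
        if self._items:
            callback(self._items.pop(0))
        else:
            self._waiting.append(callback)   # don't block: park the callback

    def put(self, item):
        if self._waiting:
            self._waiting.pop(0)(item)       # deliver to a parked callback
        else:
            self._items.append(item)

received = []
q = CallbackQueue()
q.get(received.append)   # queue is empty: callback is parked, no blocking
print(received)          # → []
q.put("chunk")           # arrival of the item fires the parked callback
print(received)          # → ['chunk']
```

A queue.Queue, by contrast, would have blocked inside get() until "chunk" arrived.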
In your example program, self.cli_queue.put(False) adds a False object to the queue. It is a sort of flag telling the ProxyClient thread that there won't be any more data added to the queue, so it should disconnect the remote connection. You can refer to this portion of code:
def serverDataReceived(self, chunk):
    if chunk is False:
        self.cli_queue = None
        log.msg("Client: disconnecting from peer")
        self.factory.continueTrying = False
        self.transport.loseConnection()
Setting cli_queue = None just discards the queue after the connection is closed.

Python Tornado - making POST return immediately while async function keeps working

so I have a handler below:
class PublishHandler(BaseHandler):
    def post(self):
        message = self.get_argument("message")
        some_function(message)
        self.write("success")
The problem that I'm facing is that some_function() takes some time to execute and I would like the post request to return straight away when called and for some_function() to be executed in another thread/process if possible.
I'm using berkeley db as the database and what I'm trying to do is relatively simple.
I have a database of users, each with a filter. If the filter matches the message, the server sends the message to the user. Currently I'm testing with thousands of users, so upon each publication of a message via a POST request it iterates through thousands of users to find a match. This is my naive implementation, hence my question: how do I do this better?
You might be able to accomplish this by using your IOLoop's add_callback method like so:
loop.add_callback(lambda: some_function(message))
Tornado will execute the callback in the next IOLoop pass, which may (I'd have to dig into Tornado's guts to know for sure, or alternatively test it) allow the request to complete before that code gets executed.
The drawback is that that long-running code you've written will still take time to execute, and this may end up blocking another request. That's not ideal if you have a lot of these requests coming in at once.
The more foolproof solution is to run it in a separate thread or process. The best way in Python is to use a process, due to the GIL (I'd highly recommend reading up on that if you're not familiar with it). However, on a single-processor machine the threaded implementation will work just fine and may be simpler to implement.
If you're going the threaded route, you can build a nice "async executor" module with a mutex, a thread, and a queue. Check out the multiprocessing module if you want to go the route of using a separate process.
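A minimal sketch of such an "async executor" with a worker thread and a job queue might look like the following (the names are illustrative, not a Tornado API):

```python
import queue
import threading

class AsyncExecutor:
    """Toy 'async executor': a single worker thread drains a job queue,
    so a request handler can enqueue work and return immediately."""

    def __init__(self):
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, func, *args):
        # Returns at once; the worker runs func(*args) later.
        self._jobs.put((func, args))

    def _run(self):
        while True:
            func, args = self._jobs.get()
            func(*args)
            self._jobs.task_done()

    def join(self):
        self._jobs.join()   # wait for all queued jobs (handy in tests)

executor = AsyncExecutor()
results = []
executor.submit(results.append, "message")  # handler would return here
executor.join()
print(results)  # → ['message']
```

In the handler above, some_function(message) would become executor.submit(some_function, message), letting self.write("success") run without waiting for the match to complete.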
I've tried this, and I believe the request does not complete before the callbacks are called.
I think a dirty hack would be to call two levels of add_callback, e.g.:
def get(self):
    ...
    def _defered():
        ioloop.add_callback(<whatever you want>)
    ioloop.add_callback(_defered)
    ...
...
But these are hacks at best. I'm looking for a better solution right now, probably will end up with some message queue or simple thread solution.
