DBus/GLib Main Loop, Background Thread - Python

I'm starting out with DBus and event-driven programming in general. The service that I'm trying to create really consists of three parts, but two of them are "server" things.
1) The actual DBus server talks to a remote website over HTTPS, manages sessions, and conveys info to the clients.
2) The other part of the service calls a keep-alive page every 2 minutes to keep the session active on the external website.
3) The clients make calls to the service to retrieve info from the service.
I found some simple example programs and I'm trying to adapt them to prototype #1 and #2. Rather than building separate programs for both, I thought I could run them in a single, two-threaded process.
The problem that I'm seeing is that I call time.sleep(X) in my keep-alive thread. The thread goes to sleep but never wakes up. I suspect the GIL isn't being released by the GLib main loop.
Here's my thread code:
class Keepalive(threading.Thread):
    def __init__(self, interval=60):
        super(Keepalive, self).__init__()
        self.interval = interval
        bus = dbus.SessionBus()
        self.remote = bus.get_object("com.example.SampleService", "/SomeObject")

    def run(self):
        while True:
            print('sleep %i' % self.interval)
            time.sleep(self.interval)
            print('sleep done')
            reply_status = self.remote.keepalive()
            if reply_status:
                print('Keepalive: Success')
            else:
                print('Keepalive: Failure')
From the print statements, I know that the sleep starts, but I never see "sleep done."
Here is the main code:
if __name__ == '__main__':
    try:
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        session_bus = dbus.SessionBus()
        name = dbus.service.BusName("com.example.SampleService", session_bus)
        object = SomeObject(session_bus, '/SomeObject')

        mainloop = gobject.MainLoop()

        ka = Keepalive(15)
        ka.start()
        print('Begin main loop')
        mainloop.run()
    except Exception as e:
        print(e)
    finally:
        ka.join()
Some other observations:
I see the "Begin main loop" message, so I know it's getting control. Then I see "sleep %i," and after that, nothing.
If I hit ^C, I then see "sleep done." After ~20 seconds, I get an exception from self.run() saying that the remote application didn't respond:
DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
What's the best way to run my keep alive code within the server?
Thanks,

You have to explicitly enable multithreading when using gobject by calling gobject.threads_init(). See the PyGTK FAQ for background info.
Besides that, for the purpose you're describing, timeouts seem to be a better fit. Use them as follows:
# Enable timer
self.timer = gobject.timeout_add(time_in_ms, self.remote.keepalive)
# Disable timer
gobject.source_remove(self.timer)
This calls the keepalive function every time_in_ms milliseconds. Further details, again, can be found in the PyGTK reference.
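For context, here's a minimal, untested sketch of how that might fit into the main program from the question. SomeObject and the bus name are the ones from the question; do_keepalive() is a placeholder for "call the keep-alive page over HTTPS", not anything defined in the original post.
# Minimal sketch: the keep-alive runs as a GLib timeout inside the same
# main loop, so no second thread is needed.
import dbus
import dbus.mainloop.glib
import dbus.service
import gobject

def do_keepalive():
    # ... contact the external website here (placeholder) ...
    print('Keepalive fired')
    return True  # returning True keeps the timeout scheduled

if __name__ == '__main__':
    gobject.threads_init()  # only needed if you also keep your own threads
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

    session_bus = dbus.SessionBus()
    name = dbus.service.BusName("com.example.SampleService", session_bus)
    object = SomeObject(session_bus, '/SomeObject')

    gobject.timeout_add(2 * 60 * 1000, do_keepalive)  # every 2 minutes
    gobject.MainLoop().run()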

Related

How to timeout Paramiko sftp.put() with signal module or other in Python?

I would like to timeout the function sftp.put(). I have tried with the signal module, but the script doesn't die if the upload takes more than 10 s.
I use it to transfer files over SSH (paramiko).
[...]
def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't upload the fileeeeeeeeeeee!!!!")
[...]
raspi = paramiko.SSHClient()
raspi.set_missing_host_key_policy(paramiko.AutoAddPolicy())
raspi.connect(ip, username="", password="", timeout=10)
sftp = raspi.open_sftp()
[...]
signal.signal(signal.SIGALRM, handler)
signal.alarm(10)
sftp.put(source, destination, callback=None, confirm=True)
signal.alarm(0)
raspi.close()
[...]
Update 1:
I want to abort the transfer if the server stops responding for a while. My Python script checks (in a loop) for any files in a folder and sends them to this remote server. The problem is that I want to leave this function if the server suddenly becomes inaccessible during a transfer (server IP changing, no internet anymore, ...). But when I simulate a disconnection, the script stays stuck in sftp.put() anyway.
Update 2:
When the server goes offline during a transfer, put() seems to be blocked forever. This happens even with this line:
sftp.get_channel().settimeout(xx)
What to do when we lose the channel?
Update 3 & script goal
Ubuntu 18.04 and Paramiko version 2.6.0
Hello,
To follow up on your remarks and questions, I have to give more details about my very ugly script, sorry about that :)
Actually, I don't want to have to kill a thread manually and open a new one. For my application I want the script to run totally autonomously, and if something goes wrong during the process, it should still carry on. For that I use Python exception handling. Everything does what I want except when the remote server goes off during a transfer: the script stays blocked in the put() function, I think inside a loop.
Below, the script contains a total of three timeout mechanisms thanks to your help, but apparently nothing can break out of this damned sftp.put()! Do you have any new ideas?
Import […]
[...]
def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't upload the fileeeeeeeeeeee!!!!")

def check_time(size, file_size):
    global start_time
    if (time.time() - start_time) > 10:
        raise Exception

i = 0
while i == 0:
    try:
        time.sleep(1)  # CPU break
        print ("go!")
        # collect ip server
        fichierIplist = open("/home/robert/Documents/iplist.txt", "r")
        file_lines = fichierIplist.readlines()
        fichierIplist.close()
        last_line = file_lines[len(file_lines) - 1]
        lastKnowip = last_line
        data = glob.glob("/home/robert/Documents/data/*")
        items = len(data)
        if items != 0:
            time.sleep(60)  # anyway
            print("some Files!:)")
            raspi = paramiko.SSHClient()
            raspi.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            raspi.connect(lastKnowip, username="", password="", timeout=10)
            for source in data:  # Upload file by file
                filename = os.path.basename(source)
                destination = '/home/pi/Documents/pest/' + filename
                sftp = raspi.open_sftp()
                signal.signal(signal.SIGALRM, handler)
                signal.alarm(10)
                sftp.get_channel().settimeout(10)
                start_time = time.time()
                sftp.put(source, destination, callback=check_time)
                sftp.close()
                signal.alarm(0)
            raspi.close()
        else:
            print("noFile!")
    except:
        pass
If you want to time out when the server stops responding:
set the timeout argument of SSHClient.connect (you're doing that already),
and set sftp.get_channel().settimeout, as already suggested by @EOhm.
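Taken together, a minimal sketch of those two timeouts might look like this (the host, credentials, and the 10-second values below are placeholders, not taken from the question):
# Sketch only: the connect timeout covers the TCP/SSH handshake, the channel
# timeout covers individual SFTP packets once the session is up.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="user", password="secret", timeout=10)

sftp = client.open_sftp()
sftp.get_channel().settimeout(10)  # raises socket.timeout if the server goes silent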
If you want to time out even when the server is responding, but slowly, use the callback argument to abort the transfer after a certain time:
start_time = time.time()

def check_time(size, file_size):
    global start_time
    if (time.time() - start_time) > ...:
        raise Exception

sftp.put(source, destination, callback=check_time)
This won't cancel the transfer immediately. To optimize transfer performance, Paramiko queues the write requests to the server. Once you attempt to cancel the transfer, Paramiko has to wait for the responses to those requests in SFTPFile.close() to clear the queue. You might solve that by using SFTPClient.putfo() and avoiding calling SFTPFile.close() when the transfer is cancelled. But you won't be able to use the connection afterwards. Of course, you can also skip the optimization, and then you can cancel the transfer without delays. But that kind of defeats the point of all this, doesn't it?
Alternatively, you can run the transfer in a separate thread and kill the thread if it takes too long. Ugly but sure solution.
Use sftp.get_channel().settimeout(s) for that instead.
After trying a lot of things, and with your help and advice, I have found a reliable solution for what I wanted. I execute sftp.put in a separate thread and my script does what I want.
Many thanks for your help.
Now if the server shuts down during a transfer, my script moves on after 60 seconds using:
[...]
import threading
[...]
th = threading.Thread(target=sftp.put, args=(source, destination))
th.start()
th.join(60)
[...]
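For reference, a slightly fuller sketch of the same pattern. The raspi, sftp, source, and destination names are the ones from the script above; the daemon flag and the is_alive() check are additions for illustration, not part of the original solution.
import threading

th = threading.Thread(target=sftp.put, args=(source, destination))
th.daemon = True   # an abandoned transfer won't keep the process alive
th.start()
th.join(60)        # wait at most 60 seconds for the upload

if th.is_alive():
    # put() is still stuck: give up on this connection and reconnect later.
    print("transfer timed out")
    raspi.close()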

Consuming rabbitmq queue from inside python threads

This is a long one.
I have a list of usernames and passwords. For each one I want to log in to the account and do some things. I want to use several machines to do this faster. The way I was thinking of doing this is to have a main machine whose job is just running a cron that checks from time to time whether the rabbitmq queue is empty. If it is, it reads the list of usernames and passwords from a file and sends it to the rabbitmq queue. Then I'd have a bunch of machines subscribed to that queue whose job is to receive a user/pass, do stuff with it, acknowledge it, and move on to the next one until the queue is empty, at which point the main machine fills it up again. So far I think I have everything down.
Now comes my problem. I have checked that the things to be done with each user/pass aren't so intensive, so I could have each machine doing three of them simultaneously using Python's threading. In fact, for a single machine I have implemented this: I load the user/passes into a Python Queue() and then have three threads consume that Queue(). Now I want to do something similar, but instead of consuming from a Python Queue(), each thread of each machine should consume from a rabbitmq queue. This is where I'm stuck. To run tests, I started with rabbitmq's tutorial.
send.py:
import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
message = ' '.join(sys.argv[1:])
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body=message)
connection.close()
worker.py
import time, pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print ' [x] received %r' % (body,)
    time.sleep(body.count('.'))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='hello', no_ack=False)
channel.start_consuming()
With the above, you can run two worker.py instances, which will subscribe to the rabbitmq queue and consume as expected.
My threading without rabbitmq is something like this:
runit.py
class Threaded_do_stuff(threading.Thread):
    def __init__(self, user_queue):
        threading.Thread.__init__(self)
        self.user_queue = user_queue

    def run(self):
        while True:
            login = self.user_queue.get()
            do_stuff(user=login[0], pass=login[1])
            self.user_queue.task_done()

user_queue = Queue.Queue()
for i in range(3):
    td = Threaded_do_stuff(user_queue)
    td.setDaemon(True)
    td.start()

## fill up the queue
for user in list_users:
    user_queue.put(user)

## go!
user_queue.join()
This also works as expected: you fill up the queue and have three threads consume it. Now what I want to do is something like runit.py, but instead of using a Python Queue(), use something like worker.py where the queue is actually a rabbitmq queue.
Here's something I tried that didn't work (and I don't understand why):
rabbitmq_runit.py
import time, threading, pika

class Threaded_worker(threading.Thread):
    def callback(self, ch, method, properties, body):
        print ' [x] received %r' % (body,)
        time.sleep(body.count('.'))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def __init__(self):
        threading.Thread.__init__(self)
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue='hello')
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.callback, queue='hello')

    def run(self):
        print 'start consuming'
        self.channel.start_consuming()

for _ in range(3):
    print 'launch thread'
    td = Threaded_worker()
    td.setDaemon(True)
    td.start()
I would expect this to launch three threads, each of which is blocked by .start_consuming(), which just stays there waiting for the rabbitmq queue to send them something. Instead, the program starts, does some prints, and exits. What it prints before exiting is weird too:
launch thread
launch thread
start consuming
launch thread
start consuming
In particular, notice that one "start consuming" is missing.
What's going on?
EDIT: One answer I found to a similar question is here
Consuming a rabbitmq message queue with multiple threads (Python Kombu)
and the answer is to "use celery", whatever that means. I don't buy it; I shouldn't need anything remotely as sophisticated as celery. In particular, I'm not trying to set up an RPC and I don't need to read replies from the do_stuff routines.
EDIT 2: The print pattern that I expected would be the following. I do
python send.py first message......
python send.py second message.
python send.py third message.
python send.py fourth message.
and the print pattern would be
launch thread
start consuming
[x] received 'first message......'
launch thread
start consuming
[x] received 'second message.'
launch thread
start consuming
[x] received 'third message.'
[x] received 'fourth message.'
The problem is that you're making the thread daemonic:
td = Threaded_worker()
td.setDaemon(True) # Shouldn't do that.
td.start()
Daemonic threads will be terminated as soon as the main thread exits:
A thread can be flagged as a “daemon thread”. The significance of this
flag is that the entire Python program exits when only daemon threads
are left. The initial value is inherited from the creating thread. The
flag can be set through the daemon property.
Leave out setDaemon(True) and you should see it behave the way you expect.
Also, the pika FAQ has a note about how to use it with threads:
Pika does not have any notion of threading in the code. If you want to
use Pika with threading, make sure you have a Pika connection per
thread, created in that thread. It is not safe to share one Pika
connection across threads.
This suggests you should move everything you're doing in __init__() into run(), so that the connection is created in the same thread you're actually consuming from the queue in.
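Putting both changes together, a sketch of the worker might look like the following. It is based on the question's queue name and pika version (the old basic_consume(callback, queue=...) signature); treat it as a sketch, not a drop-in replacement.
# Sketch: each thread creates its own connection inside run(), and the
# threads are not daemonic, so the main thread waits for them.
import threading, time, pika

class Threaded_worker(threading.Thread):
    def callback(self, ch, method, properties, body):
        print ' [x] received %r' % (body,)
        time.sleep(body.count('.'))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def run(self):
        # Connection is created in the consuming thread, per the pika FAQ.
        connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='hello')
        channel.basic_qos(prefetch_count=1)
        channel.basic_consume(self.callback, queue='hello')
        print 'start consuming'
        channel.start_consuming()

for _ in range(3):
    td = Threaded_worker()
    td.start()  # no setDaemon(True), so the process stays alive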

Python Tornado - Asynchronous Request is blocking

The request handlers are as follows:
class TestHandler(tornado.web.RequestHandler):  # localhost:8888/test
    @tornado.web.asynchronous
    def get(self):
        t = threading.Thread(target=self.newThread)
        t.start()

    def newThread(self):
        print "new thread called, sleeping"
        time.sleep(10)
        self.write("Awake after 10 seconds!")
        self.finish()

class IndexHandler(tornado.web.RequestHandler):  # localhost:8888/
    def get(self):
        self.write("It is not blocked!")
        self.finish()
When I GET localhost:8888/test, the page loads for 10 seconds and then shows "Awake after 10 seconds!"; while it is loading, if I open localhost:8888/index in a new browser tab, the index page is not blocked and loads instantly. This fits my expectations.
However, while the /test is loading, if I open another /test in a new browser tab, it is blocked. The second /test only starts processing after the first has finished.
What mistakes have I made here?
What you are seeing is actually a browser limitation, not an issue with your code. I added some extra logging to your TestHandler to make this clear:
class TestHandler(tornado.web.RequestHandler):  # localhost:8888/test
    @tornado.web.asynchronous
    def get(self):
        print "Thread starting %s" % time.time()
        t = threading.Thread(target=self.newThread)
        t.start()

    def newThread(self):
        print "new thread called, sleeping %s" % time.time()
        time.sleep(10)
        self.write("Awake after 10 seconds! %s" % time.time())
        self.finish()
If I open two curl sessions to localhost/test simultaneously, I get this on the server side:
Thread starting 1402236952.17
new thread called, sleeping 1402236952.17
Thread starting 1402236953.21
new thread called, sleeping 1402236953.21
And this on the client side:
Awake after 10 seconds! 1402236962.18
Awake after 10 seconds! 1402236963.22
Which is exactly what you expect. However in Chromium, I get the same behavior as you. I think that Chromium (perhaps all browsers) will only allow one connection at a time to be opened to the same URL. I confirmed this by making IndexHandler run the same code as TestHandler, except with slightly different log messages. Here's the output when opening two browser windows, one to /test, and one to /index:
index Thread starting 1402237590.03
index new thread called, sleeping 1402237590.03
Thread starting 1402237592.19
new thread called, sleeping 1402237592.19
As you can see both ran concurrently without issue.
I think you picked the "wrong" test for checking parallel GET requests, because you're using a blocking function in your test: time.sleep(). That kind of blocking doesn't really happen when you simply render an HTML page.
What happens is that get() (which handles all GET requests) is blocked while time.sleep() runs; it cannot process any new GET requests, so they end up in a kind of queue.
So if you really want to test sleep(), use the Tornado non-blocking function tornado.gen.sleep().
Example:
from tornado import gen

@gen.coroutine
def get(self):
    yield self.time_wait()

@gen.coroutine
def time_wait(self):
    yield gen.sleep(15)
    self.write("done")
Open multiple tabs in your browser and you'll see that all requests are processed as they arrive, without "queueing" the new requests that come in.

Can't catch SIGINT in multithreaded program

I've seen many topics about this particular problem, but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries - retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()

if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0, 255, 0)
        s.set_back_led_output(255)
        s.configure_locator(0, 0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script, I would like to be able to stop it by hitting Control+C, which should call the disconnect() function that stops all the other threads.
In the code I provided it doesn't work, because there is no thread in the main function. But I already tried putting all the instructions from main into a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks
Your indentation is messed up, but there's enough to go on.
Your main thread isn't catching SIGINT because it's not alive. There is nothing that stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero. I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment - join your threads to the main thread so that the main thread can't finish execution before the worker threads.
for t in my_thread_list:
    t.join()  # main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack something in that blocks, i.e.
raw_input('Press Enter to disconnect')
s.disconnect()
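If there is nothing to join, another option is to keep the main thread idling so it stays alive to receive the KeyboardInterrupt. A rough sketch based on the code above (the sleep loop is an addition, not part of the original script):
import time

try:
    s = Sphero("/dev/rfcomm0")
    s.connect()
    while True:
        time.sleep(1)   # keep the main thread alive and interruptible
except KeyboardInterrupt:
    s.disconnect()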

pys60 thread not working in symbian python

I am developing an SMS app for Symbian using PyS60.
I have created a thread for sending the SMS, but the thread is not working.
I want this thread to run in the background, irrespective of whether the application is closed or not.
contact_index is a dictionary with contact numbers and names.
def send_sms(contact_index):
    import thread
    appuifw.note(u"entered to send sms thread")
    tid = thread.start_new_thread(send_sms_thread, (contact_index, ))
    appuifw.note(u"completed")
It shows "entered to send sms thread" but doesn't go any further than that.
The function send_sms_thread is:
def send_sms_thread(contact_index):
    appuifw.note(u"entering the thread in sending sms in loops")
    for numbers in contact_index:
        name = contact_index[number]
        appuifw.note(u"sending sms to %s ." % name)
        messaging.sms_send(numbers, message_content, '7bit', cb, '')
        e32.ao_sleep(30)
Can anyone tell me why it is not entering this thread, which should run in the background irrespective of whether the application is closed or not?
Use the threading module. Threads created by this module will be waited on by the main thread before the process exits.
thread = threading.Thread(target=send_sms_thread, args=(contact_index,))
thread.start()
Threads created elsewhere, or with the daemon attribute set, are not waited for.
Try the following snippet:
if __name__ == '__main__':
    th = e32.ao_callgate(Udp_recv)
    thread.start_new_thread(th, ())
    for i in range(10):
        tmp = (str(i) + data)[0:10]
Udp_recv is the function running in the background.
