ThreadingTcpServer and Threads: how to shutdown the server from the main thread? - python

My program needs a server that listens for and handles connections, and since ThreadingTcpServer does the whole job for me I decided to use it.
I noticed that ThreadingTcpServer.serve_forever() is a blocking method, so I decided to run the server in a thread of its own.
My code is:
server = None  # a variable that will contain the server

def createAndStartServer():
    print "starting..."
    server = ThreadingTcpServer((socket.gethostname(), 1234), MyRequestHandler)
    server.serve_forever()  # blocking method
    print "stopping..."

myThread = threading.Thread(target=createAndStartServer)
myThread.start()

time.sleep(3)
server.shutdown()  # this should stop the server thread from receiving further requests
The idea is to send a shutdown to the server; that way the thread will at most finish the requests it is already serving and then exit the serve_forever() loop.
That would cause the thread itself to stop, since it returns from the createAndStartServer function. I don't know if this is the best approach, but it seems logical to me, and in Java I often do the same by flipping a boolean variable that controls the server loop... I think the shutdown method does something like that, right?
Anyway I got a:
AttributeError: 'NoneType' object has no attribute 'shutdown'
It seems that the server variable is not populated at all by the thread.
Why? And while we are at it, tell me if my logic is correct or whether you have a better idea for handling this.

You are missing a "global server" statement inside your createAndStartServer().
By default, Python treats the assignment server = ThreadingTcpServer(...) as a new variable with local scope. That explains why the later call from the main thread fails on the None value.
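A minimal corrected sketch, in Python 3 syntax using socketserver.ThreadingTCPServer (the handler, host, and sleep-based synchronization are placeholders for illustration):

```python
import socketserver
import threading
import time

class MyRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"ok")

server = None

def createAndStartServer():
    global server  # without this, 'server' below would be a new local variable
    print("starting...")
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), MyRequestHandler)
    server.serve_forever()  # blocks until shutdown() is called
    print("stopping...")

myThread = threading.Thread(target=createAndStartServer)
myThread.start()

time.sleep(1)  # crude; setting a threading.Event after construction would be more robust
server.shutdown()      # unblocks serve_forever()
server.server_close()  # release the listening socket
myThread.join()
```

Note that shutdown() must be called from a different thread than the one running serve_forever(), which is exactly the situation here.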


how to use multithreading with pika and rabbitmq to perform Requests and Responses RPC Messages

I'm working on a project with RabbitMQ, using the RPC pattern: basically I receive (consume) messages from a queue, do some processing, and then send a response back. I'm using Pika, and my goal is to use one thread per task, so for every task I'll create a dedicated thread. I also read that the best practice is to open only one connection and as many channels under it as I want, but I always get this error:
'start_consuming may not be called from the scope of '
pika.exceptions.RecursionError: start_consuming may not be called from the scope of another BlockingConnection or BlockingChannel callback.
I did some research and found out that Pika is not thread safe and that each thread should use an independent connection and channel, but I don't want to do that since it's considered bad practice. So I wanted to ask here whether someone has already managed to make this work. I also read that it is possible if I don't use BlockingConnection to instantiate my connection, and that there is a function called add_callback_threadsafe which can make this possible, but unfortunately there are no examples for it; I read the documentation, but it's complex, and without examples it was hard for me to grasp what it describes.
My attempt was to declare two classes. Each class represents a task executor that consumes a message from a queue, does some processing based on it, and delivers a response back. My idea was to share one RabbitMQ connection between the two tasks, with each task getting an independent channel. In the code below, the rabbit parameter passed to the constructor is a class that holds some variables, like the connection, and functions like EventSubscriber, which when called assigns a new channel and starts consuming messages from a particular exchange and routing key. Next I declare a thread and give the subscribe (consume) function as the target of that thread. The other task class looks the same, so I'll only post this one. In the main class I open a connection to RabbitMQ and pass it as a parameter to the constructors of the two task classes.
class On_Deregistration:
    def __init__(self, rabbit):
        # 'rabbit' holds the connection shared between all tasks
        self.event(rabbit)

    def event(self, rabbit):
        # onDeregistrationFromHRS is the task listener
        self.Subscriber = rabbit.EventSubscriber(rabbit, 'testing.test', 'test', False, onDeregistrationFromHRS)

    def subscribeAsync(self):
        self.Subscriber.subscribe()  # here I call start_consuming

    def start(self):
        """Start the subscription in an independent thread."""
        thread = threading.Thread(target=self.subscribeAsync)
        thread.start()
        if thread.is_alive():
            print("asynchronous subscription started")
Main class:

class App:
    def __init__(self):
        self.rabbitMq = RabbitMqCommunicationInterface(host='localhost', port=5672)
        firstTask = On_Deregistration(self.rabbitMq)
        secondTask = SecondTask(self.rabbitMq)  # the second task class, analogous to the first

app = App()
error : 'start_consuming may not be called from the scope of '
pika.exceptions.RecursionError: start_consuming may not be called from the scope of another BlockingConnection or BlockingChannel callback
I searched for the cause of this error, and obviously Pika is not thread safe, but there must be a solution for this. Maybe not using a BlockingConnection? Maybe someone can give me an example of how to do that, because I tried it and it didn't work. Maybe I'm missing something about how to implement multithreading with RabbitMQ.
After a long research I figured out that Pika is not thread safe, at least for the moment; maybe it will be in future versions. So for my project I stopped using Pika and am now using b-rabbit, a thread-safe wrapper around rabbitpy. I must say that Pika is a great library, and I find its API better described and structured than rabbitpy's, but for my project multithreading was mandatory, and that's why Pika was a bad choice here. I hope this helps someone in the future.
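For reference, the ownership pattern that pika's add_callback_threadsafe enables can be shown without a broker: a connection object is owned by one thread, and other threads only enqueue callbacks for the owner to run. This toy SingleOwnerConnection is not pika code, just a sketch of that contract:

```python
import queue
import threading

# Stand-in for a pika BlockingConnection: a single-owner object that must only
# be touched from the thread that created it. Worker threads hand work back to
# the owner through a thread-safe queue, which is the same idea as pika's
# connection.add_callback_threadsafe().
class SingleOwnerConnection:
    def __init__(self):
        self._callbacks = queue.Queue()
        self._owner = threading.current_thread()

    def add_callback_threadsafe(self, cb):
        self._callbacks.put(cb)  # safe to call from any thread

    def process_events(self, timeout=0.1):
        # Only the owning thread drains and runs the callbacks.
        assert threading.current_thread() is self._owner
        while True:
            try:
                cb = self._callbacks.get(timeout=timeout)
            except queue.Empty:
                return
            cb()

results = []
conn = SingleOwnerConnection()

def worker(n):
    # Heavy processing happens here; only the scheduling of the ack/publish
    # step is handed back to the connection's owning thread.
    conn.add_callback_threadsafe(lambda: results.append(n * n))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

conn.process_events(timeout=0.1)
print(sorted(results))  # [0, 1, 4, 9]
```

With real pika, process_events corresponds to the connection's own event loop; the workers never touch the connection or channel directly.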

Output reason for Python crash

I have an application which polls a bunch of servers every few minutes. To do this, it spawns one thread per server to poll (15 servers) and writes back the data to an object:
import requests

class ServerResults(object):
    def __init__(self):
        self.results = []

    def add_server(self, some_argument):
        self.results.append(some_argument)

def poll_server(server, results):
    response = requests.get(server, timeout=10)
    results.add_server(response.status_code)

servers = ['1.1.1.1', '1.1.1.2']
results = ServerResults()
for s in servers:
    t = CallThreads(poll_server, s, results)
    t.daemon = True
    t.start()
The CallThreads class is a helper to call a function (in this case poll_server()) with arguments (in this case s and results); you can see the source in my GitHub repo of Python utility functions. Most of the time this works fine; however, sometimes a thread intermittently hangs. I'm not sure why, since I am using a timeout on the GET request. In any case, if a thread hangs then the hung threads build up over the course of hours or days, and then Python crashes:
  File "/usr/lib/python2.7/threading.py", line 495, in start
    _start_new_thread(self.__bootstrap, ())
thread.error: can't start new thread
Exception in thread Thread-575 (most likely raised during interpreter shutdown)
Exception in thread Thread-1671 (most likely raised during interpreter shutdown)
Exception in thread Thread-831 (most likely raised during interpreter shutdown)
How might I deal with this? There seems to be no way to kill a blocking thread in Python. This application needs to run on a Raspberry Pi, so large libraries such as twisted won't fit, in fact I need to get rid of the requests library as well!
As far as I can tell, a possible scenario is that when a thread "hangs" for one given server, it will stay there "forever". Next time you query your servers another thread is spawned (_start_new_thread), up to the point where Python crashes.
Probably not your (main) problem, but you should:
use a thread pool - this won't stress the limited resources of your system as much as spawning new threads again and again.
check that you use a "thread-compatible" mechanism to handle concurrent access to results. Maybe a semaphore or mutex to lock atomic portions of your code. Probably better would be a dedicated data structure such as a queue.
Concerning the "hang" per se -- beware that the timeout argument when "opening a URL" (urlopen) relates to establishing the connection, not to downloading the actual data:
The optional timeout parameter specifies a timeout in seconds for
blocking operations like the connection attempt (if not specified, the
global default timeout setting will be used). This actually only works
for HTTP, HTTPS and FTP connections.
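The thread pool plus queue suggestion above can be sketched with the standard library alone; poll() here is a hypothetical stand-in for the real poll_server(), and the worker count is arbitrary:

```python
import queue
import threading

# A minimal fixed-size thread pool fed by a job queue. Results flow through
# a thread-safe queue instead of a shared list, so no lock is needed.
def poll(server):
    return (server, 200)  # pretend every server answered HTTP 200

def worker(jobs, results):
    while True:
        server = jobs.get()
        if server is None:  # sentinel: shut this worker down
            return
        results.put(poll(server))

jobs, results = queue.Queue(), queue.Queue()
pool = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(4)]
for t in pool:
    t.start()

for s in ['1.1.1.1', '1.1.1.2']:
    jobs.put(s)
for _ in pool:  # one sentinel per worker
    jobs.put(None)
for t in pool:
    t.join()

print(results.qsize())  # 2
```

The pool size stays constant no matter how many polling rounds run, so a hung request can at worst stall one of the four workers rather than exhaust the thread limit.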

terminate/restart python script if all children are stopped and prevent other children from spawning

I have a socket server that uses threading to open a thread for each client that connects.
I also have two other threads that run constantly that are doing maintenance operations.
Basically there is the main thread plus two children running constantly, plus one child for each client that connects.
I want to be able to terminate or restart safely.
I would like to be able to trigger a termination function somehow that would instruct all child processes to terminate safely and then the parent could exit.
Any ideas?
Please do not suggest connecting as a client and sending a command that would trigger it; I've already thought of that.
I am looking for a way to do this by executing something in the console.
The Python socket server runs as a system service, and I would like to implement the termination in the init script.
The best way to do this is to set up a signal handler in your main thread, using the signal module. See: http://docs.python.org/library/signal.html. A good choice would be to trap the CTRL-C signal (SIGINT).
Please note that the signal handler can also be a class method, so you do not have to use a global method (it took me a while to discover that).
def __init__(self):
    signal.signal(signal.SIGINT, self.just_kill_me)

def just_kill_me(self, sig, frame):
    self.stopped = True
    for t in self.threads:
        t.join()
It is not possible to send the equivalent of a kill signal to a thread. Instead you should set a flag that will signal the children to stop.
Your child threads should run in a loop, periodically checking if the parent requests them to stop.
while not parent.stopped:
    do_some_maintenance_work
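Putting the two pieces together, a runnable sketch might look like this (maintenance_worker and the timings are illustrative; SIGTERM is used because that is what an init script's stop action typically sends, and handlers always run in the main thread):

```python
import os
import signal
import threading
import time

stopped = threading.Event()

def maintenance_worker():
    while not stopped.is_set():  # children poll the flag periodically
        time.sleep(0.05)         # stand-in for real maintenance work

def shutdown_handler(sig, frame):
    stopped.set()  # ask every child to finish its current iteration

signal.signal(signal.SIGTERM, shutdown_handler)

threads = [threading.Thread(target=maintenance_worker) for _ in range(2)]
for t in threads:
    t.start()

os.kill(os.getpid(), signal.SIGTERM)  # simulate 'kill <pid>' from the console
for t in threads:
    t.join()
print("all children stopped")
```

An Event is used instead of a bare boolean so the children could also block on stopped.wait() with a timeout rather than sleeping unconditionally.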

Twisted + qtreactor: How to cleanup after last window closed?

I have a Twisted/PyQt application that (among other things) connects to a bunch of remote resources. When the user closes the window I want to shut down all the connections, cleanly if possible, forcibly if not.
The problem is that by the time I go to close the connections, it appears that the reactor is no longer alive to let me do so.
Here's my app code:
# Create app and connect the Twisted/Qt reactors
app = QApplication(sys.argv)
qtreactor.qt4reactor.install()

# Shutdown Twisted when window is closed
@defer.inlineCallbacks
def stop():
    print "="*40, "Closing connections..."
    yield closeConnections()
    print "="*40, "closed."
    print "="*40, "Stopping reactor..."
    reactor.stop()
    print "="*40, "stopped."

app.connect(app, SIGNAL("lastWindowClosed()"), stop)

reactor.runReturn()
rc = app.exec_()
exit(rc)
And here's a stripped down version of my cleanup code:
@defer.inlineCallbacks
def closeConnections():
    for connection in connections:
        print "Closing connection #%s" % connection
        yield threads.deferToThread(popen("/foo/bar/cleanup %s" % connection))
        print "Connection closed."
The first print statement is reached, and the command is executed, but I never get the second one, nor do I go through the for loop multiple times.
Is my analysis correct? Is the problem that the reactor is already down, so I never hear back from threads.deferToThread? Or is there some other problem? Furthermore, how do I fix it?
Thanks,
Jonathan
I don't know exactly when that lastWindowClosed() signal fires. However, even if it fires early enough, before the reactor has shut down (preventing you from doing what you want to do), I'm sure that PyQt doesn't know what to do with the Deferred that is returned by your stop function. This means that the shutdown process will proceed merrily onward while your asynchronous cleanup code tries to run. Likely the GUI shutdown will finish before your network shutdown gets anywhere.
So, use reactor.addSystemEventTrigger('before', 'shutdown', stop) instead. I don't know if this will run slightly earlier or slightly later than lastWindowClosed(), but it will run early enough that the reactor will still be usable, and it will pay attention to the Deferred your function returns. Shutdown will be suspended, in fact, until that Deferred fires. This gives you all the time you need to do your cleanup.
Separately from all that, you shouldn't do threads.deferToThread(popen("/foo/bar/cleanup %s" % connection)):
You need to pass a callable to deferToThread, not the result of calling it: threads.deferToThread(popen, "/foo/bar/cleanup %s" % connection). As written, your code runs popen in the reactor thread and passes the resulting file object to the thread to be "called", which makes no sense, of course.
Mixing threads and child processes is iffy. You might get away with it most of the time, I dunno.
reactor.spawnProcess will let you run a child process without blocking, without threads, and without worrying about mixing threads and processes. See also twisted.internet.utils.getProcessOutput if you don't need all the features of spawnProcess (which you appear not to).
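The first point (pass the callable, not the result of calling it) can be demonstrated without Twisted, since deferToThread(f, *args) follows the same convention as the standard library's executor.submit; cleanup here is a hypothetical stand-in for running the /foo/bar/cleanup command:

```python
from concurrent.futures import ThreadPoolExecutor

def cleanup(connection):
    # hypothetical stand-in for running "/foo/bar/cleanup <connection>"
    return "closed %s" % connection

with ThreadPoolExecutor(max_workers=1) as pool:
    # Wrong shape: pool.submit(cleanup("1")) would call cleanup *here*, in
    # the calling thread, and then try to "submit" its string result.
    # Right shape: pass the callable and its arguments separately, exactly
    # as with twisted.internet.threads.deferToThread(cleanup, "1"):
    future = pool.submit(cleanup, "1")
    print(future.result())  # closed 1
```

The same wrong/right split applies verbatim to deferToThread: the function object goes first, its arguments follow.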

Python GTK/threading/sockets error

I'm trying to build a Python application using pyGTK, treads, and sockets. I'm having this weird error, but given all the modules involved, I'm not entirely sure where the error is. I did a little debugging with some print statements to narrow things down a bit and I think the error is somewhere in this snippet of code:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect(("localhost", 5005))
self.collectingThread = threading.Thread(target=self.callCollect)
self.collectingThread.daemon = True
self.collectingThread.start()

def callCollect(self):
    gobject.timeout_add(500, self.collectData)

def collectData(self):
    print "hello"
    try:
        print self.sock.recv(1024)
    except:
        print "except"
    print "return"
    return True
So basically what I'm trying to do is setup a socket, connect to a "server" script (which is really just another python script running locally), and create a separate thread to collect all incoming data from the server script. This thread is set to run the collectData method every 500 milliseconds.
After inserting the print statements into the collectData method here is what I notice when running the program:
- Initially the GUI is fully functional.
- Then the following is printed in the terminal:

hello
**all data received from the server script is printed here**
return
hello

- After that text is printed, the GUI becomes completely nonfunctional (buttons can't be pressed, etc.) and I have to force quit the application.
What seems to be happening is that the thread prints "hello", prints the data from the server, and prints "return". 500 milliseconds later, it runs the collectData method again, prints "hello", then tries to print data from the server. However, because there is no data left it raises an exception, but for some unknown reason it doesn't execute the code in the exception block and everything just hangs from there.
Any idea on what is going wrong?
timeout_add schedules the action to happen on the main thread -- so the recv just blocks the main thread (while it's waiting for data) and therefore the GUI. No exception is raised unless you put a timeout on the socket or set it non-blocking.
You need to delegate the receiving to the thread and the scheduling to the main loop, rather than vice versa, to get the effect you're after: have the thread, e.g., wait on an event object, and the scheduled action signal that event every 500 milliseconds.
No, obviously the sock.recv call blocks because the socket wasn't closed yet and socket receives are blocking by default. Make sure you close the connection at some point.
It would make more sense to run the receive call in a new thread, or else it might block the GUI because your current implementation runs the recv call in the GUI thread (using timeout_add). The way you're currently doing it only makes sense if the answer is received very fast and/or you have to access widgets.
By the way, creating a new thread for calling gobject.timeout_add is totally unnecessary. timeout_add() and idle_add() register the specified callback function and return immediately. The GTK event loop then automatically executes the callback after the timeout (or on idle status for idle_add).
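The delegation suggested above, where a worker thread owns the blocking recv() and hands data back, can be sketched without GTK; in real pyGTK code the final queue.get() would instead be drained from a gobject.idle_add or timeout_add callback on the main thread (the socketpair stands in for the connection to the server script):

```python
import queue
import socket
import threading

server_end, client_end = socket.socketpair()
incoming = queue.Queue()

def reader(sock):
    while True:
        data = sock.recv(1024)  # blocking, but only this thread waits
        if not data:            # connection closed: stop the thread
            return
        incoming.put(data)      # hand the chunk to the main thread

t = threading.Thread(target=reader, args=(client_end,), daemon=True)
t.start()

server_end.sendall(b"hello from the server script")
server_end.close()  # closing unblocks the final recv with b""

msg = incoming.get(timeout=2)  # in GTK, drain this queue from the main loop
print(msg)
```

This keeps the GUI thread free: it only ever performs the non-blocking queue drain, never the recv itself.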
