I want to kill a Thread that runs a server via pymodbus. When I use the .join() method, the server will continue running if a client is connected.
address = ("ip.of.my.pi", 5020)
server = Thread(target=StartTcpServer, args=(context, identity, address))
server.start()
def threaded(context, server):
    if master.poll:
        thread = Thread(target=server_loop, args=(context,))
        thread.start()
        time.sleep(5)
        master.after(100, lambda: threaded(context, server))
    else:
        server.join()

threaded(context, server)
The function server_loop runs a measurement, which stops in this setup. I cannot use a normal loop because, as you can see from master.after, I'm using Tkinter as the GUI.
To stop a pymodbus server running inside a thread, you have to call server.join() from another thread.
I'm trying to run a python http server in the background using threading. I came across several references that do the following:
import threading
import http.server
import socket
from http.server import HTTPServer, SimpleHTTPRequestHandler
debug = True
server = http.server.ThreadingHTTPServer((socket.gethostname(), 6666), SimpleHTTPRequestHandler)
if debug:
    print("Starting Server in background")
    thread = threading.Thread(target=server.serve_forever)
    thread.daemon = True
    thread.start()
else:
    print("Starting Server")
    print('Starting server at http://{}:{}'.format(socket.gethostname(), 6666))
    server.serve_forever()
When thread.daemon is set to True, the program will finish without starting the server (nothing running on port 6666).
And when I set thread.daemon to False, it starts the server in foreground and blocks the terminal until I kill it manually.
Any idea on how to make this work?
In both cases the server is launched in the background, in a separate thread. This means that thread.start() launches the server and Python continues executing the rest of the code in the main thread.
However, there seems to be nothing else to execute in your program. Python reaches the end of the file and the main thread is done.
Python waits for all non-daemon threads to finish before the process can exit. When thread.daemon is set to False, the interpreter waits until the server thread exits (which never happens, as the name serve_forever implies). When it is True, the process is closed as soon as the main thread is done.
Put whatever code you want to run alongside the server after thread.start() and you're done!
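For example, a minimal sketch of that idea (assuming Python 3.7+ for ThreadingHTTPServer; the input() prompt is just a stand-in for whatever the rest of your program actually does):
import http.server
import socket
import threading
from http.server import SimpleHTTPRequestHandler

server = http.server.ThreadingHTTPServer((socket.gethostname(), 6666), SimpleHTTPRequestHandler)

# daemon thread: it will not keep the process alive on its own
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()
print("Serving at http://{}:{}".format(socket.gethostname(), 6666))

# the main thread keeps running, so the daemon thread stays alive too
input("Press Enter to stop the server...")
server.shutdown()      # makes serve_forever() return
server.server_close()  # releases the listening socket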
I'm trying to create a threaded TCP socket server that can handle multiple socket requests at a time.
To test it, I launch several threads on the client side to see if my server can handle it. The first socket is printed successfully, but I get [Errno 32] Broken pipe for the others.
I don't know how to avoid it.
import threading
import socketserver
import graphitesend
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data != "":
            print(data)

class ThreadedTCPServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, host, port):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), ThreadedTCPRequestHandler)

    def stop(self):
        self.server_close()
        self.shutdown()

    def start(self):
        threading.Thread(target=self._on_started).start()

    def _on_started(self):
        self.serve_forever()

def client(g):
    g.send("test", 1)

if __name__ == "__main__":
    HOST, PORT = "localhost", 2003
    server = ThreadedTCPServer(HOST, PORT)
    server.start()
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    server.stop()
It's a little bit difficult to determine what exactly you're expecting to happen, but I think the proximate cause is that you aren't giving your clients time to run before killing the server.
When you construct a Thread object and call its start method, you're creating a thread, and getting it ready to run. It will then be placed on the "runnable" task queue on your system, but it will be competing with your main thread and all your other threads (and indeed all other tasks on the same machine) for CPU time.
Your multiple threads (main plus others) are also likely being serialized by the python interpreter's GIL (Global Interpreter Lock -- assuming you're using the "standard" CPython) which means they may not have even gotten "out of the gate" yet.
But then you're shutting down the server with server_close() before they've had a chance to send anything. That's consistent with the "Broken Pipe" error: your remaining clients are attempting to write to a socket that has been closed by the "remote" end.
You should collect the thread objects as you create them and put them in a list (so that you can reference them later). When you're finished creating and starting all of them, then go back through the list and call the .join method on each thread object. This will ensure that the thread has had a chance to finish. Only then should you shut down the server. Something like this:
threads = []
for n in range(7):
    th = threading.Thread(target=client, args=(g,))
    th.start()
    threads.append(th)

# All threads created. Wait for them to finish.
for th in threads:
    th.join()

server.stop()
One other thing to note is that all of your clients are sharing the same single connection to send to the server, so that your server will never create more than one thread: as far as it's concerned, there is only a single client. You should probably move the graphitesend.init into the client function if you actually want separate connections for each client.
(Disclaimer: I know nothing about graphitesend except what I could glean in a 15 second glance at the first result in google; I'm assuming it's basically just a wrapper around a TCP connection.)
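Putting both suggestions together, a hedged sketch of how the main block could look, reusing the imports, the ThreadedTCPServer class and the HOST/PORT values from the question; each client now opens its own graphitesend connection, so the server really does handle several connections:
def client():
    # each client creates its own connection, so the server spawns one handler thread per client
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    g.send("test", 1)

if __name__ == "__main__":
    HOST, PORT = "localhost", 2003
    server = ThreadedTCPServer(HOST, PORT)
    server.start()

    threads = []
    for n in range(7):
        th = threading.Thread(target=client)
        th.start()
        threads.append(th)

    # let every client finish before shutting the server down
    for th in threads:
        th.join()

    server.stop()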
I am writing a Python 3.5 program which handles some signals and serves this data to a small number of websocket clients.
I want the websocket server and the signal handling to happen in the same program, therefore I am using threading.
The problem is I don't know how to send data from the worker thread to the client.
The Websocket server is implemented with a simple library called "websockets". The server is set up and clients can connect and talk to the server within the "new websocket client has connected" handler.
The server is set up with the help of an event loop:
start_server = websockets.serve(newWsHandler, host, port)
loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
Because I want my program to do signal handling too, and loop.run_forever() is a blocking call, I create an endless worker thread before I start my server. This works as expected.
When the worker thread detects a signal change, it has to alert the connected websocket clients. But a simple client.send() does not work. Putting await in front of it does not work either (since that only works within coroutines, I think). I tried making a separate "async def" function and adding it to the event loop, but it gets a bit complicated because it's not on the same thread.
So the main question is: what is the best way to send something to a websocket client from a worker thread? I don't receive anything in response.
EDIT:
It will probably help if I add some mock code.
def signalHandler():
    # check signals
    ...
    if alert:
        connections[0].send("Alert")  # NEED HELP HERE

async def newWsHandler(websocket, path):
    connections.append(websocket)
    while True:
        # keep the connection open until the client disconnects
        msg = await websocket.recv()

# top level
connections = []
...

start_server = websockets.serve(newWsHandler, host, port)

signalThread = Thread(target=signalHandler)
signalThread.setDaemon(True)
signalThread.start()

loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
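For reference, one pattern that fits this setup (an assumption on my part, not something from the question): give the worker thread a reference to the event loop and schedule the send with asyncio.run_coroutine_threadsafe, which is safe to call from another thread. A minimal sketch along those lines, following the question's (older) websockets handler API; check_signals, the host and the port are hypothetical placeholders:
import asyncio
import time
from threading import Thread
import websockets

connections = []

def check_signals():
    # hypothetical stand-in for the real signal-reading code
    return False

async def newWsHandler(websocket, path):
    connections.append(websocket)
    while True:
        # keep the connection open until the client disconnects
        await websocket.recv()

def signalHandler(loop):
    while True:
        time.sleep(1)  # stand-in for the real polling interval
        if check_signals() and connections:
            # hand the coroutine to the event loop; this call is thread-safe
            asyncio.run_coroutine_threadsafe(connections[0].send("Alert"), loop)

loop = asyncio.get_event_loop()
start_server = websockets.serve(newWsHandler, "localhost", 8765)

signalThread = Thread(target=signalHandler, args=(loop,), daemon=True)
signalThread.start()

loop.run_until_complete(start_server)
loop.run_forever()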
I am developing a server (daemon).
The server has one "worker thread". The worker thread runs a queue of commands. When the queue is empty, the worker thread is paused (but does not exit, because it should preserve certain state in memory). To keep exactly one copy of the state in memory, I need exactly one worker thread (not several and not zero) running at all times.
Requests are added to the end of this queue when a client connects to a Unix socket and sends a command.
After a command is issued, it is added to the worker thread's command queue. Once it is queued, the server replies with something like "OK". There should not be a long pause between the server receiving a command and its "OK" reply; however, running the commands in the queue may take some time.
The main "work" of the worker thread is split into small (taking relatively little time) chunks. Between chunks, the worker thread inspects ("eats" and empties) the queue and continues to work based on the data extracted from the queue.
How to implement this server/daemon in Python?
This is sample code with internet sockets, easily replaced with Unix domain sockets. It takes whatever you write to the socket, passes it as a "command" to the worker, and responds "OK" as soon as it has queued the command. The single worker simulates a lengthy task by sleeping in one-second chunks; between chunks it peeks at the queue and prints any command that has arrived in the meantime. You can queue as many tasks as you want and receive "OK" immediately for each one.
import queue
import socket
import threading
from time import sleep

class worker(threading.Thread):
    def __init__(self, q):
        super(worker, self).__init__()
        self.qu = q

    def run(self):
        while True:
            new_task = self.qu.get(True)  # block until a task arrives
            print(new_task)
            i = 0
            while i < 10:
                print("working ...")
                sleep(1)
                i += 1
                try:
                    # peek at the queue between chunks without blocking
                    another_task = self.qu.get(False)
                    print(another_task)
                except queue.Empty:
                    pass

task_queue = queue.Queue()
w = worker(task_queue)
w.daemon = True
w.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 4200))
sock.listen(1)

try:
    while True:
        conn, addr = sock.accept()
        data = conn.recv(32)
        task_queue.put(data)
        conn.sendall(b"OK")
        conn.close()
except:
    sock.close()
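Since the question asks for a Unix socket, here is a hedged sketch of the only part that would change on the server side (the socket path is just an example):
import os
import socket

SOCK_PATH = "/tmp/worker.sock"  # example path; pick whatever suits your daemon
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)        # remove a stale socket file from a previous run

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(SOCK_PATH)            # bind to a filesystem path instead of (host, port)
sock.listen(1)
# accept()/recv()/sendall() then work exactly as in the TCP version above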
I'm playing around with sockets in python, just for the purpose of learning about them. However I am really annoyed with the following problem:
import socket
soc = socket.socket(socket.AF_INET)
soc.bind(('localhost',8000))
soc.listen(0)
client = soc.accept()
While the socket is waiting for a connection, pressing ctrl-c does not quit the application.
How can I quit the application?
A similar issue was addressed in these two questions, but there the accept method was called from a separate thread and the problem was how to make ctrl-c kill that thread. Here the accept method is called from the main thread.
Edit: I am running python 3.3.0 on Win7 64 bit.
You should use CTRL + Break. That should kill it.
I couldn't find a way to kill the application using ctrl-c or any other way except for killing it through the task manager, so I wrote a workaround:
import socket,os
from threading import Thread
class socketListener(Thread):
    def run(self):
        soc = socket.socket(socket.AF_INET)
        soc.bind(('localhost', 8000))
        soc.listen(0)
        client = soc.accept()
pid = os.getpid()
sl = socketListener()
sl.start()
input('Socket is listening, press any key to abort...')
os.kill(pid,9)
This runs the socket listener in a separate thread while the main thread waits for input. Once the user presses Enter, the entire application is killed.
"serversocket" module provides the standard solution. I tested Control-C on Windows, it worked.
This is the link, serversocket example
The Control-C handling is even mentioned in the comment of the code
# Activate the server; this will keep running until you
# interrupt the program with Ctrl-C
Here is the complete code from the above link:
import socketserver
class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server:
        # Activate the server; this will keep running until you
        # interrupt the program with Ctrl-C
        server.serve_forever()
If we wanted to re-invent the wheel, we would do a select() or poll() on the listener socket, with a timeout of 0.5 seconds.
To save time for other people searching for this topic: if your laptop keyboard does not have a Break key, please try
Ctrl + Fn + F6
or
Ctrl + F6
After running into the same problem myself, I found a little workaround. It might not be the cleanest way, but at least it works for me:
import socket
from select import select
#create an INET, STREAMing socket
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#bind the socket to localhost
serversocket.bind(('localhost', 8000))
while True:
    serversocket.listen(5)
    ready, _, _ = select([serversocket], [], [], 1)  # timeout set to 1 second
    if ready:
        (clientsocket, address) = serversocket.accept()
        # do something with the client
    else:
        # do nothing, just loop again
        pass
By using select you wait for a change on the socket file descriptor, up to the end of the timeout. As I said, this might not be the cleanest way, but Ctrl-C will be caught as soon as the timeout expires.
Portability alert: on Unix, select works with both sockets and files; on Windows, it works with sockets only.