How to terminate a running Thread in Python?

I am using a socket in this code to connect to another machine. I want to terminate a thread when I get a message from the other machine, but how do I terminate a Thread in Python?
I have looked at many SO questions and found that there is no method in Python to close a thread. Can anyone tell me an alternative way to close the thread?
code:
from threading import Thread
import time
import socket

def background(arg):
    global thread
    thread = Thread(target=arg)
    thread.start()

def display():
    for i in range(0, 20):
        print(i)
        time.sleep(5)

background(display)
s = socket.socket()
s.bind((ip, 6500))
s.listen(5)
print("listening")
val, addr = s.accept()
cmd = val.recv(1024)
if cmd == "Terminate Process":
    print("Connected")
    thread.close()
    print("Process Closed")
Error:
AttributeError: 'Thread' object has no attribute 'close'

Short answer:
thread.join()
The rule of thumb is: don't kill threads (note that in some environments this may not even be possible, e.g. standard C++11 threads). Let the thread fetch the information and terminate itself. Controlling threads from other threads leads to hard-to-maintain, hard-to-debug code.
E.g.
SHOULD_TERMINATE = False

def display():
    for i in range(0, 20):
        print(i)
        time.sleep(5)
        if SHOULD_TERMINATE:
            return

thread = Thread(target=display)
thread.start()
# some other code
if cmd == "Terminate Process":
    SHOULD_TERMINATE = True
    thread.join()
This is of course heavily simplified. Your code can be further refined with event objects (instead of .sleep) or thread pools.
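For instance, here is a minimal sketch of the event-object refinement (the names are illustrative; the event replaces both the module-level flag and the fixed five-second sleep):
import time
from threading import Thread, Event

stop_event = Event()

def display():
    for i in range(20):
        print(i)
        # wait() returns True as soon as the event is set, so the thread
        # reacts immediately instead of sleeping out the full 5 seconds.
        if stop_event.wait(timeout=5):
            return

thread = Thread(target=display)
thread.start()
# ... later, when the "Terminate Process" message arrives ...
stop_event.set()
thread.join()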

Related

Simple multithreaded Server Client Program

I have multithreaded server and client programs for a simple game. Whenever a client force-exits the game, I try to catch the exception with a try/except BrokenPipeError block and inform the other players.
I also want to end the exited client's thread; however, I accept connections like this:
while True:
    client = serverSocket.accept()
    t = ServerThread(client)
    t.start()
I tried to use a threading event with a stop() function; however, I believe I cannot use a .join() statement to exit the thread because of the way I accept connections. How should I end the force-exited client's thread? I know the multiprocessing library has a terminate function, but I am required to use the threading library. I tried os._exit(1), but I believe that command kills the entire process. What is the standard exit procedure for programs like this?
First of all, join() does nothing but wait for the thread to stop.
A thread stops when it reaches the end of its target routine. For example:
class ServerThread(threading.Thread):
    def __init__(self, client, name):
        super().__init__()
        self.client = client
        self.name = name

    def inform(self, msg):
        print("{}: got message {}".format(self.name, msg))
        self.client[0].send(msg)

    def run(self):
        while True:
            try:
                self.client[0].recv(1024)
            except BrokenPipeError:  # client exits
                # do stuff
                break  # -> ends loop
        return  # -> thread exits, join returns
If you want to inform the other clients that someone has left, I would add another monitoring thread:
class Monitoring(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)  # a daemon thread stops when the main thread does
        self.clients = []

    def add_client(self, client):
        self.clients.append(client)

    def inform_client_leaves(self, left_client):
        for client in self.clients:
            if client.is_alive():
                client.inform("Client '{}' leaves".format(left_client.name))

    def run(self):
        while True:
            for client in list(self.clients):
                if not client.is_alive():  # client exited
                    self.clients.remove(client)
                    self.inform_client_leaves(client)
            time.sleep(1)
So the initial code would look like:
mon = Monitoring()
mon.start()
while True:
    client = serverSocket.accept()
    t = ServerThread(client, "Client1")
    t.start()
    mon.add_client(t)

Python multiprocessing and networking on Windows

I'm trying to implement a tcp 'echo server'.
Simple stuff:
Client sends a message to the server.
Server receives the message.
Server converts message to uppercase
Server sends modified message to client
Client prints the response.
It worked well, so I decided to parallelize the server; make it so that it could handle multiple clients at a time.
Since most Python interpreters have a GIL, multithreading won't cut it.
I had to use multiprocessing... And boy, this is where things went downhill.
I'm using Windows 10 x64 and the WinPython suite with Python 3.5.2 x64.
My idea is to create a socket, initialize it (bind and listen), create subprocesses and pass the socket to the children.
But for the love of me... I can't make this work; my subprocesses die almost instantly.
Initially I had some issues 'pickling' the socket...
So I googled a bit and thought this was the issue.
So I tried passing my socket through a multiprocessing queue, through a pipe, and my last attempt was 'forkpickling' it and passing it as a bytes object during process creation.
Nothing works.
Can someone please shed some light here?
Tell me what's wrong?
Maybe the whole idea (sharing sockets) is bad... And if so, PLEASE tell me how I can achieve my initial objective: enabling my server to ACTUALLY handle multiple clients at once (on Windows) (don't tell me about threading, we all know python's threading won't cut it ¬¬)
It is also worth noting that no files are created by the debug function.
No process lived long enough to run it, I believe.
The typical output of my server code is (only difference between runs is the process numbers):
Server is running...
Degree of parallelism: 4
Socket created.
Socket bount to: ('', 0)
Process 3604 is alive: True
Process 5188 is alive: True
Process 6800 is alive: True
Process 2844 is alive: True
Press ctrl+c to kill all processes.
Process 3604 is alive: False
Process 3604 exit code: 1
Process 5188 is alive: False
Process 5188 exit code: 1
Process 6800 is alive: False
Process 6800 exit code: 1
Process 2844 is alive: False
Process 2844 exit code: 1
The children died...
Why god?
WHYYyyyyy!!?!?!?
The server code:
# Imports
import socket
import packet
import sys
import os
from time import sleep
import multiprocessing as mp
import pickle
import io

# Constants
DEGREE_OF_PARALLELISM = 4
DEFAULT_HOST = ""
DEFAULT_PORT = 0

def _parse_cmd_line_args():
    arguments = sys.argv
    if len(arguments) == 1:
        return DEFAULT_HOST, DEFAULT_PORT
    else:
        raise NotImplemented()

def debug(data):
    pid = os.getpid()
    with open('C:\\Users\\Trauer\\Desktop\\debug\\' + str(pid) + '.txt', mode='a',
              encoding='utf8') as file:
        file.write(str(data) + '\n')

def handle_connection(client):
    client_data = client.recv(packet.MAX_PACKET_SIZE_BYTES)
    debug('received data from client: ' + str(len(client_data)))
    response = client_data.upper()
    client.send(response)
    debug('sent data from client: ' + str(response))

def listen(picklez):
    debug('started listen function')
    pid = os.getpid()
    server_socket = pickle.loads(picklez)
    debug('acquired socket')
    while True:
        debug('Sub process {0} is waiting for connection...'.format(str(pid)))
        client, address = server_socket.accept()
        debug('Sub process {0} accepted connection {1}'.format(str(pid), str(client)))
        handle_connection(client)
        client.close()
        debug('Sub process {0} finished handling connection {1}'.format(str(pid), str(client)))

if __name__ == "__main__":
    # Since most python interpreters have a GIL, multithreading won't cut
    # it... Oughta bust out some process, yo!
    host_port = _parse_cmd_line_args()
    print('Server is running...')
    print('Degree of parallelism: ' + str(DEGREE_OF_PARALLELISM))
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('Socket created.')
    server_socket.bind(host_port)
    server_socket.listen(DEGREE_OF_PARALLELISM)
    print('Socket bount to: ' + str(host_port))
    buffer = io.BytesIO()
    mp.reduction.ForkingPickler(buffer).dump(server_socket)
    picklez = buffer.getvalue()
    children = []
    for i in range(DEGREE_OF_PARALLELISM):
        child_process = mp.Process(target=listen, args=(picklez,))
        child_process.daemon = True
        child_process.start()
        children.append(child_process)
        while not child_process.pid:
            sleep(.25)
        print('Process {0} is alive: {1}'.format(str(child_process.pid),
                                                 str(child_process.is_alive())))
    print()
    kids_are_alive = True
    while kids_are_alive:
        print('Press ctrl+c to kill all processes.\n')
        sleep(1)
        exit_codes = []
        for child_process in children:
            print('Process {0} is alive: {1}'.format(str(child_process.pid),
                                                     str(child_process.is_alive())))
            print('Process {0} exit code: {1}'.format(str(child_process.pid),
                                                      str(child_process.exitcode)))
            exit_codes.append(child_process.exitcode)
        if all(exit_codes):
            # Why do they die so young? :(
            print('The children died...')
            print('Why god?')
            print('WHYYyyyyy!!?!?!?')
            kids_are_alive = False
Edit: fixed the signature of "listen". My processes still die instantly.
Edit 2: User cmidi pointed out that this code does work on Linux, so my question is: how can I make this work on Windows?
You can directly pass a socket to a child process. multiprocessing registers a reduction for this, for which the Windows implementation uses the following DupSocket class from multiprocessing.resource_sharer:
class DupSocket(object):
    '''Picklable wrapper for a socket.'''
    def __init__(self, sock):
        new_sock = sock.dup()
        def send(conn, pid):
            share = new_sock.share(pid)
            conn.send_bytes(share)
        self._id = _resource_sharer.register(send, new_sock.close)

    def detach(self):
        '''Get the socket. This should only be called once.'''
        with _resource_sharer.get_connection(self._id) as conn:
            share = conn.recv_bytes()
            return socket.fromshare(share)
This calls the Windows socket share method, which returns the protocol info buffer from calling WSADuplicateSocket. It registers with the resource sharer to send this buffer over a connection to the child process. The child in turn calls detach, which receives the protocol info buffer and reconstructs the socket via socket.fromshare.
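In practice that means the manual ForkingPickler step in the question isn't needed: the listening socket can be passed directly in args and multiprocessing routes it through the machinery above. A minimal sketch of that approach (the port number and worker body are illustrative, not from the question):
import socket
import multiprocessing as mp

def worker(server_socket):
    # On Windows the socket arrives already reconstructed via socket.fromshare.
    while True:
        client, address = server_socket.accept()
        client.sendall(client.recv(1024).upper())
        client.close()

if __name__ == '__main__':
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind(('', 6000))
    server_socket.listen(4)
    children = [mp.Process(target=worker, args=(server_socket,), daemon=True)
                for _ in range(4)]
    for child in children:
        child.start()
    for child in children:
        child.join()  # keep the parent alive; Ctrl-C stops everything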
It's not directly related to your problem, but I recommend that you redesign the server to instead call accept in the main process, which is the way this is normally done (e.g. in Python's socketserver.ForkingTCPServer class). Pass the resulting (conn, address) tuple to the first available worker over a multiprocessing.Queue, which is shared by all of the workers in the process pool. Or consider using a multiprocessing.Pool with apply_async.
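A rough sketch of that accept-in-the-parent design, assuming the same uppercase-echo behaviour as the question (the worker body, port number, and queue hand-off details are illustrative, not from the answer):
import socket
import multiprocessing as mp

def worker(jobs):
    while True:
        conn, address = jobs.get()   # blocks until the parent hands over a client
        data = conn.recv(1024)
        conn.sendall(data.upper())
        conn.close()

if __name__ == '__main__':
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind(('', 6000))
    server_socket.listen(4)
    jobs = mp.Queue()
    for _ in range(4):
        mp.Process(target=worker, args=(jobs,), daemon=True).start()
    while True:
        conn, address = server_socket.accept()  # accept only in the parent
        jobs.put((conn, address))               # connected sockets pickle the same way
        # NOTE: in a real server the parent should also close its copy of conn,
        # but only after the worker has actually received it.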
def listen(), the target of your child processes, originally did not take any arguments, but you were providing the serialized socket as an argument (args=(picklez,)) to the child process. This causes an exception in the child process, which exits immediately:
TypeError: listen() takes no arguments (1 given)
def listen(picklez) should solve the problem; this provides one argument for the target of your child processes.

Killing multiple httpservers running on different ports

I start multiple servers using the following:
from threading import Thread
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write("Hello World!")

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    pass

def serve_on_port(port):
    server = ThreadingHTTPServer(("localhost", port), Handler)
    server.serve_forever()

Thread(target=serve_on_port, args=[1111]).start()
Thread(target=serve_on_port, args=[2222]).start()
I want to stop these threads on KeyboardInterrupt.
How can I do that?
You can have all such threads killed when your program exits by defining them as daemon threads. To do this, set their daemon property to True before starting them. According to the documentation,
This must be set before start() is called, otherwise RuntimeError is raised. Its initial value is inherited from the creating thread; the main thread is not a daemon thread and therefore all threads created in the main thread default to daemon = False.
The entire Python program exits when no alive non-daemon threads are left.
So, something like this should work:
import time

for port in [1111, 2222]:
    t = Thread(target=serve_on_port, args=[port])
    t.daemon = True
    t.start()

try:
    while True:
        time.sleep(1000000)
except KeyboardInterrupt:
    pass
Note that any threads that are non-daemon and still running will keep your program from exiting. If you have other threads that you also want to be killed on exit, set their daemon properties to True before starting them, too.
To stop one of these servers, you can use its shutdown() method. This means you will need a reference to the server from the code that catches the KeyboardInterrupt. For example:
servers = []
for port in [11111, 22222]:
    servers.append(ThreadingHTTPServer(("localhost", port), Handler))

for server in servers:
    Thread(target=server.serve_forever).start()

try:
    while True:
        time.sleep(1000000)
except KeyboardInterrupt:
    for server in servers:
        server.shutdown()

How to wait for a spawned thread to finish in Python

I want to use threads to do some blocking work. What should I do to:
Spawn a thread safely
Do useful work
Wait until the thread finishes
Continue with the function
Here is my code:
import threading

def my_thread():
    # Wait for the server to respond...
    pass

def main():
    a = threading.Thread(target=my_thread)
    a.start()
    # Do other stuff here
You can use Thread.join. A few lines from the docs:
Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs.
For your example, it would look like this:
def main():
    a = threading.Thread(target=my_thread)
    a.start()
    a.join()

Python - Can't kill main thread with KeyboardInterrupt

I'm making a simple multithreaded port scanner. It scans all ports on a host and returns the open ones. The trouble is interrupting the scan: a scan takes a long time to complete, and sometimes I want to kill the program with C-c in the middle of it. The trouble is that the scan won't stop. The main thread is blocked on queue.join() and oblivious to KeyboardInterrupt until all data from the queue has been processed, which unblocks the main thread and exits the program gracefully. All my threads are daemonized, so when the main thread dies they should die with it.
I tried using the signal lib with no success. Overriding the threading.Thread class and adding a method for graceful termination didn't work either... The main thread just won't receive KeyboardInterrupt while executing queue.join().
import threading, sys, Queue, socket

queue = Queue.Queue()

def scan(host):
    while True:
        port = queue.get()
        if port > 999 and port % 1000 == 0:
            print port
        try:
            #sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            #sock.settimeout(2) #you need timeout or else it will try to connect forever!
            #sock.connect((host, port))
            #----OR----
            sock = socket.create_connection((host, port), timeout=2)
            sock.send('aaa')
            data = sock.recv(100)
            print "Port {} open, message: {}".format(port, data)
            sock.shutdown()
            sock.close()
            queue.task_done()
        except:
            queue.task_done()

def main(host):
    #populate queue
    for i in range(1, 65536):
        queue.put(i)
    #spawn worker threads
    for port in range(100):
        t = threading.Thread(target=scan, args=(host,))
        t.daemon = True
        t.start()

if __name__ == '__main__':
    host = ""
    #does input exist?
    try:
        host = sys.argv[1]
    except:
        print "No argument was recivied!"
        exit(1)
    #is input sane?
    try:
        host = socket.gethostbyname(host)
    except:
        print "Adress does not exist"
        exit(2)
    #execute main program and wait for scan to complete
    main(host)
    print "Post main() call!"
    try:
        queue.join()
    except KeyboardInterrupt:
        print "C-C"
        exit(3)
EDIT:
I have found a solution using the time module.
import time

#execute main program and wait for scan to complete
main(host)
#a little trick: queue.join() makes the main thread immune to KeyboardInterrupt, so use queue.empty() with time.sleep()
#queue.empty() is "unreliable", so it may return True a bit earlier than intended
#once the queue is empty, queue.join() is executed to confirm that all data was processed
#not a true solution: you can't interrupt the main thread near the end of the scan (once queue.empty() returns True)
try:
    while True:
        if queue.empty() == False:
            time.sleep(1)
        else:
            break
except KeyboardInterrupt:
    print "Alas poor port scanner..."
    exit(1)
queue.join()
You made your threads daemons already, but you need to keep your main thread alive while the daemon threads are running; here's how to do that: Cannot kill Python script with Ctrl-C
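One way to apply that idea here, while keeping the workers exactly as they are: hand the blocking queue.join() to a throwaway helper thread and let the main thread sleep in short intervals, so it stays responsive to Ctrl-C. This is a sketch; the waiter helper is an illustrative addition, not part of the linked answer:
import threading, time

# All scan() workers are daemons already, so they die as soon as the main thread exits.
waiter = threading.Thread(target=queue.join)
waiter.daemon = True
waiter.start()

try:
    while waiter.is_alive():
        time.sleep(1)   # a Python-level loop, so KeyboardInterrupt is delivered promptly
    print("Scan finished.")
except KeyboardInterrupt:
    print("Scan interrupted.")
    exit(3)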
When you create the threads, add them to a list of running threads, and when handling Ctrl-C, send a kill signal to each thread on the list. That way you are actively cleaning up rather than relying on it being done for you.
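Python threads cannot literally receive a signal, so in practice the "kill signal" has to be a cooperative flag that the workers poll; here is a rough sketch using a shared threading.Event (the stop_event name and the simplified worker loop are illustrative, and it assumes the queue and host from the question's script):
import threading, time

stop_event = threading.Event()
workers = []

def scan(host):
    # Workers check the event between ports, so they stop soon after it is set.
    while not stop_event.is_set():
        try:
            port = queue.get_nowait()
        except Exception:        # queue drained; nothing left to scan
            break
        # ... probe the port as in the original scan() ...
        queue.task_done()

for _ in range(100):
    t = threading.Thread(target=scan, args=(host,))
    t.daemon = True
    t.start()
    workers.append(t)

try:
    while any(t.is_alive() for t in workers):
        time.sleep(1)            # keeps the main thread responsive to Ctrl-C
except KeyboardInterrupt:
    stop_event.set()             # the "kill signal": workers exit at their next check
    for t in workers:
        t.join(timeout=2)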
