Python async socket programming

Now I have two threads: thread 1 is the main thread and thread 2 is a task thread. I need thread 2 to handle all the networking, so I put all the sockets in thread 2 and set them to non-blocking. Thread 1 pushes requests to thread 2 to do the work.
At first I wrote something like this:
request_queue = Queue.Queue()
tasks = []
sockets = []

**thread 1:**
while True:
    get_user_input()
    #...
    request_queue.put(request_task)

**thread 2:**
while True:
    if have_requests(request_queue):
        t = create_task()
        tasks.append(t)
        sockets.append(t.socket())
    select(sockets, timeout=0)  # non-blocking select
    update_tasks()
    #...
Obviously, when there are no requests and no tasks, thread 2 wastes CPU. I don't want to use sleep(), because while thread 2 is sleeping it can't handle requests in time. Then I thought maybe I should change the request_queue to a localhost socket, like this:
request_queue = sock.sock()
request_queue.bind(local_host, some_port)
request_queue.listen()

**thread 1**
while True:
    get_user_input()
    request_queue.send(new_request)

**thread 2**
while True:
    select(sockets)  # blocking select
    if request_queue is active:
        t = request_queue.recv()
        t = create_task(t)
        tasks.append(t)
        sockets.append(t.socket())
    #check other sockets
    #update tasks...
But this looks a little tricky, and I don't know whether it is a good approach. All I want is for thread 2 to handle requests promptly, not waste CPU time, and process socket events at the same time. Can anyone help?
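For what it's worth, the localhost-socket idea above is a well-known pattern (the "self-pipe trick"), and `socket.socketpair()` gives you the same wake-up channel without binding a real port. A minimal sketch of that idea; the function names here are illustrative, not from the question:

```python
import select
import socket

# A socketpair gives two connected sockets: thread 1 writes one byte to
# wake thread 2, which is blocked in select() on the other end.
wake_w, wake_r = socket.socketpair()

def notify():
    # Called from thread 1 after putting a request on the queue.
    wake_w.send(b"x")

def wait_for_events(sockets, timeout=None):
    # Called from thread 2: blocks until a task socket or the wake
    # socket is readable, so no CPU is wasted while idle.
    readable, _, _ = select.select(sockets + [wake_r], [], [], timeout)
    if wake_r in readable:
        wake_r.recv(4096)  # drain the wake-up bytes
        readable.remove(wake_r)
    return readable
```

Thread 2 then checks the request queue whenever `wait_for_events` returns, exactly as in the second version above, but without a listening socket.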

For async networking, look at Tornado, Twisted or Gevent. This article may also be useful to you.
Example with Gevent:
from gevent import monkey
monkey.patch_all()  # make the stdlib socket cooperative

import socket

import gevent

def handle_socket(sock):
    sock.sendall(b"payload")
    sock.close()

server = socket.socket()
server.bind(('0.0.0.0', 9999))
server.listen(500)  # max queued connections
while True:
    try:
        new_sock, address = server.accept()
    except KeyboardInterrupt:
        break
    # handle every new connection with a new handler
    gevent.spawn(handle_socket, new_sock)
And for background job execution, Celery is the most appropriate tool.

Related

Python Socket Programming - Simulate a radio stream with multiple clients using threads

I've been trying to write a Python program that simulates a radio web stream, but I'm not quite sure how to do it properly. I would like the program to continuously "play" the music even if no clients are connected, so it simulates a "live" radio where you connect and listen to whatever is playing.
What I have now is a server/client relation with basic TCP socket programming: the server side has a producer thread that is supposed to keep reading the music, and on-demand consumer threads that send the audio frames to the client, which plays them with PyAudio. The problem is probably in the way the data is shared between threads.
First I tried a single Queue, but since a client removes the data it reads from the queue, having multiple clients connected makes the music skip frames.
Then I tried to create a fixed number (10) of Queue objects, one per client: the producer thread feeds every queue, and each client creates a consumer thread of its own that reads only from the queue "assigned" to it via a control variable. The problem here is: if any queue is not being consumed (if I have only one client connected, for example), the Queue.put() method will block because those queues are full. How do I keep all queues "running" and synchronized even when they are not being used?
This is where I am now, and any advice is appreciated. I am not an experienced programmer yet, so please be patient with me. I believe Queue is not the recommended IPC method in this case, but if there is a way to use it, let me know.
Below is the code I have for now:
server.py
#TCP config omitted

#Producer Thread
def readTheMusics(queue):
    #Control variable to keep looping through 2 music files
    i = 1
    while i < 3:
        fname = "music" + str(i) + ".wav"
        wf = wave.open(fname, 'rb')
        data = wf.readframes(CHUNK)
        while data:
            for k in range(10):
                queue[k].put(data)
            data = wf.readframes(CHUNK)
        wf.close()
        i += 1
        if i == 3:
            i = 1

#Consumer Thread
def connection(connectionSocket, addr, queue, index):
    while True:
        data = queue[index-1].get(True)
        connectionSocket.send(data)
    connectionSocket.close()

def main():
    i = 1
    #Queue(1) was used to prevent an infinite queue and therefore a memory leak
    queueList = [Queue(1) for j in range(10)]
    th2 = threading.Thread(target=readTheMusics, args=(queueList, ))
    th2.start()
    while True:
        connectionSocket, addr = serverSocket.accept()
        print("connected - id {}".format(i))
        th = threading.Thread(target=connection, args=(connectionSocket, addr, queueList, i))
        th.start()
        i = i + 1

if __name__ == '__main__':
    main()
Tim Roberts' comments were enough to make it work.
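One way around the blocking Queue.put() described above is for the producer to feed every per-client queue with put_nowait() and simply drop the frame for any queue that is full, so idle queues never stall the broadcast. A sketch of that idea, not necessarily the exact fix that was applied:

```python
import queue

def broadcast(frame, client_queues):
    # Feed every connected client's queue, but never block: a full
    # queue (a slow or absent client) just drops this frame.
    for q in client_queues:
        try:
            q.put_nowait(frame)
        except queue.Full:
            pass  # that client falls behind; the "radio" keeps playing
```

With this, the producer thread keeps "playing" at its own pace regardless of how many of the ten queues actually have a consumer.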

Multithreaded TCP socket

I'm trying to create a threaded TCP socket server that can handle multiple socket requests at a time.
To test it, I launch several threads on the client side to see if my server can handle them. The first socket is printed successfully, but I get a [Errno 32] Broken pipe for the others.
I don't know how to avoid it.
import threading
import socketserver

import graphitesend

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data != "":
            print(data)

class ThreadedTCPServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, host, port):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), ThreadedTCPRequestHandler)

    def stop(self):
        self.server_close()
        self.shutdown()

    def start(self):
        threading.Thread(target=self._on_started).start()

    def _on_started(self):
        self.serve_forever()

def client(g):
    g.send("test", 1)

if __name__ == "__main__":
    HOST, PORT = "localhost", 2003
    server = ThreadedTCPServer(HOST, PORT)
    server.start()
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    server.stop()
It's a little bit difficult to determine what exactly you're expecting to happen, but I think the proximate cause is that you aren't giving your clients time to run before killing the server.
When you construct a Thread object and call its start method, you're creating a thread, and getting it ready to run. It will then be placed on the "runnable" task queue on your system, but it will be competing with your main thread and all your other threads (and indeed all other tasks on the same machine) for CPU time.
Your multiple threads (main plus others) are also likely being serialized by the python interpreter's GIL (Global Interpreter Lock -- assuming you're using the "standard" CPython) which means they may not have even gotten "out of the gate" yet.
But then you're shutting down the server with server_close() before they've had a chance to send anything. That's consistent with the "Broken Pipe" error: your remaining clients are attempting to write to a socket that has been closed by the "remote" end.
You should collect the thread objects as you create them and put them in a list (so that you can reference them later). When you're finished creating and starting all of them, then go back through the list and call the .join method on each thread object. This will ensure that the thread has had a chance to finish. Only then should you shut down the server. Something like this:
threads = []
for n in range(7):
    th = threading.Thread(target=client, args=(g,))
    th.start()
    threads.append(th)

# All threads created. Wait for them to finish.
for th in threads:
    th.join()

server.stop()
One other thing to note is that all of your clients are sharing the same single connection to send to the server, so that your server will never create more than one thread: as far as it's concerned, there is only a single client. You should probably move the graphitesend.init into the client function if you actually want separate connections for each client.
(Disclaimer: I know nothing about graphitesend except what I could glean in a 15 second glance at the first result in google; I'm assuming it's basically just a wrapper around a TCP connection.)

Implementing a single thread server/daemon (Python)

I am developing a server (daemon).
The server has one "worker thread". The worker thread runs a queue of commands. When the queue is empty, the worker thread is paused (but does not exit, because it should preserve certain state in memory). To keep exactly one copy of that state in memory, I need exactly one worker thread (not several and not zero) running at all times.
Requests are added to the end of this queue when a client connects to a Unix socket and sends a command.
After a command is issued, it is added to the worker thread's command queue, and the server replies something like "OK". There should not be a long pause between the server receiving a command and its "OK" reply. However, running the commands in the queue may take some time.
The main "work" of the worker thread is split into small (taking relatively little time) chunks. Between chunks, the worker thread inspects ("eats" and empties) the queue and continues to work based on the data extracted from the queue.
How to implement this server/daemon in Python?
This is sample code with internet sockets, easily replaced with Unix domain sockets. It takes whatever you write to the socket, passes it as a "command" to the worker, and responds OK as soon as it has queued the command. The single worker simulates a lengthy task as a loop of one-second chunks, and between chunks it polls the queue for further commands. You can queue as many tasks as you want and receive OK immediately.
import Queue, threading, socket
from time import sleep

class worker(threading.Thread):
    def __init__(self, q):
        super(worker, self).__init__()
        self.qu = q

    def run(self):
        while True:
            new_task = self.qu.get(True)
            print new_task
            i = 0
            while i < 10:
                print "working ..."
                sleep(1)
                i += 1
                try:
                    another_task = self.qu.get(False)
                    print another_task
                except Queue.Empty:
                    pass

task_queue = Queue.Queue()
w = worker(task_queue)
w.daemon = True
w.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 4200))
sock.listen(1)
try:
    while True:
        conn, addr = sock.accept()
        data = conn.recv(32)
        task_queue.put(data)
        conn.sendall("OK")
        conn.close()
except:
    sock.close()

How to implement a python thread pool to test for network connectivity?

I am trying to implement a Python (2.6.x/2.7.x) thread pool that checks for network connectivity (ping or whatever); the entire pool of threads must be killed/terminated when a check succeeds.
So I am thinking of creating a pool of, let's say, 10 worker threads. If any one of them is successful in pinging, the main thread should terminate all the rest.
How do I implement this?
This is not compilable code; it is just to give you an idea of how to make threads communicate.
Inter-process or inter-thread communication happens through queues, pipes, and other mechanisms; here I'm using queues.
It works like this: I send IP addresses on in_queue and add responses to out_queue; my main thread monitors out_queue and, when it gets the desired result, it marks all the threads to terminate.
Below is the pinger thread definition..
import threading
from Queue import Queue, Empty

# A thread that pings an ip.
class Pinger(threading.Thread):
    def __init__(self, kwargs=None):
        threading.Thread.__init__(self)
        self.kwargs = kwargs
        self.stop_pinging = False

    def run(self):
        ip_queue = self.kwargs.get('in_queue')
        out_queue = self.kwargs.get('out_queue')
        while not self.stop_pinging:
            try:
                data = ip_queue.get(timeout=1)
                # This is pseudo code; you have to take care of
                # your own ping.
                ping_status = ping(data)
                if ping_status:
                    out_queue.put('success')
                    # you can even break here if you don't want to
                    # continue after one success
                else:
                    out_queue.put('failure')
                if ip_queue.empty():
                    break
            except Empty:
                pass
Here is the main thread block:
# Create the shared queues and launch the thread pool
in_queue = Queue()
out_queue = Queue()
ip_list = ['ip1', 'ip2', '....']

# This is to add all the ips to the queue, or you can
# customize it to add them through some producer.
for ip in ip_list:
    in_queue.put(ip)

pinger_pool = []
for i in xrange(1, 10):
    pinger_worker = Pinger(kwargs={'in_queue': in_queue, 'out_queue': out_queue}, name=str(i))
    pinger_pool.append(pinger_worker)
    pinger_worker.start()

while 1:
    if out_queue.get() == 'success':
        for pinger in pinger_pool:
            pinger.stop_pinging = True
        break
Note: this is pseudo code; adapt it to make it work for your case.
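The per-thread stop_pinging flag can also be replaced by one shared threading.Event, so a single set() stops the whole pool. A runnable Python 3 sketch of the same idea, with the real ping swapped for a stubbed `check` callable (an assumption of this sketch):

```python
import threading
from queue import Empty, Queue

stop = threading.Event()   # one shared flag for the whole pool
out_queue = Queue()

def pinger(in_queue, check):
    # `check` stands in for a real ping call.
    while not stop.is_set():
        try:
            ip = in_queue.get(timeout=0.1)
        except Empty:
            continue
        if check(ip):
            out_queue.put(("success", ip))

in_queue = Queue()
for i in range(20):
    in_queue.put("10.0.0.%d" % i)

workers = [
    threading.Thread(target=pinger, args=(in_queue, lambda ip: ip == "10.0.0.7"))
    for _ in range(5)
]
for w in workers:
    w.start()

status, first_ip = out_queue.get()  # blocks until some worker reports success
stop.set()                          # a single call stops every worker
for w in workers:
    w.join()
```

The short get() timeout is what lets idle workers notice the event quickly without busy-waiting.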

How to put tcp server on another thread in python

I am trying to write a daemon in Python, but I have no idea how to use a thread to start a parallel TCP server inside this daemon. I am not even sure what type of server I should use: asyncore, SocketServer, or plain socket?
this is part of my code:
import os
import sys

def demonized():
    child_pid = os.fork()
    if child_pid == 0:
        child_pid = os.fork()
        if child_pid == 0:  # fork twice to daemonize
            file = open('###', "r")  # open file
            event = file.read()
            while event:
                #TODO check for changes, put changes in list variable
                event = file.read()
            file.close()
        else:
            sys.exit(0)
    else:
        sys.exit(0)

if __name__ == "__main__":
    demonized()
So in the loop I have a list variable with some data appended each cycle, and I want to start a thread with a TCP server that waits for connections in the loop; when a client connects, the server sends it this data (and clears the variable). I do not need to handle multiple clients; there will be only one client at a time. What is the optimal way to implement this?
Thank you.
In case you want to avoid repeating boilerplate, Python will soon have a standard module that does the fork() pair and standard-I/O manipulations (which you have not added to your program yet?) that make it a daemon. You can download and use this module right now, from:
http://pypi.python.org/pypi/python-daemon
Running a TCP server in a separate thread is often as simple as:
import threading

def my_tcp_server():
    sock = socket.socket(...)
    sock.bind(...)
    sock.listen()
    while True:
        conn, address = sock.accept()
        ...
        ... talk on the connection ...
        ...
        conn.close()

def main():
    ...
    threading.Thread(target=my_tcp_server).start()
    ...
def main():
...
threading.Thread(target=my_tcp_server).start()
...
I strongly recommend against trying to get your file-reader thread and your socket-answering thread talking with a list and lock of your own devising; such schemes are hard to get working and hard to keep working. Instead, use the standard library's Queue.Queue() class which does all of the locking and appending correctly for you.
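A minimal sketch of that Queue.Queue() handoff, with names and payloads invented here for illustration: the file-reader side puts events, the server side gets them, and no lock of your own devising is needed.

```python
import queue
import threading

events = queue.Queue()   # the reader thread puts, the server thread gets

def reader(changes):
    # Stands in for the file-watching loop in demonized(): each change
    # goes onto the queue instead of into a hand-locked shared list.
    for change in changes:
        events.put(change)
    events.put(None)     # sentinel: nothing more to read

def serve_once():
    # Stands in for the server side: drain whatever has been queued,
    # which is exactly the "send it this data (and clear it)" step.
    collected = []
    while True:
        item = events.get()
        if item is None:
            break
        collected.append(item)
    return collected

t = threading.Thread(target=reader, args=(["event1", "event2"],))
t.start()
result = serve_once()
t.join()
```

Queue.get() blocks until something is available, so the server thread also parks for free while the reader is idle.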
Do you want to append items to the list in the while event: ... loop and serve this list simultaneously? If so, then you have two writers and you must somehow protect your list.
In the sample below, SocketServer.TCPServer and threading.Lock are used:
import threading
import SocketServer
import time

class DataHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        self.server.list_block.acquire()
        self.wfile.write(', '.join(self.server.data))
        self.wfile.flush()
        # Clear in place so server.data stays the same list object
        # that the main loop appends to.
        del self.server.data[:]
        self.server.list_block.release()

if __name__ == '__main__':
    data = []
    list_block = threading.Lock()
    server = SocketServer.TCPServer(('localhost', 0), DataHandler)
    server.list_block = list_block
    server.data = data
    t = threading.Thread(target=server.serve_forever)
    t.start()
    while True:
        list_block.acquire()
        data.append(1)
        list_block.release()
        time.sleep(1)
