Exposing a python daemon as a service - python

So, there are two very valuable features that I am already able to get from a Python script. The first is the ability to run a Python function as a service from the command line, assuming for simplicity that the script takes its input as command line args. Something along the lines of:
import sys

def foo():
    return "%s is your last argument!" % sys.argv[-1]

print foo()
which I would then access globally by running python file.py somearg. Additionally, I can write up a supervisord script to daemonize a script and keep it running in memory. I now find myself in a position where I need both of these features at once, and I'm not really sure where to start. For clarity, I basically have something along these lines:
if __name__ == "__main__":
    big_file = open(slow_loader)
    foo(big_file)
Ideally, once this is running, the entire big_file would stay in memory, and I could call the foo method that depends on it by running something akin to the original python file.py somearg. I'm not really sure how to progress from here, though.
Any help, even if it's just a link to some documentation, would be very helpful. To head off one suggestion: I realize I could wrap this in a shallow Flask app and drive it through HTTP requests, but for NDA'd reasons I need something that runs through an internal shell command.

Just because I like zmq and gevent, I would probably do something like this:
server.py
import gevent
import gevent.monkey
gevent.monkey.patch_all()

import zmq.green as zmq
import json

context = zmq.Context()
socket = context.socket(zmq.ROUTER)
socket.bind("ipc:///tmp/myapp.ipc")

def do_something(parsed):
    return sum(parsed.get("values"))

def handle(msg):
    data = msg[1]
    parsed = json.loads(data)
    total = do_something(parsed)
    msg[1] = json.dumps({"response": total})
    socket.send_multipart(msg)

def handle_zmq():
    while True:
        msg = socket.recv_multipart()
        gevent.spawn(handle, msg)

if __name__ == "__main__":
    handle_zmq()
And then you would have a client.py for your command line tool, like:
import json
import zmq

request_data = {
    "values": [10, 20, 30, 40],
}

context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect("ipc:///tmp/myapp.ipc")
socket.send(json.dumps(request_data))
print socket.recv()
Obviously this is a contrived example, but you should get the idea. Alternatively, you could use something like XML-RPC or JSON-RPC for this as well.

Related

Using GLib.IOChannel to send data from one python process to another

I am trying to use GLib.IOChannels to send data from a client to a server running a GLib.MainLoop.
The file used for the socket should be located at /tmp/so/sock, and the server should simply run a function whenever it receives data.
This is the code I've written:
import sys

import gi
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    loop = GLib.MainLoop()
    with open(ADRESS, 'r') as sock_file:
        sock = GLib.IOChannel.unix_new(sock_file.fileno())
        GLib.io_add_watch(sock, GLib.IO_IN,
                          lambda *args: print('received:', args))
        loop.run()

def client(argv):
    sock_file = open(ADRESS, 'w')
    sock = GLib.IOChannel.unix_new(sock_file.fileno())
    try:
        print(sock.write_chars(' '.join(argv).encode('utf-8'), -1))
    except GLib.Error:
        raise
    finally:
        sock.shutdown(True)
        # sock_file.close()  # calling close breaks the script?

if __name__ == '__main__':
    if len(sys.argv) > 1:
        client(sys.argv[1:])
    else:
        server()
When called without arguments, it acts as the server; when called with arguments, it sends them to a running server.
When starting the server, I immediately get the following output:
received: (<GLib.IOChannel object at 0x7fbd72558b80 (GIOChannel at 0x55b8397905c0)>, <flags G_IO_IN of type GLib.IOCondition>)
I don't know why that is. Whenever I send something, I get an output like (<enum G_IO_STATUS_NORMAL of type GLib.IOStatus>, bytes_written=4) on the client side, while nothing happens server-side.
What am I missing? I suspect I understood the documentation wrong, as I did not find a concrete example.
I got the inspiration to use the IOChannel instead of normal sockets from this post: How to listen socket, when app is running in gtk.main()?

How to feed information to a Python daemon?

I have a Python daemon running on a Linux system.
I would like to feed it information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice." to a file.
This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something.
What would be the best way to achieve this?
I was thinking about a named pipe or the Queue library but there could be better solutions.
Here is how you can do it with a fifo:
# receiver.py
import os
import sys
import atexit

# Set up the FIFO
thefifo = 'comms.fifo'
os.mkfifo(thefifo)

# Make sure to clean up after ourselves
def cleanup():
    os.remove(thefifo)
atexit.register(cleanup)

# Go into reading loop
while True:
    with open(thefifo, 'r') as fifo:
        for line in fifo:
            print "Hello", line.strip()
You can use it like this from a shell session:
$ python receiver.py &
$ echo "Alice" >> comms.fifo
Hello Alice
$ echo "Bob" >> comms.fifo
Hello Bob
There are several options:
1) If the daemon should accept messages from other systems, make the daemon an RPC server - use XML-RPC/JSON-RPC.
2) If it is all local, you can use either TCP sockets or named pipes.
3) If there will be a huge set of clients connecting concurrently, you can use select.epoll.
Python has a built-in RPC library (using XML for data encoding). The documentation is well written and contains a complete example:
https://docs.python.org/2.7/library/xmlrpclib.html (Python 2.7) or
https://docs.python.org/3.3/library/xmlrpc.server.html#module-xmlrpc.server (Python 3.3).
That may be worth considering.
Everyone has mentioned FIFOs (that's named pipes in Linux terminology) and XML-RPC, but if you are learning these things right now, you should check out TCP/UDP/Unix sockets as well, since they are platform independent (at least, TCP/UDP sockets are). You can check this tutorial for a working example, or the Python documentation if you want to go deeper in this direction. It's also useful because most modern communication platforms (XML-RPC, SOAP, REST) are built on these basics.
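As a rough sketch of what the socket route looks like (Python 3, and a UDP socket for brevity; the port is an arbitrary choice of mine):

# daemon side: block waiting for datagrams
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("localhost", 9999))
while True:
    data, addr = sock.recvfrom(1024)
    print("Hello", data.decode().strip())

# feeder side: fire-and-forget a name at the daemon
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"Bob", ("localhost", 9999))

UDP keeps the example short; for delivery guarantees you would use SOCK_STREAM and accept() connections instead.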
There are a few mechanisms you could use, but everything boils down to using IPC (inter-process communication).
Now, the actual mechanism you use depends on the details of what you are trying to achieve; a good solution, though, would be to use something like zmq.
Check the following example on pub/sub with zmq:
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html
and also this
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/multisocket/zmqpoller.html
for the non-blocking way.
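A minimal sketch of that pub/sub pattern (Python 3; the endpoint is a placeholder of mine, and note that PUB/SUB joins are slow, so a message published immediately after connecting can be dropped):

# daemon side: SUB socket bound, prints whatever arrives
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.bind("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filter
while True:
    print("Hello", sub.recv_string())

# feeder side: PUB socket connects and publishes
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:5556")
time.sleep(0.1)  # crude allowance for the slow-joiner problem
pub.send_string("Bob")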
I'm not good at Python, so I would like to share
Universal inter-process communication
nc, a.k.a. netcat, is a client/server program that allows sending data such as text and files over the network.
Advantages of nc
Very easy to use
IPC even between different programming languages
Built in on most Linux OSes
Example
On the daemon:
nc -l 1234 > output.txt
From another program or shell/terminal/script:
echo HELLO | nc 127.0.0.1 1234
nc can be driven from Python using a system-command function (os.system, for example) and reading the stdout.
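For example, a sketch with Python 3's subprocess module (assuming the nc -l daemon above; in that setup there is no reply to read, since the daemon redirects everything to output.txt):

# pipe a message into nc, like 'echo HELLO | nc' does
import subprocess

subprocess.run(["nc", "127.0.0.1", "1234"], input=b"HELLO\n")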
Why not use signals?
I am not a Python programmer, but presumably you can register a signal handler within your daemon and then signal it from the terminal. Just use SIGUSR1 or SIGHUP or similar.
This is the usual method used to rotate logfiles and the like.
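A minimal sketch of that approach (Python 3; what the handler does is a placeholder):

# daemon side: register a handler, then idle
import os
import signal
import time

def on_usr1(signum, frame):
    print("rotating logfile...")  # whatever the daemon should do

signal.signal(signal.SIGUSR1, on_usr1)
print("PID:", os.getpid())
while True:
    time.sleep(60)  # a signal interrupts the sleep and runs the handler

Then kill -USR1 <pid> from a terminal triggers the handler.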
One solution could be to use the asynchat library, which simplifies calls between a server and a client.
Here is an example you could use (adapted from this site).
In daemon.py, a ChatServer object is created. Each time a connection is made, a ChatHandler object is created, inheriting from asynchat.async_chat. This object collects data and appends it to self.buffer.
When a special string called the terminator is encountered, the data is considered complete and the method found_terminator is called. It is in this method that you write your own code.
In sender.py, you create a ChatClient object, inheriting from asynchat.async_chat, set up the connection in the constructor, define the terminator (in case the server answers!) and call the push method to send your data. You must append the terminator string to your data so the server knows when it can stop reading.
daemon.py:

import asynchat
import asyncore
import socket

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatHandler(asynchat.async_chat):
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock=sock)
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        msg = ''.join(self.buffer)
        # Change here what the daemon is supposed to do when a message is retrieved
        print 'Hello', msg
        self.buffer = []

class ChatServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind((host, port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, addr = pair
            print 'Incoming connection from %s' % repr(addr)
            handler = ChatHandler(sock)

server = ChatServer('localhost', 5050)
print 'Serving on localhost:5050'
asyncore.loop()
sender.py:

import asynchat
import asyncore
import socket
import threading

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatClient(asynchat.async_chat):
    def __init__(self, host, port):
        asynchat.async_chat.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        pass

    def found_terminator(self):
        pass

client = ChatClient('localhost', 5050)

# Data sent from here
client.push("Bob" + TERMINATOR)
client.push("Alice" + TERMINATOR)

Interacting with long-running python script

I have a long-running Python script which collects tweets from Twitter, and I would like to know how it's doing every once in a while.
Currently, I am using the signal library to catch interrupts, at which point I call my print function. Something like this:
import functools
import os
import signal

def print_info(count, *args):
    print "#Tweets:", count

# Print out the process ID so I can interrupt it for info
print 'PID:', os.getpid()

# Start listening for interrupts
signal.signal(signal.SIGUSR1, functools.partial(print_info, tweet_count))
And whenever I want my info, I open up a new terminal and issue my interrupt:
$ kill -USR1 <pid>
Is there a better way to do this? I am aware I could have my script print something at scheduled intervals, but I am more interested in knowing on demand, and potentially in issuing other commands as well.
Sending a signal to the process would interrupt it. Below you will find an approach that uses a dedicated thread to emulate a Python console, exposed as a Unix socket.
import traceback
import importlib
from code import InteractiveConsole
import sys
import socket
import os
import threading
from logging import getLogger

# template used to generate file name
SOCK_FILE_TEMPLATE = '%(dir)s/%(prefix)s-%(pid)d.socket'

log = getLogger(__name__)

class SocketConsole(object):
    '''
    Ported from :eventlet.backdoor.SocketConsole:.
    '''
    def __init__(self, locals, conn, banner=None):  # pylint: disable=W0622
        self.locals = locals
        self.desc = _fileobject(conn)
        self.banner = banner
        self.saved = None

    def switch(self):
        self.saved = sys.stdin, sys.stderr, sys.stdout
        sys.stdin = sys.stdout = sys.stderr = self.desc

    def switch_out(self):
        sys.stdin, sys.stderr, sys.stdout = self.saved

    def finalize(self):
        self.desc = None

    def _run(self):
        try:
            console = InteractiveConsole(self.locals)
            # __builtins__ may either be the __builtin__ module or
            # __builtin__.__dict__; in the latter case typing
            # locals() at the backdoor prompt spews out lots of
            # useless stuff
            import __builtin__
            console.locals["__builtins__"] = __builtin__
            console.interact(banner=self.banner)
        except SystemExit:  # raised by quit()
            sys.exc_clear()
        finally:
            self.switch_out()
            self.finalize()

class _fileobject(socket._fileobject):
    def write(self, data):
        self._sock.sendall(data)

    def isatty(self):
        return True

    def flush(self):
        pass

    def readline(self, *a):
        return socket._fileobject.readline(self, *a).replace("\r\n", "\n")

def make_threaded_backdoor(prefix=None):
    '''
    :return: started daemon thread running :main_loop:
    '''
    socket_file_name = _get_filename(prefix)
    db_thread = threading.Thread(target=main_loop, args=(socket_file_name,))
    db_thread.setDaemon(True)
    db_thread.start()
    return db_thread

def _get_filename(prefix):
    return SOCK_FILE_TEMPLATE % {
        'dir': '/var/run',
        'prefix': prefix,
        'pid': os.getpid(),
    }

def main_loop(socket_filename):
    try:
        log.debug('Binding backdoor socket to %s', socket_filename)
        check_socket(socket_filename)
        sockobj = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sockobj.bind(socket_filename)
        sockobj.listen(5)
    except Exception, e:
        log.exception('Failed to init backdoor socket %s', e)
        return

    while True:
        conn = None
        try:
            conn, _ = sockobj.accept()
            console = SocketConsole(locals=None, conn=conn, banner=None)
            console.switch()
            console._run()
        except IOError:
            log.debug('IOError closing connection')
        finally:
            if conn:
                conn.close()

def check_socket(socket_filename):
    try:
        os.unlink(socket_filename)
    except OSError:
        if os.path.exists(socket_filename):
            raise
Example program:
make_threaded_backdoor(prefix='test')

while True:
    pass
Example session:
mmatczuk@cactus:~$ rlwrap nc -U /var/run/test-3196.socket
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> os.getpid()
3196
>>> quit()
mmatczuk@cactus:~$
This is a pretty robust tool that can be used to:
dump threads,
inspect process memory,
attach a debugger on demand with the pydev debugger (works for both Eclipse and PyCharm),
force GC,
monkeypatch function definition on the fly
and even more.
I personally write information to a file so that I have it afterwards, although this has the disadvantage of perhaps being slightly slower because it has to write to a file every time or every few times it retrieves a tweet.
Anyways, if you write it to a file "output.txt", you can open up bash and either type in tail output.txt for the latest 10 lines printed in the file, or you can type tail -f output.txt, which continuously updates the terminal prompt with the lines that you are writing to the file. If you wish to stop, just Ctrl-C.
Here's an example long-running program that also maintains a status socket. When a client connects to the socket, the script writes some status information to the socket.
#!/usr/bin/python

import os
import sys
import argparse
import random
import threading
import socket
import time
import select

val1 = 0
val2 = 0
lastupdate = 0
quit = False

# This function runs in a separate thread. When a client connects,
# we write out some basic status information, close the client socket,
# and wait for the next connection.
def connection_handler(sock):
    global val1, val2, lastupdate, quit

    while not quit:
        # We use select() with a timeout here so that we are able to catch the
        # quit flag in a timely manner.
        rlist, wlist, xlist = select.select([sock], [], [], 0.5)
        if not rlist:
            continue

        client, clientaddr = sock.accept()
        client.send('%s %s %s\n' % (lastupdate, val1, val2))
        client.close()

# This function starts the listener thread.
def start_listener():
    sock = socket.socket(socket.AF_UNIX)
    try:
        os.unlink('/var/tmp/myprog.socket')
    except OSError:
        pass
    sock.bind('/var/tmp/myprog.socket')
    sock.listen(5)

    t = threading.Thread(
        target=connection_handler,
        args=(sock,))
    t.start()

def main():
    global val1, val2, lastupdate

    start_listener()

    # Here is the part of our script that actually does "work".
    while True:
        print 'updating...'
        lastupdate = time.time()
        val1 = val1 + random.randint(1, 10)
        val2 = val2 + random.randint(100, 200)
        print 'sleeping...'
        time.sleep(5)

if __name__ == '__main__':
    try:
        main()
    except (Exception, KeyboardInterrupt, SystemExit):
        quit = True
        raise
You could write a simple Python client to connect to the socket, or you could use something like socat:
$ socat - unix:/var/tmp/myprog.socket
1403061693.06 6 152
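The simple Python client is only a few lines (a sketch; it assumes the socket path used by the script above):

# read one status report from the daemon's unix socket
import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect('/var/tmp/myprog.socket')
print(sock.recv(1024))
sock.close()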
I wrote a similar application before.
Here is what I did:
When only a few commands were needed, I just used signals as you did, to keep things simple. By "command" I mean something you want your application to do, such as print_info in your post.
But as the application was updated and more commands were needed, I began using a dedicated thread listening on a socket port, or reading a local file, to accept commands. Suppose the application needs to support print_info1, print_info2 and print_info3; a client can then connect to the target port and write print_info1 to make the application execute that command (or just write print_info1 to a local file if you are using the file-reading mechanism).
With the socket mechanism, the disadvantage is that it takes a bit more work to write a client to give commands; the advantage is that you can give orders from anywhere.
With the local-file mechanism, the disadvantage is that the thread has to poll the file in a loop, which uses some resources; the advantage is that giving orders is very simple (just write a string to a file) and you don't need to write a client or a socket listener.
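A minimal sketch of the socket-based dispatch (Python 3; the port and the command table are placeholders of mine):

# daemon side: a thread that maps incoming strings to functions
import socket
import threading

def print_info1():
    print("info 1")

COMMANDS = {"print_info1": print_info1}

def command_listener():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("localhost", 9001))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        name = conn.recv(100).decode().strip()
        COMMANDS.get(name, lambda: None)()  # silently ignore unknown commands
        conn.close()

threading.Thread(target=command_listener, daemon=True).start()

The client side can then be as simple as echo print_info1 | nc localhost 9001.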
rpyc is the perfect tool for this task.
In short, you define an rpyc.Service class which exposes the commands you want, and start an rpyc server thread.
Your client then connects to your process, and calls the methods which are mapped to the commands your service exposes.
It's as simple and clean as that. No need to worry about sockets, signals, object serialization.
It has other cool features as well, for example the protocol being symmetric.
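A minimal sketch of what that looks like (the port and method name are placeholders of mine; rpyc serves any method whose name starts with exposed_):

# daemon side
import rpyc
from rpyc.utils.server import ThreadedServer

class CommandService(rpyc.Service):
    def exposed_print_info(self):
        return "#Tweets: 42"  # stand-in for real daemon state

ThreadedServer(CommandService, port=18861).start()

# client side
import rpyc

conn = rpyc.connect("localhost", 18861)
print(conn.root.print_info())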
Your question relates to inter-process communication. You can achieve this by communicating over a Unix socket or TCP port, by using shared memory, or by using a message queue or cache system such as RabbitMQ and Redis.
This post talks about using mmap to achieve shared memory interprocess communication.
Here's how to get started with Redis and RabbitMQ; both are rather simple to implement.
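With Redis, for instance, pub/sub comes down to a few lines (a sketch using the redis-py package; the channel name is arbitrary):

# daemon side: block on the channel
import redis

p = redis.Redis().pubsub()
p.subscribe("commands")
for msg in p.listen():
    if msg["type"] == "message":
        print("got:", msg["data"].decode())

# feeder side
import redis

redis.Redis().publish("commands", "print_info")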

Tail -f log on server, process data, then serve to client via twisted

Goal: Show data from server in wxPython GUI on client
Newcomer to Twisted. I have a wxPython GUI running on a Windows 7 client, and I have a program running on an Ubuntu server that produces a log. My current attempt is to tail -f the log, pipe the output to a twisted server, then serve any data that meets my regex conditions to the client. I already have a tunnel open, so I don't need to complicate things with SSH. I've gotten the following block of code running, but it only serves the first line of the input. I know I need to keep checking the input for a newline and then write it to the transport, but I'm not sure how to do that without breaking the connection.
I haven't been able to find enough information to patch a full solution together. I have also tried various other methods using sockets and file IO, but Twisted seems to be a good tool for this issue. Am I on the right track? Any recommendations appreciated. Thanks.
#! /usr/bin/python

import optparse, os, sys
from twisted.internet.protocol import ServerFactory, Protocol

def parse_args():
    usage = """usage: %prog [options]
    """
    parser = optparse.OptionParser(usage)
    help = "The port to listen on. Default to a random available port."
    parser.add_option('--port', type='int', help=help)
    help = "The interface to listen on. Default is localhost."
    parser.add_option('--iface', help=help, default='localhost')
    options = parser.parse_args()
    return options  # , log_file

class LogProtocol(Protocol):
    def connectionMade(self):
        for line in self.factory.log:
            self.transport.write(line)

class LogFactory(ServerFactory):
    protocol = LogProtocol

    def __init__(self, log):
        self.log = log

def main():
    log = sys.stdin.readline()
    options, log_file = parse_args()
    factory = LogFactory(log)

    from twisted.internet import reactor
    port = reactor.listenTCP(options.port or 0, factory,
                             interface=options.iface)
    print 'Serving %s on %s.' % (log_file, port.getHost())
    reactor.run()

if __name__ == '__main__':
    main()
To answer the first comment: I have also tried just reading the log from within Python, but the program hangs. Code follows:
#! /usr/bin/python

import optparse, os, sys, time
from twisted.internet.protocol import ServerFactory, Protocol

def parse_args():
    usage = """ usage: %prog [options]"""
    parser = optparse.OptionParser(usage)
    help = "The port to listen on. Default to a random available port"
    parser.add_option('--port', type='int', help=help, dest="port")
    help = "The logfile to tail and write"
    parser.add_option('--file', help=help, default='log/testgen01.log', dest="logfile")
    options = parser.parse_args()
    return options

class LogProtocol(Protocol):
    def connectionMade(self):
        for line in self.follow():
            self.transport.write(line)
        self.transport.loseConnection()

    def follow(self):
        while True:
            line = self.factory.log.readline()
            if not line:
                time.sleep(0.1)
                continue
            yield line

class LogFactory(ServerFactory):
    protocol = LogProtocol

    def __init__(self, log):
        self.log = log

def main():
    options, log_file = parse_args()
    log = open(options.logfile)
    factory = LogFactory(log)

    from twisted.internet import reactor
    port = reactor.listenTCP(options.port or 0, factory)  # , interface=options.iface)
    print 'Serving %s on %s.' % (options.logfile, port.getHost())
    reactor.run()

if __name__ == '__main__':
    main()
You've got a few different easily separated goals you're trying to achieve here. First, I'll talk about watching the log file.
Your generator has a couple problems. One of them is big - it calls time.sleep(0.1). The sleep function blocks for the amount of time passed to it. While it is blocking, the thread which called it can't do anything else (that's roughly what "blocking" means, after all). You're iterating over the generator in the same thread as LogProtocol.connectionMade is called in (since connectionMade calls follow). LogProtocol.connectionMade is called in the same thread as the Twisted reactor is running, because Twisted is roughly single threaded.
So, you're blocking the reactor with the sleep calls. As long as sleep is blocking the reactor, the reactor can't do anything - like send bytes over sockets. Blocking is transitive, by the way. So LogProtocol.connectionMade is an even bigger problem: it iterates indefinitely, sleeping and reading. So it blocks the reactor indefinitely.
You need to read lines from the file without blocking. You can do this by polling - which is effectively the approach you're taking now - but avoiding the sleep call. Use reactor.callLater to schedule future reads from the file:
def follow(fObj):
    line = fObj.readline()
    reactor.callLater(0.1, follow, fObj)

follow(open(filename))
You can also let LoopingCall deal with the part that makes this a loop that runs forever:
def follow(fObj):
    line = fObj.readline()

from twisted.internet.task import LoopingCall
loop = LoopingCall(follow, open(filename))
loop.start(0.1)
Either of these will let you read new lines from the file over time without blocking the reactor. Of course, they both just drop the line on the floor after they read it. This leads me to the second problem...
You need to react to the appearance of a new line in the file. Presumably you want to write it out to your connection. This isn't too hard: "reacting" is pretty easy, it usually just means calling a function or a method. In this case, it's easiest to have the LogProtocol set up the log following and supply a callback object to handle lines when they appear. Consider this slight adjustment to the follow function from above:
def follow(fObj, gotLine):
    line = fObj.readline()
    if line:
        gotLine(line)

def printLine(line):
    print line

loop = LoopingCall(follow, open(filename), printLine)
loop.start(0.1)
Now you can non-blockingly poll a log file for new lines and learn when one has actually shown up. This is simple to integrate with LogProtocol...
class LogProtocol(Protocol):
    def connectionMade(self):
        self.loop = LoopingCall(follow, open(filename), self._sendLogLine)
        self.loop.start(0.1)

    def _sendLogLine(self, line):
        self.transport.write(line)
One last detail is that you probably want to stop watching the file when the connection is lost:
    def connectionLost(self, reason):
        self.loop.stop()
So, this solution avoids blocking by using LoopingCall instead of time.sleep and pushes lines to the protocol when they're found using simple method calls.

Kill Process from Makefile

I'm trying to write a makefile that will replicate a client/server program I've written (which is really just two Python scripts, but that's not the real question of concern)...
test:
	python server.py 7040 &
	python subscriber.py localhost 7040 &
	python client.py localhost 7040;
So I run make test
and I get the ability to enter a message from client.py:
python server.py 7040 &
python subscriber.py localhost 7040 &
python client.py localhost 7040;
Enter a message:
When the client enters an empty message, he closes the connection and quits successfully. Now, how can I automate the subscriber (who is just a "listener") of the chat room to close, which will in turn exit the server process?
I was trying to get the process IDs from these calls using pidof - but wasn't really sure if that was the correct route. I am no makefile expert; maybe I could just write a quick Python script that gets executed from my makefile to do the work for me? Any suggestions would be great.
EDIT:
I've gone down the Python-script route, and have the following:
import server
import client
import subscriber
# import subprocess

server.main(8092)
# child = subprocess.Popen("server.py", shell=False)
subscriber.main('localhost', 8090)
client.main('localhost', 8090)
However, now I'm getting errors that my global variables are not defined (I think it's directly related to adding the main methods to my server, and to subscriber and client, but I'm not getting that far yet). This may deserve a separate question...
Here's my server code:
import socket
import select
import sys
import thread
import time

# initialize list to track all open_sockets/connected clients
open_sockets = []

# thread for each client that connects
def handle_client(this_client, sleeptime):
    global message, client_count, message_lock, client_count_lock
    while 1:
        user_input = this_client.recv(100)
        if user_input == '':
            break
        message_lock.acquire()
        time.sleep(sleeptime)
        message += user_input
        message_lock.release()
        message = message + '\n'
        this_client.sendall(message)
    # remove 'this_client' from open_sockets list
    open_sockets.remove(this_client)
    this_client.close()
    client_count_lock.acquire()
    client_count -= 1
    client_count_lock.release()

def main(a):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    port = a
    server.bind(('', port))
    server.listen(5)

    message = ''
    message_lock = thread.allocate_lock()
    client_count = 2
    client_count_lock = thread.allocate_lock()

    for i in range(client_count):
        (client, address) = server.accept()
        open_sockets.append(client)
        thread.start_new_thread(handle_client, (client, 2))
    server.close()

    while client_count > 0:
        pass

    print '************\nMessage log from all clients:\n%s\n************' % message

if __name__ == "__main__":
    if sys.argv[1]:
        main(int(sys.argv[1]))
    else:
        main(8070)
Use plain old bash in the script, get the PID and use kill.
Or, much much much much better, create a testing script that handles all that and call that from your Makefile. A single run_tests.py, say.
You want to keep as much logic as possible outside the Makefile.
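For instance, a run_tests.py along these lines (a sketch; it reuses the script names and port from the question):

# run_tests.py - start the background pieces, run the client, tear down
import subprocess
import time

server = subprocess.Popen(["python", "server.py", "7040"])
subscriber = subprocess.Popen(["python", "subscriber.py", "localhost", "7040"])
time.sleep(1)  # crude: give the listeners time to bind

subprocess.call(["python", "client.py", "localhost", "7040"])

# Popen objects remember their PIDs, so no pidof is needed
subscriber.terminate()
server.terminate()

The Makefile's test target then shrinks to a single python run_tests.py line.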
Related to the 'global' issue: define handle_client inside main and remove the 'global message, client_count, ...' line, as sketched below.
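A stripped-down sketch of that restructuring (Python 2 here, to match the question; since Python 2 has no nonlocal, shared state that the nested function must rebind goes in a mutable container):

import thread
import time

def main():
    state = {'message': '', 'client_count': 2}
    message_lock = thread.allocate_lock()

    # nested function: it sees main's locals, so no 'global' line is needed
    def handle_client(name):
        with message_lock:
            state['message'] += name + '\n'
            state['client_count'] -= 1

    thread.start_new_thread(handle_client, ('Bob',))
    thread.start_new_thread(handle_client, ('Alice',))
    while state['client_count'] > 0:
        time.sleep(0.1)
    print state['message']

main()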
