Why does Thrift raise an AssertionError when called from multiple threads? - python

I'm building a kind of simulator that uses the Thrift protocol. When I run multiple threads of my virtual equipment sending messages, the program breaks after a short time on the receiving end. I think the buffer is getting overloaded, or maybe something else is going on, so I'm here asking for help if possible.
Here are the main pieces of my code.
A class for threading:
class ThreadManager(threading.Thread):
    def __init__(self, name, obj, client, layout):
        threading.Thread.__init__(self)
        self.name = name
        self.obj = obj
        self.client = client
        self.layout = layout

    def run(self):
        print("Starting " + self.name)
        while True:
            sleep(2)
            self.obj.auto_gen_msg(self.client, layout=self.layout)
The method for generating messages:
def auto_gen_msg(self, client, layout='', min_delay=15, max_delay=30):
    if not layout:
        msg = self.gen_message(self.draw_random_model())
    else:
        msg = self.gen_message(layout)
    wait = randint(min_delay, max_delay)
    sleep(wait)
    print(self.eqp_type, " delivered a message ...")
    getattr(client, msg[0])(*msg[1])
The main:
def start(layout, equipment, number):
    try:
        host = 'localhost'
        transport = TSocket.TSocket(host, port=9090)
        transport = TTransport.TBufferedTransport(transport)
        protocol = TCompactProtocol.TCompactProtocol(transport)
        client = SuiteService.Client(protocol)
        transport.open()
        equips = [Equipment(equipment) for i in range(number)]
        threads = [ThreadManager(i.eqp_type, i, client, layout) for i in equips]
        for i in range(len(threads)):
            threads[i].start()
            sleep(2)
        while True:
            pass
        transport.close()
    except Thrift.TException as tx:
        print("%s " % (tx.message))
The error haunting me:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Users/lem4fia/Documents/sg/loki/loki-thrift/loki_thrift/loki_thrift/lib/thread_manager.py", line 39, in run
self.obj.auto_gen_msg(self.client, layout=self.layout)
File "/Users/lem4fia/Documents/sg/loki/loki-thrift/loki_thrift/loki_thrift/lib/virtual.py", line 281, in auto_gen_msg
getattr(client, msg[0])(*msg[1])
File "/Users/lem4fia/Documents/sg/loki/thrift-server/thrift_server/suite/SuiteService.py", line 4895, in v1
self.send_v1(ir, ts, ch, o1, o2, o3, o4, o5, o6, o7)
File "/Users/lem4fia/Documents/sg/loki/thrift-server/thrift_server/suite/SuiteService.py", line 4899, in send_v1
self._oprot.writeMessageBegin('v1', TMessageType.CALL, self._seqid)
File "/Users/lem4fia/Documents/sg/loki/lokiv/lib/python3.6/site-packages/thrift-0.11.0-py3.6-macosx-10.6-intel.egg/thrift/protocol/TCompactProtocol.py", line 156, in writeMessageBegin
assert self.state == CLEAR
AssertionError
Curiously, it doesn't break when only 2 virtual equipments are running in threads, but 10 (sometimes fewer) is enough to raise this error.
Can someone please shed some light on this? :)

The problem is that you need to use a different Transport object for each thread; a single connection cannot be shared. This is related to how Thrift's client stack is implemented.
Reference here: http://grokbase.com/t/thrift/user/134s16ks4m/single-connection-and-multiple-threads

As a general rule 1), Thrift is not intended to be used across multiple threads.
This is, at least to my knowledge, true for all currently supported languages.
One instance per thread will do.
1) aside from server-end things like TThreadedServer or TThreadPoolServer
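For illustration, here is a minimal sketch of the one-client-per-thread approach described above. The host, port, protocol, and SuiteService names are taken from the question; the make_client helper and the import path for the generated SuiteService module are assumptions, not part of the original code:

import threading
from thrift.transport import TSocket, TTransport
from thrift.protocol import TCompactProtocol
from suite import SuiteService  # wherever the question's generated code lives (assumed path)

def make_client(host='localhost', port=9090):
    # Build an independent socket/transport/protocol/client stack.
    # The returned client must never be shared between threads.
    socket_ = TSocket.TSocket(host, port=port)
    transport = TTransport.TBufferedTransport(socket_)
    protocol = TCompactProtocol.TCompactProtocol(transport)
    client = SuiteService.Client(protocol)
    transport.open()
    return client, transport

class ThreadManager(threading.Thread):
    def __init__(self, name, obj, layout):
        threading.Thread.__init__(self)
        self.name = name
        self.obj = obj
        self.layout = layout

    def run(self):
        client, transport = make_client()  # one connection per thread
        try:
            while True:
                self.obj.auto_gen_msg(client, layout=self.layout)
        finally:
            transport.close()

With this layout, start() no longer creates a shared client; it only creates the Equipment objects and the threads, and each thread opens and closes its own transport.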

Related

How to append client names to a listbox?

I have a listbox that should display the names of the clients when the server accepts a connection. The server code is as follows:
class GUI2(GUI): #server GUI
    def __init__(self):
        self.clientlist = tk.Listbox(self.clientframe) #listbox that should display client names
        self.clientlist.pack(expand = 1)
        self.s = INITSERVER()
        self.process = Process(target = self.s.startChat) #prevents the GUI from freezing due to server running
        self.process.start()

class INITSERVER(GUI2):
    def startChat(self): #starts the server
        print("server is working on " + self.SERVER)
        self.server.listen(30) #sets max number to only 30 clients
        while True:
            self.conn, self.addr = self.server.accept()
            self.name = self.conn.recv(1024).decode(self.FORMAT)
            self.clientlist.insert("end", self.name) #append client names to listbox supposedly
            print(f"Name is :{self.name}")
The client code is as follows:
class INITCLIENT(GUI3): #GUI3 is client GUI; calls INITCLIENT when done packing
    def __init__(self):
        self.PORT = 5000
        self.SERVER = "" #left blank for this post; contains the server's exact address
        self.ADDRESS = (self.SERVER, self.PORT)
        self.FORMAT = "utf-8"
        self.client = socket.socket(socket.AF_INET,
                                    socket.SOCK_STREAM)
        self.client.connect(self.ADDRESS)
        self.name = g.entergname.get() # g = GUI() i.e. root window; entergname is Entry widget where client inputs their names
        self.client.send(self.name.encode(self.FORMAT)) #sends inputted names to INITSERVER to display in listbox.... supposedly
Through VS Code, I run the server first, then join the server using another terminal; the problem happens next.
Process Process-1:
Traceback (most recent call last):
File "F:\Program Files (x86)\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "F:\Program Files (x86)\Python\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "f:\project\mainmenu.py", line 341, in startChat
self.clientlist.insert("end", self.name) #append client names to listbox
AttributeError: 'INITSERVER' object has no attribute 'clientlist'
I tried replacing self.clientlist.insert with super().clientlist.insert, but the same kind of error pops up: AttributeError: 'super' object has no attribute 'clientlist'.
Any help in fixing the error, or in pointing me in the right direction, is greatly appreciated.
EDIT: After countless trial and error, I think the error is caused by the child process not knowing what self.clientlist is, because it doesn't know that self (i.e. INITSERVER) is a child of GUI2; Process doesn't duplicate the parent's attributes, only the ones created inside startChat().
Is there a way to restructure the code so that the clients' names can be displayed in the listbox? Or is what I'm doing not compatible with Python, so I have to display them some other way?
Thanks to #acw1668 I was guided to the answer: I just needed to remove the INITSERVER class and move all of its functions and attributes into the GUI2 class, then use a Thread instead of a Process to run startChat, which bypasses the Tkinter pickling errors. The new code is as follows:
class GUI2(GUI): #server GUI
    def __init__(self):
        self.clientlist = tk.Listbox(self.clientframe) #listbox that should display client names
        self.clientlist.pack(expand = 1)
        self.thread = Thread(target = self.startChat) #prevents the GUI from freezing due to server running
        self.thread.start()

    def startChat(self):
        if (self.checksignal == 0): #custom-made stop signal for stopping the thread
            print("server is working on " + self.SERVER)
            self.server.listen(30)
            while True:
                self.conn, self.addr = self.server.accept()
                self.name = self.conn.recv(1024).decode(self.FORMAT)
                self.clientlist.insert("end", self.name) #append client names to listbox
        else:
            return
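One caveat: Tkinter widgets are generally not safe to touch from a background thread, so inserting into the Listbox directly from startChat can misbehave. A common pattern is to push the received names onto a queue.Queue and let the Tk main loop drain it with after(). The sketch below shows that pattern in isolation; the class name ClientListApp and its method names are made up for illustration and are not from the question:

import queue
import socket
import threading
import tkinter as tk

class ClientListApp:
    def __init__(self, root, server_sock):
        self.clientlist = tk.Listbox(root)
        self.clientlist.pack(expand=1)
        self.names = queue.Queue()  # hands data from the accept thread to the GUI
        threading.Thread(target=self.accept_loop, args=(server_sock,), daemon=True).start()
        root.after(100, self.poll_names)  # poll the queue from the Tk main loop

    def accept_loop(self, server_sock):
        while True:
            conn, _addr = server_sock.accept()
            name = conn.recv(1024).decode("utf-8")
            self.names.put(name)  # no widget calls in this thread

    def poll_names(self):
        while not self.names.empty():
            self.clientlist.insert("end", self.names.get())
        self.clientlist.after(100, self.poll_names)

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", 5000))
    srv.listen(30)
    root = tk.Tk()
    ClientListApp(root, srv)
    root.mainloop()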

Python Tornado: Too many open files with SSL

If I have many connections on my Tornado server, I see this error in the log:
Exception in callback (<socket._socketobject object at 0x7f0b9053e3d0>, <function null_wrapper at 0x7f0b9054c140>)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 276, in accept_handler
callback(connection, address)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 264, in _handle_connection
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 517, in ssl_wrap_socket
context = ssl_options_to_context(ssl_options)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 494, in ssl_options_to_context
context.load_cert_chain(ssl_options['certfile'], ssl_options.get('keyfile', None))
IOError: [Errno 24] Too many open files
and it disconnects my client. Does Tornado open the SSL certificate file on every connection?
Tornado app
class VastWebSocket(tornado.websocket.WebSocketHandler):
    connections = set()
    current_connect = 0
    current_user = 0
    status_play_vast = False

    def open(self):
        c = Connection()
        c.connection = self
        VastWebSocket.connections.add(c)
        self.current_connect = c

    def on_message(self, msg):
        data = json.loads(msg)
        app_log.info("on message = " + msg)
        if not 'status' in data:
            return
        if data["status"] == "start_vast":
            VastWebSocket.status_play_vast = True
        if data["status"] == "end_vast":
            VastWebSocket.status_play_vast = False
        app_log.info("status_play_vast = " + str(VastWebSocket.status_play_vast))
        if data["status"] == "get_status_vast":
            self.current_connect.connection.write_message({"status": VastWebSocket.status_play_vast})
            return
        for conn in self.connections:
            conn.connection.write_message(msg)

    def on_close(self):
        if self.current_connect <> 0:
            VastWebSocket.connections.remove(self.current_connect)

    def check_origin(self, origin):
        return True
Starting the Tornado server from a Django command:
class Command(BaseCommand):
    help = 'Starts the Tornado application for message handling.'

    def add_arguments(self, parser):
        parser.add_argument('port_number', nargs='+', type=int)

    def sig_handler(self, sig, frame):
        """Catch signal and init callback"""
        tornado.ioloop.IOLoop.instance().add_callback(self.shutdown)

    def shutdown(self):
        """Stop server and add callback to stop i/o loop"""
        self.http_server.stop()
        io_loop = tornado.ioloop.IOLoop.instance()
        io_loop.add_timeout(time.time() + 2, io_loop.stop)

    def handle(self, *args, **options):
        if "port_number" in options:
            try:
                port = int(options["port_number"][0])
            except ValueError:
                raise CommandError('Invalid port number specified')
        else:
            port = 8030
        ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        ssl_ctx.load_cert_chain(os.path.join("/www/cert/", "rumma.crt"),
                                os.path.join("/www/cert/", "rumma.key"))
        self.http_server = tornado.httpserver.HTTPServer(application, ssl_options = ssl_ctx)
        self.http_server.bind(port, address="0.0.0.0")
        self.http_server.start(1)
        # Init signals handler
        signal.signal(signal.SIGTERM, self.sig_handler)
        # This will also catch KeyboardInterrupt exception
        signal.signal(signal.SIGINT, self.sig_handler)
        tornado.ioloop.IOLoop.instance().start()
Why does it open so many files? In my mind, the SSL files should only need to be opened once, at server start. Stack Overflow asks for more info, but everything needed is above.
Your code seems to run multiple processes or threads that all access the SSL key file directly. The Linux default ulimit is quite low, so this error occurs.
You can check the current setting with:
$ ulimit -a
The quick and dirty solution is to increase this value:
$ ulimit -n <new_value>
Even unlimited is accepted with the -n option.
Note: you can permanently set the value in the app user's .bashrc file.
In either case you'll need to log out and log back in for the change to take effect.
Modifying this value is a bit dirty, though, because it's a global setting for the whole user environment.
The harder but cleaner solution is to find a way to load the content of the file into memory when your app starts and make the loaded value accessible to all processes.
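If raising the limit is the route you take, it can also be done from inside the process at startup with the standard resource module (Unix only). This is a minimal sketch of that idea, not code from the answer above; the fallback value of 4096 is an arbitrary choice:

import resource

# Raise the soft limit on open file descriptors up to the hard limit (Unix only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = hard if hard != resource.RLIM_INFINITY else 4096  # 4096 is an arbitrary example
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("RLIMIT_NOFILE is now %s" % (resource.getrlimit(resource.RLIMIT_NOFILE),))

This only changes the limit for the current process (and its children), so it avoids touching the global user environment.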

Threaded UDP server bug

I've broken this code somehow and I can't fix it. The server/client code was written by someone else (mostly from the examples in the Python manuals), and I can't work out what's wrong.
I'm getting issues with super and __init__ and that jazz, mostly because I don't fully understand them, and most documentation on the subject leaves me more confused than when I started. For now, I'll be happy enough just to get it working. It's likely some silly issue, fixable in one line.
Any ideas? I've tried not to paste in code that isn't relevant, but I can add more or provide the whole file if it helps. The code falls over specifically when a handler thread is created. My test case runs instances of the code and passes messages between them, and it falls over on receipt of the first UDP message.
# Library imports
import threading
import SocketServer
import multiprocessing

# .. More code here ...

class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
    pass

class NodeDaemon(ThreadedUDPServer):
    def __init__(self, host, port, modules):
        ThreadedUDPServer.__init__(self, (host, port), NodeProtocolHandler)
        # Store parameters in the class
        self.modules = modules
        self.port = port
        self.ip = host
        # Check if we have enabled multithreaded listen daemon
        if settings.MULTI:
            self.server_thread = multiprocessing.Process(target=self.serve_forever)
        else:
            self.server_thread = threading.Thread(target=self.serve_forever)
        # Make the server thread daemonic
        self.server_thread.daemon = True
        # Start the server thread
        self.server_thread.start()
        # Update our identity node info
        self.modules.identity = NodeInfo(host, port)

    def fetch_modules(self):
        return self.modules

class NodeProtocolHandler(SocketServer.BaseRequestHandler):
    """
    Handles nody things.
    Check https://docs.python.org/2/library/socketserver.html
    For more sweet deets.
    """
    def __init__(self, *args, **kwargs):
        super(SocketServer.BaseRequestHandler, self).__init__(args, kwargs)
        # Grab modules references
        self.modules = self.server.fetch_modules()

    # ... More code here .. #

    def handle(self):
        """
        Main routine to handle all incoming UDP packets.
        """
        # Grab the raw message data received
        raw_data = self.request[0].strip()
        # ... More code here ...
The error generated is:
Exception happened during processing of request from ('127.0.0.1', 60377)
----------------------------------------
Traceback (most recent call last):
File "C:\Python27\lib\SocketServer.py", line 593, in process_request_thread
self.finish_request(request, client_address)
File "C:\Python27\lib\SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\some_dir\node_daemon.py", line 60, in __init__
super(SocketServer.BaseRequestHandler,self).__init__(args,kwargs)
TypeError: must be type, not classobj
In Python 2, SocketServer.BaseRequestHandler is an old-style class, so super() cannot be used with it (hence the "must be type, not classobj" error); call the base class directly instead:
def __init__(self, *args, **kwargs):
-    super(SocketServer.BaseRequestHandler, self).__init__(args, kwargs)
+    SocketServer.BaseRequestHandler.__init__(self, *args, **kwargs)

Catch Interrupted system call in threading

I have a client-server application using the Envisage framework, and I'm using threads to handle the connection. Here is a snippet of the code:
....
SocketServer.TCPServer.allow_reuse_address = True
self.server = TCPFactory((HOST, PORT), TCPRequestHandler, self.application)
self.server_thread = threading.Thread(target=self.server.serve_forever)
self.server_thread.setDaemon(True)
self.server_thread.start()

class TCPFactory(SocketServer.ThreadingTCPServer):
    def __init__(self, server_address, RequestHandlerClass, application):
        SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass)
        self.application = application

class TCPRequestHandler(SocketServer.BaseRequestHandler):
    """"""
    def setup(self):
        .....
In the Envisage framework I call the open_file() function, which gives us a popup window, but when this window appears I receive the following error:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/SocketServer.py", line 225, in serve_forever
r, w, e = select.select([self], [], [], poll_interval)
error: (4, 'Interrupted system call')
How can I handle this error?
After Armin Rigo's comment, I modified SocketServer.py:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__is_shut_down.clear()
    try:
        while not self.__shutdown_request:
            # XXX: Consider using another file descriptor or
            # connecting to the socket to wake this up instead of
            # polling. Polling reduces our responsiveness to a
            # shutdown request and wastes cpu at all other times.
            try:
                r, w, e = select.select([self], [], [], poll_interval)
            except select.error as ex:
                #print ex
                if ex[0] == 4:
                    continue
                else:
                    raise
            if self in r:
                self._handle_request_noblock()
    finally:
        self.__shutdown_request = False
        self.__is_shut_down.set()
I just ran into a similar problem when I added a little httpd server to a program; it receives various signals from other processes. After playing around I came up with a simple solution that avoids actually modifying stdlib code, but I think it's a little risky. I simply wrapped the serve_forever call in a loop that catches and ignores socket errors:
def non_int_serve_forever(self, poll_interval=0.5):
    while 1:
        try:
            self.serve_forever(poll_interval=poll_interval)
            break
        except select.error:
            pass
This removes the risk of needing different solutions for different versions of SocketServer.py, but it's not obvious that serve_forever() should be restartable multiple times, even though it appears to work now.
Any thoughts?
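For what it's worth, the same retry-on-EINTR idea can be packaged as a subclass, so neither the stdlib nor the call sites need patching. A minimal sketch, assuming Python 2's SocketServer; the class name is made up for illustration:

import errno
import select
import SocketServer  # "socketserver" on Python 3

class RestartingTCPServer(SocketServer.ThreadingTCPServer):
    """ThreadingTCPServer whose serve_forever() restarts after an interrupted select()."""
    def serve_forever(self, poll_interval=0.5):
        while True:
            try:
                SocketServer.ThreadingTCPServer.serve_forever(self, poll_interval)
                break  # serve_forever() returned normally after shutdown(), so stop
            except select.error as ex:
                if ex.args[0] != errno.EINTR:
                    raise  # only swallow interrupted system calls

It carries the same caveat as the wrapper above: it relies on serve_forever() being restartable after the exception, which seems to hold in practice but is not documented.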

Python Sharing a network socket with multiprocessing.Manager

I am currently writing an nginx proxy server module with a request queue in front, so requests are not dropped when the servers behind nginx can't handle them (nginx is configured as a load balancer).
I am using
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
The idea is to put the requests in a queue before handling them. I know multiprocessing.Queue supports only simple objects and cannot carry raw sockets, so I tried using a multiprocessing.Manager to make a shared dictionary. The Manager also uses sockets for its connection, so this method failed too. Is there a way to share network sockets between processes?
Here is the problematic part of the code:
class ProxyServer(Threader, HTTPServer):
    def __init__(self, server_address, bind_and_activate=True):
        HTTPServer.__init__(self, server_address, ProxyHandler,
                            bind_and_activate)
        self.manager = multiprocessing.Manager()
        self.conn_dict = self.manager.dict()
        self.ticket_queue = multiprocessing.Queue(maxsize= 10)
        self._processes = []
        self.add_worker(5)

    def process_request(self, request, client):
        stamp = time.time()
        print "We are processing"
        self.conn_dict[stamp] = (request, client) # the program crashes here
        #Exception happened during processing of request from ('172.28.192.34', 49294)
        #Traceback (most recent call last):
        # File "/usr/lib64/python2.6/SocketServer.py", line 281, in _handle_request_noblock
        #   self.process_request(request, client_address)
        # File "./nxproxy.py", line 157, in process_request
        #   self.conn_dict[stamp] = (request, client)
        # File "<string>", line 2, in __setitem__
        # File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in _callmethod
        #   conn.send((self._id, methodname, args, kwds))
        #TypeError: expected string or Unicode object, NoneType found
        self.ticket_queue.put(stamp)

    def add_worker(self, number_of_workers):
        for worker in range(number_of_workers):
            print "Starting worker %d" % worker
            proc = multiprocessing.Process(target=self._worker, args = (self.conn_dict,))
            self._processes.append(proc)
            proc.start()

    def _worker(self, conn_dict):
        while 1:
            ticket = self.ticket_queue.get()
            print conn_dict
            a=0
            while a==0:
                try:
                    request, client = conn_dict[ticket]
                    a=1
                except Exception:
                    pass
            print "We are threading!"
            self.threader(request, client)
You can use multiprocessing.reduction to transfer the connection and socket objects between processes.
Example code:
# Main process
from multiprocessing.reduction import reduce_handle
h = reduce_handle(client_socket.fileno())
pipe_to_worker.send(h)
# Worker process
from multiprocessing.reduction import rebuild_handle
h = pipe.recv()
fd = rebuild_handle(h)
client_socket = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
client_socket.send("hello from the worker process\r\n")
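(For reference, reduce_handle and rebuild_handle are the Python 2 API; on Python 3.9+ the standard socket module exposes send_fds and recv_fds for the same SCM_RIGHTS trick. The sketch below is an illustration of that newer API, not code from this answer; it assumes Unix with the default fork start method, and the port number is arbitrary.)

import multiprocessing
import socket

def worker(channel):
    # Receive one file descriptor over the Unix socket pair.
    _msg, fds, _flags, _addr = socket.recv_fds(channel, 1024, 1)
    client = socket.socket(fileno=fds[0])  # rebuild a socket object from the fd
    client.sendall(b"hello from the worker process\r\n")
    client.close()

if __name__ == "__main__":
    parent_end, child_end = socket.socketpair()  # AF_UNIX pair used only to pass fds
    proc = multiprocessing.Process(target=worker, args=(child_end,))
    proc.start()

    server = socket.create_server(("127.0.0.1", 8000))  # arbitrary port; connect with e.g. `nc 127.0.0.1 8000`
    conn, _ = server.accept()
    socket.send_fds(parent_end, [b"fd"], [conn.fileno()])  # hand the accepted connection to the worker
    conn.close()  # the parent can drop its copy once the fd has been sent
    proc.join()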
Looks like you need to pass file descriptors between processes (assuming Unix here, no clue about Windows). I've never done this in Python, but here is a link to the python-passfd project that you might want to check.
You can look at this code - https://gist.github.com/sunilmallya/4662837 - which is a multiprocessing.reduction socket server where the parent process passes connections to the child processes after accepting them.
