Tornado [Errno 24] Too many open files [duplicate] - python

This question already has an answer here:
Tornado "error: [Errno 24] Too many open files" error
(1 answer)
Closed 9 years ago.
We are running a Tornado 3.0 service on a RedHat OS and getting the following error:
[E 140102 17:07:37 ioloop:660] Exception in I/O handler for fd 11
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 653, in start
self._handlers[fd](fd, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 241, in wrapped
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 136, in accept_handler
connection, address = sock.accept()
File "/usr/lib/python2.7/socket.py", line 202, in accept
error: [Errno 24] Too many open files
But we couldn't figure out what that means.
Our Tornado code is as follows:
import sys
from tornado.ioloop import IOLoop
from tornado.options import parse_command_line, define, options
from tornado.httpserver import HTTPServer
from tornado.netutil import bind_sockets
import tornado

sys.path.append("..")

from tornado.web import RequestHandler, Application
from shared.bootstrap import *
from time import time
from clients import ClientFactory
from shared.configuration import Config
from shared.logger import Logger
from algorithms.neighborhood.application import NeighborhoodApplication
import traceback

define('port', default=8000, help="Run on the given port", type=int)
define('debug', default=True, help="Run application in debug mode", type=bool)


class WService(RequestHandler):

    _clients = {}

    def prepare(self):
        self._start_time = time()
        RequestHandler.prepare(self)

    def get(self, algorithm=None):
        self.add_header('Content-type', 'application/json')
        response = {'skus': []}
        algorithm = 'neighborhood' if not algorithm else algorithm
        try:
            if algorithm not in self._clients:
                self._clients[algorithm] = ClientFactory.get_instance(algorithm)
            arguments = self.get_arguments_by_client(self._clients[algorithm].get_expected_arguments())
            response['skus'] = app.get_manager().make_recommendations(arguments)
            self.write(response)
        except Exception as err:
            self.write(response)
            error("Error: " + str(err))

    def get_arguments_by_client(self, expected_arguments):
        arguments = {}
        for key in expected_arguments:
            arguments[key] = self.get_argument(key, expected_arguments[key])
        return arguments

    def on_connection_close(self):
        self.finish({'skus': []})
        RequestHandler.on_connection_close(self)

    def on_finish(self):
        response_time = 1000.0 * (time() - self._start_time)
        log("%d %s %.2fms" % (self.get_status(), self._request_summary(), response_time))
        RequestHandler.on_finish(self)


def handling_exception(signal, frame):
    error('IOLoop blocked for %s seconds in\n%s\n\n' % (
        io_loop._blocking_signal_threshold,
        ''.join(traceback.format_stack(frame)[-3:])))


if __name__ == "__main__":
    configuration = Config()
    Logger.configure(configuration.get_configs('logger'))
    app = NeighborhoodApplication({
        'application': configuration.get_configs('neighborhood'),
        'couchbase': configuration.get_configs('couchbase'),
        'stock': configuration.get_configs('stock')
    })
    app.run()
    log("Neighborhood Matrices successfully created...")
    log("Initiating Tornado Service...")
    parse_command_line()
    application = Application([
        (r'/(favicon.ico)', tornado.web.StaticFileHandler, {"path": "./images/"}),
        (r"/(.*)", WService)
    ], **{'debug': options.debug, 'x-headers': True})
    sockets = bind_sockets(options.port, backlog=1024)
    server = HTTPServer(application)
    server.add_sockets(sockets)
    io_loop = IOLoop.instance()
    io_loop.set_blocking_signal_threshold(.05, handling_exception)
    io_loop.start()
It's a very basic script: it takes the URL, processes it in the make_recommendations function and sends back the response.
We've tried to set a Tornado timeout of 50 ms through the io_loop.set_blocking_signal_threshold function, as processing a URL might sometimes take that long.
The system receives around 8000 requests per minute and worked fine for about 30 minutes, but after that it started throwing the "too many open files" error and broke down. In general requests were taking about 20 ms to process, but when the error started happening the time consumed jumped to seconds, all of a sudden.
We checked how many connections port 8000 had, and there were several open connections, all in the "ESTABLISHED" state.
Is there something wrong in our Tornado script? We believe our timeout function is not working properly, but from what we've researched so far everything seems to be OK.
If you need more info please let me know.
Thanks in advance,

Many linux distributions ship with very low limits (e.g. 250) for the number of open files per process. You can use "ulimit -n" to see the current value on your system (be sure to issue this command in the same environment your tornado server runs in). To raise the limit you can use the ulimit command or modify /etc/security/limits.conf (try setting it to 50000).
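For a quick check from inside the process itself, Python's resource module exposes the same limit. A minimal sketch (the 50000 target just mirrors the limits.conf suggestion above):

import resource

# Inspect the per-process file-descriptor limit (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-files limit: soft=%d hard=%d" % (soft, hard))

# The soft limit can be raised up to the hard limit without root;
# raising the hard limit itself requires root or a limits.conf change.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(50000, hard), hard))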
Tornado's HTTP server does not (as of version 3.2) close connections that a web browser has left open, so idle connections may accumulate over time. This is one reason why it is recommended to use a proxy like nginx or haproxy in front of a Tornado server; these servers are more hardened against this and other potential DoS issues.
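If upgrading is an option, later Tornado releases (4.0 and up, if memory serves; not the 3.0 used in the question) accept an idle_connection_timeout argument on HTTPServer. A hedged sketch, reusing the application object from the question:

from tornado.httpserver import HTTPServer

# Assumption: Tornado >= 4.0, where HTTPServer accepts an
# idle_connection_timeout (in seconds); on 3.x this kwarg does not
# exist and idle connections must be reaped by a front-end proxy.
server = HTTPServer(application, idle_connection_timeout=60)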

Related

Google Cloud Speech to Text Audio Timeout Error when used with Twilio "Stream" verb and Websocket

I'm currently trying to make a system that can transcribe a phone call in real time and then display the conversation in my command line. To do this, I'm using a Twilio phone number which sends out an HTTP request when called. I then use Flask, ngrok and WebSockets to run my server code, make my local port public and transfer the data, and the TwiML verb "Stream" to stream the audio data to the Google Cloud Speech-to-Text API. So far I have used Twilio's Python demo on GitHub (https://github.com/twilio/media-streams/tree/master/python/realtime-transcriptions).
My server code:
from flask import Flask, render_template
from flask_sockets import Sockets
from SpeechClientBridge import SpeechClientBridge
from google.cloud.speech_v1 import enums
from google.cloud.speech_v1 import types
import json
import base64
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "./<KEY>.json"

HTTP_SERVER_PORT = 8080

config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.MULAW,
    sample_rate_hertz=8000,
    language_code='en-US')
streaming_config = types.StreamingRecognitionConfig(
    config=config,
    interim_results=True)

app = Flask(__name__)
sockets = Sockets(app)


@app.route('/home')
def home():
    return render_template("index.html")


@app.route('/twiml', methods=['POST'])
def return_twiml():
    print("POST TwiML")
    return render_template('streams.xml')


def on_transcription_response(response):
    if not response.results:
        return
    result = response.results[0]
    if not result.alternatives:
        return
    transcription = result.alternatives[0].transcript
    print("Transcription: " + transcription)


@sockets.route('/')
def transcript(ws):
    print("WS connection opened")
    bridge = SpeechClientBridge(
        streaming_config,
        on_transcription_response
    )
    while not ws.closed:
        message = ws.receive()
        if message is None:
            bridge.terminate()
            break
        data = json.loads(message)
        if data["event"] in ("connected", "start"):
            print(f"Media WS: Received event '{data['event']}': {message}")
            continue
        if data["event"] == "media":
            media = data["media"]
            chunk = base64.b64decode(media["payload"])
            bridge.add_request(chunk)
        if data["event"] == "stop":
            print(f"Media WS: Received event 'stop': {message}")
            print("Stopping...")
            break
    bridge.terminate()
    print("WS connection closed")


if __name__ == '__main__':
    from gevent import pywsgi
    from geventwebsocket.handler import WebSocketHandler

    server = pywsgi.WSGIServer(('', HTTP_SERVER_PORT), app, handler_class=WebSocketHandler)
    print("Server listening on: http://localhost:" + str(HTTP_SERVER_PORT))
    server.serve_forever()
streams.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Say>Thanks for calling!</Say>
    <Start>
        <Stream url="wss://<ngrok-URL/.ngrok.io/"/>
    </Start>
    <Pause length="40"/>
</Response>
Twilio WebHook:
http://<ngrok-URL>.ngrok.io/twiml
I am getting the following error when I run the server code and then call the Twilio number:
C:\Users\Max\Python\Twilio>python server.py
Server listening on: http://localhost:8080
POST TwiML
WS connection opened
Media WS: Received event 'connected': {"event":"connected","protocol":"Call","version":"0.2.0"}
Media WS: Received event 'start': {"event":"start","sequenceNumber":"1","start":{"accountSid":"AC8abc5aa74496a227d3eb489","streamSid":"MZe6245f23e2385aa2ea7b397","callSid":"CA5864313b4992607d3fe46","tracks":["inbound"],"mediaFormat":{"encoding":"audio/x-mulaw","sampleRate":8000,"channels":1}},"streamSid":"MZe6245f2397c1285aa2ea7b397"}
Exception in thread Thread-4:
Traceback (most recent call last):
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\google\api_core\grpc_helpers.py", line 96, in next
return six.next(self._wrapped)
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\grpc\_channel.py", line 416, in __next__
return self._next()
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\grpc\_channel.py", line 689, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.OUT_OF_RANGE
details = "Audio Timeout Error: Long duration elapsed without audio. Audio should be sent close to real time."
debug_error_string = "{"created":"#1591738676.565000000","description":"Error received from peer ipv6:[2a00:1450:4009:807::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Audio Timeout Error: Long duration elapsed without audio. Audio should be sent close to real time.","grpc_status":11}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Max\Python\Twilio\SpeechClientBridge.py", line 37, in process_responses_loop
for response in responses:
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\google\api_core\grpc_helpers.py", line 99, in next
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.OutOfRange: 400 Audio Timeout Error: Long duration elapsed without audio. Audio should be sent close to real time.
Media WS: Received event 'stop': {"event":"stop","sequenceNumber":"752","streamSid":"MZe6245f2397c125aa2ea7b397","stop":{"accountSid":"AC8abc5aa74496a60227d3eb489","callSid":"CA5842bc6431314d502607d3fe46"}}
Stopping...
WS connection closed
I can't work out why I'm getting the audio timeout error. Is it a firewall issue between Twilio and Google? An encoding issue?
Any help would be greatly appreciated.
System:
Windows 10
Python 3.7.1
ngrok 2.3.35
Flask 1.1.2
As your streams.xml returns the socket URL "wss://<ngrok-URL/.ngrok.io/", please make sure it matches your routing (e.g. @sockets.route('/')).
If your socket route starts with '/', then you should rewrite the streams.xml; see below as an example.
<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Say>Thanks for calling!</Say>
    <Start>
        <Stream url="wss://YOUR_NGROK_ID.ngrok.io/"/>
    </Start>
    <Pause length="40"/>
</Response>
I ran some tests on this to try to establish what was happening. I put a timer around the

bridge = SpeechClientBridge(
    streaming_config,
    on_transcription_response)

section of code and found that it was taking ~10.9 s to initialize. I believe the Google API has a timeout of 10 s. I tried running this on my Google Cloud instance, which has more oomph than my laptop, and it works perfectly well. It's either this, or there are some different versions of libraries/code etc. installed on the GCP instance, which I need to check.
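A minimal sketch of such a timer, assuming the streaming_config and on_transcription_response defined in the question's server code:

import time

from SpeechClientBridge import SpeechClientBridge

# Time how long the bridge takes to initialize; anything near the
# ~10 s mark would explain the OUT_OF_RANGE audio timeout.
t0 = time.monotonic()
bridge = SpeechClientBridge(streaming_config, on_transcription_response)
print("SpeechClientBridge init took %.1f s" % (time.monotonic() - t0))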
This is caused by a conflict between gevent (used by flask_sockets) and grpc (used by google-cloud-speech), described in this issue: https://github.com/grpc/grpc/issues/4629
The solution is to add the following code:
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()
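A note on placement (my assumption, not spelled out in the linked issue): the patch must run before any gRPC channel is created, so in the server above it belongs at the very top of server.py, ahead of the google.cloud imports:

# Patch grpc for gevent compatibility first...
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()

# ...and only then import the gRPC-backed Speech client as before.
from google.cloud.speech_v1 import enums, types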

Run more than one server using python

I am basically trying to run two servers using a simple script. I have used the solutions from here, there and elsewhere.
For instance, in the example below I am trying to serve two directories on ports 8000 and 8001.
import http.server
import socketserver
import os
import multiprocessing


def run_webserver(path, PORT):
    os.chdir(path)
    Handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(('0.0.0.0', PORT), Handler)
    httpd.serve_forever()
    return


if __name__ == '__main__':
    # Define "services" directories and ports to use to host them
    server_details = [
        ("path/to/directory1", 8000),
        ("path/to/directory2", 8001)
    ]

    # Run servers
    servers = []
    for s in server_details:
        p = multiprocessing.Process(
            target=run_webserver,
            args=(s[0], s[1])
        )
        servers.append(p)
    for server in servers:
        server.start()
    for server in servers:
        server.join()
Once I execute the code above it all works fine and I can access both directories using http://localhost:8000 and http://localhost:8001. However, when I exit the script using Ctrl+C and then try to run it again I get the following error:
Traceback (most recent call last):
"/home/user/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/user/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "repos/scripts/webhosts.py", line 12, in run_webserver
File "/home/user/anaconda3/lib/python3.6/socketserver.py", line 453, in __init__
self.server_bind()
File "/home/user/anaconda3/lib/python3.6/socketserver.py", line 467, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
This error only pops up if I actually access the server while it is running; if I don't access it, I can re-run the script. From the error message it looks like something is still holding the socket, but when typing lsof -n -i4TCP:8000 and lsof -n -i4TCP:8001 I don't get anything. And after a while the error stops appearing and I can run the script again.
Before creating the server, add:
socketserver.TCPServer.allow_reuse_address = True
Note that TCPServer binds the socket in its constructor, so setting the attribute on an already-constructed instance is too late; to set it per instance, construct with bind_and_activate=False, set httpd.allow_reuse_address = True, then call httpd.server_bind() and httpd.server_activate() before serve_forever().
Documentation reference:
BaseServer.allow_reuse_address
Whether the server will allow the reuse of an address. This defaults to False, and can be set in subclasses to change the policy.
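Applied to the run_webserver function from the question, a minimal sketch:

import http.server
import os
import socketserver

def run_webserver(path, port):
    os.chdir(path)
    # Opt in to SO_REUSEADDR before the constructor binds the socket,
    # so a restart can rebind while old sockets linger in TIME_WAIT.
    socketserver.TCPServer.allow_reuse_address = True
    handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(('0.0.0.0', port), handler)
    httpd.serve_forever()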
Expanding on my comment: in a previous edit the OP registered an exit handler with atexit.register(exit_handler). The question is, does it clean up your system's resources (a.k.a. open sockets)?
If your program exits without closing the sockets (because you interrupted it with Ctrl+C), it may take some time for the OS to clean them up (because they're in the TIME_WAIT state); you can read about TIME_WAIT and how to avoid it here.
Using exit handlers is a good way to avoid this:
import atexit

def clean_sockets():
    mysocket.server_close()  # free the listening socket

atexit.register(clean_sockets)

gRPC: Rendezvous terminated with (StatusCode.INTERNAL, Received RST_STREAM with error code 2)

I'm implementing a gRPC client and server in Python. The server receives data from the client successfully, but the client gets back "RST_STREAM with error code 2".
What does it actually mean, and how do I fix it?
Here's my proto file:
service MyApi {
    rpc SelectModelForDataset (Dataset) returns (SelectedModel) {
    }
}

message Dataset {
    // ...
}

message SelectedModel {
    // ...
}
My servicer implementation looks like this:
class MyApiServicer(my_api_pb2_grpc.MyApiServicer):
    def SelectModelForDataset(self, request, context):
        print("Processing started.")
        selectedModel = ModelSelectionModule.run(request, context)
        print("Processing Completed.")
        return selectedModel
I start the server with this code:
import grpc
from concurrent import futures
#...
server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
my_api_pb2_grpc.add_MyApiServicer_to_server(MyApiServicer(), server)
server.add_insecure_port('[::]:50051')
server.start()
My client looks like this:
channel = grpc.insecure_channel(target='localhost:50051')
stub = my_api_pb2_grpc.MyApiStub(channel)
dataset = my_api_pb2.Dataset()
# fill the object ...
model = stub.SelectModelForDataset(dataset) # call server
After the client makes its call, the server starts the processing and runs until completion (approximately a minute), but the client returns immediately with the following error:
Traceback (most recent call last):
File "Client.py", line 32, in <module>
run()
File "Client.py", line 26, in run
model = stub.SelectModelForDataset(dataset) # call server
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 484, in __call__
return _end_unary_response_blocking(state, call, False, deadline)
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 434, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INTERNAL, Received RST_STREAM with error code 2)>
If I do the request asynchronously and wait on the future,
model_future = stub.SelectModelForDataset.future(dataset) # call server
model = model_future.result()
the client waits until completion, but after that still returns an error:
Traceback (most recent call last):
File "AsyncClient.py", line 35, in <module>
run()
File "AsyncClient.py", line 29, in run
model = model_future.result()
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 276, in result
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INTERNAL, Received RST_STREAM with error code 2)>
UPD: After enabling tracing with GRPC_TRACE=all I discovered the following:
Client, immediately after request:
E0109 17:59:42.248727600 1981 channel_connectivity.cc:126] watch_completion_error: {"created":"#1515520782.248638500","description":"GOAWAY received","file":"src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":1137,"http2_error":0,"raw_bytes":"Server shutdown"}
E0109 17:59:42.451048100 1979 channel_connectivity.cc:126] watch_completion_error: "Cancelled"
E0109 17:59:42.451160000 1979 completion_queue.cc:659] Operation failed: tag=0x7f6e5cd1caf8, error={"created":"#1515520782.451034300","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.cc","file_line":133}
...(last two messages keep repeating 5 times every second)
Server:
E0109 17:59:42.248201000 1985 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"#1515520782.248170000","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
E0109 17:59:42.248541100 1975 tcp_server_posix.cc:231] Failed accept4: Invalid argument
E0109 17:59:47.362868700 1994 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"#1515520787.362853500","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
E0109 17:59:52.430612500 2000 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"#1515520792.430598800","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
... (last message kept repeating every few seconds)
UPD2:
The full content of my Server.py file:
import ModelSelectionModule
import my_api_pb2_grpc
import my_api_pb2
import grpc
from concurrent import futures
import time


class MyApiServicer(my_api_pb2_grpc.MyApiServicer):
    def SelectModelForDataset(self, request, context):
        print("Processing started.")
        selectedModel = ModelSelectionModule.run(request, context)
        print("Processing Completed.")
        return selectedModel


# TODO(shalamov): what is the best way to run a python server?
def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
    my_api_pb2_grpc.add_MyApiServicer_to_server(MyApiServicer(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("gRPC server started\n")
    try:
        while True:
            time.sleep(24 * 60 * 60)  # run for 24h
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()
UPD3:
It seems that ModelSelectionModule.run is causing the problem. I tried to isolate it into a separate thread, but it didn't help. The selectedModel is eventually calculated, but the client is already gone by that time. How do I prevent this call from interfering with grpc?

pool = ThreadPool(processes=1)
async_result = pool.apply_async(ModelSelectionModule.run, (request, context))
selectedModel = async_result.get()

The call is rather complicated: it spawns and joins lots of threads and calls different libraries like scikit-learn, SMAC and others. It would be too much to post all of it here.
While debugging, I discovered that after the client's request the server keeps 2 connections open (fd 3 and fd 8). If I manually close fd 8 or write some bytes to it, the error I see in the client becomes Stream removed (instead of Received RST_STREAM with error code 2). It seems that the socket (fd 8) somehow gets corrupted by the child processes. How is that possible? How can I protect the socket from being accessed by child processes?
This is a result of using fork() in the process handler. gRPC Python doesn't support this use case.
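One hedged workaround sketch (my own assumption, not part of the answer above): if ModelSelectionModule has to use multiprocessing internally, the 'spawn' start method keeps children from inheriting the gRPC server's file descriptors. spawn requires picklable arguments, so the protobuf request must be reduced to bytes first; run_on_data below is a hypothetical entry point that re-parses them:

import multiprocessing as mp

import ModelSelectionModule  # module from the question


def select_model(request):
    payload = request.SerializeToString()  # plain bytes are picklable
    ctx = mp.get_context('spawn')  # children start fresh, no inherited fds
    with ctx.Pool(processes=1) as pool:
        # run_on_data is hypothetical: it must deserialize payload itself.
        return pool.apply(ModelSelectionModule.run_on_data, (payload,))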
I met this problem and solved it just now. Did you use the with_call() method?
Error code:
response = stub.SayHello.with_call(request=request, metadata=metadata)
Here the response is a tuple.
Success code: don't use with_call()
response = stub.SayHello(request=request, metadata=metadata)
Here the response is a response object.

unable to run more than one tornado process

I've developed a Tornado app, but when more than one user logs in it seems to log the previous user out. I come from an Apache background, so I assumed Tornado would either spawn a thread or fork a process, but that is not what is happening.
To mitigate this I've installed nginx and configured it as a reverse proxy to forward incoming requests to an available Tornado process. nginx seems to work fine; however, when I try to start more than one Tornado process using a different port, I get the following error:
http_server.listen(options.port)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 125, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 145, in bind_sockets
sock.bind(sockaddr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
Basically I get this for each process I try to start on a different port.
I've read that I should use supervisor to manage my Tornado processes, but I'm thinking that is more of a convenience. At the moment I'm wondering if the problem has to do with my actual Tornado code or my setup somewhere. My Python code looks like this:
from tornado.options import define, options
define("port", default=8000, help="run on given port", type=int)
....
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
My handlers all work fine and I can access the site when I go to localhost:8000. I just need a pair of fresh eyes, please. ;)
Well, I solved the problem. I had a .sh file that tried to start multiple processes with:
python initpumpkin.py --port=8000&
python initpumpkin.py --port=8001&
python initpumpkin.py --port=8002&
python initpumpkin.py --port=8003&
Unfortunately I didn't tell Tornado to parse the command line options, so I would always get that "address already in use" error: port 8000 was defined as my default port, so every process attempted to listen on it. To mitigate this, make sure to call tornado.options.parse_command_line() under main:

if __name__ == "__main__":
    tornado.options.parse_command_line()

then run from the CLI with whatever arguments.
Have you tried starting your server in this manner:
server = tornado.httpserver.HTTPServer(app)
server.bind(port, "0.0.0.0")
server.start(0)
IOLoop.current().start()
server.start takes the number of processes as a parameter, where 0 tells Tornado to use one process per CPU on the machine.
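For reference, a minimal sketch of the equivalent explicit pre-fork pattern from Tornado's docs (bind the sockets once, then fork); MainHandler here is just a stand-in application:

import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.process
import tornado.web

class MainHandler(tornado.web.RequestHandler):  # stand-in handler
    def get(self):
        self.write("hello")

app = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    sockets = tornado.netutil.bind_sockets(8000)  # bind once, before forking
    tornado.process.fork_processes(0)             # 0 = one child per CPU
    server = tornado.httpserver.HTTPServer(app)
    server.add_sockets(sockets)
    tornado.ioloop.IOLoop.current().start()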

Cannot run in multiple processes: IOLoop instance has already been initialized. You cannot call IOLoop.instance() before calling start_processes()

I'm trying to run multiple processes in Tornado and I tried the suggestions made on this thread: run multiple tornado processes
But the error hasn't gone away for me. This is the server file.
server.py
import os
import sys
import tornado
#import pymongo
from tornado import ioloop, web, httpserver, websocket
from tornado.options import options

#Loading default setting files
import settings

#Motorengine - ODM for mongodb
#from motorengine import connect

app = tornado.web.Application(handlers=[
        (r'/', MainHandler),
        (r'/ws', WSHandler),
        (r'/public/(.*)', tornado.web.StaticFileHandler, {'path': options.public_path})],
    template_path=os.path.join(os.path.dirname(__file__), "app/templates"),
    static_path=options.static_path,
    autoreload=True,
    #images=os.path.join(os.path.dirname(__file__), "images"),
    debug=False)

if __name__ == '__main__':
    #read settings from commandline
    options.parse_command_line()

    server = tornado.httpserver.HTTPServer(app, max_buffer_size=1024*1024*201)
    server.bind(options.port)
    # autodetect cpu cores and fork one process per core
    server.start(0)
    #app.listen(options.port, xheaders=True)

    try:
        ioloop = tornado.ioloop.IOLoop.instance()
        #connect("attmlplatform", host="localhost", port=27017, io_loop=ioloop)
        print("Connected to database..")
        ioloop.start()
        print('Server running on http://localhost:{}'.format(options.port))
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
I've commented out the 'connect' import on the suspicion that it may be triggering the IOLoop instance; I'm not connecting to the database at all. This is just trying to get the server up.
This is the entire trace:
File "server.py", line 52, in <module>
server.start(0)
File "/home/vagrant/anaconda3/envs/py34/lib/python3.4/site-packages/tornado/tcpserver.py", line 200, in start
process.fork_processes(num_processes)
File "/home/vagrant/anaconda3/envs/py34/lib/python3.4/site-packages/tornado/process.py", line 126, in fork_processes
raise RuntimeError("Cannot run in multiple processes: IOLoop instance "
RuntimeError: Cannot run in multiple processes: IOLoop instance has already been initialized. You cannot call IOLoop.instance() before calling start_processes()
Any suggestions much appreciated!
Thanks!
autoreload is incompatible with multi-process mode. When autoreload is enabled you must run only one process.
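A minimal sketch of the corresponding fix for the server above: turn autoreload off (and keep debug=False, since debug=True implies autoreload) before forking; the handler is a stand-in:

import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):  # stand-in for the real handlers
    def get(self):
        self.write("ok")

# autoreload must be off for fork-based multi-process mode;
# debug=True would switch it back on implicitly.
app = tornado.web.Application([(r"/", MainHandler)],
                              autoreload=False, debug=False)

if __name__ == "__main__":
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8000)
    server.start(0)  # fork one process per CPU core
    tornado.ioloop.IOLoop.instance().start()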
