unable to run more than one tornado process - python

I've developed a Tornado app, but when more than one user logs in it seems to log the previous user out. I come from an Apache background, so I assumed Tornado would spawn a thread or fork a process per connection, but that doesn't seem to be what is happening.
To work around this I've installed nginx and configured it as a reverse proxy that forwards incoming requests to an available Tornado process. Nginx seems to work fine; however, when I try to start more than one Tornado process on a different port I get the following error:
http_server.listen(options.port)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 125, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 145, in bind_sockets
sock.bind(sockaddr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
Basically I get this for each process I try to start on a different port.
I've read that I should use supervisor to manage my Tornado processes, but I think that is more of a convenience. At the moment I'm wondering whether the problem is in my actual Tornado code or somewhere in my setup. My Python code looks like this:
from tornado.options import define, options
define("port", default=8000, help="run on given port", type=int)
....
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
My handlers all work fine and I can access the site when I go to localhost:8000. I just need a pair of fresh eyes please. ;)

Well, I solved the problem. I had a .sh file that tried to start multiple processes with:
python initpumpkin.py --port=8000&
python initpumpkin.py --port=8001&
python initpumpkin.py --port=8002&
python initpumpkin.py --port=8003&
Unfortunately I didn't tell Tornado to parse the command-line options, so every process fell back to the default port 8000 and tried to listen on it, which is why I always got the Address already in use error. To fix this, make sure to call tornado.options.parse_command_line() inside the main block:
if __name__ == "__main__":
    tornado.options.parse_command_line()
Then run the script from the CLI with whatever arguments you need.
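For reference, here is a minimal sketch of the corrected startup; the handler and application are placeholders I added for illustration, and only the option handling and listen call mirror the code above:

import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.options import define, options

define("port", default=8000, help="run on given port", type=int)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello from port %d" % options.port)

app = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    # Without this call, options.port always keeps its default of 8000.
    tornado.options.parse_command_line()
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()

With this in place, python initpumpkin.py --port=8001 actually binds port 8001 instead of the default.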

Have you tried starting your server in this manner:
server = tornado.httpserver.HTTPServer(app)
server.bind(port, "0.0.0.0")
server.start(0)
IOLoop.current().start()
server.start() takes the number of processes as its argument; passing 0 tells Tornado to fork one worker process per CPU on the machine.
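Putting that together, a minimal self-contained sketch might look like the following; the handler and application are placeholders and the port is hard-coded for brevity:

import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

app = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8000, "0.0.0.0")  # bind the socket once, before forking
    server.start(0)               # fork one worker per CPU; they share the socket
    tornado.ioloop.IOLoop.current().start()

Because all the worker processes share a single listening socket, you don't need one port per process (or an nginx upstream pool) with this approach.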

Related

Run more than one server using python

I am basically trying to run two servers using a simple script. I have used the solutions from here, there and others.
For instance, in the example below I am trying to host two directories on ports 8000 and 8001.
import http.server
import socketserver
import os
import multiprocessing

def run_webserver(path, PORT):
    os.chdir(path)
    Handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(('0.0.0.0', PORT), Handler)
    httpd.serve_forever()
    return

if __name__ == '__main__':
    # Define "services" directories and ports to use to host them
    server_details = [
        ("path/to/directory1", 8000),
        ("path/to/directory2", 8001)
    ]

    # Run servers
    servers = []
    for s in server_details:
        p = multiprocessing.Process(
            target=run_webserver,
            args=(s[0], s[1])
        )
        servers.append(p)

    for server in servers:
        server.start()

    for server in servers:
        server.join()
Once I execute the code above it all works fine and I can access both directories using http://localhost:8000 and http://localhost:8001. However, when I exit the script using Ctrl+C and then try to run the script again I get the following error:
Traceback (most recent call last):
File "/home/user/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/user/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "repos/scripts/webhosts.py", line 12, in run_webserver
File "/home/user/anaconda3/lib/python3.6/socketserver.py", line 453, in __init__
self.server_bind()
File "/home/user/anaconda3/lib/python3.6/socketserver.py", line 467, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
This error only pops up if I actually access the server while it is running; if I don't access it, I can re-run the script. From the error message it looks like something is still holding on to the server, but when I type lsof -n -i4TCP:8000 and lsof -n -i4TCP:8001 I don't get anything. And after a while the error stops appearing and I can run the script again.
Before starting the server, add:
socketserver.TCPServer.allow_reuse_address = True
or change that attribute of the instance, again before calling serve_forever():
httpd.allow_reuse_address = True
Documentation reference:
BaseServer.allow_reuse_address
Whether the server will allow the reuse of an address. This defaults to False, and can be set in subclasses to change the policy.
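Applied to the run_webserver function from the question, a minimal sketch would look like this (the path and port are whatever you pass in, as before):

import http.server
import os
import socketserver

def run_webserver(path, port):
    os.chdir(path)
    handler = http.server.SimpleHTTPRequestHandler
    # Set SO_REUSEADDR before bind() so the port can be reused even if the
    # previous process left connections behind in TIME_WAIT.
    socketserver.TCPServer.allow_reuse_address = True
    httpd = socketserver.TCPServer(('0.0.0.0', port), handler)
    httpd.serve_forever()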
Expanding on my comment:
In a previous edit the OP registered an exit handler with atexit.register(exit_handler). The question is: does it actually clean up your system's resources (i.e. the open sockets)?
If your program exits without closing the sockets (because you interrupted it with Ctrl+C), it may take some time for the OS to clean them up, because they sit in the TIME_WAIT state; you can read about TIME_WAIT and how to avoid it here.
Using exit handlers is a good way to avoid this:
import atexit

def clean_sockets():
    # mysocket is the TCPServer instance; server_close() closes its listening socket.
    mysocket.server_close()

atexit.register(clean_sockets)

Running a Python web server twice on the same port on Windows: no "Port already in use" message

I'm on Windows 7. When I start a Bottle web server with:
run('0.0.0.0', port=80)
and then run the same Python script once again, it doesn't fail with a Port already in use error (which would be the normal behaviour) but instead successfully starts a second instance!
Question: How to stop this behaviour, in a simple way?
This is related to Multiple processes listening on the same port?, but how can you prevent this in a Python context?
This is a Windows specific behavior that requires the use of the SO_EXCLUSIVEADDRUSE option before binding a network socket.
From the Using SO_REUSEADDR and SO_EXCLUSIVEADDRUSE article in the Windows Socket 2 documentation:
Before the SO_EXCLUSIVEADDRUSE socket option was introduced, there was
very little a network application developer could do to prevent a
malicious program from binding to the port on which the network
application had its own sockets bound. In order to address this
security issue, Windows Sockets introduced the SO_EXCLUSIVEADDRUSE
socket option, which became available on Windows NT 4.0 with Service
Pack 4 (SP4) and later.
...
The SO_EXCLUSIVEADDRUSE option is set by calling the setsockopt
function with the optname parameter set to SO_EXCLUSIVEADDRUSE and the
optval parameter set to a boolean value of TRUE before the socket is
bound.
In order to do this using the Bottle module, you have to create a custom backend facilitating access to the socket before it's bound. This gives an opportunity to set the required socket option as documented.
This is briefly described in the Bottle Deployment documentation:
If there is no adapter for your favorite server or if you need more
control over the server setup, you may want to start the server
manually.
Here's a modified version of the Bottle Hello World example that demonstrates this:
import socket
from wsgiref.simple_server import WSGIServer
from bottle import route, run, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

class CustomSocketServer(WSGIServer):
    def server_bind(self):
        # This tests if the socket option exists (i.e. only on Windows), then
        # sets it.
        if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'):
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        # Everything below this point is a concatenation of the server_bind
        # implementations pulled from each class in the class hierarchy.
        # wsgiref.WSGIServer -> http.HTTPServer -> socketserver.TCPServer
        elif self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        host, port = self.server_address[:2]
        self.server_name = socket.getfqdn(host)
        self.server_port = port
        self.setup_environ()

print "Serving..."
run(host='localhost', port=8080, server_class=CustomSocketServer)
Note that the copied code is required to maintain the behavior expected by the superclasses.
All of the superclass implementations of server_bind() start by calling their parent class's server_bind(), which means calling any of them would bind the socket immediately and remove the opportunity to set the required socket option first.
I tested this on Windows 10 using Python 2.7.
First instance:
PS C:\Users\chuckx\bottle-test> C:\Python27\python.exe test.py
Serving...
Second instance:
PS C:\Users\chuckx\bottle-test> C:\Python27\python.exe test.py
Traceback (most recent call last):
File "test.py", line 32, in <module>
server_class=CustomSocketServer)
File "C:\Python27\lib\wsgiref\simple_server.py", line 151, in make_server
server = server_class((host, port), handler_class)
File "C:\Python27\lib\SocketServer.py", line 417, in __init__
self.server_bind()
File "test.py", line 19, in server_bind
self.socket.bind(self.server_address)
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
An alternative solution is to use @LuisMuñoz's comment: check whether the port is already open before binding it again:
# Bottle web server code here
# ...
import socket
sock = socket.socket()
sock.settimeout(0.2) # this prevents a 2 second lag when starting the server
if sock.connect_ex(('127.0.0.1', 80)) == 0:
print "Sorry, port already in use."
exit()
run(host='0.0.0.0', port=80)

pjsua.error, error = address already in use

I am trying to make calls using the PJSIP module in Python. For the SIP transport setup I do this:
trans_cfg = pj.TransportConfig()
# port for VoIP communication
trans_cfg.port = 5060
# local system address
trans_cfg.bound_addr = inputs.client_addr
transport = lib.create_transport(pj.TransportType.UDP,trans_cfg)
When I finish the call I clear the transport setup with transport = None.
I am able to make calls to a user by running my program, but every time I restart my PC I get an error when I run my Python program:
File "pjsuatrail_all.py", line 225, in <module>
main()
File "pjsuatrail_all.py", line 169, in main
transport = transport_setup()
File "pjsuatrail_all.py", line 54, in transport_setup
transport = lib.create_transport(pj.TransportType.UDP,trans_cfg)
File "/usr/local/lib/python2.7/dist-packages/pjsua.py", line 2304, in create_transport
self._err_check("create_transport()", self, err)
File "/usr/local/lib/python2.7/dist-packages/pjsua.py", line 2723, in _err_check
raise Error(op_name, obj, err_code, err_msg)
pjsua.Error: Object: Lib, operation=create_transport(), error=Address already in use
Exception AttributeError: "'NoneType' object has no attribute 'destroy'" in <bound method Lib.__del__ of <pjsua.Lib instance at 0x7f8a4bbb6170>> ignored
Currently I work around this like so:
$sudo lsof -t -i:5060
>> 1137
$sudo kill 1137
Then when I run my code it works fine.
From the error I can tell that somewhere I am not closing my transport properly. Can anyone help in this regard?
Reference code used
From the inputs you give, it can be understood that this is not a problem with the pjsip wrapper; the transport configuration looks fine.
Looking into the create_transport error, the program cannot create the transport because port 5060 is already occupied by some other program.
That is why killing that process lets you run the program without any error. And since you say it happens only after a restart, some program that starts on boot is occupying the port.
You can check like this:
sudo netstat -nlp|grep 5060
In your case it will show something like:
1137/ProgramName
Go to 'ProgramName' in your startup configuration and change it so that it won't pick up the port.
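Separately, to make sure your own program releases the port cleanly, shut the library down explicitly instead of just setting the transport to None. Here is a minimal sketch, assuming the standard pjsua Python wrapper; the account and call-handling code is omitted:

import pjsua as pj

lib = pj.Lib()
try:
    lib.init()
    trans_cfg = pj.TransportConfig()
    trans_cfg.port = 5060
    transport = lib.create_transport(pj.TransportType.UDP, trans_cfg)
    lib.start()

    # ... account registration and call handling would go here ...

finally:
    # destroy() shuts down the library and closes its transports,
    # releasing UDP port 5060 for the next run.
    lib.destroy()
    lib = None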

Address already in use when importing file in python

I have a Python server to which I can do POST requests. This is the script:
from bottle import Bottle, run, template, get, post, request

app = Bottle()

@app.route('/rotation', method='POST')
def set_rotation():
    rotation = request.forms.get('rotation')
    return rotation

run(app, host='localhost', port=8080)
So in the POST request I send the rotation value and read it in this script. I need the rotation value in another script, so there I do this:
from mybottle import set_rotation
print set_rotation
When I run the first script and then the second script, I get this error
socket.error: [Errno 98] Address already in use
I'm quite new to Python, so I don't have a clue what I'm doing wrong.
If you want to be able to import the file without starting the server, guard the run call with if __name__ == "__main__":
if __name__ == "__main__":
    run(app, host='localhost', port=8080)
Each time you import from the file, run(app, host='localhost', port=8080) is executed. Using if __name__ == "__main__" starts the server only when you execute the file itself, so you avoid the socket.error: [Errno 98] you get when trying to start the server while it is already running.
You should also verify that no other program is using port 8080, or simply change the port to another value.
I think you are running the server twice. The error comes from the second server, which can't bind to port 8080 because the first one is already using it.
Your code, as given, will start a server when imported. This is probably not what you want.
You can avoid this behavior by testing the name of your module, which is __main__ only when it is the script being executed directly:
if __name__ == '__main__':
    run(app, host='localhost', port=8080)
Then, when imported, no server is run.

cherrypy not closing the sockets

I am using CherryPy as a web server. It gives good performance for my application, but there is a very big problem with it: CherryPy crashes after a couple of hours, stating that it could not create a socket because there are too many open files:
[21/Oct/2008:12:44:25] ENGINE HTTP Server
cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down
[21/Oct/2008:12:44:25] ENGINE Stopped thread '_TimeoutMonitor'.
[21/Oct/2008:12:44:25] ENGINE Stopped thread 'Autoreloader'.
[21/Oct/2008:12:44:25] ENGINE Bus STOPPED
[21/Oct/2008:12:44:25] ENGINE Bus EXITING
[21/Oct/2008:12:44:25] ENGINE Bus EXITED
Exception in thread HTTPServer Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.3/threading.py", line 436, in __bootstrap
self.run()
File "/usr/lib/python2.3/threading.py", line 416, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.3/site-packages/cherrypy/process/servers.py", line 73, in _start_http_thread
self.httpserver.start()
File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1388, in start
self.tick()
File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1417, in tick
s, addr = self.socket.accept()
File "/usr/lib/python2.3/socket.py", line 167, in accept
sock, addr = self._sock.accept()
error: (24, 'Too many open files')
[21/Oct/2008:12:44:25] ENGINE Waiting for child threads to terminate..
I tried to figure out what was happening. My application does not open any files or sockets itself; it only opens a couple of Berkeley DBs. I investigated further and looked at the file descriptors used by my CherryPy process (PID 4536) in /proc/4536/fd/.
Initially new sockets were created and cleaned up properly, but after an hour I found about 509 sockets that had not been cleaned up, all in the CLOSE_WAIT state. I got this information using the following command:
netstat -ap | grep "4536" | grep CLOSE_WAIT | wc -l
The CLOSE_WAIT state means that the remote client has closed the connection. Why is CherryPy then not closing the socket and freeing the file descriptors? What can I do to resolve the problem?
I tried to play with the following:
cherrypy.config.update({'server.socketQueueSize': '10'})
I thought this would restrict the number of sockets open at any time to 10, but it was not effective at all. This is the only config value I have set, so the rest of the configs hold their default values.
Could somebody shed some light on this? Do you think it's a bug in CherryPy? How can I resolve it? Is there a way I can close these sockets myself?
Here is my system info:
CherryPy-3.1.0
python 2.3.4
Red Hat Enterprise Linux ES release 4 (Nahant Update 7)
Thanks in advance!
I imagine you're storing (in-memory) some piece of data which has a reference to the socket; if you store the request objects anywhere, for instance, that would likely do it.
The last-ditch chance for sockets to be closed is when they're garbage-collected; if you're doing anything that would prevent garbage collection from reaching them, there's your problem. I suggest that you try to reproduce with a Hello World program written in CherryPy; if you can't reproduce there, you know it's in your code -- look for places where you're persisting information which could (directly or otherwise) reference the socket.
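For reference, a bare-bones CherryPy Hello World to test against could look like the sketch below; it assumes a standard CherryPy 3.x install, and the host/port are taken from the log output above:

import cherrypy

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello World!"

if __name__ == '__main__':
    # Listen on all interfaces on port 8080, matching the log above.
    cherrypy.config.update({'server.socket_host': '0.0.0.0',
                            'server.socket_port': 8080})
    cherrypy.quickstart(HelloWorld())

If this minimal server runs for hours under the same traffic without accumulating CLOSE_WAIT sockets, the leak is almost certainly something in the application code keeping references to request or socket objects alive.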
