Pyro4 Remote connection blocked - python

I am using Pyro4 to make a remote connection between a Raspberry Pi and a computer. I've tested the code locally on my computer, but now I want to use it on the Raspberry Pi. The only problem is that the target machine refuses the connection. The name server is set up, I can ask for the metadata, and the client is not giving any error.
Server code:
daemon = Pyro4.core.Daemon("192.168.0.199")
Pyro4.config.HOST = "192.168.0.199"
ns = Pyro4.locateNS()
print ns.lookup("client", return_metadata=True) #this works
callback = MainController()
daemon.register(callback)
vc2 = Pyro4.core.Proxy("PYRONAME:client#192.168.0.199:12345")
Client code:
ns = Pyro4.locateNS()
Pyro4.config.HOST = "192.168.0.199"
uri = daemon.register(VehicleController)
ns.register("client#192.168.0.199:12345", uri)
print "Connection set!"
daemon.requestLoop()
Firewall is also off.
Thanks

The main issue is that the server never runs the daemon request loop and so cannot respond to requests.
But there are a lot of issues with the code as shown:
it is not complete.
you're mixing up server and client responsibilities; why is the client running a daemon? That's the server's job.
you're registering an object with a logical name that appears to be a physical one. That's not how the name server works.
you're registering things in both the client and server.
the server never runs the request loop of the daemon it creates.
what is that 'vc2' proxy doing in the server? Clients are supposed to create proxies to server objects.
it's generally best to set Pyro's config variables before doing anything else; that way you don't have to repeat the IP address the daemon binds on.
All in all you seem to be confused about various core concepts of Pyro.
Getting a better understanding (have you worked through the tutorial chapter of the manual?) and fixing the code accordingly will likely resolve your issue.
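For reference, a minimal sketch of the usual split (the logical name example.vehicle and the VehicleController methods are placeholders, and it assumes a Pyro4 name server is already running on the network):
# server.py -- runs on the machine that owns the remote object (the Pi)
import Pyro4

Pyro4.config.HOST = "192.168.0.199"       # set config before creating the daemon

@Pyro4.expose
class VehicleController(object):          # placeholder implementation
    def forward(self):
        print("forward called")

daemon = Pyro4.Daemon()                   # binds on Pyro4.config.HOST
uri = daemon.register(VehicleController)  # register the object with the daemon
ns = Pyro4.locateNS()
ns.register("example.vehicle", uri)       # a logical name, not host:port
print("Server ready.")
daemon.requestLoop()                      # the server must run this loop

# client.py -- runs on the other computer
import Pyro4

vc = Pyro4.Proxy("PYRONAME:example.vehicle")  # the name server resolves this
vc.forward()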

Okay, got some more info
I can connect when I edit my Pyro4 core URI from obj_x@0.0.0.0:x to obj_x@192.168.0.199:x and connect manually. So I guess there is something wrong with the way I register the address with the name server.
I'll keep you posted,
Tom

Strange behaviour in Python SocketServer

I have created a Python socket server, using a class inherited from SocketServer.BaseRequestHandler and overriding the setup and handle methods. Of course, SocketServer.BaseRequestHandler.setup is called at the end of my own setup.
This is my server class
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
A typical forking socket server.
Here is how I run my server
while True:
    try:
        server = MyServer((host, port), MyRequestHandler)
        print('Server listening on', (host, port))
        server.timeout = 300  # seconds
        server.serve_forever()
    except:
        print('Error with server, retrying in 5 seconds...')
        print(sys.exc_info())
        sleep(5)
host and port are predefined; no problem with them.
The server works fine, except when the client count reaches 40. After this number, no new connections are accepted; all are refused. I checked this with a test client script in Python from my own system. Only 40!
Why 40? I have checked the source code for SocketServer and found nothing related to this. I currently have no clue regarding this issue. Any, and I really mean it, any help is appreciated :))
Thanks in advance
OS: CentOS 6.5
This is probably unrelated to Python. Tune your Linux kernel; in the testing phase do stuff like:
turn syncookies off
increase the file handles available for the user (every socket opened also uses a file handle - maybe you're running out of them?)
look at stuff like this: http://people.redhat.com/alikins/system_tuning.html#tcp
and: http://people.redhat.com/alikins/system_tuning.html#fds
check if stuff like fail2ban is installed (http://www.fail2ban.org/wiki/index.php/Main_Page)
check if rate limits are applied by iptables (in testing phase you could do iptables -F after making sure that default chain policy is ACCEPT)
and last but not least, check dmesg, /var/log/messages, /var/log/syslog, etc.
One thing that theoretically might be related to Python is SO_REUSEADDR:
http://www.unixguide.net/network/socketfaq/4.5.shtml
Check if you have it set for your socket.
UPDATE:
I just realized that 40 connections is a pretty low ceiling for a socket server to max out at, so the simplest option could be running your socket server through strace; just use the -f flag to track forked processes as well. You could e.g. start the socket server, open 35 simultaneous connections, then attach strace to the running process, set up 5 more connections, and see what strace reports. Very often in such situations syscalls fail with errors that are visible in strace and allow pinpointing the root cause relatively easily.
I really have no idea how I missed this in the source!
class ForkingMixIn:
    """Mix-in class to handle each request in a new process."""
    timeout = 300
    active_children = None
    max_children = 40
Yeah, now I see the max_children property.
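If that cap is the limit you are hitting, overriding it on your server class should lift it; a minimal sketch (200 is an arbitrary example value):
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 300
    max_children = 200  # raise the forking cap from the default of 40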
Thanks guys

Change redirect port of twisted proxy

I have a simple proxy server made using Twisted:
destination = portforward.ProxyFactory(dest_host, dest_port)
reactor.listenTCP(listen_port, destination)
reactor.run()
I would like to change the dest_port under certain conditions without having to restart the server.
I tried:
new_dest = portforward.ProxyFactory(dest_host, new_dest_port)
reactor.listenTCP(listen_port, new_dest)
Of course, this produced an "address already in use" exception.
Is it possible to change the proxy destination during operation?
reactor.listenTCP returns an object which provides IListeningPort, which has a stopListening method that stops the server on that port (note that it returns a Deferred, and the server isn't actually stopped until the Deferred fires).
You can use this stopListening method before your second listenTCP call to free up the server port for use by the new, reconfigured server.
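A minimal sketch of that sequence (dest_host, dest_port, and listen_port as in the question; switch_destination is a hypothetical helper):
from twisted.internet import reactor
from twisted.protocols import portforward

destination = portforward.ProxyFactory(dest_host, dest_port)
port = reactor.listenTCP(listen_port, destination)  # keep the IListeningPort

def switch_destination(new_dest_port):
    d = port.stopListening()  # Deferred; the port is only free once it fires
    def rebind(ignored):
        global port
        port = reactor.listenTCP(listen_port,
                                 portforward.ProxyFactory(dest_host, new_dest_port))
    d.addCallback(rebind)

reactor.run()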

Python. Need to be sure the connection is made from the local machine?

Imagine you have an HTTP server on your local machine; this is a typical Python/Twisted application. The server is used to access your local data, acting just as a GUI interface. So the user can use his web browser, or a special application (which acts like a web browser), to access his local data.
Now you want to be sure that only the local user who physically sits at this machine gets access to the HTTP server.
I will also have an FTP server, and it must be protected the same way.
At the moment I am running this code for my HTTP server:
class LocalSite(server.Site):
    def buildProtocol(self, addr):
        if addr.host != '127.0.0.1':
            print 'WARNING connection from ' + str(addr)
            return None
        try:
            res = server.Site.buildProtocol(self, addr)
        except:
            res = None
        return res
So I just check the IP address at the moment, and I am not sure this is enough.
Is there any way to spoof a local IP from a remote machine?
Well, if a bad guy gets control over my OS, I have no way to protect myself - but that is not my concern here. My firewall and antivirus should take care of that, right?
Anyway, I would like to hear any extra ideas about increasing the security of such an HTTP server.
Maybe we can use the MAC address to verify the connection?
Check the processes on the local machine and detect which one actually makes the connection?
We could use HTTPS, but in my understanding that works in the opposite direction: it lets the user trust the server, not the server trust the user.
Using a CAPTCHA is a kind of solution, but I do not like it at all (it strains users) and it will not work for the FTP server.
I also use a random port number every time the application starts.
The type of internet connection is not defined - this is a p2p application. Any user on the web can use my software, and it must be protected against remote access.
I believe the way you handled it is good enough. As for being cross-platform, I believe it is: Windows (starting from Windows 7) also maps localhost to 127.0.0.1, but on previous versions you have to define localhost in the hosts file.
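Beyond checking addr.host, a complementary measure (standard Twisted usage, not from the original answer) is to bind the listener to the loopback interface only, so the port is never reachable from other machines; a sketch, assuming LocalSite and a root resource as in the question:
from twisted.internet import reactor

site = LocalSite(root)                                # root: your resource tree
reactor.listenTCP(8888, site, interface='127.0.0.1')  # 8888: arbitrary example port
reactor.run()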

How can I setup an Autobahn Pub/Sub Server and a Autobahn Webserver listening on the same port

I recently discovered Autobahn (Python and JS) as a comfortable way to establish a pub/sub server and a corresponding client, even with RPC calls.
After looking through the tutorials, I set up a test version with a WebSocket server and a webserver running on the same port. The server periodically sends data to the client via WebSockets. The HTML the user gets lives at the localhost root. All that works fine.
However, what I want to accomplish is: set up a pub/sub server and a webserver listening on the same port.
The tutorials only show how to set these up on two different ports (as shown at http://autobahn.ws/python/tutorials/pubsub).
I'm very new to Python in general, and to Autobahn and Twisted especially.
Any advice would be really nice!
Thanks very much!
Marc
Sure. You can run a WAMP/WebSocket server and a plain old Web server on one port using Autobahn. Here is an example for pure WebSocket and here is one for WAMP.
Disclaimer: I am the author of Autobahn and work for Tavendo.
When using WAMP while having HTTP and WS servers listening on the same port, you will need to start your instance of WampServerFactory manually, as explained here.
factory = WampServerFactory("ws://localhost:8080")
factory.protocol = YourServerProtocolClass
factory.startFactory() # <--- need to call this manually
resource = WebSocketResource(factory)
root = File(".")
root.putChild("ws", resource)
For more details please see this complete example.
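To complete the picture, the resource tree still has to be wrapped in a Site and bound to the shared port; a minimal sketch under the same assumptions (root and factory as above, port 8080 as in the factory URL):
from twisted.internet import reactor
from twisted.web.server import Site

site = Site(root)              # root serves files at / and WAMP at /ws
reactor.listenTCP(8080, site)  # one port carries both HTTP and WebSocket
reactor.run()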
I would put nginx as a frontend that forwards each call either to the pub/sub server or to the web server... Recent nginx versions support WebSocket forwarding.
Or you may write something similar with Twisted :)
Another alternative would be to adapt autobahn.websocket.WebSocketServerProtocol and its subclass autobahn.wamp.WampServerProtocol to Twisted.web. It should be possible.

104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN?

We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
(104, 'Connection reset by peer')
When I listen in with wireshark, the "good" and "bad" responses look very similar:
Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box with glibc 2.6.1. We're using Python 2.5.2 inside the same virtualenv.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request: in the socket._socketobject.close() function, delegate methods are turned into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
** Further debugging - Looks like server on Linux **
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...
** Further testing - wsgiref looks flaky **
We've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This lets your server run OK in development mode, and solidly in production.
I've had this problem. See The Python "Connection Reset By Peer" Problem.
You have (most likely) run afoul of small timing issues based on the Python Global Interpreter Lock.
You can (sometimes) correct this with a time.sleep(0.01) placed strategically.
"Where?" you ask. Beats me. The idea is to provide some better thread concurrency in and around the client requests. Try putting it just before you make the request so that the GIL is reset and the Python interpreter can clear out any pending threads.
Don't use wsgiref for production. Use Apache and mod_wsgi, or something else.
We continue to see these connection resets, sometimes frequently, with wsgiref (the backend used by the Werkzeug test server, and possibly others like the Django test server). Our solution was to log the error, retry the call in a loop, and give up after ten failures. httplib2 tries twice, but we needed a few more. They seem to come in bunches as well - adding a one-second sleep might clear the issue.
We've never seen a connection reset when running through Apache and mod_wsgi. I don't know what they do differently, (maybe they just mask them), but they don't appear.
When we asked the local dev community for help, someone confirmed that they see a lot of connection resets with wsgiref that go away on the production server. There's a bug there, but it is going to be hard to find it.
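A sketch of the log-and-retry workaround described above (request_with_retry is a hypothetical helper; the error number, attempt count, and back-off follow the text):
import socket
import time
import httplib2

def request_with_retry(url, max_attempts=10):
    """Retry a GET on 'Connection reset by peer', as described above."""
    http = httplib2.Http()
    for attempt in range(max_attempts):
        try:
            return http.request(url)  # returns (response, content)
        except socket.error, e:
            if e.args[0] != 104:      # 104 = ECONNRESET; re-raise anything else
                raise
            time.sleep(1)             # resets come in bunches; back off briefly
    raise socket.error(104, 'Connection reset by peer (retries exhausted)')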
Normally, you'd get an RST if you do a close which doesn't linger (i.e. in which data can be discarded by the stack if it hasn't been sent and ACK'd), and a normal FIN if you allow the close to linger (i.e. the close waits for the data in transit to be ACK'd).
Perhaps all you need to do is set your socket to linger, so that you remove the race condition between a non-lingering close done on the socket and the ACKs arriving?
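For illustration, lingering is controlled with the SO_LINGER socket option; a minimal sketch of turning it on for a plain socket (the 10-second interval is an arbitrary example):
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# struct linger { int l_onoff; int l_linger; }: enable lingering, wait up to 10 s
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 10))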
I had the same issue, though with uploading a very large file from a python-requests client posting to an nginx + uwsgi backend.
What ended up being the cause was that the backend had a cap on the max file size for uploads, lower than what the client was trying to send.
The error never showed up in our uwsgi logs, since this limit was actually one imposed by nginx.
Upping the limit in nginx removed the error.
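For reference, the nginx limit in question is typically the client_max_body_size directive (which defaults to a small value); raising it in the http or server block looks like:
client_max_body_size 50m;    # allow uploads up to 50 MB (example value)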
