Python Remote Procedure Call (Without the Remote Part)

I have a Python server which is not running as root, and which fronts an application I am developing. However, there are some application features which require access to raw sockets, which means root privileges.
Obviously I do not want to run the main server as root, so my solution is to create a daemon process or command-line script which runs as root and provides guarded access to said features.
However, I want to put aside stdin/stdout communication and use an RPC style of interaction such as Pyro. But this exposes the RPC interface to anyone with network access to the machine, whereas I know that the process calling the RPC methods will be another process on the same machine.
Is there not a sort of inter-process procedure call standard which could be used in a similar (local machine only) fashion? I imagine the server doing something like this:
# Server not running as root
pythonically, returned, values = other_process_running_as_root.some_method()
And the process running as root exposing a method:
# Daemon running as root
@expose_this_method
def some_method():
    # Play with RAW sockets
    return pythonically, returned, values
Is anything like this possible?

Following my comment, I was interested to see if it was possible, so I had a go at putting this together: https://github.com/takowl/ZeroRPC
Bear in mind that this is thrown together in an hour or so, so it's almost certainly inferior to any serious solution (e.g. any errors on the server side will crash it...). But it works as you suggested:
Server:
import zerorpc

rpcserver = zerorpc.Server("ipc://myrpc.ipc")

@rpcserver.expose
def product(a, b):
    return a * b

rpcserver.run()
Client:
import zerorpc

rpcclient = zerorpc.Client("ipc://myrpc.ipc")
print(rpcclient.product(5, 7))
rpcclient._stopserver()

This is an easy problem. You should be able to get what you want from any RPC mechanism that can use Unix sockets, or use regular TCP sockets but only accept connections from the loopback interface (listen on 127.0.0.1).
The multiprocessing library in the Python standard library supports local IPC, too. http://docs.python.org/library/multiprocessing.html#module-multiprocessing.connection
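For instance, a minimal sketch of that approach with multiprocessing.connection (the socket path, authkey and message format here are invented for illustration) could look like this on the root-owned daemon side:

from multiprocessing.connection import Listener

# Root-owned daemon listening on a Unix socket; the authkey guards access
with Listener('/tmp/rawsock-daemon.sock', authkey=b'change-me') as listener:
    with listener.accept() as conn:
        method, args = conn.recv()                       # e.g. ('some_method', ())
        conn.send(('pythonically', 'returned', 'values'))

and on the non-root server side:

from multiprocessing.connection import Client

with Client('/tmp/rawsock-daemon.sock', authkey=b'change-me') as conn:
    conn.send(('some_method', ()))
    print(conn.recv())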

Pyro has a number of security features specifically to limit the access to the RPC interface. Are these too much of a performance burden to use?

Related

Local machine interprocess communication with multiple independent processes (1 server, n clients)

I would like to have a server process (preferably Python) that accepts simple messages and multiple clients (again, preferably Python) that connect to the server and send messages to it. The server and clients will only ever be running on the same local machine and the OS is Linux based. The server will be automatically started by the OS and the clients started later independent of the server. I strongly want to avoid installing a whole separate messaging framework/server to do this. The messages will be simple strings such as "kick" or even just a single byte representing the message type. It also needs to know when a connection is made and lost.
From these requirements, I think named pipes would be a feasible solution, with a new instance of that pipe created for each client connection. However, when I search for examples, all of the ones I have come across deal with processes that are spawned from the same parent process and not independently started which means they can pass a parent reference to the child.
Windows seems to allow multiple instances of a named pipe (one for each client connection), but I'm unsure whether this is possible on a Linux-based OS.
Please could someone point me in the right direction, preferably with a basic example, even if it's just pseudo-code.
I've looked at the multiprocessing module in Python, but this seems to be oriented around the server and client sharing the same process or having one spawn the other.
Edit
This may be important: the host device is not guaranteed to have networking capabilities (it's an embedded device).
I've used zeromq for this sort of thing before. It's a relatively lightweight library that exposes this sort of functionality.
Otherwise, you could implement it yourself by binding a socket in the server process and having clients connect to it. This works fine for Unix domain sockets; just pass AF_UNIX when creating the socket, e.g.:
import socket

with socket.socket(socket.AF_UNIX) as s:
    s.bind('/tmp/srv')
    s.listen(1)
    (c, addr) = s.accept()
    with c:
        c.send(b"hello world")
for the server, and:
with socket.socket(socket.AF_UNIX) as c:
    c.connect('/tmp/srv')
    print(c.recv(8192))
for the client.
Writing a protocol around this is more involved, which is where something like zmq really helps: it makes it easy to push JSON messages around.
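For instance, a rough sketch of that with pyzmq over an ipc:// endpoint (the socket path and message contents are invented for illustration) might look like this on the server side:

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("ipc:///tmp/events.ipc")   # Unix-domain transport, no networking required
while True:
    msg = sock.recv_json()           # e.g. {"type": "kick"}
    sock.send_json({"ok": True})     # acknowledge the message

and on the client side:

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("ipc:///tmp/events.ipc")
sock.send_json({"type": "kick"})
print(sock.recv_json())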

Twisted distribute callRemote across servers

I'm running a client which makes remote calls to my server, both written with twisted. The methods running on the server side can take quite a long time to return, and they're mostly eating CPU in Python code, so threading won't be of any help here.
I've tried a lot of stuff but eventually I think I'm going to run several instances of twisted servers to distribute the tasks.
So I'm telling my servers to listen on several sockets (let's say I create them using serverFromString on socket_1 for server 1 and on socket_2 for server 2), and I'm connecting my client on these sockets with 2 calls to connectUNIX with socket_1 and socket_2 as arguments.
So far I managed to create the servers listening on the ports I want them to, but I'm not sure how to tell my client to distribute callRemote across the sockets. When I make several callRemote calls it seems that only one server is actually being used. How do I do that?
P.S.: I tried using multiprocessing, but my methods on the server side are full of unpicklable objects, so no chance; also the API of spawnProcess is utterly non-compliant with the code I'm calling. Also I'm not willing to use an undocumented, unmaintained project, so Ampoule is not an option here.
Edit: No answer, so I guess the question wasn't clear enough. Basically it all boils down to: can I pass a 'port' or 'socket' argument to callRemote so I can manage which server I'm running the remote calls on?
Basically I have a django/uwsgi client in which I call reactor.connectUNIX(my_port). I redirect all the calls made in my python code to a threads.blockingCallFromThread(reactor, callRemote, args). The remote calls go to my_port, though I don't really know where/how the argument is passed. On the server side the application is launched with twisted.scripts._twistd_unix.UnixApplicationRunner, and the server listens on my_port
I'd like to start several servers on different addresses and have my client dispatch the remote calls among the servers. I don't know if I'm clear yet, I would gladly add more precisions.
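One possible approach (an untested sketch, not taken from an answer; the socket paths and method name are placeholders) is to connect one PBClientFactory per server, collect the root objects, and round-robin callRemote over them:

from itertools import cycle

from twisted.internet import defer, reactor
from twisted.spread import pb

def connect(path):
    factory = pb.PBClientFactory()
    reactor.connectUNIX(path, factory)
    return factory.getRootObject()

@defer.inlineCallbacks
def main():
    # Placeholder socket paths for the two server instances
    roots = yield defer.gatherResults([connect('socket_1'), connect('socket_2')])
    pool = cycle(roots)
    # Each remote call goes to the next server in turn
    results = yield defer.gatherResults(
        [next(pool).callRemote('long_task', i) for i in range(4)])
    print(results)
    reactor.stop()

main()
reactor.run()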

Twisted Reactor for client-side python interface / raw_input

I am using twisted to run a rather complicated server that allows for data collection, communication, and commanding of a hardware device remotely. On the client side there are a number of data retrieval and command operations available. Typically I use the wxpython reactor to interface with the client reactor, but I would also like to set up a simpler command-line style interface.
Is there a reactor that I can use to set up a local non-blocking python-like or raw_input-style interface for the client? After successful access to the server, the server will occasionally send data down without being requested, as a result of server-side events.
I have considered manhole, but I am not interested in accessing the server as an interface, I am strictly interested in accessing the client-side data and commands. This is mostly for debugging, but it can also come in handy for creating a much more rudimentary client interface when needed.
See the stdin.py and stdiodemo.py examples; I think they're similar to what you're aiming for. They demonstrate connecting a protocol (like a LineReceiver) to StandardIO.
I think you could also use a StandardIOEndpoint (and maybe we should update the examples for that), but that doesn't change the way you'd write your protocol.
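A minimal sketch of that pattern, in the spirit of those examples (the prompt and command handling are invented for illustration):

from twisted.internet import reactor, stdio
from twisted.protocols import basic

class CommandLine(basic.LineReceiver):
    delimiter = b'\n'   # stdin/stdout use plain newlines rather than \r\n

    def connectionMade(self):
        self.transport.write(b'> ')

    def lineReceived(self, line):
        # Dispatch the typed command to your client-side API here
        self.sendLine(b'got: ' + line)
        self.transport.write(b'> ')

stdio.StandardIO(CommandLine())
reactor.run()

Because this runs in the same reactor as the client connection, data pushed down by the server can be printed from the protocol without blocking the prompt.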

Can Perspective Broker be used over stdio instead of TCP?

I'm using Twisted's Perspective Broker for RMI between a process and subprocess.
Rather than listen on a TCP socket (such as by passing reactor.listenTCP() an instance of PBServerFactory) and have the subprocess connect to it, I'd prefer to use the subprocess's stdin and stdout.
I've found twisted.internet.stdio.StandardIO, but if that's the way to go, I'm not sure how to set everything up.
Is it feasible to use PB over stdio instead of TCP? How?
Wait, why?
The subprocess is for running untrusted code. It's sandboxed, but needs to be able to communicate back with the parent process in limited ways. Some form of RMI is by far the cleanest option for the specific use case, and PB has an access model that looks right. But the sandboxed process doesn't have -- and shouldn't need -- network access. That RMI is its only communication with the outside world, and piping it through stdin/stdout seems like a clean way of doing business.
But if I'm not going about this the right way, that's a perfectly valid answer too.
Using a protocol like PB between a parent and child process over a stdio-like connection has two pieces. One piece is in the child process, using file descriptors 0 and 1 to communicate with the parent. The other piece is the parent process, using whatever file descriptors correspond to the child's 0 and 1.
StandardIO is the first piece. You still need the second piece - that's IReactorProcess.spawnProcess.
However, the newer endpoints APIs are a better way to access this functionality.
The basics of endpoints are that a client endpoint lets you connect to a server without caring exactly how that connection is established and a server endpoint lets you accept connections from clients without caring exactly how those clients are connecting.
There is a child process client endpoint and a stdio server endpoint. This means you can write your client something like:
factory = PBClientFactory(...)
d = factory.getRootObject()
...
clientEndpoint.connect(factory)
And your server something like:
factory = PBServerFactory(...)
...
serverEndpoint.listen(factory)
And you now have a client and server that will talk to each other, but you haven't actually specified how they talk to each other yet. Perhaps it's TCP or perhaps it's stdio.
Then all you need is to pick the right endpoints to use. To stick with your idea of communicating over stdio:
clientEndpoint = ProcessEndpoint(reactor, "/path/to/child", ("argv",), ...)
serverEndpoint = StandardIOEndpoint(reactor)
If you change your mind later, then switching to - say - TCP is as easy as:
clientEndpoint = TCP4ClientEndpoint(reactor, "1.2.3.4", 12345)
serverEndpoint = TCP4ServerEndpoint(reactor, 12345)
Or you can use the plugin mechanism for string descriptions of endpoints to turn this into configuration instead:
clientEndpoint = clientFromString(reactor, options["client-endpoint"])
serverEndpoint = serverFromString(reactor, options["server-endpoint"])
Where options["client-endpoint"] and options["server-endpoint"] are strings like "tcp:host=1.2.3.4:port=12345" and "tcp:port=12345".
For more, see the complete endpoints howto.
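To make that concrete, here is a rough, untested sketch of the child side serving PB over stdin/stdout (the remote method name is made up; the parent would connect a PBClientFactory through ProcessEndpoint as sketched above):

# child.py - runs in the sandboxed subprocess
from twisted.internet import reactor
from twisted.internet.endpoints import StandardIOEndpoint
from twisted.spread import pb

class SandboxRoot(pb.Root):
    def remote_echo(self, value):   # illustrative remote method
        return value

StandardIOEndpoint(reactor).listen(pb.PBServerFactory(SandboxRoot()))
reactor.run()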

using pyunit on a network thread

I am tasked with writing unit tests for a suite of networked software written in Python. Writing unit tests for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for network looped threads.
For example: The server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must a simpler answer that evades me.
I have to imagine there are methods for unit testing networked threads, but I can't seem to find any.
I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.
Try not to use threads just yet, to simplify testing.
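A small sketch of that idea using the standard library's unittest.mock (the Client class, the factory parameter and the ports are hypothetical, just to show the injection):

import unittest
from unittest import mock

class Client:
    # Hypothetical code under test: takes a socket factory instead of
    # calling socket.socket() directly, so a test can inject a fake.
    def __init__(self, socket_factory):
        self._socket_factory = socket_factory

    def connect(self, host, port):
        s = self._socket_factory()
        s.connect((host, port))
        return s

class ClientTests(unittest.TestCase):
    def test_connects_to_each_port(self):
        fake_socket = mock.Mock()
        client = Client(socket_factory=lambda: fake_socket)
        for port in (8000, 8001, 8002):
            client.connect("localhost", port)
        # Prove the code used the right ports without any real server
        fake_socket.connect.assert_any_call(("localhost", 8001))
        self.assertEqual(fake_socket.connect.call_count, 3)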
It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and tear-down part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (I didn't use pyunit/unittest - it was just an ad-hoc script) which fired up (in one test) four servers to listen on TCP, UDP, HTTP and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive to look at is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2.
I have some test cases that run a server in setUp and close it in tearDown. I don't know if it is a very elegant way to do it, but it works for me.
I am happy to have it and it helps me a lot.
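For example, a minimal version of that pattern with a throwaway echo server (the handler and the messages are illustrative):

import socket
import socketserver
import threading
import unittest

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))

class EchoServerTests(unittest.TestCase):
    def setUp(self):
        # Port 0 lets the OS pick a free port, so tests never collide
        self.server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
        self.port = self.server.server_address[1]
        self.thread = threading.Thread(target=self.server.serve_forever)
        self.thread.start()

    def tearDown(self):
        self.server.shutdown()
        self.thread.join()
        self.server.server_close()

    def test_echo(self):
        with socket.create_connection(("127.0.0.1", self.port)) as s:
            s.sendall(b"kick")
            self.assertEqual(s.recv(1024), b"kick")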
If the server init is very long, an alternative would be to automate it with Ant. Ant would run/stop the server before/after executing the tests.
See here for a very interesting tutorial about Ant and Python.
You would need to create mock sockets. The exact way to do that would depend on how you create sockets and creating a socket generator would be a good idea. You can also use a mocking library like pymox to make your life easier. It can also possibly eliminate the need to create a socket generator just for the sole purpose of testing.
Using pymox, you would do something like this:
import socket

import mox

def test_connect(self):
    m = mox.Mox()
    m.StubOutWithMock(socket, 'socket')
    socket_mock = m.CreateMockAnything()
    # Record the expected calls against the stubbed-out socket module
    socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock)
    socket_mock.connect(('test_server1', 80))
    socket_mock.connect(('test_server2', 81))
    socket_mock.connect(('test_server3', 82))
    m.ReplayAll()
    # Exercise the code under test, then verify the recorded expectations
    code_to_be_tested()
    m.VerifyAll()
    m.UnsetStubs()
