Redirecting Standard Input/Output in RPyC 3

As explained here, redirecting stdin/stdout is very simple with RPyC 2: 'print' commands executed on the server display the printed string on the client side.
RPyC 2, as explained here, is insecure and not recommended, but I couldn't find anywhere how to print on the client side with RPyC 3.
Does anyone know how to achieve this?
Edit:
For example, this is the code for my server:
import rpyc
import time
from rpyc.utils.server import ThreadedServer

class TimeService(rpyc.Service):
    def exposed_print_time(self):
        for i in xrange(10):
            print time.ctime()
            time.sleep(1)

if __name__ == "__main__":
    t = ThreadedServer(TimeService, port=8000)
    t.start()
and in my client, I run:
conn = rpyc.connect("192.168.1.5", port=8000)
conn.root.print_time()
My goal is to get the time every second (or anything else I want to print) in the client's stdout, but the client just hangs and the time is only printed on the server.

You could use a return value on the server side:
# on the server
def exposed_test(self):
    return "this is test"

# on the client
>>> conn.root.test()
'this is test'
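If the goal is for the server to print into the client's stdout as it runs, rather than hand back a return value, one option in RPyC 3's service mode is to pass a client-side file-like object into the exposed method. RPyC proxies it as a netref, so every write lands on the client. A minimal sketch, assuming the TimeService from the question with an extra out parameter (my addition, not in the original code):

# server: write through a stream the client hands us
class TimeService(rpyc.Service):
    def exposed_print_time(self, out):
        for i in xrange(10):
            out.write(time.ctime() + "\n")  # 'out' is a netref to the client's file
            time.sleep(1)

# client: pass our own stdout to the server
import sys
conn = rpyc.connect("192.168.1.5", port=8000)
conn.root.print_time(sys.stdout)

In classic mode there is also rpyc.classic.redirected_stdio(conn), a context manager that redirects the remote party's stdio to the local terminal, but the callback style above works in service mode too.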

Related

Using GLib.IOChannel to send data from one python process to another

I am trying to use GLib.IOChannel to send data from a client to a server running a GLib.MainLoop.
The file used for the socket should be located at /tmp/so/sock, and the server should simply run a function whenever it receives data.
This is the code I've written:
import sys
import gi
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    loop = GLib.MainLoop()
    with open(ADRESS, 'r') as sock_file:
        sock = GLib.IOChannel.unix_new(sock_file.fileno())
        GLib.io_add_watch(sock, GLib.IO_IN,
                          lambda *args: print('received:', args))
        loop.run()

def client(argv):
    sock_file = open(ADRESS, 'w')
    sock = GLib.IOChannel.unix_new(sock_file.fileno())
    try:
        print(sock.write_chars(' '.join(argv).encode('utf-8'), -1))
    except GLib.Error:
        raise
    finally:
        sock.shutdown(True)
        # sock_file.close()  # calling close breaks the script?

if __name__ == '__main__':
    if len(sys.argv) > 1:
        client(sys.argv[1:])
    else:
        server()
When called without arguments, the script acts as the server; when called with arguments, it sends them to a running server.
When starting the server, I immediately get the following output:
received: (<GLib.IOChannel object at 0x7fbd72558b80 (GIOChannel at 0x55b8397905c0)>, <flags G_IO_IN of type GLib.IOCondition>)
I don't know why that is. Whenever I send something, I get an output like (<enum G_IO_STATUS_NORMAL of type GLib.IOStatus>, bytes_written=4) on the client side, while nothing happens server-side.
What am I missing? I suspect I understood the documentation wrong, as I did not find a concrete example.
I got the inspiration to use the IOChannel instead of normal sockets from this post: How to listen socket, when app is running in gtk.main()?
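One hedged observation rather than a confirmed answer: /tmp/so/sock opened with plain open() is a regular file, and regular files always poll as readable, which would explain the watch firing the moment the server starts. A named pipe created with os.mkfifo only becomes readable when a writer delivers data. A sketch of the server half under that assumption (it also assumes PyGObject's file-like IOChannel overrides, e.g. readline):

import os
from gi.repository import GLib

ADRESS = '/tmp/so/sock'  # reusing the question's path, but as a FIFO

if not os.path.exists(ADRESS):
    os.mkfifo(ADRESS)

loop = GLib.MainLoop()
# open read/write so the FIFO stays open after a writer disconnects
sock_file = open(ADRESS, 'r+')
sock = GLib.IOChannel.unix_new(sock_file.fileno())
GLib.io_add_watch(sock, GLib.IO_IN,
                  lambda ch, cond: print('received:', ch.readline()) or True)
loop.run()

Returning True from the watch callback keeps it installed for the next message.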

Adding custom module via RPyC

I'm trying to add a new module to a connection.
I have the following files:
main.py
UpdateDB.py
In UpdateDB.py:
def UpdateDB():
    ...
In main.py:
import UpdateDB
import rpyc
conn = rpyc.classic.connect(...)
rpyc.utils.classic.upload_package(conn, UpdateDB)
conn.modules.UpdateDB.UpdateDB()
But I can't figure out how to invoke the UpdateDB() function.
I get:
AttributeError: 'module' object has no attribute 'UpdateDB'
Perhaps I'm trying to do it wrong. So let me explain what I'm trying to do:
I want to create a connection to the server and run on it a function from the UpdateDB.py file.
Not sure how to do that in classic mode (not sure why you'd use it), but here is how to accomplish the task in the newer RPyC service mode.
Script run as the server:
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    def exposed_printSomething(self, a):
        print a
        print "printed on server!"
        return 'printed on client!'

if __name__ == '__main__':
    server = ThreadedServer(MyService, port=18812)
    server.start()
Script run as the client:
import rpyc

if __name__ == '__main__':
    conn = rpyc.connect("127.0.0.1", port=18812)
    print conn.root.printSomething("passed to server!")
Result on Server:
passed to server!
printed on server!
Result on Client:
printed on client!
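For the classic-mode route the question actually uses, a hedged sketch: classic connections expose execute() and a remote namespace, so you can define the function remotely without relying on upload_package putting the file somewhere importable. The host name is a placeholder:

import rpyc

conn = rpyc.classic.connect("server-host")  # placeholder host

# run the local module's source inside the remote interpreter,
# then call the function it defined there
with open("UpdateDB.py") as f:
    conn.execute(f.read())
conn.namespace["UpdateDB"]()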

simple rpyc client and server for sending string data

I'm working on a program in python using rpyc. My goal is to create a simple server that accepts bytes of data (a string) from a client. I'm new to both Python and RPyC.
Here is my server.py code:
import rpyc
from rpyc.utils.server import ThreadedServer  # or ForkingServer

class MyService(rpyc.Service):
    # My service
    pass

if __name__ == "__main__":
    server = ThreadedServer(MyService, port=18812)
    server.start()
Then there is my client.py code:
from rpyc.core.stream import SocketStream
from rpyc.core.channel import Channel
b = SocketStream.connect("localhost", 18812)
c = Channel(b, compress=True)
c.send("abc")
b.close()
c.close()
Yet when I run my client.py I get an error in the console. If I'm understanding it correctly, I must create a stream in server.py that is associated with the client. Is that the case? How can I achieve that?
You're using the low-level primitives, but you didn't put a Protocol over them. Anyway, you really don't need to go there. Here's what you want to do:
myserver.py
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    # My service
    def exposed_echo(self, text):
        print(text)
        return text  # echo the text back so the client shell shows 'hello'

if __name__ == "__main__":
    server = ThreadedServer(MyService, port=18812)
    server.start()
then open a python shell and try
>>> import rpyc
>>> c = rpyc.connect("localhost", 18812)
>>> c.root.echo("hello")
'hello'
Note that this is using the service-oriented mode. You can also use the classic mode: just run bin/rpyc_classic.py and then connect with rpyc.classic.connect("host").
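For a quick taste of classic mode (the port is rpyc_classic.py's default; the session output is illustrative):
>>> import rpyc
>>> c = rpyc.classic.connect("localhost")  # default port 18812
>>> c.modules.sys.platform                 # imported and evaluated on the server
'linux2'
>>> c.eval("2 + 2")
4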

Exposing python daemon as a service

So, there are two very valuable features that I am already able to extract from a python script. The first is the ability to run a python function from the command line. Assuming, for simplicity, that the python script takes in command line args, something along the lines of:
import sys

def foo():
    return "%s is your last argument!" % sys.argv[-1]

print foo()
which I would then access globally by running python file.py somearg. Additionally, I could write a supervisord config to daemonize a script and keep it running in memory. I now find myself in a position where I need both of these features at once, and I'm not really sure where to start. For clarity, I basically have something along these lines:
if __name__ == "__main__":
    big_file = open(slow_loader)
    foo(big_file)
Where ideally, once this is running, I would keep the entire big_file in memory and be able to call the foo method that depends on it by running something akin to the original python file.py somearg. I'm not really sure how to progress from here, though.
Any help, even if it's just a link to some documentation, would be very helpful. Ahead of time: I realize I could wrap this in a shallow flask app and run it through http requests, but for NDA'd reasons I need something that runs through an internal shell command.
Just because I like zmq and gevent, I would probably do something like this:
server.py
import gevent
import gevent.monkey
gevent.monkey.patch_all()
import zmq.green as zmq
import json

context = zmq.Context()
socket = context.socket(zmq.ROUTER)
socket.bind("ipc:///tmp/myapp.ipc")

def do_something(parsed):
    return sum(parsed.get("values"))

def handle(msg):
    data = msg[1]
    parsed = json.loads(data)
    total = do_something(parsed)
    msg[1] = json.dumps({"response": total})
    socket.send_multipart(msg)

def handle_zmq():
    while True:
        msg = socket.recv_multipart()
        gevent.spawn(handle, msg)

if __name__ == "__main__":
    handle_zmq()
And then you would have a client.py for your command line tool, like
import json
import zmq

request_data = {
    "values": [10, 20, 30, 40],
}

context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect("ipc:///tmp/myapp.ipc")
socket.send(json.dumps(request_data))
print socket.recv()
Obviously this is a contrived example, but you should get the idea. Alternatively you could use something like xmlrpc or jsonrpc for this as well.
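Since the rest of this page is about RPyC, the same load-once, serve-many pattern can be sketched there as well; load_big_file() below is a hypothetical stand-in for the question's slow loader, and the port is arbitrary:

import rpyc
from rpyc.utils.server import ThreadedServer

def load_big_file():
    # hypothetical slow loader; runs once when the daemon starts
    return open("big_file").read()

class BigFileService(rpyc.Service):
    big_file = load_big_file()  # held in memory for the daemon's lifetime

    def exposed_foo(self, arg):
        return "%s checked against %d preloaded bytes" % (arg, len(BigFileService.big_file))

if __name__ == "__main__":
    ThreadedServer(BigFileService, port=18861).start()

The shell-side tool is then just rpyc.connect("localhost", port=18861).root.foo(sys.argv[-1]) in a small client script.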

Kill Process from Makefile

I'm trying to write a makefile that will replicate a client/server program I've written (which is really just two Python scripts, but that's not the real concern here)...
test:
	python server.py 7040 &
	python subscriber.py localhost 7040 &
	python client.py localhost 7040;
So I run make test
and I get the ability to enter a message from client.py:
python server.py 7040 &
python subscriber.py localhost 7040 &
python client.py localhost 7040;
Enter a message:
When the client enters an empty message, it closes the connection and quits successfully. Now, how can I automate the subscriber (who is just a "listener" in the chat room) to close, which will in turn exit the server process?
I was trying to get the process IDs for these calls using pidof, but wasn't really sure if that was the correct route. I am no makefile expert; maybe I could just write a quick Python script that gets executed from my makefile to do the work for me? Any suggestions would be great.
EDIT:
I've gone the write-a-Python-script route, and have the following:
import server
import client
import subscriber
#import subprocess

server.main(8092)
# child = subprocess.Popen("server.py", shell=False)
subscriber.main('localhost', 8090)
client.main('localhost', 8090)
However, now I'm getting errors that my global variables are not defined (I think it's directly related to adding the main methods to my server, and eventually to subscriber and client, but I'm not getting that far yet). This may deserve a separate question...
Here's my server code:
import socket
import select
import sys
import thread
import time

# initialize list to track all open_sockets/connected clients
open_sockets = []

# thread for each client that connects
def handle_client(this_client, sleeptime):
    global message, client_count, message_lock, client_count_lock
    while 1:
        user_input = this_client.recv(100)
        if user_input == '':
            break
        message_lock.acquire()
        time.sleep(sleeptime)
        message += user_input
        message_lock.release()
        message = message + '\n'
        this_client.sendall(message)
    # remove 'this_client' from open_sockets list
    open_sockets.remove(this_client)
    this_client.close()
    client_count_lock.acquire()
    client_count -= 1
    client_count_lock.release()

def main(a):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    port = a
    server.bind(('', port))
    server.listen(5)
    message = ''
    message_lock = thread.allocate_lock()
    client_count = 2
    client_count_lock = thread.allocate_lock()
    for i in range(client_count):
        (client, address) = server.accept()
        open_sockets.append(client)
        thread.start_new_thread(handle_client, (client, 2))
    server.close()
    while client_count > 0:
        pass
    print '************\nMessage log from all clients:\n%s\n************' % message

if __name__ == "__main__":
    if len(sys.argv) > 1:
        main(int(sys.argv[1]))
    else:
        main(8070)
Use plain old bash in the script: get the PID and use kill.
Or, much, much better: create a testing script that handles all of that and call it from your Makefile. A single run_tests.py, say.
You want to keep as much logic as possible outside the Makefile.
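A minimal sketch of such a run_tests.py, reusing the script names and port from the question (the teardown logic is the assumed part):

import subprocess
import sys

# start the background pieces the Makefile was launching with '&'
server = subprocess.Popen([sys.executable, "server.py", "7040"])
subscriber = subprocess.Popen([sys.executable, "subscriber.py", "localhost", "7040"])
try:
    # the client stays in the foreground; when it exits, tear the rest down
    subprocess.call([sys.executable, "client.py", "localhost", "7040"])
finally:
    subscriber.terminate()  # the kill step that is awkward to do from make
    server.terminate()

The Makefile rule then shrinks to a single line that runs this script.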
Related to the 'global' issue: define handle_client inside main and remove the global message, client_count, ... line. (One caveat: handle_client rebinds message and client_count, and Python 2 has no nonlocal, so in practice those two either stay module-level globals initialized at the top of the file or get wrapped in a mutable object.)
