I want to export a function via RPC, so I am using Pyro4 for Python. This works so far. Now I want that function to also operate on data that belongs to the RPC server. Is this possible? If so, how?
#!/usr/bin/env python3
import Pyro4
import myrpcstuff
listIwantToWorkWith = ["apples", "bananas", "oranges"]
rpcthing = myrpcstuff.myrpcFunction()
daemon = Pyro4.Daemon()
uri = daemon.register(rpcthing)
daemon.requestLoop()
What do I have to write in myrpcstuff.myrpcFunction() to access listIwantToWorkWith, or do I have to mark the list global?
This isn't a Pyro-specific question; it's a general Python question about how to share data between functions or modules.
Pass the data you want to work on to the objects that need access to it. You can do this via parameters, or by creating a custom class and passing it the data via its __init__. This is all basic Python.
If you want to stay within the Pyro realm, though, have a look at the examples that come with it to see how these things can be done.
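For example, here is a minimal sketch of that approach: hold the list in a class, expose the class with Pyro4, and register an instance so remote calls operate on the server-side data (the class name and methods here are made up):

import Pyro4

@Pyro4.expose
class FruitStore:
    def __init__(self, fruits):
        # the server-side data that remote calls will work on
        self._fruits = fruits

    def count(self):
        return len(self._fruits)

    def add(self, item):
        self._fruits.append(item)

listIwantToWorkWith = ["apples", "bananas", "oranges"]
daemon = Pyro4.Daemon()
uri = daemon.register(FruitStore(listIwantToWorkWith))
print(uri)  # clients connect with Pyro4.Proxy(uri)
daemon.requestLoop()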
Spent a little too much time trying to figure this out by myself...
I'm working with an FEA app called Simcenter Femap. In my program I need to create N new instances of it after I get some data from the base instance, for some asyncio fun. I can't even start on the asyncio part because I can't force early binding on the new instances.
What is working for me at this point:
Created a makepy wrapper, called it PyFemap as the Femap help suggests, and imported it
Connected to a running instance
femap_object = pythoncom.connect('femap.model')
feAppBaseInstance = PyFemap.model(femap_object)
Every method of every Femap object works perfectly fine after this.
I am able to create instances using DispatchEx('femap.model') and invoke methods that don't require data conversion.
But for the rest of the methods to work, I need to force early binding on these instances through the already existing wrapper (as I see it).
"Python Programming on Win32" suggests using gencache.EnsureModule to create a wrapper and link it to the created instance. But when I try to do that through the type library's CLSID, I get an error that it's not registered. Is there really no way to do it with the wrapper I already created?
Also tried to do all of this using comtypes. Some parts work better for me with it, some worse, but the end result is the same. If I may, I'd like to ask how to do it with comtypes too.
Sorry if I'm missing something really obvious.
I recommend using pythoncom.New(...) instead of .connect(...).
I'll post the solution since I solved the issue.
It is actually really obvious. I ended up using pythoncom.New(...) for multiple instances, but I think other methods would work just as well if you only need one. The issue was that I didn't attach the wrapper head class (model) to these new instances, which was pretty silly of me.
To create a new instance:
femap_object = pythoncom.New('femap.model')
To assign a win32 wrapper (PyFemap) to it:
new_instance = PyFemap.model(femap_object)
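Put together, creating several early-bound instances looks roughly like this (a sketch; PyFemap is the makepy wrapper module mentioned above):

import pythoncom
import PyFemap  # the makepy-generated wrapper

instances = []
for _ in range(3):  # N new Femap sessions
    femap_object = pythoncom.New('femap.model')    # raw COM object, new instance
    instances.append(PyFemap.model(femap_object))  # attach the wrapper head class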
I would like my Python program to have a loop where it sends an argument to a Java program, the Java program returns some value, and if the Python program sees that the value is what it is looking for, the loop stops. The specific Java program is linked here. My Python program looks online for Minecraft server IPs, and I want the Java program to return data on them. Is this possible?
Yes, it should be easily doable using a library such as Py4J, which can connect to your Java class from Python. All you have to do is import the library and connect to the JVM like this:
from py4j.java_gateway import JavaGateway
gateway = JavaGateway()
and then you can call methods on the Java class as if you were calling methods in Python. In your case, you would call the constructor first, like this:
java_object = gateway.jvm.mypackage.ServerPinger()
Then run whatever function you want. I'll take the ping() method as an example:
return_object = java_object.ping("address")
The documentation in the above link is extensive and can show you how to do anything you want. Also refer to this answer written by the author of the library.
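Putting that together, the loop described in the question might look roughly like this (a sketch; ServerPinger and ping() are the hypothetical names from above, and addresses_to_check stands in for whatever list your program builds):

from py4j.java_gateway import JavaGateway

gateway = JavaGateway()  # assumes a GatewayServer is running on the Java side
pinger = gateway.jvm.mypackage.ServerPinger()

for address in addresses_to_check:
    result = pinger.ping(address)
    if result == target_value:  # the value you are looking for
        break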
A better approach would be to use RESTful APIs for communication between multiple applications.
You can use Spring for Java and Flask for Python.
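On the Python side, a minimal Flask endpoint might look like this (a sketch; the route and payload are made up):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ping", methods=["POST"])
def ping():
    address = request.get_json()["address"]
    # look up the server here and build a real response
    return jsonify({"address": address, "online": True})  # placeholder payload

if __name__ == "__main__":
    app.run(port=5000)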
The example code makes use of this oauth2_client which it immediately locks. The script does not work without these lines. What's the correct way to integrate this into a Flask app? Do I have to manage these locks? Does it matter if my web server spawns multiple threads? Or if I'm using gunicorn+gevent? Is there documentation on this anywhere?
It's not actually locking, it's just instantiating a lock object inside the module. The lock is actually acquired/released internally by oauth2_client; you don't need to manage it yourself. You can see this by looking at the source code, here: https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/third_party/oauth2_plugin/oauth2_client.py
In fact, based on the source code linked above, you should be able to simply call oauth2_client.InitializeMultiprocessingVariables() instead of the try/except block, since that is ultimately doing almost the exact same thing.
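In other words, something like this once at startup should be enough (a sketch based on the linked source; the import path mirrors the repository layout):

from gslib.third_party.oauth2_plugin import oauth2_client

# sets up the module-level locks and values once, before any threads start
oauth2_client.InitializeMultiprocessingVariables()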
I'm using a third party library (PySphere) for a project I'm working on. PySphere provides a simple API to interact with VMware. This is a general problem though, not specific to this library.
A simple use of the library would be to get a VM object, and then perform various operations on it:
vm_obj = vcenter.get_vm_by_name("My VM")
vm_obj.get_status()
vm_obj.power_on()
I'd like to add a few methods to the vm_obj class. These methods are highly specific to the OS in use on the VM and wouldn't be worthwhile to commit back to the library. Right now I've been doing it like so:
set_config_x(vm_obj, args)
This seems really unpythonic. I'd like to be able to add my methods to the vm_obj class, without modifying the class definition in the third party library directly.
While you can attach any callable to the instance, a plain function assigned that way is not a bound method and will not receive self automatically. To make a real method, you can use the new module from the (Python 2) standard library:
vm_obj.set_config_x = new.instancemethod(callableFunction, vm_obj, vm_obj.__class__)
where callableFunction takes self (vm_obj) as its first argument.
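On Python 3, where the new module no longer exists, types.MethodType achieves the same thing (a sketch):

import types

def set_config_x(self, args):
    # OS-specific logic, with self working like in a normal method
    ...

# bind to this one instance only:
vm_obj.set_config_x = types.MethodType(set_config_x, vm_obj)

# or attach to the class so every instance gets it:
type(vm_obj).set_config_x = set_config_x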
I'm trying to create a construct in Python 3 that will allow me to easily execute a function on a remote machine.
Assuming I've already got a python tcp server that will run the functions it receives, running on the remote server, I'm currently looking at using a decorator like
@execute_on(address, port)
This would create the necessary context required to execute the function it is decorating and then send the function and context to the tcp server on the remote machine, which then executes it. Firstly, is this somewhat sane? And if not could you recommend a better approach? I've done some googling but haven't found anything that meets these needs.
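Roughly, I imagine the decorator being shaped like this (just a sketch; send_to_server stands in for whatever transport I end up using):

import functools
import inspect

def execute_on(address, port):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            source = inspect.getsource(func)  # string form of the function
            # hypothetical transport: ship the source plus arguments to the server
            return send_to_server(address, port, source, args, kwargs)
        return wrapper
    return decorator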
I've got a quick and dirty implementation of the TCP server and client, so I'm fairly sure that'll work. I can get a string representation of the function (e.g. func) being passed to the decorator by
import inspect
string = inspect.getsource(func)
which can then be sent to the server where it can be executed. The problem is, how do I get all of the context information that the function requires to execute? For example, if func is defined as follows,
import MyModule

def func():
    result = MyModule.my_func()
MyModule will need to be available to func, either in the global context or in func's local context, on the remote server. In this case that's relatively trivial, but it can get much more complicated depending on when and how import statements are used. Is there an easy and elegant way to do this in Python? The best I've come up with so far is using the ast library to pull out all import statements, using the inspect module to get string representations of those modules, and then reconstructing the entire context on the remote server. Not particularly elegant, and I can see lots of room for error.
Thanks for your time
The approach you outline is extremely risky unless the remote server is somehow very strongly protected or "extremely sandboxed" (e.g. a BSD "jail") -- anybody who can send functions to it would be able to run arbitrary code there.
Assuming you have an authentication system that you trust entirely, there remains the "fragility" problem that you realized -- the function can depend on any globals defined in its module at the moment of execution (which can be different from those you can detect by inspection: determining the set of imported modules, and more generally of globals, at execution time, is a Turing-complete problem).
You can deal with the globals problem by serializing the function's globals, as well as the function itself, at the time you send it off for remote execution (whether you serialize all this stuff in readable string form, or otherwise, is a minor issue). But that still leaves you with the issue of imports performed inside the function.
Unless you're willing to put some limitations on the "remoted" function, such as "no imports inside the function (and functions called from it)", I'm thinking you could have the server override __import__ (the built-in function that is used by all import statements and is designed to be overridden for peculiar needs, such as yours;-) to ask for the extra module from the sending client (of course, that requires that said client also have "server-like" capabilities, in that it must be able to respond to such "module requests" from the server).
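A bare-bones sketch of that server-side override (fetch_module_source being the hypothetical request back to the sending client):

import builtins
import sys
import types

_real_import = builtins.__import__

def remote_import(name, *args, **kwargs):
    try:
        return _real_import(name, *args, **kwargs)
    except ImportError:
        source = fetch_module_source(name)  # hypothetical: ask the client for the module
        module = types.ModuleType(name)
        exec(source, module.__dict__)
        sys.modules[name] = module
        return module

builtins.__import__ = remote_import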
Can't you impose some restrictions on functions that are remoted, to bring this task back into the domain of sanity...?
You may be interested in the execnet project.
execnet provides carefully tested means to easily interact with Python interpreters across version, platform and network barriers. It has a minimal and fast API targeting the following uses:
distribute tasks to local or remote CPUs
write and deploy hybrid multi-process applications
write scripts to administer a bunch of exec environments
http://codespeak.net/execnet/example/test_info.html#get-information-from-remote-ssh-account
I've seen a demo of it. But never used it myself.
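From its documentation, a minimal session looks roughly like this (an untested sketch):

import execnet

gw = execnet.makegateway()  # or execnet.makegateway("ssh=user@host") for a remote box
channel = gw.remote_exec("""
    # this code runs in the other interpreter
    channel.send(2 + 2)
""")
print(channel.receive())  # -> 4
gw.exit()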
It's not clear from your question whether there is some system limitation/requirement for you to solve your problem in this way. If not, there may be much easier and quicker ways of doing this using some sort of messaging infrastructure.
For example, you might consider whether Celery (http://ask.github.com/celery/getting-started/introduction.html) will meet your needs.
What's your end goal with this? From your description I can see no reason why you can't just create a simple messaging class and send instances of it to command the remote machine to do 'stuff'?
Whatever you do, your remote machine is going to need the Python source to execute, so why not distribute the code there and then run it? You could create a simple server which accepts some Python source files, extracts them, imports the relevant modules, and then runs a command.
This will probably be hard to do, and you'll run into issues with security, arguments, etc.
Maybe you can just run a Python interpreter remotely that can be given code using a socket or an HTTP service rather than do it on a function by function level?