Communicate with another program using subprocess or RPC/REST in Python?

I have a situation where there is a program written in C++. It is a kind of server which you need to start first. Then, from another console, you can call the program with command-line arguments and it does stuff. It also provides RPC- and REST-based access, so you can write an RPC or REST based library to interface with the server.
So my question is: since the program can be managed using mere command-line arguments, isn't it better to use Python's subprocess module and build a library (wrapper) around it? Or is there any problem with this method?
Consider another case. Say I wanted to build a GUI around a Linux utility like grep that allows users to test regular expressions (like we have on websites). Isn't it easier to communicate with grep using subprocess?
Thanks.

I think I'd prefer either the RPC or the REST interface, because the results you obtain from them are usually in a format that is easy to parse, since those interfaces were designed for machine interaction. A command-line interface, on the other hand, is designed for human interaction, which means its output is easy for the human eye to read but not necessarily easy for another program to parse.
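To make the contrast concrete, here is a minimal sketch of both routes. The grep flags are standard, but the REST endpoint URL and the shape of its JSON reply are assumptions about the hypothetical server:

import subprocess
import requests  # third-party: pip install requests

# Command-line route: run grep and parse its human-oriented text output ourselves.
result = subprocess.run(
    ["grep", "-n", "-E", "foo[0-9]+", "input.txt"],  # -n prefixes each match with its line number
    capture_output=True, text=True,
)
matches = []
for line in result.stdout.splitlines():
    lineno, _, text = line.partition(":")  # fragile: breaks as soon as the output format changes
    matches.append((int(lineno), text))

# RPC/REST route: the server hands back structured JSON, so parsing is trivial.
# The URL and the "matches" field are purely illustrative.
response = requests.get("http://localhost:8080/api/search", params={"pattern": "foo[0-9]+"})
matches = response.json()["matches"]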

Related

Difference between pybind and process.call

I have Python code and I would like to call a function from a library. I see two ways to achieve this:
pybind
This means writing a wrapper around the library, making it a module, installing it, and calling it directly from Python.
This is the more native approach, but adding interfaces is a pain.
process.call
Assuming I can send/receive data through ROS, the library can be called through process.call and the results received through ROS.
This is easier when I would like to add interface or make changes.
For a process which is not real-time (well, this is Python...), is there any major disadvantage to using process.call?
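For reference, the subprocess route from the question might look like the sketch below. The binary name, its flag, and the idea that it prints its result to stdout are all assumptions; the ROS transport mentioned above is left out entirely:

import subprocess

# Hypothetical command-line front end for the library; it is assumed to print
# its result to stdout, and check=True raises if it exits non-zero.
completed = subprocess.run(
    ["./my_library_cli", "--input", "data.bin"],
    capture_output=True, text=True, check=True,
)
print(completed.stdout.strip())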

Can I pass arguments from a Python program to a Java Program, and vice versa?

I would like my Python program to have a loop where it sends an argument to a Java program, the Java program returns some value, and if the Python program sees that the value is what it is looking for, the loop stops. The specific Java program is linked here. My Python program looks online for Minecraft server IPs and I want the Java program to return data on them. Is this possible?
Yes, it should be easily doable using a library such as Py4J, which can connect to your Java class from Python. All you have to do is import the library and connect to the JVM like this:
from py4j.java_gateway import JavaGateway
gateway = JavaGateway()
and you can call methods in the Java class as if you were calling methods in Python. In your case, you would call the constructor first, like this:
java_object = gateway.jvm.mypackage.ServerPinger()
Then run whatever function you want. I'll take the ping() method as an example:
return_object = java_object.ping("address")
The documentation in the above link is extensive and can show you how to do anything you want. Also refer to this answer written by the author of the library.
A better approach would be to use RESTful APIs for communication between multiple applications.
You can use Spring for Java, Flask for Python.
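As a rough illustration of that approach, the Python side could be a small Flask service that forwards requests to the Java/Spring service over HTTP. The endpoint paths, ports, and payload fields below are all invented:

from flask import Flask, jsonify, request
import requests  # used here to call the Java/Spring service

app = Flask(__name__)

@app.route("/check", methods=["POST"])
def check():
    address = request.get_json()["address"]
    # Forward the request to the Java side; URL and response shape are assumptions.
    java_reply = requests.post("http://localhost:8081/ping", json={"address": address})
    return jsonify(java_reply.json())

if __name__ == "__main__":
    app.run(port=5000)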

What are my chances of calling CICS transactions or COBOL programs from Python?

We have some COBOL caller handlers which are executed/called by external applications built in VB/Java. What we are looking for is a way to call those caller handlers directly from Python, instead of going through the other applications, so we can test them directly from a Python automation framework.
I have a CICS program/transaction bound to a web interface in CICS, so that I can drive my transaction via HTTP POST/PUT/GET; maybe you are looking for a tighter bind, though?
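If that kind of HTTP binding is available, driving it from Python is straightforward. A minimal sketch, assuming a JSON-over-HTTP binding; the host, path, and payload below are invented:

import requests

# Hypothetical URL for a CICS transaction bound to a JSON-over-HTTP web interface.
CICS_URL = "http://mainframe.example.com:9080/cics/mytran"

payload = {"account": "12345", "action": "inquiry"}
response = requests.post(CICS_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())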
For the Java APIs I would recommend ditching Python and writing the tests in Groovy.
This is a scripting language that runs on a JVM, which means it can call all Java APIs natively.
As well as supporting the usual built-in scripting conveniences like dictionaries, currying functions and regex support, all valid Java code is also valid Groovy code, so you can cut and paste your Java API calls into your testing scripts.
Python is available for z/OS in two distributions: from Rocket Software and (in beta form at the moment) from IBM. They're both free of charge. Here are the relevant links:
https://www.rocketsoftware.com/zos-open-source
https://developer.ibm.com/mainframe/2020/04/29/python-z-os-beta-is-ready/
Either one should give you the flexibility you need to invoke whatever other z/OS-hosted program you wish to invoke, whatever its programming language, without requiring any sort of network interface or other such configuration. Then you just decide how you'd like to interact with that program. As Cschneid suggested, do you want that to be via REST/JSON APIs? Great, CICS Transaction Server for z/OS supports that. So does Db2 for z/OS ("Db2 Native REST"), which addresses the COBOL part of your question if you're trying to invoke a Db2 stored procedure that happens to be written in COBOL. So does Python.
Another way to find a possible path is to look at how the Visual Basic and Java applications are invoking these COBOL programs. That may not necessarily be the best way, but if it's still a reasonable way then perhaps you could adopt the same basic approach from Python.
CICS has supported SOAP and REST since 2008, I think. COBOL natively parses XML (and has for over a decade) and JSON (this is relatively new).

C++ to Python communication. Multiple IO streams?

A Python program opens a new process running the C++ program and reads the process's stdout.
No problem so far.
But is it possible to have multiple streams like this for communication? I can get two if I misuse stderr as well, but not more. An easy way to hack this would be temporary files. Is there something more elegant that does not need a detour through the filesystem?
PS: *nix specific solutions are welcome too
On Unix systems, the usual way to open a subprocess is with fork(), which leaves any open file descriptors (small integers representing open files or sockets) available in both the child and the parent, followed by exec(), which also allows the new executable to use the file descriptors that were open in the old process. This functionality is preserved in the subprocess.Popen() call (adjustable with the close_fds argument). Thus, what you probably want to do is use os.pipe() to create pairs of file descriptors to communicate on, then use Popen() to launch the other process, with an argument for each fd returned by the earlier call to pipe() to tell it which fds it should use.
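A rough sketch of that approach; the C++ binary name and its --extra-fd flag are made up, and pass_fds is the Python 3 way of keeping a descriptor open across the exec:

import os
import subprocess

# One extra pipe in addition to stdout: the parent keeps the read end,
# the child inherits the write end and is told its fd number on the command line.
read_fd, write_fd = os.pipe()

proc = subprocess.Popen(
    ["./cpp_program", "--extra-fd", str(write_fd)],  # binary and flag are hypothetical
    stdout=subprocess.PIPE,
    text=True,
    pass_fds=(write_fd,),  # keep this descriptor open in the child (Python 3.2+)
)
os.close(write_fd)  # the parent no longer needs the write end

extra_stream = os.fdopen(read_fd, "r")
print(proc.stdout.readline())   # first stream: ordinary stdout
print(extra_stream.readline())  # second stream: the extra pipe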
Sounds like what you want is to use sockets for communication. Both languages let you open raw sockets, but you might want to check out the zeromq project as well, which has some additional advantages for message passing. Check out their hello world examples in C++ and Python.
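For reference, the Python side of such a link with pyzmq might look like the following minimal sketch; the port and message format are arbitrary, and the C++ side would connect with the matching REQ socket from libzmq/cppzmq:

import zmq  # pip install pyzmq

context = zmq.Context()
socket = context.socket(zmq.REP)  # reply socket; the C++ process connects with a REQ socket
socket.bind("tcp://127.0.0.1:5555")

while True:
    message = socket.recv()  # blocks until the C++ side sends a request
    print("received:", message)
    socket.send(b"ack")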
Assuming a Windows machine:
You could try using the clipboard for exchanging information between the Python process and the C++ one.
Assign some unique process ID followed by your information and write it to the clipboard on the Python side... then just parse the string on the C++ side.
It's akin to using temporary files, but all done in memory... the drawback being you cannot use the clipboard for any other application.
Hope it helps.
With traditional, synchronous programming and the standard Python library, what you're asking is difficult to accomplish. If, instead, you consider using an asynchronous programming model and the Twisted library, it's a piece of cake. The Using Processes HOWTO describes how to easily communicate with as many processes as you like. Admittedly, there's a bit of a learning curve to Twisted but it's well worth the effort.
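A minimal sketch of what that looks like with Twisted's process support; the binary name and the choice of fd 3 for the extra stream are assumptions:

from twisted.internet import protocol, reactor

class MultiStreamProtocol(protocol.ProcessProtocol):
    def childDataReceived(self, childFD, data):
        # fd 1 is stdout, fd 2 is stderr, fd 3 is the extra pipe requested below.
        print("fd %d: %r" % (childFD, data))

    def processEnded(self, reason):
        reactor.stop()

# childFDs maps child fd numbers to "r" (parent reads from it) or "w" (parent writes to it).
reactor.spawnProcess(
    MultiStreamProtocol(), "./cpp_program", ["./cpp_program"],
    env=None, childFDs={0: "w", 1: "r", 2: "r", 3: "r"},
)
reactor.run()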

How do I copy a python function to a remote machine and then execute it?

I'm trying to create a construct in Python 3 that will allow me to easily execute a function on a remote machine.
Assuming I've already got a Python TCP server running on the remote machine that will run the functions it receives, I'm currently looking at using a decorator like
@execute_on(address, port)
This would create the necessary context required to execute the function it is decorating and then send the function and context to the TCP server on the remote machine, which then executes it. Firstly, is this somewhat sane? And if not, could you recommend a better approach? I've done some googling but haven't found anything that meets these needs.
I've got a quick and dirty implementation of the TCP server and client, so I'm fairly sure that'll work. I can get a string representation of the function (e.g. func) being passed to the decorator by
import inspect
string = inspect.getsource(func)
which can then be sent to the server where it can be executed. The problem is, how do I get all of the context information that the function requires to execute? For example, if func is defined as follows,
import MyModule

def func():
    result = MyModule.my_func()
MyModule will need to be available to func either in the global context or in func's local context on the remote server. In this case that's relatively trivial, but it can get much more complicated depending on when and how import statements are used. Is there an easy and elegant way to do this in Python? The best I've come up with at the moment is using the ast library to pull out all import statements, using the inspect module to get string representations of those modules, and then reconstructing the entire context on the remote server. Not particularly elegant, and I can see lots of room for error.
Thanks for your time
The approach you outline is extremely risky unless the remote server is somehow very strongly protected or "extremely sandboxed" (e.g. a BSD "jail") -- anybody who can send functions to it would be able to run arbitrary code there.
Assuming you have an authentication system that you trust entirely, there remains the "fragility" problem you've realized -- the function can depend on any globals defined in its module at the moment of execution (which can be different from those you can detect by inspection: determining the set of imported modules, and more generally of globals, at execution time, is a Turing-complete problem).
You can deal with the globals problem by serializing the function's globals, as well as the function itself, at the time you send it off for remote execution (whether you serialize all this stuff in readable string form, or otherwise, is a minor issue). But that still leaves you with the issue of imports performed inside the function.
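A very naive sketch of that idea, capturing the source plus whatever plain-data globals the function's code refers to (module-valued and function-valued globals are deliberately skipped, serialization for the wire is not shown, and the names capture and run_remote are invented here):

import inspect
import types

def capture(func):
    # Source of the function plus the non-module, non-function globals its code object names.
    source = inspect.getsource(func)
    captured = {
        name: func.__globals__[name]
        for name in func.__code__.co_names
        if name in func.__globals__
        and not isinstance(func.__globals__[name], (types.ModuleType, types.FunctionType))
    }
    return source, captured

def run_remote(source, captured, func_name):
    # Server side: rebuild a namespace, define the function in it, then call it.
    namespace = dict(captured)
    exec(source, namespace)
    return namespace[func_name]()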
Unless you're willing to put some limitations on the "remoted" function, such as "no imports inside the function (and functions called from it)", I'm thinking you could have the server override __import__ (the built-in function that is used by all import statements and is designed to be overridden for peculiar needs, such as yours;-) to ask for the extra module from the sending client (of course, that requires that said client also have "server-like" capabilities, in that it must be able to respond to such "module requests" from the server).
Can't you impose some restrictions on functions that are remoted, to bring this task back into the domain of sanity...?
You may be interested in the execnet project.
execnet provides carefully tested means to easily interact with Python interpreters across version, platform and network barriers. It has a minimal and fast API targeting the following uses:
distribute tasks to local or remote CPUs
write and deploy hybrid multi-process applications
write scripts to administer a bunch of exec environments
http://codespeak.net/execnet/example/test_info.html#get-information-from-remote-ssh-account
I've seen a demo of it. But never used it myself.
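A minimal sketch based on the execnet documentation; a local gateway is used here, and the "ssh=user@host" spec is only a placeholder for a remote one:

import execnet

gw = execnet.makegateway()  # or execnet.makegateway("ssh=user@host") for a remote interpreter
channel = gw.remote_exec(
    "import platform\n"
    "channel.send(platform.platform())\n"  # 'channel' is provided by execnet on the remote side
)
print(channel.receive())
gw.exit()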
It's not clear from your question whether there is some system limitation/requirement for you to solve your problem in this way. If not, there may be much easier and quicker ways of doing this using some sort of messaging infrastructure.
For example, you might consider whether Celery (http://ask.github.com/celery/getting-started/introduction.html) will meet your needs.
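For instance, a Celery task queue might look like the sketch below; the broker URL and task body are placeholders:

from celery import Celery

# Broker URL is a placeholder; any broker Celery supports (RabbitMQ, Redis, ...) would do.
app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def run_job(payload):
    # The work you would otherwise have shipped across as a serialized function.
    return payload.upper()

# From the client, queue the work for a remote worker to pick up:
# result = run_job.delay("hello")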
What's your end goal with this? From your description I can see no reason why you can't just create a simple messaging class and send instances of that to command the remote machine to do 'stuff'?
Whatever you do, your remote machine is going to need the Python source to execute, so why not distribute the code there and then run it? You could create a simple server which accepts some Python source files, extracts them, imports the relevant modules, and then runs a command.
This will probably be hard to do and you'll run into issues with security, arguments etc.
Maybe you can just run a Python interpreter remotely that can be given code via a socket or an HTTP service, rather than doing it on a function-by-function level?
