Running a file with arguments from Python embedded within C

I'm currently working on a project that uses a C source file that has to interact with a Python file (run the file and capture its output), and I'm not exactly sure how to do it. Currently the Python file is run through the terminal (Linux) using:
python file arg1 arg2 arg3 arg4
and I am trying to embed Python into the C code to just run the file first (no output capture), using the following code:
void python() {
    FILE * file;
    int argc;
    char * argv[5];

    argc = 5;
    argv[0] = "pathtofile/file";
    argv[1] = "arg1";
    argv[2] = "arg2";
    argv[3] = "arg3";
    argv[4] = "arg4";

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("pathtofile/file", "r");
    PyRun_SimpleFile(file, "pathtofile/file");
    Py_Finalize();
}
args 1-2 are hard-coded, and args 3-4 are determined by the C code (it just computes integer values); these are then passed to the Python file, which then executes.
When running the above code I get a:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
Any advice on what my issue could be is greatly appreciated.
EDIT:
I was using this as a guide, as it seems to be similar to what I'm trying to achieve:
Run a python script with arguments

Your argc is uninitialized (did you compile with warnings enabled and turned into errors, i.e. -Wall -Werror on GCC?), and your argv is not properly null-terminated. Thus your code has undefined behaviour: anything might happen, including demons flying out of your nose. Also, the mode argument to fopen must be a string, yet you pass an int (a character constant), which is more undefined behaviour.
Thus at least you must do:
int argc = 5;
char *argv[] = {
    "pathtofile/file",
    "arg1",
    "arg2",
    "arg3",
    "arg4",
    0
};

input = fopen(..., "r"); // "r", not 'r'!
Additionally, you're not checking the return values of any of these functions. Any of them may fail, and with Python you should expect them to fail, including your fopen! (Please tell me the checks were only omitted for brevity.)
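Putting it together, here is a minimal sketch of the whole function with those fixes applied and every return value checked. It assumes Python 2.x, where Py_SetProgramName and PySys_SetArgv take char * strings (under Python 3 they take wchar_t * instead):

#include <Python.h>
#include <stdio.h>

static int run_script(void)
{
    /* null-terminated argument vector; argv[0] is the script path */
    char *argv[] = { "pathtofile/file", "arg1", "arg2", "arg3", "arg4", NULL };
    int argc = 5;
    FILE *fp;
    int rc;

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);          /* becomes sys.argv in the script */

    fp = fopen("pathtofile/file", "r"); /* mode is the string "r", not 'r' */
    if (fp == NULL) {
        perror("fopen");
        Py_Finalize();
        return -1;
    }

    rc = PyRun_SimpleFile(fp, "pathtofile/file"); /* 0 on success, -1 on error */
    fclose(fp);
    Py_Finalize();
    return rc;
}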

Related

How can I pass variables from Python back to C++?

I have two files: a .cpp file and a .py file. I use system("python something.py"); to run the .py file, and it has to get some input. How do I pass that input back to the .cpp file? I don't use the Python.h library; I have two separate files.
system() is a very blunt hammer and doesn't support much in the way of interaction between the parent and the child process.
If you want to pass information from the Python script back to the C++ parent process, I'd suggest having the Python script print() the information you want to send back to stdout, and having the C++ program parse the Python script's stdout output. system() won't let you do that, but you can use popen() instead, like this:
#include <stdio.h>

int main(int, char **)
{
    FILE * fpIn = popen("python something.py", "r");
    if (fpIn)
    {
        char buf[1024];
        while (fgets(buf, sizeof(buf), fpIn))
        {
            printf("The python script printed: [%s]\n", buf);
            // Code to parse out values from the text in (buf) could go here
        }
        pclose(fpIn); // note: be sure to call pclose(), *not* fclose()
    }
    else printf("Couldn't run python script!\n");
    return 0;
}
If you want to get more elaborate than that, you'd probably need to embed a Python interpreter into your C++ program and then you'd be able to call the Python functions directly and get back their return values as Python objects, but that's a fairly major undertaking which I'm guessing you want to avoid.

How to execute a file written in C from Python, passing it string values and capturing the string it returns as output (Linux)

I have tried the subprocess module, providing the main function of the C program with string values, but it doesn't seem to work for me. There are solutions available on the web, but I'm having difficulty understanding and implementing them. Any help will be appreciated.
I want to execute a file written in C, passing it arguments from my Python code, and after the C file is executed I want to capture what it returns in my Python code.
const char* main(int argc, char* argv[])
{
    printf("Hello World from C \n");
    printf("1st string passed is %s \n ", argv[1]);
    printf("2nd string passed is %s \n ", argv[2]);
    char b[] = "success";
    return b;
}
This is the code in C from where I wish to return a string to the python code.
import subprocess
import os

def excuteC():
    s = subprocess.call("./Cvishad hello you;", shell=True)
    print("status is " + str(s))

if __name__ == "__main__":
    excuteC()
This is the Python code from which I am calling the C code. It seems to work, but I think it is returning a pointer/address rather than the actual string "success" that I want.
The output I get is something like this:
Hello World from C
1st string passed is hello
2nd string passed is you
status is 80
I'm trying out things for the first time in C, so kindly excuse the silly mistakes.
Your C program is returning a pointer to a string as the process return value, and it gets truncated to an unsigned byte (which you're seeing as e.g. 80).
You can transfer arbitrary data between processes by e.g. (and I'm sure I'm forgetting something):
sockets
files
streams (stdout/stderr; subprocess.check_output() in Python; see the sketch after this list)
shared memory
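As a concrete example of the streams option: instead of returning the string from main (which only yields a small integer exit status), print it to stdout and read it from Python. A sketch of the corrected C program, keeping the question's output format:

#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <str1> <str2>\n", argv[0]);
        return 1; /* nonzero exit status signals failure */
    }
    printf("Hello World from C\n");
    printf("1st string passed is %s\n", argv[1]);
    printf("2nd string passed is %s\n", argv[2]);
    printf("success\n"); /* the "return value", sent via stdout */
    return 0;            /* exit status reports success/failure only */
}

On the Python side, subprocess.check_output(["./Cvishad", "hello", "you"]) then returns everything the program printed as bytes, and the last line carries the result.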

How to call Python script with arguments from C++, using exec syscall

I want to call a Python script from a C++ program, as a child process.
I want to run this command: /home/minty99/.virtualenvs/venv_waveglow/bin/python /home/minty99/tacotron2/inference.py mps 1 4
which has command-line arguments for Python.
I tried this code:
string pt_arg = "mps " + to_string(i) + " " + to_string(fd[i][1]);
[1] execl("bash", "-c", "/home/minty99/.virtualenvs/venv_waveglow/bin/python /home/minty99/tacotron2/inference.py", pt_arg.c_str(), NULL);
[2] execl("/home/minty99/.virtualenvs/venv_waveglow/bin/python", "/home/minty99/tacotron2/inference.py", pt_arg.c_str(), NULL);
But it was not working.
First one: exec fails with "No such file or directory"
Second one: /home/minty99/tacotron2/inference.py: can't open file 'mps 1 4': [Errno 2] No such file or directory
How can I do this properly?
I haven't tried it, but reading the manpage of execl, it says
The first argument, by convention, should point to the file name
associated with the file being executed.
where

int execl(const char *path, const char *arg0, ... /*, (char *)0 */);
To me, this means your second version should likely read:
execl("/home/minty99/.virtualenvs/venv_waveglow/bin/python", "python", "/home/minty99/tacotron2/inference.py", pt_arg.c_str(), NULL);
Which makes sense: the first string after the path becomes the process's argv[0], the conventional program name, and is not treated as a real argument. Without it, Python consumed your script path as the program name and tried to run "mps 1 4" as the script, which is exactly the error you saw.
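Two further details worth noting: execl does not search PATH (which is why the first attempt's "bash" was not found; execlp would search it), and it does not split "mps 1 4" on spaces, so each argument needs its own slot. Since a child process is wanted, the call also needs a fork. A minimal sketch using the question's paths:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: argv[0] is the conventional program name, then the script and its args */
        execl("/home/minty99/.virtualenvs/venv_waveglow/bin/python", "python",
              "/home/minty99/tacotron2/inference.py", "mps", "1", "4", (char *)NULL);
        perror("execl"); /* reached only if exec failed */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0); /* parent waits for the script to finish */
    return 0;
}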

Pass Python object directly to C++ program without using subprocess

I have a C++ program that, through the terminal, takes a text file as input and produces another text file. I'm executing this program from a Python script which first produces said text string, stores it to a file, runs the C++ program as a subprocess with the created file as input and parses the output text file back into a Python object.
Is it possible to do this without using a subprocess call? In other words: is it possible to avoid the reading and writing and just run the C++ program inside the Python environment with the text-string as input and then capture the output, again inside the Python environment?
For code I refer to the function community_detection_multiplex in this IPython notebook.
You can use ctypes.
It requires the C++ function to be wrapped with extern "C" (so it gets C linkage) and compiled into a shared library.
Say your C++ function looks like this:

extern "C" char* changeString(char* someString)
{
    // do something with your string
    return someString;
}
You can call it from Python like this:

import ctypes as ct

yourString = "somestring"
yourDLL = ct.CDLL("path/to/dll")  # load the shared library
cppFunc = yourDLL.changeString    # grab the C++ function
cppFunc.restype = ct.c_char_p     # declare the return type as a C string
returnedString = cppFunc(yourString.encode('ascii')).decode()
Now returnedString will have the processed string.
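For completeness, on Linux such a library is typically built with something like g++ -shared -fPIC changeString.cpp -o changeString.so (file names here are illustrative); the resulting .so path is what you pass to ct.CDLL.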

Python "print" not working when embedded into MPI program

I have a Python 3 interpreter embedded into a C++ MPI application. This application loads a script and passes it to the interpreter.
When I execute the program on one process without the MPI launcher (simply calling ./myprogram), the script is executed properly and its "print" statements output to the terminal. When the script has an error, I print it on the C++ side using PyErr_Print().
However, when I launch the program through mpirun (even on a single process), I don't get any output from the "print" in the Python code. I also don't get anything from PyErr_Print() when my script has errors.
I guess there is something in the way Python deals with standard output that does not match the way MPI (actually MPICH here) redirects the processes' output to the launcher and finally to the terminal.
Any idea on how to solve this?
[edit, following the advice from this issue]
You need to call flush_io() after each call to PyErr_Print(), where flush_io could be this function:
void flush_io(void)
{
    PyObject *type, *value, *traceback;
    PyErr_Fetch(&type, &value, &traceback); // in Python/pythonrun.c they save the traceback, let's do the same
    for (auto& s : {"stdout", "stderr"}) {
        PyObject *f = PySys_GetObject(s);
        if (f) PyObject_CallMethod(f, "flush", NULL);
        else PyErr_Clear();
    }
    PyErr_Restore(type, value, traceback);
}
[below is my old analysis; it still has some interesting info]
I ended up with the same issue (PyErr_Print not working from an mpirun). Tracing back (some gdb of python3 involved) and comparing the working case (./myprogram) with the non-working case (mpirun -np 1 ./myprogram), I ended up in _io_TextIOWrapper_write_impl at ./Modules/_io/textio.c:1277 (Python 3.6.0, by the way).
The only difference between the two runs is that self->line_buffering is 1 vs. 0 (at this point self represents sys.stderr).
Then, in pylifecycle.c:1128, we can see who decided this value:

if (isatty || Py_UnbufferedStdioFlag)
    line_buffering = Py_True;
So it seems that MPI does something to stderr before launching the program which makes it not a tty. I haven't investigated whether there's an option in mpirun to keep the tty flag on stderr; if someone knows, it'd be interesting (though on second thought MPI probably has good reasons to put its own file descriptors in place of stdout and stderr, e.g. for its --output-filename option).
With this info, I can come up with three solutions (the first two are quick fixes, the third is better):
1/ In the C code that starts the Python interpreter, set the buffering flag before creating sys.stderr. The code becomes:

Py_UnbufferedStdioFlag = 1; // force line_buffering for _all_ I/O
Py_Initialize();

This brings Python's traceback back to the screen in all situations, but will probably give catastrophic I/O performance, so it is only acceptable in debug mode.
2/ In the embedded Python script, at the very beginning, add this:

import sys
#sys.stderr.line_buffering = True # would be nice, but it's a read-only attribute!
sys.stderr = open("error.log", 'w', buffering=1)

The script then dumps the traceback to this file error.log.
I also tried adding a call to fflush(stderr) or fflush(NULL) right after the PyErr_Print(), but this didn't work (because sys.stderr has its own internal buffering). That would otherwise have been a nice solution.
3/ After a little more digging, I found the perfect function in
Python/pythonrun.c:57: static void flush_io(void);
It is in fact called after each PyErr_Print in that file.
Unfortunately it's static (it only exists in that file; there's no reference to it in Python.h, at least in 3.6.0). I copied the function from that file into myprogram, and it does exactly the job.
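For illustration, a sketch of how the copied flush_io slots into the embedding code; the surrounding script-loading code is assumed, and script_text stands for the script source the application has already loaded. PyRun_String and PyErr_Print are standard C-API calls:

/* run the loaded script and make sure any traceback reaches mpirun's pipes */
PyObject *main_module = PyImport_AddModule("__main__"); /* borrowed reference */
PyObject *globals = PyModule_GetDict(main_module);      /* borrowed reference */

PyObject *result = PyRun_String(script_text, Py_file_input, globals, globals);
if (result == NULL) {
    PyErr_Print(); /* writes the traceback to sys.stderr */
    flush_io();    /* flush sys.stdout/sys.stderr so the output is not lost */
} else {
    Py_DECREF(result);
}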
