Python threads in C - python

I'm writing a multi-threaded program in C. Before creating the threads, a global Python environment is initialized by calling Py_Initialize(). Then every created thread shares that global Python environment, and each thread calls a Python method with parameters converted from C. Everything works well up to this point.
When I use time.sleep() in the loaded Python modules, the C program crashes with a segmentation fault. Furthermore, the loaded Python module is supposed to load another C lib to continue the work. I've written the following trivial counter lib to test it:
# python part, call the counter function
import ctypes
lib = ctypes.cdll.LoadLibrary("./libpycount.so")
for i in xrange(10):
    lib.count()
// C part, dummy countings
#include <stdio.h>
int counter = 1;
void count() {
    printf("counter:%d \n", counter);
    counter++;
}
I suspect it's because I'm not managing the thread creation correctly. I've also found Non-Python created threads in the Python docs.
Any ideas or suggestions?

My problem has been solved. Your problem may be more specific than mine, so I'm writing my solution in a more generic way here. Hope it helps.
- In the main C thread
initialize the Python environment at the very beginning:
/* define a global variable to store the main python thread state */
PyThreadState *mainThreadState = NULL;
if (!Py_IsInitialized())
    Py_Initialize();
mainThreadState = PyThreadState_Get();
Then start the C threads:
pthread_create(&pthread_id, NULL, thread_entrance, NULL);
- In every thread, that is, in the body of the thread_entrance function
prepare the environment:
/*get the lock and create new python thread state*/
PyEval_AcquireLock();
PyInterpreterState * mainInterpreterState = mainThreadState->interp;
PyThreadState * myThreadState = PyThreadState_New(mainInterpreterState);
PyEval_ReleaseLock(); /*don't forget to release the lock*/
/*
* some C manipulations here
*/
put the embedded Python code here:
/*get the lock and put your C-Python code here*/
PyEval_AcquireLock();
PyThreadState_Swap(myThreadState); /*swap your python thread state*/
PyEval_CallObject(py_function, py_arguments);
/*or just something like PyRun_SimpleString("print \"hello world\""); for test*/
PyThreadState_Swap(NULL); /*clean the thread state before leaving*/
PyEval_ReleaseLock();
- back to the main C thread
when every thread has finished its work, finalize the Python environment:
pthread_join(pthread_id, NULL);
PyEval_RestoreThread(mainThreadState);
Py_Finalize();
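For reference, here is a minimal end-to-end sketch that consolidates the steps above into one compilable unit (legacy GIL API, Python 2.x era, error handling omitted). Note that it also calls PyEval_InitThreads() and releases the GIL in the main thread before spawning the workers, and it clears and deletes each per-thread state afterwards; the snippets above leave those steps implicit. Treat it as a sketch, not drop-in code:
#include <Python.h>
#include <pthread.h>

static PyThreadState *mainThreadState = NULL;

static void *thread_entrance(void *arg)
{
    PyThreadState *myThreadState;

    /* create a thread state for this non-Python thread */
    PyEval_AcquireLock();
    myThreadState = PyThreadState_New(mainThreadState->interp);
    PyEval_ReleaseLock();

    /* ... some C manipulations here ... */

    /* run Python code under the GIL with our own thread state */
    PyEval_AcquireLock();
    PyThreadState_Swap(myThreadState);
    PyRun_SimpleString("import time\ntime.sleep(1)\n");
    PyThreadState_Swap(NULL);
    PyEval_ReleaseLock();

    /* tear the thread state down */
    PyEval_AcquireLock();
    PyThreadState_Clear(myThreadState);
    PyThreadState_Delete(myThreadState);
    PyEval_ReleaseLock();
    return NULL;
}

int main(void)
{
    pthread_t tid;

    Py_Initialize();
    PyEval_InitThreads();                  /* creates and acquires the GIL */
    mainThreadState = PyEval_SaveThread(); /* release the GIL for the workers */

    pthread_create(&tid, NULL, thread_entrance, NULL);
    pthread_join(tid, NULL);

    PyEval_RestoreThread(mainThreadState); /* re-acquire the GIL before finalizing */
    Py_Finalize();
    return 0;
}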

The question is whether the Python interpreter is thread-safe. This is what the documentation says about running multiple interpreters in the same process space:
Bugs and caveats: Because sub-interpreters (and the main interpreter)
are part of the same process, the insulation between them isn't
perfect -- for example, using low-level file operations like
os.close() they can (accidentally or maliciously) affect each other's
open files. Because of the way extensions are shared between
(sub-)interpreters, some extensions may not work properly; this is
especially likely when the extension makes use of (static) global
variables, or when the extension manipulates its module's dictionary
after its initialization. It is possible to insert objects created in
one sub-interpreter into a namespace of another sub-interpreter; this
should be done with great care to avoid sharing user-defined
functions, methods, instances or classes between sub-interpreters,
since import operations executed by such objects may affect the wrong
(sub-)interpreter's dictionary of loaded modules. (XXX This is a
hard-to-fix bug that will be addressed in a future release.)
...and I don't think that Python threads are the same thing as the native threads found in C/C++.

Related

How to jump into Python code called from C++ through the GDB debugger?

I have a C++ project which was written by others, which calls python code at the end of the execution.
In the C++ initializer, it defines a pHandle:
pHandle_ = PyObject_CallObject(pLoad_, NULL);
PyGILState_Release(gstate);
Then it calls python code this way:
PyObject *pyValue = PyObject_CallObject(pProcess_, pArgs);
pProcess_ and pArgs are created earlier. The python code's file name is 'runLogic.py' and the function executed in runLogic.py is 'process()'.
Is there a way to break into Python's process() function while I debug the C++ code through GDB? In the C++ code, I can step through until the line above, copied again below:
PyObject *pyValue = PyObject_CallObject(pProcess_, pArgs);
Then I don't know how to jump into the Python function.
Is there a way to do that? I want to trace the full logic of the code, in addition to C++'s code.
Then I don't know how to jump into the Python function.
Presumably you want to now step through Python code.
GDB doesn't know how to do that, but pdb can.
Assuming you can modify runLogic.py, add import pdb to the top of the file, and pdb.set_trace() in the process() function.
This will let you step through Python code. If you have breakpoints in C++ and the Python code calls back into C++, those breakpoints will still be active, so you should be able to debug both C++ and Python.

How can I start a Python thread FROM C++?

Note that I'm constrained to use Python 2.6. I have a Python 2.6 application that uses a C++ multi-threaded API library built with boost-python. My use case is simply to execute a Python callback function from a C++ Boost thread, but despite many different attempts and researching all the available online resources I haven't found a way that works. All the proposed solutions revolve around different combinations of the functions Py_Initialize*, PyEval_InitThreads, PyGILState_Ensure, and PyGILState_Release, but after trying all possible combinations nothing works in practice, e.g.:
Embedding Python in multi-threaded C++ applications with code here
PyEval_InitThreads in Python 3: How/when to call it? (the saga continues ad nauseum)
Therefore, this question: how can I start and run a Python thread from C++? I basically want to: create it, run it with a Python target function object and forget about it.
Is that possible?
Based on the text below from your question:
Run a Python thread from C++? I basically want to: create it, run it with a Python target function object and forget about it.
You may find it useful to simply spawn a process using system():
system("python myscript.py")
And if you need to include arguments:
std::string args = "arg1 arg2 arg3 ... argn";
system(("python myscript.py " + args).c_str());
You should call PyEval_InitThreads from your init routine (and leave the GIL held). Then, you can spawn a pthread (using boost), and in that thread, call PyGILState_Ensure, then call your python callback (PyObject_CallFunction?), release any returned value or deal with any error (PyErr_Print?), release the GIL with PyGILState_Release, and let the thread die.
void *my_thread(void *arg)
{
    PyGILState_STATE gstate;
    PyObject *result;

    gstate = PyGILState_Ensure();                       /* acquire the GIL for this thread */
    if ((result = PyObject_CallFunction(func, NULL))) { /* func: the Python callback object */
        Py_DECREF(result);
    }
    else {
        PyErr_Print();
    }
    PyGILState_Release(gstate);
    return NULL;
}
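To complement the thread function above, here is a sketch of what the init/spawn side could look like, assuming the code lives in an extension module (e.g. wrapped with boost-python) and is therefore entered with the GIL already held. The name start_worker and the module wiring are hypothetical; func is the callback object used by my_thread() above:
#include <Python.h>
#include <pthread.h>

static PyObject *func;        /* the Python callable used by my_thread() above */

static PyObject *
start_worker(PyObject *self, PyObject *args)
{
    PyObject *callback;
    pthread_t tid;

    if (!PyArg_ParseTuple(args, "O", &callback))
        return NULL;
    Py_XINCREF(callback);
    func = callback;          /* keep a reference for the worker thread */

    PyEval_InitThreads();     /* no-op if already called; ensures the GIL machinery exists */
    pthread_create(&tid, NULL, my_thread, NULL);
    pthread_detach(tid);      /* "create it, run it ... and forget about it" */

    Py_RETURN_NONE;
}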
The answer to the question How to call Python from a boost thread? also answers this one. It clearly demonstrates how to start a thread from C++ that calls back into Python. I have tested it and it works perfectly, though it needs adaptation for pre-C++11. It uses Boost.Python, is an all-inclusive answer, and the example source code is here.

Python "print" not working when embedded into MPI program

I have a Python 3 interpreter embedded into a C++ MPI application. This application loads a script and passes it to the interpreter.
When I execute the program on 1 process without the MPI launcher (simply calling ./myprogram), the script is executed properly and its "print" statements output to the terminal. When the script has an error, I print it on the C++ side using PyErr_Print().
However, when I launch the program through mpirun (even on a single process), I don't get any output from the "print" calls in the Python code. I also don't get anything from PyErr_Print() when my script has errors.
I guess there is something in the way Python deals with standard output that does not match the way MPI (actually MPICH here) redirects the processes' output to the launcher and finally to the terminal.
Any idea on how to solve this?
[edit, following the advice from this issue]
You need to flush_io() after each call to PyErr_Print, where flush_io could be this function:
void flush_io(void)
{
    PyObject *type, *value, *traceback;
    PyErr_Fetch(&type, &value, &traceback); // in Python/pythonrun.c they save the traceback, let's do the same
    for (auto& s: {"stdout", "stderr"}) {
        PyObject *f = PySys_GetObject(s);
        if (f) PyObject_CallMethod(f, "flush", NULL);
        else PyErr_Clear();
    }
    PyErr_Restore(type, value, traceback);
}
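The call site then looks roughly like this (py_function and py_args are placeholders for however the embedded script is actually invoked in your application):
PyObject *result = PyObject_CallObject(py_function, py_args);
if (result == NULL) {
    PyErr_Print();   /* writes the traceback to sys.stderr ... */
    flush_io();      /* ... and this pushes it out through MPI's redirected pipes */
} else {
    Py_DECREF(result);
}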
[below my old analysis, it still has some interesting info]
I ended up with the same issue (PyErr_Print not working from an mpirun). Tracing back (some gdb of python3 involved) and comparing the working thing (./myprogram) and non-working thing (mpirun -np 1 ./myprogram), I ended up in _io_TextIOWrapper_write_impl at ./Modules/_io/textio.c:1277 (python-3.6.0 by the way).
The only difference between the 2 runs is that self->line_buffering is 1 vs. 0 (at this point self represents sys.stderr).
Then, in pylifecycle.c:1128, we can see who decided this value:
if (isatty || Py_UnbufferedStdioFlag)
    line_buffering = Py_True;
So it seems that MPI does something to stderr before launching the program which makes it not a tty. I haven't investigated whether there's an option in mpirun to keep the tty flag on stderr ... if someone knows, it'd be interesting (though on second thought MPI probably has good reasons to put its own file descriptors in place of stdout & stderr, for its --output-filename option for example).
With this info, I can come up with 3 solutions (the first 2 are quick fixes, the 3rd is better):
1/ in the C code that starts the Python interpreter, set the buffering flag before creating sys.stderr. The code becomes:
Py_UnbufferedStdioFlag = 1; // force line_buffering for _all_ I/O
Py_Initialize();
This brings Python's traceback back to the screen in all situations, but will probably give catastrophic I/O performance ... so it's only an acceptable solution in debug mode.
2/ in the python (embedded) script, at the very beginning add this :
import sys
#sys.stderr.line_buffering = True # would be nice, but readonly attribute !
sys.stderr = open("error.log", 'w', buffering=1)
The script then dumps the traceback to this file error.log.
I also tried adding a call to fflush(stderr) or fflush(NULL) right after the PyErr_Print() ... but this didn't work (because sys.stderr has its own internal buffering). That'd be a nice solution though.
3/ After a little more digging, I found the perfect function in
Python/pythonrun.c:57:static void flush_io(void);
It is in fact called after each PyErr_Print in this file.
Unfortunately it's static (it only exists in that file; there's no reference to it in Python.h, at least in 3.6.0). I copied the function from that file into myprogram and it does exactly the job.

Why does Python compile modules but not the script being run?

Why does Python compile libraries that are used in a script, but not the script being called itself?
For instance,
If there is main.py and module.py, and Python is run by doing python main.py, there will be a compiled file module.pyc but not one for main. Why?
Edit
Adding bounty. I don't think this has been properly answered.
If the response is potential disk permissions for the directory of main.py, why does Python compile modules? They are just as likely (if not more likely) to appear in a location where the user does not have write access. Python could compile main if it is writable, or alternatively in another directory.
If the reason is that benefits will be minimal, consider the situation when the script will be used a large number of times (such as in a CGI application).
Files are compiled upon import. It isn't a security thing. It is simply that when you import a module, Python saves the compiled output. See this post by Fredrik Lundh on Effbot.
>>> import main
# main.pyc is created
When running a script, Python will not use the *.pyc file.
If you have some other reason you want your script pre-compiled you can use the compileall module.
python -m compileall .
compileall Usage
python -m compileall --help
option --help not recognized
usage: python compileall.py [-l] [-f] [-q] [-d destdir] [-x regexp] [directory ...]
-l: don't recurse down
-f: force rebuild even if timestamps are up-to-date
-q: quiet operation
-d destdir: purported directory name for error messages
if no directory arguments, -l sys.path is assumed
-x regexp: skip files matching the regular expression regexp
the regexp is searched for in the full path of the file
Answers to Question Edit
If the response is potential disk permissions for the directory of main.py, why does Python compile modules?
Modules and scripts are treated the same. Importing is what triggers the output to be saved.
If the reason is that benefits will be minimal, consider the situation when the script will be used a large number of times (such as in a CGI application).
Using compileall does not solve this.
Scripts executed by Python will not use the *.pyc file unless it is invoked explicitly. This has negative side effects, well stated by Glenn Maynard in his answer.
The example given of a CGI application should really be addressed by using a technique like FastCGI. If you want to eliminate the overhead of compiling your script you may want eliminate the overhead of starting up python too, not to mention database connection overhead.
A light bootstrap script can be used or even python -c "import script", but these have questionable style.
Glenn Maynard provided some inspiration to correct and improve this answer.
Nobody seems to want to say this, but I'm pretty sure the answer is simply: there's no solid reason for this behavior.
All of the reasons given so far are essentially incorrect:
There's nothing special about the main file. It's loaded as a module, and shows up in sys.modules like any other module. Running a main script is nothing more than importing it with a module name of __main__.
There's no problem with failing to save .pyc files due to read-only directories; Python simply ignores it and moves on.
The benefit of caching a script is the same as that of caching any module: not wasting time recompiling the script every time it's run. The docs acknowledge this explicitly ("Thus, the startup time of a script may be reduced ...").
Another issue to note: if you run python foo.py and foo.pyc exists, it will not be used. You have to explicitly say python foo.pyc. That's a very bad idea: it means Python won't automatically recompile the .pyc file when it's out of sync (due to the .py file changing), so changes to the .py file won't be used until you manually recompile it. It'll also fail outright with a RuntimeError if you upgrade Python and the .pyc file format is no longer compatible, which happens regularly. Normally, this is all handled transparently.
You shouldn't need to move a script to a dummy module and set up a bootstrapping script to trick Python into caching it. That's a hackish workaround.
The only possible (and very unconvincing) reason I can contrive is to avoid cluttering your home directory with a bunch of .pyc files. (This isn't a real reason; if that were an actual concern, then .pyc files should be saved as dotfiles.) It's certainly no reason not to even have an option to do this.
Python should definitely be able to cache the main module.
Pedagogy
I love and hate questions like this on SO, because there's a complex mixture of emotion, opinion, and educated guessing going on and people start to get snippy, and somehow everybody loses track of the actual facts and eventually loses track of the original question altogether.
Many technical questions on SO have at least one definitive answer (e.g. an answer that can be verified by execution or an answer that cites an authoritative source) but these "why" questions often do not have just a single, definitive answer. In my mind, there are 2 possible ways to definitively answer a "why" question in computer science:
By pointing to the source code that implements the item of concern. This explains "why" in a technical sense: what preconditions are necessary to evoke this behavior?
By pointing to human-readable artifacts (comments, commit messages, email lists, etc.) written by the developers involved in making that decision. This is the real sense of "why" that I assume the OP is interested in: why did Python's developers make this seemingly arbitrary decision?
The second type of answer is more difficult to corroborate, since it requires getting in the mind of the developers who wrote the code, especially if there's no easy-to-find, public documentation explaining a particular decision.
To date, this thread has 7 answers that solely focus on reading the intent of Python's developers and yet there is only one citation in the whole batch. (And it cites a section of the Python manual that does not answer the OP's question.)
Here's my attempt at answering both of the sides of the "why" question along with citations.
Source Code
What are the preconditions that trigger compilation of a .pyc? Let's look at the source code. (Annoyingly, the Python on GitHub doesn't have any release tags, so I'll just tell you that I'm looking at 715a6e.)
There is promising code in import.c:989 in the load_source_module() function. I've cut out some bits here for brevity.
static PyObject *
load_source_module(char *name, char *pathname, FILE *fp)
{
    // snip...
    if (/* Can we read a .pyc file? */) {
        /* Then use the .pyc file. */
    }
    else {
        co = parse_source_module(pathname, fp);
        if (co == NULL)
            return NULL;
        if (Py_VerboseFlag)
            PySys_WriteStderr("import %s # from %s\n",
                              name, pathname);
        if (cpathname) {
            PyObject *ro = PySys_GetObject("dont_write_bytecode");
            if (ro == NULL || !PyObject_IsTrue(ro))
                write_compiled_module(co, cpathname, &st);
        }
    }
    m = PyImport_ExecCodeModuleEx(name, (PyObject *)co, pathname);
    Py_DECREF(co);
    return m;
}
pathname is the path to the module and cpathname is the same path but with a .pyc extension. The only direct logic is the boolean sys.dont_write_bytecode. The rest of the logic is just error handling. So the answer we seek isn't here, but we can at least see that any code that calls this will result in a .pyc file under most default configurations. The parse_source_module() function has no real relevance to the flow of execution, but I'll show it here because I'll come back to it later.
static PyCodeObject *
parse_source_module(const char *pathname, FILE *fp)
{
    PyCodeObject *co = NULL;
    mod_ty mod;
    PyCompilerFlags flags;
    PyArena *arena = PyArena_New();
    if (arena == NULL)
        return NULL;
    flags.cf_flags = 0;
    mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, &flags,
                               NULL, arena);
    if (mod) {
        co = PyAST_Compile(mod, pathname, NULL, arena);
    }
    PyArena_Free(arena);
    return co;
}
The salient aspect here is that the function parses and compiles a file and returns a pointer to the byte code (if successful).
Now we're still at a dead end, so let's approach this from a new angle. How does Python load its argument and execute it? In pythonrun.c there are a few functions for loading code from a file and executing it. PyRun_AnyFileExFlags() can handle both interactive and non-interactive file descriptors. For interactive file descriptors, it delegates to PyRun_InteractiveLoopFlags() (this is the REPL) and for non-interactive file descriptors, it delegates to PyRun_SimpleFileExFlags(). PyRun_SimpleFileExFlags() checks if the filename ends in .pyc. If it does, then it calls run_pyc_file(), which directly loads compiled byte code from a file descriptor and then runs it.
In the more common case (i.e. .py file as an argument), PyRun_SimpleFileExFlags() calls PyRun_FileExFlags(). This is where we start to find our answer.
PyObject *
PyRun_FileExFlags(FILE *fp, const char *filename, int start, PyObject *globals,
                  PyObject *locals, int closeit, PyCompilerFlags *flags)
{
    PyObject *ret;
    mod_ty mod;
    PyArena *arena = PyArena_New();
    if (arena == NULL)
        return NULL;
    mod = PyParser_ASTFromFile(fp, filename, start, 0, 0,
                               flags, NULL, arena);
    if (closeit)
        fclose(fp);
    if (mod == NULL) {
        PyArena_Free(arena);
        return NULL;
    }
    ret = run_mod(mod, filename, globals, locals, flags, arena);
    PyArena_Free(arena);
    return ret;
}

static PyObject *
run_mod(mod_ty mod, const char *filename, PyObject *globals, PyObject *locals,
        PyCompilerFlags *flags, PyArena *arena)
{
    PyCodeObject *co;
    PyObject *v;
    co = PyAST_Compile(mod, filename, flags, arena);
    if (co == NULL)
        return NULL;
    v = PyEval_EvalCode(co, globals, locals);
    Py_DECREF(co);
    return v;
}
The salient point here is that these two functions basically perform the same purpose as the importer's load_source_module() and parse_source_module(). It calls the parser to create an AST from Python source code and then calls the compiler to create byte code.
So are these blocks of code redundant or do they serve different purposes? The difference is that one block loads a module from a file, while the other block takes a module as an argument. That module argument is — in this case — the __main__ module, which is created earlier in the initialization process using a low-level C function. The __main__ module doesn't go through most of the normal module import code paths because it is so unique, and as a side effect, it doesn't go through the code that produces .pyc files.
To summarize: the reason why the __main__ module isn't compiled to .pyc is that it isn't "imported". Yes, it appears in sys.modules, but it gets there via a very different code path than real module imports take.
Developer Intent
Okay, so we can now see that the behavior has more to do with the design of Python than with any clearly expressed rationale in the source code, but that doesn't answer the question of whether this is an intentional decision or just a side effect that doesn't bother anybody enough to be worth changing. One of the benefits of open source is that once we've found the source code that interests us, we can use the VCS to help trace back to the decisions that led to the present implementation.
One of the pivotal lines of code here (m = PyImport_AddModule("__main__");) dates back to 1990 and was written by the BDFL himself, Guido. It has been modified in intervening years, but the modifications are superficial. When it was first written, the main module for a script argument was initialized like this:
int
run_script(fp, filename)
    FILE *fp;
    char *filename;
{
    object *m, *d, *v;
    m = add_module("__main__");
    if (m == NULL)
        return -1;
    d = getmoduledict(m);
    v = run_file(fp, filename, file_input, d, d);
    flushline();
    if (v == NULL) {
        print_error();
        return -1;
    }
    DECREF(v);
    return 0;
}
This existed before .pyc files were even introduced into Python! Small wonder that the design at that time didn't take compilation into account for script arguments. The commit message enigmatically says:
"Compiling" version
This was one of several dozen commits over a 3 day period... it appears that Guido was deep into some hacking/refactoring and this was the first version that got back to being stable. This commit even predates the creation of the Python-Dev mailing list by about five years!
Saving the compiled bytecode was introduced 6 months later, in 1991.
This still predates the mailing list, so we have no real idea of what Guido was thinking. It appears that he simply thought that the importer was the best place to hook into for the purpose of caching bytecodes. Whether he considered the idea of doing the same for __main__ is unclear: either it didn't occur to him, or else he thought that it was more trouble than it was worth.
I can't find any bugs on bugs.python.org that are related to caching the bytecodes for the main module, nor can I find any messages on the mailing list about it, so apparently nobody else thinks it's worth the trouble to try adding it.
To summarize: the reason why all modules are compiled to .pyc except __main__ is that it's a quirk of history. The design and implementation for how __main__ works was baked into the code before .pyc files even existed. If you want to know more than that, you'll need to e-mail Guido and ask.
Glenn Maynard's answer says:
Nobody seems to want to say this, but I'm pretty sure the answer is simply: there's no solid reason for this behavior.
I agree 100%. There's circumstantial evidence to support this theory and nobody else in this thread has provided a single shred of evidence to support any other theory. I upvoted Glenn's answer.
To answer your question, refer to section 6.1.3, "Compiled" Python files, in the official Python documentation.
When a script is run by giving its name on the command line, the bytecode for the script is never written to a .pyc or .pyo file. Thus, the startup time of a script may be reduced by moving most of its code to a module and having a small bootstrap script that imports that module. It is also possible to name a .pyc or .pyo file directly on the command line.
Since:
A program doesn’t run any faster when it is read from a .pyc or .pyo file than when it is read from a .py file; the only thing that’s faster about .pyc or .pyo files is the speed with which they are loaded.
So it is unnecessary to generate a .pyc file for the main script. Only the libraries, which might be loaded many times, should be compiled.
Edited:
It seems you didn't get my point. First, the whole idea of compiling into a .pyc file is to make the same file execute faster the second time. However, consider what would happen if Python did compile the script being run: the interpreter would write bytecode into a .pyc file on the first run, and that takes time, so the first run would actually be a bit slower. You might argue that it will run faster afterwards. Well, it's just a choice. Plus, as this says:
Explicit is better than implicit.
If one wants a speedup by using a .pyc file, one should compile it manually and run the .pyc file explicitly.
Because the script being run may be somewhere where it is inappropriate to generate .pyc files, such as /usr/bin.
Because different versions of Python (3.6, 3.7 ...) have different bytecode representations, and trying to design a compile system for that was deemed too complicated. PEP 3147 discusses the rationale.

Is the PyThreadState* of the main python thread expected to be NULL?

I have a Python program that calls into a C++ library, which wishes to release all the Python locks so that other Python threads can run.
Using PyEval_SaveThread and PyEval_ReleaseThread I get errors that the thread state is NULL:
Fatal Python error: PyEval_SaveThread: NULL tstate
However, the lower-level functions seem to accept the NULL state happily:
PyThreadState *s;
s = PyThreadState_Swap(NULL);
// (now s = 0)
PyEval_ReleaseLock();
// ....
PyEval_AcquireLock();
PyThreadState_Swap(s);
// everything seems to be just fine :)
Answer: no, it is never meant to be NULL (if it is, it's a fatal error). It turned out this was because I was linking against two different versions of Python, one via boost_python and the other directly.
Top Tip:
use ldd or otool -L to check your library dependencies when funny stuff happens ;)
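And for completeness, since the original goal was to release the Python locks around a long C++ computation, here is a sketch of the standard pattern once the duplicate-interpreter linking problem is fixed (the thread state handled by PyEval_SaveThread must never be NULL):
PyThreadState *save = PyEval_SaveThread();  /* releases the GIL; returns the current (non-NULL) state */
/* ... long-running C/C++ work that does not touch Python objects ... */
PyEval_RestoreThread(save);                 /* re-acquires the GIL and restores the state */

/* or, equivalently, using the convenience macros inside a function: */
Py_BEGIN_ALLOW_THREADS
/* ... long-running work ... */
Py_END_ALLOW_THREADS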
