Getting around undefined symbol: _strnicmp - python

I have a library which is a Python wrapper for a C/C++ library, and it needs to be imported into Python on Linux. Let's call the library abc.so. This library depends on another C/C++ library: xyz. Both libraries have (or had) facilities that depend on Windows, the Borland compiler, or similar compilers. I was able to build abc.so successfully after fixing some of the Windows compiler related issues. However, I cannot import it into my Python code. I receive the error:
ImportError: /usr/local/lib/abc.so: undefined symbol: _strnicmp
or a variant of it. I tried various import methods involving packages like ctypes, os and sys, and flags like RTLD_LAZY, RTLD_GLOBAL and RTLD_NOW, under the assumption that the method of import would fix this problem. However, none of them worked. This answer: undefined reference to stricmp (and the comment above it) suggests that strnicmp should be replaced. It also points out that this is a link-time error. However, I have not been able to identify which part of these libraries expects an implementation of strnicmp. What would be a good approach to find the source of this issue? And should I be trying some alternative path to fix it?

The stricmp() and strnicmp() functions are specific to Windows.
POSIX (Linux) uses the <strings.h> header and strcasecmp() and strncasecmp().
You can write a simple cover function or change the calls via a macro. For the cover function, you'd want appropriate conditional compilation (#ifdef SOMETHING / #endif, or maybe #ifndef SOMETHING / #endif). You might use an inline function if it only appears in one file. If it appears in many files, you may want a regular function, even though it's a smidgeon less efficient. If there's a convenient header included by all the affected files that you can modify to add (conditionally) the static inline function definition, that can work well; it's also about the only time you'd normally add a function body to a header file.
To find which objects still expect strnicmp, run nm -D on each shared library and look for the symbol marked U (undefined), or grep the sources for strnicmp.
#include <strings.h>  /* strncasecmp() and size_t on POSIX */

#ifndef _WIN32  /* only define this where the Windows name is missing */
static inline int strnicmp(const char *s1, const char *s2, size_t len)
{
    return strncasecmp(s1, s2, len);
}
#endif
or
#undef strnicmp
#define strnicmp(s1, s2, len) strncasecmp(s1, s2, len)
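If rebuilding every dependent object isn't practical, another workaround is to preload a small shim library that exports strnicmp, using RTLD_GLOBAL so the symbol is already available when the extension is loaded. This is only a sketch: the shim path and module name below are assumptions for illustration.
import ctypes

# Preload a hypothetical shim library that defines strnicmp() in terms
# of strncasecmp(). RTLD_GLOBAL makes its symbols available to the
# dynamic linker when abc.so is loaded afterwards.
ctypes.CDLL("/usr/local/lib/strnicmp_shim.so", mode=ctypes.RTLD_GLOBAL)

import abc_wrapper  # hypothetical name of the module backed by abc.so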

Related

Loading C library in Python that dlopens another C library - unresolved shared symbol

I have a commercial C library (a.so) that has several functions. When you call its open() function, it performs a dlopen() call for another library dynamically: calling a.open('b') opens b.so, and calling a.open('c') opens c.so.
The problem is that a.so and b.so share a global variable that is defined in a.so but referenced by b.so (and c.so, etc.). I am able to load a.so correctly in Python using ctypes and see all its symbols. However, when I call a.open('b'), the attempt to load b.so fails with an undefined symbol error.
//a.c - source for a.so library
#include <dlfcn.h>

int aglobal = 0;
void open(char* lib)
{ dlopen(lib, RTLD_LAZY); }

//b.c - source for b.so library
extern int aglobal;
Here is my Python code to load it:
import ctypes
p = ctypes.CDLL('a.so')
p.open('b')
which fails with the error: undefined symbol: aglobal
Some other notes:
files are linked with -fPIC -rdynamic -shared
When I write a C program that does the same as the Python program, there is no problem.
I've also tried wrapping the library with SWIG, and numerous other things, build options, etc., but with the same results.
Is Python binding the symbols differently or something?
You need to use RTLD_GLOBAL when you load a.so. The POSIX specification describes the flag like this:
The object's symbols shall be made available for the relocation processing of any other object. In addition, symbol lookup using dlopen(0, mode) and an associated dlsym() allows objects loaded with this mode to be searched.
The Linux man page is a bit more straightforward:
The symbols defined by this shared object will be made
available for symbol resolution of subsequently loaded shared
objects.
The Python documentation notes the existence of the option but does not describe what it does:
ctypes.RTLD_GLOBAL
Flag to use as mode parameter. On platforms where this flag is not available, it is defined as the integer zero.
ctypes.RTLD_LOCAL
Flag to use as mode parameter. On platforms where this is not available, it is the same as RTLD_GLOBAL.
ctypes.DEFAULT_MODE
The default mode which is used to load shared libraries. On OSX 10.3, this is RTLD_GLOBAL, otherwise it is the same as RTLD_LOCAL.
From the documentation, it appears you are on a system where DEFAULT_MODE is the same as RTLD_LOCAL, which is the opposite of RTLD_GLOBAL.
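Here is a minimal sketch of the fix, using the question's hypothetical library name:
import ctypes

# Load a.so with RTLD_GLOBAL so that aglobal (defined in a.so) is
# visible when a.open() later dlopens b.so.
p = ctypes.CDLL('a.so', mode=ctypes.RTLD_GLOBAL)
p.open(b'b')  # bytes on Python 3; a plain str works on Python 2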

C++ compilation from Python fails but not in the shell

I have a cpp file that compiles fine with g++ when run from the shell:
extern "C"{
#include <quadmath.h>
}
inline char* print( const __float128& val)
{
char* buffer = new char[128];
quadmath_snprintf(buffer,128,"%+.34Qe", val);
return buffer;
}
int main(){
__float128 a = 1.0;
print(a);
return 0;
}
However, when I try to compile it via a Python script, it fails with the following error:
"undefined reference to quadmath_snprintf"
Here the code of the python script:
import commands
import string
import os
(status, output) = commands.getstatusoutput("(g++ test/*.c -O3 -lquadmath -m64)")
Any idea how to solve this? Thanks.
When you open a shell, a whole lot of stuff is silently initialized for you and, most importantly for your issue, environment variables are set. What you are most likely missing is the definition of LIBRARY_PATH, the variable the linker uses to look for libraries matching the ones you instruct it to link using the -lNAME flags.
What the linker needs is a list of directories where it will search for files matching libNAME.{a,so}. You can also pass these directories directly using the -L flag, but in general, you should probably try to use a program like CMake, Make or any other build tool.
This will give you access to commands like find_package and target_link_libraries (CMake) to find libraries and add them to your build targets, instead of having to maintain your own Python script to compile your code.
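As an illustration, here is a sketch using the modern subprocess module, expanding the glob in Python and passing the library directory explicitly with -L (the directory shown is an assumption; substitute wherever libquadmath lives on your system):
import glob
import subprocess

# The shell would normally expand test/*.c; without a shell we do it here.
sources = glob.glob("test/*.c")
cmd = ["g++", *sources, "-O3", "-m64", "-lquadmath",
       "-L/usr/local/lib"]  # assumed directory containing libquadmath.so
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode)
print(result.stderr)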

Why does Python compile modules but not the script being run?

Why does Python compile libraries that are used in a script, but not the script being called itself?
For instance,
If there is main.py and module.py, and Python is run by doing python main.py, there will be a compiled file module.pyc but not one for main. Why?
Edit
Adding bounty. I don't think this has been properly answered.
If the response is potential disk permissions for the directory of main.py, why does Python compile modules? They are just as likely (if not more likely) to appear in a location where the user does not have write access. Python could compile main if it is writable, or alternatively in another directory.
If the reason is that benefits will be minimal, consider the situation when the script will be used a large number of times (such as in a CGI application).
Files are compiled upon import. It isn't a security thing. It is simply that if you import a module, Python saves the compiled output. See this post by Fredrik Lundh on Effbot.
>>> import main
# main.pyc is created
When running a script, Python will not use the *.pyc file.
If you have some other reason you want your script pre-compiled you can use the compileall module.
python -m compileall .
compileall Usage
python -m compileall --help
option --help not recognized
usage: python compileall.py [-l] [-f] [-q] [-d destdir] [-x regexp] [directory ...]
-l: don't recurse down
-f: force rebuild even if timestamps are up-to-date
-q: quiet operation
-d destdir: purported directory name for error messages
if no directory arguments, -l sys.path is assumed
-x regexp: skip files matching the regular expression regexp
the regexp is searched for in the full path of the file
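compileall can also be driven from Python code rather than the command line, for example:
import compileall

# Compile every .py file below the current directory, even if existing
# bytecode is up to date.
compileall.compile_dir('.', force=True)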
Answers to Question Edit
If the response is potential disk permissions for the directory of main.py, why does Python compile modules?
Modules and scripts are treated the same. Importing is what triggers the output to be saved.
If the reason is that benefits will be minimal, consider the situation when the script will be used a large number of times (such as in a CGI application).
Using compileall does not solve this.
Scripts executed by Python will not use the *.pyc file unless it is invoked explicitly. This has negative side effects, well stated by Glenn Maynard in his answer.
The example given of a CGI application should really be addressed by using a technique like FastCGI. If you want to eliminate the overhead of compiling your script you may want eliminate the overhead of starting up python too, not to mention database connection overhead.
A light bootstrap script can be used or even python -c "import script", but these have questionable style.
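For instance, such a bootstrap might look like this (real_main is a hypothetical module name):
# main.py - thin launcher; the real code lives in real_main.py, which
# is compiled and cached like any other imported module.
import real_main

real_main.main()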
Glenn Maynard provided some inspiration to correct and improve this answer.
Nobody seems to want to say this, but I'm pretty sure the answer is simply: there's no solid reason for this behavior.
All of the reasons given so far are essentially incorrect:
There's nothing special about the main file. It's loaded as a module, and shows up in sys.modules like any other module. Running a main script is nothing more than importing it with a module name of __main__.
There's no problem with failing to save .pyc files due to read-only directories; Python simply ignores it and moves on.
The benefit of caching a script is the same as that of caching any module: not wasting time recompiling the script every time it's run. The docs acknowledge this explicitly ("Thus, the startup time of a script may be reduced ...").
Another issue to note: if you run python foo.py and foo.pyc exists, it will not be used. You have to explicitly say python foo.pyc. That's a very bad idea: it means Python won't automatically recompile the .pyc file when it's out of sync (due to the .py file changing), so changes to the .py file won't be used until you manually recompile it. It'll also fail outright with a RuntimeError if you upgrade Python and the .pyc file format is no longer compatible, which happens regularly. Normally, this is all handled transparently.
You shouldn't need to move a script to a dummy module and set up a bootstrapping script to trick Python into caching it. That's a hackish workaround.
The only possible (and very unconvincing) reason I can contrive is to avoid cluttering your home directory with a bunch of .pyc files. (This isn't a real reason; if that were an actual concern, then .pyc files should be saved as dotfiles.) It's certainly no reason not to even have an option to do this.
Python should definitely be able to cache the main module.
Pedagogy
I love and hate questions like this on SO, because there's a complex mixture of emotion, opinion, and educated guessing going on and people start to get snippy, and somehow everybody loses track of the actual facts and eventually loses track of the original question altogether.
Many technical questions on SO have at least one definitive answer (e.g. an answer that can be verified by execution or an answer that cites an authoritative source) but these "why" questions often do not have just a single, definitive answer. In my mind, there are 2 possible ways to definitively answer a "why" question in computer science:
By pointing to the source code that implements the item of concern. This explains "why" in a technical sense: what preconditions are necessary to evoke this behavior?
By pointing to human-readable artifacts (comments, commit messages, email lists, etc.) written by the developers involved in making that decision. This is the real sense of "why" that I assume the OP is interested in: why did Python's developers make this seemingly arbitrary decision?
The second type of answer is more difficult to corroborate, since it requires getting in the mind of the developers who wrote the code, especially if there's no easy-to-find, public documentation explaining a particular decision.
To date, this thread has 7 answers that solely focus on reading the intent of Python's developers and yet there is only one citation in the whole batch. (And it cites a section of the Python manual that does not answer the OP's question.)
Here's my attempt at answering both of the sides of the "why" question along with citations.
Source Code
What are the preconditions that trigger compilation of a .pyc? Let's look at the source code. (Annoyingly, the Python repository on GitHub doesn't have any release tags, so I'll just tell you that I'm looking at commit 715a6e.)
There is promising code in import.c:989 in the load_source_module() function. I've cut out some bits here for brevity.
static PyObject *
load_source_module(char *name, char *pathname, FILE *fp)
{
    // snip...

    if (/* Can we read a .pyc file? */) {
        /* Then use the .pyc file. */
    }
    else {
        co = parse_source_module(pathname, fp);
        if (co == NULL)
            return NULL;
        if (Py_VerboseFlag)
            PySys_WriteStderr("import %s # from %s\n",
                              name, pathname);
        if (cpathname) {
            PyObject *ro = PySys_GetObject("dont_write_bytecode");
            if (ro == NULL || !PyObject_IsTrue(ro))
                write_compiled_module(co, cpathname, &st);
        }
    }
    m = PyImport_ExecCodeModuleEx(name, (PyObject *)co, pathname);
    Py_DECREF(co);
    return m;
}
pathname is the path to the module and cpathname is the same path but with a .pyc extension. The only direct logic is the boolean sys.dont_write_bytecode. The rest of the logic is just error handling. So the answer we seek isn't here, but we can at least see that any code that calls this will result in a .pyc file under most default configurations. The parse_source_module() function has no real relevance to the flow of execution, but I'll show it here because I'll come back to it later.
static PyCodeObject *
parse_source_module(const char *pathname, FILE *fp)
{
    PyCodeObject *co = NULL;
    mod_ty mod;
    PyCompilerFlags flags;
    PyArena *arena = PyArena_New();
    if (arena == NULL)
        return NULL;

    flags.cf_flags = 0;
    mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, &flags,
                               NULL, arena);
    if (mod) {
        co = PyAST_Compile(mod, pathname, NULL, arena);
    }
    PyArena_Free(arena);
    return co;
}
The salient aspect here is that the function parses and compiles a file and returns a pointer to the byte code (if successful).
Now we're still at a dead end, so let's approach this from a new angle. How does Python load its argument and execute it? In pythonrun.c there are a few functions for loading code from a file and executing it. PyRun_AnyFileExFlags() can handle both interactive and non-interactive file descriptors. For interactive file descriptors, it delegates to PyRun_InteractiveLoopFlags() (this is the REPL); for non-interactive file descriptors, it delegates to PyRun_SimpleFileExFlags(). PyRun_SimpleFileExFlags() checks whether the filename ends in .pyc. If it does, it calls run_pyc_file(), which loads compiled byte code directly from a file descriptor and then runs it.
In the more common case (i.e. .py file as an argument), PyRun_SimpleFileExFlags() calls PyRun_FileExFlags(). This is where we start to find our answer.
PyObject *
PyRun_FileExFlags(FILE *fp, const char *filename, int start, PyObject *globals,
                  PyObject *locals, int closeit, PyCompilerFlags *flags)
{
    PyObject *ret;
    mod_ty mod;
    PyArena *arena = PyArena_New();
    if (arena == NULL)
        return NULL;

    mod = PyParser_ASTFromFile(fp, filename, start, 0, 0,
                               flags, NULL, arena);
    if (closeit)
        fclose(fp);
    if (mod == NULL) {
        PyArena_Free(arena);
        return NULL;
    }
    ret = run_mod(mod, filename, globals, locals, flags, arena);
    PyArena_Free(arena);
    return ret;
}

static PyObject *
run_mod(mod_ty mod, const char *filename, PyObject *globals, PyObject *locals,
        PyCompilerFlags *flags, PyArena *arena)
{
    PyCodeObject *co;
    PyObject *v;
    co = PyAST_Compile(mod, filename, flags, arena);
    if (co == NULL)
        return NULL;
    v = PyEval_EvalCode(co, globals, locals);
    Py_DECREF(co);
    return v;
}
The salient point here is that these two functions basically perform the same purpose as the importer's load_source_module() and parse_source_module(). It calls the parser to create an AST from Python source code and then calls the compiler to create byte code.
So are these blocks of code redundant or do they serve different purposes? The difference is that one block loads a module from a file, while the other block takes a module as an argument. That module argument is — in this case — the __main__ module, which is created earlier in the initialization process using a low-level C function. The __main__ module doesn't go through most of the normal module import code paths because it is so unique, and as a side effect, it doesn't go through the code that produces .pyc files.
To summarize: the reason why the __main__ module isn't compiled to .pyc is that it isn't "imported". Yes, it appears in sys.modules, but it gets there via a very different code path than real module imports take.
Developer Intent
Okay, so we can now see that the behavior has more to do with the design of Python than with any clearly expressed rationale in the source code, but that doesn't answer the question of whether this is an intentional decision or just a side effect that doesn't bother anybody enough to be worth changing. One of the benefits of open source is that once we've found the source code that interests us, we can use the VCS to help trace back to the decisions that led to the present implementation.
One of the pivotal lines of code here (m = PyImport_AddModule("__main__");) dates back to 1990 and was written by the BDFL himself, Guido. It has been modified in intervening years, but the modifications are superficial. When it was first written, the main module for a script argument was initialized like this:
int
run_script(fp, filename)
    FILE *fp;
    char *filename;
{
    object *m, *d, *v;
    m = add_module("__main__");
    if (m == NULL)
        return -1;
    d = getmoduledict(m);
    v = run_file(fp, filename, file_input, d, d);
    flushline();
    if (v == NULL) {
        print_error();
        return -1;
    }
    DECREF(v);
    return 0;
}
This existed before .pyc files were even introduced into Python! Small wonder that the design at that time didn't take compilation into account for script arguments. The commit message enigmatically says:
"Compiling" version
This was one of several dozen commits over a 3 day period... it appears that Guido was deep into some hacking/refactoring and this was the first version that got back to being stable. This commit even predates the creation of the Python-Dev mailing list by about five years!
Saving the compiled bytecode was introduced 6 months later, in 1991.
This still predates the mailing list, so we have no real idea of what Guido was thinking. It appears that he simply thought the importer was the best place to hook in for the purpose of caching bytecode. Whether he considered doing the same for __main__ is unclear: either it didn't occur to him, or he thought it was more trouble than it was worth.
I can't find any bugs on bugs.python.org that are related to caching the bytecodes for the main module, nor can I find any messages on the mailing list about it, so apparently nobody else thinks it's worth the trouble to try adding it.
To summarize: the reason why all modules are compiled to .pyc except __main__ is that it's a quirk of history. The design and implementation for how __main__ works was baked into the code before .pyc files even existed. If you want to know more than that, you'll need to e-mail Guido and ask.
Glenn Maynard's answer says:
Nobody seems to want to say this, but I'm pretty sure the answer is simply: there's no solid reason for this behavior.
I agree 100%. There's circumstantial evidence to support this theory and nobody else in this thread has provided a single shred of evidence to support any other theory. I upvoted Glenn's answer.
To answer your question, refer to section 6.1.3, "Compiled" Python files, in the official Python documentation:
When a script is run by giving its name on the command line, the bytecode for the script is never written to a .pyc or .pyo file. Thus, the startup time of a script may be reduced by moving most of its code to a module and having a small bootstrap script that imports that module. It is also possible to name a .pyc or .pyo file directly on the command line.
Since:
A program doesn’t run any faster when it is read from a .pyc or .pyo file than when it is read from a .py file; the only thing that’s faster about .pyc or .pyo files is the speed with which they are loaded.
it is unnecessary to generate a .pyc file for the main script. Only the libraries, which might be loaded many times, should be compiled.
Edited:
It seems you didn't get my point. The whole idea of compiling into a .pyc file is to make the same file execute faster the second time. However, suppose Python did compile the script being run: the interpreter would write bytecode into a .pyc file on the first run, and that takes time, so the first run would actually be slightly slower. You might argue that subsequent runs would be faster. Well, it's just a choice. Plus, as this says:
Explicit is better than implicit.
If one wants a speedup by using .pyc file, one should compile it manually and run the .pyc file explicitly.
Because the script being run may be somewhere where it is inappropriate to generate .pyc files, such as /usr/bin.
Because different versions of Python (3.6, 3.7 ...) have different bytecode representations, and trying to design a compile system for that was deemed too complicated. PEP 3147 discusses the rationale.

ctypes - Beginner

I have the task of "wrapping" a C library into a Python class. The docs are incredibly vague on this matter. It seems they expect only advanced Python users to implement ctypes.
Some step by step help would be wonderful.
So I have my c library. What do I do? What files do I put where? How do I import the library? I read that there might be a way to "auto wrap" to Python?
(By the way, I did the ctypes tutorial on python.net, and it doesn't work, which makes me think they assume I should be able to fill in the missing steps.)
In fact this is the error I get with their code:
File "importtest.py", line 1
>>> from ctypes import *
SyntaxError: invalid syntax
I could really use some step by step help on this!
Here's a quick and dirty ctypes tutorial.
First, write your C library. Here's a simple Hello world example:
testlib.c
#include <stdio.h>

void myprint(void);

void myprint()
{
    printf("hello world\n");
}
Now compile it as a shared library (mac fix found here):
$ gcc -shared -Wl,-soname,testlib -o testlib.so -fPIC testlib.c
# or... for Mac OS X
$ gcc -shared -Wl,-install_name,testlib.so -o testlib.so -fPIC testlib.c
Then, write a wrapper using ctypes:
testlibwrapper.py
import ctypes
testlib = ctypes.CDLL('/full/path/to/testlib.so')
testlib.myprint()
Now execute it:
$ python testlibwrapper.py
And you should see the output
Hello world
$
If you already have a library in mind, you can skip the non-python part of the tutorial. Make sure ctypes can find the library by putting it in /usr/lib or another standard directory. If you do this, you don't need to specify the full path when writing the wrapper. If you choose not to do this, you must provide the full path of the library when calling ctypes.CDLL().
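If you're not sure whether the loader can see a library, ctypes.util.find_library is a handy check. A small sketch using libc (the testlib from above would only be found this way if it were installed in a standard location):
import ctypes
import ctypes.util

# find_library searches the standard linker locations and returns a
# loadable name, or None if the library cannot be found.
name = ctypes.util.find_library("c")  # e.g. 'libc.so.6' on Linux
libc = ctypes.CDLL(name)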
This isn't the place for a more comprehensive tutorial, but if you ask for help with specific problems on this site, I'm sure the community would help you out.
PS: I'm assuming you're on Linux because you've used ctypes.CDLL('libc.so.6'). If you're on another OS, things might change a little bit (or quite a lot).
The answer by Chinmay Kanchi is excellent, but I wanted an example of a function which passes and returns variables/arrays to C++ code. I thought I'd include it here in case it is useful to others.
Passing and returning an integer
The C++ code for a function which takes an integer and returns the value plus one:
extern "C" int add_one(int i)
{
return i+1;
}
Saved as file test.cpp; note the required extern "C" (it can be removed for C code).
This is compiled using g++, with arguments similar to Chinmay Kanchi's answer,
g++ -shared -o testlib.so -fPIC test.cpp
The Python code uses load_library from numpy.ctypeslib, assuming the shared library is in the same directory as the Python script,
import numpy.ctypeslib as ctl
import ctypes
libname = 'testlib.so'
libdir = './'
lib = ctl.load_library(libname, libdir)
py_add_one = lib.add_one
py_add_one.argtypes = [ctypes.c_int]
value = 5
results = py_add_one(value)
print(results)
This prints 6 as expected.
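One detail worth keeping in mind: ctypes assumes every function returns an int unless told otherwise, so for other return types you must also set restype. A sketch, reusing lib from above with a hypothetical C function double add_half(double) that returns its argument plus 0.5:
import ctypes

# Hypothetical C side: extern "C" double add_half(double x)
# Without restype, ctypes would misread the returned double as an int.
add_half = lib.add_half
add_half.argtypes = [ctypes.c_double]
add_half.restype = ctypes.c_double
print(add_half(1.0))  # 1.5, if the C function returns x + 0.5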
Passing and printing an array
You can also pass arrays. For example, here is C++ code to print the elements of an array,
extern "C" void print_array(double* array, int N)
{
for (int i=0; i<N; i++)
cout << i << " " << array[i] << endl;
}
which is compiled as before and imported in the same way. The extra Python code to use this function is then,
import numpy as np
py_print_array = lib.print_array
py_print_array.argtypes = [ctl.ndpointer(np.float64,
                                         flags='aligned, c_contiguous'),
                           ctypes.c_int]
A = np.array([1.4,2.6,3.0], dtype=np.float64)
py_print_array(A, 3)
where we specify the first argument to print_array as a pointer to a Numpy array of aligned, C-contiguous 64-bit floats, and the second argument as an integer telling the C code the number of elements in the Numpy array. The array is then printed by the C code as follows,
0 1.4
1 2.6
2 3.0
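The same mechanism also works in the other direction: because the Numpy buffer is passed by pointer, a C function can modify the array in place and the changes are visible in Python afterwards. A sketch, assuming a hypothetical C function void scale_array(double* a, int N) compiled into the same library, which multiplies each element in place:
import numpy as np
import numpy.ctypeslib as ctl
import ctypes

# Hypothetical C side: extern "C" void scale_array(double* a, int N)
py_scale = lib.scale_array
py_scale.argtypes = [ctl.ndpointer(np.float64,
                                   flags='aligned, c_contiguous, writeable'),
                     ctypes.c_int]
A = np.array([1.4, 2.6, 3.0], dtype=np.float64)
py_scale(A, A.size)
print(A)  # the buffer was modified in place by the C code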
Firstly: the >>> you see in Python examples is a way to indicate that it is Python code. It's used to separate Python code from its output. Like this:
>>> 4+5
9
Here we see that the line that starts with >>> is the Python code, and 9 is what it results in. This is exactly how it looks if you start a Python interpreter, which is why it's done like that.
You never enter the >>> part into a .py file.
That takes care of your syntax error.
Secondly, ctypes is just one of several ways of wrapping C libraries for Python. Another way is SWIG, which will look at your C library and generate a Python C extension module that exposes the C API. Yet another way is to use Cython.
They all have benefits and drawbacks.
SWIG will only expose your C API to Python. That means you don't get any objects or anything; you'll have to write a separate Python file for that. It is, however, common to have a module called say "wowza" and a SWIG module called "_wowza" that is the wrapper around the C API. This is a nice and easy way of doing things.
Cython generates a C-Extension file. It has the benefit that all of the Python code you write is made into C, so the objects you write are also in C, which can be a performance improvement. But you'll have to learn how it interfaces with C so it's a little bit extra work to learn how to use it.
ctypes has the benefit that there is no C code to compile, so it's very convenient for wrapping standard libraries written by someone else that already exist in binary versions for Windows and OS X.

How do I set up the python/c library correctly?

I have been trying to get the Python/C library to like my MinGW compiler. The Python online documentation, http://docs.python.org/c-api/intro.html#include-files, only mentions that I need to include the Python.h file. I grabbed it from the installation directory (as is required on the Windows platform) and tested it by compiling a script containing only:
#include "Python.h". This compiled fine. Next, I tried out the snippet of code shown a bit lower on the Python/C API page:
PyObject *t;
t = PyTuple_New(3);
PyTuple_SetItem(t, 0, PyInt_FromLong(1L));
PyTuple_SetItem(t, 1, PyInt_FromLong(2L));
PyTuple_SetItem(t, 2, PyString_FromString("three"));
For some reason, the code would compile if I removed the last four lines (so that only the PyObject variable definition was left), yet the calls that actually construct the tuple produced errors.
I am probably missing something completely obvious here, given that I am very new to C, but does anyone know what it is?
I've done some crafty Googling, and if you are getting errors at the linker stage (the error messages might have hex strings or references to ld), you may need to make sure the Python library that ships with the Windows version has been converted to a format that GCC (MinGW) can read; see here, among other sites. Also ensure that GCC can find and is using the library file if needs be, using -L/my/directory and -lpython26 (substituting appropriately for your path and Python version).
If the errors are at the compilation stage (if line numbers are given in the messages), make sure that you don't need to add any other directories to the include search path. Python might (I've not used its C API) include other header files in Python.h which are stored in some other directory. If this is the case, use the -I/my/directory/ flag to GCC to tell it to search there as well.
Exact (copied-and-pasted) error messages would help, though.
Warning: The text below does not answer the question!
Did you put the code inside a function? Try putting it in main, like so:
#include <Python.h>

int main(int argc, char *argv[]) {
    Py_Initialize();   /* the interpreter must be running before any Py* call */
    PyObject *t;
    t = PyTuple_New(3);
    PyTuple_SetItem(t, 0, PyInt_FromLong(1L));
    PyTuple_SetItem(t, 1, PyInt_FromLong(2L));
    PyTuple_SetItem(t, 2, PyString_FromString("three"));
    Py_DECREF(t);
    Py_Finalize();
    return 0;
}
This code will be run on execution of the program. You can then use whatever other methods are provided to examine the contents of the tuple. If it isn't to be run separately as an executable program, then stick it in a differently-named method; I assume you have another way to invoke the function.
The PyObject *t; definition is valid outside the function as a global variable definition, as well as inside a function, declaring it as a local variable. The other four lines are function calls, which must be inside another function.
The above code on its own does not a program make. Are you trying to write a C extension to Python? If so, look at some more complete documentation here.
I have made some progress since I asked my question, and I thought I would just share it in case someone else is having similar problems.
These were the errors I got:
In function `main':
undefined reference to `_imp__PyTuple_New'
undefined reference to `_imp__PyInt_FromLong'
undefined reference to `_imp__PyTuple_SetItem'
undefined reference to `_imp__PyInt_FromLong'
undefined reference to `_imp__PyTuple_SetItem'
undefined reference to `_imp__PyString_FromString'
undefined reference to `_imp__PyTuple_SetItem'
The errors I got were the result of libraries missing from the MinGW compile. So including the header file in the source code is not enough; a library file (.lib, .o, .a, ...) also needs to be included in the compilation. It is possible to use the -L<dir> and -l<name> flags on the MinGW command line, but I found that Code::Blocks ( http://www.codeblocks.org/ ) is the most convenient tool here. After creating a project and going to Project > Build options..., you can specify the location of the library file under the linker settings tab. When you are done, build the project, and it will hopefully work.
I hope this helps anyone struggling with similar problems :)
