setup.py has a feature for testing whether functions are defined:
import distutils.ccompiler

compiler = distutils.ccompiler.new_compiler()
if compiler.has_function('foo_new', libraries=("foo",)):
    define_macros.append(('HAVE_FOO_NEW', '1'))
However, I can't seem to use this for functions in the Python C API itself (specifically PyCapsule_New). The following does not define anything:
if compiler.has_function('PyCapsule_New'):
    define_macros.append(('HAVE_PYCAPSULE_NEW', '1'))
I seem to need to put something in the libraries argument, but what? The name of the Python library varies across versions and platforms, and it is not available in distutils.sysconfig except as a gcc parameter (e.g. BLDLIBRARY is defined as something like -L. -lpython2.7).
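To illustrate, the only handle I can find is parsing those gcc flags myself, e.g. (rough sketch):

from distutils import sysconfig

# e.g. '-L. -lpython2.7' on Unix builds; may be unset elsewhere.
bldlibrary = sysconfig.get_config_var('BLDLIBRARY') or ''
# Pull the bare library names out of the -l flags, in the hope they
# are usable as has_function(..., libraries=libs).
libs = [flag[2:] for flag in bldlibrary.split() if flag.startswith('-l')]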
It seems like such an obvious/common thing to want to do (so that the code works across multiple Python versions), yet I can't find any setup.py scripts that use has_function this way.
Instead of doing configure checks for Python features, you could do some compile-time testing. Ideally you would check against the Python version (PY_MAJOR_VERSION, PY_MINOR_VERSION), but you can also rely on macros defined inside the headers.
For your specific feature, note that the Py_CAPSULE_H macro is defined once the header pycapsule.h is included (via Python.h).
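If you still want the HAVE_PYCAPSULE_NEW macro defined from setup.py, here is a minimal version-based sketch (assuming the extension is built by the same interpreter that will run it; PyCapsule first appeared in CPython 2.7 and 3.1):

import sys

define_macros = []
# PyCapsule exists in 2.7 and in 3.1+, but not in 3.0, so a plain
# >= (2, 7) tuple comparison would wrongly match 3.0 as well.
if sys.version_info[:2] == (2, 7) or sys.version_info >= (3, 1):
    define_macros.append(('HAVE_PYCAPSULE_NEW', '1'))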
So I have discovered the Python extension mechanism for SPSS, and everything works fine: I have created some scripts, included them in the extensions map, and they work. However, now I have created a couple of scripts that require arguments; I thought I could just follow the same method, but apparently not.
def Run(args):
    import spss
    def testing_p(variables):
        all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
        variable_nr = [all_variables.index(i) for i in variables]
        print all_variables
        print variable_nr
With the following .xml-file:
<Command xmlns="http://xml.spss.com/extension" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Name="testing_p" Language="Python">
</Command>
However, this keeps throwing the following error when calling testing_p(['my_var', 'my_var2']):
Warnings
This command should specify a valid subcommand at the beginning.
Execution of this command stops.
I cannot wrap my head around this, because everything works fine when it is not put in the extensions map and I only do:
BEGIN PROGRAM.
import spss
def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print all_variables
    print variable_nr
END PROGRAM.
For an extension, which can be written in Python, R, or Java, you need to create a syntax specification containing the command name, any subcommands, and the arguments and argument types you want. Here is a picture of the start of one (SPSSINC_TURF, which is installed with Statistics).
This will guide the Statistics parser in checking the user input. It also then calls the Run function with a complicated structure containing the user input. You can use the functions in the extension module to map that to your Python variables and do further validation. Here is a picture of the start of the Run function for SPSSINC TURF.
Finally, if the syntax is valid, your Run function calls the worker function to do something useful, mapping all the parameters to the specified arguments by calling
processcmd(oobj, args, superturf, vardict=spssaux.VariableDict())
which was imported from extension.py.
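Applied to your testing_p example, a minimal Run function might look roughly like this (Template, Syntax, and processcmd come from extension.py; the VARIABLES keyword name here is an assumption and must match a Parameter declared in your XML syntax specification):

from extension import Template, Syntax, processcmd
import spss

def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print all_variables
    print variable_nr

def Run(args):
    # Unwrap the outer dict, which is keyed by the command name.
    args = args[args.keys()[0]]
    # Declare the expected syntax: a list of existing variable names.
    oobj = Syntax([
        Template("VARIABLES", subc="", ktype="existingvarlist",
                 var="variables", islist=True)])
    # Validate the parsed user input and call the worker function.
    processcmd(oobj, args, testing_p)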
Look at the doc for extensions in the help system, and look at some of the extensions installed with Statistics for examples.
And here is a slide from one of my presentations summarizing the flow from user input to results.
I need to develop a plugin for GIMP and would like to stay with PyCharm for Python editing, etc.
FYI, I'm on Windows.
After directing PyCharm to use the Python interpreter included with GIMP:
I also added a path to gimpfu.py to get rid of the error on from gimpfu import *:
This fixes the error on the import, even when set to Excluded.
I experimented with setting this directory to Sources, Resources, and Excluded, and I still get errors for constants such as RGBA_IMAGE, TRANSPARENT_FILL, NORMAL_MODE, etc.
Any idea on how to contort PyCharm into playing nice for GIMP plugin development?
I'm not really running any code from PyCharm; it's just being used as a nice code editor, to facilitate revision control, etc.
As you found, these variables are part of .pyd files (DLL files for Python). PyCharm can't get signatures for the contents of these files.
For Python builtins (like abs, all, any, etc.) PyCharm has its own .py files that it uses only for signatures and docs. You can see this if you click on one of these functions and go to its declaration:
PyCharm will open the builtins.py file in its folder, with the following content:
def abs(*args, **kwargs): # real signature unknown
    """ Return the absolute value of the argument. """
    pass

def all(*args, **kwargs): # real signature unknown
    """
    Return True if bool(x) is True for all values x in the iterable.

    If the iterable is empty, return True.
    """
    pass

def any(*args, **kwargs): # real signature unknown
    """
    Return True if bool(x) is True for any x in the iterable.

    If the iterable is empty, return False.
    """
    pass
As you can see, the functions are defined and documented, but they have no implementation, because the implementation is written in C and lives somewhere in a binary file.
PyCharm can't provide such a wrapper for every library. Usually the people who create .pyd files provide .py wrappers for them (for example, the PyQt module: no native Python implementation, just signatures).
It looks like GIMP doesn't have such a wrapper for some of these variables. The only way I see is to create your own wrapper manually. For example, create gimpfu_signatures.py with the following content:
RGBA_IMAGE = 1
TRANSPARENT_FILL = 2
NORMAL_MODE = 3
And import it while you're developing the plugin:
from gimpfu import *
from gimpfu_signatures import * # comment on release
Not elegant, but better than nothing.
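If you don't want to remember to comment that import out on release, one option is to guard it (assuming gimpfu_signatures.py is simply not shipped with the plugin):

from gimpfu import *

try:
    # Editor-only stubs; not distributed with the plugin, so inside
    # a real GIMP install this import fails and is silently ignored.
    from gimpfu_signatures import *
except ImportError:
    pass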
...
One more note about gimpfu.py's path. If I understand correctly, you just added this path to the project. That may work, but the correct way is to add it to the project's PYTHONPATH (in the project preferences). See this link for a detailed manual.
Building libraries with waf is nice, and I like the lib<targetname> naming scheme. But when I use it with boost::python, I'd like to get rid of it: I'd like the library's name to be exactly the target name. This is just a simple rename, I know, but: can I tell waf to leave out the lib prefix before the target name (alternatively: specify a name of my own that stays untouched)?
Ok, got it. This feature can be enabled by using the python tool, found here: http://docs.waf.googlecode.com/git/apidocs_16/tools/python.html#module-waflib.Tools.python
The main point is loading the python tool and specifying features='pyext' in the build directive for the shared library (this triggers the tool's init_pyext handling, which applies the extension naming pattern without the lib prefix):
def options(opt):
    opt.load('python')

def configure(conf):
    conf.load('python')
    conf.check_python_version((2,4,2))
    conf.check_python_headers()

def build(bld):
    bld.shlib(
        features = 'pyext',
        source = "mymodule.cpp",
        target = "myfoo",
        use = "PYTHON BOOST_PYTHON")
Now, in the build directory there is a shared library called myfoo.so, which can be imported directly.
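As a quick sanity check (note that the module name declared in mymodule.cpp via BOOST_PYTHON_MODULE(myfoo) must match the target name, or the import will fail):

# Run from the build directory where myfoo.so was produced.
import myfoo
print myfoo.__file__  # should point at the freshly built myfoo.so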
I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python, and one obvious workaround is to add an extra file for that purpose. However, I really want to keep things to a single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m then you get an error message No code object available for <modulename>.
The -m implementation is in runpy._run_module_as_main. Its essence is:
mod_name, loader, code, fname = _get_module_details(mod_name)
<...>
exec code in run_globals
A compiled module has no "code object" associated with it, so the first statement fails with ImportError("No code object available for <module>"). You need to extend runpy (specifically, _get_module_details) to make it work for a compiled module. I suggest returning a code object constructed from the aforementioned "import mod; mod.main()":
(Python 2.6.1)

  code = loader.get_code(mod_name)
  if code is None:
-     raise ImportError("No code object available for %s" % mod_name)
+     if loader.etc[2] == imp.C_EXTENSION:
+         code = compile("import %(mod)s; %(mod)s.main()" % {'mod': mod_name},
+                        "<extension loader wrapper>", "exec")
+     else:
+         raise ImportError("No code object available for %s" % mod_name)
  filename = _get_filename(loader, mod_name)
(Update: fixed an error in format string)
Now...
C:\Documents and Settings\Пользователь>python -m pythoncom
C:\Documents and Settings\Пользователь>
This still won't work for builtin modules. Again, you'll need to invent some notion of "main code unit" for them.
Update:
I've looked through the internals called from _get_module_details and can say with confidence that they don't even attempt to retrieve a code object from a module of any type other than imp.PY_SOURCE, imp.PY_COMPILED, or imp.PKG_DIRECTORY. So you have to patch this machinery one way or another for -m to work. Python fails before retrieving anything from your module (it doesn't even check whether the DLL is a valid module), so you can't fix this by building it in a special way.
Does your requirement of a single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py containing your calling code and the usual __init__.py...
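A minimal sketch of that layout, with hypothetical names (the compiled extension packaged as mypackage._impl, exposing a main() entry point):

# mypackage/__main__.py -- executed by "python -m mypackage"
from mypackage import _impl  # the compiled C extension

_impl.main()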
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
I think that you need to start by making a separate file in Python and getting the -m option to work. Then, turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
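A rough sketch of the second step (note that the marshal format is specific to the CPython version, which matters if you bake the bytes into your binary):

import marshal

# Compile the one-line launcher into a code object, then serialize it.
bootstrap = compile("import mymodule; mymodule.main()",
                    "<embedded bootstrap>", "exec")
blob = marshal.dumps(bootstrap)  # bytes you could embed in the binary

# At run time (e.g. from C via PyMarshal_ReadObjectFromString) the
# code object can be reconstructed and executed.
code = marshal.loads(blob)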
Look up setuptools on PyPI, download the .egg, and take a look at the file. You will see that the first few bytes contain a Python script, followed by a .zip file bytestream. Something similar may work for you.
There's a brand new thing that may solve your problem easily. I've just learnt about it and it looks pretty decent to me: http://code.google.com/p/pts-mini-gpl/wiki/StaticPython
Using the m4_ax_python_module.m4 macro (AX_PYTHON_MODULE) in configure.ac, one can know at configure time whether a given Python module is installed. It takes two arguments: the module name, and a second argument which, if not empty, makes a missing module a fatal error, useful when the module is a must-have.
In the case where you don't want a fatal exit, how do you test in configure.ac which modules were found? They output "yes" or "no" when configure runs, but that's all I've found so far. Basically, if I have these lines in configure.ac:
EDIT: added square brackets around module names
AX_PYTHON_MODULE([json],[])
AX_PYTHON_MODULE([simplejson],[])
How do I test which of the two modules were found?
See http://www.gnu.org/software/autoconf-archive/ax_python_module.html#ax_python_module for documentation about this macro.
OK, the best solution I've found so far is:
EDIT: using AS_IF instead of just if test
AS_IF([test "x${HAVE_PYMOD_JSON}" = "xno"],
      [AS_IF([test "x${HAVE_PYMOD_SIMPLEJSON}" = "xno"],
             [AC_MSG_ERROR([Requires one of json or simplejson])])])
What threw me off was that, inside the macro, AS_TR_CPP transforms its argument into a #define-style name, i.e. all upper case, so checking for the json module sets the variable HAVE_PYMOD_JSON.