Using a Python 2.7 enum from C

I have an enum in Python (backported enum package to 2.7) that is meant to be of only integers:
import enum

class MyEnum(enum.Enum):
    val = 0
Let's say I receive a PyObject * in a C extension pointing to MyEnum.val. I want the integer value associated with the PyObject *. How do I get it most succinctly?

Looking at the source of the enum34 backport: just like the enum module in 3.4+, it's pure Python and does nothing to expose a custom C API.
So, you just use PyObject_GetAttr and friends to access class attributes. In particular, if you have a MyEnum.val, you need to get its value attribute, which will be an int that you can then pass to PyInt_AsLong.
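For example, a minimal sketch in C, assuming obj is the PyObject * pointing at MyEnum.val (error handling abbreviated):
/* Fetch MyEnum.val.value, then convert it to a C long. */
PyObject *value = PyObject_GetAttrString(obj, "value");
if (value == NULL)
    return NULL;                    /* attribute lookup failed */
long n = PyInt_AsLong(value);       /* the underlying int, 0 here */
Py_DECREF(value);
if (n == -1 && PyErr_Occurred())
    return NULL;                    /* value was not an int */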
This is the same way things work in Python. If you try to use MyEnum.val where an int is expected, you should get a TypeError; if you explicitly call int(MyEnum.val), you will definitely get a TypeError. So, although I haven't tested it, calling PyInt_AsLong directly on the constant instead of its value should set a TypeError and return -1.
If you want enumeration constants that act like subtypes of int, then, as the enum docs explain, you want IntEnum. Usually, that isn't really what you want (as the docs explain), but if it is, of course it works. And you should be able to PyInt_Check and PyInt_AsLong an IntEnum value (although, again, I haven't tested).
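For completeness, the IntEnum variant looks like this (a sketch using the same backported enum package):
import enum

class MyEnum(enum.IntEnum):
    val = 0

print(int(MyEnum.val))   # 0 - conversion works, unlike with plain Enum
print(MyEnum.val + 1)    # 1 - members really are ints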

Related

Subtype Polymorphism is broken in Cython v3.0.0a11?

Trying to pass an instance of a derived class to a function which accepts instances of the superclass gives an error in Cython v3.0.0a11:
test.pyx:
class MyInt(int):
    pass

def takes_int(a: int):
    pass
try.py:
from test import takes_int, MyInt
takes_int(MyInt(1))
try.py OUTPUT:
Traceback (most recent call last):
File "C:\Users\LENOVO PC\PycharmProjects\MyProject\cython_src\try.py", line 3, in <module>
takes_int(MyInt(1))
TypeError: Argument 'a' has incorrect type (expected int, got MyInt)
Changing to v0.29.32, cleaning the generated C file and the object files, and re-running gets rid of the error.
This is (kind of) expected.
Cython has never allowed subtype polymorphism for builtin type arguments. See https://cython.readthedocs.io/en/latest/src/userguide/language_basics.html#types:
This requires an exact match of the class, it does not allow subclasses. This allows Cython to optimize code by accessing internals of the builtin class, which is the main reason for declaring builtin types in the first place.
This is a restriction which applies only to builtin types - for Cython-defined cdef classes it works fine. It's also slightly different to the usual rule for annotations, but it's there because it's the only way that Cython can do much with these annotations.
What's changed is that an int annotation is interpreted as "any object" in Cython 0.29.x and as a Python int in Cython 3. (Note that cdef int declares a C int though.) The reason earlier versions of Cython did not use an int annotation is that Python 2 has two Python integer types (int and long), and it isn't easy to accept both of them and still use the type for anything useful.
I'm not quite sure exactly what the final version of Cython 3 will end up doing with int annotations though.
If you don't want Cython to use the annotation (for example, you would like your int subclass to be accepted), then you can turn off annotation_typing locally with the @cython.annotation_typing(False) decorator.
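A minimal sketch of what that looks like in the .pyx file, assuming Cython 3 (the decorator form of the directive is not available in 0.29.x):
# test.pyx
cimport cython

class MyInt(int):
    pass

@cython.annotation_typing(False)   # annotation below becomes documentation only
def takes_int(a: int):
    pass                           # accepts MyInt instances again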

mypy set dictionary keys / interface

Suppose I have a function which takes a dictionary as a parameter:
def f(d: dict) -> None:
    x = d["x"]
    print(x)
Can I tell mypy that this dictionary must have the key "x"? I'm looking for something similar to an interface in TypeScript, without changing d to a class.
The reason I don't want to change d to a class, is because I am modifying a large existing codebase to add mypy type checking and this dictionary is used in many places. I would have to modify a lot of code if I had to change all instances of d["x"] to d.x.
As of Python 3.8 you can use typing.TypedDict, added as per PEP 589. For older Python versions you can use the typing-extensions package.
Note that the PEP does acknowledge that dataclasses are usually the better option for this use case, however:
Dataclasses are a more recent alternative to solve this use case, but there is still a lot of existing code that was written before dataclasses became available, especially in large existing codebases where type hinting and checking has proven to be helpful.
So the better answer is to consider a different data structure, such as a named tuple or a dataclass, where you can specify the attributes a type has. This is what the TypeScript declaration does, really:
The printLabel function has a single parameter that requires that the object passed in has a property called label of type string.
Python attributes are the moral equivalent of TypeScript object properties. That TypeScript object notation and Python dictionaries have a lot in common perhaps confuses matters, but you should not look upon TypeScript object declarations as anything but classes when trying to map concepts to Python.
That could look like this:
from dataclasses import dataclass

@dataclass
class SomeClass:
    x: str

def f(sc: SomeClass) -> None:
    x = sc.x
    print(x)
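Call sites then build an instance instead of a dict:
f(SomeClass(x="hello"))   # prints "hello"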
That said, you can use typing.TypedDict here:
from typing import TypedDict

class SomeDict(TypedDict):
    x: str

def f(d: SomeDict) -> None:
    x = d['x']
    print(x)
Keys in a TypedDict declaration are either all required, or all optional (when you set total=False on the declaration); you'd have to use inheritance to produce a type with only some keys optional, see the documentation linked. Note that TypedDict currently has issues with a mix of optional and required keys; you may want to use the typing-extensions package to get the Python 3.9 version (which fixes this) as a backport even when using Python 3.8. Just use from typing_extensions import TypedDict instead of the above from typing ... import; the typing-extensions package falls back to the standard-library version when appropriate.
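For example, a sketch of that inheritance approach for mixing required and optional keys (the class and key names here are illustrative):
from typing import TypedDict

class _XRequired(TypedDict):
    x: str                 # always present

class SomeDictMixed(_XRequired, total=False):
    y: int                 # may be absent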
Mypy extends PEP 484 by providing a TypedDict type. This lets you specify the types of specific keys of a dict. In your case you can do the following:
from mypy_extensions import TypedDict

# you can also do HasX = TypedDict('HasX', {'x': str})
class HasX(TypedDict):
    x: str

def f(d: HasX) -> None:
    reveal_type(d["x"])  # note: Revealed type is 'builtins.str'

What is GetSetDescriptorType in Python?

I was looking at types.py to understand the built-in types and I came across this GetSetDescriptorType. From the Python documentation:
types.GetSetDescriptorType
The type of objects defined in extension modules with PyGetSetDef, such as FrameType.f_locals or array.array.typecode. This type is used as descriptor for object attributes; it has the same purpose as the property type, but for classes defined in extension modules.
I do understand the property type, but I could not wrap my mind around this one. Can someone who understands it shed some light?
When you write a Python module using C, you define new types using a C API. This API has a lot of functions and structs to specify all the behavior of the new type.
One way to specify properties of a type using the C API is to define an array of PyGetSetDef structs:
static PyGetSetDef my_props[] = { /* ... */ };
And then use the array in the initialization of the type (see this example for details).
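For instance, a sketch of such a definition; MyObject and its size field are hypothetical:
/* Hypothetical getter, exposed as a read-only attribute. */
static PyObject *
MyObject_get_size(MyObject *self, void *closure)
{
    return PyLong_FromSsize_t(self->size);
}

static PyGetSetDef my_props[] = {
    {"size", (getter)MyObject_get_size, NULL, "number of items", NULL},
    {NULL}  /* sentinel */
};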
Then, in Python, when you access MyType.my_property you get a value of types.GetSetDescriptorType, which is used to resolve the actual value of the property when you write my_obj.my_property.
As such, this type is an implementation detail, unlikely to be very useful.

Get enum definition from shared library

I am using ctypes to access a shared library written in C. The C source of the shared library contains an enum like
enum {
    invalid = 0,
    type1 = 1,
    type2 = 2
} type_enum;
On the Python side I was intending to just define integer constants for the various enum values, like:
INVALID = 0
TYPE1 = 1
TYPE2 = 2
And then use these numerical "constants" in the Python code calling the C functions. This seems to work OK; however, I would strongly prefer to get the numerical values for the enums directly from the shared library (introspection?). The shared library does not seem to contain any of the symbols 'invalid', 'type1' or 'type2' when inspected with e.g. nm. So my questions are:
Is it possible to extract the numerical values of enum definitions from a shared library - or is the whole enum concept 'dropped on the floor' when the compiler is done?
If the enum values exist in the shared library - how can I access them from Python/ctypes?
Enum definitions are not exported, so your current solution is the only one available.
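One way to keep those constants organized on the Python side is to mirror the C enum with an IntEnum (a sketch, using the stdlib enum module or the enum34 backport on Python 2.7; the values still have to be kept in sync with the C header by hand):
import enum

class TypeEnum(enum.IntEnum):
    INVALID = 0
    TYPE1 = 1
    TYPE2 = 2

# IntEnum members are real ints, so they can be passed straight to
# ctypes functions, e.g. mylib.some_function(TypeEnum.TYPE1).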
In any case, C enum values are little more than integer constants. There's no type safety on the C side; you can pass any integer value to an enum parameter. So it's not like the C compiler is doing much for you anyway.
See MSDN on the benefits of enums: an "alternative to the #define preprocessor directive with the advantages that the values can be generated for you and obey normal scoping rules" - notably missing from that list is type safety. This strongly suggests that, as you suspected, enums are indeed dropped on the floor once the compiler is done.

Boost.Python function pointers as class constructor argument

I have a C++ class that requires a function pointer in its constructor (float(*myfunction)(vector<float>*)).
I've already exposed some function pointers to Python.
The ideal way to use this class is something like this:
import mymodule
mymodule.some_class(mymodule.some_function)
So I tell Boost about this class like so:
class_<SomeClass>("some_class", init<float(*)(vector<float>*)>);
But I get:
error: no matching function for call to 'register_shared_ptr1(Sample (*)(std::vector<double, std::allocator<double> >*))'
when I try to compile it.
So, does anyone have any ideas on how I can fix the error without losing the flexibility gained from function pointers (i.e. no falling back to strings that indicate which function to call)?
Also, the main point of writing this code in C++ is for speed. So it would be nice if I was still able to keep that benefit (the function pointer gets assigned to a member variable during initialization and will get called over a million times later on).
OK, so this is a fairly difficult question to answer in general. The root cause of your problem is that there really is no Python type which is exactly equivalent to a C function pointer. Python functions are sort of close, but their interface doesn't match for a few reasons.
Firstly, I want to mention the technique for wrapping a constructor from here:
http://wiki.python.org/moin/boost.python/HowTo#namedconstructors.2BAC8factories.28asPythoninitializers.29. This lets you write an __init__ function for your object that doesn't directly correspond to an actual C++ constructor. Note also that you might have to specify boost::python::no_init in the boost::python::class_ construction, and then def a real __init__ function later, if your object isn't default-constructible.
Back to the question:
Is there only a small set of functions that you'll usually want to pass in? In that case, you could just declare a special enum (or specialized class), make an overload of your constructor that accepts the enum, and use that to look up the real function pointer. You can't directly call the functions yourself from python using this approach, but it's not that bad, and the performance will be the same as using real function pointers.
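A sketch of that enum-dispatch idea (all names here are illustrative):
enum KnownFunction { SOME_FUNCTION, OTHER_FUNCTION };

typedef float (*FuncPtr)(std::vector<float> *);

float some_function(std::vector<float> *);   // the real callables,
float other_function(std::vector<float> *);  // def'd to Python elsewhere

FuncPtr lookup(KnownFunction f)
{
    switch (f) {
        case SOME_FUNCTION:  return &some_function;
        case OTHER_FUNCTION: return &other_function;
    }
    return NULL;
}

// Expose the enum with boost::python::enum_<KnownFunction> and add a
// SomeClass constructor overload that calls lookup().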
If you want to provide a general approach that will work for any python callable, things get more complex. You'll have to add a constructor to your C++ object that accepts a general functor, e.g. using boost::function or std::tr1::function. You could replace the existing constructor if you wanted, because function pointers will convert to this type correctly.
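For instance, the functor-accepting constructor could look like this (a sketch; the member name is illustrative):
#include <vector>
#include <boost/function.hpp>

class SomeClass {
public:
    explicit SomeClass(boost::function<float (std::vector<float> *)> f)
        : func_(f) { }
private:
    boost::function<float (std::vector<float> *)> func_;  // called many times later
};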
So, assuming you've added a boost::function constructor to SomeClass, you should add these functions to your Python wrapping code:
struct WrapPythonCallable
{
    typedef float result_type;

    explicit WrapPythonCallable(const boost::python::object & wrapped)
        : wrapped_(wrapped)
    { }

    float operator()(vector<float>* arg) const
    {
        //Do whatever you need to do to convert arg into a
        //boost::python::object here
        boost::python::object arg_as_python_object = /* ... */;
        //Call out to python with the object - note that wrapped_
        //is callable using an operator() overload, and returns
        //a boost::python::object.
        //Also, the call can throw boost::python::error_already_set -
        //you might want to handle that here.
        boost::python::object result_object = wrapped_(arg_as_python_object);
        //Do whatever you need to do to extract a float from result_object,
        //maybe using boost::python::extract
        float result = /* ... */;
        return result;
    }

    boost::python::object wrapped_;
};
//This function is the "constructor wrapper" that you'll add to SomeClass.
//Change the return type to match the holder type for SomeClass, like if it's
//held using a shared_ptr.
std::auto_ptr<SomeClass> CreateSomeClassFromPython(
    const boost::python::object & callable)
{
    return std::auto_ptr<SomeClass>(
        new SomeClass(WrapPythonCallable(callable)));
}

//Later, when telling Boost.Python about SomeClass:
class_<SomeClass>("some_class", no_init)
    .def("__init__", make_constructor(&CreateSomeClassFromPython));
I've left out details on how to convert pointers to and from Python - that's obviously something that you'll have to work out, because there are object-lifetime issues there.
If you need to call the function pointers that you'll pass in to this function from Python, then you'll need to def these functions using Boost.Python at some point. This second approach will work fine with these def'd functions, but calling them will be slow, because objects will be unnecessarily converted to and from Python every time they're called.
To fix this, you can modify CreateSomeClassFromPython to recognize known or common function objects and replace them with their real function pointers. You can compare Python objects' identities in C++ using object1.ptr() == object2.ptr(), which is equivalent to id(object1) == id(object2) in Python.
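For example, a sketch of such a check inside CreateSomeClassFromPython (mymodule and some_function are the names from the question; the C++ symbol some_function is assumed to be visible here):
namespace py = boost::python;

py::object known = py::import("mymodule").attr("some_function");
if (callable.ptr() == known.ptr()) {
    //Known function: skip the Python round-trip and use the real pointer.
    return std::auto_ptr<SomeClass>(new SomeClass(&some_function));
}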
Finally, you can of course combine the general approach with the enum approach. Be aware when doing this that Boost.Python's overloading rules are different from C++'s, and this can bite you when dealing with functions like CreateSomeClassFromPython. Boost.Python tests functions in the order that they are def'd to see if the runtime arguments can be converted to the C++ argument types. So, CreateSomeClassFromPython will prevent single-argument constructors def'd later than it from being used, because its argument matches any Python object. Be sure to put it after other single-argument __init__ functions.
If you find yourself doing this sort of thing a lot, then you might want to look at the general boost::function wrapping technique (mentioned on the same page with the named constructor technique): http://wiki.python.org/moin/boost.python/HowTo?action=AttachFile&do=view&target=py_boost_function.hpp.
