Python ioapiTools module can't do basic math operations

I've installed ioapiTools, a Python module for managing IOAPI-format files. The module is supposed to handle files and perform operations on them, including basic arithmetic. But something is wrong: when I try to, say, multiply an array by a float or an integer, the result is a zero-valued array (even though both the array and the float/integer are non-zero).
The module in question creates a temporary variable using cdms2 according to the following syntax:
import cdms2 as cdms, cdtime, MV2 as MV, cdutil
import numpy as N
..........
def __mul__(self, other):
    """
    Wrapper around cdms tvariable multiply
    """
    tmpVar = cdms.tvariable.TransientVariable.__mul__(self, other)
    iotmpVar = createVariable(tmpVar, self.ioM, id=self.id,
                              attributes=self.attributes, copyFlag=False)
    return iotmpVar
But the returned variable contains nothing but zeros.
Any ideas?

I tried to use ioapiTools; the latest version I found was 0.3.2, from http://www2-pcmdi.llnl.gov/Members/azubrow/ioapiTools/download-source-file .
Unfortunately, the code doesn't seem to have kept up with the evolution of CDAT, which now recommends numpy instead of Numeric. The automated translation tool may resolve some problems, but not all. For example, the class iovar (defined in ioapiTools.py:2103) now needs to have a __new__ method, as it is a subclass of a numpy masked array (I don't know how things were in Numeric). With that, I seem to have __mul__ working. I couldn't reproduce your problem, though, because I couldn't even get an instance of iovar without a __new__ method defined.
I can pass what I got to you if you still need it, but I am sure there are more problems hiding... let me know if you need it, though.
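To illustrate the __new__ point, here is a minimal, standalone sketch, assuming nothing about the actual ioapiTools code (the class name and behaviour below are hypothetical): a subclass of numpy's MaskedArray defines a __new__ that views the input data as the subclass, after which scalar multiplication returns sensible, non-zero values:

```python
import numpy as N

# Hypothetical stand-in for ioapiTools' iovar class: a subclass of
# numpy's MaskedArray that defines __new__, as described above.
class IoVar(N.ma.MaskedArray):
    def __new__(cls, data, **kwargs):
        # View the input as an instance of this subclass so that
        # arithmetic results keep the subclass type.
        return N.ma.masked_array(data, **kwargs).view(cls)

v = IoVar([1.0, 2.0, 3.0])
result = v * 2.0  # a non-zero result, as expected
```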

Related

Python library architecture with two different math backends, how to?

I want to build a library in Python for scientific computing, specifically for natural-coordinates mechanics. I want to think a bit before jumping into the code, so this post asks for advice and ideas, good or bad. I'm not a Python pro, and I'm sure there are experts around.
First, I want my library to be usable with numpy (numeric computing) and casadi (symbolic computing). To be as simple as possible, I'd like the library to be loadable:
import my_package_numpy # option 1
import my_package_casadi # option 2
Second, I want each class or each function and method to work both with these two types of objects ndarrays (numpy) and MX (casadi).
my_var_np = np.array([2, 2])     # numeric
my_var_mx = MX.sym("var", 2, 1)  # symbolic
from my_package_numpy import HelloWorld # option 1
hw = HelloWorld(my_var_np)
from my_package_casadi import HelloWorld # option 2
hw = HelloWorld(my_var_mx)
Third, inside specific methods I want to know which backend the user is using, because math operations may be defined differently. (Is there a global variable available to know that?)
class HelloWorld:
    def __init__(self, value):
        self.value = value
        if numpy:
            self.transpose_value = value.transpose()
        elif casadi:
            self.transpose_value = transpose(value)
I know the ifs are not well written; I don't know where to get those values.
This library could grow fast. I want to avoid as much as possible copied code and things to be written only once.
Any help, advice, or comment, would be appreciated.
I'm trying to build a package with two math backends, and I expect it to be as simple as possible for the user, but I don't know how to structure the backend.
Maybe you could use an if condition to check the type of the value, like this (if the value is a numpy array, then use numpy):
import numpy as np

class HelloWorld:
    def __init__(self, value):
        self.value = value
        if isinstance(value, np.ndarray):
            self.transpose_value = value.transpose()
        else:  # otherwise assume a casadi MX
            self.transpose_value = transpose(value)
I don't know if this is the right way, but maybe it works for you :)
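Another option that avoids scattering if/elif checks through every method is to route all backend-specific math through a small backend object. The sketch below is runnable with numpy only; the casadi side is a hypothetical illustration, not tested against the real casadi API:

```python
import numpy as np

# Each backend exposes the same small interface of primitives.
class NumpyBackend:
    @staticmethod
    def transpose(value):
        return value.transpose()

# A hypothetical casadi backend would implement the same interface:
# class CasadiBackend:
#     @staticmethod
#     def transpose(value):
#         return casadi.transpose(value)

class HelloWorld:
    def __init__(self, value, backend=NumpyBackend):
        self.value = value
        # All backend-specific operations go through `backend`,
        # so the class body is written only once.
        self.transpose_value = backend.transpose(value)

hw = HelloWorld(np.array([[1, 2], [3, 4]]))
```

With this shape, a my_package_numpy / my_package_casadi split reduces to two thin modules that each bind the appropriate backend as a default, while all the shared code lives in one place.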

How to pass Initialized Array as OUT parameter to COM call

I have a commercial package which has a COM interface. I am trying to control it via the COM interface from Python. Most things are working just fine with regular input parameters and outputs.
However one particular set of functions appear to take a pre-allocated data structure as input which they will then fill out with the results of the query. So: an out-parameter of type array in this instance.
Some helpful example VBA code which accompanies the product alongside an Excel spreadsheet seems to work just fine. It declares the input array as: Dim myArray(customsize) as Integer
This then gets passed directly into the call on the COM object: cominterface.GetContent(myArray)
However when I try something similar in Python, I either get errors or no results depending on how I try to pass the array in:
import comtypes
from ctypes import c_ulong
... code to generate bindings, create object, grab the interface ...
# create an array for storing the results
my_array_type = c_ulong * 1000
my_array_instance = my_array_type()
# attempt to pass the array into the call on the COM interface
r = cominterface.GetContent(my_array_instance)
# expect to see ids 1, 2, 3, 4
print(my_array_instance)
The above gives me error:
TypeError: Cannot put <__main__.c_ulong_Array_1000 object at 0x00000282514F5640> in VARIANT
So it would seem that comtypes does not support passing ctypes arrays through, as it tries to make the argument a VARIANT.
Thus a different attempt:
# create an array for storing the results
my_array_instance = [0] * 100
# attempt to pass the array into the call on the COM interface
r = cominterface.GetContent(my_array_instance)
# expect to see ids 1, 2, 3, 4
print(my_array_instance)
The above call has a return code indicating success, but the array is unchanged and still contains the initial 0's it was pre-seeded with.
So I am assuming that comtypes is somehow not transporting the written values back into the Python list. But that's a big assumption; I really don't know.
I have tried a number of things, including POINTER and byref(). Almost everything results in some kind of error, either in the code doing the bindings or an error from the COM function saying the parameter does not meet its requirements.
If someone knows how I can pass in a pre-allocated array for this COM function to write to, I would be very much appreciative.
EDIT:
I rewrote the code in C# and it had the same problem, so I began to suspect the COM interface was not correct. By providing my own interface with modified function signatures (adding a 'ref' for the parameters), I was able to get the calls to work.
I suspect the tlb file was in error and happened to work with VBA, but I am unsure.

How do I use the GeometryConstraint class?

I've been trying to get this to work for a long time now. I've read the docs here, but I can't seem to understand how to implement the GeometryConstraint.
Normally, the derivative version of this would be:
geometryConstraintNode = pm.geometryConstraint(target, object)
However, in Pymel it looks a little nicer when setting attributes, which is why I want to use it: it's much more readable.
I've tried this:
geometryConstraintNode = nt.GeometryConstraint(target, object).setName('geoConstraint')
But no luck, can someone take a look?
Shannon
this doesn't work for you?
import pymel.core as pm
const = pm.geometryConstraint('pSphere1', 'locator1', n='geoConstraint')
print const
const.rename('fred')
print const
output would be
geoConstraint
fred
and a constraint object named 'fred'.
The pymel node is the return value that comes back from the command defined in pm.animation.geometryConstraint. What it returns is a class wrapper for the actual in-scene constraint, which is defined in pm.nodetypes.GeometryConstraint. It's the class version where you get to do all the attribute setting, etc; the command version is a match for the same thing in maya.cmds with sometimes a little syntactic sugar added.
In this case, the pymel node is like any other pymel node, so things like renaming use the same '.rename' functionality inherited from DagNode. You could also use functions inherited from Transform, like 'getChildren()' or 'setParent()'. The docs make this clear in a roundabout way by including the inheritance tree at the top of the nodetype's page. Basically, all pynode returns will share at least DagNode (stuff like naming) and usually Transform (things like move, rotate, parent) or Shape (query components, etc.).

Pickling cv2.KeyPoint causes PicklingError

I want to find SURF features in all images in a given directory and save their keypoints and descriptors for future use. I decided to use pickle, as shown below:
#!/usr/bin/env python
import os
import pickle
import cv2

class Frame:
    def __init__(self, filename):
        surf = cv2.SURF(500, 4, 2, True)
        self.filename = filename
        self.keypoints, self.descriptors = surf.detect(cv2.imread(filename, cv2.CV_LOAD_IMAGE_GRAYSCALE), None, False)

if __name__ == '__main__':
    Fdb = open('db.dat', 'wb')
    base_path = "img/"
    frame_base = []
    for filename in os.listdir(base_path):
        frame_base.append(Frame(base_path + filename))
        print filename
    pickle.dump(frame_base, Fdb, -1)
    Fdb.close()
When I try to execute it, I get the following error:
File "src/pickle_test.py", line 23, in <module>
pickle.dump(frame_base,Fdb,-1)
...
pickle.PicklingError: Can't pickle <type 'cv2.KeyPoint'>: it's not the same object as cv2.KeyPoint
Does anybody know what it means and how to fix it? I am using Python 2.6 and OpenCV 2.3.1.
Thank you a lot
The problem is that you cannot dump cv2.KeyPoint to a pickle file. I had the same issue, and managed to work around it by essentially serializing and deserializing the keypoints myself before dumping them with Pickle.
So represent every keypoint and its descriptor with a tuple:
temp = (point.pt, point.size, point.angle, point.response, point.octave,
        point.class_id, desc)
Append all these points to some list that you then dump with Pickle.
Then when you want to retrieve the data again, load all the data with Pickle:
temp_feature = cv2.KeyPoint(x=point[0][0], y=point[0][1], _size=point[1], _angle=point[2],
                            _response=point[3], _octave=point[4], _class_id=point[5])
temp_descriptor = point[6]
Create a cv2.KeyPoint from this data using the above code, and you can then use these points to construct a list of features.
I suspect there is a neater way to do this, but the above works fine (and fast) for me. You might have to play around with your data format a bit, as my features are stored in format-specific lists; I tried to present the idea above in its generic form. I hope this helps.
Part of the issue is that cv2.KeyPoint is a function in Python that returns a cv2.KeyPoint object. Pickle is getting confused because, literally, "<type 'cv2.KeyPoint'> [is] not the same object as cv2.KeyPoint". That is, cv2.KeyPoint is a function object, while the type was cv2.KeyPoint. Why OpenCV is like that, I can only guess unless I go digging. I have a feeling it has something to do with it being a wrapper around a C/C++ library.
Python does give you the ability to fix this yourself. I found the inspiration on this post about pickling methods of classes.
I actually use this clip of code, highly modified from the original in the post
import copyreg
import cv2

def _pickle_keypoints(point):
    return cv2.KeyPoint, (*point.pt, point.size, point.angle,
                          point.response, point.octave, point.class_id)

copyreg.pickle(cv2.KeyPoint().__class__, _pickle_keypoints)
Key points of note:
In Python 2, you need to use copy_reg instead of copyreg and point.pt[0], point.pt[1] instead of *point.pt.
You can't directly access the cv2.KeyPoint class for some reason, so you make a temporary object and use that.
The copyreg patching will use the otherwise problematic cv2.KeyPoint function as I have specified in the output of _pickle_keypoints when unpickling, so we don't need to implement an unpickling routine.
And to be nauseatingly complete: cv2::KeyPoint::KeyPoint is an overloaded function in C++, but in Python this isn't exactly a thing. Whereas in C++ there's an overload that takes the point as the first argument, in Python it would try to interpret the point as an int instead. The * unpacks the point into two arguments, x and y, to match the constructor that takes the coordinates separately.
I had been using casper's excellent solution until I realized this was possible.
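The copyreg mechanism itself can be demonstrated without OpenCV. The toy class below would pickle fine on its own, but the registration is the same pattern used above for cv2.KeyPoint; the names here are illustrative only:

```python
import copyreg
import pickle

# Stand-in for an extension type such as cv2.KeyPoint.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def _pickle_point(p):
    # Return the callable pickle should invoke on load, plus the
    # argument tuple to pass to it.
    return Point, (p.x, p.y)

# Register the reducer; pickle will now consult it for Point objects.
copyreg.pickle(Point, _pickle_point)

restored = pickle.loads(pickle.dumps(Point(1.0, 2.0)))
```

Because the reducer names the callable to invoke at load time, no separate unpickling routine is needed, which is exactly why the cv2.KeyPoint version above works with a single registration call.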
A similar solution to the one provided by Poik. Just call this once before pickling:
import copyreg
import cv2

def patch_KeyPoint_pickling():
    # Create the bundling between class and arguments to save for the KeyPoint class
    # See: https://stackoverflow.com/questions/50337569/pickle-exception-for-cv2-boost-when-using-multiprocessing/50394788#50394788
    def _pickle_keypoint(keypoint):  # : cv2.KeyPoint
        return cv2.KeyPoint, (
            keypoint.pt[0],
            keypoint.pt[1],
            keypoint.size,
            keypoint.angle,
            keypoint.response,
            keypoint.octave,
            keypoint.class_id,
        )
    # C++ constructor, note the order of arguments:
    # KeyPoint(float x, float y, float _size, float _angle=-1, float _response=0, int _octave=0, int _class_id=-1)
    # Apply the bundling to pickle
    copyreg.pickle(cv2.KeyPoint().__class__, _pickle_keypoint)
This is less about the code than the incredibly clear explanation available here: https://stackoverflow.com/a/50394788/11094914
Please note that if you want to extend this idea to other "unpicklable" OpenCV classes, you only need to build a function similar to _pickle_keypoint. Be sure to store the attributes in the same order the constructor expects them. You can consider copying the C++ constructor signature, even in Python, as I did; for the most part, the C++ and Python constructors do not differ much.
I had an issue with the "pt" tuple. However, a C++ constructor exists that takes the X and Y coordinates separately, which allows this fix/workaround.

Is it possible to pickle python "units" units?

I'm using the Python "units" package (http://pypi.python.org/pypi/units/) and I've run into some trouble when trying to pickle units. I've tried to boil it down to the simplest possible case to figure out what's going on. Here's my simple test:
import logging
import pickle

from units import unit, named_unit
from units.predefined import define_units
from units.compatibility import compatible
from units.registry import REGISTRY

a = unit('m')
a_p = pickle.dumps(a)
a_up = pickle.loads(a_p)

logging.info(repr(unit('m')))
logging.info(repr(a))
logging.info(repr(a_up))
logging.info(a.is_si())
logging.info(a_up.is_si())
logging.info(compatible(a, a_up))
logging.info(a(10) + a_up(10))
The output I'm seeing when I run this is:
LeafUnit('m', True)
LeafUnit('m', True)
LeafUnit('m', True)
True
True
False
IncompatibleUnitsError
I'd understand if pickling units broke them, if it weren't for the fact that repr() is returning identical results for them. What am I missing?
This is using v0.04 of the units package and the Google App Engine 1.4 SDK.
It seems that the problem is not that Unit instances are unpicklable, since your case shows otherwise, but rather that the deserialized instance does not compare equal to the original instance, hence they're treated as incompatible units even though they're equivalent.
I have never used units before, but after skimming its source it seems that the problem is that units.compatibility.compatible checks whether both instances compare equal, but neither LeafUnit nor its bases define an __eq__ method, hence object identity is checked instead (per Python's semantics).
That is, two unit instances will only compare equal if they are the same instance (the same memory address, etc.), not two equivalent ones. Normally, after you unpickle a serialized instance it will not be the same instance as the original (equivalent, yes, but not the same).
A solution could be to monkey-patch units.abstract.AbstractUnit to have an __eq__ method:
AbstractUnit.__eq__ = lambda self, other: repr(self)==repr(other)
Note that comparing the instances' representations is suboptimal, but since I'm not familiar with units it's the best I can come up with. Better to ask the author(s) to make units more "comparison friendly".
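The identity-equality problem and the monkey-patch can be reproduced with a stdlib-only stand-in; the Unit class below is illustrative, not the actual units code:

```python
import pickle

# Minimal stand-in for units' LeafUnit: no __eq__ defined, so
# equality falls back to object identity.
class Unit:
    def __init__(self, specifier):
        self.specifier = specifier

a = Unit("m")
a_up = pickle.loads(pickle.dumps(a))
before = (a == a_up)   # False: equivalent, but not the same instance

# Monkey-patch an __eq__ after the fact (comparing an attribute is a
# bit more robust than comparing repr() strings):
Unit.__eq__ = lambda self, other: self.specifier == other.specifier
after = (a == a_up)    # True
```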
If you'd like pickle to create the same kind of instances as your code does, you could register a __reduce__() implementation in copy_reg.dispatch_table:
import copy_reg
from units import LeafUnit

def leafunit_reduce(self):
    return LeafUnit, (self.specifier, self.is_si())

copy_reg.pickle(LeafUnit, leafunit_reduce)
