I'm generating a very large lookup table in C++ and using it from a variety of C++ functions. These functions are exposed to Python using boost::python.
When the functions are not used as part of a class, I get the desired behaviour. When I try to use them from a class written in pure Python, I run into issues accessing the lookup table.
------------------------------------
C++ for some numerical heavy lifting
------------------------------------
#include <boost/python.hpp>

short really_big_array[133784630];

void fill_the_array(){
    //Do some things which populate the array
}

int use_the_array(std::string thing){
    //Look up a few million things in the array depending on thing
    //Return some int dependent on the values found
}

BOOST_PYTHON_MODULE(bigarray){
    using namespace boost::python;
    def("use_the_array", use_the_array);
    def("fill_the_array", fill_the_array);
}
---------
PYTHON Working as desired
---------
import bigarray
if __name__ == "__main__"
bigarray.fill_the_array()
bigarray.use_the_array("Some Way")
bigarray.use_the_array("Some Other way")
#All works fine
---------
PYTHON Not working as desired
---------
import bigarray

class BigArrayThingDoer():
    """Class to do stuff which uses the big array,
    use_the_array() in a few different ways
    """
    def __init__(self, other_stuff=None):
        bigarray.fill_the_array()
        self.other_stuff = other_stuff

    def use_one_way(thing):
        return bigarray.use_the_array(thing)

if __name__ == "__main__":
    big_array_thing_doer = BigArrayThingDoer()
    big_array_thing_doer.use_one_way("Some Way")
    #Segfaults or random data
I think I am probably not exposing enough to Python to make sure the array is accessible at the right times, but I'm not quite sure exactly what I should be exposing. It's equally likely that there is some sort of problem involving ownership of the lookup table.
I don't ever need to manipulate the lookup table except via the other C++ functions.
I was missing a self in a method definition. I had also used keyword arguments where I should probably have used keyword-only arguments, so I got no error when running.
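For completeness, a minimal sketch of the corrected method (nothing on the C++ side needed to change):

def use_one_way(self, thing):
    # the first parameter of an instance method must be self; without it,
    # `thing` silently receives the instance instead of the string argument
    return bigarray.use_the_array(thing)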
I am currently working on a project where I am trying to make a GUI for programming an I2C device. I have a bunch of register addresses and the values in those registers, both as hex strings, like so:
[Address, Value][0x00, 0x01][0x01, 0xFF][0x02, 0xA0]... //not code, just abstract representation
Each register value represents something different. I have to convert each register value from a hex string to its associated human-understandable representation. My plan for this is to create a dictionary whose keys are the register addresses and whose values are register objects. The register object would contain information about the register, like a description, and a conversion function which takes in the hex string and outputs a converted value. The conversion function is different for every register, since they represent different things. I want to create a generic register class and pass in each register's unique conversion function at instantiation. Passing this function is where I'm not sure if I'm making things more complicated than they need to be.
My code currently looks like this:
using System;

class my_register
{
    public string Description { get; private set; }

    //using a delegate type to be able to store a function specified outside of the class
    public delegate string convert_del(string reg_val);
    public convert_del conversion_func;

    //constructor uses the delegate to store the function passed at instantiation
    public my_register(string desc, Func<string, string> convert_func_input)
    {
        this.Description = desc;
        this.conversion_func = new convert_del(convert_func_input);
    }
}

//Now I can create the object and pass in the function
class Program
{
    static void Main()
    {
        // lambda function is just a simple placeholder to show that I can pass a function
        my_register first_reg = new my_register("Temp",
            reg_val => (Convert.ToInt32(reg_val, 16) + 10).ToString());
        Console.WriteLine(first_reg.conversion_func("0x0A")); //output is 20
    }
}
This code works, at least for the minimal testing I ran. It took me a long time to figure out, perhaps because I just didn't understand delegates well, but I am wondering if this is a convoluted way of going about this in C#. I come from a Python background, though I'm not particularly skilled in that either. The way I would do this in Python is as follows:
class my_register(object):
    def __init__(self, desc, conversion):
        self.Description = desc
        self.conversion = conversion

def temp_convert(reg_val):
    return str(int(reg_val, 0) + 10)

first_reg = my_register("Temp", temp_convert)
first_reg.conversion("0x0A") #returns '20'
The pythonic way just seems way simpler, so I'm wondering if there is a better, more canonical way of achieving this function passing in C#, or if there is a way to avoid passing the function altogether?
I have a commercial package which has a COM interface. I am trying to control it via the COM interface from Python. Most things are working just fine with regular input parameters and outputs.
However one particular set of functions appear to take a pre-allocated data structure as input which they will then fill out with the results of the query. So: an out-parameter of type array in this instance.
Some helpful example VBA code which accompanies the product alongside an Excel spreadsheet seems to work just fine. It declares the input array as: Dim myArray(customsize) as Integer
This then gets passed directly into the call on the COM object: cominterface.GetContent(myArray)
However when I try something similar in Python, I either get errors or no results depending on how I try to pass the array in:
import comtypes
from ctypes import c_ulong

# ... code to generate bindings, create object, grab the interface ...

# create an array for storing the results
my_array_type = c_ulong * 1000
my_array_instance = my_array_type()

# attempt to pass the array into the call on the COM interface
r = cominterface.GetContent(my_array_instance)

# expect to see id's 1,2,3,4
print(my_array_instance)
The above gives me error:
TypeError: Cannot put <__main__.c_ulong_Array_1000 object at 0x00000282514F5640> in VARIANT
So it would seem that comtypes does not support ctypes arrays for passing through, as it tries to wrap the argument in a VARIANT.
Thus a different attempt:
# create an array for storing the results
my_array_instance = [0] * 100

# attempt to pass the array into the call on the COM interface
r = cominterface.GetContent(my_array_instance)

# expect to see id's 1,2,3,4
print(my_array_instance)
The above call has a return code indicating success, but the array is unchanged and still contains the initial 0's it was pre-seeded with.
So I am assuming here that comtypes is somehow not transporting the written values back into the Python list. But that's a big assumption - I really don't know.
I have tried a number of things, including POINTER and byref(). Almost everything results in some kind of error - either in the code doing the bindings, or an error from the COM function I am calling to say the parameter does not meet its requirements.
If someone knows how I can pass in a pre-allocated array for this COM function to write to, I would be very much appreciative.
EDIT:
I rewrote the code in C# and it had the same problem, so I began to suspect the COM interface was not correct. By providing my own interface with modified function signatures (adding a 'ref' for the parameters), I was able to get the calls to work.
I suspect the tlb file was in error and happened to work with VBA, but I am unsure.
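For anyone attempting the pure-Python route first: a pattern worth trying with comtypes for in/out array parameters is to wrap the data in a VARIANT and pass it with byref, so the marshaller has somewhere to write the results back. A minimal sketch (untested against this particular interface, and it only helps if the type library actually declares the parameter as in/out):

from ctypes import byref
from comtypes.automation import VARIANT

# wrap the pre-allocated data in a VARIANT so comtypes can marshal it,
# and pass it by reference so the COM server can write into it
v = VARIANT([0] * 100)
r = cominterface.GetContent(byref(v))  # cominterface as in the question
print(v.value)  # the values written back, if the IDL is correct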
I want to test my code that is based on an API created by someone else, but I'm not sure how I should do this.
I have created a function to save the JSON into a file so I don't need to send requests each time I run the test, but I don't know how to make it work in a situation where the original function (check) takes an input argument (problem_report) which is an instance of some class provided by the API, and it has this problem_report.get_correction(corr_link) method. I just wonder if this is a sign of badly written code on my part, because I can't write a test for this, or whether I should rewrite this function in my tests file as I showed at the end of the code provided below.
import re

# I want to test this function
def check(problem_report):
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
# function serves to load json from a file; normally it is downloaded by the API from some page.
def load_pr(pr_id):
    print('loading')
    with open('{}{}_view_pr.json'.format(saved_prs_path, pr_id)) as view_pr:
        view_pr = json.load(view_pr)
        ...
        pr_info = {'view_pr': view_pr, ...}
    return pr_info
# create an instance of class MyPR which takes the json in __init__
@pytest.fixture
def setup_pr():
    print('setup')
    pr = load_pr('123')
    my_pr = MyPR(pr['view_pr'])
    return my_pr
# test function
def test_check(setup_pr):
    pr = setup_pr
    checked_pr = pr.check(setup_rft[1]['problem_report_pr'])
    assert checked_pr
# rewritten check function in the test file
@mock.patch('problem_report.get_correction', side_effect=get_corr)
def test_check(problem_report):
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
I'm not sure if I provided enough code and explanation to understand the problem, but I hope so. I wish you could tell me whether it is normal that some functions are just hard to test, and whether it is good practice to rewrite them separately so I can mock functions inside the tested function. I was also thinking that I could write a new class with similar functionality, but the API is very large and it would be a very long process.
I understand your question as follows: you have a function check that you consider hard to test because of its dependency on the problem_report. To make it better testable you have copied the code into the test file. You will test the copied code, because you can modify the copy to be easier to test. And you want to know if this approach makes sense.
The answer is no, this does not make sense. You are not testing the real function, but completely different code. Well, the code may not start out completely different, but in a short time the copy and the original will deviate, and it will be a maintenance nightmare to ensure that the copy always resembles the original. Improving code for testability is a different story: you can make changes to the check function to improve its testability. But then exactly the same resulting function should be used both in the test and in the production code.
How do you better test the function check, then? First, are you sure that the original problem_report objects really cannot sensibly be used in your tests? (Here are some criteria that help you decide: What to mock for python test cases?). Now, let's assume you come to the conclusion that you can not sensibly use the original problem_report.
In that case, the interface here is simple enough to define a mocked problem_report. Keep in mind that Python uses duck typing, so you only have to create a class that has a links member with an items() method. Plus, your mocked problem_report class needs a get_correction() method. Beyond that, your mock does not have to produce types that are similar to the types used by problem_report. The items() method can simply return a list of lists, like [["a",2],["xxxxdetailCorrectionxxxx",4]]. The same argument holds for get_correction, which could for example return a value derived from the link, like the negative of the id that belongs to it.
For the above example (items() returning [["a",2],["xxxxdetailCorrectionxxxx",4]] and get_correction returning the negative of the id that belongs to the link), the expected result would be {4: -4}. No need to simulate real correction objects. And you can create your mocked versions of problem_report without needing to read data from files - the mocks can be set up completely from within the unit-testing code.
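To make that concrete, here is a minimal sketch of such a hand-rolled fake (class and test names are illustrative; check is the function from the question):

class FakeProblemReport:
    # duck typing: check() only needs .links and .get_correction()
    links = {"a": 2, "xxxxdetailCorrectionxxxx": 4}

    def get_correction(self, corr_link):
        # stand-in for a real correction object: the negative of the link's id
        return -self.links[corr_link]

def test_check_filters_and_collects():
    assert check(FakeProblemReport()) == {4: -4}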
Try patching the problem_report symbol in the module. You should put your tests in a separate class.
@mock.patch('some.module.path.problem_report')
def test_check(problem_report):
    problem_report.side_effect = get_corr
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
I want to detect SURF features in all images in a given directory and save their keypoints and descriptors for future use. I decided to use pickle as shown below:
#!/usr/bin/env python
import os
import pickle
import cv2

class Frame:
    def __init__(self, filename):
        surf = cv2.SURF(500, 4, 2, True)
        self.filename = filename
        self.keypoints, self.descriptors = surf.detect(cv2.imread(filename, cv2.CV_LOAD_IMAGE_GRAYSCALE), None, False)

if __name__ == '__main__':
    Fdb = open('db.dat', 'wb')
    base_path = "img/"
    frame_base = []
    for filename in os.listdir(base_path):
        frame_base.append(Frame(base_path + filename))
        print filename
    pickle.dump(frame_base, Fdb, -1)
    Fdb.close()
When I try to execute it, I get the following error:
File "src/pickle_test.py", line 23, in <module>
pickle.dump(frame_base,Fdb,-1)
...
pickle.PicklingError: Can't pickle <type 'cv2.KeyPoint'>: it's not the same object as cv2.KeyPoint
Does anybody know what it means and how to fix it? I am using Python 2.6 and OpenCV 2.3.1.
Thank you a lot
The problem is that you cannot dump cv2.KeyPoint to a pickle file. I had the same issue, and managed to work around it by essentially serializing and deserializing the keypoints myself before dumping them with Pickle.
So represent every keypoint and its descriptor with a tuple:
temp = (point.pt, point.size, point.angle, point.response, point.octave,
        point.class_id, desc)
Append all these points to some list that you then dump with Pickle.
Then when you want to retrieve the data again, load all the data with Pickle:
temp_feature = cv2.KeyPoint(x=point[0][0], y=point[0][1], _size=point[1], _angle=point[2],
                            _response=point[3], _octave=point[4], _class_id=point[5])
temp_descriptor = point[6]
Create a cv2.KeyPoint from this data using the above code, and you can then use these points to construct a list of features.
I suspect there is a neater way to do this, but the above works fine (and fast) for me. You might have to play around with your data format a bit, as my features are stored in format-specific lists. I tried to present the idea above in its generic form. I hope this helps you.
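Putting the two halves together, a minimal sketch of the round trip (using the old cv2 API from the question; keypoints and descriptors are the detector's output, assuming one descriptor per keypoint):

import pickle
import cv2

# pack: one plain tuple per keypoint, plus its descriptor
packed = [(kp.pt, kp.size, kp.angle, kp.response, kp.octave, kp.class_id, desc)
          for kp, desc in zip(keypoints, descriptors)]
with open('db.dat', 'wb') as f:
    pickle.dump(packed, f, -1)

# unpack: rebuild cv2.KeyPoint objects from the stored tuples
with open('db.dat', 'rb') as f:
    features = [(cv2.KeyPoint(x=p[0][0], y=p[0][1], _size=p[1], _angle=p[2],
                              _response=p[3], _octave=p[4], _class_id=p[5]), p[6])
                for p in pickle.load(f)]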
Part of the issue is that cv2.KeyPoint is a function in Python that returns a cv2.KeyPoint object. pickle is getting confused because, literally, "<type 'cv2.KeyPoint'> [is] not the same object as cv2.KeyPoint". That is, cv2.KeyPoint is a function object, while the type is cv2.KeyPoint. Why OpenCV is like that, I can only guess at unless I go digging. I have a feeling it has something to do with it being a wrapper around a C/C++ library.
Python does give you the ability to fix this yourself. I found the inspiration in this post about pickling methods of classes.
I actually use this snippet of code, heavily modified from the original in the post:
import copyreg
import cv2

def _pickle_keypoints(point):
    return cv2.KeyPoint, (*point.pt, point.size, point.angle,
                          point.response, point.octave, point.class_id)

copyreg.pickle(cv2.KeyPoint().__class__, _pickle_keypoints)
Key points of note:
In Python 2, you need to use copy_reg instead of copyreg and point.pt[0], point.pt[1] instead of *point.pt.
You can't directly access the cv2.KeyPoint class for some reason, so you make a temporary object and use that.
The copyreg patching will use the otherwise problematic cv2.KeyPoint function as I have specified in the output of _pickle_keypoints when unpickling, so we don't need to implement an unpickling routine.
And to be nauseatingly complete, cv2::KeyPoint::KeyPoint is an overloaded function in C++, but overloading isn't exactly a thing in Python. Whereas in C++ there's a constructor that takes the point as the first argument, in Python it would try to interpret that as an int instead. The * unrolls the point into two arguments, x and y, to match the constructor that takes the coordinates separately.
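As a quick sanity check of the pattern (a sketch; the positional constructor arguments are x, y and size, though exact keyword names vary between OpenCV versions):

import pickle

kp = cv2.KeyPoint(10.0, 20.0, 1.0)
restored = pickle.loads(pickle.dumps(kp))  # works once the reducer is registered
assert restored.pt == kp.pt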
I had been using casper's excellent solution until I realized this was possible.
A similar solution to the one provided by Poik. Just call this once before pickling.
import copyreg
import cv2

def patch_KeyPoint_pickling():
    # Create the bundling between class and arguments to save for the KeyPoint class
    # See: https://stackoverflow.com/questions/50337569/pickle-exception-for-cv2-boost-when-using-multiprocessing/50394788#50394788
    def _pickle_keypoint(keypoint):  # : cv2.KeyPoint
        return cv2.KeyPoint, (
            keypoint.pt[0],
            keypoint.pt[1],
            keypoint.size,
            keypoint.angle,
            keypoint.response,
            keypoint.octave,
            keypoint.class_id,
        )
    # C++ constructor, notice the order of arguments:
    # KeyPoint (float x, float y, float _size, float _angle=-1, float _response=0, int _octave=0, int _class_id=-1)
    # Apply the bundling to pickle
    copyreg.pickle(cv2.KeyPoint().__class__, _pickle_keypoint)
More than for the code, this is for the incredibly clear explanation available there: https://stackoverflow.com/a/50394788/11094914
Please note that if you want to expand this idea to other "unpicklable" classes of OpenCV, you only need to build a similar function to _pickle_keypoint. Be sure that you store the attributes in the same order as the constructor expects them. You can consider copying the C++ constructor, even in Python, as I did; mostly the C++ and Python constructors do not seem to differ too much.
I had an issue with the pt tuple. However, a C++ constructor exists that takes the X and Y coordinates separately, which allows this fix/workaround.
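For instance, if you hit the same pickling error with cv2.DMatch, a sketch of the same pattern would be (attribute order assumed from the C++ constructor DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance); verify against your OpenCV version):

def _pickle_dmatch(m):
    # same trick: return the constructor plus its arguments, in constructor order
    return cv2.DMatch, (m.queryIdx, m.trainIdx, m.imgIdx, m.distance)

copyreg.pickle(cv2.DMatch().__class__, _pickle_dmatch)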
I know this must be a trivial question, but I've tried many different ways and searched quite a bit for a solution: how do I create and reference subfunctions in the current module?
For example, I am writing a program to parse through a text file, and for each of the 300 different names in it, I want to assign a category.
There are 300 of these, and I have a list of them structured to create a dict, so of the form lookup[key]=value (bonus question: any more efficient or sensible way to do this than a massive dict?).
I would like to keep all of this in the same module, but with the functions (dict initialisation, etc.) at the end of the file, so I don't have to scroll down 300 lines to see the code, i.e. laid out as in the example below.
When I run it as below, I get the error 'initLookups is not defined'. When I structure it so that it is initialisation, then function definition, then function use, there is no problem.
I'm sure there must be an obvious way to initialise the functions and associated dict without keeping the code inline, but I have tried quite a few so far without success. I could put it in an external module and import it, but would prefer not to, for simplicity.
What should I be doing in terms of module structure? Is there any better way than using a dict to store this lookup table (it is 300 unique text keys mapping onto approx 10 categories)?
Thanks,
Brendan
import .....  # (initialisation code, etc.)

initLookups()  # **Should create the dict - How should this be referenced?**
print getlookup(KEY)  # **How should this be referenced?**

def initLookups():
    global lookup
    lookup = {}
    lookup["A"] = "AA"
    lookup["B"] = "BB"
    (etc etc etc....)

def getlookup(name):
    if name in lookup.keys():
        result = lookup[name]
    else:
        result = ""
    return result
A function needs to be defined before it can be called. If you want to have the code that needs to be executed at the top of the file, just define a main function and call it from the bottom:
import sys

def main(args):
    pass

# All your other function definitions here

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
This way, whatever you reference in main will have been parsed and is hence known already. The reason for testing __name__ is that in this way the main method will only be run when the script is executed directly, not when it is imported by another file.
Side note: a dict with 300 keys is by no means massive, but you may want to either move the code that fills the dict to a separate module, or (perhaps fancier) store the key/value pairs in a format like JSON and load them when the program starts.
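For instance, a minimal sketch of the JSON variant (lookups.json is a hypothetical file holding the 300 key/value pairs):

import json

# load the whole lookup table in one go at startup
with open('lookups.json') as f:
    lookup = json.load(f)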
Here's a more pythonic way to do this. There aren't a lot of choices, BTW.
A function must be defined before it can be used. Period.
However, you don't have to strictly order all functions for the compiler's benefit. You merely have to put your execution of the functions last.
import .....  # (initialisation code, etc.)

def initLookups():  # Definitions must come before actual use
    lookup = {}
    lookup["A"] = "AA"
    lookup["B"] = "BB"
    (etc etc etc....)
    return lookup

# Any functions initLookups uses can be defined here,
# as long as they're findable in the same module.

if __name__ == "__main__":  # Use comes last
    lookup = initLookups()
    print lookup.get("Key", "")
Note that you don't need the getlookup function; it's a built-in feature of a dict, named get.
Also, "initialisation code" is suspicious. An import should not "do" anything. It should define functions and classes, but not actually provide any executable code. In the long run, executable code that is processed by an import can become a maintenance nightmare.
The most notable exception is a module-level Singleton object that gets created by default. Even then, be sure that the mystery object which makes a module work is clearly identified in the documentation.
If your lookup dict is unchanging, the simplest way is to just make it a module-scope variable, i.e.:
lookup = {
    'A' : 'AA',
    'B' : 'BB',
    ...
}
If you may need to make changes, and later re-initialise it, you can do this in an initialisation function:
def initLookups():
    global lookup
    lookup = {
        'A' : 'AA',
        'B' : 'BB',
        ...
    }
(Alternatively, lookup.update({'A':'AA', ...}) to change the dict in-place, affecting all callers with access to the old binding.)
However, if you've got these lookups in some standard format, it may be simpler to just load the file and build the dictionary from that.
You can arrange your functions as you wish. The only rule about ordering is that the accessed variables must exist at the time the function is called - it's fine if the function body references variables that don't exist yet, so long as nothing actually tries to use that function. I.e.:
def foo():
    print greeting, "World"  # Note that greeting is not yet defined when foo() is created

greeting = "Hello"
foo()  # Prints "Hello World"
But:
def foo():
    print greeting, "World"

foo()  # Gives an error - greeting not yet defined.
greeting = "Hello"
One further thing to note: your getlookup function is very inefficient. Using "if name in lookup.keys()" actually builds a list of the dict's keys and then iterates over that list to find the item, which loses all the performance benefit the dict gives. Instead, "if name in lookup" avoids this; or even better, use the fact that .get can be given a default to return if the key is not in the dictionary:
def getlookup(name):
    return lookup.get(name, "")
I think that keeping the names in a flat text file and loading them at runtime would be a good alternative. I try to stick to the lowest level of complexity possible with my data, starting with plain text and working up to an RDBMS (I lifted this idea from The Pragmatic Programmer).
Dictionaries are very efficient in Python; it's essentially what the whole language is built on. 300 items is well within the bounds of sane dict usage.
names.txt:
A = AAA
B = BBB
C = CCC
getname.py:
import sys

FILENAME = "names.txt"

def main(key):
    pairs = (line.split("=") for line in open(FILENAME))
    names = dict((x.strip(), y.strip()) for x, y in pairs)
    return names.get(key, "Not found")

if __name__ == "__main__":
    print main(sys.argv[-1])
If you really want to keep it all in one module for some reason, you could just stick a string at the top of the module. I think that a big swath of text is less distracting than a huge mess of dict initialization code (and easier to edit later):
import sys

LINES = """
A = AAA
B = BBB
C = CCC
D = DDD
E = EEE""".strip().splitlines()

PAIRS = (line.split("=") for line in LINES)
NAMES = dict((x.strip(), y.strip()) for x, y in PAIRS)

def main(key):
    return NAMES.get(key, "Not found")

if __name__ == "__main__":
    print main(sys.argv[-1])