I have installed the official Python bindings for OpenCV and I am implementing some standard textbook functions just to get used to the Python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site...
The simple call blah = cv.CvSize(inp.width/2, inp.height/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'.
Is there an equivalent structure? Do I need something more? Thanks.
It seems that they eventually opted to avoid this structure altogether. Instead, a plain Python tuple (width, height) is used.
To add a bit more to Nathan Keller's answer: in later versions of OpenCV some basic structures are simply represented as Python tuples.
For example in OpenCV 2.4:
This is incorrect and will give an error:
image = cv.LoadImage(sys.argv[1])
grayscale = cv.CreateImage(cvSize(image.width, image.height), 8, 1)
It would instead be written like this:
image = cv.LoadImage(sys.argv[1])
grayscale = cv.CreateImage((image.width, image.height), 8, 1)
Note how we just pass the tuple directly.
The right call is cv.cvSize(inp.width/2, inp.height/2).
All functions in the Python OpenCV bindings start with a lowercase c, even in the highgui module.
Perhaps the documentation is wrong and you have to use cv.cvSize instead of cv.CvSize?
Also, do a dir(cv) to find out the methods available to you.
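For instance, a minimal sketch of that kind of inspection (only assuming 'import cv' works, as in the question):
import cv

# Print every attribute whose name mentions "size", to see whether the
# bindings expose cvSize, CvSize, or neither.
print([name for name in dir(cv) if 'size' in name.lower()])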
I am trying to use the MATLAB engine from Python as follows:
sinewave = eng.dsp.SineWave("Amplitude",1,"Frequency",fc,"SampleRate",fs,"SamplesPerFrame",nspf,"ComplexOutput",True)
sinewave()
to use the DSP toolbox. However, I get the error "'matlab.object' object is not callable" on the sinewave() call.
Doing the same directly in MATLAB (without the eng.) works just fine. What is the correct way to call sinewave from Python?
Ok got it.
For anyone who's looking: once you get your object, add it to the MATLAB workspace and then call the workspace variable using eval. In this case, that would be:
sinewave = eng.dsp.SineWave("Amplitude",1,"Frequency",fc,"SampleRate",fs,"SamplesPerFrame",nspf,"ComplexOutput",True)
eng.workspace['sinewave'] = sinewave
x = eng.eval('sinewave()')
And since in MATLAB my x would have been a vector, in Python my x is a Python list, so I can print the first element using x[0] or do whatever else I want. I can also add x to the workspace and keep working like that.
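A quick sketch of that follow-up step (same variable names as above, x assumed to come from eng.eval):
print(x[0])               # first element of the returned vector
eng.workspace['x'] = x    # push x back into the MATLAB workspace for later use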
I'm running one of the examples from the skimage documentation and struggling to find either the multiscale_basic_features referenced there as an attribute of the skimage.feature module or its current equivalent. Does anyone know what I can substitute for it in the following segment of their code?
features_func = partial(feature.multiscale_basic_features, intensity=True, edges=False, texture=True,
                        sigma_min=sigma_min, sigma_max=sigma_max,
                        multichannel=True)
You probably need to upgrade your scikit-image version. That function was only added in 0.18.1.
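A quick way to confirm this (a minimal check, nothing specific to the example above):
import skimage
print(skimage.__version__)   # must be recent enough to include feature.multiscale_basic_features

# If it is older, upgrade, for example with:
#   pip install -U scikit-image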
I am trying to use the resize function with anti-aliasing exactly as described in the documentation: http://scikit-image.org/docs/dev/auto_examples/transform/plot_rescale.html
from skimage.transform import resize
im_test = resize(im_test, (im_test.shape[0] / 3, im_test.shape[1] / 3), anti_aliasing=True)
However this returns:
Scikit image: resize() got an unexpected keyword argument 'anti_aliasing'
What is the reason for this? Is anti_aliasing on by default? What is the best way to resize an image with anti-aliasing if this function can't be used?
Checking the code with git blame, it seems this parameter was introduced on 19.09.2017.
The first release version supporting this should be v0.14, which you will need then!
To check which version you are currently using, I recommend opening the interpreter of your Python distribution and doing:
import skimage as sk
sk.__version__
# '0.13.0' -> this version would not be able to use it, it seems
There are two sets of documentation:
1) http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.resize
2) http://scikit-image.org/docs/0.11.x/api/skimage.transform.html#resize
The second one doesn't accept anti_aliasing as a parameter and is the 0.11 version; the one that accepts anti_aliasing is 0.14.
It looks like the older version uses a box filter while resizing, and all pixels have equal weight.
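If upgrading is not possible, one workaround (a sketch only, not what scikit-image itself does internally) is to apply a Gaussian blur before downsampling; the sigma below is a rough rule of thumb, not the library's exact formula:
from skimage.filters import gaussian
from skimage.transform import resize

factor = 3
smoothed = gaussian(im_test, sigma=(factor - 1) / 2.0)   # pre-smooth to suppress aliasing
im_small = resize(smoothed, (im_test.shape[0] // factor, im_test.shape[1] // factor))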
I have a working Python 2.7 program that calls a DLL. I am trying to port the script to Python 3.2. The DLL call seems to work (i.e. there is no error upon calling), but the returned data does not make sense.
Just in case it could be useful:
- The call takes three arguments: two ints (input) and a pointer to a ushort array (output).
I have tried using both Python and numpy arrays, without success.
Can anyone enumerate the differences between Python 2.7 and 3.2 regarding ctypes?
Thanks in advance
EDIT
Here is some example code. The DLL is proprietary so I do not have its source, but I do have the C header:
void example (int width, int height, unsigned short* pointer)
The Python code is:
import ctypes as ct
import numpy as np

# DLL is the proprietary library, loaded elsewhere with ct.CDLL(...)
width, height = 40, 100
imagearray = np.zeros((width, height), dtype=np.dtype(np.ushort))
image = np.ascontiguousarray(imagearray)
ptrimage = image.ctypes.data_as(ct.POINTER(ct.c_ushort))
DLL.example(width, height, ptrimage)
This works in Python 2.7 but not in 3.2.
EDIT 2
If the changes in ctypes are only those pointed out by Cedric, it does not make sense that Python 3.2 does not work. So, looking again at the code, I found that there is a preparation function that is called before the function I mentioned. Its signature is:
void prepare(char *table)
In Python, I am calling it with:
table = str(aNumber)
DLL.prepare(table)
Is it possible that the problem is due to the change in the Python string handling?
In Python 2.7, strings are byte strings by default. In Python 3.x, they are Unicode by default. Try explicitly making your string a byte string using .encode('ascii') before handing it to DLL.prepare.
Edit:
# another way of writing table = str(aNumber).encode('ascii')
table = bytes(str(aNumber), 'ascii')
DLL.prepare(table)
In our case, we had code looking like:
addr = clib.some_function()
data = some_struct.from_address(addr)
This worked in Python 2 but not in 3. The reason turned out not to be any difference in ctypes, but rather a change in memory layout that unmasked a bug in the code above. In Python 2, the address returned was always (by chance) small enough to fit inside a C int (32-bit), which is the default return type for all ctypes function calls. In Python 3, the addresses were almost always too large, which caused the pointer address to become corrupted as it was coerced to int.
The solution is to set the function restype to a 64-bit integer type to ensure that it can accommodate the whole address, like:
clib.some_function.restype = c_longlong
or:
clib.some_function.restype = POINTER(some_struct)
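Put together, a minimal sketch of the fixed pattern (library and struct names are placeholders, not from the question):
import ctypes

class SomeStruct(ctypes.Structure):          # hypothetical struct layout
    _fields_ = [("value", ctypes.c_int)]

clib = ctypes.CDLL("./libsome.so")           # hypothetical library name
# Declare the return type before the first call, so ctypes does not
# truncate the returned pointer to a 32-bit int (its default restype).
clib.some_function.restype = ctypes.POINTER(SomeStruct)

data = clib.some_function().contents         # same effect as from_address(addr)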
According to the Python documentation, the only change in ctypes between 2.7 and 3.2 is this:
A new type, ctypes.c_ssize_t represents the C ssize_t datatype.
In 2.7, some other modifications were introduced:
The ctypes module now always converts None to a C NULL pointer for arguments declared as pointers. (Changed by Thomas Heller; issue 4606.) The underlying libffi library has been updated to version 3.0.9, containing various fixes for different platforms. (Updated by Matthias Klose; issue 8142.)
I'm not sure it explains the cause of your problem, though...
Alright, so a couple of days ago I decided to try to write a primitive wrapper for the PARI library. Ever since then I've been playing with the ctypes library, loading the DLL and accessing the functions it contains using code similar to the following:
from ctypes import *
libcyg=CDLL("<path>/cygwin1.dll") #It needs cygwin to be loaded. Not sure why.
pari=CDLL("<path>/libpari-gmp-2.4.dll")
print pari.fibo #fibonacci function
#prints something like "<_FuncPtr object at 0x00BA5828>"
So the functions are there and they can potentially be accessed, but I always receive an access violation no matter what I try. For example:
pari.fibo(5) #access violation
pari.fibo(c_int(5)) #access violation
pari.fibo.argtypes = [c_long] #setting arguments manually
pari.fibo.restype = long #set the return type
pari.fibo(byref(c_int(5))) #access violation reading 0x04 consistently
and any variation on that, including setting argtypes to receive pointers.
The PARI DLL is written in C and the Fibonacci function's signature within the library is GEN fibo(long x).
Could it be the return type that's causing these errors, as it is not a standard int or long but a GEN type, which is unique to the PARI library? Any help would be appreciated. If anyone is able to successfully load the library and use ANY function from within Python, please tell; I've been at this for hours now.
EDIT: It seems as though I was simply forgetting to initialize the library. After a quick pari.pari_init(4000000,500000) it stopped erroring. Now my problem lies in the fact that it returns a GEN object; which is fine, but whenever I try to reference the address to which it points, it's always 33554435, which I presume is still an address. I'm trying further commands and I'll update if I succeed in getting the correct value of something.
You have two problems here: one, give fibo the correct return type, and two, convert the GEN return value into the value you are looking for.
Poking around the source code a bit, you'll find that GEN is defined as a pointer to a long. Also, it looks like the library provides some functions for converting/printing GENs. I focused on GENtostr since it would probably be safer for all the PARI functions.
import ctypes
pari = ctypes.CDLL("./libpari.so.2.3.5") #I did this under linux
pari.fibo.restype = ctypes.POINTER(ctypes.c_long)
pari.GENtostr.restype = ctypes.POINTER(ctypes.c_char)
pari.pari_init(4000000,500000)
x = pari.fibo(100)
y = pari.GENtostr(x)
ctypes.string_at(y)
Results in:
'354224848179261915075'
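Since GENtostr hands back a character string, a small follow-up step (a sketch, using the variables from the code above) is to turn it into a Python integer for further arithmetic:
fib100 = int(ctypes.string_at(y))   # 354224848179261915075 as a Python integer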