I'd like to rebuild the libmem_crc32_direct function in Python.
I have used the crcmod Python package before, so I'd like to set up the CRC generator with it.
The C code looks like this:
uint32_t crc_process_chunk(uint8_t* data, uint32_t len) {
    return ~libmem_crc32_direct(data, len, 0xFFFFFFFF);
}
My Python code so far looks like this:
import crcmod

def bit_not(n, numbits=8):
    return (1 << numbits) - 1 - n

def getCRC(imageBA):
    crcGen = crcmod.mkCrcFun(0x104C11DB7, initCrc=0xFFFFFFFF)
    val = crcGen(imageBA)
    val = bit_not(val, 32)
    return val
The value returned by the Python code is not equal to the one from C, so I guess I made some error.
Any ideas?
Doesn't (1 << numbits) == 0? If this is two's complement math it should still work, since bit_not could then just be return 0 - 1 - n. However, this isn't needed, since there is an optional xorOut parameter for crcmod. I'm also thinking that since the optional rev parameter for reversed (reflected) input and output defaults to True, it needs to be set to False. I think the call to create the CRC generator should be:
crcGen = crcmod.mkCrcFun(0x104C11DB7, initCrc=0xFFFFFFFF, rev=False, xorOut=0xFFFFFFFF)
A bit tricky because of 64-bit arithmetic on the PC vs. 32-bit arithmetic on the ARM STM32F4, but finally this solution works:
import crcmod

def libmem_crc32_direct_with_xor(im, startAddr, l):
    fw = im[startAddr:startAddr + l]
    crcGen = crcmod.Crc(0x104C11DB7, initCrc=0xFFFFFFFF, rev=False)
    crcGen.update(fw)
    return (~crcGen.crcValue) & 0xFFFFFFFF  # mask to 32 bits, like the C uint32_t
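For reference, the two approaches should agree: the xorOut-based generator from the earlier answer folds the final inversion in. A quick self-check (the test data here is just a hypothetical example):
import crcmod

test_image = bytes(range(256))  # hypothetical test data

crcGenXor = crcmod.mkCrcFun(0x104C11DB7, initCrc=0xFFFFFFFF, rev=False, xorOut=0xFFFFFFFF)
assert crcGenXor(test_image) == libmem_crc32_direct_with_xor(test_image, 0, len(test_image))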
In OpenCV 3.4.2, the option to return the number of votes (accumulator value) for each line returned by HoughLines() was added. In Python this seems to be supported as well, as can be read in the Python docstring of my OpenCV installation:
"Each line is represented by a 2 or 3 element vector (ρ, θ) or (ρ, θ, votes) ."
It is also included in the docs (with some broken formatting).
However, I can find no way to get the 3-element option (ρ, θ, votes) in Python.
Here is code demonstrating the problem:
import numpy as np
import cv2
print('OpenCV should be at least 3.4.2 to test: ', cv2.__version__)
image = np.eye(10, dtype='uint8')
lines = cv2.HoughLines(image, 1, np.pi/180, 5)
print('(number of lines, 1, output vector dimension): ', lines.shape)
print(lines)
outputs
OpenCV should be at least 3.4.2 to test: 3.4.2
(number of lines, 1, output vector dimension): (3, 1, 2)
[[[ 0.        2.3212879]]
 [[ 1.        2.2340214]]
 [[-1.        2.4609141]]]
The desired behavior is an extra column with the number of votes each line received. With the vote values, more advanced options than standard thresholding can be applied; as such, this has often been requested and asked about on SE (here, here, here and here), sometimes with the equivalent for HoughCircles(). However, both the questions and answers (such as modifying the source and recompiling) predate the official addition of this feature, and therefore do not apply to the current situation.
As of vanilla OpenCV 3.4.3, you can't use this functionality from Python.
How it Works in C++
First of all, in the implementation of HoughLines, we can see code that selects the type of the output array lines:
int type = CV_32FC2;
if (lines.fixedType())
{
    type = lines.type();
    CV_CheckType(type, type == CV_32FC2 || type == CV_32FC3, "Wrong type of output lines");
}
We can then see this parameter used in the implementation of HoughLinesStandard when populating lines:
if (type == CV_32FC2)
{
    _lines.at<Vec2f>(i) = Vec2f(line.rho, line.angle);
}
else
{
    CV_DbgAssert(type == CV_32FC3);
    _lines.at<Vec3f>(i) = Vec3f(line.rho, line.angle, (float)accum[idx]);
}
Similar code can be seen in HoughLinesSDiv.
Based on this, we need to pass in an _OutputArray that is fixed type and stores 32-bit floats in 3 channels. How do we make a fixed-type (but not fixed-size, since the algorithm needs to be able to resize it) _OutputArray? Let's look at the implementation again:
A generic cv::Mat is not fixed type, and neither is cv::UMat
One option is std::vector<cv::Vec3f>
Another option is cv::Mat3f (that's a cv::Mat_<cv::Vec3f>)
Sample Code:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image(cv::Mat::eye(10, 10, CV_8UC1) * 255);

    cv::Mat2f lines2;
    cv::HoughLines(image, lines2, 1, CV_PI / 180, 4); // runs the actual detection
    std::cout << lines2 << "\n";

    cv::Mat3f lines3;
    cv::HoughLines(image, lines3, 1, CV_PI / 180, 4); // runs the actual detection
    std::cout << lines3 << "\n";

    return 0;
}
Console Output:
[0, 2.3212879;
1, 2.2340214;
-1, 2.4609141]
[0, 2.3212879, 10;
1, 2.2340214, 6;
-1, 2.4609141, 6]
How the Python Wrapper Works
Let's look at the autogenerated code wrapping the HoughLines function:
static PyObject* pyopencv_cv_HoughLines(PyObject* , PyObject* args, PyObject* kw)
{
    using namespace cv;

    {
        PyObject* pyobj_image = NULL;
        Mat image;
        PyObject* pyobj_lines = NULL;
        Mat lines;
        double rho=0;
        double theta=0;
        int threshold=0;
        double srn=0;
        double stn=0;
        double min_theta=0;
        double max_theta=CV_PI;

        const char* keywords[] = { "image", "rho", "theta", "threshold", "lines", "srn", "stn", "min_theta", "max_theta", NULL };
        if( PyArg_ParseTupleAndKeywords(args, kw, "Oddi|Odddd:HoughLines", (char**)keywords, &pyobj_image, &rho, &theta, &threshold, &pyobj_lines, &srn, &stn, &min_theta, &max_theta) &&
            pyopencv_to(pyobj_image, image, ArgInfo("image", 0)) &&
            pyopencv_to(pyobj_lines, lines, ArgInfo("lines", 1)) )
        {
            ERRWRAP2(cv::HoughLines(image, lines, rho, theta, threshold, srn, stn, min_theta, max_theta));
            return pyopencv_from(lines);
        }
    }
    PyErr_Clear();

    // Similar snippet handling UMat...

    return NULL;
}
To summarize: it tries to convert the object passed in the lines parameter to a cv::Mat, and then calls cv::HoughLines with that cv::Mat as the output parameter. (If this fails, it tries the same thing with cv::UMat.) Unfortunately, this means that there is no way to give cv::HoughLines a fixed-type lines, so as of 3.4.3 this functionality is inaccessible from Python.
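You can verify this from the Python side: even pre-allocating a 3-channel float32 array and passing it via the lines parameter does not help, since the wrapper converts it into a plain (non-fixed-type) cv::Mat. A small sketch, reusing the image from the question:
import numpy as np
import cv2

image = np.eye(10, dtype='uint8')
preallocated = np.zeros((3, 1, 3), dtype=np.float32)  # hoped-for (rho, theta, votes) layout
lines = cv2.HoughLines(image, 1, np.pi / 180, 5, lines=preallocated)
print(lines.shape)  # still (3, 1, 2): the type of `preallocated` is ignored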
Solutions
The only solutions, as far as I can see, involve modifying the OpenCV source code, and rebuilding.
Quick Hack
This is trivial: edit the implementation of cv::HoughLines and change the default type to CV_32FC3:
int type = CV_32FC3;
However, this means that you will always get the votes (which also means that the OpenCL optimization, if present, won't be used).
Better Patch
Add an optional boolean parameter return_votes with default value false. Modify the code such that when return_votes is true, the type is forced to CV_32FC3.
Header:
CV_EXPORTS_W void HoughLines( InputArray image, OutputArray lines,
                              double rho, double theta, int threshold,
                              double srn = 0, double stn = 0,
                              double min_theta = 0, double max_theta = CV_PI,
                              bool return_votes = false );
Implementation:
void HoughLines( InputArray _image, OutputArray lines,
                 double rho, double theta, int threshold,
                 double srn, double stn, double min_theta, double max_theta,
                 bool return_votes )
{
    CV_INSTRUMENT_REGION()

    int type = CV_32FC2;
    if (return_votes)
    {
        type = CV_32FC3;
    }
    else if (lines.fixedType())
    {
        type = lines.type();
        CV_CheckType(type, type == CV_32FC2 || type == CV_32FC3, "Wrong type of output lines");
    }
    // the rest...
There is a new Python binding as of OpenCV 4.5.1:
Docs: cv.HoughLinesWithAccumulator
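With that binding, the votes come back directly as a third channel. A minimal sketch, assuming OpenCV >= 4.5.1 and reusing the image from the question:
import numpy as np
import cv2

image = np.eye(10, dtype='uint8')
lines = cv2.HoughLinesWithAccumulator(image, 1, np.pi / 180, 5)
print(lines.shape)  # (number of lines, 1, 3)
print(lines)        # each row is (rho, theta, votes)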
I'm trying to understand the cv2.bitwise_and function of opencv-python, so I tried it like this:
import cv2
cv2.bitwise_and(1,1)
The above code returns:
array([[1.],
       [0.],
       [0.],
       [0.]])
I don't understand why it returns this.
The documentation says:
dst(I) = src1(I) ∧ src2(I) if mask(I) ≠ 0
According to this, the output should be the single value 1. Where am I going wrong?
The documentation says clearly that the function performs the operation dst(I) = src1(I) ∧ src2(I) if mask(I) ≠ 0 when the inputs are two arrays of the same size.
So try:
import numpy as np # OpenCV works with numpy arrays
import cv2
a = np.uint8([1])
b = np.uint8([1])
cv2.bitwise_and(a, b)
That code returns:
array([[1]], dtype=uint8)
That is an array containing the single number 1.
The documentation also mentions that the operation can be done with an array and a scalar, but not with two scalars, so the input cv2.bitwise_and(1,1) is not correct.
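To see the array-and-scalar case from the docs in action, keep one operand a numpy array; the scalar is then applied per element (here, with a single-channel array). For example:
import numpy as np
import cv2

a = np.uint8([[5, 3]])
print(cv2.bitwise_and(a, 1))  # [[1 1]], i.e. 5 & 1 and 3 & 1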
The documentation is a bit vague in this aspect, and it will take some digging through both source, as well as docs to properly explain what's happening.
First of all -- scalars. In context of data types, we have a cv::Scalar, which is actually a specialization of template cv::Scalar_. It represents a 4-element vector, and derives from cv::Vec -- a template representing a fixed size vector, which is again a special case of cv::Matx, a class representing small fixed size matrices.
That's scalar the data type; however, in the context of bitwise_and (and related functions), the concept of what is and isn't a scalar is much looser -- the function is in fact not aware that you gave it an instance of cv::Scalar.
If you look at the signature of the function, you'll notice that the inputs are InputArrays. So the inputs are always arrays, but it's possible that some of their properties differ (kind, element type, size, dimensionality, etc.).
The specific check in the code verifies that the size, type and kind match. If that's the case (and in your scenario it is), the operation dst(I) = src1(I) ∧ src2(I) if mask(I) ≠ 0 runs.
Otherwise it will check whether one of the input arrays represents a scalar. It uses function checkScalar to do that, and the return statement says most of it:
return sz == Size(1, 1)
|| sz == Size(1, cn) || sz == Size(cn, 1)
|| (sz == Size(1, 4) && sc.type() == CV_64F && cn <= 4);
Anything that has size 1 x 1
Anything that has size 1 x cn or cn x 1 (where cn is the number of channels of the other input array)
Anything that has size 1 x 4 with 64-bit floating point elements, but only when the other input array has 4 or fewer channels
The last case matches both the default cv::Scalar (which, as we have seen earlier, is a cv::Matx<double,4,1>), as well as cv::Mat(4,1,CV_64F).
As an intermission, let's test some of what we learned above.
Code:
cv::Scalar foo(1), bar(1);
cv::Mat result;
cv::bitwise_and(foo, bar, result);
std::cout << result << '\n';
std::cout << "size : " << result.size() << '\n';
std::cout << "type==CV_64FC1 : " << (result.type() == CV_64FC1 ? "yes" : "no") << '\n';
Output:
[1;
0;
0;
0]
size : [1 x 4]
type==CV_64FC1 : yes
Having covered the underlying C++ API, let's look at the Python bindings. The generator that creates the wrappers for the Python API is fairly complex, so let's skip it and instead inspect a relevant snippet of what it generates for bitwise_and:
using namespace cv;

{
    PyObject* pyobj_src1 = NULL;
    Mat src1;
    PyObject* pyobj_src2 = NULL;
    Mat src2;
    PyObject* pyobj_dst = NULL;
    Mat dst;
    PyObject* pyobj_mask = NULL;
    Mat mask;

    const char* keywords[] = { "src1", "src2", "dst", "mask", NULL };
    if( PyArg_ParseTupleAndKeywords(args, kw, "OO|OO:bitwise_and", (char**)keywords, &pyobj_src1, &pyobj_src2, &pyobj_dst, &pyobj_mask) &&
        pyopencv_to(pyobj_src1, src1, ArgInfo("src1", 0)) &&
        pyopencv_to(pyobj_src2, src2, ArgInfo("src2", 0)) &&
        pyopencv_to(pyobj_dst, dst, ArgInfo("dst", 1)) &&
        pyopencv_to(pyobj_mask, mask, ArgInfo("mask", 0)) )
    {
        ERRWRAP2(cv::bitwise_and(src1, src2, dst, mask));
        return pyopencv_from(dst);
    }
}
PyErr_Clear();
We can see that parameters that correspond to InputArray or OutputArray are loaded into a cv::Mat instance. Let's look at the part of pyopencv_to that corresponds to your scenario:
if( PyInt_Check(o) )
{
    double v[] = {static_cast<double>(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};
    m = Mat(4, 1, CV_64F, v).clone();
    return true;
}
The result is a cv::Mat(4, 1, CV_64F) (recall from earlier that this fits the test for a scalar) containing the input integer cast to double, with the remaining 3 positions padded with zeros.
Since no destination is provided, a Mat will be allocated automatically, of the same size and type as the inputs. On return to Python, the Mat becomes a numpy array.
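We can reproduce that conversion from Python to confirm the explanation: building the 4 x 1 CV_64F matrix by hand gives the same result as passing the bare integers.
import numpy as np
import cv2

# What pyopencv_to builds from the Python int 1: a 4x1 CV_64F Mat.
scalar_as_mat = np.array([[1.0], [0.0], [0.0], [0.0]])

print(cv2.bitwise_and(1, 1))                          # [[1.] [0.] [0.] [0.]]
print(cv2.bitwise_and(scalar_as_mat, scalar_as_mat))  # identical output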
I'm running into an issue while trying to pass a double array from C++ to Python. I run a script to create a binary file with data, then read that data back into an array and am trying to pass the array to Python. I've followed advice here: how to return array from c function to python using ctypes among other pages I have found through google. I can write a generic example that works fine (like a similar array to the link above), but when I try to pass the array read from a binary file (code below), the program crashes with "Unhandled exception at ADDR (ucrtbase.dll) in python.exe: An invalid parameter was passed to a function that considers invalid parameters fatal." So, I'm wondering if anyone has any insight.
A word on methodology:
Right now, I'm just trying to learn - that's why I'm going through the convoluted process of saving to disk, loading, and passing to Python. Eventually, I will use this in scientific simulations where the data read from disk needs to be generated by distributed computing/a supercomputer. I would like to use Python for its ease of plotting (matplotlib) and C++ for its speed (iterative calculations, etc.).
So, on to my code. This generates the binary file:
for (int zzz = 0; zzz < arraysize; ++zzz)
{
    for (int yyy = 0; yyy < arraysize; ++yyy)
    {
        for (int xxx = 0; xxx < arraysize; ++xxx)
        {   // totalBatP returns a 3-element std::vector<double>; dblArray3_t is basically that with a few overloaded operators (+, -, etc.)
            dblArray3_t BatP = B.totalBatP({ -5 + xxx * stepsize, -5 + yyy * stepsize, -5 + zzz * stepsize }, 37);
            for (int bbb = 0; bbb < 3; ++bbb)
            {
                dataarray[loopind] = BatP[bbb];
                ++loopind;
...(end braces here)
FILE* binfile;
binfile = fopen("MBdata.bin", "wb");
fwrite(dataarray, 8, 3 * arraysize * arraysize * arraysize, binfile);
The code that reads the file:
DLLEXPORT double* readDblBin(const std::string filename, unsigned int numOfDblsToRead)
{
    char* buffer = new char[numOfDblsToRead];
    std::ifstream binFile;
    binFile.open(filename, std::ios::in | std::ios::binary);
    binFile.read(buffer, numOfDblsToRead);
    double* dataArray = (double*)buffer;
    binFile.close();
    return dataArray;
}
And the Python Code that receives the array:
import ctypes

def readBDataWrapper(filename, numDblsToRead):
    fileIO = ctypes.CDLL('./fileIO.dll')
    fileIO.readDblBin.argtypes = (ctypes.c_char_p, ctypes.c_uint)
    fileIO.readDblBin.restype = ctypes.POINTER(ctypes.c_double)
    return fileIO.readDblBin(filename, numDblsToRead)
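For reference, a usage sketch for the wrapper above, with two caveats worth flagging as assumptions: ctypes can only pass a C char*, so the exported function should take const char* rather than std::string for the call to be well-defined, and on Python 3 a c_char_p argument must be a bytes object. The count of 3000 is just an example:
import ctypes
import numpy as np

n = 3000  # hypothetical number of doubles stored in the file
ptr = readBDataWrapper(b'MBdata.bin', n)       # bytes, not str, for c_char_p
data = np.ctypeslib.as_array(ptr, shape=(n,))  # numpy view over the returned C buffer
print(data[:5])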
One possible problem is here
char* buffer = new char[numOfDblsToRead];
Here you allocate numOfDblsToRead bytes. You probably want numOfDblsToRead * sizeof(double).
The same applies to reading from the file: you only read numOfDblsToRead bytes.
I figured it out - at least it appears to be working. The problem was with the binary files generated by the first code block. I swapped the C-style writing for std::ofstream. My assumption is that I was somehow using the C-style write code incorrectly. Anyway, it appears to work now.
Replaced:
FILE* binfile;
binfile = fopen("MBdata.bin", "wb");
fwrite(dataarray, 8, 3 * arraysize * arraysize * arraysize, binfile);
With:
std::ofstream binfile;
binfile.open("MBdata.bin", std::ios::binary | std::ios::out);
binfile.write(reinterpret_cast<const char*>(dataarray), std::streamsize(totaliter * sizeof(double)));
binfile.close();
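As an extra sanity check from the Python side, numpy can read the raw doubles back directly (assuming the file contains nothing but native-endian float64 values):
import numpy as np

data = np.fromfile('MBdata.bin', dtype=np.float64)
print(data.size)  # should equal 3 * arraysize**3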
I want to extract data from a file whose information is stored in big-endian and is always unsigned. How does the "cast" from unsigned int to int affect the actual decimal value? Am I correct that the leftmost bit decides whether the value is positive or negative?
I want to parse that file format with Python, and reading an unsigned value is easy:
def toU32(bits):
    return ord(bits[0]) << 24 | ord(bits[1]) << 16 | ord(bits[2]) << 8 | ord(bits[3])
but what would the corresponding toS32 function look like?
Thanks for the info about the struct module. But I am still interested in the solution to my actual question.
I would use struct.
import struct

def toU32(bits):
    return struct.unpack_from(">I", bits)[0]

def toS32(bits):
    return struct.unpack_from(">i", bits)[0]
The format string, ">I", means read a big endian, ">", unsigned integer, "I", from the string bits. For signed integers you can use ">i".
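A quick demonstration of both helpers on the same four bytes (using a Python 3 bytes literal):
print(toU32(b'\xff\xff\xff\xff'))  # 4294967295
print(toS32(b'\xff\xff\xff\xff'))  # -1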
EDIT
I had to look at another StackOverflow answer to remember how to "convert" an unsigned integer to a signed one in Python. Though it is less of a conversion and more of a reinterpretation of the bits.
import struct

def toU32(bits):
    return ord(bits[0]) << 24 | ord(bits[1]) << 16 | ord(bits[2]) << 8 | ord(bits[3])

def toS32(bits):
    candidate = toU32(bits)
    if (candidate >> 31):  # is the sign bit set?
        return (-0x80000000 + (candidate & 0x7fffffff))  # "cast" it to signed
    return candidate

for x in range(-5, 5):
    bits = struct.pack(">i", x)
    print toU32(bits)
    print toS32(bits)
I would use the struct module's pack and unpack methods.
See Endianness of integers in Python for some examples.
The non-conditional version of toS32(bits) could be something like:
def toS32(bits):
    decoded = toU32(bits)
    return -(decoded & 0x80000000) + (decoded & 0x7fffffff)
You can pre-compute the mask for any other bit size too of course.
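Following that idea, a generic version parameterized on the bit width might look like this (a sketch; to_signed is my name, not a standard helper):
def to_signed(value, numbits):
    sign_bit = 1 << (numbits - 1)
    return -(value & sign_bit) + (value & (sign_bit - 1))

print(to_signed(0xFFFFFFFF, 32))  # -1
print(to_signed(0x7FFF, 16))      # 32767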