I wrote some Python code to estimate a parameter by maximum likelihood, using the Newton-Raphson method to solve the problem. However, I need to convert it to C++ and integrate it with the rest of the software.
I am not familiar with C++. How can I convert the following block of Python to C++?
import numpy as np

x = np.array([-1.94, 0.59, -5.98, -0.08, -0.77])
start = np.median(x)
xhat = start
max_iter = 20
epsilon = 0.001

def first_derivative(xhat):
    fd = 2 * sum((x - xhat) / (1 + (x - xhat) ** 2))
    return fd

def second_derivative(xhat):
    sd = 2 * sum((((x - xhat) ** 2) - 1) / ((1 + (x - xhat) ** 2) ** 2))
    return sd

def raphson_newton(xhat):
    fdc = first_derivative(xhat)
    sdc = second_derivative(xhat)
    i = 0
    # Iterate until we find the solution within the desired epsilon
    while abs(fdc) > epsilon and i < max_iter:
        i = i + 1
        xhat = xhat - (fdc / sdc)
        fdc = first_derivative(xhat)
        sdc = second_derivative(xhat)
    print('The ML estimate of xhat is', xhat)
    return xhat

raphson_newton(xhat)
Given the toy example above, xhat should be around -0.5343967677954681.
I have tried the following, but it's not converging to the same value. I'm not sure where I'm going wrong.
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;
#include <cmath>

double max_iter = 100;
double start = -0.77;
double xhat = start;
vector<double> y = {-1.94, 0.59, -5.98, -0.08, -0.77};

//Derivative of the function
double first(double y)
{
    double tfd = (y - xhat) / (1 + pow(y - xhat, 2));
    double fd = 2 * tfd;
    return fd;
}

// Second derivative of the function
double second(double y)
{
    double tsd = (pow(y - xhat, 2) - 1) / pow(1 + pow(y - xhat, 2), 2);
    double sd = 2 * tsd;
    return sd;
}

double newton_raphson(double xhat)
{
    double tolerance = 0.001;
    double x1;
    int i = 0;
    // Iterate until we find a root within the desired tolerance
    do
    {
        double x1 = xhat - first(xhat) / second(xhat);
        xhat = x1;
        max_iter = i++;
    } while (i < max_iter);
    return double(xhat);
}

int main()
{
    double xhat = newton_raphson(1);
    cout << "xhat: " << xhat << endl;
    return 0;
}
There are several issues in your C++ code:
In first() and second() you need to iterate over the elements of the vector y (or x, as it is named in your Python code and therefore also in my code below).
In newton_raphson(), you are modifying max_iter, and there is no check whether the result is already within the tolerance. In general, the code can be made to resemble the Python code more closely.
The iteration is started with the value 1 instead of start, and the global variable xhat is never used.
There is still room for improvement, but the following should work:
#include <cmath>
#include <iostream>
#include <vector>

unsigned max_iter = 100;
std::vector<double> x = {-1.94, 0.59, -5.98, -0.08, -0.77};
double start = -0.77;

// First derivative of the function
double first(double xhat)
{
    double tfd = 0.0;
    for (auto &xi : x) tfd += (xi - xhat) / (1 + std::pow(xi - xhat, 2));
    double fd = 2 * tfd;
    return fd;
}

// Second derivative of the function
double second(double xhat)
{
    double tsd = 0.0;
    for (auto &xi : x) tsd += (std::pow(xi - xhat, 2) - 1) / std::pow(1 + std::pow(xi - xhat, 2), 2);
    double sd = 2 * tsd;
    return sd;
}

double newton_raphson(double xhat)
{
    double fdc = first(xhat);
    double sdc = second(xhat);
    double tolerance = 0.001;
    unsigned i = 0;
    // Iterate until we find a root within the desired tolerance
    while (i < max_iter && std::abs(fdc) > tolerance)
    {
        i++;
        xhat -= fdc / sdc;
        fdc = first(xhat);
        sdc = second(xhat); // re-evaluate the second derivative at the new point as well
    }
    return xhat;
}

int main()
{
    double xhat = newton_raphson(start);
    std::cout << "xhat: " << xhat << std::endl;
    return 0;
}
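For an independent sanity check of both implementations, a minimal sketch using SciPy (assuming it is available in your Python environment) minimizes the negative Cauchy log-likelihood whose stationary point the Newton iteration is solving for:

import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([-1.94, 0.59, -5.98, -0.08, -0.77])

def neg_log_likelihood(theta):
    # Negative Cauchy log-likelihood (up to an additive constant); its
    # stationary-point condition is exactly first_derivative(xhat) == 0 above.
    return np.sum(np.log(1 + (x - theta) ** 2))

result = minimize_scalar(neg_log_likelihood, bracket=(-1.0, 0.0))
print(result.x)  # should be close to -0.5344, the value quoted in the question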
I've been working for some days now on a DirectX 11 version of the Mandelbrot set. What I've done so far is create a quad with a texture on it. I can color the points with a pixel shader, but for some reason the Mandelbrot set computed in the pixel shader does not return the expected result. I tested the logic in plain C++ code and I get the same erroneous result. Any idea what's wrong with the code? I have a proper version working in Python and I just replicated the code, but it seems something is missing.
The width of the set is 2.5 (which stretches the image a bit). It assumes a 1024x960 window and a maximum of 1000 iterations. I compiled with Shader Model 5.0. It starts with the default view of the set with
RealStart = -2.0;
ImagStart = -1.25;
Passed via the constant buffer
cbuffer cBuffer
{
    double RealStart; // equals -2.0 from the default view of the set
    double ImagStart; // equals -1.25 from the default view of the set
};
// Pixel Shader
float4 main(float4 position : SV_POSITION) : SV_TARGET
{
    double real, imag;
    double real2, imag2;
    int ite = 0;
    float4 CalcColor = { 1.0f, 1.0f, 1.0f, 1.0f };

    // position is the position of the pixel from 1.0f to 0.0f
    real = RealStart + (double) position.x / 1024 * 2.5;
    imag = ImagStart + (double) position.y / 960 * 2.5;

    for (int i = 0; i < 1000; i++)
    {
        // breaking down the complex number by its constituents
        real2 = real * real;
        imag2 = imag * imag;

        if (real2 + imag2 > 4.0)
        {
            break;
        }
        else
        {
            imag = 2 * real * imag + ImagStart;
            real = real2 - imag2 + RealStart;
            ite++;
        }
    }

    CalcColor[0] = (float) (ite % 333) / 333;
    CalcColor[1] = (float) (ite % 666) / 666;
    CalcColor[2] = (float) (ite % 1000) / 1000;

    return CalcColor;
}
Edit: Python version
def Mandelbrot(creal, cimag, maxNumberOfIterations):
    real = creal
    imag = cimag
    for numberOfIterations in range(maxNumberOfIterations):
        real2 = real * real
        imag2 = imag * imag
        if real2 + imag2 > 4.0:
            return numberOfIterations
        imag = 2 * real * imag + cimag
        real = real2 - imag2 + creal
    return maxNumberOfIterations
The creal and cimag values are created like this and then just looped over:
realAxis = np.linspace(realStart, realStart + width, dim)
imagAxis = np.linspace(imagStart, imagStart + width, dim)
It returns the number of iterations into a two-dimensional array, which is plotted to draw the Mandelbrot set.
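For reference, the surrounding driver loop is presumably something like the following (a hypothetical sketch; it reuses the Mandelbrot() function and the axis variables above):

import numpy as np

realStart, imagStart, width, dim = -2.0, -1.25, 2.5, 1024
realAxis = np.linspace(realStart, realStart + width, dim)
imagAxis = np.linspace(imagStart, imagStart + width, dim)

# Each pixel gets the iteration count for its own constant c = creal + cimag*1j
image = np.array([[Mandelbrot(creal, cimag, 1000)
                   for creal in realAxis]
                  for cimag in imagAxis])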
The error was that ImagStart and RealStart in the else branch need to be scaled as well. The code in the shader has been modified as follows:
cbuffer cBuffer
{
    double2 C;
    float2 Param;
    float MaxIt;
};

// Pixel Shader
float4 main(float4 position : SV_POSITION, float2 texcoord : TEXCOORD) : SV_TARGET
{
    double real, imag;
    double real2, imag2;
    uint ite = 0;
    float4 CalcColor = { 1.0f, 1.0f, 1.0f, 1.0f };

    real = C.x + ((double) texcoord.x - 0.5) * 2.0 * 2.5;
    imag = C.y + ((double) texcoord.y - 0.5) * 2.0 * 2.5;

    for (int i = 0; i < 100; i++)
    {
        real2 = real * real;
        imag2 = imag * imag;

        if (real2 + imag2 > 4.0)
        {
            break;
        }
        else
        {
            imag = 2 * real * imag + C.y + ((double) texcoord.y - 0.5) * 2.0 * 2.5;
            real = real2 - imag2 + C.x + ((double) texcoord.x - 0.5) * 2.0 * 2.5;
            ite++;
        }
    }

    if (ite > 100)
        ite = 100;

    CalcColor[0] = (float)(ite % 33) / 33;
    CalcColor[1] = (float)(ite % 66) / 66;
    CalcColor[2] = (float)(ite % 100) / 100;

    return CalcColor;
}
The Mandelbrot set is now drawn correctly.
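In other words, the original bug was equivalent to writing the Python update as imag = 2 * real * imag + imagStart, i.e. adding the unscaled corner of the view instead of the per-pixel constant cimag. The rule, sketched in Python using the fixed shader's scaling (texcoord in [0, 1]; the names here are assumptions, not taken from the shader):

def pixel_constant(texcoord_x, texcoord_y, cx, cy, scale=2.5):
    # Map texture coordinates in [0, 1] to the complex plane around (cx, cy).
    # This value is what must be added in *every* iteration for that pixel.
    creal = cx + (texcoord_x - 0.5) * 2.0 * scale
    cimag = cy + (texcoord_y - 0.5) * 2.0 * scale
    return creal, cimag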
I would like to reimplement the Qt C++ "Surface" example (Q3DSurface) in PySide2, but QSurfaceDataArray and QSurfaceDataRow are not available.
void SurfaceGraph::fillSqrtSinProxy()
{
    float stepX = (sampleMax - sampleMin) / float(sampleCountX - 1);
    float stepZ = (sampleMax - sampleMin) / float(sampleCountZ - 1);

    QSurfaceDataArray *dataArray = new QSurfaceDataArray;
    dataArray->reserve(sampleCountZ);
    for (int i = 0; i < sampleCountZ; i++) {
        QSurfaceDataRow *newRow = new QSurfaceDataRow(sampleCountX);
        // Keep values within range bounds, since just adding step can cause minor drift due
        // to the rounding errors.
        float z = qMin(sampleMax, (i * stepZ + sampleMin));
        int index = 0;
        for (int j = 0; j < sampleCountX; j++) {
            float x = qMin(sampleMax, (j * stepX + sampleMin));
            float R = qSqrt(z * z + x * x) + 0.01f;
            float y = (qSin(R) / R + 0.24f) * 1.61f;
            (*newRow)[index++].setPosition(QVector3D(x, y, z));
        }
        *dataArray << newRow;
    }
    m_sqrtSinProxy->resetArray(dataArray);
}
Is there a way to use a QVector<QSurfaceDataItem> in PySide2?
from PySide2.QtDataVisualization import QtDataVisualization as QDV
data_item = QDV.QSurfaceDataItem()
data_item.setPosition(QVector3D(x, y, z))
The QSurfaceDataItem is available but I can't pass the objects to QSurfaceDataProxy without QVector.
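PySide2 generally maps QVector-style container parameters to plain Python lists, so one thing to try is passing a list of lists of QSurfaceDataItem directly to resetArray(). An untested sketch under that assumption:

import math
from PySide2.QtGui import QVector3D
from PySide2.QtDataVisualization import QtDataVisualization as QDV

sample_count_x = sample_count_z = 10  # hypothetical grid size

proxy = QDV.QSurfaceDataProxy()

data_array = []                       # stands in for QSurfaceDataArray
for i in range(sample_count_z):
    row = []                          # stands in for QSurfaceDataRow
    for j in range(sample_count_x):
        r = math.sqrt(i * i + j * j) + 0.01
        y = (math.sin(r) / r + 0.24) * 1.61
        item = QDV.QSurfaceDataItem()
        item.setPosition(QVector3D(float(j), y, float(i)))
        row.append(item)
    data_array.append(row)

proxy.resetArray(data_array)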
Using the Hough transform, I am trying to detect boxes and give each a distinct color.
So far my understanding is that a box is made of horizontal and vertical lines.
My code is:
lines = cv2.HoughLines(edges, 1, np.pi/180, 50)

# The below for loop runs till r and theta values
# are in the range of the 2d array
for r, theta in lines[0]:
    # Stores the value of cos(theta) in a
    a = np.cos(theta)
    # Stores the value of sin(theta) in b
    b = np.sin(theta)
    # x0 stores the value rcos(theta)
    x0 = a*r
    # y0 stores the value rsin(theta)
    y0 = b*r
    # x1 stores the rounded off value of (rcos(theta)-1000sin(theta))
    x1 = int(x0 + 1000*(-b))
    # y1 stores the rounded off value of (rsin(theta)+1000cos(theta))
    y1 = int(y0 + 1000*(a))
    # x2 stores the rounded off value of (rcos(theta)+1000sin(theta))
    x2 = int(x0 - 1000*(-b))
    # y2 stores the rounded off value of (rsin(theta)-1000cos(theta))
    y2 = int(y0 - 1000*(a))
    # cv2.line draws a line in img from the point (x1,y1) to (x2,y2).
    # (255,255,255) denotes the colour of the line.
    cv2.line(img, (x1,y1), (x2,y2), (255,255,255), 2)
What could I do so that the boxes can be colored or identified?
You should do vertical and horizontal line detection separately so that you can make each more specific.
Go through all your lines and compile a list of intersections between the horizontal and vertical line combinations.
Now you have a list of 2D points that, if you draw them, should sit pretty much on the corners of the boxes. The final step is to collect those points into meaningful sets.
To get those sets, I would start with the point nearest the origin (just for the sake of starting somewhere). I would look through all the other points for the closest point that has a greater x but is within ±5 (or some configurable range) in y of the starting point. Then do the same in the y direction. You now have the bottom corner of the box, which you could just complete and start your OCR on; but to be more robust, find the final corner as well.
Once all 4 corners are found, remove all of those points from your intersection array and add whatever you use to denote box locations into a new array. Rinse and repeat, as a different point will now be nearest the origin. Without actually testing this, I think it will choke on the K box (or need some conditional improvement for missing walls), but it should be pretty generic to variable box shapes and sizes.
Edit 1: In testing, I found that it will probably be difficult to separate the close corners of two adjacent boxes. I think a more generic and robust solution would be, after you get the collisions, to do a point-clustering operation at about 1/3 of the minimum box side length. This will average corners together with their nearest neighbors. It slightly changes the strategy, as you will need to use every corner twice (for the box to its left and the box to its right) except for end points.
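In Python, that clustering step could look roughly like this (a hypothetical helper, mirroring the clusterPts() function in the C++ below: move a candidate center to the mean of all points within the radius until it stabilizes, then remove those points):

import numpy as np

def cluster_points(pts, radius):
    # Collapse groups of nearby points into their converged mean positions.
    pts = [np.asarray(p, dtype=float) for p in pts]
    centers = []
    while pts:
        center = pts[0]
        while True:
            members = [p for p in pts if np.sum((p - center) ** 2) <= radius ** 2]
            new_center = np.mean(members, axis=0)
            if np.allclose(new_center, center):
                break
            center = new_center
        centers.append(center)
        pts = [p for p in pts if np.sum((p - center) ** 2) > radius ** 2]
    return centers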
Wrote up some test code and it is functional, here are the outputs:
Code below; sorry for C++ and not at all optimized. Happy Friday :)
// CPP libraries
#include <stdio.h>
#include <mutex>
#include <thread>

// Included libraries
// Note: these headers have to be before any opencv due to a namespace collision (could probably be fixed)
#include <opencv2/opencv.hpp>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

// Finds the intersection of two lines, or returns false.
// The lines are defined by (o1, p1) and (o2, p2).
// https://stackoverflow.com/questions/7446126/opencv-2d-line-intersection-helper-function
bool intersection(Point2f o1, Point2f p1, Point2f o2, Point2f p2, Point2f &r)
{
    Point2f x = o2 - o1;
    Point2f d1 = p1 - o1;
    Point2f d2 = p2 - o2;

    float cross = d1.x * d2.y - d1.y * d2.x;
    if (abs(cross) < /*EPS*/1e-8)
        return false;

    double t1 = (x.x * d2.y - x.y * d2.x) / cross;
    r = o1 + d1 * t1;
    return true;
}
std::vector<Point2f> clusterPts(std::vector<Point2f> inputPts, double clusterRadius_Squared)
{
    std::vector<Point2f> outputPts = std::vector<Point2f>();
    while (inputPts.size() > 0)
    {
        Point2f clusterCenter = inputPts[0];
        while (true)
        {
            Point2f newClustCenter = Point2f(0, 0);
            int averagingCount = 0;
            std::vector<int> clusterIndicies = std::vector<int>();
            for (int i = 0; i < inputPts.size(); i++)
            {
                if (clusterRadius_Squared >= pow(inputPts[i].x - clusterCenter.x, 2) + pow(inputPts[i].y - clusterCenter.y, 2))
                {
                    newClustCenter.x += inputPts[i].x;
                    newClustCenter.y += inputPts[i].y;
                    averagingCount += 1;
                    clusterIndicies.push_back(i);
                }
            }
            newClustCenter = newClustCenter / (double)averagingCount;

            if (newClustCenter == clusterCenter)
            {
                //remove all points inside cluster from inputPts, stash cluster center, and break inner while loop
                std::vector<Point2f> remainingPts = std::vector<Point2f>();
                for (int i = 0; i < inputPts.size(); i++)
                {
                    if (std::find(clusterIndicies.begin(), clusterIndicies.end(), i) == clusterIndicies.end())
                    {
                        remainingPts.push_back(inputPts[i]);
                    }
                }
                inputPts = remainingPts;
                outputPts.push_back(clusterCenter);
                break;
            }
            else
            {
                clusterCenter = newClustCenter;
            }
        }
    }
    return outputPts;
}
std::vector<Rect> findBoxes(std::vector<Point2f> corners, bool shrinkBoxes = false, int boxSideLength_Guess = 50)
{
    std::vector<Rect> outBoxes = std::vector<Rect>();
    int approxBoxSize = 1000 * boxSideLength_Guess;
    while (corners.size() > 4)
    {
        //find point above or below (these points will be removed from array after used)
        int secondPtIndex = -1;
        for (int i = 1; i < corners.size(); i++)
        {
            if (abs(corners[i].x - corners[0].x) < boxSideLength_Guess / 2.0)
            {
                secondPtIndex = i;
                break;
            }
        }
        if (secondPtIndex == -1)
        {
            std::cout << "bad box point tossed" << std::endl;
            corners.erase(corners.begin() + 0);
            continue;
        }

        //now search for closest same level point on either side
        int thirdIndexRight = -1;
        int thirdIndexLeft = -1;
        double minDistRight = approxBoxSize;
        double minDistLeft = -approxBoxSize;
        for (int i = 2; i < corners.size(); i++)
        {
            if (abs(corners[i].y - corners[secondPtIndex].y) < boxSideLength_Guess / 2.0)
            {
                double dist = corners[i].x - corners[secondPtIndex].x;
                if (dist < 0 && dist > minDistLeft) //check left
                {
                    minDistLeft = dist;
                    thirdIndexLeft = i;
                }
                else if (dist > 0 && dist < minDistRight) //check right
                {
                    minDistRight = dist;
                    thirdIndexRight = i;
                }
            }
        }

        if (thirdIndexLeft != -1) { approxBoxSize = 1.5 * abs(minDistLeft); }
        if (thirdIndexRight != -1) { approxBoxSize = 1.5 * minDistRight; }

        int fourthIndexRight = -1;
        int fourthIndexLeft = -1;
        for (int i = 1; i < corners.size(); i++)
        {
            if (i == thirdIndexLeft || i == thirdIndexRight) { continue; }
            if (thirdIndexLeft != -1 && abs(corners[i].x - corners[thirdIndexLeft].x) < boxSideLength_Guess / 2.0)
            { fourthIndexLeft = i; }
            if (thirdIndexRight != -1 && abs(corners[i].x - corners[thirdIndexRight].x) < boxSideLength_Guess / 2.0)
            { fourthIndexRight = i; }
        }

        if (!shrinkBoxes)
        {
            if (fourthIndexRight != -1)
            {
                outBoxes.push_back(Rect(corners[0], corners[thirdIndexRight]));
            }
            if (fourthIndexLeft != -1)
            {
                outBoxes.push_back(Rect(corners[0], corners[thirdIndexLeft]));
            }
        }
        else
        {
            if (fourthIndexRight != -1)
            {
                outBoxes.push_back(Rect(corners[0] * 0.90 + corners[thirdIndexRight] * 0.10, corners[0] * 0.10 + corners[thirdIndexRight] * 0.90));
            }
            if (fourthIndexLeft != -1)
            {
                outBoxes.push_back(Rect(corners[0] * 0.90 + corners[thirdIndexLeft] * 0.10, corners[0] * 0.10 + corners[thirdIndexLeft] * 0.90));
            }
        }

        corners.erase(corners.begin() + secondPtIndex);
        corners.erase(corners.begin() + 0);
    }
    std::cout << approxBoxSize << std::endl;
    return outBoxes;
}
int main(int argc, char** argv)
{
    Mat image = imread("../../resources/images/boxPic.png", CV_LOAD_IMAGE_GRAYSCALE);
    imshow("source", image);
    //namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display.
    //imshow("Display window", image); // Show our image inside it.

    Mat edges, lineOverlay, cornerOverlay, finalBoxes;
    Canny(image, edges, 50, 200, 3);
    //edges = image;
    //cvtColor(image, edges, COLOR_GRAY2BGR);
    cvtColor(image, lineOverlay, COLOR_GRAY2BGR);
    cvtColor(image, cornerOverlay, COLOR_GRAY2BGR);
    cvtColor(image, finalBoxes, COLOR_GRAY2BGR);
    std::cout << image.cols << " , " << image.rows << std::endl;

    std::vector<Vec2f> linesHorizontal;
    std::vector<Point> ptsLH;
    HoughLines(edges, linesHorizontal, 5, CV_PI / 180, 2 * edges.cols * 0.6, 0.0, 0.0, CV_PI / 4, 3 * CV_PI / 4);

    std::vector<Vec2f> linesVertical;
    std::vector<Point> ptsLV;
    HoughLines(edges, linesVertical, 5, CV_PI / 180, 2 * edges.rows * 0.6, 0, 0, -CV_PI / 32, CV_PI / 32);

    for (size_t i = 0; i < linesHorizontal.size(); i++)
    {
        float rho = linesHorizontal[i][0], theta = linesHorizontal[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a * rho, y0 = b * rho;
        pt1.x = cvRound(x0 + 1000 * (-b));
        pt1.y = cvRound(y0 + 1000 * (a));
        pt2.x = cvRound(x0 - 1000 * (-b));
        pt2.y = cvRound(y0 - 1000 * (a));
        ptsLH.push_back(pt1);
        ptsLH.push_back(pt2);
        line(lineOverlay, pt1, pt2, Scalar(0, 0, 255), 1, LINE_AA);
    }
    for (size_t i = 0; i < linesVertical.size(); i++)
    {
        float rho = linesVertical[i][0], theta = linesVertical[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a * rho, y0 = b * rho;
        pt1.x = cvRound(x0 + 1000 * (-b));
        pt1.y = cvRound(y0 + 1000 * (a));
        pt2.x = cvRound(x0 - 1000 * (-b));
        pt2.y = cvRound(y0 - 1000 * (a));
        ptsLV.push_back(pt1);
        ptsLV.push_back(pt2);
        line(lineOverlay, pt1, pt2, Scalar(0, 255, 0), 1, LINE_AA);
    }
    imshow("edged", edges);
    imshow("detected lines", lineOverlay);

    //look for collisions
    std::vector<Point2f> xPts;
    for (size_t i = 0; i < linesHorizontal.size(); i++)
    {
        for (size_t ii = 0; ii < linesVertical.size(); ii++)
        {
            Point2f xPt;
            bool intersectionExists = intersection(ptsLH[2 * i], ptsLH[2 * i + 1], ptsLV[2 * ii], ptsLV[2 * ii + 1], xPt);
            if (intersectionExists)
            {
                xPts.push_back(xPt);
            }
        }
    }
    waitKey(1000);

    std::vector<Point2f> boxCorners = clusterPts(xPts, 25 * 25);
    for (int i = 0; i < boxCorners.size(); i++)
    {
        circle(cornerOverlay, boxCorners[i], 5, Scalar(0, 255, 0), 2);
    }
    imshow("detected corners", cornerOverlay);

    //group make boxes for groups of points
    std::vector<Rect> ocrBoxes = findBoxes(boxCorners, true);
    for (int i = 0; i < ocrBoxes.size(); i++)
    {
        if (i % 3 == 0)      { rectangle(finalBoxes, ocrBoxes[i], Scalar(255, 0, 0), 2); }
        else if (i % 3 == 1) { rectangle(finalBoxes, ocrBoxes[i], Scalar(0, 255, 0), 2); }
        else if (i % 3 == 2) { rectangle(finalBoxes, ocrBoxes[i], Scalar(0, 0, 255), 2); }
    }
    imshow("detected boxes", finalBoxes);

    waitKey(0); // Wait for a keystroke in the window
    return 0;
}
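Since the question itself is in Python, the intersection helper translates fairly directly (an untested sketch using the same parametric-line math as intersection() above):

import numpy as np

def intersection(o1, p1, o2, p2):
    # Intersect the line through (o1, p1) with the line through (o2, p2);
    # returns None when the lines are (nearly) parallel.
    o1, p1, o2, p2 = (np.asarray(p, dtype=float) for p in (o1, p1, o2, p2))
    d1 = p1 - o1
    d2 = p2 - o2
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-8:
        return None
    x = o2 - o1
    t1 = (x[0] * d2[1] - x[1] * d2[0]) / cross
    return o1 + d1 * t1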
I extracted the contours of an image, which you can see here:
However, it has some noise.
How can I smooth out the noise? I did a close-up to make clearer what I mean:
Original image that I've used:
Code:
rMaskgray = cv2.imread('redmask.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)
(thresh, binRed) = cv2.threshold(rMaskgray, 50, 255, cv2.THRESH_BINARY)

Rcontours, hier_r = cv2.findContours(binRed, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
r_areas = [cv2.contourArea(c) for c in Rcontours]
max_rarea = np.max(r_areas)
CntExternalMask = np.ones(binRed.shape[:2], dtype="uint8") * 255

for c in Rcontours:
    if (cv2.contourArea(c) > max_rarea * 0.70) and (cv2.contourArea(c) < max_rarea):
        cv2.drawContours(CntExternalMask, [c], -1, 0, 1)

cv2.imwrite('contour1.jpg', CntExternalMask)
Try an upgrade to OpenCV 3.1.0. After some code adaptations for the new version as shown below, I tried it out with OpenCV version 3.1.0 and did not see any of the effects you are describing.
import cv2
import numpy as np

print(cv2.__version__)

rMaskgray = cv2.imread('5evOn.jpg', 0)
(thresh, binRed) = cv2.threshold(rMaskgray, 50, 255, cv2.THRESH_BINARY)

_, Rcontours, hier_r = cv2.findContours(binRed, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
r_areas = [cv2.contourArea(c) for c in Rcontours]
max_rarea = np.max(r_areas)
CntExternalMask = np.ones(binRed.shape[:2], dtype="uint8") * 255

for c in Rcontours:
    if (cv2.contourArea(c) > max_rarea * 0.70) and (cv2.contourArea(c) < max_rarea):
        cv2.drawContours(CntExternalMask, [c], -1, 0, 1)

cv2.imwrite('contour1.jpg', CntExternalMask)
I don't know if it is OK to provide Java code, but I implemented Gaussian smoothing for an OpenCV contour. The logic and theory are taken from here: https://www.morethantechnical.com/2012/12/07/resampling-smoothing-and-interest-points-of-curves-via-css-in-opencv-w-code/
package CurveTools;

import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;

import java.util.ArrayList;
import java.util.List;

import static org.opencv.core.CvType.CV_64F;
import static org.opencv.imgproc.Imgproc.getGaussianKernel;

class CurveSmoother {

    private double[] g, dg, d2g, gx, dx, d2x;
    private double gx1, dgx1, d2gx1;

    public double[] kappa, smoothX, smoothY;
    public double[] contourX, contourY;

    /* 1st and 2nd derivative of 1D gaussian */
    void getGaussianDerivs(double sigma, int M) {
        int L = (M - 1) / 2;
        double sigma_sq = sigma * sigma;
        double sigma_quad = sigma_sq * sigma_sq;

        dg = new double[M];
        d2g = new double[M];
        g = new double[M];

        Mat tmpG = getGaussianKernel(M, sigma, CV_64F);

        for (double i = -L; i < L + 1.0; i += 1.0) {
            int idx = (int) (i + L);
            g[idx] = tmpG.get(idx, 0)[0];

            // from http://www.cedar.buffalo.edu/~srihari/CSE555/Normal2.pdf
            dg[idx] = -i * g[idx] / sigma_sq;
            d2g[idx] = (-sigma_sq + i * i) * g[idx] / sigma_quad;
        }
    }

    /* 1st and 2nd derivative of smoothed curve point */
    void getdX(double[] x, int n, double sigma, boolean isOpen) {
        int L = (g.length - 1) / 2;

        gx1 = dgx1 = d2gx1 = 0.0;
        for (int k = -L; k < L + 1; k++) {
            double x_n_k;
            if (n - k < 0) {
                if (isOpen) {
                    //open curve - mirror values on border
                    x_n_k = x[-(n - k)];
                } else {
                    //closed curve - take values from end of curve
                    x_n_k = x[x.length + (n - k)];
                }
            } else if (n - k > x.length - 1) {
                if (isOpen) {
                    //mirror value on border
                    x_n_k = x[n + k];
                } else {
                    x_n_k = x[(n - k) - x.length];
                }
            } else {
                x_n_k = x[n - k];
            }

            gx1 += x_n_k * g[k + L]; //gaussians go [0 -> M-1]
            dgx1 += x_n_k * dg[k + L];
            d2gx1 += x_n_k * d2g[k + L];
        }
    }

    /* 0th, 1st and 2nd derivatives of whole smoothed curve */
    void getdXcurve(double[] x, double sigma, boolean isOpen) {
        gx = new double[x.length];
        dx = new double[x.length];
        d2x = new double[x.length];

        for (int i = 0; i < x.length; i++) {
            getdX(x, i, sigma, isOpen);
            gx[i] = gx1;
            dx[i] = dgx1;
            d2x[i] = d2gx1;
        }
    }

    /*
        compute curvature of curve after gaussian smoothing
        from "Shape similarity retrieval under affine transforms", Mokhtarian & Abbasi 2002
        curvex - x position of points
        curvey - y position of points
        kappa  - curvature coeff for each point
        sigma  - gaussian sigma
    */
    void computeCurveCSS(double[] curvex, double[] curvey, double sigma, boolean isOpen) {
        int M = (int) Math.round((10.0 * sigma + 1.0) / 2.0) * 2 - 1;
        assert (M % 2 == 1); //M is an odd number

        getGaussianDerivs(sigma, M); //, g, dg, d2g

        double[] X, XX, Y, YY;

        getdXcurve(curvex, sigma, isOpen);
        smoothX = gx.clone();
        X = dx.clone();
        XX = d2x.clone();

        getdXcurve(curvey, sigma, isOpen);
        smoothY = gx.clone();
        Y = dx.clone();
        YY = d2x.clone();

        kappa = new double[curvex.length];
        for (int i = 0; i < curvex.length; i++) {
            // Mokhtarian 02' eqn (4)
            kappa[i] = (X[i] * YY[i] - XX[i] * Y[i]) / Math.pow(X[i] * X[i] + Y[i] * Y[i], 1.5);
        }
    }

    /* find zero crossings on curvature */
    ArrayList<Integer> findCSSInterestPoints() {
        assert (kappa != null);

        ArrayList<Integer> crossings = new ArrayList<>();
        for (int i = 0; i < kappa.length - 1; i++) {
            if ((kappa[i] < 0.0 && kappa[i + 1] > 0.0) || kappa[i] > 0.0 && kappa[i + 1] < 0.0) {
                crossings.add(i);
            }
        }
        return crossings;
    }

    public void polyLineSplit(MatOfPoint pl) {
        contourX = new double[pl.height()];
        contourY = new double[pl.height()];

        for (int j = 0; j < contourX.length; j++) {
            contourX[j] = pl.get(j, 0)[0];
            contourY[j] = pl.get(j, 0)[1];
        }
    }

    public MatOfPoint polyLineMerge(double[] xContour, double[] yContour) {
        assert (xContour.length == yContour.length);

        MatOfPoint pl = new MatOfPoint();
        List<Point> list = new ArrayList<>();

        for (int j = 0; j < xContour.length; j++)
            list.add(new Point(xContour[j], yContour[j]));

        pl.fromList(list);
        return pl;
    }

    MatOfPoint smoothCurve(MatOfPoint curve, double sigma) {
        int M = (int) Math.round((10.0 * sigma + 1.0) / 2.0) * 2 - 1;
        assert (M % 2 == 1); //M is an odd number

        //create kernels
        getGaussianDerivs(sigma, M);
        polyLineSplit(curve);

        getdXcurve(contourX, sigma, false);
        smoothX = gx.clone();

        getdXcurve(contourY, sigma, false);
        smoothY = gx;

        Logger.getRootLogger().info("Smooth curve len: " + smoothX.length);

        return polyLineMerge(smoothX, smoothY);
    }
}
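If you would rather stay in Python, the same idea, low-pass filtering the x and y coordinate sequences of a closed contour with a Gaussian, can be sketched with SciPy. This is a simplified stand-in for the Java class above, not a port of it:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_contour(contour, sigma=3.0):
    # Gaussian-smooth a closed OpenCV contour of shape (N, 1, 2).
    pts = contour.reshape(-1, 2).astype(np.float64)
    # mode='wrap' treats the coordinate sequences as periodic (closed curve)
    x = gaussian_filter1d(pts[:, 0], sigma, mode='wrap')
    y = gaussian_filter1d(pts[:, 1], sigma, mode='wrap')
    return np.stack([x, y], axis=1).reshape(-1, 1, 2).astype(np.int32)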
The same XorShift function written in C and Python gives different results. Can you explain why?
The XorShift function generates numbers in the following way:
x(0) = 123456789
y(0) = 362436069
z(0) = 521288629
w(0) = 88675123
x(n+1) = y(n)
y(n+1) = z(n)
z(n+1) = w(n)
w(n+1) = w(n) ^ (w(n)>>19) ^ (x(n)^(x(n)<<11)) ^ ((x(n)^(x(n)<<11)) >> 8)
I wrote this function in Python to generate subsequent values of w:
X = 123456789
Y = 362436069
Z = 521288629
W = 88675123

def xor_shift():
    global X, Y, Z, W
    t = X ^ (X << 11)
    X = Y
    Y = Z
    Z = W
    W = W ^ (W >> 19) ^ t ^ (t >> 8)
    return W

W1 = xor_shift()  # 252977563114
W2 = xor_shift()  # 646616338854
W3 = xor_shift()  # 476657867818
The same code written in C (it can be found on Wikipedia http://en.wikipedia.org/wiki/Xorshift) gives different results:
#include <stdint.h>

uint32_t xor128(void) {
    static uint32_t x = 123456789;
    static uint32_t y = 362436069;
    static uint32_t z = 521288629;
    static uint32_t w = 88675123;
    uint32_t t;

    t = x ^ (x << 11);
    x = y; y = z; z = w;
    return w = w ^ (w >> 19) ^ t ^ (t >> 8);
}

cout << xor128() << '\n'; // result W1 = 3701687786
cout << xor128() << '\n'; // result W2 = 458299110
cout << xor128() << '\n'; // result W3 = 2500872618
I suppose that there is a problem with my Python code or my use of cout (I am not very good at C++).
EDIT: Working solution. The types needed to be changed from uint32_t to uint64_t:
#include <stdint.h>

uint64_t xor128(void) {
    static uint64_t x = 123456789;
    static uint64_t y = 362436069;
    static uint64_t z = 521288629;
    static uint64_t w = 88675123;
    uint64_t t;

    t = x ^ (x << 11);
    x = y; y = z; z = w;
    return w = w ^ (w >> 19) ^ t ^ (t >> 8);
}
Change all your uint32_t types to uint64_t and you'll get the same result. The difference is the fixed 32-bit width of uint32_t versus the unlimited precision of Python's integer types.
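The converse also works and keeps the generator a genuine 32-bit xorshift: mask the Python values to 32 bits so that the left shifts wrap exactly as they do in C. A minimal sketch:

MASK = 0xFFFFFFFF  # truncate to 32 bits, emulating uint32_t wrap-around

X, Y, Z, W = 123456789, 362436069, 521288629, 88675123

def xor_shift32():
    global X, Y, Z, W
    t = (X ^ (X << 11)) & MASK  # without the mask, << 11 grows past 32 bits
    X, Y, Z = Y, Z, W
    W = (W ^ (W >> 19) ^ t ^ (t >> 8)) & MASK
    return W

print(xor_shift32())  # 3701687786, matching the uint32_t C output above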