scipy.weave.inline not working as expected with math library - python

I have a problem using scipy.weave.inline. I want to program a unit step function centered around lcenter and with a width of width_nm. I have two versions: a Python version, called pm, and an optimized function, called pm_weave, but it looks like abs is not working properly in the weave version. See the code below. If you run it, you get a window of size 1 for the weave variant no matter what the input is, so it looks like abs doesn't work. If you remove the abs, it works exactly as you'd expect.
How can I fix this?
def pm_weave(w, lcenter, width_nm):
    """ Return a unitstep function that is centered around lcenter with height 1.0 and width width_nm """
    lcenter = float(lcenter)
    w = float(w)
    width_nm = float(width_nm)
    code = """
    #include <math.h>
    float wl = 1.88495559215387594307758602997E3/w;
    if(abs(lcenter-wl) < width_nm) {
        return_val = 1.0;
    }
    else {
        return_val = 0.0;
    }
    """
    res = weave.inline(code, ['w', 'lcenter', 'width_nm'],
                       type_converters=weave.converters.blitz,
                       compiler="gcc", headers=["<math.h>"])
    return res
def pm(w, lcenter, width_nm):
    """
    Return a unitstep function centered around lcenter [nm] with width width_nm. w
    should be a radial frequency.
    """
    return abs(600*np.pi/w - lcenter) < width_nm/2. and 1. or 0.

plot(wavelength_list, map(lambda w: pm(toRadialFrequency(w), 778, 1), wavelength_list), label="Desired behaviour")
plot(wavelength_list, map(lambda w: pm_weave(toRadialFrequency(w), 778, 1), wavelength_list), '^', label="weave.inline behaviour")
ylim(0, 1.5)
show()

I think you need to use fabs() instead of abs() in the C code. abs() is the integer absolute-value function, so the floating-point argument gets truncated, while fabs() works for floating-point arithmetic.
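For example, only the condition needs to change; a minimal sketch of the corrected code string (the rest of pm_weave stays as in the question):

code = """
#include <math.h>
float wl = 1.88495559215387594307758602997E3/w;
/* fabs() keeps the comparison in floating point; abs() would convert to int */
if(fabs(lcenter - wl) < width_nm) {
    return_val = 1.0;
}
else {
    return_val = 0.0;
}
"""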

Related

Assigning a complex value in a cupy RawKernel

I am a beginner learning how to exploit the GPU for parallel computation using Python and CuPy. I would like to implement my code to simulate some problems in physics and need to use complex numbers, but I don't know how to manage them. Although there are examples in CuPy's official documentation, they only mention including the complex.cuh library and how to declare a complex variable. I can't find any example of how to assign a complex number correctly, or of how to call the functions in the complex.cuh library to do calculations.
I am stuck at line 11 of this code. I want to make a complex value equal to x[tId_x] + j*y[tId_y], where j is the imaginary unit. I tried several ways and none of them worked, so I left this one here.
import cupy as cp
import time

add_kernel = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void test(double* x, double* y, complex<float>* z){
    int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
    int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
    complex<float>* value = complex(x[tId_x],y[tId_y]);
    z[tId_x*blockDim.y*gridDim.y+tId_y] = value;
}''', "test")
x = cp.random.rand(1,8,4096,dtype = cp.float32)
y = cp.random.rand(1,8,4096,dtype = cp.float32)
z = cp.zeros((4096,4096), dtype = cp.complex64)
t1 = time.time()
add_kernel((128,128),(32,32),(x,y,z))
print(time.time()-t1)
What is the proper way to assign a complex number in the RawKernel?
Thank you for answering this question!
@plaeonix, thank you very much for your hint. I found the answer.
This line:
complex<float>* value = complex(x[tId_x],y[tId_y])
should be replaced with:
complex<float> value = complex<float>(x[tId_x],y[tId_y])
Then the assignment of a complex number works.
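Putting it together, a minimal self-contained sketch (this is not the exact kernel from the question: it uses 1-D float32 arrays, parameter types that match the array dtypes, and an illustrative launch configuration and name):

import cupy as cp

make_complex = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void make_complex(const float* x, const float* y, complex<float>* z, int n) {
    int i = blockDim.x*blockIdx.x + threadIdx.x;
    if (i < n) {
        // build a complex<float> from two real inputs, then assign it
        complex<float> value = complex<float>(x[i], y[i]);
        z[i] = value;
    }
}''', "make_complex")

n = 4096
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)
z = cp.zeros(n, dtype=cp.complex64)
make_complex(((n + 255) // 256,), (256,), (x, y, z, cp.int32(n)))
print(z[:4])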

Transform feedback with tessellation under Python/ModernGL

I'm using Python/ModernGL, and I'm trying to capture the output of a tessellation evaluation shader with a vao.transform call. I have no problems if I write my output in a vertex shader or a geometry shader, but if any tessellation is involved, as in the case below, I get garbage numbers as output.
The idea is to keep some useful glsl code shared through my project and keep automated tests for it. Some of that code touches tessellation.
I've fiddled as much as I could with things like ctx.patch_vertices, a tessellation control stage, a geometry shader stage (geom+vertex shader transforms work fine), quads/triangles/isolines/point_mode in the TES input layout, but I'm really just guessing. I realize I won't get out the same number of vertices I put in, but I should still be able to read the first ten of them and see what they are.
In the minimal example below, which is modified from the ModernGL docs, commenting out the TES causes the program to output ten 3.0s. I'd like to see ten 5.0s.
import struct
import moderngl
ctx = moderngl.create_context(standalone=True)
ctx.patch_vertices = 1
program = ctx.program(
    vertex_shader='''
        #version 430
        out float a_out;
        void main() {
            a_out = 3;
        }
    ''',
    tess_evaluation_shader='''
        #version 430
        layout(triangles, equal_spacing, ccw) in;
        out float a_out;
        void main() {
            a_out = 5;
        }
    ''',
    varyings=["a_out"],
)

NUM_VERTICES = 10
vao = ctx.vertex_array(program, [])
buffer = ctx.buffer(reserve=NUM_VERTICES * 4)
vao.transform(buffer, vertices=NUM_VERTICES)

data = struct.unpack(f"{NUM_VERTICES}f", buffer.read())
for i, datum in enumerate(data):
    print(f"data[{i}] = {datum}")

Rpy2: set an R formula from Python

I am a little confused by R's formula syntax.
I created the following function with rpy2:
robjects.r('''
    project_var <- function(grid, points) {
        coordinates(points) = ~X + Y
        gridded(grid) = ~X + Y
        grid = idw(Z~1, points, grid)
        grid <- as.data.frame(grid)
        return(grid)
    }
''')
Then I import it
project_var = robjects.globalenv['project_var']
Then I call it:
test = project_var(model,points_top)
And it works as expected!
I would like 'Z' to be set by an argument of my function, something like this:
project_var <- function(grid, points, feature_name) {
    ...
    grid = idw(feature_name~1, points, grid)
My problem is this line:
idw(feature_name~1, points, grid)
I do not really understand it, or what feature_name really is (because it is not a string nor a known variable at this point, but the name of a column used in a formula).
For info, idw comes from the gstat library... and I do not know R...
here is the doc:
idw.locations(formula, locations, data, newdata, nmax = Inf, nmin = 0,
omax = 0, maxdist = Inf, block, na.action = na.pass, idp = 2.0,
debug.level = 1)
https://cran.r-project.org/web/packages/gstat/gstat.pdf
So what should I pass for feature_name on the Python side? Or how should I build it in R so that it turns the string feature_name into something that works?
Any help would be appreciated.
Thank you for reading so far.
I do not really understand this line and what feature_name really is (because it is not a string nor a known variable at this point, but the name of a column).
R differs from Python in that expressions in a function call (here idw(Z~1, points, grid)) are only evaluated within the function, and the unevaluated expression itself is available to the code in the body of the function.
In addition to that, Z ~ 1 is itself a special thing: it is an R formula. You could write fml <- Z ~ 1 in R and the object fml will be a "formula". The constructor for the formula is somewhat hidden, as <something> ~ <something> is treated as a language construct in R, but in fact you have something like build_formula(<left_side_expression>, <right_side_expression>). You can try fml <- get("~")(Z, 1) in R and see that this is exactly what is happening.
Okay, I just needed to use as.formula to convert the string to a formula :-)
idw(as.formula(feature_name), points, grid)
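On the Python side this just means passing a string; a minimal sketch with rpy2 (assuming the R function has been rewritten as above to accept the formula as a string):

import rpy2.robjects as robjects

# convert the string on the R side...
as_formula = robjects.r['as.formula']
fml = as_formula("Z ~ 1")

# ...or build the formula object directly in Python
fml2 = robjects.Formula("Z ~ 1")

# either way, the rewritten project_var can simply be handed the string, e.g.
# test = project_var(model, points_top, "Z~1")   # model, points_top as in the question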

Python curve fit with change point

As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me in the mathematics forum of Stack Exchange.
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time translating the R code, as I have never worked with it. I have written the function to return the sum of squares, but I'm stuck on how to use something similar to the optim function. Also, I don't really like the guesswork involved in the initial values. I would prefer to run and re-run a kind of optim function until I get the wanted result, because my needs for a nearly perfect curve fit are really high.
def model(par, x):
    n = len(x)
    res = []
    for i in range(1, n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if(x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares(par, x, y):
    ss = sum((y-model(par,x))^2)
    return ss
And this is the sum of squares
But I have no idea how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
I wrote an open source Python package (BSD license) that has a genetic algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver; it works similarly to what you describe in your question. The GitHub URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
Seems like I have been able to fix the problem.
def model(par, x):
    n = len(x)
    res = np.array([])
    for i in range(0, n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if(x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res
def sum_squares(par, x, y):
    ss = sum((y-model(par, x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follows:
parameter = sy.array([0.0,-8.0,0.0018,0.0018,0,200])
res = least_squares(sum_squares, parameter, bounds=(-360,360), args=(x1,y1),verbose = 1)
The only problem is that it doesn't produce the result I'm looking for, and that is mainly because my x values are in [0, 360] while the Y values only vary by about 0.2, so it's a hard nut to crack for this function, and it produces this (poor) result:
[image: the resulting fit]
I think that the range of x values [0, 360] and y values (which you say vary by about 0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy/scipy, you would definitely want to avoid looping over the values of x and instead do something more like:
def model(par, x):
    res = par[2] + par[3]*x + par[4]*x**2
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    low = x <= par[5]
    res[low] = A0 + par[0]*x[low] + par[1]*x[low]**2
    return res
It's not clear to me that this form is really what you want: why should A0 (a value independent of x added to a portion of the model) be so complicated and dependent on the other parameters?
More importantly, your sum_of_squares() function is actually not what least_squares() wants: you should return the residual array, not do the sum of squares yourself. So that should be:
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form. This is going to be very hard for these optimization routines to find. Such routines generally make a very small change to each parameter value to estimate the derivative of the residual array with respect to that variable, in order to figure out how to change that variable. For a breakpoint parameter, a small change in the value will usually move no data points from one branch of the model to the other, so it has no effect at all on the residuals, and the algorithm will not be able to determine a value for this parameter. With some of the scipy.optimize algorithms (notably leastsq) you can specify a scale for the relative change to make; with leastsq that is called epsfcn. You may need to set this as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately, this cannot be set per variable, only per fit. You might need to experiment with this and other options to least_squares or leastsq.
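As a rough illustration only, here is a sketch that combines the pieces above (it assumes the vectorized model() given earlier, the x1/y1 arrays and starting values from the question, and a diff_step value that is just a starting point for experimentation):

import numpy as np
from scipy.optimize import least_squares

def residual(par, x, y):
    # least_squares wants the residual vector, not its sum of squares
    return y - model(par, x)

par0 = np.array([0.0, -8.0, 0.0018, 0.0018, 0.0, 200.0])

# diff_step plays a role similar to leastsq's epsfcn: a larger relative step for the
# finite-difference Jacobian makes the effect of moving the breakpoint par[5] visible
sol = least_squares(residual, par0, args=(x1, y1), bounds=(-360, 360), diff_step=0.3)
print(sol.x)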

F# library or .Net Numerics equivalent to Python Numpy function

I have the following Python NumPy code; it takes X, an array with an arbitrary number of columns and rows, and builds a least squares model that predicts Y values.
What is the Math.Net equivalent for such a function?
Here is the Python code:
newdataX = np.ones([dataX.shape[0],dataX.shape[1]+1])
newdataX[:,0:dataX.shape[1]]=dataX
# build and save the model
self.model_coefs, residuals, rank, s = np.linalg.lstsq(newdataX, dataY)
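For reference, a self-contained version of that NumPy snippet with made-up data (shapes and values are illustrative only); this is what the .NET code has to reproduce: append a column of ones for the intercept, then solve the least squares problem:

import numpy as np

# illustrative data: 5 samples, 2 features
dataX = np.array([[1.0, 2.0],
                  [2.0, 1.5],
                  [3.0, 0.5],
                  [4.0, 3.0],
                  [5.0, 2.5]])
dataY = np.array([3.1, 4.0, 4.8, 7.2, 7.9])

# append a column of ones so the last coefficient is the intercept
newdataX = np.ones([dataX.shape[0], dataX.shape[1] + 1])
newdataX[:, 0:dataX.shape[1]] = dataX

coefs, residuals, rank, s = np.linalg.lstsq(newdataX, dataY, rcond=None)
print(coefs)             # feature coefficients followed by the intercept
print(newdataX @ coefs)  # predicted Y values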
I think you are looking for the functions on this page: http://numerics.mathdotnet.com/api/MathNet.Numerics.LinearRegression/MultipleRegression.htm
You have a few options to solve it:
Normal Equations : MultipleRegression.NormalEquations(x, y)
QR Decomposition : MultipleRegression.QR(x, y)
SVD : MultipleRegression.SVD(x, y)
Normal equations are faster but less numerically stable while SVD is the most numerically stable but the slowest.
You can call numpy from .NET using pythonnet (C# CODE BELOW IS COPIED FROM GITHUB):
The only "funky" part right now with pythonnet is passing numpy arrays. It is possible to convert them to Python lists at the interface, though this reduces performance for some situations.
https://github.com/pythonnet/pythonnet/tree/develop
static void Main(string[] args)
{
    using (Py.GIL()) {
        dynamic np = Py.Import("numpy");
        dynamic sin = np.sin;
        Console.WriteLine(np.cos(np.pi*2));
        Console.WriteLine(sin(5));
        double c = np.cos(5) + sin(5);
        Console.WriteLine(c);
        dynamic a = np.array(new List<float> { 1, 2, 3 });
        dynamic b = np.array(new List<float> { 6, 5, 4 }, Py.kw("dtype", np.int32));
        Console.WriteLine(a.dtype);
        Console.WriteLine(b.dtype);
        Console.WriteLine(a * b);
        Console.ReadKey();
    }
}
outputs:
1.0
-0.958924274663
-0.6752620892
float64
int32
[ 6. 10. 12.]
Here is an example using F# posted on GitHub:
https://github.com/pythonnet/pythonnet/issues/112
open Python.Runtime
open FSharp.Interop.Dynamic
open System.Collections.Generic

[<EntryPoint>]
let main argv =
    //set up for garbage collection?
    use gil = Py.GIL()

    //-----
    //NUMPY
    //import numpy
    let np = Py.Import("numpy")

    //call a numpy function dynamically
    let sinResult = np?sin(5)

    //make a python list the hard way
    let list = new Python.Runtime.PyList()
    list.Append( new PyFloat(4.0) )
    list.Append( new PyFloat(5.0) )

    //run the python list through np.array dynamically
    let a = np?array( list )
    let sumA = np?sum(a)

    //again, but use a keyword to change the type
    let b = np?array( list, Py.kw("dtype", np?int32 ) )
    let sumAB = np?add(a, b)

    let SeqToPyFloat ( aSeq : float seq ) =
        let list = new Python.Runtime.PyList()
        aSeq |> Seq.iter( fun x -> list.Append( new PyFloat(x)))
        list

    //Worth making some convenience functions (see below for why)
    let a2 = np?array( [|1.0;2.0;3.0|] |> SeqToPyFloat )

    //--------------------
    //Problematic cases: these run but don't give good results
    //make a np.array from a generic list
    let list2 = [|1;2;3|] |> ResizeArray
    let c = np?array( list2 )
    printfn "%A" c //gives type not value in debugger

    //make a np.array from an array
    let d = np?array( [|1;2;3|] )
    printfn "%A" d //gives type not value in debugger

    //use a np.array in a function
    let sumD = np?sum(d) //gives type not value in debugger
    //let sumCD = np?add(d,d) // this will crash

    //can't use primitive f# operators on the np.arrays without throwing an exception; seems
    //to work in c# https://github.com/tonyroberts/pythonnet //develop branch
    //let e = d + 1

    //-----
    //NLTK
    //import nltk
    let nltk = Py.Import("nltk")
    let sentence = "I am happy"
    let tokens = nltk?word_tokenize(sentence)
    let tags = nltk?pos_tag(tokens)
    let taggedWords = nltk?corpus?brown?tagged_words()
    let taggedWordsNews = nltk?corpus?brown?tagged_words(Py.kw("categories", "news") )
    printfn "%A" taggedWordsNews

    let tlp = nltk?sem?logic?LogicParser(Py.kw("type_check", true))
    let parsed = tlp?parse("walk(angus)")
    printfn "%A" parsed?argument

    0 // return an integer exit code
