I am solving an ODE as follows:
import numpy as np
import scipy as sp
import math
from math import *
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def g(y, x):
    y0 = y[0]
    return x #formula##
# Initial conditions on y, y' at x=0
init = 0 #value##
# First integrate from 0 to 100
xplotval=np.linspace(4,8,4) #linspacefunction
print(xplotval)
I am getting output as:
[[ 7. ]
[ 5.76455273 ]
[ 5.41898906 ]
[ 6.49185668 ]]
I'd like to get a one-dimensional array instead, as follows:
[7., 5.76455273, 5.41898906, 6.49185668]
How can I do that?
Maybe you want flatten:
print(xplotval.flatten())
Unless you actually want the transposed vector, which you would get with numpy.transpose:
print(np.transpose(xplotval))
You can simply use a list comprehension, something like:
oneD = [l[0] for l in xplotval]
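For illustration, a minimal sketch with a made-up (4, 1) array standing in for the solver output, showing how flatten (and ravel, which avoids a copy when possible) produce the 1-D result you describe:
import numpy as np
xplotval = np.array([[7.0], [5.76455273], [5.41898906], [6.49185668]])  # hypothetical (4, 1) array
print(xplotval.flatten())   # copy: [7. 5.76455273 5.41898906 6.49185668]
print(xplotval.ravel())     # view when possible, same 1-D result
print(xplotval[:, 0])       # slicing the first (only) column also works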
I've been writing code to fit y = a*x + b to given values of x and y, so I want to determine a and b.
For example, I have y=2x+1 so
xexp =[ 0., 0.1111111, 0.2222222, 0.3333333, 0.4444444, 0.5555556, 0.6666667, 0.7777778, 0.8888889, 1.] and
yexp=[1., 1.2222222, 1.4444444, 1.6666667, 1.8888889, 2.1111111, 2.3333333, 2.5555556, 2.7777778, 3.]
I need to create a loop in my code that calculates every ycalc[i] and returns, as my objective function, the mean error over all the given data.
from scipy.optimize import differential_evolution
import numpy as np
from matplotlib import pyplot
from scipy.optimize import LinearConstraint
from scipy.optimize import NonlinearConstraint
#2*x+1 // a*x+b --> have xexp and yexp need to calc a and b when xcalc = xexp
errototal=[]
ycalc=[]
erro=[]
#experimental values
xexp =[ 0., 0.1111111, 0.2222222, 0.3333333, 0.4444444, 0.5555556, 0.6666667, 0.7777778, 0.8888889, 1.]
yexp=[1., 1.2222222, 1.4444444, 1.6666667, 1.8888889, 2.1111111, 2.3333333, 2.5555556, 2.7777778, 3.]
#obj function
def func(xcalc):
    return np.mean(sum(errototal))
bounds = [(-5, 5), (-5, 5)]
plot1=[]
def condition(xk, convergence):
    for i in range(0, 9):
        ycalc.append(xk[0]*xexp[i] + xk[1])
        # ycalc.append(xk[0]*xk[2]+xk[1])
        erro.append(abs(yexp[i] - ycalc[i]))
        errototal.append(erro[i])
    plot1.append(func(sum(errototal)))
    print(xk, errototal, ycalc, yexp)
    print(sum(errototal))
result = differential_evolution(func, bounds, disp=True, callback=condition)
# line plot of best objective function values
pyplot.plot(plot1, '.-')
pyplot.xlabel('Improvement Number')
pyplot.ylabel('Evaluation f(x)')
pyplot.show()
My code stops in the first step. Any tips?
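For illustration, a minimal sketch (not the original code, and assuming the goal is the mean absolute error over all data points): differential_evolution passes each candidate parameter vector [a, b] directly to the objective, so the error can be computed there without accumulating global lists in the callback:
import numpy as np
from scipy.optimize import differential_evolution
xexp = np.array([0., 0.1111111, 0.2222222, 0.3333333, 0.4444444,
                 0.5555556, 0.6666667, 0.7777778, 0.8888889, 1.])
yexp = np.array([1., 1.2222222, 1.4444444, 1.6666667, 1.8888889,
                 2.1111111, 2.3333333, 2.5555556, 2.7777778, 3.])
def func(params):
    # params is the candidate [a, b]; return the mean absolute error over all data
    a, b = params
    ycalc = a * xexp + b
    return np.mean(np.abs(yexp - ycalc))
result = differential_evolution(func, [(-5, 5), (-5, 5)], disp=True)
print(result.x)  # should end up close to [2, 1]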
I am using Python 3.7 and Numpy 1.16.5.
I tried to use the following code:
import numpy as np
M = [[np.eye(3), np.zeros((3,3))],[temp4, np.eye(3)]]
FTp = [[-0.0003],[0.0008],[0.0008],[0.0055],[0.0020],[0.0044]]
FT = np.linalg.solve(M,FTp)
The purpose of this code is to perform a left division of FTp by M (MATLAB-style FT = M\FTp).
temp4 is a user-defined 3x3 matrix. Whatever value temp4 has, M should have full rank.
However, when I tried to run this code, I got the following message:
LinAlgError: Singular matrix
What caused this error and how can I fix it?
The nested Python list in your code does not build the 6x6 matrix you intend; converted to an array it becomes a 4-D stack of 3x3 blocks. Use np.block to assemble a 2-D array from 2-D blocks:
import numpy as np
tmp = np.random.rand(3,3)
M = np.block([[np.eye(3), np.zeros((3,3))],[tmp, np.eye(3)]])
M.shape # (6, 6)
FTp = [[-0.0003],[0.0008],[0.0008],[0.0055],[0.0020],[0.0044]]
FT = np.linalg.solve(M,FTp)
FT
array([[-0.0003 ],
[ 0.0008 ],
[ 0.0008 ],
[ 0.00477519],
[ 0.0014083 ],
[ 0.00380704]])
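If np.block is not available in your NumPy version, the same 6x6 matrix can be assembled with np.vstack/np.hstack; this is just an equivalent sketch:
import numpy as np
tmp = np.random.rand(3, 3)
M = np.vstack([
    np.hstack([np.eye(3), np.zeros((3, 3))]),   # top block row
    np.hstack([tmp, np.eye(3)]),                # bottom block row
])
print(M.shape)  # (6, 6)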
I'm wondering how the following code could be faster. At the moment it seems unreasonably slow, and I suspect I may be using the autograd API wrong. The output I expect is the Jacobian of f with respect to the parameters, evaluated for each element of timeline, which I do get, but it takes a long time:
import numpy as np
from autograd import jacobian
def f(params):
    mu_, log_sigma_ = params
    Z = timeline * mu_ / log_sigma_
    return Z
timeline = np.linspace(1, 100, 40000)
gradient_at_mle = jacobian(f)(np.array([1.0, 1.0]))
I would expect the following:
jacobian(f) returns a function that represents the gradient vector w.r.t. the parameters.
jacobian(f)(np.array([1.0, 1.0])) is the Jacobian evaluated at the point (1, 1). To me, this should behave like a vectorized numpy function, so it should execute very fast, even for 40k-length arrays. However, this is not what is happening.
Even something like the following has the same poor performance:
import numpy as np
from autograd import jacobian
def f(params, t):
    mu_, log_sigma_ = params
    Z = t * mu_ / log_sigma_
    return Z
timeline = np.linspace(1, 100, 40000)
gradient_at_mle = jacobian(f)(np.array([1.0, 1.0]), timeline)
From https://github.com/HIPS/autograd/issues/439 I gathered that there is an undocumented function autograd.make_jvp which calculates the jacobian with a fast forward mode.
The link states:
Given a function f, vectors x and v in the domain of f, make_jvp(f)(x)(v) computes both f(x) and the Jacobian of f evaluated at x, right multiplied by the vector v.
To get the full Jacobian of f you just need to write a loop to evaluate make_jvp(f)(x)(v) for each v in the standard basis of f's domain. Our reverse mode Jacobian operator works in the same way.
From your example:
import autograd.numpy as np
from autograd import make_jvp
def f(params):
    mu_, log_sigma_ = params
    Z = timeline * mu_ / log_sigma_
    return Z
timeline = np.linspace(1, 100, 40000)
gradient_at_mle = make_jvp(f)(np.array([1.0, 1.0]))
# loop through each basis
# [1, 0] evaluates (f(x), first column of the Jacobian)
# [0, 1] evaluates (f(x), second column of the Jacobian)
for basis in (np.array([1, 0]), np.array([0, 1])):
    val_of_f, col_of_jacobian = gradient_at_mle(basis)
    print(col_of_jacobian)
Output:
[ 1. 1.00247506 1.00495012 ... 99.99504988 99.99752494
100. ]
[ -1. -1.00247506 -1.00495012 ... -99.99504988 -99.99752494
-100. ]
This runs in ~0.005 seconds on Google Colab.
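If you want the full Jacobian as a single array rather than printing each column, a small helper (the helper name and structure are my own, not part of autograd) can loop over the standard basis and stack the columns:
import autograd.numpy as np
from autograd import make_jvp
def full_jacobian(f, x):
    # Push each standard basis vector through the forward-mode JVP
    # and stack the resulting columns into an (output_dim, input_dim) array.
    jvp = make_jvp(f)(x)
    columns = []
    for i in range(x.size):
        basis = np.zeros(x.size)
        basis[i] = 1.0
        _, col = jvp(basis)
        columns.append(col)
    return np.stack(columns, axis=-1)
# e.g. full_jacobian(f, np.array([1.0, 1.0])).shape -> (40000, 2)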
Edit:
Functions like cdf aren't defined for the regular jvp yet but you can use another undocumented function make_jvp_reversemode where it is defined. Usage is similar except that the output is only the column and not the value of the function:
import autograd.numpy as np
from autograd.scipy.stats.norm import cdf
from autograd.differential_operators import make_jvp_reversemode
def f(params):
    mu_, log_sigma_ = params
    Z = timeline * cdf(mu_ / log_sigma_)
    return Z
timeline = np.linspace(1, 100, 40000)
gradient_at_mle = make_jvp_reversemode(f)(np.array([1.0, 1.0]))
# loop through each basis
# [1, 0] evaluates first column of jacobian
# [0, 1] evaluates second column of jacobian
for basis in (np.array([1, 0]), np.array([0, 1])):
    col_of_jacobian = gradient_at_mle(basis)
    print(col_of_jacobian)
Output:
[0.05399097 0.0541246 0.05425823 ... 5.39882939 5.39896302 5.39909665]
[-0.05399097 -0.0541246 -0.05425823 ... -5.39882939 -5.39896302 -5.39909665]
Note that make_jvp_reversemode will be slightly faster than make_jvp by a constant factor due to its use of caching.
I would like to calculate the p-value of the fit I got from numpy.linalg.lstsq. Here a toy example:
import numpy as np
x = np.array([[ 58295.62187335],[ 45420.95483714],[ 3398.64920064],[ 977.22166306],[ 5515.32801851],[ 14184.57621022],[ 16027.2803392 ],[ 15313.01865824],[ 6443.2448182 ]])
y = np.array([ 143547.79123381, 22996.69597427, 2591.56411049, 661.93115277, 8826.96549102, 17735.13549851, 11629.13003263, 14438.33177173, 6997.89334741])
a, res, rank, s = np.linalg.lstsq(x, y)
From a previous question (get the R^2 value from scipy.linalg.lstsq) I know how to get R²; however, I would also like to compute the p-value.
Thanks in advance.
You could use scipy.stats:
import numpy as np
from scipy.stats import pearsonr
x = np.array([ 58295.62187335, 45420.95483714, 3398.64920064, 977.22166306, 5515.32801851, 14184.57621022, 16027.2803392 , 15313.01865824, 6443.2448182 ])
y = np.array([ 143547.79123381, 22996.69597427, 2591.56411049, 661.93115277, 8826.96549102, 17735.13549851, 11629.13003263, 14438.33177173, 6997.89334741])
r, p = pearsonr(x,y)
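pearsonr returns the correlation coefficient r and its two-sided p-value p. As a sketch with the same data, scipy.stats.linregress gives the slope, intercept, r value, and p-value in one call; note that linregress fits an intercept, unlike your intercept-free lstsq call, so the fit itself is not identical:
import numpy as np
from scipy.stats import linregress
x = np.array([58295.62187335, 45420.95483714, 3398.64920064, 977.22166306, 5515.32801851, 14184.57621022, 16027.2803392, 15313.01865824, 6443.2448182])
y = np.array([143547.79123381, 22996.69597427, 2591.56411049, 661.93115277, 8826.96549102, 17735.13549851, 11629.13003263, 14438.33177173, 6997.89334741])
result = linregress(x, y)
print(result.slope, result.intercept, result.rvalue, result.pvalue)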
If I define a function with two arrays, for instance like this:
from numpy import *
x = arange(-10,10,0.1)
y = x**3
How can I extract the value of y(5.05) by interpolating between the two closest points, y(5) and y(5.1)? At the moment, if I want to find that value, I use this method:
y0 = y[x>5][0]
That gives me the value of y for x=5.1, but I think better methods exist, and they are probably the correct ones.
There's numpy.interp, if linear interpolation will suffice:
>>> import numpy as np
>>> x = np.arange(-10, 10, 0.1)
>>> y = x**3
>>> np.interp(5.05, x, y)
128.82549999999998
>>> 5.05**3
128.787625
And there are a bunch of tools in scipy for interpolation [docs]:
>>> import scipy.interpolate
>>> f = scipy.interpolate.UnivariateSpline(x, y)
>>> f
<scipy.interpolate.fitpack2.LSQUnivariateSpline object at 0xa85708c>
>>> f(5.05)
array(128.78762500000025)
There's a function for this in numpy/scipy:
import numpy as np
np.interp(5.05, x, y)