I have a function that takes some constants and the value of my variable c, and returns the values of my variables x and y, like this:
def fun(*constants, c):
    # Calculates some stuff to get x and y
    return x, y

# c is keyword-only because it follows *constants
(x, y) = fun(*constants, c=c)
All variables are real numbers.
c lies between 0 and a positive value cmax.
The (x, y) points are ordered with respect to c.
The function produces a curve that is continuous in the x-y plane.
What is the best way to approximate the value of c given a specific value of y?
Edit: Tim Roberts suggested using scipy.optimize.fsolve, and this almost works for me. Is there a way to tell fsolve to look for roots only within a specified range of c, in my case between 0 and cmax?
from scipy.optimize import fsolve

def fun(*constants, c):
    # Calculates some stuff to get x and y
    return x, y

def func(c):
    return fun(*constants, c=c)[1] - y_objective

y_objective = 10
guess0 = cmax / 2  # cmax and constants are defined elsewhere
c_wanted = fsolve(func, [guess0])
print(c_wanted)
The question as stated is quite broad and touches on some deep mathematical results. I will attempt to answer it as reasonably as possible below.
The set of assumptions you listed is, as far as I can tell, not general enough for an inverse to exist, even in a neighborhood around some region of interest.
However, let us instead assume that the conditions required by the inverse function theorem hold (see https://en.wikipedia.org/wiki/Inverse_function_theorem). The IFT gives a formula for the derivative of the inverse within a region where the conditions hold. You can then use the fundamental theorem of calculus to compute the inverse function in this region; see https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus.
The integration will need to be done either symbolically (very advanced) or approximately using numerical quadrature. See https://en.wikipedia.org/wiki/Numerical_integration
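To make this concrete, here is a minimal numerical sketch of that approach. It uses a toy invertible map g (standing in for the y-component of fun) with a known derivative dg, and integrates the inverse-derivative formula from the IFT with scipy.integrate.solve_ivp; the example function and all names are illustrative assumptions:
from scipy.integrate import solve_ivp

# Toy example: invert y = g(c) on a region where g'(c) != 0.
# By the inverse function theorem, (g^-1)'(y) = 1 / g'(g^-1(y)),
# so integrating this ODE in y (fundamental theorem of calculus)
# recovers the inverse function numerically.

def g(c):   # stand-in for fun(*constants, c=c)[1]
    return c**3 + c

def dg(c):  # derivative of g; a finite difference would also work
    return 3 * c**2 + 1

c0 = 0.0          # known point, with y0 = g(c0)
y0 = g(c0)
y_target = 10.0   # the y value we want to invert

sol = solve_ivp(lambda y, c: 1.0 / dg(c[0]), (y0, y_target), [c0],
                rtol=1e-10, atol=1e-12)
c_estimate = sol.y[0, -1]
print(c_estimate)  # ~2.0, and indeed g(2.0) == 10.0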
I'm currently using sympy to check my algebra on some nasty equations involving second order derivatives and complex numbers.
import sympy
from sympy.abc import a, e, i, h, t, x, m, A
# define a wavefunction
Psi = A * sympy.exp(-a * ((m*x**2 /h)+i*t))
# take the first order time derivative
Psi_dt = sympy.diff(Psi, t)
# take the second order space derivative
Psi_d2x = sympy.diff(Psi, x, 2)
# write an expression for the energy potential (rearranging Schrodinger's equation)
V = 1/Psi * (i*h*Psi_dt + (((h**2)/2*m) * Psi_d2x))
# simplify the potential function
sympy.simplify(V)
Which yields this nice thing:
a*(-h*i**2 + m**2*(2*a*m*x**2 - h))
It would be really nice if sympy simplified i^2 to -1.
So how do I tell it that i represents the square root of -1?
On a related note, it would also be really nice to tell sympy that e is Euler's number, so that calling sympy.diff(e**x, x) gives e**x as output.
You need to use the SymPy built-ins, rather than treating those symbols as free variables. In particular:
from sympy import I, E
I is sqrt(-1); E is Euler's number.
Then use SymPy's complex-number functions (such as re, im, and conjugate) to manipulate the complex expressions as needed.
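A minimal sketch of the computation from the question rewritten with these built-ins (the physics expression is kept exactly as posted):
import sympy
from sympy import I, E
from sympy.abc import a, h, t, x, m, A

# Same wavefunction, with SymPy's imaginary unit I instead of a free symbol i
Psi = A * sympy.exp(-a * ((m * x**2 / h) + I * t))
Psi_dt = sympy.diff(Psi, t)      # first-order time derivative
Psi_d2x = sympy.diff(Psi, x, 2)  # second-order space derivative

# Same rearranged expression as in the question
V = 1 / Psi * (I * h * Psi_dt + ((h**2 / 2 * m) * Psi_d2x))
print(sympy.simplify(V))    # I**2 now simplifies to -1

# E is recognized as Euler's number:
print(sympy.diff(E**x, x))  # exp(x)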
I need to solve an equation AX = B using Python where A, X, B are matrices and all values of X must be non-negative.
The best solution I've found is
X = np.linalg.lstsq(A, B, rcond=None)[0]  # lstsq returns (solution, residuals, rank, singular values)
but as a result X contains negative values. Is it possible to get a solution without negative values? Thanks in advance!
In general, this is not mathematically possible. Given the basic requirement that A is invertible, X = A⁻¹B is the unique solution of AX = B. If you don't like the elements that X has, you can't simply ask for another solution: there isn't one. You'll have to change A or B to get a different result.
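A quick illustration of that uniqueness, using a hypothetical invertible A whose unique solution happens to contain negative entries:
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)

X = np.linalg.solve(A, B)     # the unique X with AX = B
print(X)                      # contains negative entries; no other X exists
print(np.allclose(A @ X, B))  # True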
You could solve it with cvxpy:
import cvxpy
def solve(A, B):
    """
    Minimizes |AX - B|**2, assuming A and B are
    square matrices for simplicity. If this optimized
    error is zero, this corresponds to solving AX = B.
    """
    n = A.shape[0]
    X = cvxpy.Variable((n, n))
    # Set objective (note: use @ for matrix multiplication in cvxpy)
    obj_fun = cvxpy.sum_squares(A @ X - B)
    objective = cvxpy.Minimize(obj_fun)
    # Set constraints
    constraints = [X >= 0]
    prob = cvxpy.Problem(objective, constraints)
    result = prob.solve(solver="ECOS")  # optimal objective value
    return X.value, result
EDIT: The answer by Prune is correct, I believe. You can check whether the residual error is non-zero by inspecting the returned result (the optimal objective value).
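If you would rather avoid a convex-optimization dependency: the Frobenius-norm problem min ||AX - B||² with X >= 0 decouples over the columns of B, so scipy.optimize.nnls can solve it column by column. A minimal sketch (the wrapper name is my own):
import numpy as np
from scipy.optimize import nnls

def nnls_matrix(A, B):
    # min ||A X - B||_F^2 with X >= 0 splits into one ordinary
    # non-negative least-squares problem per column of B
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])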
I'm using this code to get the zeros of a nonlinear function.
The function should most certainly have either 1 or 3 zeros.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve

a, b, c = 5, 10, 0

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))

x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so changing the initial guesses manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. In general this is a difficult task; however, for univariate continuous functions it is rather simple. The idea is to note that (a) this class of functions can be approximated to arbitrary precision by a polynomial of sufficiently large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions both for performing polynomial approximation and for finding polynomial roots.
Let's consider a specific function
a, b, c = 5, 10, -1.5

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10, 10, 100)
y_range = func(x_range)
pfit = np.polyfit(x_range, y_range, 10)
f_poly = np.poly1d(pfit)
A plot of the two functions confirms that f_poly is indeed a good approximation of func. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are only looking for approximate estimates of the roots that will later be refined by fsolve.
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j, 11.0027+0.j ,
8.6679+2.482j , 8.6679-2.482j , -5.7568+3.2928j,
-5.7568-3.2928j, -4.9269+0.j , 4.7486+0.j , 2.9158+0.j ])
As expected, Numpy returns 10 complex roots. However, we are only interested in the real roots within the interval [-10, 10]. These can be extracted as follows:
x0 = roots[(roots.imag == 0) & (roots.real > -10) & (roots.real < 10)].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
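Since the goal is to scan many parameter combinations, the steps above can be wrapped into a small helper; the function name, the default arguments, and the tolerance on the imaginary part are my own choices:
import numpy as np
from scipy.optimize import fsolve

def find_real_roots(func, x_min=-10.0, x_max=10.0, order=10, n_samples=100):
    # Approximate func on [x_min, x_max] by a polynomial ...
    x_range = np.linspace(x_min, x_max, n_samples)
    pfit = np.polyfit(x_range, func(x_range), order)
    roots = np.roots(pfit)
    # ... keep the (numerically) real roots inside the interval ...
    real = np.abs(roots.imag) < 1e-8
    inside = (roots.real > x_min) & (roots.real < x_max)
    x0 = roots[real & inside].real
    # ... and refine them with fsolve
    return fsolve(func, x0) if x0.size else np.array([])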
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on the idea of polynomial approximation, but it uses a more sophisticated (and more efficient) approach. It automatically chooses the best polynomial order for the approximation (no user input), and the resulting polynomial roots are practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those found by fsolve:
import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
Between two stationary points (i.e., df/dx=0), you have one or zero roots. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals in which you need to look for a zero. Using Sympy saves you from doing the calculations by hand. Its sy.nsolve() allows you to robustly find a zero in an interval:
import sympy as sy

a, b, c, x = sy.symbols("a, b, c, x", real=True)

# The function:
f = -(x + a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x)  # calculate f' = df/dx
xxs = sy.solve(df, x)  # solving for f' = 0 gives two solutions

# numerical values:
pp = {a: 5, b: 10, c: .5}  # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs]  # numerical stationary points
xxs_pp.sort()  # in ascending order

# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]

# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # calculate zero
    except ValueError:  # no solution found in this interval
        continue
    xx0.append(x0)

print("The zeros are:")
print(xx0)
sy.plot(fpp)  # plot function
I am new to Python. I have looked into the scipy modules for this but without success. Here is my problem: I have an equation in which t is the unknown, and 0.3 is the initial guess. The area is given, and so are c and x (0 to 1). How do I find, for a given area, c, and x, the value of t? I am not sure how to do this; any ideas?
from scipy import integrate
import numpy as np

c = 1
t = 0.3
# x ranges from 0 to 1

def evalfunction(x, c, t):
    return 5 * t * c * (0.2969 * np.sqrt(x / c) - 0.1260 * (x / c)
                        - 0.3516 * (x / c)**2 + 0.2843 * (x / c)**3
                        - 0.1015 * (x / c)**4)

x3 = lambda x: evalfunction(x, c, t)
area = integrate.quad(x3, 0, 1)
Try something like the golden-section search (http://en.wikipedia.org/wiki/Golden_section_search): guess an upper and a lower bound for t, and then minimize the difference between the area given by integrate.quad(x3, 0, 1) and your expected area. When this difference is at its minimum, up to some tolerance you choose, you have found your t.
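A minimal sketch of that idea, using scipy.optimize.minimize_scalar as the bounded one-dimensional minimizer (the target area and the bracket (0, 1) for t are assumptions for illustration):
import numpy as np
from scipy import integrate, optimize

c = 1.0
target_area = 0.05  # hypothetical given area

def evalfunction(x, c, t):
    return 5 * t * c * (0.2969 * np.sqrt(x / c) - 0.1260 * (x / c)
                        - 0.3516 * (x / c)**2 + 0.2843 * (x / c)**3
                        - 0.1015 * (x / c)**4)

def area_error(t):
    # squared difference between the computed and the expected area
    area, _ = integrate.quad(lambda x: evalfunction(x, c, t), 0, 1)
    return (area - target_area) ** 2

res = optimize.minimize_scalar(area_error, bounds=(0.0, 1.0), method="bounded")
print(res.x)  # the t whose area best matches target_area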