import sympy

x, y = sympy.symbols("x y")
f = sympy.erf(x+y) - 1
sympy.plot_implicit(f)
Is it possible to plot this with Sympy 1.9 and NumPy 1.21.4? Is there an upgrade path for either that allows me to plot this, or a workaround I can use?
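One possible workaround (a sketch, not from the original thread, assuming SciPy is available for a vectorized erf): lambdify the expression yourself and let matplotlib draw the zero-level contour, the same lambdify-plus-matplotlib idea used in the answer further below. Note that erf(x + y) only reaches 1 asymptotically, so the contour, if one is drawn, just marks where the expression saturates to zero in floating point rather than an exact solution set.

import numpy as np
import matplotlib.pyplot as plt
import sympy

x, y = sympy.symbols("x y")
f = sympy.erf(x + y) - 1

# Evaluate the expression numerically on a grid; scipy provides a vectorized erf
g = sympy.lambdify((x, y), f, modules=["scipy", "numpy"])
X, Y = np.meshgrid(np.linspace(-10, 10, 400), np.linspace(-10, 10, 400))

# Draw the zero-level contour of f(x, y)
plt.contour(X, Y, g(X, Y), levels=[0])
plt.show()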
I'm trying to do some calculations in sympy but keep getting an "Invalid limits given" error when I try to plot my function. I'm new to Python and sympy, so I'm sure it's an obvious mistake, but I just can't understand how the limits are invalid.
import sympy
x = sympy.symbols('x')
min_x, max_x = -6.0, 6.0
func = x * sympy.integrate(x * sympy.tanh(x), (x, min_x, max_x))
p = sympy.plot(func)
In a few words, the plotting module converts the symbolic expression to a numerical function, evaluates it, and creates the plot. However, the current plotting module uses out-of-date code for that conversion; the error simply means the plotting module can't plot this expression.
You can plot your expression in two different ways. The first: convert the symbolic expression to a numerical function with lambdify, then use numpy and matplotlib:
import matplotlib.pyplot as plt
import numpy as np

# Convert the symbolic expression into a NumPy-aware numerical function
f = sympy.lambdify(x, func)

plt.figure()
xx = np.linspace(-10, 10)
yy = f(xx)
plt.plot(xx, yy)
plt.show()
The second: install the new and more advanced plotting module, Sympy Plot Backend, and then:
from spb import *
plot(func)
We have two lists (vectors) of data, y and x; we can imagine x being time steps (0, 1, 2, ...) and y some system property computed at each value of x.
I'm interested in calculating the derivative of the log of y with respect to the log of x, and the question is how to perform such a calculation in Python.
We can start off by using numpy to calculate the logs: logy = np.log(y) and logx = np.log(x). Then what method do we use for the differentiation dlog(y)/dlog(x)?
One option that comes to mind is using np.gradient() in the following way:
deriv = np.gradient(logy, np.gradient(logx))
Is this a valid way of going about this calculation?
Are there better (or equivalent) alternatives without using np.gradient?
Looking at the source of np.gradient and around it, you can see it changed in numpy version 1.14, which is why the docs changed.
I have version 1.11. So I think gradient is now defined as def gradient(y, x) -> dy/dx when x is an np.ndarray of coordinates, but it isn't in version 1.11. Doing np.gradient(y, np.array(...)) there is actually, I think, undefined behaviour!
However, np.gradient(y) / np.gradient(x) works for all numpy versions. Use that!
Proof:
import numpy as np
import matplotlib.pyplot as plt

# 10000 unevenly spaced sample points on [0, 2*pi]
x = np.sort(np.random.random(10000)) * 2 * np.pi
y = np.sin(x)

# Elementwise ratio of the two gradients approximates dy/dx
dy_dx = np.gradient(y) / np.gradient(x)

plt.plot(x, dy_dx)
plt.show()
Looks an awful lot like a cosine wave.
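Applying the same recipe to the original question's log-log derivative, a minimal sketch (the power-law data is made up for illustration):

import numpy as np

# Assumed example data: strictly positive so the logs are defined
x = np.arange(1, 101, dtype=float)
y = x ** 2.5

logx = np.log(x)
logy = np.log(y)

# dlog(y)/dlog(x) as the elementwise ratio of the two gradients
dlogy_dlogx = np.gradient(logy) / np.gradient(logx)
print(dlogy_dlogx[:5])  # roughly 2.5 everywhere for this power law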
Why does SymPy throw a TypeError when I use scipy.stats.norm? How can I solve the equation?
from sympy import Eq, Symbol, solve, Piecewise
from scipy.stats import norm
import numpy as np
x = Symbol('x')
eqn = Eq((x-0.2)/0.3, norm.cdf((np.log(100/110) + x**2/2)/x))
print(solve(eqn))
Output:
TypeError: cannot determine truth value of Relational
Symbolic setup
If you are looking for symbolic solutions, use symbolic functions: e.g., SymPy's log, not NumPy's. The normal CDF is also available from SymPy's stats module as cdf(Normal("x", 0, 1)). The correct SymPy setup would be this:
from sympy import Eq, Rational, Symbol, log
from sympy.stats import cdf, Normal

x = Symbol('x')
eqn = Eq((x - Rational('0.2'))/Rational('0.3'), cdf(Normal("x", 0, 1))(log(Rational(100, 110)) + x**2/2)/x)
Notice that I put Rational('0.2') where you had 0.2. The distinction between rationals and floats is important for symbolic math. The equation now looks good from the formal point of view:
Eq(10*x/3 - 2/3, (erf(sqrt(2)*(x**2/2 - log(11) + log(10))/2)/2 + 1/2)/x)
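To see why the rationals matter, here is a small sketch (not part of the original answer) comparing the float and Rational forms of the left-hand side:

from sympy import Rational, Symbol

x = Symbol('x')
print((x - 0.2) / 0.3)                          # inexact floats, roughly 3.33*x - 0.667
print((x - Rational('0.2')) / Rational('0.3'))  # exact: 10*x/3 - 2/3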
Unfortunately the equation also looks hopeless: there is no closed-form solution for something like this, a transcendental function equated to a polynomial. Naturally, solve(eqn) will fail. So all of the above demonstrates correct use of SymPy, but it doesn't change the fact that there is no symbolic solution.
Numeric solution
To solve this numerically, do the opposite: drop the SymPy parts and import fsolve from SciPy.
from scipy.stats import norm
from scipy.optimize import fsolve
import numpy as np
# Residual of the equation: a root makes both sides equal
f = lambda x: (x - 0.2)/0.3 - norm.cdf((np.log(100/110) + x**2/2)/x)
print(fsolve(f, 1))  # 1 is an arbitrary initial guess
The answer is 0.33622392.
I'm working on a project for my multivariable calculus class. My objective is to graph an arbitrary function f(x,y), use contour plots to graph the partial derivatives (df/dx, df/dy), and use a quiver plot for the gradient of the function, but I have an issue when graphing more complicated functions.
For function inputs like f(x,y) = (x+y)**2 the program works fine and outputs the graph, but when I use an input that requires a more complicated mathematical concept (e.g., f(x,y) = sin(x*y)), I get an error:
TypeError: only length-1 arrays can be converted to Python scalars.
There are a lot of cases of this on Stack Overflow, but they all seem to be isolated incidents involving numpy/sympy conflicts. In my program I rely on sympy to create arbitrary functions and on numpy for array computation, so I'm not sure how to get around this problem.
'''
Imports
'''
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import numpy as np
from sympy import *
x = Symbol('x')
y = Symbol('y')
lims = [-10, 10]
function = sin(x+y)
lam_function = lambdify((x,y),function)
fig = plt.figure()
ax = fig.gca(projection='3d')
gX, gY = np.meshgrid(np.arange(lims[0], lims[1], 0.05),
np.arange(lims[0], lims[1], 0.05))
z = lam_function(gX, gY)
plot = ax.plot_surface(gX, gY, z, cmap=plt.cm.jet, linewidth=0)
plt.colorbar(plot, cmap=plt.cm.jet)
plt.show()
The solution is easy: you need to specify which package lambdify should use:
lam_function = lambdify((x, y), function, "numpy")
This ensures that the resulting function is numpy-compatible. It works for basic functions like sin, cos, atan and log, but may fail for more complicated ones such as sympy.lowergamma.
As for why polynomials work without specifying "numpy": if no package is specified, sympy tries python-math, numpy and mpmath, in exactly that order. A python-math x is not really different from a numpy x, but python-math sin is very different, as it cannot handle numpy arrays.
One last thing: newer versions of sympy (1.1.1+) behave differently: sympy.lambdify uses numpy by default if it is installed and otherwise falls back to math, mpmath and sympy.
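A small sketch (not from the original answer) showing the difference between a math-backed and a numpy-backed function:

import numpy as np
from sympy import Symbol, sin, lambdify

x = Symbol('x')
expr = sin(x)

f_math = lambdify(x, expr, "math")    # wraps math.sin: scalars only
f_numpy = lambdify(x, expr, "numpy")  # wraps numpy.sin: works on arrays

xs = np.linspace(0, 1, 5)
print(f_numpy(xs))  # fine: elementwise evaluation
# f_math(xs)        # would raise the TypeError quoted in the question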
I have a lognorm distribution from scipy and its parameters are known.
import numpy as np
import scipy.stats

# sigma (shape) and log_mu (mean of the underlying normal) are known
log_norm_obj = scipy.stats.lognorm(s=sigma, scale=np.exp(log_mu))
I need to solve for a x which satisfies the following equation:
x = (1 - log_norm_obj.cdf(x)) / log_norm_obj.pdf(x)
How could I do this using numpy/scipy? Thanks!
You can use scipy.optimize. In scipy 0.11 and later, you can use the new functions minimize or minimize_scalar. Assuming your x is a scalar, here's some example code on how to do it:
from scipy.optimize import minimize_scalar

def f(x):
    # Squared residual of the equation, so its root becomes the minimum
    return ((1 - log_norm_obj.cdf(x)) / log_norm_obj.pdf(x) - x) ** 2

result = minimize_scalar(f)
print(result.x)  # this prints your result
The above uses Brent's method, the default. You can also use the golden-section method, or a bounded version of Brent's method. The latter can be useful if your function is only defined on a given domain or you want a solution in a specific interval. An example of this:
result = minimize_scalar(f, bounds=(0, 10.), method='bounded')
If your function takes a vector instead of a scalar, a similar approach can be taken using minimize. If your scipy is older than version 0.11, just use a flavour of fmin.
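For the vector case, a minimal sketch with assumed example parameters (not from the original answer), minimizing the summed squared residuals with minimize:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Assumed example parameters for illustration
dist = lognorm(s=0.5, scale=1.0)

def residual(x):
    # x is a 1-D array; return a single scalar objective
    r = (1 - dist.cdf(x)) / dist.pdf(x) - x
    return np.sum(r ** 2)

# Nelder-Mead is derivative-free and stays well-behaved near x > 0 here
result = minimize(residual, x0=np.ones(3), method='Nelder-Mead')
print(result.x)  # each component approaches the root of the scalar equation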