Using sympy solve and numpy.dot together doesn't work - Python

I'm trying to solve a long equation using sympy solve. This is a simplified version of the equation but the issue is the same.
This code works fine:
import numpy as np
import sympy as sy
coupons = [0.504452818664, 0.486892427806, 0.47758800215, 100.468050176]
rate = sy.Symbol('rate')
rate_final = (sy.solve(100 - (rate*coupons[0]+rate*coupons[1]+rate*coupons[2]+rate*coupons[3]),rate))
print(rate_final)
rate_final is [0.980998226948197].
But when I try to use numpy.dot inside the equation, it gives an empty list as a result.
import numpy as np
import sympy as sy
coupons = [0.504452818664, 0.486892427806, 0.47758800215, 100.468050176]
rate = sy.Symbol('rate')
rate_final = (sy.solve(100 - np.dot(rate,coupons[:]),rate))
print(rate_final)
rate_final is [].
Is there something wrong with my code, or does sympy.solve not work if np.dot() is inside the equation?

A dot product of the scalar rate and the vector coupons hardly makes sense: you only get an element-wise multiplication of rate with each element, i.e. an array of four separate expressions. sy.solve then treats that array as a system of four equations with no common solution, which is why it returns an empty list. However, you can do this:
import numpy as np
import sympy as sy
coupons = np.array([0.504452818664, 0.486892427806, 0.47758800215, 100.468050176])
rate = sy.Symbol('rate')
rate_final = sy.solve(100 - np.sum(rate * coupons), rate)
print(rate_final)
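If you want to avoid numpy's object arrays here altogether, the same equation can be built purely in sympy/Python; this is a minimal equivalent sketch, not part of the original answer:
import sympy as sy
coupons = [0.504452818664, 0.486892427806, 0.47758800215, 100.468050176]
rate = sy.Symbol('rate')
# sum() builds one sympy expression, so solve sees a single equation in one unknown
rate_final = sy.solve(100 - rate * sum(coupons), rate)
print(rate_final)  # [0.980998226948197]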

Related

How could the energy and cos function be in different shapes?

I am trying to write code that calculates an integral from zero to pi, but it gives an error that I do not understand how to fix. Thank you for your time.
import numpy as np
from math import pi,cos
vtheta=np.linspace(0.0,pi,1000)
def my_function(x):
    Energy = np.arange(2.1,300.1,0.1)
    return ((1.0)/(Energy-1+np.cos(x)))
print(my_function(vtheta).sum())
As is pointed out in the top comment:
Energy.shape == (2980,), but x.shape == (1000,)
so reduce the number of elements in Energy or increase np.cos(x).
Since Energy is just a numpy array, I reduced it to size 1000.
In order to fix this they need to be the same size, so this, for example, works:
import numpy as np
from math import pi,cos
vtheta=np.linspace(0.0,pi,1000)
def my_function(x):
    Energy = np.arange(2.1,102.1,0.1) #<-- changed 300.1 to 102.1
    return ((1.0)/(Energy-1+np.cos(x)))
print(my_function(vtheta).sum())
This is the result (with the above):
39.39900748229355
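If the goal is to keep the full energy grid rather than shrink it, an alternative (not what this answer did) is to broadcast the two arrays against each other and evaluate every (Energy, x) combination; a minimal sketch:
import numpy as np
from math import pi
vtheta = np.linspace(0.0, pi, 1000)
def my_function(x):
    Energy = np.arange(2.1, 300.1, 0.1)
    # Energy[:, None] has shape (2980, 1) and np.cos(x)[None, :] has shape (1, 1000),
    # so broadcasting produces a (2980, 1000) array of all combinations.
    return 1.0 / (Energy[:, None] - 1 + np.cos(x)[None, :])
print(my_function(vtheta).shape)  # (2980, 1000)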

Why can't sympy calculate this?

import sympy as sp
import matplotlib.pyplot as plt
# set
Cd = 0.25
g = 9.81
pf = 10**(-6) # Perturbation Fraction
t = 4
v = 36
xr = [int(input('initial guess : '))]
i = 0
Ea = 1
Es = 0.01
# define the function
def f(m):
    return sp.sqrt(g * m / Cd) * sp.tanh(sp.sqrt(g * Cd / m) * t) - v
# real root
x = sp.Symbol('x')
ans = sp.solve(f(x))  # find the root with sp.solve()
print(ans)
I want to get the real root of f(x), but this code has a problem at the line marked # real root and I can't figure it out.
You should not expect sympy to do miracles. Beyond relatively simple symbolic manipulations, sympy just gets stuck, sometimes even returning wrong answers. You have to turn to commercial tools such as Maple or Mathematica in order to crack tough nuts.
Your alternative in most practical cases is to use scipy and get a good numeric solution, which is what you want most of the time rather than a closed form solution.
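A minimal sketch of that scipy route for this particular equation, using scipy.optimize.brentq (a bracketing root finder; the bracket 1 to 1000 is an assumption borrowed from the plotting range in the next answer):
import numpy as np
from scipy.optimize import brentq
Cd, g, t, v = 0.25, 9.81, 4, 36
# Same equation as f(m) in the question, written with numpy so it accepts plain floats
def f_num(m):
    return np.sqrt(g * m / Cd) * np.tanh(np.sqrt(g * Cd / m) * t) - v
root = brentq(f_num, 1, 1000)
print(root)  # roughly 142.74, in line with the nsolve result below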
Sympy is a symbolic math library, trying to find exact symbolic solutions. As such, it doesn't work well with floats, as they are necessarily imprecise.
If your equations are fully numeric, it is usually recommended to employ numeric libraries such as numpy and scipy. If you're already doing symbolic manipulations (e.g. calculating differentials), sympy provides nsolve which calls a numeric solver. As such, it also needs a seed to start its numeric search. In your case it would look like:
# ....
xr = 1
ans = sp.nsolve(f(x), xr)
Result: 142.737633108449
Sympy also has a way to convert a sympy function to numpy format (in numpy things work much faster, but there are no symbolic expressions). sp.lambdify(x, f(x)) creates such a numpy function. Here is how it would look with your example:
import matplotlib.pyplot as plt
import numpy as np
f_np = sp.lambdify(x, f(x))
xi = np.linspace(1, 1000, 2000)
plt.plot(xi, f_np(xi))
plt.show()
In an interactive environment, you can add a question mark to display the numpy source of the function:
>>> f_np?
Signature: f_np(x)
Docstring:
Created with lambdify. Signature:
func(x)
Expression:
6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(1/x)) - 36
Source code:
def _lambdifygenerated(x):
    return (6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(x**(-1.0))) - 36)
If you look at your expression for f(x) you will see that it is highly non-linear (as also shown in the plot that JoahnC showed you):
6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(1/x)) - 36
SymPy cannot give an analytical solution for something that has no such solution. It can, however, give numerical approximations for univariate expressions. That's what nsolve is for. It needs an initial guess for the solution (as you anticipated by asking for xr).
>>> sp.nsolve(f(x), 100)
142.737633108449

Evaluate/calculate math equation with every entry in array

I have an array and an equation.
I want to plug in all array values into the equation and save it.
What I've tried until now:
import math
import numpy as np
Z_F0=376.73
Epsilon=3.66
wl_range = [np.arange(0.1, 50, 0.1)]
wl_array = np.array(wl_range)
multiplied_array = 6+(2*math.pi*6)*math.exp(-1*(30.666/wl_array)**0.7528)
print(multiplied_array)
Or I've tried multiplied_array = np.vectorize(6+(2*math.pi*6)...)
But I get the
only size-1 arrays can be converted to Python scalars
error.
You don't need math here: numpy has pi and exp. pi would have worked from math, as it is just a constant, but the argument of your exponential is an array, so you need numpy's exp for that.
There are situations where math is faster (when you don't vectorize), because numpy has more overhead checking the dimensions of its input.
import numpy as np
Z_F0=376.73
Epsilon=3.66
wl_range = [np.arange(0.1, 50, 0.1)]
wl_array = np.array(wl_range)  # the outer list only adds an extra dimension; np.arange alone would do
multiplied_array = 6+(2*np.pi*6)*np.exp(-1*(30.666/wl_array)**0.7528)
print(multiplied_array)
output:
[ 6. 6. 6. 6. 6.00000001 6.00000015
6.00000127 6.00000656 6.00002459 6.00007283 6.00018111 6.00039369
6.0007699 6.00138329 6.00231952 6.00367362 6.00554684 6.00804354
6.01126831 6.01532348 6.020307 6.02631085 6.03341981 6.04171066
6.0512517 6.06210249 6.07431393 6.08792837 6.10297998 6.11949516
6.13749297 6.15698571 6.17797939 6.20047433 6.22446568 6.24994393
6.27689541 6.30530278 6.33514546 6.36640006 6.39904075 6.43303963
6.46836702 6.50499181 6.54288169 6.58200341 6.62232297 6.66380586
6.70641722 6.75012196 6.79488494 6.84067111 6.88744554 6.93517359
6.98382097 7.03335377 7.08373859 7.13494255 7.18693331 7.23967918
7.29314908 7.34731259 7.40213999 7.45760222 7.51367097 7.5703186 ...
math.exp() only works with a scalar argument x. If you use numpy.exp() then the equation should work.
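For reference, np.vectorize is applied to a function, not to an expression, which is why the second attempt in the question failed. A minimal sketch of that route, keeping math.exp (the plain numpy version above is still the faster choice):
import math
import numpy as np
# Scalar version of the formula, evaluated with math.exp on one value at a time
def scalar_formula(wl):
    return 6 + (2 * math.pi * 6) * math.exp(-1 * (30.666 / wl) ** 0.7528)
vector_formula = np.vectorize(scalar_formula)
wl_array = np.arange(0.1, 50, 0.1)
print(vector_formula(wl_array))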

Adding powers of exponential equation in python

I'm new to Python, I just started two days back. I was trying to perform division on two exponential expressions, but the output doesn't show the exponents combined.
Following is my code.
from sympy import integrate, symbols, exp
from IPython.display import display, Markdown
x, y = symbols('x,y', positive=True)
fxy = y*exp(-y*(x+1))
fy= exp(-y)
sol = fxy/fy
sol
I guess you're looking for simplify:
>>> sympy.simplify(y * exp(-y*(x +1)) / exp(-y))
y*exp(-x*y)
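Applied to the code from the question, with simplify as the only addition, that looks like:
from sympy import symbols, exp, simplify
x, y = symbols('x,y', positive=True)
fxy = y * exp(-y * (x + 1))
fy = exp(-y)
sol = simplify(fxy / fy)
print(sol)  # y*exp(-x*y)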

How to use scipy.optimize.fmin well

Hello, I'm trying to use scipy.optimize.fmin to minimize a function, but things aren't going well: my computation seems to diverge instead of converging and I get an error. I tried to set a tolerance but it is not working.
Here is my code (Main program):
import sys,os
import numpy as np
from math import exp
import scipy
from scipy.optimize import fmin
from carlo import *
A=real()
x_r=0.11245
x_i=0.14587
#C=A.minim
part_real=0.532
part_imag=1.2
R_0 = fmin(A.minim,[part_real,part_imag],xtol=0.0001)
And the class:
import sys,os
import numpy as np
import random, math
import matplotlib.pyplot as plt
import cmath
#import pdb
#pdb.set_trace()
class real:
    def __init__(self):
        self.nmodes = 4
        self.L_ch = 1
        self.w = 2

    def minim(self, p):
        x_r = p[0]
        x_i = p[1]
        x = complex(x_r, x_i)
        self.a = complex(3, 4) * (3 * np.exp(1j * self.L_ch))
        self.T = np.array([[0.0, 2.0 * self.a], [(0.00645 + (x)**2), 4.3 * x**2]])
        self.Id = np.array([[1, 0], [0, 1]])
        self.disp = np.linalg.det(self.T - self.Id)
        print(self.disp)
        return self.disp
The error is:
(-2.16124712985-8.13819476595j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:438: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[0] = func(x0)
(-1.85751684826-8.95377303768j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:450: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[k + 1] = f
(-2.79592712985-8.13819476595j)
(-3.08484130014-7.36240080015j)
(-3.68788935914-6.62639114029j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:475: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[-1] = fxe
(-2.62046851255e+87-1.45013007728e+88j)
(-4.037931857e+87-2.2345341712e+88j)
(-7.45017628087e+87-4.12282179854e+88j)
(-1.14801242605e+88-6.35293780534e+88j)
(-2.11813751435e+88-1.17214723347e+89j)
Warning: Maximum number of function evaluations has been exceeded.
Actually I don't understand why the computation is diverging; maybe I have to use something other than fmin for minimizing?
Does someone have an idea?
Thank you very much.
Try to optimize the absolute value instead of the complex value. That gave a decent result for me.
f = lambda x: abs(A.minim(x))
R_0 = fmin(f, [part_real, part_imag], xtol=0.0001)
I guess fmin doesn't work well with complex values: it expects a real scalar objective, and casting your complex determinant to real discards the imaginary part, which is exactly what the ComplexWarning messages are telling you.
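For reference, a self-contained sketch of the same idea; since carlo is the questioner's own module, the class method is inlined here as a stand-alone function (with L_ch = 1 as in the question), and convergence still depends on the starting point:
import numpy as np
from scipy.optimize import fmin
# Stand-in for real.minim from the question
def dispersion(p):
    x = complex(p[0], p[1])
    a = complex(3, 4) * (3 * np.exp(1j * 1))
    T = np.array([[0.0, 2.0 * a], [0.00645 + x**2, 4.3 * x**2]])
    return np.linalg.det(T - np.eye(2))
# Minimize the magnitude of the complex determinant, not the complex value itself
objective = lambda p: abs(dispersion(p))
R_0 = fmin(objective, [0.532, 1.2], xtol=0.0001)
print(R_0)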
