Float is not callable - python

I'm getting a "'float' object is not callable" error when trying to calculate the force of gravity on a mass.
def grav():
    mass = float(input('Enter a mass [kg]: '))
    dist = float(input('Enter a distance from the surface of earth [m]: '))
    rad = 6.3781*10**6
    me = 5.97219*10**24
    gr = 6.67300*10**-11
    f = ((gr((me)*(mass)))/((rad)+(dist)**2))
    return f

grav()
It's giving the float error on the line where everything is being calculated.

gr((me)*(mass))
The above tries to call gr like it's a function. It's just a constant. There's no need for all of those parentheses, anyway.
gr * me * mass / (...)
You do have a bug in the denominator though. You need to divide by (rad+dist)**2, not rad + (dist**2) (which you're doing now).
Altogether
f = gr * me * mass / (rad + dist)**2
That's all you need.
And if I can make a suggestion, make your variable names more self-documenting. I can understand your code because I recognize the formula, but not everyone is going to have that advantage.
force = G * MASS_OF_EARTH * mass / (EARTH_RADIUS + distance_from_earth)**2
Easier to read, no? You don't have to be this verbose, but too much is better than too little. It can be tempting to over-abbreviate in scientific computing but I really advise against it. Be terse when necessary but always be self-documenting.
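To put it all together, here is a minimal corrected sketch of the function using those self-documenting names (the constant values are taken from the question's code):

G = 6.67300e-11              # gravitational constant [m^3 kg^-1 s^-2]
MASS_OF_EARTH = 5.97219e24   # [kg]
EARTH_RADIUS = 6.3781e6      # [m]

def grav():
    mass = float(input('Enter a mass [kg]: '))
    distance_from_earth = float(input('Enter a distance from the surface of earth [m]: '))
    # Newton's law of gravitation: F = G * m1 * m2 / r**2
    return G * MASS_OF_EARTH * mass / (EARTH_RADIUS + distance_from_earth)**2

print(grav())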

Change line 7 to
f = ((gr*((me)*(mass)))/(((rad)+(dist))**2))
or you can simplify your formula with
f = gr*(me*mass)/(rad+dist)**2
it will work.


Define a function that is a derivative of a function

I was wondering if there is a way to define a function that is the derivative of another function. I'm new to Python, so I don't know much; I tried looking up similar problems, but nothing has worked so far. This is what I have for my code right now.
import sympy as sp
import math

x = sp.Symbol('x')
W = 15            # kN/m
E = 70            # GPa
I = 52.9*10**(-6) # m**4
L = 3             # m
e = 0.01
xi = 1.8
y = 9

def f(x):
    return ( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f)

print(derv(f,x))
Also, I don't understand what x = sp.Symbol('x') does, so if someone could explain that, that would be awesome.
Any help is appreciated.
You are conflating two different things: Python functions like f, and math functions, which you can express with sympy, like y = π * x/3. f is a Python function that returns a sympy expression. sympy lets you stay in the world of symbolic math functions by defining variables like x = sp.Symbol('x'), so calling f() produces a symbolic expression.
You can use sympy to find the derivative of the symbolic expression returned by f(), but you need to define it with the sympy version of the cos() function (and sp.pi if you want to keep π symbolic).
For example:
import sympy as sp

x = sp.Symbol('x')
W = 15            # kN/m
E = 70            # GPa
I = 52.9*10**(-6) # m**4
L = 3             # m
e = 0.01
xi = 1.8
y = 9

def f(x):
    return ( ( y*3*(sp.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*sp.cos(sp.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (sp.pi**3)*(x**3) ) )/(3*L*(sp.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f(x))  # pass the result of f(), which is a sympy expression

derv(f,x)
You've programmed the function; it appears to be a simple function of two independent variables, x and y.
It could be that x = sp.Symbol('x') is how SymPy defines the independent variable x. I don't know if you need another one for y.
You know enough about calculus to know that you need a derivative. Do you know how to differentiate a function of a single independent variable? It helps to know the answer before you start coding.
( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)
Looks simple.
There's only one term with y in it. The partial derivative w.r.t. y leaves you with 3*(math.pi**4)*E*I/(W*L).
There's only one term with C*x**3 in it. That's easy to differentiate: 3*C*x**2.
What's so hard? What's the problem?
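As a quick sketch of how sympy handles per-variable derivatives (note: here y is declared as a second Symbol, which is an assumption; the question's code treats it as a plain constant):

import sympy as sp

x, y = sp.symbols('x y')
expr = y*x**3

print(sp.diff(expr, x))  # 3*x**2*y  (partial derivative w.r.t. x)
print(sp.diff(expr, y))  # x**3      (partial derivative w.r.t. y)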
In traditional programming, each function you write is translated to a series of commands that are sent to the CPU, and the result of the calculation is returned. Symbolic manipulation, like what we humans do with algebra and calculus, therefore doesn't mean anything to the computer. Sympy gets around this by overriding Python's normal arithmetic operators, allowing you to generate algebraic expressions that can be manipulated much the way we humans do math. That's what sp.Symbol('x') is doing: providing you with a symbolic variable you can work with (you're also naming it inside sympy).
If you want to evaluate your derivative, simply call evalf with the numerical value you want to assign to x.
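For example, a minimal sketch evaluating the derivative at an arbitrary point, reusing the f and x from the code above:

dfdx = sp.diff(f(x), x)           # symbolic derivative with respect to x
print(dfdx.evalf(subs={x: 1.8}))  # substitute x = 1.8 and evaluate numerically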

Two consecutive parentheses after name in Matlab

Please excuse me for this amateurish question,
I'm trying to convert some Matlab code to Python. I don't have any experience with Matlab, but so far I have been able to deduce what the code does, and the conversion has been successful.
But now I am stuck on these lines of code:
SERAstart = []
for n=1:length(bet)
    alf = -bet(n) * 1e-2;
    SERAstart = [SERAstart(alf - i * bet(n))(alf + i * bet(n))];
end
What I don't understand is this line:
SERAstart(alf - i * bet(n))(alf + i * bet(n))
There are two consecutive parentheses after "SERAstart". Are they nested array indexing? Indexing followed by a function call on the result? Or a function call followed by another call on the returned value?
Please help me understand what is going on here so that I can convert it to Python.
I realize that it might not be possible to say definitively what it does just from the piece of code I have posted, but if you can guide me toward figuring it out (without the use of Matlab), I would also be very grateful.
Thanks a lot for any help!
EDIT:
This is my own attempt at a conversion, but I don't know if it makes sense:
# SERAstart = [];
SERAstart = [[]]
# for n=1:length(bet)
for n in range(len(bet)):
    # alf = -bet(n) * 1e-2;
    alf = -bet[n] * 1e-2
    # SERAstart = [SERAstart(alf - i * bet(n))(alf + i * bet(n))];
    SERAstart = [SERAstart[alf - 1j * bet[n]][alf + 1j * bet[n]]]
# end
EDIT2:
I just noticed this line in the documentation relating to SERAstart:
% SERAstart(N,1) : vector of starting poles [rad/sec]
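Given that documentation line (SERAstart is a vector of starting poles), the bracketed expression is most likely Matlab array concatenation rather than indexing: in Matlab, [A (expr1) (expr2)] builds a new vector out of A and the two parenthesized values, so each iteration appends a complex-conjugate pair of poles. Under that reading (an assumption based on the comment above), a Python sketch of the conversion would be:

SERAstart = []
for n in range(len(bet)):
    alf = -bet[n] * 1e-2
    # append a complex-conjugate pair of starting poles
    SERAstart.extend([alf - 1j * bet[n], alf + 1j * bet[n]])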

Python curve fit with change point

As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me on the mathematics forum of Stack Exchange.
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time translating the R code, as I have never seen any of it before. I have written the function to return the sum of squares, but I'm stuck on how to use a function similar to the optim function. Also, I don't really like the guesswork in the initial values; I would prefer to run and re-run a type of optim function until I get the wanted result, because my needs for a nearly perfect curve fit are really high.
def model (par,x):
    n = len(x)
    res = []
    for i in range(1,n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if(x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares (par, x, y):
    ss = sum((y-model(par,x))**2)
    return ss
And this is the sum of squares
But I have no idea how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
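For reference, R's optim (which defaults to the Nelder-Mead method) maps fairly directly onto scipy.optimize.minimize; a hedged sketch of the equivalent call, assuming x, y, and the sum_squares function above are already defined:

import numpy as np
from scipy.optimize import minimize

par0 = np.array([7, -1, -395, 70, -2.3, 10])
# minimize the sum of squares over the parameters,
# like optim(par=par0, fn=sqerror, x=x, y=y)$par
sol = minimize(sum_squares, par0, args=(x, y), method='Nelder-Mead').x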
I wrote an open source Python package (BSD license) that has a genetic algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver, it functions similarly to what you describe in your question. The github URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
Seems like I have been able to fix the problem.
import numpy as np

def model (par,x):
    n = len(x)
    res = np.array([])
    for i in range(0,n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if(x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res

def sum_squares (par, x, y):
    ss = sum((y-model(par,x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follows:
from scipy.optimize import least_squares

parameter = np.array([0.0, -8.0, 0.0018, 0.0018, 0, 200])
res = least_squares(sum_squares, parameter, bounds=(-360,360), args=(x1,y1), verbose=1)
The only problem is that it doesn't produce the results I'm looking for. That is mainly because my x values are in [0, 360] while the y values only vary by about 0.2, so it's a hard nut to crack for this function, and the fit it produces is poor.
I think that the range of the x values [0, 360] and of the y values (which you say vary by only ~0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy / scipy, you would definitely want to avoid looping over the values of x, and instead do something more like:
def model(par, x):
    res = par[2] + par[3]*x + par[4]*x**2
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    mask = x <= par[5]  # points before the breakpoint
    res[mask] = A0 + par[0]*x[mask] + par[1]*x[mask]**2
    return res
It's not clear to me that that form is really what you want: why should A0 (a value independent of x added to a portion of the model) be so complicated and interdependent on the other parameters?
More importantly, your sum_of_squares() function is actually not what least_squares() wants: you should return the residual array, you should not do the sum of squares yourself. So, that should be
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form. This is going to be very hard for these optimization routines to find. These routines generally make a very small change to each parameter value in order to estimate the derivative of the residual array with respect to that variable, which tells them how to change that variable. With a parameter that effectively acts as a discrete switch, a small change in its value will have no effect at all, and the algorithm will not be able to determine how to update it. With some of the scipy.optimize algorithms (notably leastsq) you can specify the scale of the relative change to make; with leastsq that is called epsfcn. You may need to set this as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately, it cannot be set per variable, only per fit. You might need to experiment with this and the other options to least_squares or leastsq.
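A hedged sketch of that suggestion, reusing the residual-style sum_of_squares above and the parameter and x1, y1 arrays from earlier in the thread:

from scipy.optimize import leastsq

# epsfcn sets the step used for the finite-difference Jacobian; a larger value
# lets the solver "feel" the effect of moving the breakpoint par[5]
best_par, ier = leastsq(sum_of_squares, parameter, args=(x1, y1), epsfcn=0.3)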

Implied volatility calculation in Python

With the comments from the answer, I rewrote the code below (math.log1p(x) -> math.log(x)), which now should work and give a good approximation of the volatility.
I am trying to create a short piece of code to calculate the implied volatility of a European call option. I wrote the code below:
from scipy.stats import norm
import math

norm.cdf(1.96)  # quick check that scipy's normal CDF is available

#c_p - Call(+1) or Put(-1) option
#P - Price of option
#S - Stock price
#E - Exercise (strike) price
#T - Time to expiration
#r - Risk-free rate
#C = S*N(d_1) - E*e^{-rT}*N(d_2)
def implied_volatility(Price,Stock,Exercise,Time,Rf):
    P = float(Price)
    S = float(Stock)
    E = float(Exercise)
    T = float(Time)
    r = float(Rf)
    sigma = 0.01
    print(P, S, E, T, r)
    while sigma < 1:
        d_1 = (math.log(S/E)+(r+(sigma**2)/2)*T)/(sigma*math.sqrt(T))
        d_2 = (math.log(S/E)+(r-(sigma**2)/2)*T)/(sigma*math.sqrt(T))
        P_implied = S*norm.cdf(d_1) - E*math.exp(-r*T)*norm.cdf(d_2)
        if P - P_implied < 0.001:
            return sigma
        sigma += 0.001
    return "could not find the right volatility"

print(implied_volatility(15,100,100,1,0.05))
This yields a volatility of 0.595, but it should be around 0.3203. That is a huge difference...
I know this is not a fast method by any means; I just want to demonstrate how the principle works, but I am not able to calculate a good approximation.
For some reason, when I call the function, it gives me a really bad approximation of the actual implied volatility, which I calculated using a Matlab program and the following webpage: Implied Volatility. Could anyone please help me figure out where I made the mistake?
There are two problems I see, neither of which is directly Python related:
You are using log1p(x), which is the natural logarithm of 1+x, while you actually want log(x), which is the natural logarithm of x (cf. Wikipedia).
An option price of 100 is way too high considering the other parameters. Try to calculate the implied volatility for a price of 10 - it should be about 0.18, both by your program and by the calculator you linked.
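For example, a quick check reusing the question's function as written:

print(implied_volatility(10, 100, 100, 1, 0.05))  # should return a sigma of roughly 0.18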
In Python 2, the result of 5 / 2 is 2, because integer division floors. To fix that, make every number a float: in your implied_volatility function, change P = Price to P = float(Price), S = Stock to S = float(Stock), and so on.
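Alternatively, a standard Python 2 idiom is a single import at the very top of the module, which makes / always perform true division:

from __future__ import division

print(5 / 2)  # 2.5 instead of 2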

Is vectorizing this triple for loop in Python / Numpy possible?

I am trying to speed up my code, which currently takes a little over an hour to run in Python / Numpy. The majority of the computation time occurs in the function pasted below.
I'm trying to vectorize the update of Z, but I'm finding it rather difficult given the triple for loop. Could I possibly use the numpy.diff function somewhere? Take a look:
def MyFESolver(KK,D,r,Z):
    global tdim
    global xdim
    global q1
    global q2
    for k in range(1,tdim):
        for i in range(1,xdim-1):
            for j in range(1,xdim-1):
                Z[k,i,j] = (Z[k-1,i,j]
                    + r*q1*Z[k-1,i,j]*(KK-Z[k-1,i,j])
                    + D*q2*(Z[k-1,i-1,j] - 4*Z[k-1,i,j] + Z[k-1,i+1,j]
                            + Z[k-1,i,j-1] + Z[k-1,i,j+1]))
    return Z
tdim = 75
xdim = 25
I agree, it's tricky: the BCs on all four sides ruin the simple structure of the stiffness matrix. You can get rid of the space loops as follows:
from pylab import *

tdim = 3; xdim = 4; r = 1.0; q1, q2 = .05, .05; KK = 1.0; D = .5  # random values
Z = ones((tdim, xdim, xdim))

# iterate in time; the centre term and the D*q2 factor are written to match
# the original update Z[k] = Z[k-1] + r*q1*Z*(KK-Z) + D*q2*(neighbour sum - 4*Z)
for k in range(1,tdim):
    Z_prev = Z[k-1,:,:]
    Z_up = Z_prev[1:-1,2:]
    Z_down = Z_prev[1:-1,:-2]
    Z_left = Z_prev[:-2,1:-1]
    Z_right = Z_prev[2:,1:-1]
    centre_term = (q1*r*(KK - Z_prev[1:-1,1:-1]) - 4*D*q2) * Z_prev[1:-1,1:-1]
    Z[k,1:-1,1:-1] = Z_prev[1:-1,1:-1] + centre_term + D*q2*(Z_up+Z_left+Z_right+Z_down)
But I don't think you can get rid of the time loop...
One caveat on memory use: an expression like
Z_up = Z_prev[1:-1,2:]
is basic slicing, which in numpy returns a view rather than a copy, so there is no extra copying overhead to eliminate there.
Finally, I agree with the rest of the answerers - from experience, loops like this are better written in C and then wrapped for numpy. But the above should still be faster than the original...
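A quick sanity check that the sliced update reproduces the original triple loop (small, made-up dimensions; both arrays start from ones as in the snippet above):

import numpy as np

tdim, xdim = 4, 6
r, q1, q2, KK, D = 1.0, .05, .05, 1.0, .5
Z_loop = np.ones((tdim, xdim, xdim))
Z_vec = Z_loop.copy()

# reference: the original triple loop
for k in range(1, tdim):
    for i in range(1, xdim-1):
        for j in range(1, xdim-1):
            Z_loop[k,i,j] = (Z_loop[k-1,i,j]
                + r*q1*Z_loop[k-1,i,j]*(KK-Z_loop[k-1,i,j])
                + D*q2*(Z_loop[k-1,i-1,j] - 4*Z_loop[k-1,i,j] + Z_loop[k-1,i+1,j]
                        + Z_loop[k-1,i,j-1] + Z_loop[k-1,i,j+1]))

# the sliced version from above
for k in range(1, tdim):
    Zp = Z_vec[k-1]
    centre = (q1*r*(KK - Zp[1:-1,1:-1]) - 4*D*q2) * Zp[1:-1,1:-1]
    Z_vec[k,1:-1,1:-1] = (Zp[1:-1,1:-1] + centre
        + D*q2*(Zp[1:-1,2:] + Zp[1:-1,:-2] + Zp[:-2,1:-1] + Zp[2:,1:-1]))

print(np.allclose(Z_loop, Z_vec))  # True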
This looks like an ideal case for Cython. I'd suggest writing that function in Cython, it'll probably be hundreds of times faster.
