I need help making a program that calculates the Gaussian function f(x) = 1/(sqrt(2*pi)*s) * exp(-0.5*((x-m)/s)**2) when m=0, s=2, and x=1.
Would it be just:
def Gaussian(m, s, x):
    return 1/(sqrt(2*pi)*s)*exp(-0.5*((x-m)/s)**2)

print(Gaussian(0, 2, 1))
I think you are missing the imports of sqrt, exp, and pi from the math module, unless you just didn't show them in your code.
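For completeness, a self-contained version with those imports added (the lowercase function name is just a stylistic choice):

```python
from math import sqrt, exp, pi

def gaussian(m, s, x):
    # normal probability density with mean m and standard deviation s
    return 1 / (sqrt(2 * pi) * s) * exp(-0.5 * ((x - m) / s) ** 2)

print(gaussian(0, 2, 1))  # ≈ 0.176
```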
I'm trying to calculate the gamma function of a number, and the result goes to infinity. When I use Maple (for instance) it returns the correct answer (2.57e1133; yes, it's huge). I tried using Decimal, but with no success. Is there a solution? Thanks in advance.
The code:
import scipy as sp
from scipy.special import gamma
from decimal import Decimal
from scipy import special

def teste(k):
    Bk2 = Decimal(gamma((1/(2*k))+(3/4)))
    return Bk2

print(teste(0.001))
Result
Infinity
I used gammaln to prevent the overflow, cast the result to Decimal, and lastly applied the exponential function to recover the intended result.
import scipy as sp
from scipy.special import gammaln
from decimal import Decimal
from scipy import special
from numpy import exp

def teste(k):
    # gammaln returns log(gamma(...)), which stays a modest float;
    # exponentiating a Decimal then recovers the huge intended value
    Bk2 = exp(Decimal(gammaln((1/(2*k))+(3/4))))
    return Bk2

print(teste(0.001))
I am implementing Andrew Ng's Machine Learning course in Python, but I am stuck because scipy's optimize functions keep giving me a hard time by not working or raising dimension errors.
The goal is to find the minimum of the cost function, a scalar function that takes theta (dimension (1,401)), X (dimension (5000,401)), and y (dimension (5000,1)) as inputs. I have defined this cost function and its gradient with respect to the parameters. When running one of the optimize functions (I have tried fmin_tnc, minimize, Nelder-Mead, and others, all without success), they either run for ages or keep giving me errors saying that the array dimension is wrong, or that they found a division by 0: errors that I am not able to spot.
The weirdest thing is that this problem first popped up when I was doing exercise 2 on logistic regression, and then magically disappeared without me changing anything. Now, implementing multi-class logistic regression, it has appeared again, and it won't go away even though I have literally copied and pasted the code from exercise 2!
The code is the following:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.io import loadmat
import scipy.misc
import matplotlib.cm as cm
from scipy.optimize import minimize, fmin_tnc
import random

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def J(theta, X, y):
    theta_t = np.transpose(theta)
    prod = np.matmul(X, theta_t)
    sigm = sigmoid(prod)
    vec = y * np.log(sigm) + (1 - y) * np.log(1 - sigm)
    return -np.sum(vec) / len(y)

def grad(theta, X, y):
    theta_t = np.transpose(theta)
    prod = np.matmul(X, theta_t)
    sigm = sigmoid(prod)
    one = sigm - y
    return np.matmul(np.transpose(one), X) / len(y)

data = loadmat('/home/marco/Desktop/MLang/mlex3/ex3/ex3data1.mat')
X, y = data['X'], data['y']
X = np.column_stack((np.ones(len(X[:, 0])), X))
initial_theta = np.zeros((1, len(X[0, :])))

res = fmin_tnc(func=J, x0=initial_theta.flatten(), args=(X, y.flatten()), fprime=grad)
theta_opt = res[0]
Instead of returning the value of theta that minimizes the function as theta_opt, it says:
/home/marco/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:8: RuntimeWarning: divide by zero encountered in log
I have no clue where this divide by zero occurs, given that there is literally no division in the whole code, except for the division by len(y), which is 5000, and the division in the sigmoid function, 1/(1+exp(-z)), whose denominator can never be 0!
Any suggestions?
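For what it's worth, the warning can be reproduced in isolation: in float64 arithmetic the sigmoid saturates to exactly 1.0 for moderately large inputs, so the log(1 - sigm) term in the cost hits log(0) even though no explicit division is involved.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# exp(-40) is about 4e-18, far below float64 resolution around 1.0,
# so 1 + exp(-40) rounds to exactly 1.0
s = sigmoid(40.0)
print(s == 1.0)       # True
print(np.log(1 - s))  # -inf, with "divide by zero encountered in log"
```

Clipping the sigmoid output away from exactly 0 and 1 (for example with np.clip) before taking logs is one common way to avoid this.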
I want to minimize a function in order to obtain the values of some parameters: a, e, I, Omega, om, tp.
I use this module: docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html.
My function has 13 parameters.
I imported it with from scipy.optimize import minimize, then tried to minimize my function, and an error occurs.
Would someone help me solve this problem?
PS: I started Python one week ago, which may explain the syntax of the program; however, I'm willing to improve.
from numpy import *
import numpy as np
import scipy as sp
from scipy.optimize import minimize
import matplotlib.pyplot as plt
from pylab import *
from os import chdir

chdir("/Users/benjaminjaillant/Desktop")

def Chi_VLT(a, e, I, tp, Omega, om, Mbh, R0, Vr_bh, alpha_bh, V_alp_bh, delta_bh, V_del_bh):
    return (sum(((Vr_etoile(t_vr_VLT*365*24*3600, a, e, I, tp, om, Mbh, Vr_bh)/1000) - vr_VLT)**2 / vr_error_VLT**2)
            + sum(((alpha_etoile_IR(t_orbit_VLT*365*24*3600, a, e, I, tp, Omega, om, Mbh, alpha_bh, V_alp_bh, R0)*206264806.246) - Ra_VLT)**2 / Ra_error_VLT**2)
            + sum(((delta_etoile_IR(t_orbit_VLT*365*24*3600, a, e, I, tp, Omega, om, Mbh, delta_bh, V_del_bh, R0)*206264806.246) - Dec_VLT)**2 / Dec_error_VLT**2))

x0 = [1.5e14, 0.8, 2.5, 63.10e9, 4, 1, 8.5e36, 2.5e20, 2000, 1.3e-8, -10e-18, 2e-9, 1.5e-17]
res = minimize(Chi_VLT, x0, method='nelder-mead', options={'xtol': 1e-4, 'maxiter': 50, 'disp': True})

print(res.message)
print(res.x)
I guess you are mixing up the whole thing here.
scipy.optimize.minimize takes two required positional arguments, fun and x0, and x0 needs to be an ndarray.
Your fun Chi_VLT requires 13 arguments, but minimize will call it with a single array; any additional fixed arguments have to be passed via args=(a, tuple, of, extra, items).
Only then will you be able to minimize your fun.
The minimize routine expects an ndarray, say guess, as the initial function argument, and accepts an additional tuple args which can hold fixed coefficients of your cost function. If you rewrite Chi_VLT to take an ndarray as its first argument, with any remaining fixed arguments following, it should work:
res = minimize(Chi_VLT, guess, args=extra_args, ...)
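A minimal sketch of that rewrite, with a toy quadratic standing in for the real chi-squared (the actual Chi_VLT body, data arrays, and helper functions from the question are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params):
    # unpack the single ndarray that minimize passes in;
    # the names mirror the 13 parameters from the question
    a, e, I, tp, Omega, om, Mbh, R0, Vr_bh, alpha_bh, V_alp_bh, delta_bh, V_del_bh = params
    # toy objective (minimum at all ones) so the sketch actually runs
    return np.sum((params - 1.0) ** 2)

guess = np.full(13, 0.5)
res = minimize(chi2, guess, method='Nelder-Mead',
               options={'xatol': 1e-6, 'maxiter': 20000, 'disp': False})
print(res.x)  # close to a vector of ones
```

Note that newer scipy versions spell the Nelder-Mead tolerance option xatol rather than xtol.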
I am new to Python, and I keep getting the following error
..., line 27, in <module>
eq=(p**2+2)/p/sqrt(p**2+4)
AttributeError: sqrt
I tried to add math.sqrt or numpy.sqrt, but neither of these works. Does anyone know where I'm going wrong?
My code is:
from numpy import *
from matplotlib import *
from pylab import *
from sympy import Symbol
from sympy.solvers import solve
p=Symbol('p')
eq=(p**2+2)/p/sqrt(p**2+4)
solve(eq,1.34,set=True)
sqrt is defined in the math module; import it this way, and that should remove the error:
from math import sqrt
You are using a sympy Symbol: either you wanted a numerical sqrt (in which case use numpy.sqrt on an actual number), or you wanted a symbolic sqrt (in which case use sympy.sqrt). Each of the star imports replaces the definition of sqrt in the current namespace with the one from math, sympy, or numpy, so it's best to be explicit and not use "import *".
I suspect from the line which follows, you want sympy.sqrt here.
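Assuming the symbolic route is the intended one, a sketch that keeps sqrt unambiguous by qualifying it explicitly:

```python
import sympy
from sympy import Symbol

p = Symbol('p')
# sympy.sqrt builds a symbolic square root, so no AttributeError is raised
eq = (p**2 + 2) / p / sympy.sqrt(p**2 + 4)
print(eq)
```

From there, solve(sympy.Eq(eq, 1.34), p) would be the usual way to look for values of p where the expression equals 1.34, if that was the intent of the original solve call.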
In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?
>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter; they are all the same value.
The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.
One thing to note is that not all libraries use the same meaning for pi, of course, so it never hurts to know which one you're using. For example, the symbolic math library SymPy's representation of pi is not the same as that of math and numpy:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False
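The symbolic pi still agrees numerically once it is evaluated; only the object comparison differs:

```python
import math
import sympy

# sympy.pi is a symbolic constant, not a float, so == against math.pi is False,
# but converting it to a float recovers the same 64-bit value
print(sympy.pi == math.pi)         # False
print(float(sympy.pi) == math.pi)  # True
```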
If we look at its source code, scipy.pi is precisely math.pi; in fact, it's defined as
import math as _math
pi = _math.pi
In their source code, math.pi is defined as 3.14159265358979323846 and numpy.pi as 3.141592653589793238462643383279502884; both are well beyond the roughly 15-digit precision of a Python float, so it doesn't matter which one you use.
That said, if you're not already using numpy or scipy, importing them just for np.pi or scipy.pi would add an unnecessary dependency, while math is part of the Python standard library, so there are no dependency issues. For example, for pi in TensorFlow code in Python, one could use tf.constant(math.pi).