Store values of integration from array, then use that new array - python

I'm new to Python so I'm really struggling with this. I want to define a function, have a certain calculation done to it for an array of different values, store those newly calculated values in a new array, and then use those new values in another calculation. My attempt is this:
import numpy as np
from scipy.integrate import quad
radii = np.arange(10) #array of radius values
def rho(r):
    return (r**2)
for i in range(len(radii)):
    def M[r]: #new array by integrating over values from 0 to radii
        scipy.integrate.quad(rho(r), 0, radii[i])
def P(r):
    return (5*M[r]) #make new array using values from M[r] calculated above

Alright, this script is a bit of a mess, so let's unpack it. I'd never used scipy.integrate.quad before, but I looked it up and tested it: it expects a callable and the two integration limits, not the result of calling the function. There are more efficient ways to do this, but in the interest of preserving the overall structure of your script, I'll keep its shape and just fix the bugs and errors. So, as I understand it, you want to write this:
import numpy as np
from scipy.integrate import quad

# Here's where we start to make changes. First, we define the function,
# taking two parameters: the integrand rho and the array radii.
# We don't need to declare data types, because Python is dynamically typed.
# It is good practice to define your functions before the main body of the program.
def M(rho, radii):
    # Preallocate the output array -- assigning to output[i] without
    # creating the array first would raise a NameError.
    output = np.zeros(len(radii))
    # The loop goes _inside_ the function, otherwise we're just redefining
    # the function M over and over again as something slightly different!
    for i in range(len(radii)):
        # Also note: since we imported quad from scipy.integrate, we only need to
        # reference quad; writing scipy.integrate.quad would raise a NameError here,
        # because we never imported scipy itself.
        # quad returns a (value, error_estimate) tuple, so take element [0].
        output[i] = quad(rho, 0, radii[i])[0]
        # We can also multiply by 5 in this function, so we really only need one.
        # Hell, we don't actually _need_ a function at all, unless you're planning
        # to reference it multiple times in other parts of a larger program.
        output[i] *= 5
    return output

# You have a choice between doing the maths _inside_ the main function or in a
# lambda function like this, which is a bit more compact than a 1-line normal
# function. Use like so:
rho = lambda r: r**2

# Beginning of program (this is my example of what calling the function with an array called radii might be)
radii = np.arange(10)
new_array = M(rho, radii)
If this solution is correct, please mark it as accepted.
I hope this helps!
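As a quick sanity check on the corrected approach (a sketch, self-contained): the analytic value of 5·∫₀ᴿ r² dr is 5R³/3, so the function's output can be compared against that directly.

```python
import numpy as np
from scipy.integrate import quad

def M(rho, radii):
    # Integrate rho from 0 to each radius, then scale by 5
    output = np.zeros(len(radii))
    for i in range(len(radii)):
        output[i] = 5 * quad(rho, 0, radii[i])[0]
    return output

rho = lambda r: r**2
radii = np.arange(10)
result = M(rho, radii)

# Analytic check: 5 * integral of r^2 from 0 to R equals 5*R**3/3
expected = 5 * radii.astype(float)**3 / 3
print(np.allclose(result, expected))  # True
```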

Related

Partial derivatives of a function found using interp2d in python/sagemath

I have a function of two variables, R(t,r), that has been constructed from a list of values for R, t, and r. This function cannot be written down in closed form; the values come from solving a differential equation (dR(t,r)/dt). I need to take derivatives of this function, in particular dR(t,r)/dr and d^2R(t,r)/drdt. I have tried using this answer to do so, but I cannot seem to get an answer that makes sense. (Note that all derivatives should be partials.) Any help would be appreciated.
Edit:
My current code is below. I understand getting anything to work without the `Rdata` file is impossible, but the file itself is 160x1001; really, any made-up data would do to get the rest to work. Z_t does not return values that look like the derivative of my original function based on what I know, so I know it is not differentiating my function as I'd expect.
If there are numerical routines for working with the array of data directly, I don't mind; I simply need some way of computing the derivatives.
import numpy as np
from scipy import interpolate
data = np.loadtxt('Rdata.txt')
rvals = np.linspace(1,160,160)
tvals = np.linspace(0,1000,1001)
f = interpolate.interp2d(tvals, rvals, data)
Z_t = interpolate.bisplev(tvals, rvals, f.tck, dx=0.8, dy=0)
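One way to get sensible partials is a sketch like the following, with made-up data standing in for `Rdata.txt` (here R(t,r) = t·r², chosen so the exact partials are known). RectBivariateSpline fits the gridded data directly, and its ev method takes derivative orders dx and dy, which must be non-negative integers (a fractional order like dx=0.8 is not a valid derivative order):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Made-up data standing in for 'Rdata.txt': R(t, r) = t * r**2,
# so the exact partials are dR/dr = 2*t*r and d2R/(dr dt) = 2*r.
tvals = np.linspace(0, 1000, 1001)
rvals = np.linspace(1, 160, 160)
data = tvals[:, None] * rvals[None, :] ** 2   # shape (1001, 160)

# Bicubic spline through the gridded values
spline = RectBivariateSpline(tvals, rvals, data)

# Derivative orders dx (in t) and dy (in r) must be integers
dR_dr = spline.ev(500.0, 80.0, dx=0, dy=1)     # exact value: 2*500*80 = 80000
d2R_drdt = spline.ev(500.0, 80.0, dx=1, dy=1)  # exact value: 2*80 = 160
```

Because the test surface is a low-degree polynomial, the spline reproduces it (and its derivatives) essentially exactly; on real data the derivatives are those of the fitted spline.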

Solving a complicated multivariable function with Python

I have a very complicated function of two variables, let's call them x and y. I want to create a Python program where the user can input two values, a and b, where a is the value of that complicated function of x and y, and b = math.atan(y/x). This program should then output the values of x and y.
I am clueless as to where to start. I have tried reducing the function to one of just a single variable, then generating many random values for x and picking the closest one, but I have learnt that this is horribly inefficient and only accurate to about 2 significant figures, which is pretty horrible. Is there a better way to do this? Many thanks!
(P.S. I did not reveal the function here due to copyright issues. For the sake of example, you can consider the function
a = 4*math.atan(math.sqrt(math.tan(x)*math.tan(y)/math.tan(x+y)))
where y = x * math.tan(b).)
Edit: After using the approach of the sympy library, it appears as though the program ignores my second equation (the complicated one). I suspect it is too complicated for sympy to handle. Thus, I am asking for another approach which does not utilise sympy.
You could use sympy and import the trigonometric functions from sympy.
from sympy.core.symbol import symbols
from sympy.solvers.solveset import nonlinsolve
from sympy import sqrt, tan, atan
y = symbols('y', real=True)
a,b = 4,5 # user-given values
eq2 = a - 4*atan(sqrt(tan(y/tan(b))*tan(y)/tan((y/tan(b))+y)))
S = nonlinsolve( [eq2], [y] )
print(S)
It'll return a ConditionSet object describing the conditions that possible solutions must satisfy.
If that wasn't clear enough, you can read the docs for nonlinsolve.
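Since the edit asks for an approach that does not use sympy: a purely numerical sketch with scipy.optimize.fsolve on the example function. The values b = 0.7 and the starting guess 0.45 are assumptions for illustration; with the real function you would pick the guess from the expected range of x, and a is computed here from a known x so the sketch is self-checking.

```python
import math
from scipy.optimize import fsolve

def f(x, b):
    # The example "complicated" function, with y = x * tan(b) substituted in
    y = x * math.tan(b)
    return 4 * math.atan(math.sqrt(math.tan(x) * math.tan(y) / math.tan(x + y)))

def residual(x, a, b):
    # fsolve passes x as a length-1 array; return f(x) - a, which is zero at the solution
    return f(x[0], b) - a

b = 0.7
a = f(0.5, b)            # pretend this is the user-supplied value of a
x_sol = fsolve(residual, 0.45, args=(a, b))[0]
y_sol = x_sol * math.tan(b)
```

The root found is the x for which the complicated function equals a; y then follows from b. Note the starting guess matters: the function is only real-valued where the argument of sqrt is non-negative, so the guess should sit inside that domain.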

How to cache the function that is returned by scipy interpolation

Trying to speed up a potential flow aerodynamic solver. Instead of calculating velocity at an arbitrary point using a relatively expensive formula I tried to precalculate a velocity field so that I could interpolate the values and (hopefully) speed up the code. Result was a slow-down due (I think) to the scipy.interpolate.RegularGridInterpolator method running on every call. How can I cache the function that is the result of this call? Everything I tried gets me hashing errors.
I have a method that implements the interpolator and a second 'factory' method to reduce the argument list so that it can be used in an ODE solver.
x_panels and y_panels are 1D arrays/tuples, vels is a 2D array/tuple, x and y are floats.
def _vol_vel_factory(x_panels, y_panels, vels):
    # Function factory method
    def _vol_vel(x, y, t=0):
        return _volume_velocity(x, y, x_panels, y_panels, vels)
    return _vol_vel

def _volume_velocity(x, y, x_panels, y_panels, vels):
    velfunc = sp_int.RegularGridInterpolator(
        (x_panels, y_panels), vels
    )
    return velfunc(np.array([x, y])).reshape(2)
By passing tuples instead of arrays as inputs I was able to get a bit further but converting the method output to a tuple did not make a difference; I still got the hashing error.
In any case, caching the result of the _volume_velocity method is not really what I want to do, I really want to somehow cache the result of _vol_vel_factory, whose result is a function. I am not sure if this is even a valid concept.
Calling a scipy.interpolate.RegularGridInterpolator returns a numpy array. That is not cacheable, because numpy arrays don't implement hash.
You can store another, hashable representation of the numpy array, cache that, and then convert it back to a numpy array afterwards. For details on how to do that, look at the following:
How to hash a large object (dataset) in Python?
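That said, the slowdown described in the question doesn't actually require any hashing: since _vol_vel_factory already closes over the grid, the expensive interpolator can be built once, at factory time, and merely evaluated on each call. A sketch (the grid and velocity field below are made up for illustration; vels has a trailing length-2 axis for the u and v components, matching the reshape(2) in the question):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def _vol_vel_factory(x_panels, y_panels, vels):
    # Build the (relatively expensive) interpolator ONCE; the closure
    # reuses it on every subsequent call
    velfunc = RegularGridInterpolator((x_panels, y_panels), vels)

    def _vol_vel(x, y, t=0):
        return velfunc(np.array([x, y])).reshape(2)

    return _vol_vel

# Illustrative velocity field: u = x + y, v = x - y on a 5x5 grid
x_panels = np.linspace(0.0, 1.0, 5)
y_panels = np.linspace(0.0, 1.0, 5)
X, Y = np.meshgrid(x_panels, y_panels, indexing="ij")
vels = np.stack([X + Y, X - Y], axis=-1)   # shape (5, 5, 2)

vol_vel = _vol_vel_factory(x_panels, y_panels, vels)
```

The returned _vol_vel has the same (x, y, t) signature as before, so it can still be handed to an ODE solver unchanged; no lru_cache or hashing is involved.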

Mapping function to numpy array, varying a parameter

First, let me show you the codez:
a = array([...])
for n in range(10000):
    func_curry = functools.partial(func, y=n)
    result = array(map(func_curry, a))
    do_something_else(result)
    ...
What I'm doing here is trying to apply func to an array, changing the value of func's second parameter on every iteration. This is SLOOOOW (creating a new function every iteration surely doesn't help), and I also feel I've missed the pythonic way of doing it. Any suggestions?
Could a solution that gives me a 2D array be a good idea? I don't know, but maybe it is.
Answers to possible questions:
Yes, this is (using a broad definition), an optimization problem (do_something_else() hides this)
No, scipy.optimize hasn't worked because I'm dealing with boolean values and it never seems to converge.
Did you try numpy.vectorize?
...
vfunc_curry = vectorize(functools.partial(func, y=n))
result = vfunc_curry(a)
...
If a is of significant size the bottleneck should not be the creation of the function, but the duplication of the array.
Can you rewrite the function? If possible, you should write the function to take two numpy arrays a and numpy.arange(n). You may need to reshape to get the arrays to line up for broadcasting.
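A sketch of that broadcasting approach (func here is a made-up elementwise example; the real function just needs to accept arrays):

```python
import numpy as np

# Hypothetical elementwise func(a, y): any expression built from
# numpy operations broadcasts automatically
def func(a, y):
    return a * y > 50

a = np.arange(5.0)        # stand-in for the original array
n = np.arange(10000)      # every value of the varying parameter at once

# result[i, j] == func(a[j], n[i]) -- one 2D array, no Python-level loop.
# The added axes (None) make the shapes (1, 5) and (10000, 1) broadcast
# to (10000, 5).
result = func(a[None, :], n[:, None])
```

Each row of result corresponds to one value of the parameter, which is exactly the "2D array" idea floated in the question, computed in a single vectorised call.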

scipy 'Minimize the sum of squares of a set of equations'

I face a problem with scipy's 'leastsq' optimisation routine. If I execute the following program it says
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes 'index out of range' for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *

def func(apar):
    apar = numpy.asarray(apar)
    x = apar[0]
    y = apar[1]
    eqn = abs(x-y)
    return eqn

Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print 'optimized parameters: ', x
print '******* The End ******'
I don't know what the problem is with my func / optimize.leastsq() call. Please help me.
leastsq works with vectors, so the residual function func needs to return a vector of length at least two (at least as many residuals as there are parameters). So if you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for a scalars are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
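Putting the two suggestions together, a runnable sketch (in Python 3; the padded residual and the fmin call are exactly the fixes described above):

```python
import numpy as np
from scipy import optimize

def func(apar):
    x, y = np.asarray(apar)
    return abs(x - y)

# leastsq needs at least as many residuals as parameters, so pad with a zero
def func_vec(apar):
    return [func(apar), 0.0]

init = np.asarray([20.0, 10.0])
sol, ier = optimize.leastsq(func_vec, init)

# For a scalar objective, fmin minimises func directly
xmin = optimize.fmin(func, init, disp=False)
```

Both routes drive |x - y| to (approximately) zero; they just land on different points of the x = y line, since any such point is a minimiser.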
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
works correctly!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.
