Maths ratios in Python

I wrote this a year ago, and while it serves its purpose, I wondered if someone much cleverer than me could suggest ways to improve its efficiency.
def tempcolor(mintemp=0, maxtemp=32, mincolor=44000, maxcolor=3200, ctemp=10, c=0):
    tempdiff = (mincolor - maxcolor) / (maxtemp - mintemp)
    ccolor = (ctemp - mintemp) * tempdiff
    ctouse = mincolor - ccolor
    #print ctouse
    return ctouse
There's a range of numbers (mintemp to maxtemp). When the function is called, we work out where ctemp falls in that range, then apply the same ratio to the other range of numbers (mincolor to maxcolor).
I'm using it in another script, and just wondered if anyone had any advice on making it neater. Or more accurate!
Thanks
Will

I am going to assume that you rarely or never change the given values for mintemp, maxtemp, mincolor, maxcolor.
The only efficiency improvement I can see would be to precalculate the ratio - something like
def make_linear_interpolator(x0, x1, y0, y1):
    """
    Return a function to convert x in (x0..x1) to y in (y0..y1)
    """
    dy_dx = (y1 - y0) / float(x1 - x0)
    def y(x):
        return y0 + (x - x0) * dy_dx
    return y

temp_to_color = make_linear_interpolator(0, 32, 44000, 3200)
temp_to_color(10)  # => 31250.0
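If you also need the inverse mapping (color back to temperature), the same helper can be reused with the argument order swapped; this is an illustrative addition, not something from the original answer:

color_to_temp = make_linear_interpolator(44000, 3200, 0, 32)
color_to_temp(31250.0)  # => 10.0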


Gradient function evaluation

Below is some code I wrote to evaluate the position of a point moving towards the minimum of a gradient/3d function (defined at the beginning as "eq"). roll.roll() does this by repeatedly evaluating the equation at point (x,y), moving it in the direction of the gradient, then repeating with the new point.
It is very, very slow to run though. I think this is because either calculate() is inefficient, or sympy's symbolic equation manipulation in roll.roll is really slow. Does anyone have any ideas on how to speed this up? Is there another library other than SymPy that is faster?
import sympy as smp

x, y = smp.symbols('x y')
eq = 1*smp.exp(-((x-5)/5)**2 - ((y-1)/2)**2) + \
     2*smp.exp(-((x+3)/2)**2 - ((y-3)/2)**2) + \
     3*smp.exp(-((x-4)/2)**2 - ((y-7)/2)**2)

# Evaluates the 2-input sympy symbolic function "expression" at point (x1, y1)
def calculate(expression, x1, y1):
    EQ = smp.lambdify((x, y), expression, 'numpy')
    return EQ(x1, y1)

class roll:
    xDiff = smp.diff(eq, x)
    yDiff = smp.diff(eq, y)
    normalize = eq/smp.sqrt(xDiff**2 + yDiff**2)

    def roll(x, y, duration):
        for i in range(0, duration):
            (x, y) = (
                x - calculate(roll.normalize*roll.xDiff, x, y),
                y - calculate(roll.normalize*roll.yDiff, x, y)
            )
        return (x, y)

print(roll.roll(1, 2, 10))
Here is a visual to help see what this program is doing; the bigger the colored dots are, the greater the value of the function at that point. The draggable point represents what the program is attempting to find. https://www.desmos.com/calculator/c8mq2rijqn
I've tried to figure out whether it's possible to pre-calculate normalize*xDiff outside of roll.roll, but I don't know if that's possible.
Also, I believe this would be fairly easy if the step size didn't depend on the value of the function at the current point. However, I need it to move faster when it's at a high point on the graph (not just at a point with a steep slope), so that has been hard to figure out too.
What you want to do is gradient descent.
The most efficient approach is to define the gradient functions explicitly, for example:
def eq(x, y):
    return y*x**2 + 2*x*y

def grad_x(x, y):
    return 2*x*y + 2*y

def grad_y(x, y):
    return x**2 + 2*x
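Purely to illustrate how such hand-written gradients plug into a descent loop (the step size, iteration count, and starting point below are arbitrary choices, and this toy eq has no finite minimum, so this only shows the mechanics):

def gradient_descent(x, y, lr=0.01, steps=50):
    # repeatedly step against the gradient of eq
    for _ in range(steps):
        x, y = x - lr * grad_x(x, y), y - lr * grad_y(x, y)
    return x, y

print(gradient_descent(1.0, 2.0))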
If you are unsure how to compute the gradient of your function by hand, you can use a package that supports automatic differentiation (e.g. PyTorch).
Here is an example with PyTorch:
import torch

def eq(x, y):
    return (
        1 * torch.exp(-(((x - 5) / 5) ** 2) - ((y - 1) / 2) ** 2)
        + 2 * torch.exp(-(((x + 3) / 2) ** 2) - ((y - 3) / 2) ** 2)
        + 3 * torch.exp(-(((x - 4) / 2) ** 2) - ((y - 7) / 2) ** 2)
    )

def roll(x, y, duration):
    x, y = (
        torch.tensor(x).requires_grad_(True),
        torch.tensor(y).requires_grad_(True),
    )
    for _ in range(duration):
        func_value = eq(x, y)
        xDiff = torch.autograd.grad(func_value, x, retain_graph=True)[0]
        yDiff = torch.autograd.grad(func_value, y)[0]
        normalize = func_value / torch.sqrt(xDiff ** 2 + yDiff ** 2)
        x = x - normalize * xDiff
        y = y - normalize * yDiff
    return (x, y)

print(roll(1.0, 2.0, 10))
You are calling lambdify inside the loop. The point of lambdify is that it returns a fast function, but lambdify itself is a lot slower than the function it returns. You should call lambdify once and then, inside the loop, repeatedly use the function it returned.
This code is equivalent to yours and returns the exact same result, but the loop is about 500x faster:
import sympy as smp

x, y = smp.symbols('x y')
eq = 1*smp.exp(-((x-5)/5)**2 - ((y-1)/2)**2) + \
     2*smp.exp(-((x+3)/2)**2 - ((y-3)/2)**2) + \
     3*smp.exp(-((x-4)/2)**2 - ((y-7)/2)**2)

# Evaluates the 2-input sympy symbolic function "expression" at point (x1, y1)
def calculate(expression, x1, y1):
    EQ = smp.lambdify((x, y), expression, 'numpy')
    return EQ(x1, y1)

class roll:
    xDiff = smp.diff(eq, x)
    yDiff = smp.diff(eq, y)
    normalize = eq/smp.sqrt(xDiff**2 + yDiff**2)

    # call lambdify once
    fxy = smp.lambdify((x, y), (x - normalize*xDiff, y - normalize*yDiff))

    def roll(x, y, duration):
        for i in range(0, duration):
            # in the loop, call the function that was returned by lambdify
            x, y = roll.fxy(x, y)
        return (x, y)

print(roll.roll(1, 2, 10))
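If you want to verify the speedup yourself, a rough timing sketch along these lines should do (the use of timeit and the call counts are my additions; absolute numbers will vary by machine):

import timeit

# slow path: lambdify is re-run on every call
t_slow = timeit.timeit(lambda: calculate(roll.normalize*roll.xDiff, 1.0, 2.0), number=100)
# fast path: reuse the function returned by the single lambdify call
t_fast = timeit.timeit(lambda: roll.fxy(1.0, 2.0), number=100)
print(t_slow / t_fast)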

Evaluating a function with a well-defined value at x,y=0

I am trying to write a program that uses an array in further calculations. I initialize a grid of equally spaced points with NumPy and assign a value at each point as per the code snippet provided below. The function I am trying to describe with this array gives me a division-by-zero error at x=y, and it generally blows up around there. I know that the real part of the function is bounded by band_D/(2*math.pi)
at x=y, and I tried manually assigning this value on the diagonal, but the points around it are still ill-behaved, so I am not getting the right values. Is there a way to remedy this?
import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3

x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)

time_fourier = (1j/2*math.pi)*(1 - np.exp(1j*band_D*(xx - yy)))/(xx - yy)
time_fourier[di] = band_D/(2*math.pi)
You have a classic 0/0 problem. It's not really NumPy's job to figure out that it should apply l'Hôpital and solve this for you... I see, as others have commented, that you had the right idea in trying to set the limit value on the diagonal (where x is approximately y), but by the time you hit that line, the warning has already been emitted (just a warning, by the way, not an exception).
For a quick fix (but a bit of a fudge), in this case, you can try to add a small value to the difference:
xy = xx - yy + 1e-100
num = (1j / 2*np.pi) * (1 - np.exp(1j * band_D * xy))
time_fourier = num / xy
This also reveals that there is something wrong with your limit calculation: time_fourier[0,0] is approximately 157.0796, not 15.91549, i.e. not band_D / (2*math.pi).
For a correct calculation:
def f(xy):
    mask = xy != 0
    limit = band_D * np.pi/2
    return np.where(mask, np.divide((1j/2 * np.pi) * (1 - np.exp(1j * band_D * xy)), xy, where=mask), limit)

time_fourier = f(xx - yy)
You are dividing by x-y, which will definitely blow up when x = y. The function being well behaved here means that its Taylor series doesn't diverge, but Python doesn't know or care about that; it just calculates one step at a time until it hits the division by zero.
You had the right idea in assigning a different value when x = y (i.e. the mathematically correct limit), but the way you apply it doesn't help, because the correction comes AFTER the division by zero has already happened. This, however, should work:
def make_time_fourier(x, y):
    if np.isclose(x, y):
        return band_D/(2*math.pi)
    else:
        return (1j/2*math.pi)*(1 - np.exp(1j*band_D*(x - y)))/(x - y)

time_fourier = np.vectorize(make_time_fourier)(xx, yy)
print(time_fourier)
You can use np.divide with the where option.
import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3

x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)

time_fourier = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * (xx - yy)))
time_fourier = np.divide(time_fourier,
                         (xx - yy),
                         where=(xx - yy) != 0)
time_fourier[di] = band_D / (2 * np.pi)
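One caveat worth adding here (my note, not part of the original answer): with the where= argument, entries where the condition is False are left uninitialized unless an out array is supplied, so it can be safer to pre-fill the output with the limit value, along these lines:

numer = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * (xx - yy)))
filled = np.full_like(xx, band_D / (2 * np.pi), dtype=complex)  # pre-fill with the diagonal value
time_fourier = np.divide(numer, xx - yy, out=filled, where=(xx - yy) != 0)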
You can reformulate your function so that the division is inside the (numpy) sinc function, which handles it correctly.
To save typing I'll use D for band_D and use a variable
z = D*(xx-yy)/2
Then
T = (1j/2*pi)*(1 - np.exp(1j*band_D*(xx - yy)))/(xx - yy)
  = (D/2)*(1j/2*pi)*(1 - cos(2*z) - 1j*sin(2*z))/z
  = (1j*D*pi/4)*(2*sin(z)*sin(z) - 2j*sin(z)*cos(z))/z
  = (1j*D*pi/2) * sin(z)/z * (sin(z) - 1j*cos(z))
  = (1j*D*pi/2) * sinc(z/pi) * (sin(z) - 1j*cos(z))
numpy defines
sinc(x) to be sin(pi*x)/(pi*x)
I can't run Python right now, so you should check my calculations.
The steps are
Substitute the definition of z and expand the complex exp
Apply the double angle formulae for sin and cos
Factor out sin(z)
Substitute the definition of sinc
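A minimal NumPy sketch of that reformulation, using the D/2 factor from the derivation above (grid values copied from the question; worth checking numerically against the original expression away from the diagonal):

import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3
x = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, x)

z = band_D * (xx - yy) / 2
# np.sinc(z/pi) equals sin(z)/z and handles z == 0 internally, so the diagonal needs no special case
time_fourier = (1j * band_D * np.pi / 2) * np.sinc(z / np.pi) * (np.sin(z) - 1j * np.cos(z))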

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to integrate the differential equations starting from every point of the grid at once, instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way.)
As background, I'm trying to solve the trajectories of a Couette flow that alternates the direction of the velocity every certain period, which is a well-known dynamical system that produces chaos. I don't think the rest of the code matters as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp

start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10

T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)

condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0

for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1

np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:
ValueError: y0 must be 1-dimensional.
It comes from scipy's solve_ivp function, because it doesn't accept the array format produced by numpy's meshgrid. I know I could write some loops to get around it, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I accept advice about the rest of the code too.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by requiring the state space to be a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured vector spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)   # Y here plays the role of x[1] in the original eq
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
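For example, a minimal sketch of that single call (the time span and the way the result is unpacked are my assumptions), using U0 from above:

from scipy.integrate import solve_ivp

sol = solve_ivp(eq, (0, total_run_time), U0)
# recover the structured grid state at the final time
X_end, Y_end = decode(sol.y[:, -1])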
Why this might not be such a good idea: Thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. In integrating all simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus while the individual trajectories might have only short intervals with very small step sizes amid long intervals with sparse time steps, these can overlap in the ensemble to result in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation: odeint is Fortran code with wrappers, so half a solution. JiTCODE translates to C code and links against the compiled solver behind odeint. Leaving Python, you get Sundials, the DifferentialEquations.jl package of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide an MWE to reproduce your problem. You said "I don't think the rest of the code really matters", but including code that doesn't matter makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e. the "start" and "stop" times as a 2-tuple. Here your time_steps is 30-long (for the first loop), so I would probably replace it by (t, t+T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function "eq" at each iteration. Since you have to pass the right-hand side of the equation, you need to wrap this behavior inside a function. It would not work (see right after), but conceptually it would be something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable of your problem,
    # with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work because the evolution of condition doesn't satisfy the mathematical continuity/differentiability properties the solver expects. So condition is really a boolean switch that parametrizes the model, and we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out], that would be 2).
An MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True
# time span
t_span = (0, 100)
# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial,  # initial conditions
                     )

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't. Again, quoting the doc:
y0 : array_like, shape (n,) — Initial state. The solver's initial condition only allows one starting point vector.
So, to answer the initial question: I don't think you can "integrate the differential equations starting for each point of the grid at once".

drawing a jagged mountain curve using turtle-graphics and recursion

I am trying to create a function for a homework assignment which draws a jagged mountain curve using turtles and recursion. The function is called jaggedMountain(x,y,c,t), where x,y are the end coordinates, c is a complexity constant, and t is the turtle object. I am trying to create an image like this:
def jaggedCurve(x, y, c, t):
    t.pendown()
    x1 = t.xcor() + x / 2
    y1 = t.ycor() + y / 2
    y1 = y + (random.uniform(0, c) - 0.5) * (t.xcor() - x)
    if (x1, y1) == (x, y):
        return None
    else:
        jaggedCurve(x1, y1, c, t)
This crashes quickly: the base case never executes, the function is called 993 times, and the recursion depth is exceeded. I have been scratching my head over this for quite some time. Are there any suggestions?
Initially, I see two issues with your code. The first is:
if (x1,y1) == (x,y):
Turtles wander a floating-point plane, so the odds of these being exactly equal are small. You're likely better off doing something like:
def distance(x1, y1, x2, y2):
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

...

if distance(x1, y1, x, y) < 1.0:
The second issue is that jaggedCurve() neither draws anything nor returns anything that can be used for drawing. Somewhere you need to actually move the turtle to cause something to be drawn.
Finally, though it's hard to be certain without a value for c, my guess is that even with the above changes you won't get what you want. Good luck.
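A minimal sketch that applies both suggestions to the original function (it reuses the distance() helper above; the goto placement, the midpoint-plus-noise displacement, and the 1.0 threshold are my assumptions, not a complete solution to the assignment):

import random

def jaggedCurve(x, y, c, t):
    t.pendown()
    # stop once the turtle is close enough to the target end point
    if distance(t.xcor(), t.ycor(), x, y) < 1.0:
        t.goto(x, y)
        return
    x1 = (t.xcor() + x) / 2
    y1 = (t.ycor() + y) / 2
    y1 = y1 + (random.uniform(0, c) - 0.5) * (t.xcor() - x)
    t.goto(x1, y1)   # actually move the turtle so a segment gets drawn
    jaggedCurve(x, y, c, t)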
Very interesting problem!
My solution is to write a recursive function that draws a mountain curve between two given end points. Randomly pick an x coordinate that lies between the two end points, compute the range of possible y coordinates given the maximum possible slope, randomly pick a y value in that range, and recurse on the two halves. When the two end points are close enough, just draw the line between them. Here is the code:
import math
import random
import turtle

MAX_SLOPE = 45
MIN_SLOPE = -45
MIN_HEIGHT = 0

def dist_squared(P1, P2):
    return (P1[0] - P2[0])**2 + (P1[1] - P2[1])**2

def mountain(P1, P2):
    if dist_squared(P1, P2) < 1:
        turtle.goto(P2)
        return
    x1, y1 = P1
    x2, y2 = P2
    x3 = random.uniform(x1, x2)
    y3_max = min((x3 - x1)*math.tan(math.radians(MAX_SLOPE)) + y1, (x2 - x3)*math.tan(-math.radians(MIN_SLOPE)) + y2)
    y3_min = max((x3 - x1)*math.tan(math.radians(MIN_SLOPE)) + y1, (x2 - x3)*math.tan(-math.radians(MAX_SLOPE)) + y2)
    y3_min = max(y3_min, MIN_HEIGHT)
    y3 = random.uniform(y3_min, y3_max)
    P3 = (x3, y3)
    mountain(P1, P3)
    mountain(P3, P2)
    return

turtle.up()
turtle.goto(-400, 0)
turtle.down()
mountain((-400, 0), (400, 0))
I know this was posted like 3 months ago, but hopefully this is helpful to someone else who was also assigned this terrible problem 5 days before finals! Ha!
The struggle I had with this problem was not realizing that you only need to pass in one point. To get the point the turtle is starting at, you just use .xcor() and .ycor(), which are included in the turtle library.
import turtle
import random

def mountain(x, y, complexity, turtleName):
    if complexity == 0:
        turtleName.setposition(x, y)
    else:
        x1 = (turtleName.xcor() + x)/2
        y1 = (turtleName.ycor() + y)/2
        y1 = y1 + (random.uniform(0, complexity) - 0.5) * (turtleName.xcor() - x)
        complexity = complexity - 1
        mountain(x1, y1, complexity, turtleName)
        mountain(x, y, complexity, turtleName)

def main():
    # Gets input for first coordinate pair, splits, and assigns to variables
    coordinate = str(input("Enter the coordinate pair, separated by a comma: "))
    x, y = coordinate.split(',')
    x = int(x)
    y = int(y)
    complexity = int(input("Enter the complexity: "))
    while complexity < 0:
        complexity = int(input("Input must be positive. Enter the complexity: "))
    Bob = turtle.Turtle()
    mountain(x, y, complexity, Bob)

main()

Overflow Error when using Newton's Method in Python

I am trying to carry out Newton's method in Python to solve a problem. I have followed the approach of some examples, but I am getting an OverflowError. Do you have any idea what is causing this?
def f1(x):
    return x**3 - (2.*x) - 5.

def df1(x):
    return (3.*x**2) - 2.

def Newton(f, df, x, tol):
    while True:
        x1 = f(x) - (f(x)/df(x))
        t = abs(x1 - x)
        if t < tol:
            break
        x = x1
    return x

init = 2
print Newton(f1, df1, init, 0.000001)
Newton's method is
x_{n+1} = x_n - f(x_n)/f'(x_n)
so
x1 = f(x) - (f(x)/df(x))
should be
x1 = x - (f(x)/df(x))
There is a bug in your code. It should be
def Newton(f, df, x, tol):
    while True:
        x1 = x - (f(x)/df(x))   # it was f(x) - (f(x)/df(x))
        t = abs(x1 - x)
        if t < tol:
            break
        x = x1
    return x
The equation you're solving is cubic, so there are two values of x where df(x)=0. Dividing by zero, or by a value close to zero, will cause an overflow, so you need to avoid doing that.
One practical consideration for Newton's method is how to handle values of x near local maxima or minima. The overflow is likely caused by dividing by something near zero. You can check this by adding a print statement before your x = x1 line to print x and df(x). To avoid the problem, you can calculate df(x) before dividing, and if its magnitude is below some threshold, bump the value of x up or down by a small amount and try again.
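A hedged sketch of that safeguard (the threshold, nudge size, and iteration cap are arbitrary illustrative choices):

def safe_newton(f, df, x, tol, eps=1e-8, nudge=1e-3, max_iter=100):
    for _ in range(max_iter):
        d = df(x)
        # derivative too close to zero: nudge x instead of dividing
        if abs(d) < eps:
            x += nudge
            continue
        x1 = x - f(x)/d
        if abs(x1 - x) < tol:
            return x1
        x = x1
    return x

print(safe_newton(f1, df1, 2, 1e-6))   # converges to the root near 2.0945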
