Strange NumPy results when solving an inequality - Python

I have written an inequality in two forms. The first form converts it to a polynomial; the polynomial and the matrix multiplication should be exactly the same:

V0j = yj.T @ P @ yj = 12137.5 * (y1**2) + 11438.7 * (y2**2) + (26.381 * 2)*y1*y2

I also have the constant value

lambdaprimezero*nrmedoe0 + minelement = 8.920678

This means that the polynomial inequality and the following inequality are the same and must have the same answer:

yj.T @ P @ yj - lambdaprimezero*nrmedoe0 < minelement

With the help of a friend I could extract the points at which the polynomial inequality is valid (first program). The problem occurs when I use the answer of the first program (polynomial inequality) in the matrix one. As they are the same, the second program must print 'small enough' at these points, but it does not.
# first part
import numpy as np

# 12137.5x^2 + 11438.7y^2 + (26.381*2)xy = 0.000731
Y1 = np.linspace(-0.003, 0.003, num=100)
Y2 = np.linspace(-0.003, 0.003, num=100)
pointsInsideEllipse = []
for y1 in Y1:
    for y2 in Y2:
        if 12137.5 * (y1**2) + 11438.7 * (y2**2) + (26.381 * 2)*y1*y2 < 8.920678:
            pointsInsideEllipse.append([y1, y2])
#print(pointsInsideEllipse)

y = [y1, y2]
P = np.array([[12137.5, 26.381], [26.381, 11438.7]])
yj = np.array(y)
pointset = np.array(pointsInsideEllipse)

def msquarefunc(yj):
    VALUE = yj.T @ P @ yj
    return VALUE

point = pointsInsideEllipse
for point in pointset:
    if msquarefunc(point) < 8.920678:
        ############################################
        ###########################
        # second program (matrix form)
        V0j = yj.T @ P @ yj
        testfeaturej = V0j - lambdaprimezero*nrmedoe0
        #print(V0j)
        #print(lambdaprimezero*nrmedoe0)
        cj = abs(testfeaturej)
        print(cj)
        if cj <= minelement:
            print('small enough')
What is the problem?

You used y=[y1, y2] while it should be y=[Y1, Y2]. Moreover, note that point=pointsInsideEllipse is useless, since point is immediately overwritten by the following loop. The biggest issue comes from the fact that the first hypothesis is actually wrong: yj.T @ P @ yj is not equal to 12137.5 * (y1**2) + 11438.7 * (y2**2) + (26.381 * 2)*y1*y2 when yj stacks the whole grid. I do not see why it would actually be true: the y1 values being multiplied together are not the same ones in one case, while they are in the other.
Note that the rest of the code is unclear and incomplete, and so it is hard to test/run.
You can use vectorized calls to make your code faster and clearer with NumPy. Moreover, you can easily check that the equality hypothesis is wrong using the following code:
Y1 = np.linspace(-0.003, 0.003, num=100)
Y2 = np.linspace(-0.003, 0.003, num=100)
yj = np.vstack([Y1, Y2])
V0j_1 = yj.T @ P @ yj
V0j_2 = 12137.5 * (Y1.reshape(-1, 1)**2) + 11438.7 * (Y2**2) + (26.381*2)*np.outer(Y1, Y2)

# Another (less efficient) way of computing V0j_2:
#V0j_2_bis = np.array([[12137.5 * (y1**2) + 11438.7 * (y2**2) + (26.381 * 2)*y1*y2 for y2 in Y2] for y1 in Y1])
#print(np.allclose(V0j_2, V0j_2_bis)) # True

distance = (V0j_1 - V0j_2)**2
print(np.allclose(V0j_1, V0j_2)) # False
print(distance)

# This shows the two matrices have different symmetries (and so the assumption is wrong)
print(np.allclose(V0j_1, V0j_1.T)) # True
print(np.allclose(V0j_2, V0j_2.T)) # False
The distance matrix shows huge differences (unrelated to floating-point errors). Actually, some values of V0j_1 and V0j_2 are not even of the same sign.
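As a side note, for a single point the two forms do agree: P is symmetric, so the quadratic form reproduces the (26.381 * 2)*y1*y2 cross term. The mismatch only appears when yj stacks many points, as this quick sketch shows:

import numpy as np

P = np.array([[12137.5, 26.381], [26.381, 11438.7]])

# A single 2-vector: the quadratic form matches the polynomial exactly.
y1, y2 = 0.001, -0.002
y = np.array([y1, y2])
quad = y.T @ P @ y
poly = 12137.5 * y1**2 + 11438.7 * y2**2 + (26.381 * 2) * y1 * y2
print(np.isclose(quad, poly))  # True

# A stacked 2xN matrix is different: (yj.T @ P @ yj)[i, j] mixes components
# of two DIFFERENT points (y_i and y_j), so it is not an element-wise
# evaluation of the polynomial over the grid.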


Evaluating a function with a well-defined value at x,y=0

I am trying to write a program that uses an array in further calculations. I initialize a grid of equally spaced points with NumPy and assign a value at each point, as per the code snippet below. The function I am trying to describe with this array gives me a division-by-zero error at x = y, and it generally blows up around it. I know that the real part of said function is bounded by band_D/(2*math.pi) at x = y, and I tried manually assigning this value on the diagonal, but it seems that points around it are still ill-behaved, so I am not getting any right values. Is there a way to remedy this? (This is what the function looks like when plotted with matplotlib.)
import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3
x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)
time_fourier = (1j/2*math.pi)*(1 - np.exp(1j*band_D*(xx - yy)))/(xx - yy)
time_fourier[di] = band_D/(2*math.pi)
You have a classic 0/0 problem. It's not really NumPy's job to figure out that it should apply l'Hôpital's rule and solve this for you... I see, as others have commented, that you had the right idea in trying to set the limit value on the diagonal (where x ≈ y), but by the time you hit that line, the warning had already been emitted (just a warning, by the way, not an exception).
For a quick fix (but a bit of a fudge), in this case you can try adding a tiny value to the difference:

xy = xx - yy + 1e-100
num = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * xy))
time_fourier = num / xy

This also reveals that there is something wrong with your limit calculation: time_fourier[0, 0] is approximately 157.0796..., i.e. band_D * np.pi / 2, and not band_D / (2*math.pi), which is approximately 15.91549.... (Note that 1j/2*math.pi parses as (1j/2)*math.pi, so the prefactor as written is i*pi/2, whose limit at x = y is band_D*pi/2.)
For a correct calculation:

def f(xy):
    mask = xy != 0
    limit = band_D * np.pi/2
    return np.where(mask,
                    np.divide((1j/2 * np.pi) * (1 - np.exp(1j * band_D * xy)), xy, where=mask),
                    limit)

time_fourier = f(xx - yy)
You are dividing by x - y, and that will definitely complain when x = y. The function being well behaved here means that the Taylor series doesn't diverge, but Python doesn't know or care about that; it just calculates one step at a time until it reaches the division by 0.
You had the right idea in defining a different value when x = y (i.e., the mathematically true limit), but your way of applying it doesn't work, because the correction comes AFTER the division by 0 has already happened. This, however, should work:
def make_time_fourier(x, y):
    if np.isclose(x, y):
        return band_D/(2*math.pi)
    else:
        return (1j/2*math.pi)*(1 - np.exp(1j*band_D*(x - y)))/(x - y)

time_fourier = np.vectorize(make_time_fourier)(xx, yy)
print(time_fourier)
You can use np.divide with the where option:

import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3
x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)
time_fourier = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * (xx - yy)))
time_fourier = np.divide(time_fourier,
                         (xx - yy),
                         where=(xx - yy) != 0)
time_fourier[di] = band_D / (2 * np.pi)
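One caveat, per the NumPy docs: with where=, entries where the condition is False are left uninitialized unless an out array is supplied. The code above happens to overwrite exactly those entries (the diagonal) on the last line, so it works; passing out explicitly makes that robust:

# Pre-fill the output so the masked (diagonal) entries are well-defined:
out = np.full_like(time_fourier, band_D / (2 * np.pi))
time_fourier = np.divide(time_fourier, xx - yy, out=out, where=(xx - yy) != 0)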
You can reformulate your function so that the division is inside the (NumPy) sinc function, which handles it correctly.
To save typing I'll use D for band_D and introduce the variable

z = D*(xx - yy)/2

Then

T = (1j/2*pi)*(1 - np.exp(1j*D*(xx - yy)))/(xx - yy)
  = (D/2)*(1j/2*pi)*(1 - cos(2*z) - 1j*sin(2*z))/z
  = (1j*D*pi/4)*(2*sin(z)*sin(z) - 2j*sin(z)*cos(z))/z
  = (1j*D*pi/2) * sin(z)/z * (sin(z) - 1j*cos(z))
  = (1j*D*pi/2) * sinc(z/pi) * (sin(z) - 1j*cos(z))

since numpy defines sinc(x) to be sin(pi*x)/(pi*x).
I can't run Python at the moment, so you should check my calculations.
The steps are:
1. Substitute the definition of z and expand the complex exponential.
2. Apply the double-angle formulae for sin and cos.
3. Factor out sin(z).
4. Substitute the definition of sinc.
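A quick numerical check of this reformulation (a sketch I put together, reusing the constants from the question and comparing against a masked-divide reference):

import numpy as np

gamma, band_D, Dt = 5, 100, 1e-3
D = band_D
x = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, x)

# sinc-based form: no division by zero anywhere, and the z = 0 limit
# (D*pi/2) falls out automatically because sinc(0) == 1.
z = D * (xx - yy) / 2
T_sinc = (1j * D * np.pi / 2) * np.sinc(z / np.pi) * (np.sin(z) - 1j * np.cos(z))

# Reference: masked division, pre-filled with the limit on the diagonal.
num = (1j / 2 * np.pi) * (1 - np.exp(1j * D * (xx - yy)))
T_ref = np.divide(num, xx - yy,
                  out=np.full(num.shape, D * np.pi / 2, dtype=complex),
                  where=(xx - yy) != 0)
print(np.allclose(T_sinc, T_ref))  # expect True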

Minimizing this error function, using NumPy

Background
I've been working for some time on attempting to solve the (notoriously painful) Time Difference of Arrival (TDoA) multi-lateration problem, in 3-dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X,Y,Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v*(t_i - T))**2
Where (x_i, y_i, z_i) are the coordinates of the ith node, and T is the time of emission.
We now have 4 equations in 4 unknowns. Four nodes are obviously insufficient. We could try to solve this system directly; however, that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we simplify this to a linear problem by considering all i/j possibilities, subtracting equation i from equation j. We obtain n(n-1)/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2 * v**2 * (t_i - t_j) = v**2 ( t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
which look like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x in A^T A x = A^T b. Unfortunately, if you were to try feeding this into any standard linear least squares algorithm, it'll choke up. So, what do we do now?
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X-x_i)**2 + (Y-y_i)**2 + (Z-z_i)**2 ) / v
This equation implies that the time of emission, T, is 0. If we have T = 0, we can drop the T column in matrix A and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() gives a surprisingly accurate & precise result.
...
So, what I do is normalize the input times by subtracting from each equation the earliest time. All I have to do then is determine the dt that I can add to each time such that the residual of summed squared error for the point found by linear least squares is minimized.
I define the error for some dt to be the squared difference between the arrival time at each node for the point predicted by feeding the input times + dt to the least squares algorithm, and the (normalized) input time, summed over all 4 nodes:

for (x_i, y_i, z_i), time in zip(nodes, times):
    error += ((sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - time)**2
My problem:
I was able to do this somewhat satisfactorily by using brute force: I started at dt = 0 and moved by some step, up to some maximum number of iterations OR until some minimum RSS error was reached, and that was the dt I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow.
In practice, I'd like to be able to solve this in real time, and therefore a far faster solution is needed. I began with the assumption that the error function (that is, dt vs error as defined above) would be highly nonlinear; offhand, this made sense to me.
Since I don't have an actual mathematical function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function will always be positive, so I can rule out bisection, etc. Instead, I tried a simple approximation search. Unfortunately, that failed miserably. I then tried tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out, the plot of the error function vs dt looks a bit like a square root, only shifted right depending upon the distance of the signal source from the nodes (dt on the horizontal axis, error on the vertical axis).
And, in hindsight, of course it does! I defined the error function to involve square roots, so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to get some points on the error graph (as above), fit them using numpy.polyfit, then feed the results to numpy.roots. That root would correspond to the dt. Unfortunately, this failed too. I tried fitting with various degrees, and also with various numbers of points, up to a ridiculous number of points such that I may as well just use brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure that there's some infinitely simpler approach buried in what I'm doing here; however, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you, and compare results:
from numpy import poly1d, linspace, set_printoptions, array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792

    # Receivers:
    #     [x_1, y_1, z_1]
    #     [x_2, y_2, z_2]
    #     [x_3, y_3, z_3]
    # Solved:
    #     [x, y, z]

    def error(self, dt, times):
        solved = self.linear([time + dt for time in times])
        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        X = array(self.receivers)
        t = array(times)
        x, y, z = X.T
        i, j = triu_indices(len(x), 1)
        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)
        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]
        # Fit the error function
        y = []
        x = []
        dt = 1E-10
        for i in range(50000):
            x.append(self.error(dt * i, times))
            y.append(dt * i)
        p = polyfit(array(x), array(y), 2)
        r = roots(p)
        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
import numpy as np

# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)
B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)
for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have (glaring) mistakes, as I had not heard the term TDOA before this afternoon. Please double-check whether the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from the requirement that no third-party library other than NumPy be used (I used SymPy, and largely used the code from here). However, I am still posting this, thinking that somebody someday might find it useful if all one is interested in ... is to find the X, Y, Z of the source emitter. This method also does not take into account real-life complications such as white noise or measurement errors, or the curvature of the earth.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave:
Actual: 111 553 110
and the solution that almost instantly came out was:
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.

Intersections for 3D lines

I have written a function that should calculate all intersection points between one line and a set of other lines, in 3D. I have these lines parametrized because that seemed to be the easiest way to work with them. The problem is that when I plug the variables "t1" and "t2" back into the equations of the lines, there is an inaccuracy that is too big to be acceptable for my purposes.
t1 is the parameter of the line whose intersections you want to know, so that line is written in this form:
x = xo + t1 * dx
y = yo + t1 * dy
z = zo + t1 * dz
where [xo, yo, zo] is a point on the line that I call the "origin" and [dx, dy, dz] is the direction of that line. The other lines are given in the same form, and the function I wrote basically solves the following system:
xo1 + t1 * dx1 = xo2 + t2 * dx2
yo1 + t1 * dy1 = yo2 + t2 * dy2
zo1 + t1 * dz1 = zo2 + t2 * dz2
Everything is given except for t1 and t2; those are what I'm looking for. However, I don't think actually finding t1 and t2 is the problem; I do have a solution that gives me some kind of result. As mentioned earlier, the problem is really that when I feed t1 and t2 back into these formulas to get the actual intersection points, the two points differ slightly from each other: mostly by 0.005-0.05 in Euclidean distance, but in extreme cases by up to 0.5. I am aware that most lines in 3D do not intersect and therefore have no solution to these equations, but for the tests I'm doing right now I am 100% sure that all of the lines lie in the same plane, though some might be parallel to each other. These inaccuracies occur for all lines, and I'm really just looking for a solution that is accurate when they do intersect.
Here's the code I have for this:
def lineIntersection(self, lines):
    origins = np.zeros((len(lines), 3), dtype=np.float32)
    directions = np.zeros((len(lines), 3), dtype=np.float32)
    for i in range(0, len(lines)):
        origins[i] = lines[i].origin
        directions[i] = lines[i].direction

    ox = origins[:, 0]
    oy = origins[:, 1]
    dx = self.origin[0]   # note: dx, dy hold this line's origin, not a direction
    dy = self.origin[1]
    x1 = directions[:, 0]
    y1 = directions[:, 1]
    x2 = self.direction[0]
    y2 = self.direction[1]

    # Solve the x/y equations for the two parameters
    t2 = (oy + (dx - ox) / x1 * y1 - dy) / (y2 - x2 / x1 * y1)
    t1 = (dx + t2 * x2 - ox) / x1

    # Plug t1 and t2 back into both parametrizations
    testx1 = ox + t1 * x1
    testx2 = dx + t2 * x2
    testy1 = oy + t1 * y1
    testy2 = dy + t2 * y2
    testz1 = origins[:, 2] + t1 * directions[:, 2]
    testz2 = self.origin[2] + t2 * self.direction[2]

    arr1 = np.array([testx1, testy1, testz1]).T
    arr2 = np.array([testx2, testy2, testz2]).T
    diff = np.linalg.norm(arr1 - arr2, axis=1)

    narr = arr1[diff < 0.05]  # filter out points that aren't actually intersecting
    nt2 = t2[diff < 0.05]
    return narr, nt2
This function is located in the "Line" class, which has an origin and a direction as explained earlier. The input it takes is an array of objects of the "Line" class.
So, to be clear: I'm asking why this doesn't seem to be as precise as I want it to be, and how I can fix it. Or, if there are alternatives for calculating intersection points that are really accurate, I would love to hear about them.
Inaccuracy is a common problem for the intersection of lines forming a small angle.
I have not checked your algorithm's correctness, but it seems you just solve a system of equations with a linear solver. In the case of almost-parallel lines, intermediate values (the determinant) might be small, causing significant errors. Have you tried more robust numerical algorithms like SVD?
But perhaps you really don't need them. Note that when you are sure that all lines lie in the same plane, you can exploit a 2D algorithm: just check which component of dx, dy, dz has the smallest magnitude (check for some distinct lines) and ignore the corresponding component; this is similar to projecting the lines onto the OXY, OXZ, or OYZ plane. 2D code should be much simpler.
For the true 3D case there is a well-tested vector approach intended to find the distance (the shortest line segment) between two skew lines; that distance is just zero for intersecting ones. Example here. Note that the det (determinant) magnitude is evaluated there to check for parallel (and almost parallel) lines too; see the sketch below.
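A minimal NumPy sketch of that vector approach (my own paraphrase of the standard closest-points construction, not the linked example): the closest points of two lines o1 + t1*d1 and o2 + t2*d2 come from the common normal n = d1 × d2, and the lines intersect exactly when those two points coincide:

import numpy as np

def closest_points(o1, d1, o2, d2, eps=1e-12):
    """Closest points between the lines o1 + t1*d1 and o2 + t2*d2.

    Returns (p1, p2, t1, t2), or None for (almost) parallel lines.
    The lines intersect when p1 and p2 coincide (within tolerance)."""
    n = np.cross(d1, d2)
    nn = np.dot(n, n)          # |n|^2; a small value means almost parallel
    if nn < eps:
        return None
    r = o2 - o1
    t1 = np.dot(np.cross(r, d2), n) / nn
    t2 = np.dot(np.cross(r, d1), n) / nn
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return p1, p2, t1, t2

# Example: two lines in the z = 0 plane crossing at (1, 1, 0).
p1, p2, t1, t2 = closest_points(np.array([0., 0., 0.]), np.array([1., 1., 0.]),
                                np.array([2., 0., 0.]), np.array([-1., 1., 0.]))
print(p1, p2, np.linalg.norm(p1 - p2))  # both (1, 1, 0); distance ~ 0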

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to be able to integrate the differential equations starting from each point of the grid at once, instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way.)
As background: I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity each certain period; this is a well-known dynamical system that produces chaos. I don't think the rest of the code really matters, apart from the integration with scipy and my usage of NumPy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp

start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10

T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)

condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)

i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1

np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:

ValueError: y0 must be 1-dimensional.

It comes from scipy's solve_ivp function, because it doesn't accept the shape produced by NumPy's meshgrid. I know I could get around it with some loops, but I'm assuming there must be a 'good' way to do it using NumPy and SciPy. I welcome advice for the rest of the code too.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by making the state space a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured vector spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as

def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)   # x[1] in the question's code is the y-coordinate
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
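For instance (a minimal sketch, reusing eq, encode, decode and the question's constants; the max_step cap is my own addition, so that the adaptive solver does not step across a switching instant unnoticed):

from scipy.integrate import solve_ivp

sol = solve_ivp(eq, (0, total_run_time), U0, dense_output=True, max_step=T/2)
Xt, Yt = decode(sol.sol(total_run_time))  # grid positions at the final time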
Why this might not be such a good idea: thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. When integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus, while an individual trajectory might have only short intervals with very small step sizes amid long intervals with sparse time steps, in the ensemble these can overlap, resulting in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation: odeint is a Fortran code with wrappers, so half a solution. JITcode translates to C code and links with the compiled solver behind odeint. Leaving Python, you get sundials, the DifferentialEquations.jl package of julia-lang, or boost::odeint.
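As an illustration of the odeint option (a sketch; odeint's tfirst=True flag lets it accept the same eq(t, U) signature as above):

import numpy as np
from scipy.integrate import odeint

ts = np.linspace(0, total_run_time, 301)
Us = odeint(eq, U0, ts, tfirst=True)  # shape (len(ts), 2*grid_points**2)
Xt, Yt = decode(Us[-1])               # grid positions at the final time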
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide a MWE to reproduce your problem. Like you said, "I don't think the rest of the code really matters"; including it makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:

By defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e. the "start" and "stop" times as a 2-tuple. Here your time_steps is 30 elements long (for the first loop), so I would probably replace it by (t, t+T). Look for t_span in the doc.

From what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function "eq" at each iteration. Since you have to pass "the right-hand side" of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but in terms of concept it would be something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable of your
    # problem, with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work, because the evolution of condition doesn't satisfy the mathematical continuity/differentiability properties a state variable needs. So condition is rather a boolean switch that parametrizes the model, and we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which actually is the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out], that would be 2).
A MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True
# time span
t_span = (0, 100)
# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial)  # initial conditions

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
plt.show()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't. Again, quoting the doc:

y0 : array_like, shape (n,) : Initial state.

The solver's initial condition only allows one starting point vector. So, to answer the initial question: I don't think you can "integrate the differential equations starting for each point of the grid at once".

How to get the coefficient vector of a PolyBoRi polynomial in Sage

I am trying to use SageMath for something that involves a lot of manipulation of boolean polynomials.
Here are some examples of what a coefficient vector is:
x0*x1*x2 + 1 has coefficient vector 10000001
x1 + x0 + 1 has coefficient vector 11100000
x1 + x0 has coefficient vector 01100000
(x0 is the least significant bit.)
The problem is that Sage's API doesn't seem to encourage direct manipulation of monomials or coefficient vectors, probably because the data structures it uses internally are ZDDs rather than bit arrays.
sage: P
x0*x1*x2 + x0*x1 + x0*x2 + x0 + x1*x2 + x1 + x2 + 1
sage: list(P.set())
[x0*x1*x2, x0*x1, x0*x2, x0, x1*x2, x1, x2, 1]
sage: P.terms()
[x0*x1*x2, x0*x1, x0*x2, x0, x1*x2, x1, x2, 1]
From this output, it appears that the problem is just that the endianness is the opposite of what one might expect (which would be with x0 as the least significant bit).
But the problem is actually more than that. For example:
#!/usr/bin/env python
from sage.all import *
from sage.crypto.boolean_function import BooleanFunction

# "input_str" is a truth table.
# Returns a polynomial that has that truth table.
def truth_str_to_poly(input_str):
    # assumes that the length of input_str is a power of two
    num_vars = int(log(len(input_str), 2))
    truth_table = []
    # convert string to list of ints expected by BooleanFunction
    for i in list(input_str):
        truth_table.append(int(i))
    B = BooleanFunction(truth_table)
    P = B.algebraic_normal_form()
    return P

# Return the polynomial with coefficient vector 1,1,...,1.
# (This is necessary because we can't directly manipulate coef. vectors.)
def super_poly(num_vars):
    input_str = ["0"]*(2**num_vars)
    input_str[0] = "1"
    input_str = "".join(input_str)
    return truth_str_to_poly(input_str)

# Return the coefficient vector of P.
# This is the function that is broken.
def poly_to_coef_str(P, num_vars):
    res = ""
    #for i in super_poly(num_vars).terms():
    for i in list(super_poly(num_vars).set()):
        res += str(P.monomial_coefficient(i))
    return res

num_vars = 3
print(super_poly(num_vars).monomials())

# This should have coefficient vector "01000000" = x0
input_poly = "01010101"
P = truth_str_to_poly(input_poly)  # gives the correct polynomial
res = poly_to_coef_str(P, 3)
print(" in:" + input_poly)
print("out:" + res)
Output:
[x0*x1*x2, x0*x1, x0*x2, x0, x1*x2, x1, x2, 1]
in:01010101
out:00010000
This means that it's not just getting the endianness wrong: it's somehow treating the place values of the variables inconsistently.
Is there a better way to do this?
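For what it's worth, here is the kind of workaround I would try (a sketch under two assumptions that should be checked against the Sage docs: that each term of P exposes its variables via .variables(), and that the variables print as x0, x1, ...): compute each monomial's bit position directly from its variable indices, instead of iterating over super_poly:

# Hypothetical sketch, not tested against a live Sage session.
def poly_to_coef_str2(P, num_vars):
    bits = ["0"] * (2**num_vars)
    for m in P.terms():                  # e.g. [x0*x1*x2, x0*x1, ..., 1]
        idx = 0
        for v in m.variables():          # the constant term 1 has no variables -> index 0
            idx |= 1 << int(str(v)[1:])  # "x2" -> bit 2
        bits[idx] = "1"
    return "".join(bits)

# With the conventions above, x0 -> index 1, so poly_to_coef_str2(x0, 3)
# would give "01000000", matching the expected coefficient vector.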
