I want to perform calculations with a binary operation (Tensor) that takes two non-commutative arguments, converts them into something like a pair, and then does funny things when I multiply these pairs.
# a, b, c, d are non-commutative
Tensor(a, b) * Tensor(c, d) == Tensor(a*c, d*b) # yes, in this order
Furthermore, I want all integer constants to be taken modulo 2.
-Tensor(a, b) == Tensor(a, b)
2*Tensor(a, b) == 0
Tensor(2*a, b) == 0
My shot at doing this:
import sympy as sp
from sympy.core.expr import Expr
class Tensor(Expr):
    __slots__ = ['is_commutative']

    def __new__(cls, l, r):
        l = sp.sympify(l)
        r = sp.sympify(r)
        obj = Expr.__new__(cls, l, r)
        obj.is_commutative = False
        return obj

    def __neg__(self):
        return self

    def __mul__(self, other):
        if isinstance(other, Tensor):
            return Tensor(self.args[0] * other.args[0], other.args[1] * self.args[1])
        elif other.is_number:
            if other % 2 == 0:
                return 0
            else:
                return self
        else:
            return sp.Mul(self, other)
x, y = sp.symbols('x, y', commutative=False)
Ym = Tensor(y, 1) - Tensor(1, y)
Yp = Tensor(y, 1) + Tensor(1, y)
Xm = Tensor(x, 1) - Tensor(1, x)
d1 = Ym * Yp + Xm * 0
print(d1)
print(sp.expand(d1))
d2 = Xm * Ym
print(d2)
print(sp.expand(d2))
Output:
(Tensor(1, y) + Tensor(y, 1))**2
Tensor(1, y**2) + 2*Tensor(y, y) + Tensor(y**2, 1)
(Tensor(1, x) + Tensor(x, 1))*(Tensor(1, y) + Tensor(y, 1))
Tensor(1, x)*Tensor(1, y) + Tensor(1, x)*Tensor(y, 1) + Tensor(x, 1)*Tensor(1, y) + Tensor(x, 1)*Tensor(y, 1)
Test #1 is correct.
Test #2 has a term 2*Tensor(y, y) which should be zero (since I'm doing all calculations modulo 2, and 2 % 2 == 0). How do I enforce that?
Test #3 is correct.
Test #4 does not multiply different Tensors at all. Tensor(1, x)*Tensor(1, y) should be Tensor(1, y*x), for example. How do I enforce that?
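One direction I've been experimenting with (a sketch, not a vetted solution: it assumes every expanded term is an integer coefficient times a product of Tensor factors) is to post-process expanded expressions instead of fighting SymPy's automatic Mul and Add:
def tensor_reduce(expr):
    # Drop terms with even integer coefficients (mod 2) and combine
    # adjacent Tensor factors with the rule
    # Tensor(a, b) * Tensor(c, d) == Tensor(a*c, d*b).
    expr = sp.expand(expr)
    terms = expr.args if isinstance(expr, sp.Add) else (expr,)
    result = sp.S.Zero
    for term in terms:
        coeff, factors = term.as_coeff_mul()
        if coeff.is_integer and coeff % 2 == 0:
            continue  # even integer constant: the whole term vanishes mod 2
        prod = sp.S.One  # an odd coefficient reduces to 1 mod 2, so drop it
        for fac in factors:
            if isinstance(prod, Tensor) and isinstance(fac, Tensor):
                prod = Tensor(prod.args[0] * fac.args[0],
                              fac.args[1] * prod.args[1])
            else:
                prod = prod * fac
        result += prod
    return result

print(tensor_reduce(d1))  # the 2*Tensor(y, y) term drops out
print(tensor_reduce(d2))  # adjacent Tensor factors are combined pairwise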
Context (if you're interested in why I'm doing this):
This is for calculating a bimodule resolution of an algebra over a char=2 field. A bimodule resolution of an algebra R over a field K is a minimal projective resolution of the same algebra R over its enveloping algebra R⊗R^op. Here op means "multiplication works in the opposite direction". The enveloping algebra has both left and right action on the algebra:
(a⊗b)*r == a*r*b
r*(a⊗b) == b*r*a
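For instance, with the Tensor class above, these two actions could be written as plain helpers (a hypothetical illustration; left_action and right_action are my names, not part of any library):
# (a⊗b)*r == a*r*b
def left_action(tensor, r):
    a, b = tensor.args
    return a * r * b

# r*(a⊗b) == b*r*a
def right_action(r, tensor):
    a, b = tensor.args
    return b * r * a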
There is a theorem that simplifies calculations in the case where a minimal projective resolution of the algebra over the field is known. Still, they are quite tedious and I want to stop doing them manually.
I am trying to numerically compute in Python integrals of the form F(x, t) = ∫∫ g(x-y, t-s) h(y, s) dy ds, i.e. a two-dimensional convolution of g and h.
To that aim, I first define two discrete sets of x and t values, let's say
x_samples = np.linspace(-10, 10, 100)
t_samples = np.linspace(0, 1, 100)
dx = x_samples[1]-x_samples[0]
dt = t_samples[1]-t_samples[0]
I declare symbolically that the function g(x, t) is equal to 0 if t < 0, and discretise the two functions to integrate as
discretG = g(x_samples[None, :], t_samples[:, None])
discretH = h(x_samples[None, :], t_samples[:, None])
I have then tried to run
discretF = signal.fftconvolve(discretG, discretH, mode='full') * dx * dt
Yet, on basic test functions such as
g = lambda x, t: np.exp(-np.abs(x)) + t
h = lambda x, t: np.exp(-np.abs(x)) - t
the numerical integration does not agree with the convolution computed by scipy. I would like a fairly fast way of computing these integrals, especially when I only have access to discretised representations of the functions rather than their symbolic form.
According to your code, I assume you want to compute the convolution of two functions g and h that are non-zero only on [a, b] × [m, n].
You can indeed use signal.fftconvolve to compute the convolution. The key is not to forget the transformation between the indices inside discretF and the real coordinates. Here I use interpolation to evaluate the result at an arbitrary (x, t).
import numpy as np
from scipy import signal, interpolate
a = -1
b = 2
m = -10
n = 15
samples_num = 1000
x_eval_index = 200
t_eval_index = 300
x_samples = np.linspace(a, b, samples_num)
t_samples = np.linspace(m, n, samples_num)
dx = x_samples[1]-x_samples[0]
dt = t_samples[1]-t_samples[0]
g = lambda x,t: np.exp(-np.abs(x))+t
h = lambda x,t: np.exp(-np.abs(x))-t
discretG = g(x_samples[None, :], t_samples[:, None])
discretH = h(x_samples[None, :], t_samples[:, None])
discretF = signal.fftconvolve(discretG, discretH, mode='full')
def compute_f(x, t):
    # The convolution is supported on [2a, 2b] x [2m, 2n]; outside it is zero.
    if x < 2*a or x > 2*b or t < 2*m or t > 2*n:
        return 0
    # Use interpolation to get the value at an arbitrary point (x, t).
    x_samples_for_conv = np.linspace(2*a, 2*b, 2*samples_num-1)
    t_samples_for_conv = np.linspace(2*m, 2*n, 2*samples_num-1)
    f = interpolate.RectBivariateSpline(x_samples_for_conv, t_samples_for_conv, discretF.T)
    return f(x, t)[0, 0] * dx * dt
Note: you can extend my code to compute the convolution on a meshgrid defined by x and t, where x and t are 1D arrays (in my code, x and t are floats); see the sketch below.
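For example, a hedged sketch of that extension, reusing the arrays from the code above (RectBivariateSpline evaluates on the outer product of two sorted 1D arrays when called with its default grid=True):
x_samples_for_conv = np.linspace(2*a, 2*b, 2*samples_num-1)
t_samples_for_conv = np.linspace(2*m, 2*n, 2*samples_num-1)
f = interpolate.RectBivariateSpline(x_samples_for_conv, t_samples_for_conv, discretF.T)

# Evaluate F on a whole grid in one call instead of point by point.
x_grid = np.linspace(2*a, 2*b, 50)
t_grid = np.linspace(2*m, 2*n, 50)
F_grid = f(x_grid, t_grid) * dx * dt   # shape (50, 50)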
You can use the following code to explore the "agreement" between "the numerical integration" and "the convolution using scipy" (and also the correctness of the compute_f function above):
# How the convolution works: for 1D, f[i] = sum_j g[j] * h[i-j]
total = 0
for y_idx, y in enumerate(x_samples):
    for s_idx, s in enumerate(t_samples):
        if x_eval_index - y_idx < 0 or t_eval_index - s_idx < 0:
            continue
        if x_eval_index - y_idx >= len(x_samples) or t_eval_index - s_idx >= len(t_samples):
            continue
        total += discretG[t_eval_index - s_idx, x_eval_index - y_idx] * discretH[s_idx, y_idx] * dx * dt
print("Do discrete convolution manually, I get: %f" % total)
print("Do discrete convolution using scipy, I get: %f" % (discretF[t_eval_index, x_eval_index] * dx * dt))
# Numerical integration at the point (x_eval, t_eval).
# Take the 1D convolution as an example: the function is defined on [a, b] and
# the sample indices run over [0, samples_num-1]; after convolution the result
# is defined on [2a, 2b] with indices in [0, 2*samples_num-2].
dx_prime = (b-a) / (samples_num-1)
dt_prime = (n-m) / (samples_num-1)
x_eval = 2*a + x_eval_index * dx_prime
t_eval = 2*m + t_eval_index * dt_prime

total = 0
for y in x_samples:
    for s in t_samples:
        if x_eval - y < a or x_eval - y > b:
            continue
        if t_eval - s < m or t_eval - s > n:
            continue
        if y < a or y >= b:
            continue
        if s < m or s >= n:
            continue
        total += g(x_eval - y, t_eval - s) * h(y, s) * dx * dt
print("Do numerical integration, I get: %f" % total)
print("The convolution result of 'compute_f' is: %f" % compute_f(x_eval, t_eval))
Which gives:
Do discrete convolution manually, I get: -154.771369
Do discrete convolution using scipy, I get: -154.771369
Do numerical integration, I get: -154.771369
The convolution result of 'compute_f' is: -154.771369
So I'm tasked with using the 4th-order Runge-Kutta method to solve the 2nd-order differential equation of a damped oscillator.
My function for the Runge-Kutta method looks like this:
def RungeKutta(f, y0, x):
    y = np.zeros((len(x), len(y0)))
    y[0, :] = np.array(y0)
    h = x[1] - x[0]
    for i in range(0, len(x) - 1):
        k1 = h * np.array(f(y[i, :], x[i]))
        k2 = h * np.array(f(y[i, :] + k1/2, x[i] + h/2))
        k3 = h * np.array(f(y[i, :] + k2/2, x[i] + h/2))
        k4 = h * np.array(f(y[i, :] + k3, x[i] + h))
        y[i+1, :] = y[i, :] + k1/6 + k2/3 + k3/3 + k4/6
    return y
The RungeKutta function works fine, and I have tested it with a list of example inputs, so that doesn't seem to be the problem.
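For example, a check of that kind on dy/dt = -y with y(0) = 1 (exact solution exp(-t)) looks like this:
# Sanity check: integrate dy/dt = -y and compare against exp(-t).
t = np.linspace(0, 5, 101)
y = RungeKutta(lambda y, t: [-y[0]], [1.0], t)
print(y[-1, 0], np.exp(-5.0))  # the two values agree to several digits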
I'm given the question parameters and have to make a class to solve the problem:
class harmonicOscillator:
    def __init__(self, m, c, k):
        if (m > 0) and ((type(m) == int) or (type(m) == float)):
            self.m = m
        else:
            raise ValueError
        if (c > 0) and ((type(c) == int) or (type(c) == float)):
            self.c = c
        else:
            raise ValueError
        if (k > 0) and ((type(k) == int) or (type(k) == float)):
            self.k = k
        else:
            raise ValueError

    def period(self):
        self.T = 2 * np.pi * (self.m / self.k)**0.5
        return self.T

    def solve(self, func, y0):
        m = self.m
        c = self.c
        k = self.k
        T = self.T
        t = np.linspace(0, 10*T, 1000)
But I'm unsure how to progress from here. I've tried turning the 2nd-order differential equation into a lambda function, like so:
F = lambda X,t: [X[1], (-c) * X[1] + (-k) * X[0] + func(t)]
and then passing that into my RungeKutta function
result = RungeKutta(F, y0, t, func)
return(result)
But I'm not really well versed in lambda functions and am clearly going wrong somewhere.
An example input that it should pass would be something like this:
####### example inputs #######
m=1
c=0.5
k=2
a = harmonicOscillator(m,c,k)
a.period()
x0 = [0,0]
tHO,xHO= a.solve(lambda t: omega0**2,x0)
I would really appreciate some help. The requirement for the question is that I have to use the above RungeKutta function, but I'm just kind of lost at this point.
Thanks.
I think there may be some confusion between the external forcing term and the Runge-Kutta derivative helper function F. The F in RK4 returns the derivative dX/dt of the system of first-order differential equations X. The forcing term in a damped oscillator is unfortunately also called F, but it is a function of t.
One of your issues is that the arity (number of parameters) of your RungeKutta() function and your call to that function do not match: you tried to do RungeKutta(F, y0, t, func), but the RungeKutta() function only takes arguments (f, y0, x) in that order.
In other words, the f parameter in your current RungeKutta() function should encapsulate the forcing function F(t).
You can do this with helpers:
# A constant function in your case, but this can be any function of `t`
def applied_force(t):
    # Note, you did not provide a value for `omega0`
    return omega0 ** 2

def rk_derivative_factory(osc, F):
    return lambda X, t: np.array([X[1], (F(t) - osc.c * X[1] - osc.k * X[0]) / osc.m])
The rk_derivative_factory() is a function which takes the oscillator (carrying the mass, damping coefficient, and spring constant) and a forcing function F(t), and returns a function which takes a state X and a time t as arguments (because this is what is demanded of you by the implementation of RungeKutta()).
Then,
omega0 = 0.234
m, c, k = 1, 0.25, 2
oscillator = HarmonicOscillator(m, c, k)
f = rk_derivative_factory(oscillator, applied_force)
x_osc = oscillator.solve(f, [1, 0])
Where solve() is defined like so:
def solve(self, func, y0):
    t = np.linspace(0, 10 * self.period(), 1000)
    return RungeKutta(func, y0, t)
As an aside, I strongly recommend being more consistent about your variable names. You named the initial state of your oscillator x0, and were passing it to RungeKutta() as the argument for the parameter y0, and the x parameter of RungeKutta() represents time... Gets pretty confusing.
Full solution
Lastly, your implementation of RK4 was already mathematically correct (your k's simply absorb the factor of h), but I've restructured it slightly and made some other small improvements.
Note that one thing you might want to consider is making the HarmonicOscillator.solve() function take a solver; then you could play around with different integrators. A sketch follows.
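A hedged sketch of that idea (any callable with RungeKutta's (f, y0, x) signature would work as the solver argument):
def solve(self, func, y0, solver=RungeKutta):
    # `solver` is any integrator with the (f, y0, x) signature.
    t = np.linspace(0, 10 * self.period(), 1000)
    return solver(func, y0, t)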
import numpy as np

def RungeKutta(f, y0, x):
    y = np.zeros((len(x), len(y0)))
    y[0, :] = np.array(y0)
    h = x[1] - x[0]
    for i in range(0, len(x) - 1):
        # Many slight changes below
        k1 = np.array(f(y[i, :], x[i]))
        k2 = np.array(f(y[i, :] + h * k1 / 2, x[i] + h / 2))
        k3 = np.array(f(y[i, :] + h * k2 / 2, x[i] + h / 2))
        k4 = np.array(f(y[i, :] + h * k3, x[i] + h))
        y[i + 1, :] = y[i, :] + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
# A constant function in your case, but this can be any function of `t`
def applied_force(t):
    # Note, you did not provide a value for `omega0`
    return omega0 ** 2

def rk_derivative_factory(osc, F):
    return lambda X, t: np.array([X[1], (F(t) - osc.c * X[1] - osc.k * X[0]) / osc.m])
class HarmonicOscillator:
    def __init__(self, m, c, k):
        if (type(m) in (int, float)) and (m > 0):
            self.m = m
        else:
            raise ValueError("Parameter 'm' must be a positive number")
        if (type(c) in (int, float)) and (c > 0):
            self.c = c
        else:
            raise ValueError("Parameter 'c' must be a positive number")
        if (type(k) in (int, float)) and (k > 0):
            self.k = k
        else:
            raise ValueError("Parameter 'k' must be a positive number")
        self.T = 2 * np.pi * (self.m / self.k)**0.5

    def period(self):
        return self.T

    def solve(self, func, y0):
        t = np.linspace(0, 10 * self.period(), 1000)
        return RungeKutta(func, y0, t)
Demo:
import plotly.graph_objects as go

omega0 = 0.234
m, c, k = 1, 0.25, 2

oscillator = HarmonicOscillator(m, c, k)
f = rk_derivative_factory(oscillator, applied_force)
x_osc = oscillator.solve(f, [1, 0])

x, dx = x_osc.T
t = np.linspace(0, 10 * oscillator.period(), 1000)

fig = go.Figure(go.Scatter(x=t, y=x, name="x(t)"))
fig.add_trace(go.Scatter(x=t, y=dx, name="x'(t)"))
fig.show()
Output: a plot of x(t) and x'(t) over the ten simulated periods.
I want to use the Riemann method to numerically evaluate a parametric integral in Python. I would like to integrate with respect to x and obtain a function of t, but I don't know how to do this.
My function: f(x) = cos(2*pi*x*t); its integral over x in [-1/2, 1/2] is F(t) = sin(pi*t)/(pi*t).
def riemann(a, b, dx):
    if a > b:
        a, b = b, a
    n = int((b - a) / dx)
    s = 0.0
    x = a
    for i in range(n):
        f_i[k] = np.cos(2*np.pi*x)
        s += f_i[k]
        x += dx
    f_i = s * dx
    return f_i, t
There's nothing too horrible about your approach. The result does come out close to the true value:
import numpy as np

def riemann(a, b, dx):
    if a > b:
        a, b = b, a
    n = int((b - a) / dx)
    s = 0.0
    x = a
    for i in range(n):
        s += np.cos(2 * np.pi * x)
        x += dx
    return s * dx

print(riemann(0.0, 0.25, 1.0e-3))
print(1 / (2 * np.pi))
print(riemann(0.0, 0.25, 1.0e-3))
print(1 / (2 * np.pi))
0.15965441949277526
0.15915494309189535
Some remarks:
You wouldn't really call this the Riemann method; as written, it is the left-endpoint rectangle rule (one particular Riemann sum).
Pay a little more attention to the boundaries of your domain. With n = int((b - a) / dx), the sample points run from a to a + (n - 1)*dx, so truncation in n can make the covered interval fall slightly short of [a, b].
If you're looking for speed, best collect all your x values (perhaps with linspace or arange), evaluate the function once with all the points, and then np.sum the values up. (Loops in Python are slow.) A sketch follows.
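For instance, a vectorized version of the same left-endpoint rule (a sketch; riemann_vectorized is my name for it, and the integrand matches the example above):
import numpy as np

def riemann_vectorized(a, b, dx):
    x = np.arange(a, b, dx)                  # left endpoints a, a+dx, ...
    return np.sum(np.cos(2 * np.pi * x)) * dx

print(riemann_vectorized(0.0, 0.25, 1.0e-3))  # ~0.1597, close to 1/(2*pi)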
This code is not passing all the test cases; can somebody help? Only the straightforward tests pass, and beyond those it loses precision.
import math
import unittest
class IntegerMultiplier:
    def multiply(self, x, y):
        if x < 10 or y < 10:
            return x * y

        x = str(x)
        y = str(y)
        m_max = min(len(x), len(y))
        x = x.rjust(m_max, '0')
        y = y.rjust(m_max, '0')

        m = math.floor(m_max / 2)
        x_high = int(x[:m])
        x_low = int(x[m:])
        y_high = int(y[:m])
        y_low = int(y[m:])

        z1 = self.multiply(x_high, y_high)
        z2 = self.multiply(x_low, y_low)
        z3 = self.multiply((x_low + x_high), (y_low + y_high))
        z4 = z3 - z1 - z2

        return z1 * (10 ** m_max) + z4 * (10 ** m) + z2
class TestIntegerMultiplier(unittest.TestCase):
    def test_easy_cases(self):
        integerMultiplier = IntegerMultiplier()
        case2 = integerMultiplier.multiply(2, 2)
        self.assertEqual(case2, 4)
        case3 = integerMultiplier.multiply(2, 20000)
        self.assertEqual(case3, 40000)
        case4 = integerMultiplier.multiply(2000, 2000)
        self.assertEqual(case4, 4000000)

    def test_normal_cases(self):
        integerMultiplier = IntegerMultiplier()
        case1 = integerMultiplier.multiply(1234, 5678)
        self.assertEqual(case1, 7006652)

if __name__ == '__main__':
    unittest.main()
The first test, 'test_easy_cases', passes; for the other test I get an error, e.g. AssertionError: 6592652 != 7006652.
In choosing m, you choose a base (10**m) for all following decompositions and compositions. I recommend one of about half the length of the factors, so the high and low halves are balanced.
I have "no" idea why time and again implementing Karatsuba multiplication is attempted using operations on decimal digits - there are two places you need to re-inspect:
when splitting a factor f into high and low, low needs to be f mod m, high f // m
in the composition (last expression in IntegerMultiplier.multiply()), you need to stick with m (and 2×m) - using m_max is wrong every time m_max isn't even.
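A hedged sketch of the integer-arithmetic version both remarks point towards (the class name mirrors the question's code; splitting uses divmod on a base of 10**m instead of string slicing):
class IntegerMultiplier:
    def multiply(self, x, y):
        if x < 10 or y < 10:
            return x * y
        # Split on the base 10**m, with m about half the larger factor's length.
        m = max(len(str(x)), len(str(y))) // 2
        base = 10 ** m
        x_high, x_low = divmod(x, base)   # x == x_high * base + x_low
        y_high, y_low = divmod(y, base)
        z1 = self.multiply(x_high, y_high)
        z2 = self.multiply(x_low, y_low)
        z3 = self.multiply(x_low + x_high, y_low + y_high)
        z4 = z3 - z1 - z2
        # Recompose with m and 2*m (base and base**2), never m_max.
        return z1 * base * base + z4 * base + z2

print(IntegerMultiplier().multiply(1234, 5678))  # 7006652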
I took a cryptography course this semester in graduate school, and one of the topics we covered was NTRU. I am trying to code this in pure Python, purely as a hobby. When I attempt to find a polynomial's inverse modulo p (in this example p = 3), SymPy always returns negative coefficients, when I want strictly nonnegative coefficients. Here is the code I have; I'll explain what I mean.
import sympy as sym
from sympy import GF
def make_poly(N, coeffs):
    """Create a polynomial in x."""
    x = sym.Symbol('x')
    coeffs = list(reversed(coeffs))
    y = 0
    for i in range(N):
        y += (x**i)*coeffs[i]
    y = sym.poly(y)
    return y
N = 7
p = 3
q = 41
f = [1,0,-1,1,1,0,-1]
f_poly = make_poly(N,f)
x = sym.Symbol('x')
Fp = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(p))
Fq = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(q))
print('\nf =',f_poly)
print('\nFp =',Fp)
print('\nFq =',Fq)
In this code, f_poly is a polynomial with degree at most 6 (its degree is at most N-1), whose coefficients come from the list f (the first entry in f is the coefficient on the highest power of x, continuing in descending order).
Now, I want to find the inverse polynomial of f_poly in the convolution polynomial ring Rp = (Z/pZ)[x]/(x^N - 1)(Z/pZ)[x] (similarly for q). The output of the print statements at the bottom are:
f = Poly(x**6 - x**4 + x**3 + x**2 - 1, x, domain='ZZ')
Fp = Poly(x**6 - x**5 + x**3 + x**2 + x + 1, x, modulus=3)
Fq = Poly(8*x**6 - 15*x**5 - 10*x**4 - 20*x**3 - x**2 + 2*x - 4, x, modulus=41)
These polynomials are correct in modulus, but I would like to have nonnegative coefficients everywhere, as later on in the algorithm there is some centerlifting involved. The results should be
Fp = x^6 + 2x^5 + x^3 + x^2 + x + 1
Fq = 8x^6 + 26x^5 + 31x^4 + 21x^3 + 40x^2 + 2x + 37
The answers I'm getting are correct in modulus, but I think that SymPy's invert is changing some of the coefficients to negative variants, instead of staying inside the mod.
Is there any way I can update the coefficients of this polynomial to have only positive coefficients in modulus, or is this just an artifact of SymPy's function? I want to keep the SymPy Poly format so I can use some of its embedded functions later on down the line. Any insight would be much appreciated!
This seems to be down to how the finite field object implemented in GF "wraps" integers around the given modulus. The default behavior is symmetric, which means that any integer x for which x % modulus <= modulus // 2 maps to x % modulus, and otherwise maps to (x % modulus) - modulus. For example, with the (prime) modulus 7, GF(7)(3) == 3, whereas GF(7)(5) == -2. You can make GF always map to nonnegative numbers instead by passing the symmetric=False argument:
import sympy as sym
from sympy import GF
def make_poly(N, coeffs):
    """Create a polynomial in x."""
    x = sym.Symbol('x')
    coeffs = list(reversed(coeffs))
    y = 0
    for i in range(N):
        y += (x**i)*coeffs[i]
    y = sym.poly(y)
    return y
N = 7
p = 3
q = 41
f = [1,0,-1,1,1,0,-1]
f_poly = make_poly(N,f)
x = sym.Symbol('x')
Fp = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(p, symmetric=False))
Fq = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(q, symmetric=False))
print('\nf =',f_poly)
print('\nFp =',Fp)
print('\nFq =',Fq)
Now you'll get the polynomials you wanted. The output from the print(...) statements at the end of the example should look like:
f = Poly(x**6 - x**4 + x**3 + x**2 - 1, x, domain='ZZ')
Fp = Poly(x**6 + 2*x**5 + x**3 + x**2 + x + 1, x, modulus=3)
Fq = Poly(8*x**6 + 26*x**5 + 31*x**4 + 21*x**3 + 40*x**2 + 2*x + 37, x, modulus=41)
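As an aside (my observation; it assumes the symmetric flag is passed through Poly's modulus handling, which appears to be the case in current SymPy), the same convention shows up when constructing a Poly with a modulus directly:
import sympy as sym

x = sym.Symbol('x')
# Default symmetric representation wraps coefficients into (-p/2, p/2]:
print(sym.Poly(x + 5, x, modulus=7))                   # Poly(x - 2, x, modulus=7)
# symmetric=False keeps coefficients in [0, p):
print(sym.Poly(x + 5, x, modulus=7, symmetric=False))  # Poly(x + 5, x, modulus=7)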
Mostly as a note for my own reference, here's how you would get Fp using Mathematica:
Fp = PolynomialMod[Algebra`PolynomialPowerMod`PolynomialPowerMod[x^6 - x^4 + x^3 + x^2 - 1, -1, x, x^7 - 1], 3]
Output:
1 + x + x^2 + x^3 + 2 x^5 + x^6