I have this line of MATLAB code:
a/b
I am using these inputs:
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
b = ones(25, 18)
This is the result (a 1x25 matrix):
[5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
What is MATLAB doing? I am trying to duplicate this behavior in Python, and the mrdivide documentation in MATLAB was unhelpful. Where does the 5 come from, and why are the rest of the values 0?
I have tried this with other inputs and receive similar results, usually just a different first element and zeros filling the remainder of the matrix. In Python when I use linalg.lstsq(b.T,a.T), all of the values in the first matrix returned (i.e. not the singular one) are 0.2. I have already tried right division in Python and it gives something completely off with the wrong dimensions.
I understand what a least square approximation is, I just need to know what mrdivide is doing.
Related:
Array division- translating from MATLAB to Python
MRDIVIDE or the / operator actually solves the xb = a linear system, as opposed to MLDIVIDE or the \ operator which will solve the system bx = a.
To solve a system xb = a with a non-symmetric, non-invertible matrix b, you can rely either on mrdivide(), which works via a factorization of b with Gaussian elimination, or on pinv(), which works via singular value decomposition, zeroing the singular values below a (default) tolerance level.
Here is the difference (for the case of mldivide): What is the difference between PINV and MLDIVIDE when I solve A*x=b?
When the system is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)). MLDIVIDE will pick the solution with the least number of non-zero elements.
In your example:
% solve xb = a
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9];
b = ones(25, 18);
the system is underdetermined, and the two different solutions will be:
x1 = a/b; % MRDIVIDE: sparsest solution (min L0 norm)
x2 = a*pinv(b); % PINV: minimum norm solution (min L2)
>> x1 = a/b
Warning: Rank deficient, rank = 1, tol = 2.3551e-014.
x1 =
5.0000 0 0 ... 0
>> x2 = a*pinv(b)
x2 =
0.2 0.2 0.2 ... 0.2
In both cases the residual x*b - a is non-negligible (the solution is not exact) and identical for the two solutions, i.e. norm(x1*b-a) and norm(x2*b-a) will return the same result.
What is MATLAB doing?
A great break-down of the algorithms (and checks on properties) invoked by the '\' operator, depending upon the structure of matrix b, is given in this post on scicomp.stackexchange.com. I am assuming similar options apply for the / operator.
For your example, MATLAB is most probably doing Gaussian elimination, giving the sparsest solution amongst an infinitude of candidates. Here is where the 5 comes from: every row of b is identical, so x*b equals sum(x) times a row of ones, and the best least-squares fit requires sum(x) = mean(a) = 5; the sparsest x satisfying that is [5, 0, ..., 0].
What is Python doing?
Python's linalg.lstsq uses the pseudo-inverse/SVD, as demonstrated above (that's why you get a vector of 0.2's). In effect, both of the following give the same result as MATLAB's pinv():
import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9])
b = np.ones((25, 18))

# xb = a: solve b.T x.T = a.T instead
x2_lstsq = np.linalg.lstsq(b.T, a.T, rcond=None)[0]
x2_pinv = a @ np.linalg.pinv(b)
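As a quick check of the claim above that both solutions leave the same residual (my own addition, using the question's a and b):
import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b = np.ones((25, 18))

x1 = np.zeros(25); x1[0] = 5.0   # the sparse solution MATLAB returned
x2 = a @ np.linalg.pinv(b)       # the minimum-norm solution (all 0.2)

# both print the same value, about 10.954 for this data
print(np.linalg.norm(x1 @ b - a))
print(np.linalg.norm(x2 @ b - a))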
TL;DR: A/B = np.linalg.solve(B.conj().T, A.conj().T).conj().T
I did not find that the earlier answers provided a satisfactory substitute, so I dug further into MATLAB's reference documents for mrdivide and found the solution. I cannot explain the actual mathematics here or take credit for coming up with the answer; I'm just following MATLAB's explanation. Additionally, I wanted to post the actual detail from MATLAB to give credit. If it's a copyright issue, someone tell me and I'll remove the actual text.
%/ Slash or right matrix divide.
% A/B is the matrix division of B into A, which is roughly the
% same as A*INV(B) , except it is computed in a different way.
% More precisely, A/B = (B'\A')'. See MLDIVIDE for details.
%
% C = MRDIVIDE(A,B) is called for the syntax 'A / B' when A or B is an
% object.
%
% See also MLDIVIDE, RDIVIDE, LDIVIDE.
% Copyright 1984-2005 The MathWorks, Inc.
Note that the ' symbol indicates the complex conjugate transpose. In Python with numpy, that requires .conj().T chained together.
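Putting that identity to work, here is a minimal numpy sketch of an mrdivide-like helper (my own naming, not a numpy function). It uses solve() when B is square, assuming it is nonsingular, and falls back to a least-squares solve otherwise; note that for a rank-deficient B the fallback returns the minimum-norm solution, not the sparse one MATLAB picks.
import numpy as np

def mrdivide(A, B):
    # A/B = (B' \ A')' per the MATLAB help text above
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    Bct, Act = B.conj().T, A.conj().T
    if B.shape[0] == B.shape[1]:
        X = np.linalg.solve(Bct, Act)                  # square, assumed nonsingular
    else:
        X, *_ = np.linalg.lstsq(Bct, Act, rcond=None)  # least-squares fallback
    return X.conj().T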
Per this handy "cheat sheet" of numpy for MATLAB users, the equivalent is linalg.lstsq(b, a), where linalg is numpy.linalg, a light-weight version of the full scipy.linalg.
a/b finds the least-squares solution to the system of linear equations x*b = a.
If b is invertible, this is a*inv(b); if it isn't, then it is the x which minimises norm(x*b - a).
You can read more about least squares on Wikipedia.
According to the MATLAB documentation, mrdivide will return at most k non-zero values, where k is the computed rank of b. My guess is that MATLAB in your case solves the least-squares problem obtained by replacing b with b(1,:) (which has the same rank). In this case the Moore-Penrose inverse b2 = b(1,:); inv(b2*b2')*b2*a' is defined and gives the same answer.
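That guess is easy to check numerically (a small numpy sketch, my own addition): with only the first row of b, the formula above reduces to mean(a) = 5.
import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b2 = np.ones((1, 18))   # b(1,:), same rank (1) as the full b

# inv(b2*b2') * b2 * a' from the answer above
x = np.linalg.inv(b2 @ b2.T) @ (b2 @ a)
print(x)                # [5.], matching MATLAB's leading value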
I want to find numerical solutions to the following exponential equation where a,b,c,d are constants and I want to solve for r, which is not equal to 1.
a^r + b^r = c^r + d^r (Equation 1)
I define a function in order to use Scipy.optimize.fsolve:
from scipy.optimize import fsolve

def func(r, a, b, c, d):
    if r == 1:
        return 10**5
    else:
        return (a**(1 - r) + b**(1 - r)) - (c**(1 - r) + d**(1 - r))

fsolve(func, 0.1, args=(5, 5, 4, 7))
However, the fsolve always returns 1 as the solution, which is not what I want. Can someone help me with this issue? Or in general, tell me how to solve (Equation 1). I used an online numerical solver long time ago, but I cannot find it anymore. That's why I am trying to figure it out using Python.
You need to apply some mathematical reasoning when choosing the initial guess. Consider your problem f(r) = (5^(1-r) + 5^(1-r)) - (4^(1-r) + 7^(1-r)).
When r ≤ 1, f(r) is always negative and decreasing (since 7^(1-r) grows much faster than the other terms). Therefore, all root-finding algorithms will be pushed right towards 1 until reaching the solution at r = 1.
You need to pick a point far away from 1 on the right to find the nontrivial solution:
>>> scipy.optimize.fsolve(lambda r: 5**(1-r)+5**(1-r)-4**(1-r)-7**(1-r), 2.0)
array([ 2.48866034])
Simply setting f(1) = 10^5 is not going to have any effect, as the root-finding algorithm won't check f(1) until the very last step (note).
If you wish to apply a penalty, the penalty must be applied to a range of value around 1. One way to do so, without affecting the position of other roots, is to divide the whole function by (r − 1):
>>> scipy.optimize.fsolve(lambda r: (5**(1-r)+5**(1-r)-4**(1-r)-7**(1-r)) / (r-1), 0.1)
array([ 2.48866034])
(note): they may climb like f(0.1) → f(0.4) → f(0.7) → f(0.86) → f(0.96) → f(0.997) → … and stop as soon as |f(x)| < 10^-5, so your f(1) is never evaluated
First off, your code seems to use a different equation than your question: 1-r instead of just r.
Valid answers to the equation are 1 and approximately 2.4886, as can be seen here. With the second argument of fsolve you specify a starting estimate. I think that because 0.1 is close to 1, you get that result. Using 2.1 as the starting estimate, I get the other answer, 2.4886.
from scipy.optimize import fsolve

def func(r, a, b, c, d):
    if r == 1:
        return 10**5
    else:
        return (a**(1 - r) + b**(1 - r)) - (c**(1 - r) + d**(1 - r))

print(fsolve(func, 2.1, args=(5, 5, 4, 7)))
Choosing a starting estimate is tricky, as many choices give the following error: ValueError: Integers to negative integer powers are not allowed.
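A likely cause of that error (my reading, not stated in the answer): when the starting estimate is a plain Python integer, fsolve passes func an integer array, so 1 - r is an integer and an expression like 5**(1 - r) with a negative integer exponent raises. Casting the guess (and, to be safe, the constants) to float avoids it:
from scipy.optimize import fsolve

def func(r, a, b, c, d):
    return (a**(1 - r) + b**(1 - r)) - (c**(1 - r) + d**(1 - r))

# fsolve(func, 2, args=(5, 5, 4, 7)) raises the ValueError above;
# a float starting estimate works fine
print(fsolve(func, 2.0, args=(5.0, 5.0, 4.0, 7.0)))   # about [2.48866]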
When I run my python 3 program:
exp = 211
p = 199
q = 337
d = (exp ** (-1)) % ((p - 1)*(q - 1))
d ends up as just 211^(-1) (a small float), not the value I expected.
But when I run the calculation in wolfram alpha I get the result I was expecting.
I did some test outputs and the variables exp, p and q in the program are all the integer values I used in wolfram alpha.
My goal is to derive a private key from a (weakly) encrypted integer.
If I test my wolfram alpha result, I can decrypt the encrypted message correctly.
Wolfram Alpha is computing the modular inverse. That is, it's finding the integer x such that
exp*x == 1 mod (p - 1)*(q - 1)
That is not what the % operator does here. Given the expression in your question, Python simply calculates the remainder when 1/exp is divided by (p - 1)*(q - 1).
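To see this concretely (a quick sketch using the question's numbers):
exp, p, q = 211, 199, 337
d = (exp ** (-1)) % ((p - 1) * (q - 1))
print(d)   # ~0.0047393..., i.e. the float 1/211, which is already below the modulus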
Copying the Python code from this answer, you can compute the desired value with Python too:
>>> modinv(exp, (p - 1)*(q - 1))
45403
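For reference, that modinv helper is typically implemented with the extended Euclidean algorithm; here is a sketch of that standard approach (not necessarily identical to the linked answer's code):
def modinv(a, m):
    # extended Euclidean algorithm: maintain t*a == r (mod m)
    t, new_t = 0, 1
    r, new_r = m, a % m
    while new_r != 0:
        q = r // new_r
        t, new_t = new_t, t - q * new_t
        r, new_r = new_r, r - q * new_r
    if r != 1:
        raise ValueError("inverse does not exist")
    return t % m

exp, p, q = 211, 199, 337
print(modinv(exp, (p - 1) * (q - 1)))   # 45403
On Python 3.8+ you can also simply use the built-in pow with a negative exponent: pow(exp, -1, (p - 1)*(q - 1)) returns the same 45403.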
Wolfram Alpha does not have well-defined syntax. It takes arbitrary text you provide and attempts to figure out what you meant by that input. In this case, it decided you were probably looking for a modular inverse, and it gave you one.
Python has well-defined syntax. In Python, the parser does not take the ** and the % together and guess that that combination makes the two operators have a meaning other than their usual meaning. The ** is computed the usual way, and then % is the modulo operator. If you want a modular inverse, you'll have to write one yourself.
I think the idea here is that Wolfram Alpha and Python define the modulo operation differently, depending on whether you are dealing with integers or real numbers.
In this case, Wolfram Alpha is using the modular inverse because it detects that the first number is a real number with 0 < x < 1.
More information about the definition on real numbers here
Python evaluates immediately (211^(-1) gets computed as 0.004739... and is not kept as the fraction 1/211), and the modular Euclidean remainder for x and y is conventionally defined as x - floor(x/y)*y if either of x, y is a rational number. If you do your calculation with a dedicated number-theoretic program like e.g. GP/Pari
ep = 211; p = 199; q = 337; (ep^(-1)) % ((p - 1)*(q - 1))
you will get the result you expected, because it (a) keeps fractions as fractions as long as possible and (b) knows about modular arithmetic.
If you like Python, you may take a look at the programs and libraries offered at SciPy. SymPy might be what you are looking for.
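SymPy does indeed ship a modular-inverse helper, so the whole computation can stay in Python (a small sketch):
from sympy import mod_inverse

exp, p, q = 211, 199, 337
print(mod_inverse(exp, (p - 1) * (q - 1)))   # 45403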
Trying to compute the following lines, I'm getting a really complex result.
from sympy import *
s = symbols("s")
t = symbols("t")
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t)
The result is the following:
(-(I*exp(-t/10)*sin(3*sqrt(11)*t/10) - exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(-3*sqrt(11)*I/5)*gamma(-1/10 - 3*sqrt(11)*I/10)/(gamma(9/10 - 3*sqrt(11)*I/10)*gamma(1 - 3*sqrt(11)*I/5)) + (I*exp(-t/10)*sin(3*sqrt(11)*t/10) + exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(3*sqrt(11)*I/5)*gamma(-1/10 + 3*sqrt(11)*I/10)/(gamma(9/10 + 3*sqrt(11)*I/10)*gamma(1 + 3*sqrt(11)*I/5)) + gamma(1/10 - 3*sqrt(11)*I/10)*gamma(1/10 + 3*sqrt(11)*I/10)/(gamma(11/10 - 3*sqrt(11)*I/10)*gamma(11/10 + 3*sqrt(11)*I/10)))*Heaviside(t)
However, the answer should be simpler, as Wolfram Alpha shows.
Is there any way to simplify this result?
I tried a bit with this one and the way I could find a simpler solution is using something like:
from sympy import *
s = symbols("s")
t = symbols("t", positive=True)
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t).evalf().simplify()
Notice that I define t as a positive variable; otherwise the sympy function returns a large term followed by the Heaviside function. The result still contains many gamma functions that I could not reduce to the expression returned by Wolfram. Using evalf(), some of those are converted to their numeric values, and after simplification you get an expression similar to the one in Wolfram, but with floating-point numbers.
Unfortunately this part of SymPy is not quite mature. I also tried with Maxima and the result is quite close to the one in Wolfram, so it seems that Wolfram is not doing anything really special there.
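If you want to experiment further, splitting h into partial fractions before transforming is a natural thing to try, since each term is then inverted separately (a sketch; whether it yields a shorter closed form depends on the SymPy version):
from sympy import symbols, inverse_laplace_transform, apart, simplify

s = symbols("s")
t = symbols("t", positive=True)
h = 1/(s**3 + s**2/5 + s)

# apart() rewrites h as 1/s minus a proper second-order term,
# which the transform handles term by term
result = inverse_laplace_transform(apart(h, s), s, t)
print(simplify(result))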
I am currently working on a project which involves translating a program which runs in MATLAB to Python to increase speed and efficiency. However, I have hit a stumbling block. First, I am confused as to what the tilde (~) indicates in MATLAB, and how to represent it in a corresponding way in Python. Second, I have been searching through documentation and I'm also having a difficult time finding an equivalent for MATLAB's sign function.
indi = ~abs(indexd);
wav = (sum(sum(wv)))/(length(wv)*(length(wv)-1));
thetau = (sign(sign(wv - wav) - 0.1) + 1)/2;
thetad = (sign(sign(wav - wv) - 0.1) + 1)/2;
I have already converted indexd, and wv, which are from a previous section of code, to numpy arrays. What is the most efficient Pythonic way to replace the ~ and sign functions?
If you're using numpy, ~ inverts things much like MATLAB, with one caveat: numpy's ~ is logical NOT only on boolean arrays (on integer arrays it is bitwise NOT, and on float arrays it raises an error), so for a numeric indexd the robust translation of ~abs(indexd) is np.logical_not(np.abs(indexd)). See: What does the unary operator ~ do in numpy?. The sign function also exists in numpy: numpy.sign. Therefore, the code above is simply:
>>> import numpy as np
>>> indi = np.logical_not(np.abs(indexd))
>>> wav = np.sum(wv) / (len(wv)*(len(wv) - 1))
>>> thetau = (np.sign(np.sign(wv - wav) - 0.1) + 1)/2
>>> thetad = (np.sign(np.sign(wav - wv) - 0.1) + 1)/2
Be advised that using length in MATLAB on matrices finds the largest dimension of the matrix, whereas Python's len gives you the number of rows of a numpy array. Assuming that the number of rows in wv is greater than or equal to the number of columns in wv, the above code will work as you expect. However, if you have more columns than rows, what you'd need to do is find the maximum of the dimensions and use that instead... so:
>>> import numpy as np
>>> maxdim = np.max(wv.shape)
>>> indi = np.logical_not(np.abs(indexd))
>>> wav = np.sum(wv) / (maxdim*(maxdim - 1))
>>> thetau = (np.sign(np.sign(wv - wav) - 0.1) + 1)/2
>>> thetad = (np.sign(np.sign(wav - wv) - 0.1) + 1)/2
The above call to numpy.sum actually sums over all of the dimensions by default, so there's no need to call nested sum calls to sum over the entire matrix (Thanks Divakar!).
Totally recommend you go here and see the awesome table and guide for translating from MATLAB to numpy: Link