Raise integers to negative integer powers in Python 2.7

I'm having a really hard time translating this MATLAB code to Python.
I'll show you my effort so far.
This is the MATLAB code:
Sigma=BW1/(2*(2*(-log(10^(att_bw/10)))^(1/Order))^(1/2))
Now I tried to use Python's power operator **, which I studied earlier this morning.
My code is
BW1 = np.array([100])
att_bw = np.array([-3])
Order = np.array([1])
Sigma = BW1/(2*(2*(-np.log(10**(att_bw[0]/10)))**(1/Order))**(1/2))
However, NumPy complains that it cannot raise integers to negative integer powers.
The result for Sigma should be 42.539.
EDIT: it seems my code runs perfectly fine in Python 3. However I'm stuck with Python 2.7. So is there any easy way to port it?

In Python 2 you need to make sure you use floating-point numbers. To get them, add a . after each integer literal in your formula, like this:
import numpy as np
BW1 = np.array([100])
att_bw = np.array([-3])
Order = np.array([1])
Sigma = BW1/(2.*(2.*(-np.log(10.**(att_bw[0]/10.)))**(1./Order))**(1./2.))
print Sigma
Output
[42.53892736]
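Alternatively, instead of rewriting every literal, you can switch / to true division with a future import and give the arrays a float dtype. This is a sketch that runs identically on Python 2.7 and Python 3:

```python
from __future__ import division  # makes / mean true division on Python 2.7
import numpy as np

BW1 = np.array([100.0])     # float dtype, so ** never sees a negative *integer* power
att_bw = np.array([-3.0])
Order = np.array([1.0])

Sigma = BW1 / (2 * (2 * (-np.log(10 ** (att_bw[0] / 10))) ** (1 / Order)) ** (1 / 2))
print(Sigma)  # [42.53892736]
```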

Related

Basic Python code works differently on new machine (Mac)

I just started using my new M1 MacBook and, since Python 2.7 is preinstalled, I wanted to run a little harmonic-series program in the terminal to compare against my old (Intel) Mac. A silly test, I know.
import time
t_1 = time.time()
x = 1
i = 1
while x < 20:
    i += 1
    x += 1/i
    if i % 1e6 == 0: print(i, x)
print(i)
t_2 = time.time()
print(str(t_2 - t_1) + ' s')
It turned out that only on the M1 Mac the program did not run as expected: x did not increase. I had to change x += 1/i to x += 1/float(i) so that x was treated as a float and not 'rounded' down to 1 (staying an int). I thought that, while the latter is actually the more correct way to program, Python is flexible with variables. My most important question is, of course: why does this work differently on different machines?
In Python 2.7, / between two integers performs integer (floor) division. If you add
from __future__ import division
at the top of your file, then / performs float (true) division.
If you still want to be able to use integer division, you can always use //.
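For example, with the future import in effect both operators remain available:

```python
from __future__ import division  # no effect on Python 3, changes / on Python 2

print(1 / 3)   # 0.333... (true division)
print(1 // 3)  # 0 (floor division, on both Python 2 and 3)
```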

scipy.optimize.root returning incorrect solution

I am trying to solve a system of simultaneous equations as follows:
"145.0x/21025 = -0.334"
"(-48.402x-96.650y+96.650z)/21025 = -0.334"
"(-48.402x+132.070y+35.214z)/21025 = -0.334"
"sqrt(x^2+y^2+z^2) = 145.0"
I am using the following Python script:
from scipy.optimize import root
from numpy import sqrt
from sys import argv, stdout
initGuesses = eval(argv[1])
equations = argv[2:]
def f(variables):
    x, y, z = variables
    results = []
    for eqn in equations:
        results.append(eval(eqn))
    return results
solution = root(f, initGuesses, method="lm")
stdout.write(str(solution["x"][0]) + "," + str(solution["x"][1]) + "," + str(solution["x"][2]))
stdout.flush()
The program is called as follows:
python3 SolvePosition3D.py "(1,1,1)" "(145.0*x+0.0*y+0.0*z)/21025.0+0.334" "(-48.402*x-96.650*y+96.650*z)/21025+0.334" "(-48.402*x+132.070*y+35.214*z)/21025+0.334" "sqrt(x**2+y**2+z**2)-145.0"
And I am receiving the following output:
48.2699997956,35.4758788666,132.042180583
This solution is wrong; the correct solution is roughly
-48, -35, -132
i.e. the same numbers multiplied by -1.
The answer returned by the program satisfies the final equation, but violates all others.
Does anyone know why this is happening? Getting the correct solutions to these equations (and many others like them) is vitally important to my current project.
I was able to run the code by adding
from numpy import sqrt
from scipy.optimize import root
and switching to regular old prints for the solution.
Your starting guess seems to be wrong. Starting from (1, 1, 1), the root finder converges to 48.2699997956,35.4758788666,132.042180583, as you said. If you plug in (-1,-1,-1), you get -48.2649482763,-35.4698607274,-132.050694891 as expected.
As for why it does that: nonlinear systems are simply hard to solve. Most algorithms find a solution deterministically based on the starting point. If you need more robustness, try multiple starting points in a grid-based search.
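A minimal sketch of such a grid search over starting guesses, with the system written as a plain function instead of eval'd strings (the coefficients are taken from the question):

```python
import itertools
import numpy as np
from scipy.optimize import root

def f(v):
    x, y, z = v
    return [
        145.0 * x / 21025.0 + 0.334,
        (-48.402 * x - 96.650 * y + 96.650 * z) / 21025.0 + 0.334,
        (-48.402 * x + 132.070 * y + 35.214 * z) / 21025.0 + 0.334,
        np.sqrt(x**2 + y**2 + z**2) - 145.0,
    ]

# Try a coarse grid of starting points and keep the solution
# with the smallest residual norm across all four equations.
best = None
for guess in itertools.product([-100.0, 0.0, 100.0], repeat=3):
    sol = root(f, guess, method="lm")
    residual = np.linalg.norm(f(sol.x))
    if best is None or residual < best[1]:
        best = (sol.x, residual)

print(best[0])  # the grid includes negative starts, which converge to x, y, z < 0
```

Note that the mirrored positive solution satisfies only the distance equation, so its residual is large and the grid search discards it.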

Sympy: Expanding sum that involves Kets from its quantum module

Today I started using sympy and its quantum module to implement some basic calculations in Bra-Ket notation.
Executing the code:
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import *
from sympy import *
from sympy.abc import k
print Sum(Ket(k),(k,0,5))
yields the expected result, that is, Sum(|k>, (k, 0, 5)) is printed.
Now I'd like to expand the sum and therefore write:
print Sum(Ket(k),(k,0,5)).doit()
However, this doesn't give the correct result, but prints out 6*|k> which obviously is not the desired output. Apparently, the program doesn't recognize Ket(k) as depending on the index k.
How could I work around or solve this issue?
Looks like a bug. You can probably work around it by doing the sum outside of sympy, with standard python functions like sum(Ket(i) for i in range(6)).
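A sketch of that workaround (the built-in sum starts from 0, which SymPy absorbs into the resulting Add):

```python
from sympy import Add
from sympy.physics.quantum import Ket

# Build the expanded sum with plain Python instead of Sum(...).doit()
expanded = sum(Ket(i) for i in range(6))
print(expanded)  # an Add of the six kets |0> + |1> + ... + |5>
```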

Calculating pi in Python - Leibniz

I am starting my adventure with Python. My current program is very simple: it has to calculate pi using the Leibniz formula and stop when the absolute value of the variable a is less than x. So far, it looks like this:
from math import fabs
from math import pi
x=float(input('Enter accuracy of calculating:'))
sum=0
a=0
n=0
while True:
    a = float(((-1)**n)/(2*n+1))
    sum += a
    n += 1
    result = sum*4
    if fabs(a) < x:
        break
print('Calculated pi:',result)
print('Imported pi:', pi)
It looks OK, but here's the problem:
In my version of Geany it works great, but in my friend's Geany it calculates 0.0.
Also, on Ideone.com (without keyboard input, e.g. with x=0.0001 hard-coded) it also returns 0.0.
Does anyone know where the problem is?
Try this
a=((-1)**n)/float(2*n+1)
instead of this
a=float(((-1)**n)/(2*n+1))
Reason: there is no point in converting a to a float after the division; making the dividend or divisor a float ensures Python doesn't discard the remainder (integer division is the default before Python 3.0). In Python 2, float(((-1)**n)/(2*n+1)) first performs integer division, which yields 0 or -1, and only then converts the already-truncated result, which is why the program prints 0.0 there.
Side issue: your style doesn't match the official Python style guide (PEP 8), so you might want to change that.
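Putting it together, here is a sketch of the program that behaves the same on Python 2 and 3, with the accuracy hard-coded instead of read from input() and total used instead of shadowing the built-in sum:

```python
from __future__ import division   # / is true division on both Python 2 and 3
from math import fabs, pi

x = 0.0001      # accuracy threshold (hard-coded instead of input())
total = 0.0     # renamed from `sum` to avoid shadowing the built-in
n = 0
while True:
    a = (-1)**n / (2*n + 1)   # Leibniz term; true division thanks to the import
    total += a
    n += 1
    if fabs(a) < x:
        break
result = total * 4
print('Calculated pi:', result)
print('Imported pi:', pi)
```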

Python NumPy log2 vs MATLAB

I'm a Python newbie coming from using MATLAB extensively. I was converting some code that uses log2 in MATLAB and I used the NumPy log2 function and got a different result than I was expecting for such a small number. I was surprised since the precision of the numbers should be the same (i.e. MATLAB double vs NumPy float64).
MATLAB Code
a = log2(64);
--> a=6
Base Python Code
import math
a = math.log2(64)
--> a = 6.0
NumPy Code
import numpy as np
a = np.log2(64)
--> a = 5.9999999999999991
Modified NumPy Code
import numpy as np
a = np.log(64) / np.log(2)
--> a = 6.0
So the native NumPy log2 function gives a result that causes the code to fail a test since it is checking that a number is a power of 2. The expected result is exactly 6, which both the native Python log2 function and the modified NumPy code give using the properties of the logarithm. Am I doing something wrong with the NumPy log2 function? I changed the code to use the native Python log2 for now, but I just wanted to know the answer.
No, there is nothing wrong with your code; most real numbers simply cannot be represented exactly in binary floating point. Always compare floats within an epsilon (tolerance) rather than for exact equality. Read The Floating Point Guide and this post to learn more.
EDIT - As cgohlke has pointed out in the comments,
Depending on the compiler used to build numpy np.log2(x) is either computed by the C library or as 1.442695040888963407359924681001892137*np.log(x) See this link.
This may be a reason for the erroneous output.
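For the power-of-two test, a sketch of the tolerance-based comparison (np.isclose is one way to express it):

```python
import numpy as np

a = np.log2(64)          # may be 6.0 or 5.999... depending on the NumPy build

# Fragile: exact equality can fail on some builds
# is_pow2 = (a == 6.0)

# Robust: round to the nearest integer, check the rounding error is tiny,
# then verify that power of two exactly with integer arithmetic
is_pow2 = np.isclose(a, np.round(a)) and 2 ** int(np.round(a)) == 64
print(is_pow2)  # True
```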
