The question is:
How do I verify that sin^2(x) + cos^2(x) = 1 for x = π, π/2, π/4, π/6?
I have no idea how to approach this problem.
Please Help!
Any help would be greatly appreciated
First, ^ in Python is bitwise XOR. To raise x to the y power, you do x ** y. To do math operations like sin and cos, you would use the built-in math module. Lastly, to check if a value is equal to another value, you use ==. So checking what you wrote out would look like this:
import math
x = math.pi
print(math.sin(x) ** 2 + math.cos(x) ** 2 == 1)
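Comparing floats with == can be fragile, though, so a slightly safer sketch that checks all four values from the question with math.isclose (available since Python 3.5) might look like this:
import math
for x in [math.pi, math.pi / 2, math.pi / 4, math.pi / 6]:
    lhs = math.sin(x) ** 2 + math.cos(x) ** 2
    # isclose tolerates the tiny rounding error instead of demanding exact equality
    print(x, math.isclose(lhs, 1.0))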
I'm working in somewhat of a limited development environment. I'm writing a neural network in Python. I don't have access to numpy and as it is I can't even import the math module. So my options are limited. I need to calculate the sigmoid function, however I'm not sure how the exp() function works under the hood. I understand exponents and that I can use code like:
base = .57
exp = base ** exponent
However I'm not sure what exponent should be? How do functions like numpy.exp() calculate the exponent? This is what I need to replicate.
The exponential function exp(a) is equivalent to e ** a, where e is Euler's number.
>>> e = 2.718281828459045
>>> def exp(a):
...     return e ** a
...
>>> import math # accuracy test
>>> [math.exp(i) - exp(i) for i in range(1, 12, 3)]
[0.0, 7.105427357601002e-15, 2.2737367544323206e-13, 1.4551915228366852e-11]
def sigmoid(z):
    e = 2.718281828459
    return 1.0 / (1.0 + e ** (-1.0 * z))

# This is the formula for sigmoid in pure Python,
# where z = hypothesis. You have to find the value of hypothesis.
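As a quick sanity check of the definition above, sigmoid(0) should come out to 0.5, and large positive inputs should approach 1:
print(sigmoid(0))    # 0.5
print(sigmoid(10))   # roughly 0.99995
print(sigmoid(-10))  # roughly 4.5e-05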
You can use ** just fine for your use case; it will work with both float and integer input:
print(2**3)
8
print(2**0.5)
1.4142135623730951
If you really need a drop-in replacement for numpy.exp(), you can just write a function that behaves the way the docs describe: https://numpy.org/doc/stable/reference/generated/numpy.exp.html
from typing import List, Union

def not_numpy_exp(x: Union[List[float], float]):
    e = 2.718281828459045  # close enough
    if isinstance(x, list):
        return [e ** _x for _x in x]
    else:
        return e ** x
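For example, calling the not_numpy_exp function above with a scalar and with a list:
print(not_numpy_exp(1.0))         # roughly 2.718281828459045
print(not_numpy_exp([0.0, 1.0]))  # roughly [1.0, 2.718281828459045]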
"how the exp() function works under the hood"
If you mean math.exp from the built-in math module, then in this case it simply does:
exp(x, /)
Return e raised to the power of x.
where e should be understood as math.e (2.718281828459045). If import math is not allowed, you might do
pow(2.718281828459045, x)
instead of exp(x)
def GetE(x1, x2, k, x, z, N):
firstHeight = math.exp(((k/(2*math.pi*z)) * ((x-x1) ** 2))j)
My function gives me a syntax error on the line defining firstHeight. I believe it has to do with not being able to define a complex number with variables, as I have also tried:
test = 2 + (k)j
and also received a syntax error. Does anyone know how to fix this?
math does not support complex numbers, for that you have cmath:
import math, cmath
cmath.exp(((k/(2*math.pi*z)) * ((x-x1) ** 2))*1j)
# (0.998966288513345+0.045457171204028084j)
Or you could use NumPy:
np.exp(((k/(2*np.pi*z)) * ((x-x1) ** 2))*1j)
#(0.998966288513345+0.045457171204028084j)
That, and as @GreenCloakGuy points out, you can't use the j suffix to turn a non-literal into a complex number; you can use complex() or multiply by 1j instead.
The j suffix can only be used in an imaginary literal, not with variables. To get a negative imaginary number from a variable, multiply the variable by -1j.
firstHeight = cmath.exp(((k/(2*math.pi*z)) * ((x-x1) ** 2)) * -1j)  # note cmath.exp: math.exp rejects complex arguments
test = 2 + k * -1j
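Putting it together, a self-contained sketch with made-up placeholder values (the question does not give concrete numbers for k, z, x and x1):
import cmath, math
k, z, x, x1 = 1.0, 2.0, 3.0, 2.5  # hypothetical sample values
firstHeight = cmath.exp((k / (2 * math.pi * z)) * ((x - x1) ** 2) * 1j)
print(firstHeight)  # a complex number of magnitude 1 (e raised to a purely imaginary power)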
Does anyone know why the below doesn't equal 0?
import numpy as np
np.sin(np.radians(180))
or:
np.sin(np.pi)
When I enter it into python it gives me 1.22e-16.
The number π cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you π, it gives you 3.1415926535897931.
And sin(3.1415926535897931) is in fact something like 1.22e-16.
So, how do you deal with this?
You have to work out, or at least guess at, appropriate absolute and/or relative error bounds, and then instead of x == y, you write:
abs(y - x) < abs_bounds and abs(y-x) < rel_bounds * y
(This also means that you have to organize your computation so that the relative error is larger relative to y than to x. In your case, because y is the constant 0, that's trivial—just do it backward.)
Numpy provides a function that does this for you across a whole array, allclose:
np.allclose(x, y, rel_bounds, abs_bounds)
(This actually checks abs(y - x) < abs_bounds + rel_bounds * y, but that's almost always sufficient, and you can easily reorganize your code when it's not.)
In your case:
np.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)
So, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.
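For instance, with those default tolerances (rtol=1e-5, atol=1e-8) the check from the question passes:
import numpy as np
print(np.sin(np.radians(180)))                  # about 1.2246e-16, not exactly 0
print(np.allclose(0, np.sin(np.radians(180))))  # True
print(np.isclose(0, np.sin(np.radians(180))))   # True; the scalar version of the same check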
One solution is to switch to sympy when calculating sines and cosines, then switch back to a numeric value using the sp.N(...) function:
>>> # Numpy not exactly zero
>>> import numpy as np
>>> np.cos(np.pi/2)
6.123233995736766e-17
>>> # Sympy workaround
>>> import sympy as sp
>>> def scos(x): return sp.N(sp.cos(x))
>>> def ssin(x): return sp.N(sp.sin(x))
>>> scos(sp.pi/2)
0
Just remember to use sp.pi instead of np.pi when calling the scos and ssin functions.
I faced the same problem:
import math
import numpy as np
print(np.cos(math.radians(90)))
>> 6.123233995736766e-17
and tried this,
print(np.around(np.cos(math.radians(90)), decimals=5))
>> 0
This worked in my case. I set decimals=5 so as not to lose too much information; as you'd expect, the rounding discards everything after the fifth decimal place.
Try this... it zeros anything below a given tiny-ness value...
import numpy as np
def zero_tiny(x, threshold):
    if x.dtype == complex:
        x_real = x.real
        x_imag = x.imag
        if np.abs(x_real) < threshold: x_real = 0
        if np.abs(x_imag) < threshold: x_imag = 0
        return x_real + 1j * x_imag
    else:
        return x if np.abs(x) > threshold else 0
value = np.cos(np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
value = np.exp(-1j*np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
Python evaluates its trig functions using series expansions along the lines of the Taylor series, and since such an expansion has infinitely many terms, the result is only an approximation rather than an exact value.
For example:
sin(x) = x - x³/3! + x⁵/5! - ...
=> sin(π) = π - π³/3! + ..., which gets very close to 0 but never exactly reaches it.
That is my own reasoning as a rough proof.
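To illustrate the idea of a truncated series (just a sketch, not how NumPy actually evaluates sin), one could write:
import math
def taylor_sin(x, terms=10):
    # sum of (-1)^n * x^(2n+1) / (2n+1)! for n = 0 .. terms-1
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))
print(taylor_sin(math.pi))  # small, but not exactly 0 (truncation plus rounding error)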
Simple.
np.sin(np.pi).astype(int)
np.sin(np.pi/2).astype(int)
np.sin(3 * np.pi / 2).astype(int)
np.sin(2 * np.pi).astype(int)
returns
0
1
-1
0
I have the following linear equations.
m = 2 ** 31 - 1
(207560540 * a + b) modulo m = 956631177
(956631177 * a + b) modulo m = 2037688522
What is the most efficient way to solve these equations?
I used Z3 however it did not find any solution. My code for Z3 to solve the above equations is:
#! /usr/bin/python
from z3 import *
a = Int('a')
b = Int('b')
s = Solver()
s.add((a * 207560540 + b) % 2147483647 == 956631177)
s.add((a * 956631177 + b) % 2147483647 == 2037688522)
print(s.check())
print(s.model())
I know that the solution is: a = 16807, b = 78125, however, how can I make Z3 solve it?
The other method I tried is by setting a and b to BitVec() instead of Integers as shown below:
a = BitVec('a', 32)
b = BitVec('b', 32)
This gives me an incorrect solution as shown below:
[b = 3637638538, a = 4177905984]
Is there way to solve it with Z3?
Thanks.
An aside on bit-vectors: When you use bit-vectors, all operations are done modulo 2^N, where N is the bit-vector size. So, z3 isn't giving you an incorrect solution: if you do the math modulo 2^32, you'll find that the model it finds is indeed correct.
It appears your problem does indeed need unbounded integers, and it is not really linear due to the modulus 2^31 - 1. (Linear means multiplication by a constant; taking a modulus by a constant puts you in a different realm.) And modulus is just not easy to reason about; I don't think z3 is the right tool for this sort of problem, nor is any other SMT solver. Tools like Mathematica, Wolfram Alpha, etc. are probably better choices in this case; for instance, see the wolfram-alpha solution.
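That said, these particular two congruences don't need a solver at all: since m = 2^31 - 1 is prime, a few lines of modular arithmetic recover a and b directly. A minimal sketch (assuming Python 3.8+, where pow accepts a negative exponent for modular inverses):
m = 2 ** 31 - 1  # 2147483647, a Mersenne prime
x1, y1 = 207560540, 956631177
x2, y2 = 956631177, 2037688522
# Subtracting the two congruences eliminates b:
#   (x2 - x1) * a = (y2 - y1)  (mod m)
# Because m is prime, (x2 - x1) has a modular inverse.
a = (y2 - y1) * pow(x2 - x1, -1, m) % m
b = (y1 - x1 * a) % m
print(a, b)  # should print 16807 78125, the solution stated in the question
assert (x1 * a + b) % m == y1
assert (x2 * a + b) % m == y2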
I'm trying to solve the integral of (2**(1/2)*y**(1/2)/2)**2 from 0 to 5. I've been using
func = lambda y: ( 2**(1/2) * y**(1/2)/2 )**2 and a == 0 and b == 5
from scipy import integrate
integrate.quad(func, a b)
For some reason, I keep getting the value 1.25, while wolfram says it should be 6.25? I can't seem to put my finger on the error.
P.S. Sorry for the error, katrie; I forgot that Python uses and, not &&, for logical AND.
SOLVED:
This was a silly int/float error. Thank you, everyone.
Well, let me write your function in normal mathematical notation (I can't think in Python). I don't like **, as it gets confusing:
(2**(1/2)*y**(1/2)/2)**2 =>
(2^(1/2) * (1/2) * y^(1/2))^2 =>
2 * (1/4) * y =>
y / 2
So to integrate, antidifferentiate (I'm just thinking aloud):
antidifferentiate(y / 2) = y^2 / 4
Therefore
integral('y / 2', 0, 5) =
5^2 / 4 - 0^2 / 4 =
25 / 4 =
6.25
Right. Have you tried replacing 1/2 with 0.5? It could be interpreted as the quotient of two integers, which is rounded down to 0.
Try this (as the others have suggested):
func = lambda y: (2**(0.5) * y**(0.5) / 2.0)**2.0 & a == 0 & b == 5
from scipy import integrate
integrate.quad(func, a b) # What's 'a b'? Maybe 'a, b' would work?
Good luck!
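For completeness, a cleaned-up, runnable version of what the answers are getting at, with the bounds as ordinary variables and the 1/2 exponents written as 0.5:
from scipy import integrate
func = lambda y: (2 ** 0.5 * y ** 0.5 / 2.0) ** 2  # algebraically this is just y / 2
a, b = 0, 5
result, error = integrate.quad(func, a, b)
print(result)  # 6.25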
The problem is that Python 2 sees (1/2) and evaluates it with integer division, yielding zero (in Python 3, / is true division).
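A quick illustration of the difference between the two Python versions:
# Python 2:  1 / 2  -> 0    (integer division)
# Python 3:  1 / 2  -> 0.5  (true division); use 1 // 2 for the old behaviour
print(1 / 2)    # 0.5 under Python 3, 0 under Python 2
print(1.0 / 2)  # 0.5 under both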
Well, what's the value supposed to be? Also, you should put more parentheses in your equation; I have no idea what (2**(1/2)*y**(1/2)/2)**2 gets parsed out to be.