Why does Python's print(type(-1**0.5)) return float instead of complex?
Taking the square root of a negative integer or float is mathematically considered a complex number. How does Python's exponent operator support getting a complex result?
print(type(-1**0.5))
<type 'float'>
In the mathematical order of operations, exponentiation comes before multiplication, and unary minus counts as multiplication (by -1). So your expression is the same as -(1**0.5), which doesn't involve any imaginary numbers.
If you do (-1)**0.5 you'll get an error in Python 2 because the answer isn't a real number. If you want a complex answer, you need to use a complex input by doing (-1+0j)**0.5. (In Python 3, (-1)**0.5 will return a complex result.)
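For reference, a quick comparison of the variants (output from a Python 3 session; the cmath alternative at the end is my own addition):
>>> -1 ** 0.5          # parsed as -(1 ** 0.5)
-1.0
>>> (-1) ** 0.5        # Python 3 promotes to complex
(6.123233995736766e-17+1j)
>>> (-1 + 0j) ** 0.5   # works in Python 2 as well
(6.123233995736766e-17+1j)
>>> import cmath
>>> cmath.sqrt(-1)     # exact, without the floating-point noise
1j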
Try (-1)**0.5 instead.
-1**0.5 is parsed as -(1**0.5), which is equal to -1.0.
>>> -1**0.5
-1.0
>>> (-1)**0.5
(6.123233995736766e-17+1j)
The exponentiation is being carried out first, and then its sign is inverted. To get the result you want, use parentheses to ensure that the - sign stays with the 1:
>>> -1**0.5
-1.0
>>> (-1)**0.5
(6.123233995736766e-17+1j)
Python is correct as -1**0.5 is different from (-1)**0.5.
The first one raises one to the power of 0.5 and negates the result.
The second one raises -1 to the same power and returns a complex number as expected.
Related
I was trying to understand bitwise NOT in Python.
I tried the following:
>>> ~0b0
-1
>>> 0b1
1
Why is this the case? As per my understanding, ~0b0 should be 0b1. But it seems that Python interprets it as -1 in two's complement, while 0b1 is interpreted as 1.
Why is this so?
More importantly, how and why does Python determine when to interpret a number, or the MSB of a binary string, as negative?
Python interprets a word of binary bits as a negative number when
a) the sign bit is set (the most significant bit of the word),
and
b) the str or repr method is called (to get the string or value representation of the word),
or, implicitly, any time you treat the word as a number.
Per the spec:
The unary ~ (invert) operator yields the bitwise inversion of its integer argument. The bitwise inversion of x is defined as -(x+1).
because semantically Python's integers are always signed and manipulated as two's complement.
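A quick illustration of that rule (my own REPL session, with a mask added to view the inverted bits at a fixed width):
>>> ~0b0                          # -(0 + 1)
-1
>>> ~5                            # -(5 + 1)
-6
>>> format(~5 & 0b1111, '04b')    # mask to 4 bits to see the raw inversion of 0101
'1010'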
What's the usage of the tilde operator in Python?
One thing I can think about is do something in both sides of a string or list, such as check if a string is palindromic or not:
def is_palindromic(s):
    return all(s[i] == s[~i] for i in range(len(s) // 2))
Any other good usage?
It is a unary operator (taking a single argument) that is borrowed from C, where all data types are just different ways of interpreting bytes. It is the "invert" or "complement" operation, in which all the bits of the input data are reversed.
In Python, for integers, the bits of the twos-complement representation of the integer are reversed (as in b <- b XOR 1 for each individual bit), and the result interpreted again as a twos-complement integer. So for integers, ~x is equivalent to (-x) - 1.
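A quick sanity check of that equivalence:
>>> all(~x == -x - 1 for x in range(-1000, 1000))
True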
The reified form of the ~ operator is provided as operator.invert. To support this operator in your own class, give it an __invert__(self) method.
>>> import operator
>>> class Foo:
...     def __invert__(self):
...         print('invert')
...
>>> x = Foo()
>>> operator.invert(x)
invert
>>> ~x
invert
Any class in which it is meaningful to have a "complement" or "inverse" of an instance that is also an instance of the same class is a possible candidate for the invert operator. However, operator overloading can lead to confusion if misused, so be sure that it really makes sense before supplying an __invert__ method to your class. (Note that bytes objects [e.g. b'\xff'] do not support this operator, even though it would be meaningful to invert all the bits of a byte-string.)
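As a hypothetical illustration (this Predicate class is my own sketch, not a standard library type), here is a class for which ~ naturally means "logical complement":

class Predicate:
    """Wraps a boolean function; ~p is the complementary predicate."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def __invert__(self):
        # The complement of a predicate is itself a Predicate.
        return Predicate(lambda x: not self.fn(x))

is_even = Predicate(lambda n: n % 2 == 0)
is_odd = ~is_even
print(is_odd(3))   # True
print(is_odd(4))   # False

Because __invert__ returns a new Predicate, the complement composes like any other instance of the class.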
~ is the bitwise complement operator in Python, which essentially calculates -x - 1.
So a table would look like:
 i   ~i
--------
 0   -1
 1   -2
 2   -3
 3   -4
 4   -5
 5   -6
So for i = 0 it would compare s[0] with s[len(s) - 1], for i = 1, s[1] with s[len(s) - 2].
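For example, with the is_palindromic helper from the question (after fixing it to use integer division, len(s) // 2):
>>> is_palindromic("racecar")
True
>>> is_palindromic("python")
False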
As for your other question, this can be useful for a range of bitwise hacks.
One should note that in the case of array indexing, array[~i] amounts to reversed_array[i]. It can be seen as indexing starting from the end of the array:
[0, 1, 2, 3, 4, 5, 6, 7, 8]
    ^                 ^
    i                 ~i
Besides being a bitwise complement operator, ~ can also help invert a boolean value, though it is not the conventional bool type here; rather, you should use numpy.bool_.
For example:
import numpy as np
assert ~np.True_ == np.False_
Reversing a logical value can be useful sometimes; e.g., below the ~ operator is used to cleanse a dataset and return a column without NaN.
import numpy as np
import pandas as pd

matrix = pd.DataFrame([1, 2, 3, 4, np.nan], columns=['Number'], dtype='float64')
# Remove NaN in column 'Number'
matrix['Number'][~matrix['Number'].isnull()]
The only time I've ever used this in practice is with numpy/pandas. For example, with the .isin() dataframe method.
In the docs they show this basic example
>>> df.isin([0, 2])
num_legs num_wings
falcon True True
dog False True
But what if instead you wanted all the rows not in [0, 2]?
>>> ~df.isin([0, 2])
num_legs num_wings
falcon False False
dog True False
I was solving this LeetCode problem and came across this beautiful solution by a user named Zitao Wang.
The problem goes like this: for each element in the given array, find the product of all the remaining numbers, without making use of division and in O(n) time.
The standard solution is:
Pass 1: For each element, compute the product of all the elements to its left
Pass 2: For each element, compute the product of all the elements to its right
and then multiply them for the final answer.
His solution uses only one for loop, by making use of ~. He computes the left product and the right product on the fly:
def productExceptSelf(self, nums):
    res = [1] * len(nums)
    lprod = 1
    rprod = 1
    for i in range(len(nums)):
        res[i] *= lprod
        lprod *= nums[i]
        res[~i] *= rprod
        rprod *= nums[~i]
    return res
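A quick check of my own (calling it as a plain function, with None standing in for self):
>>> productExceptSelf(None, [1, 2, 3, 4])
[24, 12, 8, 6]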
Explaining why -x - 1 is correct in general (for integers)
Sometimes (example), people are surprised by the mathematical behaviour of the ~ operator. They might reason, for example, that rather than evaluating to -19, the result of ~18 should be 13 (since bin(18) gives '0b10010', inverting the bits would give '0b01101' which represents 13 - right?). Or perhaps they might expect 237 (treating the input as signed 8-bit quantity), or some other positive value corresponding to larger integer sizes (such as the machine word size).
Note, here, that the signed interpretation of the bits 11101101 (which, treated as unsigned, give 237) is... -19. The same will happen for larger numbers of bits. In fact, as long as we use at least 6 bits, and treating the result as signed, we get the same answer: -19.
The mathematical rule - negate, and then subtract one - holds for all inputs, as long as we use enough bits, and treat the result as signed.
And, this being Python, conceptually numbers use an arbitrary number of bits. The implementation will allocate more space automatically, according to what is necessary to represent the number. (For example, if the value would "fit" in one machine word, then only one is used; the data type abstracts the process of sign-extending the number out to infinity.) It also does not have any separate unsigned-integer type; integers simply are signed in Python. (After all, since we aren't in control of the amount of memory used anyway, what's the point in denying access to negative values?)
This breaks intuition for a lot of people coming from a C environment, in which it's arguably best practice to use only unsigned types for bit manipulation and then apply 2s-complement interpretation later (and only if appropriate; if a value is being treated as a group of "flags", then a signed interpretation is unlikely to make sense). Python's implementation of ~, however, is consistent with its other design choices.
How to force unsigned behaviour
If we wanted to get 13, 237 or anything else like that from inverting the bits of 18, we would need some external mechanism to specify how many bits to invert. (Again, 18 conceptually has arbitrarily many leading 0s in its binary representation in an arbitrary number of bits; inverting them would result in something with leading 1s; and interpreting that in 2s complement would give a negative result.)
The simplest approach is to simply mask off those arbitrarily-many bits. To get 13 from inverting 18, we want 5 bits, so we mask with 0b11111, i.e., 31. More generally (and giving the same interface for the original behaviour):
def invert(value, bits=None):
    result = ~value
    return result if bits is None else (result & ((1 << bits) - 1))
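For example (my own session; 18 is 0b10010):
>>> invert(18)       # default: same as ~18
-19
>>> invert(18, 5)    # invert only the low 5 bits
13
>>> invert(18, 8)    # treat 18 as an 8-bit quantity
237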
Another way, per Andrew Jenkins' answer at the linked example question, is to XOR directly with the mask. Interestingly enough, we can use XOR to handle the default, arbitrary-precision case. We simply use an arbitrary-sized mask, i.e. an integer that conceptually has an arbitrary number of 1 bits in its binary representation - i.e., -1. Thus:
def invert(value, bits=None):
    return value ^ (-1 if bits is None else ((1 << bits) - 1))
However, using XOR like this will give strange results for a negative value - because all those arbitrarily-many set bits "before" (in more-significant positions) the XOR mask weren't cleared:
>>> invert(-19, 5) # notice the result is equal to 18 - 32
-14
It's called binary one's complement (~).
It returns the one's complement of a number's binary representation: it flips the bits. Binary for 2 is 00000010; its one's complement is 11111101.
That is binary for -3, so ~2 results in -3. Similarly, ~1 results in -2.
>>> ~-3
2
Again, one’s complement of -3 is 2.
This is a minor usage of tilde...
def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
    return data.loc[~in_test_set], data.loc[in_test_set]
The code above is from "Hands-On Machine Learning".
You can use tilde (~) as an alternative to a negative index, just as you use minus to index from the end. Note that ~i equals -i - 1, so ~0 refers to the last element:
array = [1, 2, 3, 4, 5, 6]
print(array[-1])
is the same thing as
print(array[~0])
So I was writing a simple script to demonstrate geometric series convergence.
from decimal import *
import math
initial = int(input("a1? "))
r = Decimal(input("r? "))
runtime = int(input("iterations? "))
sum_value = 0
for i in range(runtime):
    sum_value += Decimal(initial * math.pow(r, i))
print(sum_value)
When I use values such as:
a1 = 1
r = .2
iterations = 100000
I get the convergence to be 1.250000000000000021179302083
When I replace the line:
sum_value+=Decimal(initial * math.pow(r,i))
With:
sum_value+=Decimal(initial * r ** i)
I get a more precise value, 1.250000000000000000000000002
What exactly is the difference here? From my understanding it has to do with math.pow being a floating-point operation, but I would have thought that ** was just syntactic sugar for the math power function. If they are indeed different, then why, with a precision of 200, do I get the following in IDLE:
>>> Decimal(.8**500)
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
>>> Decimal(math.pow(.8,500))
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
They seem to be exactly the same. What is happening here?
The difference is, as you imply, that math.pow() converts the inputs to floats as stated in the documentation: "Unlike the built-in ** operator, math.pow() converts both its arguments to type float."
Therefore math.pow() also delivers a float as its answer, regardless of whether the input is Decimal or int or whatever. When using numbers that are not exactly representable as a float (but are as a Decimal), you are likely to get a more precise answer using the ** operator.
This explains why your loop gives a more exact result when using **, since you are working with Decimal numbers raised to an integer. In your IDLE example, however, you are inadvertently using floats for both calculations and then converting the result to Decimal after the operation has already been executed. If you instead work with explicit Decimal values you will see the difference:
>>> Decimal('.8')**500
Decimal('3.507466211043403874762758796E-49')
>>> Decimal(math.pow(Decimal('.8'), 500))
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
Thus, in the second case, the Decimal value is automatically cast to a float and the result is the same as in your example above. In the first case, however, the calculation is executed in the Decimal domain and yields a slightly different result.
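For illustration, here is a small sketch of mine that keeps the whole series computation in the Decimal domain from start to finish (variable names are my own):

from decimal import Decimal

a1 = Decimal(1)
r = Decimal('0.2')   # exact, unlike the float 0.2
total = Decimal(0)
term = a1
for _ in range(100):
    total += term
    term *= r
print(total)         # approaches a1 / (1 - r), i.e. 1.25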
I just randomly tried this out:
>>> int(-1/2)
-1
>>> int(-0.5)
0
Why are the results different?
Try this:
>>> -1/2
-1
>>> -0.5
-0.5
The difference is that in Python 2 integer division (the former) yields a floored integer, instead of a float like the second number. You're applying int to two different numbers, so you get different results. If you use floats first, you'll see the difference disappear.
>>> -1.0/2.0
-0.5
>>> int(-1.0/2.0)
0
>>> int(-0.5)
0
The difference you see is due to how rounding works in Python. The int() function truncates toward zero, as noted in the docs:
If x is floating point, the conversion truncates towards zero.
On the other hand, when both operands are integers, the / acts as though a mathematical floor operation was applied:
Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result.
So, in your first case, -1 / 2 results, theoretically, in -0.5, but because both operands are ints, Python essentially floors the result, which makes it -1. int(-1) is -1, of course. In your second example, int is applied directly to -0.5, a float, and so int truncates towards 0, resulting in 0.
(This is true of Python 2.x, which I suspect you are using.)
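A short sketch of mine contrasting the two rounding behaviours (Python 3 syntax):

import math

print(int(-0.5))         # 0  -> int() truncates toward zero
print(math.floor(-0.5))  # -1 -> floor() rounds toward negative infinity
print(-1 // 2)           # -1 -> floor division of integers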
This is a result of two things:
Python 2.x does integer division when you divide two integers;
Python uses "Floored" division for negative numbers.
Negative integer division surprising result
http://python-history.blogspot.com/2010/08/why-pythons-integer-division-floors.html
Force at least one number to float, and the results will no longer surprise you.
assert int(-1.0/2) == 0
As others have noted, in Python 3.x the / operator between integers performs true division and always returns a float; use // if you want floor division.
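In a Python 3 session:
>>> -1 / 2     # true division: always a float
-0.5
>>> -1 // 2    # floor division: what Python 2's / did for ints
-1
>>> int(-1 / 2)
0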
As TheSoundDefense mentioned, it depends upon the version. In Python 3.3.2:
>>> int(-1/2)
0
>>> int(-0.5)
0
The int() function truncates towards 0, unlike floor(), which rounds downwards to the next lower integer.
So int(-0.5) is clearly 0.
As for -1/2: in Python 2, -1/2 is actually equal to -1! Integer division rounds downwards, and in Python 2, -a/b != -(a/b) in general. In fact, -1/2 equals floor(-1.0 / 2.0), which is -1.
I am wondering about the way Python (3.3.0) prints complex numbers. I am looking for an explanation, not a way to change the printed output.
Example:
>>> complex(1,1)-complex(1,1)
0j
Why doesn't it just print "0"? My guess is: to keep the output of type complex.
Next example:
>>> complex(0,1)*-1
(-0-1j)
Well, a simple "-1j" or "(-1j)" would have done. And why "-0"?? Isn't that the same as +0? It doesn't seem to be a rounding problem:
>>> (complex(0,1)*-1).real == 0.0
True
And when the imaginary part gets positive, the -0 vanishes:
>>> complex(0,1)
1j
>>> complex(0,1)*-1
(-0-1j)
>>> complex(0,1)*-1*-1
1j
Yet another example:
>>> complex(0,1)*complex(0,1)*-1
(1-0j)
>>> complex(0,1)*complex(0,1)*-1*-1
(-1+0j)
>>> (complex(0,1)*complex(0,1)*-1).imag
-0.0
Am I missing something here?
It prints 0j to indicate that it's still a complex value. You can also type it back in that way:
>>> 0j
0j
The rest is probably the result of the magic of IEEE 754 floating point representation, which makes a distinction between 0 and -0, the so-called signed zero. Basically, there's a single bit that says whether the number is positive or negative, regardless of whether the number happens to be zero. This explains why 1j * -1 gives something with a negative zero real part: the positive zero got multiplied by -1.
-0 is required by the standard to compare equal to +0, which explains why (1j * -1).real == 0.0 still holds.
The reason that Python still decides to print the -0 is that in the complex world these make a difference for branch cuts, for instance in the phase function:
>>> from cmath import phase
>>> phase(complex(-1.0, 0.0))
3.141592653589793
>>> phase(complex(-1.0, -0.0))
-3.141592653589793
This is about the imaginary part, not the real part, but it's easy to imagine situations where the sign of the real part would make a similar difference.
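The same effect shows up with cmath.sqrt, whose branch cut runs along the negative real axis; the sign of the zero imaginary part selects the side of the cut:
>>> import cmath
>>> cmath.sqrt(complex(-1.0, 0.0))
1j
>>> cmath.sqrt(complex(-1.0, -0.0))
-1j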
The answer lies in the Python source code itself.
I'll work with one of your examples. Let
a = complex(0,1)
b = complex(-1, 0)
When you do a*b you're calling this function:
real_part = a.real*b.real - a.imag*b.imag
imag_part = a.real*b.imag + a.imag*b.real
And if you do that in the python interpreter, you'll get
>>> real_part
-0.0
>>> imag_part
-1.0
Per IEEE 754, you're getting a negative zero, and since that's not +0, you get the parens and the real part when printing it.
if (v->cval.real == 0. && copysign(1.0, v->cval.real) == 1.0) {
    /* Real part is +0: just output the imaginary part and do not
       include parens. */
    ...
else {
    /* Format imaginary part with sign, real part without. Include
       parens in the result. */
    ...
I guess (but I don't know for sure) that the rationale comes from the importance of that sign when calculating with elementary complex functions (there's a reference for this in the wikipedia article on signed zero).
0j is an imaginary literal which indeed indicates a complex number rather than an integer or floating-point one.
The ±0 ("signed zero") is a result of Python's conformance to IEEE 754 floating-point representation, since in Python a complex is by definition a pair of floating-point numbers. Due to the latter, there's also no need to print or specify zero fractional parts for a complex.
The -0 part is printed in order to accurately represent the contents as repr()'s documentation demands (repr() is implicitly called whenever an operation's result is output to the console).
Regarding the question why (-0+1j) == 1j but (1j*-1) == (-0+1j):
Note that -0+1j or -0.0+1j aren't single complex numbers but expressions: an int/float added to a complex. To compute the result, first the first number is converted to a complex (-0 becomes (0.0, 0.0), since integers don't have signed zeros; -0.0 becomes (-0.0, 0.0)). Then its .real and .imag are added to the corresponding parts of 1j, which are (+0.0, 1.0). The result is (+0.0, 1.0) :^) . To construct such a complex directly, use complex(-0.0, 1).
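To see both behaviours side by side:
>>> complex(-0.0, 1.0)   # constructed directly, the -0 survives
(-0+1j)
>>> -0.0 + 1j            # expression: -0.0 + (+0.0) gives +0.0
1j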
As far as the first question is concerned: if it just printed 0 it would be mathematically correct, but you wouldn't know whether you were dealing with a complex object or an int. As long as you don't extract .real, you will always get a j component.
I'm not sure why you would ever get -0; it's not technically incorrect (-1 * 0 = 0) but it's syntactically odd.
As far as the rest goes, it's strange that it isn't consistent; however, none of the results are technically incorrect, just artifacts of the implementation.