Modulo in sage returning a negative value - python

I am new to SAGE and am having a problem with something very simple. I have the following code:
delay = float(3.5)
D = delay%1.0
D
But this returns the value -0.5 instead of the expected 0.5. What am I doing wrong?
If I change delay to be delay = float(2.5), I get the right answer, so I don't know why it isn't consistent (I am sure I am using the modulo wrong somehow).

I think that this question will answer things very well indeed for you.
However, I don't know why you are using float in Sage. Then you could just use Python straight up. Anyway, the % operator is tricky to use outside of integers. For example, here is the docstring for its use on Sage rational numbers.
Return the remainder of division of self by other, where other is
coerced to an integer
INPUT:
* ``other`` - object that coerces to an integer.
OUTPUT: integer
EXAMPLES:
sage: (-4/17).__mod__(3/1)
1
I assume this is considered to be a feature, not a bug.
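For comparison, plain CPython's float modulo already gives the expected 0.5 here. The following sketch is mine (it runs outside Sage's preparser, so it only shows where a -0.5 can come from in general, not what Sage is doing internally):

import math

delay = float(3.5)
delay % 1.0            # 0.5 -- Python's % takes the sign of the divisor
math.fmod(delay, 1.0)  # 0.5 -- fmod instead takes the sign of the dividend
(-3.5) % 1.0           # 0.5
math.fmod(-3.5, 1.0)   # -0.5 -- one common source of negative remainders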


Python - Specify arguments' data type

To keep things simple, consider the Python function:
def to_string(value):
    return str(value)
Since we cannot define data types of parameters in Python (as far as I know), when I pass 1/2 to the above function, it automatically converts 1/2 to 0.5 and then returns the string '0.5'. How do I make it return '1/2'? How do I force Python to treat arguments as being of a certain data type, no matter how they "appear"?
Here (in Python 3) 1/2 is evaluated to 0.5 before it is even passed into the function, so for this specific example the information is lost before the function is called. In theory you can get back from 0.5 to 1/2, but you should not rely on that kind of float manipulation. To avoid losing accuracy, you should treat a fraction as the two pieces of integer information it really is (numerator and denominator) rather than as one float.
from math import gcd  # fractions.gcd was removed in Python 3.9

def to_string(n, d):
    g = gcd(n, d)
    return str(n // g) + "/" + str(d // g)
If what you are asking about is specifically fractions, then a class built around this idea is probably your best bet. If your example was just illustrative, then (famously) Python does not have type enforcement. However, you can read about the modern take on this idea (type hints and related decorators) at https://docs.python.org/3/library/typing.html.
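For completeness, the standard library already provides a class built around exactly this idea: fractions.Fraction. A minimal sketch (my addition, not part of the answer above):

from fractions import Fraction

def to_string(value):
    return str(value)

to_string(Fraction(1, 2))  # '1/2' -- the exact ratio is preserved
to_string(1 / 2)           # '0.5' -- the float form has already lost it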

Why no optimization of Python 3 range object for floats?

Jumping off from a previous question I asked a while back:
Why is 1000000000000000 in range(1000000000000001) so fast in Python 3?
If you do this:
1000000000000000.0 in range(1000000000000001)
...it is clear that range has not been optimized to check if floats are within the specified range.
I think I understand that the intended purpose of range is to work with ints only - so you cannot, for example, do something like this:
1000000000000 in range(1000000000000001.0)
# error: float object cannot be interpreted as an integer
Or this:
1000000000000 in range(0, 1000000000000001, 1.0)
# error: float object cannot be interpreted as an integer
However, the decision was made, for whatever reason, to allow things like this:
1.0 in range(1)
It seems clear that 1.0 (and 1000000000000000.0 above) are not being coerced into ints, because then the int optimization would work for those as well.
My question is, why the inconsistency, and why no optimization for floats? Or, alternatively, what is the rationale behind why the above code does not produce the same error as the previous examples?
This seems like an obvious optimization to include in addition to optimization for ints. I'm guessing there are some nuanced issues preventing a clean implementation of such optimization, or alternatively there is some kind of rationale as to why you would not actually want to include such an optimization. Or possibly both.
EDIT: To clarify the issue here a bit, all the following statements evaluated to False as well:
3.2 in range(5)
'' in range(1)
[] in range(1)
None in range(1)
This seems like unexpected behavior to me, but so far there is definitely no inconsistency. However, the following evaluates to True:
1.0 in range(2)
And as shown previously, constructions similar to the above have not been optimized.
This does seem inconsistent: at some point in the evaluation, the value 1.0 (or 1000000000000000.0, as in my original example) is being treated as an int. This makes sense, since it is natural to convert a float ending in .0 to an int. However, the question still remains: if it is being converted to an int anyway, why has 1000000000000000.0 in range(1000000000000001) not been optimized?
There is no inconsistency here. Floating point values can't be coerced to integers, that only works the other way around. As such, range() won't implicitly convert floats to integers when testing for containment either.
A range() object is a sequence type; it contains discrete integer values (albeit virtually). As such, it has to support containment testing for any object that may test as equal. The following works too:
>>> class ThreeReally:
...     def __eq__(self, other):
...         return other == 3
...
>>> ThreeReally() in range(4)
True
This has to do a full scan over all possible values in the range to test for equality with each contained integer.
However, only when using actual integers can the optimisation be applied, as that's the only type where the range() object can know what values will be considered equal without conversion.
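As a hedged illustration of that last point (my sketch, not part of the answer): if you know your float holds an integral value, you can convert it explicitly and get the fast path back instead of letting range() fall into the linear scan.

r = range(1000000000000001)

1000000000000000 in r            # True, and fast: exact ints hit the O(1) arithmetic check

x = 1000000000000000.0
x.is_integer() and int(x) in r   # True, still fast: the containment test now sees an int
# x in r                         # no error, but it falls back to comparing values one by one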

Python Math module subtleties

I've been experimenting with the standard python math module and have come across some subtle difficulties. For example, I'm noticing the following behavior concerning indeterminate forms:
>>> 0**0
1

>>> def inf():
...     return 1e900  # will return inf
...
>>> inf()**inf()
inf
And other anomalies of the sort. I'm writing a calculator, and I'd like to have it be mathematically accurate. Is there something I can do about this? Or, is there some way to circumvent this? Thanks in advance.
There's nothing wrong with your first example. 0**0 is often defined to be 1.
The second example is all to do with the limited range of doubles: 1E900 exceeds the maximum positive value of a (most likely 64-bit) double. If you want numbers outside of that range, you'll have to look into libraries. Fortunately Python has one built in: the decimal module.
For example:
>>> from decimal import Decimal
>>> d = Decimal('1E900')
>>> f = d + d
>>> print(f)
2E+900
According to Wolfram (quoting Knuth), while 0**0 is indeterminate, it is sometimes given as 1. This is because holding the statement 'x**0 = 1' to be true in all cases is useful in many contexts. Even more interestingly, Python will consider NaN**0 to be 1 as well.
http://mathworld.wolfram.com/Power.html
In the case of infinity**infinity, you're not really dealing with the mathematical concept of infinity here (where that would be undefined), but rather a number that's too large and has overflowed. As such all that statement is saying is that a number that's huge to the power of another number that's huge is still a number that's huge.
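To make that overflow explicit, here is a small plain-Python check (my sketch, not part of the original answer):

import math

x = 1e308 * 10  # exceeds the largest finite 64-bit double, so it overflows to inf
math.isinf(x)   # True
x ** x          # inf: a huge number to the power of a huge number is still huge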
Edit: I do not think it is possible to overload a built-in type (such as float) in Python, so you cannot override the float.__pow__(x, y) operator directly. What you could do instead is define your own version of float:
class myfloat(float):
    def __pow__(x, y):
        if x == y == 0:
            return 'NaN'
        else:
            return float.__pow__(x, y)

m = myfloat(0)
m**0
Not sure if that's exactly what you're looking for though.
Well, returning NaN for 0**0 is almost always useless, and lots of algorithms avoid special cases if we assume 0**0 == 1. So while it may not be mathematically perfect, we're talking about IEEE 754 here; mathematical exactness is really the least of our problems. [1]
But if you want to change it, that's rather simple. The following works as expected in Python 3.2:
def my_pow(x, y):
    if x == 0 and y == 0:
        return 'NaN'
    return float.__pow__(float(x), y)

pow = my_pow
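A quick usage check of the version above (my addition); note that rebinding the name pow only affects code that calls pow() in this module, the ** operator is untouched:

my_pow(0, 0)     # 'NaN'
my_pow(2.0, 10)  # 1024.0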
[1] The following code can theoretically execute the if branch with x86 CPUs (well at least in C and co):
float x = sqrt(y);
if (x != sqrt(y)) printf("Surprise, surprise!\n");

pyschools: wrong answer being given by site? (Topic 2, Q 7)

I am new to python! Done my studying, gone through several books and now attempting pyschools challenges. Done Variables and data types successfully but Question 7 of Topic 2 (Functions) is giving me hell.
I am using Eclipse with Python (ver 3.2). In Eclipse, I get the answers 100, 51 and 525. Those are the same answers pyschools expects, but it shows that my function returns 100, 0 and 500.
Here is the question (Hope am allowed to post it here!):
Write a function percent(value, total) that takes in two numbers as arguments, and returns the percentage value as an integer.
And below is my function:

def percent(value, total):
    a = value
    b = total
    return int((a / b) * 100)

percent(70, 70)
percent(46, 90)
percent(63, 12)
Can anyone tell me what pyschools really wants me to do, or where I am going wrong?
Thanks!
You're using Python 3.x and they're using Python 2.x. In Python 2.x, the / operation is always an integer division when the arguments are integers. 1/2 is 0. So, use float() to change one of your arguments to a floating-point number, such as int((float(a) / b) * 100). Then a/b will have a fractional part.
Or, assuming they are using a recent version of Python 2.x, you can just add this to the beginning of your script and it should work on the site:
from __future__ import division
As an aside, why are you assigning your input parameters to variables? They're already variables. If you want them named a and b, just receive them that way:
def percent(a, b):
    return int((float(a) / b) * 100)
Because the site is using Python 2.x, you need to convert at least one value to float:

def percent(number, total):
    return int((float(number) / total) * 100)

This will work fine.
Python 2 performs mathematical operations on integers with strictly integer math, truncating fractional parts.
To avoid that, multiply one of the inputs by 1.0 before dividing.
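A minimal sketch of that suggestion (mine, not the answerer's code):

def percent(value, total):
    # multiplying by 1.0 promotes the whole expression to floating point,
    # even under Python 2's truncating integer division
    return int(value * 1.0 / total * 100)

percent(46, 90)  # 51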
You can use the truncating divide, like this:
def percent(value, total):
    return value * 100 // total
Advantages:
(0) Works with all Pythons from 2.2 onwards.
(1) The reader knows what that one line does without having to guess what version of Python is being run and without having to inspect the start of the module for from __future__ magic.
(2) Avoids any potential floating-point problems.
(3) Avoids using float() and int(), i.e. saves two (relatively expensive) function calls.
Note carefully: // works on float objects as well as int objects. The question says that the args are "numbers" (can include float objects) and the result should be an "integer" (not necessarily an int object). That gives you considerable licence.
>>> percent(63.001, 11.9999)
525.0 # That's an integer
If they insist on the return value being an int object, then of course you'll need an int().
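For example (my sketch of that last remark, wrapping the same one-liner):

def percent(value, total):
    return int(value * 100 // total)

percent(63.001, 11.9999)  # 525 -- an int object this time, not 525.0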

Get ZeroDivisionError: float division in python

In the code below (highly simplified), I get ZeroDivisionError: float division.
Any value below one gives the error, and other times 5/365 gives the error.
How do I fix it?
import math

def top(t):
    return ((.3 / 2) * t) / (.3 * math.sqrt(t))

t = 365/365
top = top(t)
print(top)
The problem is here:
t = 365/365
You are dividing two integers, so python is using integer division. In integer division, the quotient is rounded down. For example, 364/365 would be equal to 0. (365/365 works because it is equal to 1, which is still 1 rounded down.)
Instead, use float division, like so.
t = 365.0/365.0
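To see why this matters under Python 2, here is a small demonstration (my sketch, not part of cheeken's answer):

import math

364 / 365     # 0 under Python 2: both operands are ints, so the result truncates
364 / 365.0   # 0.9972602739726028: one float operand forces true division

math.sqrt(364 / 365)    # 0.0 -- this is what ends up as the zero divisor in top()
math.sqrt(364 / 365.0)  # about 0.99863, so the division in top() is now safe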
In addition to cheeken's answer, you can put the following at the top of your modules:
from __future__ import division
Doing so will make the division operator work the way you want it to, i.e. always perform a (close approximation of) true mathematical division. The default behaviour of the division operator (where it performs truncating integer division if the arguments happen to be bound to integers) was inherited from C, but it was eventually realised that it was not a great fit for a dynamically typed language like Python. In Python 3, this no longer happens.
In my Python 2 modules, I almost always import division from __future__, so that I can't get caught out by accidentally passing integers to a division operation I don't expect to truncate.
It's worth noting that from __future__ import ... statements have to be the very first thing in your module (I think you can have comments and a docstring before it, nothing else). It's not really a normal import statement, even though it looks like one; it actually changes the way the Python interpreter reads your code, so it can't wait until runtime to be executed like a normal import statement. Also remember that import __future__ does not have any of the magic effects of from __future__ import ....
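A minimal module-level sketch (my addition) of how this looks in practice:

from __future__ import division  # must come before any other statements

print(5 / 365)   # 0.0136986301..., true division even on Python 2
print(5 // 365)  # 0 -- floor division is still available when you want truncation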
Try this:
exponent = math.exp(-(math.pow(x-mean,2)/(2*math.pow(stdev,2))))
A ZeroDivisionError is encountered when you try to divide by zero.
