To keep things simple, consider this Python function:
def to_string(value):
    return str(value)
Since we cannot declare the data types of parameters in Python (as far as I know), when I pass 1/2 to the above function, it is converted to 0.5 and the function returns the string '0.5'. How do I make it return '1/2'? How do I force Python to treat an argument as a certain data type, no matter how it "appears"?
Here (in Python 3) 1/2 is evaluated to 0.5 before it is even passed into the function. For this specific example the information is lost, due to possible float accuracy errors, before the function is ever called; in theory you can get back to 1/2 from 0.5, but you should not rely on that kind of float manipulation. To avoid losing accuracy here you should treat a fraction as what it really is, two pieces of integer information, instead of one float.
from math import gcd  # fractions.gcd was removed in Python 3.9

def to_string(n, d):
    # reduce the fraction before formatting it
    g = gcd(n, d)
    return str(n // g) + "/" + str(d // g)
If what you are asking is specifically about fractions, then a class built around this idea is probably your best bet. If fractions were only an example, then Python (famously) does not have type enforcement; however, you can read about the modern take on this idea (type hints) at https://docs.python.org/3/library/typing.html, and enforcement can be bolted on with decorators.
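Incidentally, the standard library already ships a class built around exactly this idea: fractions.Fraction keeps the numerator and denominator as exact integers. A minimal sketch of the original function that preserves '1/2':

from fractions import Fraction

def to_string(value):
    # str(Fraction(1, 2)) is '1/2'; no float conversion ever happens
    return str(value)

print(to_string(Fraction(1, 2)))  # -> 1/2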
Related
Searching for this topic I came across the following: How to represent integer infinity?
I agree with Martijn Pieters that adding a separate special infinity value for int may not be the best of ideas.
However, this makes type hinting difficult. Assume the following code:
myvar = 10 # type: int
myvar = math.inf # <-- raises a typing error because math.inf is a float
However, the code behaves just the way it should, and my type hinting is correct everywhere else.
If I write the following instead:
myvar = 10 # type: Union[int, float]
I can assign math.inf without a hitch. But now any other float is accepted as well.
Is there a way to properly constrain the type-hint? Or am I forced to use type: ignore each time I assign infinity?
The super lazy (and probably incorrect) solution:
Rather than adding a specific value, the int class can be extended via subclassing. This approach has a number of pitfalls and challenges, such as the need to handle the infinity value in the various __dunder__ methods (__add__, __mul__, __eq__ and the like, all of which should be tested). That is an unacceptable amount of overhead when all you need is a specific value. In such a case, wrapping the desired value with typing.cast (i.e. inf = cast(int, math.inf)) better indicates to the type-hinting system that this specific value is acceptable for assignment.
The reason why this approach is incorrect is simple: since the assigned value looks and feels exactly like a number, other users of your API may inadvertently use it as an int, and the program may then explode badly when math.inf (or a variation of it) is provided.
An analogy: given that list items are indexed by non-negative integers, we would expect any function that returns an index to return a non-negative integer, so that we can use it directly (I know this is not the case in Python, where negative indices have their own semantics, but pretend we are working with, say, C for the moment). Say this function returns the first occurrence of a matched item, but on error it returns some negative number, which clearly lies outside the range of valid index values. Failing to guard against naive use of that return value will inevitably cause exactly the problems a type system is supposed to solve.
In essence, creating a surrogate value and marking it as an int offers zero value, and inevitably allows unexpected, broken API behaviour to appear in the program because incorrect usage is automatically accepted.
Not to mention that infinity is not a number, so no int value can properly represent it (int, by its very nature, represents a finite number).
As an aside, compare str.index and str.find. One of these has a return value that definitely violates user expectations (it exceeds the boundaries of the non-negative-integer type, you won't be told at compile time that the return value may be invalid in the context where it is used, and it can fail randomly at runtime).
Framing the question/answer in more correct terms:
The problem is really about assigning some integer when a rate exists, and assigning some other token that represents unboundedness when none does (that token could be a built-in value such as NotImplemented or None). However, as those tokens are not int values either, myvar would actually need a type that encompasses them, together with a way to apply operations that do the right thing.
This unfortunately isn't directly available in Python in a very nice way; in strongly, statically typed languages like Haskell, the more accepted solution is to use a Maybe type to define a number type that can also represent infinity. Note that while floating-point infinity is available there too, it inherits all the problems of floating-point numbers that make it an untenable solution (again, don't use inf for this).
Back to Python: depending on the properties you actually want from the assignment, it could be as simple as creating a class whose constructor accepts either an int or None (or NotImplemented), and which provides a method through which users of the class can get at the actual value. Python unfortunately does not provide the advanced constructs to make this elegant, so you will inevitably end up either with code that manages this splattered all over the place, or with a number of methods that handle whatever input is expected and produce the required output in exactly the ways your program needs.
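A minimal sketch of that idea (the names MaybeInt and value are purely illustrative, not from any library):

from typing import Optional

class MaybeInt:
    """An int-or-unbounded value; None stands for 'no bound'."""

    def __init__(self, value: Optional[int] = None) -> None:
        self._value = value

    def is_bounded(self) -> bool:
        return self._value is not None

    def value(self) -> int:
        # callers check is_bounded() first, so the unbounded token
        # never leaks into ordinary int arithmetic
        if self._value is None:
            raise ValueError("unbounded: no int value available")
        return self._value

limit = MaybeInt(10)   # a bounded rate
no_limit = MaybeInt()  # unbounded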
Unfortunately, type hinting really only scratches the surface, simply grazing over what more advanced languages have provided and solved at a more fundamental level. I suppose if one must program in Python, it is better than not having it.
Facing the same problem, I "solved" it as follows.
from typing import Union
import math

Ordinal = Union[int, float]  # int or infinity

def fun(x: Ordinal) -> Ordinal:
    if x > 0:
        return x
    return math.inf
Formally, it does exactly what you did not want to do, but now the intent is clearer: when the user sees Ordinal, he knows it is expected to be an int or math.inf. And the linter is happy.
I was recently working with Python and wanted to use another way of finding square roots; for example, I wanted to find the square root of n with the Newton-Raphson approximation. I need to overload the ** operator (only when a number is raised to 0.5) as well as math.sqrt(), because I have several older projects that could be sped up by doing so, and replacing every math.sqrt() and **(0.5) with another function isn't ideal.
Could this be done in Python?
Is it possible to overload either ** or math.sqrt?
Any helpful links are also much appreciated.
def Square_root(n, t=1e-12):  # t is the convergence tolerance (was undefined before)
    r = n / 2
    while abs(r - (n / r)) > t:
        r = 0.5 * (r + (n / r))
    return r
print(2**(0.5)) ## changes to print(Square_root(2))
print(math.sqrt(2)) ## that also becomes print(Square_root(2))
In short: you can't change the behavior of __pow__ for built-in types.
Long answer: you can subclass float, but it will require additional coding and refactoring of the program's input values into the new float subclass with overridden operators and functions.
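A rough sketch of such a subclass, assuming the Square_root function from the question is in scope (it only kicks in for the exact exponent 0.5):

class MyFloat(float):
    def __pow__(self, exponent):
        # route x ** 0.5 through Newton-Raphson, fall back to the
        # built-in behaviour for every other exponent
        if exponent == 0.5:
            return Square_root(float(self))
        return float.__pow__(self, exponent)

print(MyFloat(2) ** 0.5)  # uses Square_root
print(MyFloat(2) ** 2)    # ordinary float power: 4.0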
And you can overwrite math.sqrt, but this is not recommended:
import math
math.sqrt = lambda x, y: print(x, y)
math.sqrt(3, 2)
# 3 2
This will require the custom function to have the same signature.
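For example, since the Square_root function above takes a single required argument just like math.sqrt, a drop-in monkey-patch could be as simple as:

import math

math.sqrt = Square_root   # existing math.sqrt(x) calls now use Newton-Raphson
print(math.sqrt(2))       # roughly 1.41421356...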
If you really want to overload the language's int and float objects, you can use variants of the magic methods. In order to be consistent, you'll have to write a lot of code.
A lot of work. Python is for lazy people - if you like to write a lot, stick to Java or C++ :)
I've written a library of functions to make my engineering homework easier, and I use them in the Python interpreter (kind of like a calculator). Some return matrices, some return floats.
The problem is that they print too many decimals. For example, when a result should be 0, I currently get an extremely small number instead (e.g. 6.123233995736766e-17).
I know how to format outputs individually, but that would require adding a formatter to every line I type in the interpreter. I'm using Python 2.6.
Is there a way to set the global output formatting (precision, etc...) for the session?
Note: for scipy functions, I know I can use
scipy.set_printoptions(precision = 4, suppress = True)
but this doesn't seem to work for functions that don't use scipy.
One idea would be to add from __future__ import print_function (at the very top) and then override the standard print function. Here's a very simple implementation that prints floats with exactly two digits after the decimal point:
def print(*args):
    __builtins__.print(*("%.2f" % a if isinstance(a, float) else a
                         for a in args))
You would need to update your output code to use the print function, but at least it will be generic, rather than requiring custom formatting rules in each place. If you want to change how the formatting works, you just need to change the custom print function.
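With the override in place, an interpreter session would look something like this (the exact digits depend on the format string you pick):

>>> print("zero is", 6.123233995736766e-17)
zero is 0.00
>>> print("pi is", 3.14159265)
pi is 3.14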
With numpy, you could use the set_printoptions method (http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html).
For example:
import numpy as np
np.set_printoptions(precision=4)
print(np.pi * np.arange(8))
What you are seeing is the fact that decimal floating point numbers can only be approximated by binary floating point. See Floating Point Arithmetic: Issues and Limitations.
You could put a module-level variable in your library and use it as the second parameter of round() to round off the return values of the functions in your module, but that is rather drastic.
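If you did go that route, a sketch might look like this (mylib and PRECISION are made-up names):

# mylib.py
import math

PRECISION = 4  # module-level setting; change it from the interpreter

def cos_deg(angle):
    # every function in the library rounds its result the same way
    return round(math.cos(math.radians(angle)), PRECISION)

# >>> import mylib
# >>> mylib.cos_deg(90)
# 0.0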
If you use ipython (which I would recommend for interactive use, much better than the regular interpreter), you can use the 'magic' function %precision.
You could add __str__ methods (assuming you don't already have them) to your number and matrix result types, and have them always use the same format spec or %f conversion.
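For instance, with a made-up Result wrapper just to show the idea:

class Result(float):
    def __str__(self):
        # every Result prints with the same fixed format
        return "%.4f" % float(self)

print(Result(6.123233995736766e-17))  # 0.0000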
Building on @Blckknght's solution, we could use a regex for something a bit more robust.
import re

round_to = 4

def str_round(match):
    # replace each numeric literal with its value rounded to round_to digits
    return str(round(eval(match.group()), round_to))

def print(*args, **kwargs):
    if len(args):
        args = list(args)
        for i, text in enumerate(args):
            text = str(text)
            if re.search(r"([-+]?\d*\.?\d+|[-+]?\d+)", text):
                args[i] = re.sub(r"([-+]?\d*\.?\d+|[-+]?\d+)", str_round, text)
    return __builtins__.print(*args, **kwargs)
Now the number in the following print statement will get rounded
>>> print(f"The number {1/3} is in this sentence.")
The number 0.3333 is in this sentence.
I've been experimenting with the standard python math module and have come across some subtle difficulties. For example, I'm noticing the following behavior concerning indeterminate forms:
>>> 0**0
1
>>> def inf():
...     return 1e900  # the literal 1e900 overflows to float('inf')
...
>>> inf()**inf()
inf
And other anomalies of the sort. I'm writing a calculator, and I'd like to have it be mathematically accurate. Is there something I can do about this? Or, is there some way to circumvent this? Thanks in advance.
There's nothing wrong with your first example. 0**0 is often defined to be 1.
The second example comes down to the limits of doubles: 1E900 exceeds the maximum positive value of a (most likely 64-bit) double. If you want numbers outside that range, you'll have to look into other libraries. Fortunately Python has one built in: the decimal module.
For example:
from decimal import Decimal

d = Decimal('1E900')
f = d + d
print(f)
# 2E+900
According to Wolfram (quoting Knuth), while 0**0 is indeterminate, it is sometimes given as 1, because holding the statement 'x**0 == 1' true in all cases is useful. Even more interestingly, Python considers NaN**0 to be 1 as well.
http://mathworld.wolfram.com/Power.html
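You can check that last claim at the interpreter (CPython):

>>> float('nan') ** 0
1.0
>>> 0 ** 0
1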
In the case of infinity**infinity, you're not really dealing with the mathematical concept of infinity (where the result would be undefined), but rather with a number that is too large and has overflowed. All that statement says is that a huge number raised to the power of another huge number is still a huge number.
Edit: I do not think it is possible to overload a built-in type (such as float) in Python, so you cannot override the float.__pow__(x, y) operator directly. What you could do instead is define your own version of float.
class myfloat(float):
    def __pow__(self, other):
        if self == other == 0:
            return 'NaN'
        return float.__pow__(self, other)

m = myfloat(0)
m**0  # 'NaN'
Not sure if that's exactly what you're looking for though.
Well, returning NaN for 0**0 is almost always useless, and lots of algorithms can avoid special cases if we assume 0**0 == 1. So while it may not be mathematically perfect, we're talking about IEEE-754 here; mathematical exactness is really the least of our problems. [1]
But if you want to change it, that's rather simple. The following works as expected in Python 3.2:
def my_pow(x, y):
    if x == 0 and y == 0:
        return 'NaN'  # treat only the indeterminate form 0**0 specially
    return float.__pow__(float(x), y)

pow = my_pow
[1] The following code can, in theory, execute the if branch on x86 CPUs (well, at least in C and co):
float x = sqrt(y);
if (x != sqrt(y)) printf("Surprise, surprise!\n");
I am new to Python! I've done my studying, gone through several books, and am now attempting the pyschools challenges. I got through variables and data types successfully, but Question 7 of Topic 2 (Functions) is giving me hell.
I am using Eclipse with Python 3.2. In Eclipse I get the answers 100, 51 and 525, which are the same answers pyschools expects, but the site reports that my function returns 100, 0 and 500.
Here is the question (I hope I am allowed to post it here!):
Write a function percent(value, total) that takes in two numbers as arguments, and returns the percentage value as an integer.
And below is my function
def percent(value, total):
    a = value
    b = total
    return int((a / b) * 100)

percent(70, 70)
percent(46, 90)
percent(63, 12)
Can anyone tell me what pyschools really wants me to do, or where I am going wrong?
Thanks!
You're using Python 3.x and they're using Python 2.x. In Python 2.x, / is integer division when both arguments are integers: 1/2 is 0. So use float() to convert one of your arguments to a floating-point number, such as int((float(a) / b) * 100); then a/b will have a fractional part.
Or, assuming they are using a recent version of Python 2.x, you can just add this to the beginning of your script and it should work on the site:
from __future__ import division
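With that import in place, / performs true division in Python 2 just as it does in Python 3, so the original function works unchanged. For example:

from __future__ import division

print(46 / 90)              # now about 0.5111..., not 0
print(int(46 / 90 * 100))   # 51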
As an aside, why are you assigning your input parameters to variables? They're already variables. If you want them named a and b, just receive them that way:
def percent(a, b):
return int((float(a) / b) * 100)
Because the site is using Python 2.x, you need to convert at least one value to float:
def percent(number, total):
    return int((float(number) / total) * 100)

This will work fine.
Python 2 performs mathematical operations on integers with strictly integer math, truncating fractional parts.
To avoid that, multiply one of the inputs by 1.0 before dividing.
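For example, a sketch of the same percent function:

def percent(value, total):
    # multiplying by 1.0 promotes the division to floating point in Python 2
    return int((1.0 * value / total) * 100)

print(percent(46, 90))  # 51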
You can use the truncating divide, like this:
def percent(value, total):
return value * 100 // total
Advantages:
(0) Works with all Pythons from 2.2 onwards.
(1) The reader knows what that one line does without having to guess what version of Python is being run and without having to inspect the start of the module for from __future__ magic.
(2) Avoids any potential floating-point problems.
(3) Avoids using float() and int(), i.e. saves two (relatively expensive) function calls.
Note carefully: // works on float objects as well as int objects. The question says that the args are "numbers" (can include float objects) and the result should be an "integer" (not necessarily an int object). That gives you considerable licence.
>>> percent(63.001, 11.9999)
525.0 # That's an integer
If they insist on the return value being an int object, then of course you'll need an int().