I am creating an algorithm where a metric may take 3 values:
Infinite
Too large but not infinite
Some number that is the result of a calculation
Now, math.inf handles the infinite.
The third value is the result of a calculation with no fixed upper bound. But I want the second value to always be smaller than infinity and always larger than the third value. Therefore I cannot give it some very large number like 999999999999999, since there is always a possibility that the calculation may exceed it.
What I am looking for is another dedicated constant, something like Python's Ellipsis.
How can I make this happen?
You can try sys.float_info.max:
>>> import sys
>>> sys.float_info.max
1.7976931348623157e+308
According to the documentation, it's the "maximum representable positive finite float".
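For the three-tier ordering in the question, a minimal sketch (the names are illustrative, not part of the question):

import math
import sys

INFINITE = math.inf
TOO_LARGE = sys.float_info.max  # the largest finite float; no finite float result can exceed it

result = 999999999999999.0  # some calculation result
assert result < TOO_LARGE < INFINITE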
I have a function in another file; to call that function I use:
app.put_number(row, column, number)
which basically generates a matrix, but in the first column, each row needs a random number from 1 to 20, and each number must be unique. I tried using random.randrange().
For example:
app.put_number(0, 0, random.randrange(1,21))
app.put_number(1, 0, random.randrange(1,21))
But sometimes it generates the same number, and I need each of them to be different.
Plus, I can't modify the file with the actual function; I can only use if, else, while, for and app.put_number(row, column, number).
It would be helpful if you tagged your question with a specific language you would like to complete this task in.
Conceptually, however, you need to check the populated array every time a new number is generated to ensure that you are not inserting a number already in the array. If you want to get fancy, you could even have an array of the candidate numbers and randomly choose indexes from it, effectively "shuffling" them together, as in the sketch below.
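If the random module is available, a rough sketch of that shuffling idea (app.put_number is the interface from the question):

import random

numbers = list(range(1, 21))  # the 20 candidate values
random.shuffle(numbers)       # randomize their order in place
for row in range(20):         # one unique number per row of the first column
    app.put_number(row, 0, numbers[row])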
Try using random.sample(). It returns a list of unique numbers drawn from a range.
As per the documentation: "To choose a sample from a range of integers, use a range() object as an argument. This is especially fast and space efficient for sampling from a large population: sample(range(10000000), k=60)."
https://docs.python.org/3/library/random.html
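Applied to the question's 20-row first column, a sketch might look like this (app.put_number comes from the question's other file):

import random

unique_numbers = random.sample(range(1, 21), k=20)  # 20 distinct values from 1..20
for row, number in enumerate(unique_numbers):
    app.put_number(row, 0, number)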
One solution is to keep the generated numbers in a list and then call the random number generator until it gives a number that is not already present in your list, as sketched below.
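A minimal sketch of that approach, again assuming the question's app.put_number:

import random

used = []                         # numbers accepted so far
row = 0
while row < 20:
    candidate = random.randrange(1, 21)
    if candidate not in used:     # retry until the number is new
        used.append(candidate)
        app.put_number(row, 0, candidate)
        row += 1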
By this I mean I need Python to generate a random number under one condition: it must be greater than a user-defined number (let's say x). In other words, it would be like saying x < num < infinity.
I tried using the random module but can't seem to figure anything out, so I tried something like this:
import random
import math
inf=math.inf
x=15
random.randint(x,inf)
But that doesn't work, because infinity is a floating-point value and the randint function only works with integers. I also tried something like this:
import random
import math
inf=math.inf
x=15
random.uniform(x,inf)
This seemed like it should work because the uniform function accepts floats, but it just ends up returning infinity every time.
This means I need some way to get this done without using infinity (or math.inf) in my range; I just need the range to have a minimum (x) and no maximum at all. Any suggestions?
I'm running Python 3.6.0b4 on OS X Yosemite
There is no specific maximum integer in Python 3; see this article.
Use sys.maxsize instead.
sys.maxsize: "An integer giving the maximum value a variable of type Py_ssize_t can take. It's usually 2**31 - 1 on a 32-bit platform and 2**63 - 1 on a 64-bit platform."
To limit the upper bound to some lesser value, simply define a maximum that is less than the one implied by sys.maxsize. For example, the 64-bit maximum signed integer 9,223,372,036,854,775,807 could be capped at 999,999,999,999,999 for 15 decimal places. You could use that, or any integer less than sys.maxsize.
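Putting that together with the question's x = 15, a hedged sketch (sys.maxsize is a practical stand-in for "no maximum", not a truly unbounded range):

import random
import sys

x = 15
num = random.randint(x + 1, sys.maxsize)  # strictly greater than x
assert num > x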
Let's say I'm considering M=N**2 where N is an integer. It appears that numpy.sqrt(M) returns a float (actually numpy.float64).
I could imagine that there could be a case where it returns, say, N-10**(-16) due to numerical precision issues, in which case int(numpy.sqrt(M)) would be N-1.
Nevertheless, my tests have N==numpy.sqrt(M) returning True, so it looks like this approximation isn't happening.
Is it safe for me to assume that int(numpy.sqrt(M)) is indeed accurate when M is a perfect square? If so, for bonus, what's going on in the background that makes it work?
To avoid missing the integer by 1e-15, you could use:
int(numpy.sqrt(M)+0.5)
or
int(round(numpy.sqrt(M)))
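As a sketch, you can combine the rounding guard with an exact integer check. The underlying reason the question's test passes is that IEEE 754 doubles represent integers exactly up to 2**53 and sqrt is correctly rounded, so perfect squares in that range come back exact; treat this as an assumption about the platform's math library, not something stated above.

import numpy as np

M = (10**7 + 1) ** 2           # a perfect square, well below 2**53
root = int(round(np.sqrt(M)))  # round first to guard against 1e-15 errors
print(root * root == M)        # verify in exact integer arithmetic: True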
I wrote the following Python code (using Python 2.7.6) to calculate the Fibonacci sequence. It doesn't use any extra libraries, just the core Python modules.
I was wondering if there is a limit to how many terms of the sequence I can calculate, perhaps due to the absurd length of the resulting integers, or whether there is a point where Python no longer performs the calculations accurately.
Also, the fibopt(n) function seems to sometimes return the term below the one requested (e.g. the 99th instead of the 100th), but it always works for lower terms (1st, 2nd, 10th, 15th). Why is that?
def fibopt(n):  # Returns term "n" of the Fibonacci sequence.
    f = [0, 1]  # List containing the first two numbers in the Fibonacci sequence.
    x = 0  # Empty integer to store the next value in the sequence. Not really necessary.
    optnum = 2  # Number of calculated entries in the sequence. Starts at 2 (0, 1).
    while optnum < n:  # Until the "n"th value in the sequence has been calculated.
        if optnum % 10000 == 0:
            print "Calculating index number %s." % optnum  # Notify the user for every 10000th value calculated. This is useful because the program can take a very long time to calculate higher values (e.g. about 15 minutes on an i7-4790 for the 10000000th value).
        x = [f[-1] + f[-2]]  # Calculate the next value in the sequence based on the previous two. This could be built into the next line.
        f.extend(x)  # Append that value to the sequence. This could be f.extend([f[-1] + f[-2]]) instead.
        optnum += 1  # Increment the counter for number of values calculated by 1.
        del f[:-2]  # Remove all values from the list except for the last two. Without this, the integers become so long that they fill 16 GB of RAM in seconds.
    return f[:n]  # Returns the requested term of the sequence.

def fib(n):  # Similar to fibopt(n), but returns all of the terms in the sequence up to and including term "n". Can use a lot of memory very quickly.
    f = [0, 1]
    x = 0
    while len(f) < n:
        x = [f[-1] + f[-2]]
        f.extend(x)
    return f[:n]
The good news is: integer math in Python is easy -- there are no overflows.
As long as your integers can fit within a C long, Python will use that. Once you go past that, it will auto-promote to arbitrary-precision integers (which means it'll be slower and use more memory, but the calculations will remain correct).
The only limits are:
The amount of memory addressable by the Python process. If you're using 32-bit Python, you need to be able to fit all of your data within 2 gigabytes of RAM (go past that and your program will fail with MemoryError). If you're using 64-bit Python, your physical RAM + swapfile is the theoretical limit.
The time you're willing to wait while calculations are being performed. The larger your ints, the slower the calculations are. If you ever hit your swap space, your program will reach continental drift levels of slow.
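A quick illustration of the auto-promotion in a Python 2.7 REPL (the exact cutover depends on the platform's C long size; a 64-bit build is assumed here):

>>> type(2 ** 62)
<type 'int'>
>>> type(2 ** 63)  # past a 64-bit C long, so Python promotes automatically
<type 'long'>
>>> 2 ** 63 + 1    # still exact: arbitrary precision, no overflow
9223372036854775809L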
The Python 2.7 documentation uses Fibonacci numbers as an example in its tutorial. The example there stops at an arbitrary bound rather than printing the elongated sequence you might want to view; it shortens it.
If this does not answer your question, please see section 4.6, Defining Functions.
If you have downloaded the interpreter, the manuals come preinstalled; you can also view them online at www.python.org.
Seth
P.S. If you have any questions on where to find this section in your manual, please see The Python Tutorial/4. More Control Flow Tools/4.6 Defining Functions. I hope this helps a bit.
Python integers can hold arbitrarily large values and will not be automatically converted to float. You can check by simply creating a very large number and inspecting its type:
>>> type(2**(2**25))
<class 'int'> # long in Python 2.x
fibopt returns f[:n], and that is a list. You seem to expect it to return a single term, so either the expectation (the first comment) or the implementation must change; one possible fix is sketched below.
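A minimal corrected version under that second reading, returning the single term n (this is one possible fix, not the original poster's code):

def fibopt_term(n):  # term "n" of the sequence, with term 0 = 0 and term 1 = 1
    f = [0, 1]
    optnum = 2
    while optnum <= n:              # note <=: run until term n itself is computed
        f = [f[-1], f[-2] + f[-1]]  # keep only the last two terms to save memory
        optnum += 1
    return f[-1] if n >= 1 else f[0]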
I'm working on a program with fairly complex numerics, mostly in numpy with complex dtypes. Some of the calculations return arrays whose imaginary components are almost, but not exactly, zero. For example:
(2 + 0j, 3+0j, 4+3.9320340202e-16j)
Clearly the third component is basically 0, but for whatever reason this is the output of my calculation, and it turns out that for some of these nearly zero values, np.iscomplex() returns True. Rather than dig through that big code, I think it's sensible to just apply a cutoff. My question is: what is a sensible cutoff below which a value should be considered zero? 0.00? 0.000000? etc.
I understand that these values are due to rounding errors in floating point math, and just want to handle them sensibly. What is the tolerance/range one allows for such precision error? I'd like to set it to a parameter:
ABOUTZERO=0.000001
As others have commented, what constitutes 'almost zero' really does depend on your particular application, and how large you expect the rounding errors to be.
If you must use a hard threshold, a sensible value might be the machine epsilon, which is defined as the upper bound on the relative error due to rounding for floating point operations. Intuitively, it is the smallest positive number that, when added to 1.0, gives a result >1.0 using a given floating point representation and rounding method.
In numpy, you can get the machine epsilon for a particular float type using np.finfo:
import numpy as np
print(np.finfo(float).eps)
# 2.22044604925e-16
print(np.finfo(np.float32).eps)
# 1.19209e-07
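One way to apply such a tolerance to the question's arrays is numpy's real_if_close helper, which drops imaginary parts smaller than tol multiples of the machine epsilon (tol=100 is its documented default; the right threshold remains application-specific):

import numpy as np

a = np.array([2 + 0j, 3 + 0j, 4 + 3.932e-16j])
cleaned = np.real_if_close(a, tol=100)  # all imaginary parts < 100 * eps, so a real array comes back
print(cleaned)  # [2. 3. 4.]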