Having problems with sqrt() in C - python

While writing code for my class I managed to create a working example in Python that meets all of the requirements my professor set out for us. But when I translated it into C to hand it in, I found several problems that I thought I had managed to circumvent, but it turns out I was wrong.
One of the problems I found concerns the following Python code, which I used to determine whether a square root is a natural number.
if (math.sqrt(a**2+b**2)%2)==1 or (math.sqrt(a**2+b**2)%2)==0:
When I translated this into C, I got compile errors saying that I was trying to apply the modulo operator to floats, along with a whole lot of other complaints.
I did a few test runs in C to get a feel for how roots behave and found that a variable declared as a float or double prints 0 if the root is a natural number.
#include <stdio.h>
#include <math.h>

int main()
{
    float r_float_a, r_float_b;
    int a = 16, b = 15, r_int_a, r_int_b;
    r_float_a = sqrt(a);
    r_float_b = sqrt(b);
    r_int_a = sqrt(a);
    r_int_b = sqrt(b);
    printf("%d\n", r_float_a);
    printf("%d\n", r_float_b);
    printf("%d\n", r_int_a);
    printf("%d\n", r_int_b);
}
But if I try to use r_float_a in an if statement, it always evaluates as false no matter the value of a:
#include <stdio.h>
#include <math.h>

int main()
{
    float r_float_a, r_float_b;
    int a = 16, b = 15, r_int_a, r_int_b;
    r_float_a = sqrt(a);
    r_float_b = sqrt(b);
    r_int_a = sqrt(a);
    r_int_b = sqrt(b);
    printf("%d\n", r_float_a);
    printf("%d\n", r_float_b);
    if (r_float_a == 0)
    {
        printf("%d\n", r_int_a);
    }
    else
    {
        printf("%d\n", r_int_b);
    }
}
How can I fix this to be able to check if the square root of a number is a natural number?

Obviously, if the number's square root is a natural (integer) number, so is the number itself.
unsigned a = 15;
unsigned root = (unsigned)sqrt(a);
if (root * root == a)
    printf("%u has the natural square root %u\n", a, root);
else
    printf("%u does not have a natural square root\n", a);
I used unsigned because a natural number is a whole, non-negative number, so the use of signed numbers is outside the scope of this question. A square root can be negative, but that would not be a natural number, and the square root of a negative number enters the realm of complex numbers.
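As a side note for the Python original: math.isqrt (Python 3.8+) computes the exact integer square root, so the same test can be written without floating point at all. A minimal sketch (the helper name is mine):

```python
import math

def has_natural_sqrt(n):
    """True if n is a perfect square, i.e. sqrt(n) is a natural number."""
    if n < 0:
        return False  # negative numbers have no natural square root
    root = math.isqrt(n)  # exact integer square root, no rounding error
    return root * root == n

print(has_natural_sqrt(16))  # True
print(has_natural_sqrt(15))  # False
```

Because math.isqrt works on arbitrary-precision integers, this check stays correct even for numbers far beyond the range where float sqrt loses precision.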

The OP's and Weather Vane's methods fail for large float values. float typically has 6 to 8 decimal digits of precision. For numbers larger than about 1e14, sqrt(), saved as a float, will always be a whole number - convertible exactly to an integer (if in the integer range).
To determine if a float x is an exact square of a whole number:
Test if x is negative, INF or NaN, and fail those pathological cases.
Use modff() to extract the fractional portion; if it is not 0.0, fail.
Use frexpf() to extract the significand and exponent. Examine the exponent, factor out even powers above the float bit width, re-form x, and then apply @Weather Vane's test.
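The precision loss is easy to demonstrate with NumPy's 32-bit float (a sketch illustrating the failure mode, not the exact procedure above):

```python
import numpy as np

x = np.float32(1e15)  # 1e15 is not a perfect square
r = np.sqrt(x)        # float32 result: only ~24 bits of significand

# Above 2**24, consecutive float32 values are at least 2 apart, so the
# square root comes out as a whole number even though sqrt(1e15) is not.
print(float(r) == float(int(r)))   # True: the fractional part was lost
print(int(r) * int(r) == 10**15)   # False: it is not really a square
```

A fractional-part test alone would therefore wrongly report every large number as a perfect square, which is why the exponent has to be examined as well.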


What happens when a float is cast to/from a boolean at the first-principles level?

I'm trying to figure out what really happens under the hood when a float is cast to/from a bool in Python. Here is the information I could find online, and I was wondering if anyone would be willing to step in and help me reach an official understanding:
Python floats are represented in C as a 64-bit data type known as a double.
Booleans are represented in C as 1s and 0s: if true -> 1 (binary 0001), if false -> 0 (binary 0000).
From the link here, I can see that there are really 3 parts to a double in memory: the sign, exponent, and fraction.
Working up from first principles, I'm inclined to think that some combination of the exponent and fraction is used when casting from float to boolean. For example 2^0 is 1, but e^x != 0 for all permutations of e and x respectively, so I'm really confused.
I have an interview tomorrow that is most likely going to ask me this question so I'm wondering if I could get some help figuring this out. Thanks and have a great day/night!
This seems to be the relevant code from https://github.com/python/cpython/blob/master/Objects/floatobject.c.
static int
float_bool(PyFloatObject *v)
{
    return v->ob_fval != 0.0;
}
As we can see, the value v->ob_fval (a double) is compared to 0.0. If they compare unequal, the function returns a non-zero value, which Python then maps to the bool value True (or 1). If they compare equal, the function returns 0 (false), which Python maps to the bool value False (0).
So the question of how double is represented isn't really relevant at the level of the Python interpreter. The expression v->ob_fval != 0.0 will most likely map to a single hardware compare instruction. Most processors have dedicated hardware for floating point operations.
Comparing to 0 is slightly tricky, because IEEE floating point numbers have both a +0 and -0 representation (as you can see in the link you provided), so the hardware needs to check for the case where v->ob_fval is -0, but being compared to +0. But the hardware typically takes care of that, without Python (or even the C compiler) generally having to worry about that level of detail.
Your confusion about 2^0 and e^x isn't really relevant to the original question, but I think you are definitely confused, so I'd suggest reading your link again. The key is that the exponent in the double is not the exponent of the fractional part; it's the exponent of the constant value 2, and the fraction (plus 1) is multiplied by the result.
This has nothing to do with the internal representation of a float.
Booleans are a subclass of int, so float(True) == 1.0 and float(False) == 0.0.
Only 0.0 maps to False; all other floating-point values (including float("nan") and float("inf")) map to True.
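These rules can be checked directly at the prompt:

```python
# Only an exactly-zero float is falsy; the sign of zero does not matter.
assert bool(0.0) is False
assert bool(-0.0) is False          # -0.0 compares equal to 0.0
assert bool(float("nan")) is True   # NaN != 0.0, so it is truthy
assert bool(float("inf")) is True

# bool is a subclass of int, so the conversion the other way is exact.
assert float(True) == 1.0
assert float(False) == 0.0
```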
In C
Converting a float to bool:
+0.0f and -0.0f convert to false; every other float value, including not-a-numbers, converts to true.
The internal float representation is irrelevant; only its value matters.
"When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1." (C11 §6.3.1.2 1)
Converting a bool to float:
false --> 0.0f, true --> 1.0f
"An object declared as type _Bool is large enough to store the values 0 and 1." (C11 §6.2.5 2)

Smallest Number in Python

PROBLEM STATEMENT: Write a Python script to determine the smallest positive double number in Python.
Your code should produce a variable called smallest_num which is the smallest double number in Python.
Your script should determine this value in a systematic manner. You may NOT simply call a built-in function that returns this value or access a built-in variable that contains this information. This includes np.finfo() and other built-in functions or variables.
The setup code gives the following variables:
Your code snippet should define the following variable:
smallest_num (floating point): the smallest number possible in Python
Attempted Solution
import numpy as np
import math

def machineEpsilon(func=float):
    machine_epsilon = func(1)
    while func(1) + func(machine_epsilon) != func(1):
        machine_epsilon_last = machine_epsilon
        machine_epsilon = func(machine_epsilon) / func(2)
    return machine_epsilon_last

sum_f = machineEpsilon(np.float64)
smallest_sum = float(sum_f)
print(isinstance(smallest_sum, float))
print(smallest_sum)
Output
True
2.220446049250313e-16
However, I am unable to get the correct answer, as the true smallest number is much smaller than the printed value. I know this number will underflow to zero and I might want to do some comparison here, but I am a bit stuck. Any thoughts?
Probably the most reasonable thing to do would be to directly compute the next double-precision float after 0.0:
smallest_num = np.nextafter(0, 1)
This does not simply call a built-in function that returns the smallest positive double-precision float, or access a variable pre-set to that value. However, people get weird when they see function call syntax in problems like this, so it risks being marked incorrect anyway.
Taking advantage of how IEEE754 floating-point representation and rounding works, we can start with 1.0 and divide by 2 until the next division would underflow:
smallest_num = 1.0
while smallest_num / 2:
    smallest_num /= 2
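A quick sanity check of the halving loop (assuming IEEE 754 binary64, which is what CPython floats use): each division by 2 is exact until the subnormal range runs out, so the loop should stop at the smallest positive subnormal double, 2**-1074.

```python
smallest_num = 1.0
while smallest_num / 2:
    smallest_num /= 2

print(smallest_num)                # 5e-324
assert smallest_num == 2 ** -1074  # smallest positive subnormal double
assert smallest_num / 2 == 0.0     # the next halving underflows to zero
```

Note that this value is far smaller than machine epsilon (2.22e-16), which measures spacing around 1.0 rather than the smallest representable magnitude; that distinction is exactly where the attempted solution went wrong.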

how python 3 handles big integers internally [duplicate]

In C, C++, and Java, an integer has a certain range. One thing I realized in Python is that I can calculate really large integers such as pow(2, 100). The equivalent code in C, pow(2, 100), would clearly cause an overflow, since on a 32-bit architecture the unsigned integer type ranges from 0 to 2^32-1. How is it possible for Python to calculate these large numbers?
Basically, big numbers in Python are stored in arrays of 'digits'. The word is in quotes because each 'digit' can actually be quite a big number on its own.
You can check the details of implementation in longintrepr.h and longobject.c:
There are two different sets of parameters: one set for 30-bit digits,
stored in an unsigned 32-bit integer type, and one set for 15-bit
digits with each digit stored in an unsigned short. The value of
PYLONG_BITS_IN_DIGIT, defined either at configure time or in pyport.h,
is used to decide which digit size to use.
/* Long integer representation.
   The absolute value of a number is equal to
        SUM(for i=0 through abs(ob_size)-1) ob_digit[i] * 2**(SHIFT*i)
   Negative numbers are represented with ob_size < 0;
   zero is represented by ob_size == 0.
   In a normalized number, ob_digit[abs(ob_size)-1] (the most significant
   digit) is never zero. Also, in all cases, for all valid i,
        0 <= ob_digit[i] <= MASK.
   The allocation function takes care of allocating extra memory
   so that ob_digit[0] ... ob_digit[abs(ob_size)-1] are actually available.
*/
struct _longobject {
    PyObject_VAR_HEAD
    digit ob_digit[1];
};
How is it possible for Python to calculate these large numbers?
How is it possible for you to calculate these large numbers if you only have the 10 digits 0-9? Well, you use more than one digit!
Bignum arithmetic works the same way, except the individual "digits" are not 0-9 but 0-4294967295 or 0-18446744073709551615.
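The scheme is easy to mimic in pure Python, assuming CPython's 30-bit digit size:

```python
n = 2 ** 100

# Split n into 30-bit "digits", least significant first, mirroring
# ob_digit[i] * 2**(SHIFT*i) with SHIFT == 30.
digits = []
m = n
while m:
    digits.append(m & ((1 << 30) - 1))
    m >>= 30

assert sum(d * 2 ** (30 * i) for i, d in enumerate(digits)) == n
print(digits)  # [0, 0, 0, 1024]: four 30-bit digits hold a 101-bit number
```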

Why is the absolute value of a complex number a floating point number?

In python3:
>>> abs(-5) == 5
and
>>> abs(5) == 5
but
>>> abs(5+0j) == 5.0
The absolute value of a complex number a+bj is defined as the distance between the origin (0,0) and the point (a,b) in the complex plane. In other words, it's sqrt(a**2 + b**2).
I take it that the real question is “Why does Python's abs return integer values for integer arguments but floating point values for complex numbers with a plain integer value.”
Concerning the argument and result types of abs there are three main cases:
argument is integer ⇒ result is integer; so it's safe to say that abs(-5) returns an integer (5).
argument is real (floating point) ⇒ result is real; so abs(5.1) returns a floating point number (5.1).
argument is complex ⇒ the result is a floating point number but the decision whether it has an exact integer value depends on the values of the real/imaginary parts of the argument.
This decision in the last case is far from trivial: abs(5+0i) has an integer value, and so does abs(3+4i) (Pythagoras), but abs(5+2i) does not. In other words, it would not make sense to create an "integer complex" type and provide an abs implementation for it; the result would in most cases not be an integer.
So it is quite sensible not to extend the integer/real distinction into the fields of complex numbers. (It would work for addition but the practical benefit would be close to zero.)
Assuming you know about the definition of the norm of a complex number, then your question becomes: why is abs(5j) returning 5.0 instead of 5 even though you provided an int as imaginary component?
The answer is type consistency. Since abs returns a float for complex numbers, there is no reason to make a special case and return an int if the output happens to be a round number.
Also, note that the same reasoning applies to the components of your complex numbers, which are always stored as float.
z = 1 + 1j
z.real # 1.0
z.imag # 1.0
Because the absolute value of a complex number is the distance from origin to the number on the complex plane (where the two components of the complex number form the coordinates).
The imaginary i and real r components of a complex number can be seen as coordinates on a plane, and you can calculate the distance from the origin ((0, 0)) by using Pythagorean distance formula, sqrt(i**2 + r**2).
The distance can be expressed as a floating point (real) number, there is no imaginary component.
It also can’t be an integer, because the Pythagorean distance is not always a convenient whole number (unlike the absolute value of an integer, which can only ever be another integer).
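The type behaviour described above is easy to confirm:

```python
import math

assert abs(-5) == 5 and type(abs(-5)) is int       # int in, int out
assert abs(-5.1) == 5.1 and type(abs(-5.1)) is float
assert abs(3 + 4j) == 5.0                          # 3-4-5 triangle
assert type(abs(3 + 4j)) is float                  # float even when whole
assert math.isclose(abs(5 + 2j), math.sqrt(29))    # generally irrational
```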

Why is 10/3 equal to 3.3333333333333335 instead of ...332 or ..334?

Could someone specifically explain why the last digit is 5 instead of 2 or 4?
Knowing a float must have an inaccuracy in the binary world, I can't get exactly how the result of the case above has been made in detail.
It's because it's the closest a (64-bit) float can be to the "true value" of 10/3.
I use Cython here (to wrap the math.h nexttoward function) because I don't know any built-in function for this:
%load_ext cython
%%cython
cdef extern from "<math.h>" nogil:
    # A function that gets the next representable double for the
    # first argument in the direction of the second argument.
    double nexttoward(double, long double)

def next_float_toward(double i, long double j):
    return nexttoward(i, j)
(Or, as pointed out by @DSM, you could also use numpy.nextafter.)
When I display the values for 10/3 and the previous and next representable float for this value I get:
>>> '{:.50}'.format(10/3)
3.3333333333333334813630699500208720564842224121094
>>> '{:.50}'.format(next_float_toward(10/3., 0))
3.3333333333333330372738600999582558870315551757812
>>> '{:.50}'.format(next_float_toward(10/3., 10))
3.3333333333333339254522798000834882259368896484375
So every other representable floating point value is "further" away.
Because the largest representable value smaller than ...335 is ...330 (specifically, it's smaller by 2**-51, or one Unit in the Last Place). ...333 is closer to ...335 than it is to ...330, so this is the best possible rounding.
There are a number of ways to confirm this; MSeifert shows one thing you can do directly with <math.h> from C or Cython. You can also use this visualization tool (found by Googling "floating point representation" and picking one of the top results) to play with the binary representation.
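On Python 3.9+, the same neighbours can be inspected without Cython via math.nextafter, and the fractions module (which represents floats exactly) confirms the rounding is the best possible:

```python
import math
from fractions import Fraction

x = 10 / 3
below = math.nextafter(x, 0.0)       # next representable double toward 0
above = math.nextafter(x, math.inf)  # next representable double toward +inf

true_value = Fraction(10, 3)         # exact rational 10/3

def error(f):
    # Exact distance from the true value (Fraction(float) is lossless).
    return abs(true_value - Fraction(f))

# 10/3 rounds to the representable double closest to the true value.
assert error(x) < error(below)
assert error(x) < error(above)
assert above - x == 2 ** -51         # one ulp at this magnitude
```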
