A Digression On Using Floating Points
01204111 Computers and Programming
Chalermsak Chatdokmaiprai
Department of Computer Engineering, Kasetsart University
Revised 2018/07/27
Cliparts are taken from http://openclipart.org
Task: Is it a root of the equation?

❖ Write a function to check if a given number is a root of the equation
  x² + 3x − 10 = 0.

Define a function to do the task:

    def isroot(x):
        return x**2 + 3*x - 10 == 0

Call the function to check if the given number is a root
(in interactive mode, print() can be omitted):

    >>> print(isroot(2))
    True
    >>> isroot(-3)
    False
    >>> isroot(-5)
    True
    >>> isroot(0)
    False
Let's try another equation

❖ Write a function to check if a given number is a root of the equation
  x² − 0.9x − 0.1 = 0.

Define a function to do the task:

    def isroot(x):
        return x**2 - 0.9*x - 0.1 == 0

Test the function. The roots should be −0.1 and 1:

    >>> isroot(2)
    False
    >>> isroot(-0.1)
    True
    >>> isroot(1)
    False

Oh-oh! Why False? It should be True.
Floating-point calculations are often inexact (but a close approximation).

    def isroot(x):
        return x**2 - 0.9*x - 0.1 == 0

Let's investigate why isroot(1) returned False when it should be True,
by looking at the value of each term when x is 1:

    >>> 1**2
    1
    >>> -0.9*1
    -0.9
    >>> 1-0.9
    0.09999999999999998
    >>> 1**2 - 0.9*1 - 0.1
    -2.7755575615628914e-17
    >>> 1**2 - 0.9*1 - 0.1 == 0
    False

Oh-oh! 1-0.9 is not 0.1 as it should be, just a close approximation.
So the whole expression evaluates not to 0 but to
−0.000000000000000027755575615628914, which is a good approximation of
zero but still not zero. Now we know why the comparison yields False.
The reason behind floating-point inexactness:

• Modern computers use binary, not decimal, representations of numbers.
• Python uses binary floating-point values (type float) to represent fractional numbers such as 0.1, -30.625, 3.1416, etc.
• Some fractional numbers such as 0.1, 0.2, and 0.9, when converted into binary, become repeating binary fractions. For example,
      Decimal: 0.1
      Binary:  0.0001 1001 1001 1001 1001 1001 1001 1001 …
• That's why it is not possible to hold the exact value of some numbers, e.g. 0.1, in a fixed-size floating-point representation.
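One way to see this inexactness directly (a small aside, not from the original slides) is to ask Python for the exact decimal value it actually stores. Constructing a Decimal from a float, using the standard decimal module, reveals the float's true stored value:

```python
from decimal import Decimal

# A Decimal built from a float shows the exact binary value the float holds.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(Decimal(5.375))   # an exactly representable value
# 5.375
```

Note that Decimal('0.1') (from a string) would give exactly 0.1; it is Decimal(0.1) (from a float) that exposes the rounding.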
Example of floating-point inexactness

Let's see how the fractional decimal 0.1 is stored in computers as a floating point.

    0.1

This is its binary equivalent, a repeating binary fraction:

    0.0001 1001 1001 1001 1001 1001 …

Converted into normalized binary scientific notation:

    1.1001 1001 1001 1001 … × 2⁻⁴

which in turn is converted into a 64-bit floating point:

    0 01111111011 1001100110011001100110011001100110011001100110011010

Notice that the repeating fraction has to be cut off (rounded to the
nearest representable value) to fit into the 64-bit limit. The stored
value is equivalent to this decimal number:

    0.1000000000000000055511151231257827021181583404541015625

which is not 0.1 but a pretty close approximation.
Some floating points are really exact. (Thank goodness!)

Let's see how the fractional decimal 5.375 is stored in computers as a floating point.

    5.375

This is its exact binary equivalent, having a non-repeating binary fraction:

    101.011

Converted into normalized binary scientific notation:

    1.01011 × 2²

which in turn is converted into a 64-bit floating point:

    0 10000000001 0101100000000000000000000000000000000000000000000000

Padding the fraction with zeros to fill the 64-bit limit has no effect
on precision, so the stored value is exactly 5.375 in decimal.
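The 64-bit layouts shown on these slides can be inspected directly in Python with the standard struct module (a small sketch, not part of the original slides; the helper name double_bits is just for illustration):

```python
import struct

def double_bits(x: float) -> str:
    """Return the 64 raw bits of a Python float (an IEEE 754 double),
    grouped as: sign | 11-bit exponent | 52-bit fraction."""
    (n,) = struct.unpack('>Q', struct.pack('>d', x))
    b = f'{n:064b}'
    return f'{b[0]} {b[1:12]} {b[12:]}'

print(double_bits(5.375))
# 0 10000000001 0101100000000000000000000000000000000000000000000000

print(double_bits(0.1))
# 0 01111111011 1001100110011001100110011001100110011001100110011010
```

The fraction field of 5.375 is padded with exact zeros, while the fraction field of 0.1 ends with a rounded tail of its repeating pattern.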
Rounding Errors

The discrepancy between an actual number and its approximated, rounded
value is called a rounding error. For 0.1, the conversion chain

    0.1
    0.0001 1001 1001 1001 1001 1001 …
    1.1001 1001 1001 1001 … × 2⁻⁴
    0 01111111011 1001100110011001100110011001100110011001100110011010
    0.1000000000000000055511151231257827021181583404541015625

ends with a stored value that differs slightly from 0.1; that difference
is the rounding error.
Accumulative Rounding Errors

All these expressions should have yielded the same value, 3.333, but they didn't:

    >>> 0.1*33.33
    3.3330000000000002
    >>> 33.33/10
    3.3329999999999997
    >>> (1-0.9)*33.33
    3.3329999999999993
    >>> 333.3*0.1*0.1
    3.3330000000000006
    >>> 333.3*(1-0.9)*(1-0.9)
    3.3329999999999984
    >>> 3.333*(1-0.9)/0.1*(1-0.9)*10
    3.332999999999999

The more calculations, the larger the rounding errors tend to be.
All these rounding errors are unsurprising results of floating-point inexactness.
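Rounding errors also accumulate over repeated operations. A classic illustration (not from the original slides): adding 0.1 to itself ten times does not give exactly 1.0, because each addition rounds and the tiny errors pile up.

```python
total = 0.0
for _ in range(10):
    total += 0.1   # each addition rounds; the errors accumulate

print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```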
Does a rounding error really matter?

Most of the time it does not (thank heavens!), because the rounding
error is usually very small (at the 15th-16th decimal places in these examples):

    >>> 0.1*33.33
    3.3330000000000002
    >>> 33.33/10
    3.3329999999999997
    >>> (1-0.9)*33.33
    3.3329999999999993
    >>> 333.3*0.1*0.1
    3.3330000000000006
    >>> 3.33*(1-0.9)/0.1
    3.3299999999999987
    >>> 3.333*(1-0.9)/0.1*(1-0.9)*10
    3.332999999999999
Does a rounding error really matter?

Also, most programs only care to print the first few digits of their
results, so the rounding error is rarely visible or bothersome to us.

    a = 0.1*33.33
    b = 33.33/10
    c = (1-0.9)*33.33
    d = 333.3*0.1*0.1
    e = 333.3*(1-0.9)*(1-0.9)
    f = 3.333*(1-0.9)/0.1*(1-0.9)*10
    print(f'{a:.6f}')
    print(f'{b:.6f}')
    print(f'{c:.6f}')
    print(f'{d:.6f}')
    print(f'{e:.6f}')
    print(f'{f:.6f}')

Output:

    3.333000
    3.333000
    3.333000
    3.333000
    3.333000
    3.333000
But the real perils are: tests for floating-point equality (or inequality)

    a = 0.1*33.33
    b = 33.33/10
    c = (1-0.9)*33.33
    d = 333.3*0.1*0.1
    e = 333.3*(1-0.9)*(1-0.9)
    f = 3.333*(1-0.9)/0.1*(1-0.9)*10
    print(a == b, a == c, a == d, a == e, a == f)
    print(b == c, b == d, b == e, b == f)
    print(c == d, c == e, c == f)
    print(d == e, d == f)
    print(e == f)
    print(a != b)
    print(c != d)
    print(e != f)

Output:

    False False False False False
    False False False False
    False False False
    False False
    False
    True
    True
    True

The test results are all mathematically wrong, but we're not really
surprised because we know why.
Some more funny, useless floating-point equality tests

    >>> 0.7+0.1 == 0.8
    False
    >>> 0.7+0.1 == 0.6+0.2
    False
    >>> 0.1*3 == 0.3
    False
    >>> 0.1+0.1+0.1 == 0.3
    False
    >>> 0.1*0.1 == 0.01
    False
    >>> 1-0.9 == 0.1
    False
    >>> 0.1*6 == 0.5+0.1
    False
    >>> 0.2*3 == 0.6
    False
    >>> 0.3*2 == 10-9.4
    False
    >>> 1.1+2.2 == 3.3
    False
    >>> 0.3/0.7 == 3/7
    False
    >>> (1/10)*3 == 0.3
    False
    >>> (1/10)*3 == 1*3/10
    False
    >>> 3.3/10 == 3.3*0.1
    False
    >>> 3.3/10 == 0.33
    False
    >>> 6*0.1 == 0.6
    False
    >>> 6*(1-0.9) == 0.6
    False
    >>> 6*0.1 == 6*(1-0.9)
    False
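Printing both sides of a few of these comparisons shows why they fail (a quick sketch, not from the original slides): each side rounds to a slightly different nearby double, so the values only look equal in decimal.

```python
# Each pair looks equal in decimal, but the stored doubles differ.
print(0.7 + 0.1)  # 0.7999999999999999
print(0.8)        # 0.8
print(0.1 * 3)    # 0.30000000000000004
print(0.3)        # 0.3
print(1 - 0.9)    # 0.09999999999999998
```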
Tests for float equality can render some programs nearly useless

As we have seen in this implementation (the roots should be −0.1 and 1):

    def isroot(x):
        return x**2 - 0.9*x - 0.1 == 0

It tests for floating-point equality, which is dangerous.

    >>> isroot(2)
    False
    >>> isroot(-0.1)
    True
    >>> isroot(1)
    False

We're lucky that the first two outputs are correct, but the last one is
wrong, so the function is untrustworthy for its duty.
So, what should we do to deal with the problem?

Thou shalt sing the Mantra of Floating-Point Equality:

    "Inexact" doesn't mean "wrong".
    "Close enough" means "equal enough".

which leads to the following rule of thumb:

    It's almost always more appropriate to ask whether two floating
    points are close enough to each other, not whether they are equal.
Test for "close enough" is much less perilous.

In general, instead of using the perilous

    x == y

as the test for equality of two floats x and y, we'd better use the expression

    |x - y| <= epsilon

where epsilon is a positive number tiny enough for the task.
(Picture x lying anywhere within the small interval from y − ε to y + ε.)

For example, suppose the task we are solving needs precision of only
5 decimal places. Then two floats x and y that differ from each other
by not more than 0.000001 can be considered "equal" for our purpose.
So we use the expression

    |x - y| <= 0.000001

to test whether x and y are equal.
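In Python this test is a one-liner. The standard library also provides math.isclose(), which uses a relative tolerance by default; a sketch follows (the helper name close_enough is just for illustration):

```python
import math

def close_enough(x, y, epsilon=0.000001):
    """Absolute-tolerance comparison, as described above."""
    return abs(x - y) <= epsilon

print(close_enough(33.33/10, (1-0.9)*33.33))  # True
print(math.isclose(33.33/10, (1-0.9)*33.33))  # True
print(close_enough(1.0, 2.0))                 # False
```

A fixed absolute epsilon suits values near 1, as in these slides; for values of very different magnitudes, a relative tolerance like math.isclose's is usually safer.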
Test for "close enough" is much less perilous.

Mathematically, x and y below are equal, but due to rounding errors
they become minutely different:

    >>> x = 33.33/10
    >>> y = (1-0.9)*33.33
    >>> print(x, y)
    3.3329999999999997 3.3329999999999993

And Python is honest enough to yield False for the equality test:

    >>> x == y
    False

Now we apply the mantra "close enough means equal" to make the test
result more in line with mathematics:

    >>> abs(x-y) <= 0.000001
    True
Let's fix our function isroot()

Apply the mantra here:

    >>> def isroot(x):
            epsilon = 0.000001
            return abs(x**2 - 0.9*x - 0.1) <= epsilon

    >>> isroot(2)
    False
    >>> isroot(-0.1)
    True
    >>> isroot(1)
    True
    >>> 0.9-1
    -0.09999999999999998
    >>> isroot(0.9-1)
    True
    >>> 5.23*10/52.3
    1.0000000000000002
    >>> isroot(5.23*10/52.3)
    True

Such a mathematician's delight! Now we're very pleased that our
function works in perfect agreement with mathematics.