Your language isn't broken; it's doing floating-point math.
Computers can only natively store integers,
so they need some way of representing decimal numbers.
This representation comes with some degree of inaccuracy.
That's why, more often than not, .1 + .2 != .3.
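You can check this in any of the languages below. In a Python shell, for instance:

>>> .1 + .2
0.30000000000000004
>>> .1 + .2 == .3
False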
Why does this happen?
It's actually pretty simple. A base-10 system (like ours) can cleanly express only fractions whose denominators contain just the prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly, while 1/3, 1/6, and 1/7 are all repeating decimals because their denominators use a prime factor of 3 or 7.

In binary (base 2), the only prime factor is 2, so only fractions whose denominators contain nothing but 2 as a prime factor terminate. 1/2, 1/4, and 1/8 come out clean in binary, while 1/5 and 1/10 are repeating. So 0.1 and 0.2 (1/10 and 1/5), clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer operates in, and have to be rounded off at some finite number of bits.

When you do math on these rounded-off repeating fractions, the leftovers accumulate and carry over when the computer's base-2 (binary) result is converted into a more human-readable base-10 number.
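To see those leftovers concretely, here is a quick Python sketch (using the standard decimal and fractions modules) that prints the exact values the computer actually stores:

from decimal import Decimal
from fractions import Fraction

# The exact base-2 value stored for each literal, written out in base 10:
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875

# The two sides land on different doubles, so the comparison fails:
print(0.1 + 0.2 == 0.3)    # False

# Exact rational arithmetic (as in the Ruby, Racket, and Rust examples below)
# sidesteps binary fractions entirely:
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10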
Below are some examples of sending .1 + .2
to standard output in a variety of languages.
Read more: Wikipedia | IEEE 754 | Stack Overflow

C:
#include <stdio.h>

int main(int argc, char** argv) {
    printf("%.17f\n", .1 + .2);
    return 0;
}
Result: 0.30000000000000004

C++:
#include <iostream>
#include <iomanip>
int main() { std::cout << std::setprecision(17) << 0.1 + 0.2 << std::endl; }
Result: 0.30000000000000004

PHP:
echo .1 + .2;
Result: 0.3

MySQL:
SELECT .1 + .2;
Result: 0.3

PostgreSQL:
SELECT 0.1::float + 0.2::float;

Delphi:
writeln(0.1 + 0.2);

Erlang:
io:format("~w~n", [0.1 + 0.2]).
Result: 0.30000000000000004

Elixir:
IO.puts(0.1 + 0.2)
Result: 0.30000000000000004

Ruby:
puts 0.1 + 0.2
And
puts 1/10r + 2/10r
Result: 0.30000000000000004
And
3/10

Python 2:
print(.1 + .2)
And
.1 + .2
And
import decimal
float(decimal.Decimal(".1") + decimal.Decimal(".2"))
Result: 0.3
And
0.30000000000000004
And
0.3

Python 3:
print(.1 + .2)
And
.1 + .2
Result: 0.30000000000000004
And
0.30000000000000004

Lua:
print(.1 + .2)
And
print(string.format("%0.17f", 0.1 + 0.2))
Result: 0.3
And
0.30000000000000004

JavaScript:
document.writeln(.1 + .2);
Result: 0.30000000000000004

Java:
System.out.println(.1 + .2);
And
System.out.println(.1F + .2F);
Result: 0.30000000000000004
And
0.3

Julia:
.1 + .2
Result: 0.30000000000000004

Clojure:
(+ 0.1 0.2)
Result: 0.30000000000000004

C#:
Console.WriteLine("{0:R}", .1 + .2);
Result: 0.30000000000000004

Haskell (GHC):
0.1 + 0.2
Result: 0.30000000000000004

Haskell (Hugs):
0.1 + 0.2

bc:
0.1 + 0.2

Nim:
echo(0.1 + 0.2)

Forth (Gforth):
0.1e 0.2e f+ f.
Result: 0.3

dc:
0.1 0.2 + p
Result: .3

Racket:
(+ .1 .2)
And
(+ 1/10 2/10)
Result: 0.30000000000000004
And
3/10

Rust:
extern crate num;
use num::rational::Ratio;

fn main() {
    println!("{}", 0.1 + 0.2);
    println!("1/10 + 2/10 = {}", Ratio::new(1, 10) + Ratio::new(2, 10));
}
Result: 0.30000000000000004
And
1/10 + 2/10 = 3/10

Emacs Lisp:
(+ .1 .2)
Result: 0.30000000000000004

Turbo Pascal:
writeln(0.1 + 0.2);

Common Lisp:
* (+ .1 .2)
And
* (+ 1/10 2/10)
Result: 0.3
And
3/10

Go:
package main

import "fmt"

func main() {
    // The untyped constants .1 and .2 are added in exact precision, and
    // only the final sum is rounded to float64, so this prints 0.3.
    fmt.Println(.1 + .2)
    // Here each value is rounded to float64 first, and the rounded
    // values are added, so the error shows up.
    var a float64 = .1
    var b float64 = .2
    fmt.Println(a + b)
    // The full decimal expansion of the float64 closest to 0.3:
    fmt.Printf("%.54f\n", .1 + .2)
}
Result: 0.3
0.30000000000000004
0.299999999999999988897769753748434595763683319091796875

Standard ML:
0.1 + 0.2;

OCaml:
0.1 +. 0.2;;

PowerShell:
PS C:\> 0.1 + 0.2

Prolog:
?- X is 0.1 + 0.2.

Perl:
perl -E 'say 0.1+0.2'
And
perl -e 'printf q{%.17f}, 0.1+0.2'
Result: 0.3
And
0.30000000000000004

Perl 6:
perl6 -e 'say 0.1+0.2'
And
perl6 -e 'say sprintf(q{%.17f}, 0.1+0.2)'
And
perl6 -e 'say 1/10+2/10'
Result: 0.3
And
0.30000000000000000
And
0.3

R:
print(.1+.2)
And
print(.1+.2, digits=18)
Result: 0.3
And
0.300000000000000044

Scala:
scala -e 'println(0.1 + 0.2)'
And
scala -e 'println(0.1F + 0.2F)'
And
scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'
Result: 0.30000000000000004
And
0.3
And
0.3

Smalltalk:
0.1 + 0.2.

Swift:
0.1 + 0.2
And
NSString(format: "%.17f", 0.1 + 0.2)
Result: 0.30000000000000004

D:
import std.stdio;

void main(string[] args) {
    writefln("%.17f", .1 + .2);   // double
    writefln("%.17f", .1f + .2f); // float
    writefln("%.17f", .1L + .2L); // real (80-bit on x86)
}
Result: 0.30000000000000004
0.30000001192092896
0.30000000000000000

ABAP:
WRITE / CONV f( '.1' + '.2' ).
And
WRITE / CONV decfloat16( '.1' + '.2' ).
Result: 0.3