A perfect example of this is how computers represent fractions and small decimals. Take a look at the following program flowchart:
a = 0
start:
a = a + (1/10)
loop to [start] 10 times.
At the end of the program, a should logically equal 1. Work it out: if you add 0.1, or 1/10, ten times, the answer should be 1. Written in Perl, the above program would look like this:
#!/usr/bin/perl
use strict;
use warnings;

my $num = 0;
for (my $i = 0; $i < 10; $i++) { $num += 0.1 }
if ($num != 1) {
    # Escape the $ so printf shows the literal text "$num"
    # instead of interpolating the variable into the format string.
    printf "\n\$num = %.45f\n", $num;
}
Effectively, this program displays the variable $num after 0.1 has been added to it ten times, except the format %.45f tells Perl to print the value to 45 decimal places rather than the usual rounded form. The output looks a little something like this:
$num = 0.999999999999999888977697537484345957636833191
So, in the mind of a computer, adding 0.1 ten times gives 0.999999999999999888977697537484345957636833191, not 1. While most of us would dismiss the difference as too insignificant to matter, it could mean the world to precise sciences such as chemistry, where parts per billion and parts per trillion are commonplace.
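For work that can't tolerate this drift, decimal arithmetic libraries sidestep the problem by storing 0.1 exactly instead of approximating it in binary. The original post doesn't cover this, but here is a minimal sketch in Python (Perl has the same idea in modules like Math::BigFloat):

```python
from decimal import Decimal

# Binary floating point: ten additions of 0.1 land just short of 1.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)   # False

# Decimal arithmetic represents 0.1 exactly, so the sum is exactly 1.
exact = sum(Decimal("0.1") for _ in range(10))
print(exact == 1)     # True
```

Note that the decimal value must be given as the string "0.1"; writing Decimal(0.1) would faithfully capture the already-inexact binary double.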
The error comes from converting 1/10 into base two for use by the computer: in binary, 1/10 is a repeating fraction (0.000110011001100...), so storing it in a finite number of bits means cutting it off, and it therefore loses accuracy in the conversion. For those of you with Macs who have no idea what I'm talking about, you can go ahead and try it yourselves, and you'll produce exactly the same result. Go to your Applications folder and search for Terminal. Open it, type perl, paste in the program above, and press Control-D.
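You can also inspect the truncation directly. The original post doesn't show this, but Python's fractions module can recover the exact ratio of integers that a double actually stores for 0.1, making the rounding visible:

```python
from fractions import Fraction

# Fraction(0.1) recovers the exact value of the nearest double to 1/10:
# a ratio whose denominator is a power of two (2**55), not 10.
stored = Fraction(0.1)
print(stored)                      # 3602879701896397/36028797018963968

# The stored value is close to, but not equal to, the true 1/10.
print(stored == Fraction(1, 10))   # False
```

The denominator is a power of two because binary floating point can only represent fractions whose denominators are powers of two exactly; 1/10 is not one of them.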