Because of the way computers represent floating point (decimal) numbers, such as doubles, there is almost always some error in their stored values. Many novice programmers forget that the computer's representation of a decimal number is not exact and try to compare decimal numbers using ==. Such comparisons are a bad idea.

The correct way to compare two decimal numbers for equality is to check whether they are "close enough" to each other. It is up to the program designer to decide what "close enough" really means. Here, we have chosen EPSILON = 0.00000000001 to mean "close enough". That is, whenever two numbers differ by less than 0.00000000001, we will consider them equal.

One major problem that results from this inexact representation is that many programmers try to represent money in dollars and cents using decimal numbers. This leads to rounding errors, which means someone's money is either lost or was never really there in the first place. Money should instead be represented with an int and tracked in pennies; savvy programmers can then format their output so it looks like dollars!
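As a sketch of the pennies approach (the variable names here are just illustrative, and <iomanip> is assumed for setw and setfill):

int priceInPennies = 1999;              // $19.99 stored exactly as 1999 pennies

int dollars = priceInPennies / 100;     // 19
int pennies = priceInPennies % 100;     // 99

cout << "$" << dollars << "."
   << setw(2) << setfill('0') << pennies << endl;   // prints $19.99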

Error in floating point numbers:

The computer representation of most decimal numbers is an approximation:

cout << "(1.0 / 5.0) + (2.0 / 5.0) + (4.0 / 5.0) = "
   << (1.0 / 5.0) + (2.0 / 5.0) + (4.0 / 5.0) << endl;

Here, the output (with help from iomanip) will be:

(1.0 / 5.0) + (2.0 / 5.0) + (4.0 / 5.0) = 1.40000000000000010000
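(The snippet above omits the iomanip call. A minimal compilable version, assuming setprecision is used to request extra digits, might look like the following; the exact trailing digits can vary by platform.)

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
   cout << fixed << setprecision(20);   // show 20 digits after the decimal point
   cout << "(1.0 / 5.0) + (2.0 / 5.0) + (4.0 / 5.0) = "
      << (1.0 / 5.0) + (2.0 / 5.0) + (4.0 / 5.0) << endl;
   return 0;
}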

Comparing decimal numbers:

The wrong way:

double almost1point4 = 1.0/5.0 + 2.0/5.0 + 4.0/5.0;
  
if( almost1point4 == 1.4 ) // always false!
   cout << "almost1point4 really is 1.4!" << endl;
else
   cout << "almost1point4 is not 1.4!" << endl;

The right way:

const double EPSILON = 0.00000000001;
   // if they're this close, we'll call them equal

if( fabs( almost1point4 - 1.4 ) < EPSILON )   // fabs comes from <cmath>
   cout << "almost1point4 is close enough!" << endl;
else
   cout << "almost1point4 is not close enough to 1.4!"
      << endl;
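
It can be handy to wrap this test in a small helper function so the EPSILON comparison is written only once; a sketch (the name approximatelyEqual is just illustrative) could be:

#include <cmath>   // for fabs

const double EPSILON = 0.00000000001;

// returns true when a and b differ by less than EPSILON
bool approximatelyEqual( double a, double b )
{
   return fabs( a - b ) < EPSILON;
}

// usage:
// if( approximatelyEqual( almost1point4, 1.4 ) ) ...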