2.1: Bisection Method
The bisection method is the easiest to implement numerically and almost always works. Its main disadvantage is that convergence is slow. If the bisection method results in a computer program that runs too slowly, a faster method may be chosen; otherwise it is a good choice of method.
We want to construct a sequence \(x_0, x_1, x_2, \ldots\) that converges to the root \(x = r\) that solves \(f(x) = 0\). We choose \(x_0\) and \(x_1\) such that \(x_0 < r < x_1\). We say that \(x_0\) and \(x_1\) bracket the root. Since \(f(r) = 0\), we want \(f(x_0)\) and \(f(x_1)\) to be of opposite sign, so that \(f(x_0)f(x_1) < 0\). We then assign \(x_2\) to be the midpoint of \(x_0\) and \(x_1\), that is \(x_2 = (x_0 + x_1)/2\), or
\(x_2 = x_0 + \dfrac{x_1 - x_0}{2}\).
The sign of \(f(x_2)\) can then be determined. The value of \(x_3\) is then chosen as either the midpoint of \(x_0\) and \(x_2\) or the midpoint of \(x_2\) and \(x_1\), depending on whether \(x_0\) and \(x_2\) bracket the root or \(x_2\) and \(x_1\) bracket the root. The root, therefore, stays bracketed at all times. The algorithm proceeds in this fashion and is typically stopped when the increment to the left side of the bracket (above, given by \((x_1 - x_0)/2\)) is smaller than some required precision.
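The procedure above can be sketched in code. The following is a minimal illustration, not a definitive implementation: the function name `bisection` and the tolerance parameter `tol` are choices made for this example, and the bracket endpoints are reused as variables rather than stored as a sequence \(x_0, x_1, x_2, \ldots\).

```python
def bisection(f, x0, x1, tol=1e-8):
    """Find a root of f bracketed by x0 < r < x1.

    Illustrative sketch: assumes f(x0) and f(x1) have opposite
    signs, i.e. the initial points bracket the root.
    """
    if f(x0) * f(x1) >= 0:
        raise ValueError("x0 and x1 must bracket the root")
    # Stop when the increment (x1 - x0)/2 is below the precision tol.
    while (x1 - x0) / 2 > tol:
        x2 = x0 + (x1 - x0) / 2     # midpoint, as in the formula above
        if f(x2) == 0:              # landed exactly on the root
            return x2
        if f(x0) * f(x2) < 0:       # root bracketed by x0 and x2
            x1 = x2
        else:                       # root bracketed by x2 and x1
            x0 = x2
    return x0 + (x1 - x0) / 2


# Example usage: the root of f(x) = x^2 - 2 in [1, 2] is sqrt(2).
root = bisection(lambda x: x * x - 2, 1.0, 2.0)
```

At each iteration the bracket halves, so the error after \(n\) steps is at most \((x_1 - x_0)/2^n\), which is the slow but guaranteed convergence the text describes.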