Note: all example code is given in SML and tested with SMLNJ.
A classic example in the introduction of algorithms and complexity is the evaluation of a^n (for non-negative integer n). A naïve implementation uses the recurrence
a^0 = 1, a^(n+1) = a * a^n. The result is
fun powNaive _ 0 = 1 | powNaive a n = a * powNaive a (n-1);
As a practical point, tail recursion can be introduced here to avoid unnecessary stack usage:
fun powNaiveTail a n = let fun inner 0 _ accum = accum | inner n a accum = inner (n-1) a (a*accum) in inner n a 1 end;
This approach clearly requires
O(n) arithmetic operations. (We shall brush the non-constant cost of arithmetic operations on big numbers under the carpet for the time being).
The non-naïve approach uses the same base case but the more powerful recurrence relation
a^(m+n) = a^m * a^n. In particular,
a^(2n) = (a^n)^2 and
a^(2n+1) = a * (a^n)^2. Jumping straight to the tail-recursive version, we have
fun pow a n = let fun inner 0 _ accum = accum | inner n a accum = inner (n div 2) (a*a) (if n mod 2 = 1 then a*accum else accum) in inner n a 1 end;
It can easily be seen that the number of arithmetic operations is proportional to the number of bits in
n, i.e. this approach requires
O(lg n) arithmetic operations.
Treatments of the Fibonacci numbers differ slightly, so in the interests of clarity we shall define the treatment used here. The Fibonacci numbers are an integer sequence which can be considered as a function from the natural numbers to the natural numbers satisfying
F(0) = 0, F(1) = 1, F(n+2) = F(n+1) + F(n).
The naïve approach to evaluating Fibonacci numbers is to simply apply the definition. As a tail-recursive function this gives
fun fiboNaive n = let fun inner 0 x _ = x | inner n x y = inner (n-1) y (x+y) in inner n 0 1 end;
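For contrast, transcribing the definition literally as a doubly recursive function (a version not shown above; fiboDirect is our name for it) takes exponentially many additions, which is why the pair-passing form is preferred even before we look for anything cleverer:

```sml
(* Literal transcription of F(0) = 0, F(1) = 1, F(n+2) = F(n+1) + F(n).
   The two recursive calls recompute overlapping subproblems, so the
   call tree has size proportional to F(n), i.e. exponential in n. *)
fun fiboDirect 0 = 0
  | fiboDirect 1 = 1
  | fiboDirect n = fiboDirect (n - 1) + fiboDirect (n - 2);
```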
However, Binet's formula, which expresses the Fibonacci numbers as a linear combination of two exponentials, suggests that we might be able to achieve the same complexity as computing powers. The key is the identity
F(m+n) = F(m+1) F(n) + F(m) F(n-1).
Proof: by induction on
m. Consider the case
m = 0: we have
F(n) = F(1) F(n) + F(0) F(n-1), which is trivially true by the definitions of
F(0) = 0 and
F(1) = 1. Now suppose that it holds for
m and consider
F(m+1+n)
= F(m+n) + F(m+n-1) (by definition)
= F(m+1) F(n) + F(m) F(n-1) + F(m+1) F(n-1) + F(m) F(n-2) (by the inductive hypothesis, applied to both terms)
= F(m+1) F(n) + F(m) F(n) + F(m+1) F(n-1) (grouping the terms in F(m) and applying the definition)
= F(m+2) F(n) + F(m+1) F(n-1) (grouping the terms in F(n) and applying the definition)
which is the identity with m replaced by m+1, completing the induction.
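We can also sanity-check the identity numerically; the helper fib below is a self-contained linear-time evaluator introduced just for this check:

```sml
(* Linear-time helper: go carries the pair (F(k), F(k+1)). *)
fun fib n = let fun go 0 x _ = x | go n x y = go (n - 1) y (x + y)
            in go n 0 1 end;

(* Check F(m+n) = F(m+1) F(n) + F(m) F(n-1) for a few small m and n >= 1
   (n >= 1 is needed because the identity mentions F(n-1)). *)
val check = List.all
      (fn (m, n) => fib (m + n) = fib (m + 1) * fib n + fib m * fib (n - 1))
      [(0, 1), (1, 1), (2, 3), (5, 8), (10, 7)];
```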
Following the approach we used for computing powers, we observe the particular cases
F(2n) = F(n+1) F(n) + F(n) F(n-1) and
F(2n+1) = F(n+1)^2 + F(n)^2.
In the interests of keeping fewer variables around, we can exploit the definition (F(n-1) = F(n+1) - F(n)) to reduce the first of these cases to
F(2n) = 2 F(n+1) F(n) - F(n)^2.
We can now work with pairs (F(n), F(n+1)) to build a recursive function to evaluate the
nth Fibonacci number:
fun fiboRecur n = let fun inner 0 = (0, 1) | inner n = let val (fx, fy) = inner (n div 2) in if n mod 2 = 1 then (fx * fx + fy * fy, fy * (2 * fx + fy)) else (fx * (2 * fy - fx), fx * fx + fy * fy) end val (a, b) = inner n in a end;
So we can indeed evaluate the nth Fibonacci number using
O(lg n) arithmetic operations. However, we're also building up a stack of height
O(lg n). Can we add an accumulator as we did for the powers and make this tail-recursive?
Consider the structure of the function
pow above. In the
ith recursive call to
inner, what are the arguments (in terms of the original a and n)? They are
a^(2^i), floor(n / 2^i) and a^(n mod 2^i). So by analogy, we want to carry forward
(F(2^i), F(2^i + 1)), floor(n / 2^i) and (F(n mod 2^i), F(n mod 2^i + 1)). We'll have to break out the full generality of our identity F(m+n) = F(m+1) F(n) + F(m) F(n-1):
fun fibo n = let fun inner 0 _ _ x _ = x | inner n i j x y = let val n' = n div 2 val i' = i * (2 * j - i) val j' = i * i + j * j in if n mod 2 = 0 then inner n' i' j' x y else inner n' i' j' (j * x + i * (y - x)) (i * x + j * y) end in inner n 1 1 0 1 end;
Let's expose the structure a bit more. What we're doing is to take base cases
f(0) and f(1) and apply a composition
f(m+n) = g(f(m), f(n)) to evaluate f(n) in
O(lg n) applications of g:
fun evalRecur g f0 f1 n = let fun inner 0 _ b = b | inner n a b = inner (n div 2) (g a a) (if n mod 2 = 0 then b else (g a b)) in inner n f1 f0 end;
val powRefactored = evalRecur (fn x => fn y => x*y) 1;
fun fiboRefactored n = let fun g (i, j) (x, y) = (j*x + i*(y-x), i*x + j*y) val (a, b) = evalRecur g (0, 1) (1, 1) n in a end;
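One way to see why such a composition g must exist for the Fibonacci pairs: they are entries of powers of the matrix M = [[1, 1], [1, 0]], since M^n = [[F(n+1), F(n)], [F(n), F(n-1)]], and g is matrix multiplication in disguise. A self-contained sketch (matMul and fiboMat are our names, not definitions from the text above):

```sml
(* Represent the symmetric matrix [[a, b], [b, c]] as (a, b, c).  The
   shortcut below is valid here because products of powers of M commute
   and so stay symmetric. *)
fun matMul (a, b, c) (d, e, f) =
      (a * d + b * e, a * e + b * f, b * e + c * f);

(* Compute M^n by repeated squaring and read off F(n) from entry (1,2). *)
fun fiboMat n =
      let
        fun go 0 _ acc = acc
          | go n m acc =
              go (n div 2) (matMul m m)
                 (if n mod 2 = 1 then matMul m acc else acc)
        val (_, b, _) = go n (1, 1, 0) (1, 0, 1)  (* M and the identity *)
      in b end;
```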
Up to now we have counted arithmetic operations, ignoring their cost. However, the cost of adding two
n-bit numbers is really
Θ(n), and the cost of multiplying two
n-bit numbers is
O(n^2) (and likely to be
Θ(n^2) in most practical implementations). How does this affect the analysis?
For both powers and Fibonacci numbers, the size of the output is exponential in the size of the input, so the running cost is also exponential in the size of the input: although we've reduced the number of operations from an exponential to a polynomial, we haven't strictly made worthwhile savings from a complexity-theoretic point of view. However, the efficiency improvement is still useful in practical settings which require us to actually produce values rather than reason about asymptotic performance.
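As a practical aside, SMLNJ's default int is fixed precision and raises Overflow long before these outputs get large; actually producing the values means working with the Basis Library's arbitrary-precision IntInf.int, e.g. (powBig is our name for this variant of pow):

```sml
(* Repeated squaring over arbitrary-precision integers.  IntInf.int is
   the Basis Library's arbitrary-precision integer type; the overloaded
   arithmetic operators resolve to it thanks to the type annotations. *)
fun powBig (a : IntInf.int) (n : int) : IntInf.int =
      let
        fun inner 0 _ accum = accum
          | inner n a accum =
              inner (n div 2) (a * a)
                    (if n mod 2 = 1 then a * accum else accum)
      in inner n a 1 end;
```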
Note also that if we were to tweak the problem statements and operate in a finite field then arithmetic operations become
O(1) and we do get an obvious benefit from reducing an exponential number of operations to a polynomial number.
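As a sketch of that tweak (powMod and the choice of modulus are our additions), exponentiation modulo a fixed p reduces at every step, so every intermediate value stays below p and each arithmetic operation is O(1):

```sml
(* Modular repeated squaring: a^n mod p in O(lg n) multiplications,
   each on numbers bounded by p.  Note that * and mod share the same
   precedence in SML, so a * a mod p parses as (a * a) mod p. *)
fun powMod p a n =
      let
        fun inner 0 _ accum = accum
          | inner n a accum =
              inner (n div 2) (a * a mod p)
                    (if n mod 2 = 1 then a * accum mod p else accum)
      in inner n (a mod p) 1 end;
```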