
Back to Recursion. On to Project Discussion.

**Held** Friday, October 1, 1999

**Overview**

**Notes**

- Don't forget Professor Salter's talk at noon.
- Mr. Walker will be visiting our class next week as part of my 3rd-year review.

**Contents**

**Summary**

- Analyzing functions
- Three versions of exponentiation
- Two versions of Fibonacci
- Recursive functions and recurrence relations

- We'll begin with an analysis of the running times of the various forms of Fibonacci and exponentiation.
- Although our algorithms will admit negative exponents, our analyses will assume positive exponents.

- There are a number of techniques one can use for computing x^n, where n is an integer.
  - Complicated mathematics
  - Multiply x*x*...*x n times
  - Divide and conquer

- As we observed, there are two techniques for doing the repeated multiplication: we can use iteration or recursion.
- Let's consider each function.

```java
/**
 * An illustration of different mechanisms for computing x^n.
 *
 * @author Samuel A. Rebelsky
 * @version 1.0 of October 1999
 */
public class Exponentiation {
  // +---------+-------------------------------------------------
  // | Methods |
  // +---------+

  /**
   * Compute x^n by multiplying x*x*...*x n times. Uses
   * iteration as the mechanism for repetition.
   */
  public static double expA(double x, int n) {
    // Handle negative exponents. x^(-n) = 1/(x^n)
    if (n < 0) {
      return 1 / expA(x, -n);
    }
    // Prepare to compute the product.
    double product = 1;
    // Multiply by each x. (Note that 1*x*x*...*x = x*x*...*x.)
    for (int i = 0; i < n; ++i) {
      product = product * x;
    } // for
    // That's it, we're done.
    return product;
  } // expA(double,int)

  /**
   * Compute x^n by multiplying x*x*...*x n times. Uses
   * recursion as the mechanism for repetition.
   */
  public static double expB(double x, int n) {
    // Handle negative exponents. x^(-n) = 1/(x^n)
    if (n < 0) {
      return 1 / expB(x, -n);
    } // if (n < 0)
    // Base case: x^0 is 1.
    if (n == 0) {
      return 1;
    } // if (n == 0)
    // Recursive case: x*x*...*x = x*(x*...*x)
    // That is: x^n = x*(x^(n-1))
    else {
      return x * expB(x, n - 1);
    } // Recursive case, if (n > 0)
  } // expB(double,int)

  /**
   * Compute x^n by a recursive divide-and-conquer algorithm.
   */
  public static double expC(double x, int n) {
    // Handle negative exponents. x^(-n) = 1/(x^n)
    if (n < 0) {
      return 1 / expC(x, -n);
    } // if (n < 0)
    // Base case: x^0 is 1.
    if (n == 0) {
      return 1;
    } // Base case: n == 0
    // Recursive case (when n is odd)
    // x*x*...*x = x*(x*...*x)
    // That is: x^n = x*(x^(n-1))
    else if (n % 2 == 1) {
      return x * expC(x, n - 1);
    } // Recursive case (when n is odd)
    // Recursive case (when n is even)
    // Let n be 2k
    // x^n = x^(2k) = x^k * x^k
    else {
      int k = n / 2;
      double tmp = expC(x, k);
      return tmp * tmp;
    } // Recursive case (when n is even)
  } // expC(double,int)
} // class Exponentiation
```

- The first version, `expA`, is perhaps the easiest to analyze.
  - There is some preparatory work. That takes a constant number of steps. We'll call that a_1.
  - There is a loop that is repeated n times.
    - Each time through the loop, a constant number of steps are done. We'll call that a_2.
  - After the loop, a few more steps are done (okay, just a return). We'll call that a_3.
  - What's the approximate running time?
    - a_1 + a_2*n + a_3
  - Using Big-O notation, we can throw away the constants and get O(n).
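To see the a_2*n term concretely, here is a small sketch (the counter and the class name `CountExpA` are mine, not part of the notes) that instruments the iterative version and confirms the loop body runs exactly n times:

```java
// A sketch (not from the original notes): count the multiplications
// that the iterative expA performs, to confirm the a_2*n term.
public class CountExpA {
  /** Number of multiplications in the most recent top-level call. */
  public static int mults = 0;

  /** The iterative expA from the notes, instrumented with a counter. */
  public static double expA(double x, int n) {
    // Handle negative exponents. x^(-n) = 1/(x^n)
    if (n < 0) {
      return 1 / expA(x, -n);
    }
    mults = 0;
    double product = 1;
    for (int i = 0; i < n; ++i) {
      product = product * x;
      ++mults; // one multiplication per trip through the loop
    } // for
    return product;
  } // expA(double,int)

  public static void main(String[] args) {
    expA(2.0, 10);
    System.out.println(mults); // prints 10: the loop runs exactly n times
  }
} // class CountExpA
```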

- The second version, `expB`, is somewhat more difficult to analyze because of the recursion.
  - One technique that is typically fruitful (although somewhat mathematical) is to define a function, f_b(n), that represents the number of steps the algorithm takes on input of size n.
  - What can we say about this function?
    - We know that when n is 0, the function takes a constant number of steps.
      - f_b(0) = b_1
    - When n is greater than 0, the function takes some constant number of steps before and after the recursive call *plus* the time for the recursive call.
      - We'll call the constant extra steps b_2.
    - How do we determine the number of steps for the recursive call (`expB(x,n-1)`)? Hmmm ... Wait! f_b is supposed to tell us that!
      - f_b(n) = b_2 + f_b(n-1)
  - We've defined the running time recursively, too. That doesn't help us much if we want to do some Big-O analysis. So let's figure out what f_b really is.
  - Let's start by repeatedly applying our recursive rule ...
    - f_b(n) = b_2 + f_b(n-1)
    - = b_2 + b_2 + f_b(n-2)
    - = b_2 + b_2 + b_2 + f_b(n-3)
  - We might then generalize:
    - f_b(n) = k*b_2 + f_b(n-k)
  - Can we ever get rid of the recursive term? Yes! When k = n, n-k is 0, and we know the value of f_b(0).
    - f_b(n) = n*b_2 + b_1
  - Hey, that's just O(n).

- The third version, `expC`, is perhaps even more difficult to analyze.
  - We'll use the same technique of "define a function whose value is the number of steps the algorithm takes". This time, we'll call the function f_c.
  - This time, life is a little harder because we have two different recursive cases (odd and even).
    - Fortunately, the odd case transforms into the even case. That is, if n is odd, then on the next recursive call it will be even.
    - In effect, by choosing a bigger value for the "constant" work, we can pretend that n is divided by two in each recursive call.
  - So
    - f_c(0) = c_1
    - f_c(1) = c_2
    - f_c(n) = c_3 + f_c(n/2)
  - What function is this? Again, we will try some analysis by hand to figure it out.
  - A few applications of the recursive rule:
    - f_c(n) = c_3 + f_c(n/2)
    - = c_3 + c_3 + f_c(n/4)
    - = c_3 + c_3 + c_3 + f_c(n/8)
  - Can we generalize? Certainly.
    - f_c(n) = k*c_3 + f_c(n/2^k)
  - When can we get rid of the recursive term? When n/2^k is 1.
    - That is, k is "the power to which you must raise 2 in order to get n".
    - Fortunately, there's a name for that value. We call it log_2(n).
  - Hence,
    - f_c(n) = log_2(n)*c_3 + c_2
  - That is, this algorithm is an O(log_2(n)) algorithm.
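The logarithmic call count can be observed directly. In this sketch (the counter and class name `CountExpC` are mine), each call to the divide-and-conquer version increments a counter; for n = 100, log_2(100) is about 6.6, and the counter stays well under 2*log_2(n) + 2:

```java
// A sketch (instrumentation is mine, not from the notes): count the
// recursive calls expC makes and compare them with log_2(n).
public class CountExpC {
  /** Total number of calls to expC since the counter was reset. */
  public static int calls = 0;

  /** The divide-and-conquer expC from the notes, with a call counter. */
  public static double expC(double x, int n) {
    ++calls;
    if (n < 0) {
      return 1 / expC(x, -n);
    } // if (n < 0)
    if (n == 0) {
      return 1;
    } // base case
    if (n % 2 == 1) {
      return x * expC(x, n - 1); // odd case: becomes even next time
    } // odd case
    int k = n / 2;
    double tmp = expC(x, k); // even case: n is halved
    return tmp * tmp;
  } // expC(double,int)

  public static void main(String[] args) {
    calls = 0;
    expC(2.0, 100);
    // Call chain: 100, 50, 25, 24, 12, 6, 3, 2, 1, 0
    System.out.println(calls); // prints 10
  }
} // class CountExpC
```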

- We can compute Fibonacci numbers
  - iteratively, starting at the bottom and working our way up
  - recursively, using the rule that fib(n) = fib(n-1) + fib(n-2)

- The iterative Fibonacci calculation is fairly easy to analyze. Since we step through n different Fibonacci numbers and do a constant amount of calculation for each, that algorithm is O(n).
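The iterative version might look like the following in Java (a sketch in the style of the Exponentiation class; the class and method names are mine, and I assume the convention fib(0) = 0, fib(1) = 1):

```java
// A sketch (not from the original notes) of the iterative Fibonacci.
public class Fib {
  /**
   * Compute the nth Fibonacci number (fib(0) = 0, fib(1) = 1) by
   * starting at the bottom and working up.  One loop of about n
   * steps with constant work per step, so O(n).
   */
  public static long fibIterative(int n) {
    if (n == 0) {
      return 0;
    } // if (n == 0)
    long prev = 0;    // fib(i-1), initially fib(0)
    long current = 1; // fib(i), initially fib(1)
    for (int i = 2; i <= n; ++i) {
      long next = prev + current; // fib(i) = fib(i-1) + fib(i-2)
      prev = current;
      current = next;
    } // for
    return current;
  } // fibIterative(int)

  public static void main(String[] args) {
    System.out.println(fibIterative(10)); // prints 55
  }
} // class Fib
```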
- The recursive version is much harder. Again, we'll use a function to represent the number of steps the algorithm takes on input of size n. We'll just call this function f.
  - Two base cases:
    - f(0) = d_1
    - f(1) = d_2
  - One recursive case:
    - f(n) = d_3 + f(n-1) + f(n-2)
  - Hmmm ... this is surprisingly similar to the function that defines Fibonacci.
    - Thus, the number of steps to compute the nth Fibonacci number is more than the value of the nth Fibonacci number.
  - This function grows fairly quickly, but how quickly?
    - We'll ignore the d_3 to make our life easier. (No, you can't always do so, but it seems safe in this case.)
  - We'll take advantage of the observation that f(n-2) <= f(n-1).
    - While the observation is fairly obvious, the decision to use it is not. You should not expect to have come up with it on your own.

  - So, let's try applying that observation:
    - f(n) = f(n-1) + f(n-2)
    - f(n) <= f(n-1) + f(n-1)
    - f(n) <= 2*f(n-1)
  - Okay, now we can try applying that rule a few times.
    - f(n) <= 2*f(n-1)
    - f(n) <= 2*2*f(n-2)
    - f(n) <= 2*2*2*f(n-3)
  - Can we generalize? Sure.
    - f(n) <= 2^k * f(n-k)
  - Does this lead to a natural answer? Let k = n.
    - f(n) <= 2^n * d_1
  - f(n) is in O(2^n).
    - This may be painfully slow (as you've seen).
  - But this is an upper bound, and it may be much too large. Let's try a lower bound.
  - Again, we'll use the relationship between f(n-1) and f(n-2), but in the opposite direction.
    - f(n) = f(n-1) + f(n-2)
    - f(n) >= f(n-2) + f(n-2)
    - f(n) >= 2*f(n-2)
  - We can now apply that rule a few times:
    - f(n) >= 2*f(n-2)
    - f(n) >= 2*2*f(n-4)
    - f(n) >= 2*2*2*f(n-6)
  - Generalizing,
    - f(n) >= 2^k * f(n-2k)
  - When k = n/2, we're done:
    - f(n) >= 2^(n/2) * d_1
  - This is a *lower bound*, rather than an upper bound. Hence, we cannot use Big-O, which expresses upper bounds. The standard notation for lower bounds is Big-Omega.
    - f(n) is in Omega(2^(n/2)).
  - This is still painfully slow.
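Both bounds can be checked empirically. This sketch (the counter and class name `CountFib` are mine) counts the calls made by the naive recursive Fibonacci; for n = 20 the count lands between 2^(n/2) = 1024 and 2^n = 1048576, as the analysis predicts:

```java
// A sketch (not from the original notes): count the calls made by the
// naive recursive Fibonacci and check them against the derived bounds
// 2^(n/2) <= f(n) <= 2^n.
public class CountFib {
  /** Total number of calls to fib since the counter was reset. */
  public static long calls = 0;

  /** Naive recursive Fibonacci: fib(n) = fib(n-1) + fib(n-2). */
  public static long fib(int n) {
    ++calls;
    if (n < 2) {
      return n; // fib(0) = 0, fib(1) = 1
    } // base cases
    return fib(n - 1) + fib(n - 2);
  } // fib(int)

  public static void main(String[] args) {
    int n = 20;
    calls = 0;
    long value = fib(n);
    System.out.println(value + " " + calls); // prints "6765 21891"
    // The call count sits between the lower and upper bounds.
    System.out.println(
        Math.pow(2, n / 2.0) <= calls && calls <= Math.pow(2, n)); // prints "true"
  }
} // class CountFib
```

Note that the call count (21891) does indeed exceed fib(20) = 6765, matching the earlier observation that the number of steps is more than the value being computed.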

Tuesday, 10 August 1999

- Created as a blank outline.

Friday, 1 October 1999

- Filled in some details.
- Added more details after class.



**Disclaimer** Often, these pages were created "on the fly" with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.

This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS152/99F/Outlines/outline.22.html

Source text last modified Fri Oct 1 16:01:04 1999.

This page generated on Fri Oct 1 16:01:39 1999 by Siteweaver.

Contact our webmaster at rebelsky@grinnell.edu