# Outline of Class 18: Introduction to Algorithm Analysis

Held: Tuesday, February 17, 1998

## Administrivia

• Reminder: there is an exam next Wednesday.
• It will be open book, open notes, open web.
• I won't be there, as I'll be at a national conference on computer science education.
• You get Friday the 27th off to recover.
• I'll try to have a review sheet ready later this week.
• Women's hockey won the gold! I have a personal interest because I taught Gretchen Ulion, who scored the first goal. I hope to hear similarly good things from you folks in the future.

## Algorithm Analysis

• As you may have noted, there are often multiple algorithms one can use to solve the same problem.
• In searching an ordered list, one can use linear search, binary search, or "look randomly" (as well as many others). [Don't worry if you don't know any of these.]
• In finding the minimum element of a list, you can step through the list, keeping track of the current minimum. You could also sort the list and grab the first element.
• You can come up with your own variants.
• How do we choose which algorithm is the best?
• The fastest/most efficient algorithm.
• The one that uses the fewest resources.
• The clearest.
• The shortest.
• ...
• Most frequently, we look at efficiency (how long the algorithm takes to run).
• What is the best way to represent the running time of an algorithm?
• Is there an exact number we can provide? Surprisingly, no.
• Different inputs lead to different running times. For example, if there are conditionals in the algorithm (as there are in a typical minimum algorithm), different instructions will be executed depending on the input.
• Not all operations take the same time. For example, addition is typically quicker than multiplication, and integer addition is typically quicker than floating point addition.
• The same operation may take different times on different machines.
• The same operation may appear to take different times on the same machine, particularly if other things are happening on that machine.
• Many things are happening behind the scenes that we can't predict (e.g., caching).
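The two minimum-finding strategies above can be sketched concretely. This is an illustrative Python sketch (the notes don't prescribe a language, and the function names are mine, not the course's):

```python
def min_by_scan(items):
    """Step through the list, keeping track of the current minimum."""
    smallest = items[0]
    for item in items[1:]:
        if item < smallest:
            smallest = item
    return smallest

def min_by_sorting(items):
    """Sort a copy of the list and grab the first element."""
    return sorted(items)[0]

values = [42, 7, 19, 3, 88]
print(min_by_scan(values))     # 3
print(min_by_sorting(values))  # 3
```

Both return the same answer; the interesting question, taken up below, is how much work each one does.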

### Asymptotic Analysis

• Because of these problems in providing a precise analysis of the running time of programs, computer scientists developed a technique often called asymptotic analysis. In asymptotic analysis of algorithms, one describes the general behavior of an algorithm in terms of the size of its input, without delving into precise details.
• There are many issues to consider in analyzing the asymptotic behavior of a program. One particularly useful metric is an upper bound on the running time of an algorithm. We call this the "big O" of an algorithm.
• Big O is defined somewhat mathematically, as a relationship between functions.
• f(n) is O(g(n)) iff
• there exists a number n0
• there exists a number d > 0
• for all n > n0, abs(f(n)) <= abs(d*g(n))
• What does this say? It says that after a certain point (n0), f(n) is bounded above by a constant (d) times g(n).
• The constant (d) helps accommodate the variation in the algorithm.
• We don't usually identify the d precisely.
• For algorithms,
• n is the "size" of the input (e.g., the number of items in a list or vector to be manipulated).
• f(n) is the running time of the algorithm.
• Some common big-O bounds
• An algorithm that is O(1) takes constant time. That is, the running time is independent of the input. Getting the size of a vector is often an O(1) algorithm.
• An algorithm that is O(n) takes time linear in the size of the input. That is, we basically do constant work for each "element" of the input. Finding the smallest element in a list is often an O(n) algorithm.
• An algorithm that is O(log_2(n)) takes logarithmic time. While the running time depends on the size of the input, clearly not every element of the input is processed. Binary search in a sorted list is often an O(log_2(n)) algorithm.

### Eliminating Constants

• One of the nice things about asymptotic analysis is that it makes constants "unimportant" because they can be "hidden" in the d.
• If f(n) is 100*n seconds and g(n) is 0.5*n seconds, then f(n) is O(g(n)) [let d be 200] and g(n) is O(f(n)) [let d be 1].
• If f(n) is 100*n seconds and g(n) is n*n seconds, then f(n) is O(g(n)) [let n0 be 100 and d be 1; let n0 be 1 and d be 100; ...].
• However, g(n) is not O(f(n)). Why not? Suppose there were an n0 and a d. Consider what happens at n = 101*d. Then d*f(n) = d*100*101*d = 10100*d*d, but g(n) = 101*d*101*d = 10201*d*d, which is larger. The gap only grows as n grows, so even if n0 is greater than 101*d we still have the problem [proof left to the reader]: no choice of n0 and d works.
• Since constants can be eliminated, we normally don't write them.
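The claims above can be checked numerically. A small Python sketch, using the f and g from the bullets (the specific test ranges are my choice):

```python
# f(n) = 100*n and g(n) = n*n, as in the notes.
def f(n): return 100 * n
def g(n): return n * n

# f is O(g): with n0 = 100 and d = 1, f(n) <= d * g(n) for all n > n0.
assert all(f(n) <= 1 * g(n) for n in range(101, 1000))

# g is not O(f): whatever d we try, g overtakes d * f past n = 100*d.
for d in (1, 10, 1000):
    n = 101 * d   # the witness from the notes
    assert g(n) > d * f(n)

print("both claims check out")
```

Of course, a finite check is not a proof, but it makes the role of n0 and d concrete: d buys a constant-factor head start, and n0 says we only care about behavior past some point.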

### Asymptotic Analysis in Practice

• We now have a theoretical grounding for asymptotic analysis. How do we do it in practice?
• At this point in your career, it's often best to "count" the steps in an algorithm and then add them up. After you've taken combinatorics, you can use recurrence relations.
• We'll work on a few sample algorithms, including
• Finding the smallest/largest element in a Vector of comparable elements.
• Finding the average of all the elements in an array of integers.
• Removing an element from a Vector.
• Putting the largest element in a Vector at the end of the vector.
• Putting the largest element in a Vector at the end of the vector if we're only allowed to swap subsequent elements.
• Computing the nth Fibonacci number.
• ...
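As a preview of the step-counting approach, here is the first sample algorithm (finding the smallest element) instrumented to report how many comparisons it makes. This is a Python sketch; the function name `count_min_steps` is mine:

```python
def count_min_steps(items):
    """Find the smallest element; return (minimum, comparisons made)."""
    smallest = items[0]
    comparisons = 0
    for item in items[1:]:
        comparisons += 1      # one comparison per remaining element
        if item < smallest:
            smallest = item
    return smallest, comparisons

for n in (10, 100, 1000):
    _, steps = count_min_steps(list(range(n, 0, -1)))
    print(n, steps)           # always n - 1 comparisons: O(n)
```

The count is always n - 1, regardless of how the input is arranged, which is exactly the kind of "constant work per element" reasoning that justifies calling the algorithm O(n).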


Disclaimer Often, these pages were created "on the fly" with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.

Source text last modified Tue Jan 12 11:52:21 1999.

This page generated on Mon Jan 25 09:49:08 1999 by SiteWeaver.

Contact our webmaster at rebelsky@math.grin.edu