
Detailed schedule (Fall 2022)

Week 1: Introduction and warmup (bubble sort, insertion sort, selection sort). Asymptotic analysis basics.

Aug 31, Sep 1

We start by reviewing two fundamental problems—searching and sorting—and going over a couple of simple algorithms. We also review the basics of algorithm analysis using big-Oh notation, as well as best-case and worst-case behavior. You are probably familiar with most of this content from Data Structures (if not, you’ll need to budget more time this first week).
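
If you want a quick refresher before the first class, here is a minimal Python sketch of binary search and insertion sort (illustrative only; not taken from the course notes, and the names are just for this example):

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if it is absent.
    Halves the search range each step: O(lg n) comparisons."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


def insertion_sort(a):
    """Sort list a in place: O(n^2) worst case, O(n) on an already sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
```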

Objectives:

  • Understand searching (linear search, binary search) and simple sorting (bubble sort, selection sort, insertion sort), and be able to compare, analyze, and apply them to various inputs
  • Understand the basics of algorithm analysis, big-Oh notation, best-case and worst-case analysis

Resources:


Week 2: Asymptotic Notation and Summations

Sep 5-9

Joke: An infinite number of computer scientists walk into a bar. The first one orders a beer. The second one, half a beer. The third one, a quarter. The barman pours two beers. The computer scientists complain: “Is that all you’re giving us?” The barman says: “Come on guys, you should know your limits!”

The next two weeks introduce the tools for algorithm analysis. Asymptotic analysis is expressed with O(), Ω() and Θ() notation. We give a formal definition of each one, discuss the differences between them, and explain why big-Oh alone is not sufficient. We also introduce the two summations that come up most often: arithmetic and geometric summations.

This part of the class is heavy on discrete math (logarithms, exponents, limits, and recursive functions), which most of you have not seen in a long time. The bright side is that there are formulas and rules for everything, and it's just a matter of practice, so budget time for it.
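
For quick reference, here are the closed forms for the two summations (stated here for review; see the notes for derivations):

  • Arithmetic: 1 + 2 + … + n = n(n+1)/2 = Θ(n²)
  • Geometric with ratio c > 1: 1 + c + c² + … + c^n = (c^(n+1) − 1)/(c − 1) = Θ(c^n)
  • Geometric with ratio 0 < c < 1: 1 + c + c² + … + c^n < 1/(1 − c) = Θ(1)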

Objectives:

  • Understand the relevance of analysis in practice, as well as its assumptions and limitations
  • Understand the definitions of O(), Ω(), Θ()
  • Understand the rate of growth of common functions
  • Find the rate of growth of a function
  • Compare the order of growth of two arbitrary functions f(n), g(n)
  • Analyze basic algorithm running times using O(), Ω(), Θ()
  • Understand arithmetic and geometric summations and their Θ() bound
  • Recognize arithmetic and geometric summations in different forms and give Θ() bounds

Resources:


Week 3: Mergesort and Recurrences

Sep 12-16

We continue with analysis and introduce the “recurrence” to express the running time of recursive algorithms. To motivate the first recurrence we introduce a new sorting algorithm called Mergesort. Mergesort is the first algorithm we see in this class that beats the quadratic bound.
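
As a rough illustration (the lecture notes may use different pseudocode conventions), here is a minimal Python sketch of Mergesort:

```python
def merge_sort(a):
    """Sort list a by divide-and-conquer; T(n) = 2T(n/2) + Theta(n) = Theta(n lg n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # sort each half recursively
    right = merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```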

Objectives:

  • Understand Mergesort: how it works, why it works, and its running time analysis
  • Understand how to express the running time of recursive algorithms using recurrences
  • Solve recurrences by repeated iteration
  • Recognize broad classes of recurrences (logarithmic, linear, Θ(n lg n), exponential)

Resources:


Week 4: Heapsort and Quicksort

Sep 19-23

So far we have discussed the tools necessary for analyzing algorithms (asymptotic notation, summations, and recurrences) and we have seen a couple of sorting algorithms at work. This week we introduce new sorting algorithms: Heapsort, Quicksort, and its randomized version, Randomized-Quicksort. Randomized-Quicksort is considered the most efficient general-purpose sort in practice.
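
A rough Python sketch of Randomized-Quicksort (illustrative only; it uses a Lomuto-style partition, which may differ from the partition scheme in the notes):

```python
import random

def randomized_quicksort(a, lo=0, hi=None):
    """Sort list a in place; random pivots give expected Theta(n lg n) time."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Pick a random pivot, move it to the end, then partition around it.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot, store = a[hi], lo
    for i in range(lo, hi):
        if a[i] <= pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[hi] = a[hi], a[store]   # pivot lands in its final position
    randomized_quicksort(a, lo, store - 1)
    randomized_quicksort(a, store + 1, hi)
```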

Objectives:

  • Understand how the binary heap is defined and the operations supported (deleteMin, insert, heapify, buildheap) along with their analysis
  • Understand how Heapsort works in place
  • Be able to use the heap as a tool to solve new problems
  • Understand Quicksort and Randomized-Quicksort, and their analysis

Resources:


Week 5: Sorting lower bound. Faster sorting. Selection.

Sep 26-30

We have seen the most important sorting algorithms, and all of them have worst-case running times of at least Ω(n lg n). The natural question is: can a sorting algorithm do better than Θ(n lg n) in the worst case? We introduce the concept of a lower bound and show that sorting in the comparison model of computation requires Ω(n lg n) comparisons. We then describe a couple of different ways to sort (bucket sort and counting sort) that do not use the comparison model and, under certain assumptions, run in linear time.
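
As a rough illustration of sorting outside the comparison model, here is a minimal counting sort sketch in Python (it assumes integer keys in a known range 0..k; names are illustrative):

```python
def counting_sort(a, k):
    """Stably sort a list of integers in the range 0..k in Theta(n + k) time."""
    count = [0] * (k + 1)
    for x in a:
        count[x] += 1
    # Prefix sums: count[v] becomes the number of elements <= v.
    for v in range(1, k + 1):
        count[v] += count[v - 1]
    out = [0] * len(a)
    for x in reversed(a):        # scanning from the right keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out
```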

The second topic this week is the selection problem: Given a set S of n elements, {x_1, x_2, …, x_n} and an integer k (1 ≤ k ≤ n), find the kth smallest element in S. We describe several ideas for solving this problem, culminating with an elegant and ingenious algorithm that runs in O(n) time in the worst case.
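
A minimal Python sketch of randomized quick-select (expected O(n); the deterministic worst-case O(n) algorithm, "smart-select", is more involved and is covered in lecture):

```python
import random

def quick_select(a, k):
    """Return the k-th smallest element of list a (1-indexed), expected O(n) time."""
    assert 1 <= k <= len(a)
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k <= len(less):                       # answer is among the smaller elements
        return quick_select(less, k)
    if k <= len(less) + len(equal):          # answer is the pivot itself
        return pivot
    return quick_select(greater, k - len(less) - len(equal))
```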

Objectives:

  • Understand the comparison-based sorting lower bound, when it applies and what assumptions it makes
  • Understand BucketSort and CountingSort, their analysis and assumptions
  • Understand the selection problem and the algorithms for it (quick-select and smart-select)

Resources:

Week 6: Problems and FLEX time

Oct 3-7

At this point in the class (1) you have the tools to analyze algorithms and start to appreciate the interplay between analysis and design (analyzing an idea often suggests how to improve it) and (2) you have seen some fundamental algorithms and building blocks—sorting, priority queues and selection. This week we’ll work on new problems and start talking about techniques.

Objectives: This week’s objective is algorithmic problem solving.

  • You work on new problems that require using the algorithms learnt so far in new ways
  • You understand that algorithmic problem solving is both a science and an art
  • You love this class

Resources:

  • Lecture notes: (no new topics this week)
  • Lab: Lab6
  • Exam1 in class
  • Assignment: none this week in view of Fall break

Fall break!


Week 7: Divide-and-conquer

Oct 12-14

What do you do when you want to solve a problem and you don’t know where to start? Coming up with new algorithms is both an art and a science, and although there are no recipes, there are some techniques that come up frequently. We’ll spend the next three weeks looking at more problems, grouped by the technique used to solve them.

The first technique is divide-and-conquer, which solves a problem by dividing the problem into smaller subproblems and solving them recursively. You already saw it at work in Mergesort. This week you’ll see more examples of problems that can be solved via divide-and-conquer, including Karatsuba’s integer multiplication and Strassen’s matrix multiplication algorithms.
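
A minimal Python sketch of Karatsuba's idea (illustrative only; it splits on bits rather than decimal digits):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y using three recursive products
    instead of four: T(n) = 3T(n/2) + Theta(n) = Theta(n^(lg 3)) ~ Theta(n^1.585)."""
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)    # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)    # y = hi_y * 2^m + lo_y
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b   # = hi_x*lo_y + lo_x*hi_y
    return (a << (2 * m)) + (c << m) + b
```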

Objectives:

  • Understand how D&C works in general
  • Understand the D&C algorithms for integer multiplication and matrix multiplication (Karatsuba and Strassen)
  • Apply D&C to new problems

Resources:


Week 8: Dynamic Programming

Oct 17-21

We introduce dynamic programming, a technique for optimization problems (problems where we have to find the best way to do something) that have optimal substructure: an optimal solution to a problem contains within it optimal solutions to sub-problems.
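
As a rough illustration of optimal substructure (not necessarily one of the examples from the lecture notes), here is a bottom-up Python sketch for the classic rod-cutting problem:

```python
def rod_cutting(prices, n):
    """Maximum revenue from cutting a rod of length n, where prices[i] is the
    price of a piece of length i (prices[0] is unused).
    Optimal substructure: best(L) = max over first cut i of prices[i] + best(L - i)."""
    best = [0] * (n + 1)
    for length in range(1, n + 1):
        best[length] = max(prices[i] + best[length - i]
                           for i in range(1, min(length, len(prices) - 1) + 1))
    return best[n]

# Example: rod_cutting([0, 1, 5, 8, 9], 4) == 10  (cut into two pieces of length 2)
```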

Objectives:

  • Understand how dynamic programming works
  • Understand the examples discussed in the lecture notes (including justification and analysis)
  • Apply DP to new problems

Resources:


Week 9: More DP examples and the Greedy technique

Oct 24-28

We add more dynamic programming examples, and we introduce the greedy technique via the activity selection problem.
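
A minimal Python sketch of the greedy solution to activity selection (illustrative only; it assumes activities are given as (start, finish) pairs):

```python
def activity_selection(activities):
    """Select a maximum-size set of pairwise-compatible activities.
    Greedy choice: always take the compatible activity that finishes earliest."""
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):  # by finish time
        if start >= last_finish:          # compatible with everything selected so far
            selected.append((start, finish))
            last_finish = finish
    return selected
```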

Objectives:

  • Understand the DP solution for the knapsack problem
  • Understand how the greedy technique works in general and contrast it with DP
  • Understand the greedy solution for the example in the lecture (activity selection), including the correctness justification

Resources:


Week 10: More DP and FLEX time

Oct 31-Nov 4

We wrap up the module on algorithmic techniques (divide and conquer, dynamic programming, and greedy) by seeing more examples.

Objectives:

  • Apply the greedy technique to new problems
  • Reflect on the problems you’ve seen so far and how the general technique was instantiated for each problem.

Resources:


Week 11: Graphs: Basics, BFS and DFS

Nov 7-11

Once you learn about graphs, you start to see their applications everywhere. This week we start with basic terminology and the traversals, breadth-first and depth-first. These simple algorithms are the stepping stones to many other problems.
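
A minimal Python sketch of BFS on an adjacency-list graph (illustrative only):

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first search from source. adj: dict mapping each vertex to a list
    of its neighbors. Returns a dict of shortest hop counts. Runs in O(V + E)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:            # first time we reach v = fewest hops
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```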

Objectives:

  • Compare and contrast the adjacency list and adjacency matrix representation of a graph
  • Compare and contrast different types of graphs: sparse, complete, dense, trees
  • Understand basic graph problems: path, connectivity, connected components, reachability
  • Understand breadth-first and depth-first traversals, their complexity, and their properties

Resources:


Week 12: Applications of BFS and DFS. Topological order. Shortest paths on DAGs.

Nov 14-18

We introduce the problem of computing a topological order of a directed acyclic graph (DAG). Then we look at how topological order can be used to solve various other problems on DAGs, including a simple algorithm for computing shortest paths on a DAG.
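
A rough Python sketch of the DFS-based algorithm for topological order, and of shortest paths on a DAG by relaxing edges in that order (illustrative only; it assumes every vertex appears as a key in the adjacency dict):

```python
def topological_order(adj):
    """Topological order of a DAG. adj: dict mapping each vertex to its list of
    out-neighbors. DFS-based, O(V + E)."""
    order, visited = [], set()

    def dfs(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                dfs(v)
        order.append(u)              # postorder: u finishes after everything it points to

    for u in adj:
        if u not in visited:
            dfs(u)
    return list(reversed(order))     # reverse postorder is a topological order


def dag_shortest_paths(adj, weights, source):
    """Single-source shortest paths on a weighted DAG: relax edges in topological order.
    weights: dict mapping (u, v) to the weight of edge u -> v. Runs in O(V + E)."""
    order = topological_order(adj)
    dist = {u: float('inf') for u in order}
    dist[source] = 0
    for u in order:
        for v in adj[u]:
            if dist[u] + weights[(u, v)] < dist[v]:
                dist[v] = dist[u] + weights[(u, v)]
    return dist
```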

Objectives:

  • Understand the concept of topological order and the two algorithms for computing a topological order
  • Use topological order to compute shortest paths on DAGs

Resources:


Weeks 13, 14: Shortest paths.

Nov 21-Dec 2

We discuss shortest paths on graphs and see some of the nicest algorithms in computer science: Dijkstra’s algorithm and Bellman-Ford’s algorithm. While describing them we try to understand some common principles that guided their design. We’ll see that Bellman-Ford’s algorithm uses dynamic programming and Dijkstra’s is a greedy algorithm, making these nice applications of the techniques we studied earlier in the semester.
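
A minimal Python sketch of Dijkstra's algorithm with a binary heap (illustrative only; it assumes nonnegative edge weights):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with nonnegative edge weights.
    adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Greedy: repeatedly settle the vertex with the smallest tentative distance."""
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:                 # skip stale heap entries
            continue
        settled.add(u)
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```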

Objectives:

  • Understand the algorithms for computing shortest paths explained in the notes: how they work, why they work, and their complexity

Resources:


Week 15: Minimum spanning tree.

Dec 5-9

We introduce another fundamental problem on graphs, the Minimum Spanning Tree (MST). We’ll see a couple of properties of MSTs which will give us intuition for how to compute an MST efficiently. We’ll glance at two well-known algorithms, Prim’s and Kruskal’s, which are both greedy algorithms much in the spirit of Dijkstra’s. Their correctness follows from a neat result called the Cut Theorem.
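
A rough Python sketch of Kruskal's algorithm with a simple union-find (illustrative only; vertices are assumed to be numbered 0..n-1):

```python
def kruskal(n, edges):
    """Minimum spanning tree of a connected graph on vertices 0..n-1.
    edges: list of (weight, u, v). Greedy: scan edges by increasing weight and
    keep an edge iff it joins two different components (justified by the Cut Theorem)."""
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge crosses a cut between two components
            parent[ru] = rv
            mst.append((u, v, w))
    return mst
```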

This last week we’ll also do a quick review and work on some extra fun problems!

Objectives:

  • Understand the concept of an MST (minimum spanning tree) of a graph, and be able to answer basic questions about its properties
  • Know the general idea of Kruskal’s and Prim’s algorithms, and the Cut Theorem which captures their correctness

Resources:


Final exam: According to Polaris, we have two exam slots: 12/15 8:30-11:30am, and 12/19, 8:30-11:30am. I have booked large rooms for both days (exact location will be announced on Slack). Choose the slot that works best for you and just show up.