# ClassS04CS141/Notes 04 30


### Dynamic Programming

Read sections 5.3.3 (0/1 knapsack) and 6.4.2 (transitive closure).

#### Example: Fibonacci numbers

The Fibonacci numbers are defined by the following recurrence:

F(0) = 0, F(1) = 1, F(N) = F(N-1) + F(N-2) for N > 1.

Give a recursive algorithm to compute this.

```
int f(int n) {
    if (n <= 1) return n;
    return f(n-1) + f(n-2);
}
```

The running time of this algorithm is at least 2^(n/2). (Prove this using a recursion tree.)
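This blowup can be observed directly by instrumenting the naive version with a call counter (a quick sketch; `fib_naive` and `fib_calls` are illustrative names, not part of the notes):

```cpp
#include <cassert>

long long fib_calls = 0;  // counts nodes in the recursion tree

// Naive recursive Fibonacci, identical to the version above
// except that it increments the counter on every call.
long long fib_naive(int n) {
    ++fib_calls;
    if (n <= 1) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}
```

For n = 20, the counter exceeds 2^10 = 1024 calls, consistent with the 2^(n/2) lower bound.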

If instead we use caching or bottom-up dynamic programming, we reduce the running time to O(n).

Caching:

```
int f(int n) {
    static std::unordered_map<int,int> cache;  // persists across calls
    if (n <= 1) return n;
    if (!cache.count(n)) cache[n] = f(n-1) + f(n-2);
    return cache[n];
}
```

Bottom up dynamic programming:

```
int f(int n) {
    if (n <= 1) return n;
    std::vector<int> cache(n+1);
    cache[0] = 0;
    cache[1] = 1;
    for (int i = 2;  i <= n;  ++i)
        cache[i] = cache[i-1] + cache[i-2];
    return cache[n];
}
```

#### Example: choosing k objects out of n: n choose k

Define C(n,k) to be the number of different size-k subsets of {1,2,...,n}. C(n,k) is read as "n choose k"; it counts the number of ways of choosing k items from a set of n items. Here we assume n and k are non-negative integers.

What can we say about C(n,k)?

If k = 0, then C(n,k) = 1. There is one size-0 subset of any set -- the empty set.

If k > n, then C(n,k) = 0. There is no way to choose a subset larger than the original set.

These are the boundary cases. What about the remaining cases, where 0 < k <= n?

Consider all the size-k subsets of {1,2,...,n}.

Classify them into two groups according to whether or not they contain the element n.

claim: The first group (those not containing n) has size C(n-1, k).

This is because the sets in this group are exactly the size-k subsets of {1,2,...,n-1}.

claim: The second group (those containing n) has size C(n-1,k-1).

This is because the sets in this group correspond exactly to the size-(k-1) subsets of {1,2,...,n-1}. To see this, consider listing the size-(k-1) subsets of {1,2,...,n-1}, and then changing each set by adding n. This gives you exactly the second group of sets.

These two claims together imply that C(n,k) satisfies the recurrence C(n,k) = C(n-1,k) + C(n-1,k-1).
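As a sanity check, the recurrence can be verified numerically against the direct product formula C(n,k) = n!/(k!(n-k)!) (a sketch; `binom_formula` is an illustrative name, and the incremental product avoids computing full factorials):

```cpp
#include <cassert>

// Direct formula: C(n,k) = n!/(k!(n-k)!), computed as a running product.
// After the i-th step, r = C(n-k+i, i), so the division is always exact.
unsigned long long binom_formula(unsigned int n, unsigned int k) {
    if (k > n) return 0;
    unsigned long long r = 1;
    for (unsigned int i = 1; i <= k; ++i)
        r = r * (n - k + i) / i;
    return r;
}
```

For example, C(5,2) = 10 = C(4,2) + C(4,1) = 6 + 4, matching the recurrence.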

#### Computing C(n,k)

Consider the following algorithm for computing C(n,k):

```
unsigned int C(unsigned int n, unsigned int k) {
    if (k == 0) return 1;
    if (k > n) return 0;
    return C(n-1,k) + C(n-1,k-1);
}
```

What is the running time for this algorithm? Consider the recursion tree. Do some examples.

The running time T(n,k) satisfies the recurrence T(n,k) = 1 + T(n-1,k) + T(n-1,k-1). Some consideration of this reveals that T(2k,k) ≥ 2^k. So the running time is exponentially large.

What if instead we cache the answers so we don't recompute them if we already have computed them once?

```
unsigned int C(unsigned int n, unsigned int k) {
    static std::map<std::pair<unsigned int, unsigned int>, unsigned int> cache;
    auto key = std::make_pair(n, k);
    if (!cache.count(key)) {
        if (k == 0)      cache[key] = 1;
        else if (k > n)  cache[key] = 0;
        else             cache[key] = C(n-1,k) + C(n-1,k-1);
    }
    return cache[key];
}
```

Now what's the running time? Drawing the recursion "tree", we see that it forms a grid, with a subproblem for each (n',k') pair where 0 ≤ k' ≤ k and 0 ≤ n' ≤ n.

Thus, the number of calls to C() where the answer is not cached is O(n k). Since these calls are the only ones that result in recursive calls, and each results in at most 2 calls, the total number of calls (cached or otherwise) is also O(n k). Since each call (not counting time spent in recursion) takes O(1) time, the total time to compute C(n,k) is O(nk).
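The O(nk) bound on uncached calls can be checked empirically by counting cache misses (a sketch; `C_memo` and `uncached_calls` are illustrative names):

```cpp
#include <cassert>
#include <map>
#include <utility>

long long uncached_calls = 0;  // counts subproblems solved for the first time

// Memoized n-choose-k, as above, instrumented to count cache misses.
unsigned long long C_memo(unsigned int n, unsigned int k) {
    static std::map<std::pair<unsigned int, unsigned int>, unsigned long long> cache;
    auto key = std::make_pair(n, k);
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;
    ++uncached_calls;  // first time this (n,k) subproblem is solved
    unsigned long long v;
    if (k == 0)      v = 1;
    else if (k > n)  v = 0;
    else             v = C_memo(n-1, k) + C_memo(n-1, k-1);
    return cache[key] = v;
}
```

Computing C_memo(30,15) solves at most 31 × 16 = 496 distinct subproblems, far fewer than the exponentially many calls of the uncached version.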

Note that we could also just fill out the cache "bottom up":

```
unsigned int C(unsigned int n, unsigned int k) {
    if (k == 0) return 1;
    if (k > n) return 0;

    std::vector<std::vector<unsigned int>> cache(n+1, std::vector<unsigned int>(k+1, 0));

    for (unsigned int N = 0;  N <= n;  ++N)
        cache[N][0] = 1;                       // C(N,0) = 1

    for (unsigned int N = 1;  N <= n;  ++N)
        for (unsigned int K = 1;  K <= k && K <= N;  ++K)
            cache[N][K] = cache[N-1][K-1] + cache[N-1][K];

    return cache[n][k];
}
```

#### Transitive Closure

(Read section 6.4.2 of the text.)

Given a digraph G = (V,E), we want to compute a matrix M such that M[i,j] is true if and only if there is a path from i to j.

One way is to iterate over all vertices and do a depth-first or breadth-first search starting from each vertex to see what vertices are reachable from it. This takes O(n(n+m)) time.
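A sketch of this first approach, using one BFS per source vertex on an adjacency-list digraph (`reachability` is an illustrative name; whether a vertex reaches itself by the empty path is a convention, noted in a comment):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// reach[i][j] = true iff j is reachable from i, computed by one BFS
// per source vertex. Each BFS takes O(n+m), so the total is O(n(n+m)).
std::vector<std::vector<bool>>
reachability(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<std::vector<bool>> reach(n, std::vector<bool>(n, false));
    for (int s = 0; s < n; ++s) {
        std::queue<int> q;
        q.push(s);
        reach[s][s] = true;  // convention: every vertex reaches itself
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u])
                if (!reach[s][v]) { reach[s][v] = true; q.push(v); }
        }
    }
    return reach;
}
```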

Here's another more complicated way.

Order the vertices v1,v2,...,vn in some arbitrary way.

Define M[i,j,k] to be true if there is a path from i to j that goes through only the first k vertices v1,...,vk. (That is, i and j need not be in the set of k vertices, but all the other vertices on the path must be.)

For example:

M[i,j,0] = true if there is an edge from i to j in E.

M[i,j,1] = true if there is an edge from i to j, or there are edges (i,v1) and (v1,j).

M[i,j,n] = true if there is any path from i to j.

Claim: M[i,j,k] = true iff (M[i,j,k-1] = true or (M[i,k,k-1] = true and M[k,j,k-1] = true) ). (A path through only v1,...,vk either avoids vk entirely, or can be split at vk into two paths that each go through only v1,...,vk-1.)

This recurrence leads to the following algorithm:

```
1. Initialize M[i,j] = false for all i,j.
2. Set M[i,j] = true for each edge (i,j) in E.
3. For k = 1,2,...,n
4.   For i = 1,2,...,n
5.     For j = 1,2,...,n
6.       M[i,j] = M[i,j] || (M[i,k] && M[k,j])
7. Return M.
```
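The pseudocode translates directly into C++, updating the boolean matrix in place (a sketch; `transitive_closure` is an illustrative name):

```cpp
#include <cassert>
#include <vector>

// Warshall's algorithm: on input, M[i][j] is true iff (i,j) is an edge;
// on return, M[i][j] is true iff there is a nonempty path from i to j.
// Three nested loops over n vertices give O(n^3) time.
void transitive_closure(std::vector<std::vector<bool>>& M) {
    int n = M.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (M[i][k] && M[k][j]) M[i][j] = true;
}
```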

The algorithm runs in O(n^3) time.

This isn't particularly fast; however, we will see later how to modify the algorithm to compute shortest paths (a slightly more difficult problem).
