ClassS04CS141/Hwk1Hints


Redoing written Homework 1

You may turn in another solution to Homework 1 on Monday 4/26 at the start of class. If you do, that solution will be the one used to determine your grade, and after your homework is scored by your TA, a 15% penalty will be deducted as the cost of redoing it. The following hints may be useful in redoing it. Lab this week will also be devoted to discussion of this homework.

If you don't turn in another solution Monday, then the solution you turned in last Wednesday will be used to determine your score (with no deduction).

Hints for Homework 1

Some basic definitions:

Common mistakes:

General guidelines:

  1. First, understand the meaning of the problem. What precisely is the problem asking for? If you are unsure, think about it some more or ask.
  2. Second, think of a possible solution. Specify it (to yourself) as precisely as you can.
  3. Third, try to understand whether or not your solution is correct. Try to prove (to yourself) that it is correct, or prove (again to yourself) that it is not.
  4. Fourth, if you are convinced that your solution is correct, figure out how to explain your reasoning (the reasoning you used to convince yourself) in such a way that others can understand it. First, try explaining it to someone in words, allowing them to ask questions. Next, try to write down your line of reasoning fully, so that anyone familiar with the concepts we've taught so far in class will be able to follow it. At this point, be precise and complete. Use formalism (mathematical notation, defining new terms) as necessary to make sure what you are writing is unambiguous and clear. If the solution is complex, break your ideas and reasoning down into independent, easily described sub-parts, give an overview of how the parts fit together, then prove each part.

1. Stack via shrinkable array.

Figure 1.19(a) in the book (page 41) may help you with this exercise.

Here is an attempt at a solution to 1A that does not work.

We need to show that, for any positive integer N, there is a sequence of N stack operations, each of which is a push() or a pop(), that makes the implementation described there take total time Ω(N²).

We propose the following sequence:

  push(), push(), push(), ..., push(), pop(), pop(), pop(), ..., pop()

where the number of push() operations is N/2, as is the number of pop() operations. That is, the sequence consists of N/2 pushes followed by N/2 pops.

What does the implementation do for this sequence?

For this particular sequence of N operations, the time taken by the implementation is O(N). We can prove this just as we did for our analysis of the growable array in the second lecture: the time spent growing is proportional to S + 2S + 4S + 8S + ... + 2^i S, where S is the initial table size and 2^i S is the table size after the last push() (so 2^i S ≤ N). This sum is geometric, and so is proportional to its largest term, which is O(N). So the total time spent growing is O(N).

Likewise, the time spent shrinking is proportional to T + T/2 + T/4 + ... + T/2^i, where T is the table size after the last push(), so T is O(N). This sum is geometric, so it is proportional to its largest term, which is T, which is O(N). Thus, the total time spent shrinking is O(N).

Thus, the total time spent in grow() or shrink() is O(N). Outside of grow() and shrink(), each push() or pop() operation takes constant time, so the time spent outside grow() and shrink() is also O(N).

To answer 1A, you need to come up with a different sequence of N push() and pop() operations, and show that the total time spent for the entire sequence is Ω(N²). You will need to find a sequence that makes grow() and shrink() happen much more often, probably by intermixing the two operations; the sketch below may help you experiment.
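
If you want to experiment with candidate sequences, here is a minimal Python sketch of such an implementation. The class name, the initial table size, and the exact resize policy (grow to double when full; shrink to half as soon as the array is half empty) are assumptions reconstructed from the analysis above, not necessarily the exact policy in the book, so check them against the problem statement before relying on the numbers.

 # Minimal sketch of a shrinkable-array stack.  The resize policy below is
 # an assumption reconstructed from the analysis above.
 class ArrayStack:
     def __init__(self, initial_size=1):
         self.table = [None] * initial_size
         self.count = 0        # number of elements currently stored
         self.copy_work = 0    # total cells copied while growing/shrinking

     def _resize(self, new_size):
         new_table = [None] * new_size
         for k in range(self.count):
             new_table[k] = self.table[k]
         self.copy_work += self.count   # copying dominates the resize cost
         self.table = new_table

     def push(self, x):
         if self.count == len(self.table):            # full: grow
             self._resize(2 * len(self.table))
         self.table[self.count] = x
         self.count += 1

     def pop(self):
         assert self.count > 0, "pop from empty stack"
         self.count -= 1
         x = self.table[self.count]
         self.table[self.count] = None
         if 0 < self.count <= len(self.table) // 2:   # half empty: shrink
             self._resize(len(self.table) // 2)
         return x

 # The sequence discussed above: N/2 pushes followed by N/2 pops.
 N = 2 ** 16
 s = ArrayStack()
 for _ in range(N // 2):
     s.push(0)
 for _ in range(N // 2):
     s.pop()
 print(s.copy_work, "cells copied over", N, "operations")

For this sequence, copy_work grows linearly in N, consistent with the O(N) bound above; a sequence witnessing Ω(N²) must trigger resizes far more often.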

A different analysis

Here is a different analysis of the running time for the above sequence, one that may help you with 1B.

Say a push() or pop() operation is constant time if it does not cause the array to grow or shrink. Since there are at most N constant time operations, and each one takes O(1) time, the total time spent for these is O(N).

Next we argue that the time spent in the non-constant-time push or pop operations is also O(N).


Consider a non-constant-time push() operation. Let's say it causes the array to grow to some size 2T. The time spent for the operation is O(T). Preceding this push(), there must have been at least T/2 constant-time push() operations since the last time the table was resized. (This is because the previous resizing must have been a grow, and it must have grown the table to size T, leaving T/2 free cells in the table.) Thus, the time taken for any non-constant-time push() operation is proportional to the number of constant-time push() operations that immediately preceded it.

The latter observation implies that the total time taken for all non-constant-time push() operations is proportional to the total number of constant-time push() operations. Since there are at most N such operations, the total time for non-constant-time push() operations is O(N).

Next consider a non-constant-time pop() operation, other than the first pop() operation. Let's say the pop() causes the array to shrink to some size T. The time spent for the operation is O(T). Preceding this pop(), there must have been at least T constant-time pop() operations since the last time the table was resized. (This is because the previous resizing must have been a shrink(), and it must have shrunk the table to size 2T, leaving no empty cells.) Thus, the time spent for this pop() operation is proportional to the number of constant-time pop() operations preceding it (since the last non-constant-time pop()).

The latter observation implies that the total time taken for non-constant-time pop() operations is proportional to the number of constant-time pop() operations. Since there are at most N such operations, the total time spent for non-constant-time pop() operations is O(N).

In summary, the total time spent for all operations is

O(number of constant-time operations)
+ O(time spent for non-constant-time push() operations)
+ O(time spent for non-constant-time pop() operations other than the first)
+ O(time spent for first non-constant-time pop() operation).

Since each of these four terms is O(N), the total time is O(N).

2. Union-Find using parent pointers

First, read Section 4.2.3 of the text. This question is about a different implementation than the one we discussed in class. Here is pseudo-code for the implementation we are asking about:

 MakeSet(i) {
    Parent[i] = i;    // each element starts as the root of its own tree
    Size[i] = 1;
 }

 Find(i) {
    // follow parent pointers up to the root (no path compression)
    if Parent[i] == i then return i
    else return Find(Parent[i])
 }

 Union(i, j) {
    i = Find(i);  j = Find(j);
    // union by size: the root of the smaller tree becomes
    // a child of the root of the larger tree
    if Size[i] <= Size[j] then {
       Parent[i] = j;
       Size[j] = Size[j] + Size[i];
    } else {
       Parent[j] = i;
       Size[i] = Size[j] + Size[i];
    }
 }
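
If you want to run this implementation, here is a straightforward Python translation of the pseudocode above. The dictionary representation and the guard against uniting a set with itself are my additions for convenience; they are not part of the original pseudocode.

 # Python translation of the pseudocode above, for experimentation.
 Parent = {}
 Size = {}

 def make_set(i):
     Parent[i] = i           # each element starts as its own root
     Size[i] = 1

 def find(i):
     if Parent[i] == i:      # a root is its own parent
         return i
     return find(Parent[i])  # walk up the tree; no path compression

 def union(i, j):
     i, j = find(i), find(j)
     if i == j:              # guard (not in the pseudocode): same set already
         return
     if Size[i] <= Size[j]:  # union by size: smaller root joins larger root
         Parent[i] = j
         Size[j] += Size[i]
     else:
         Parent[j] = i
         Size[i] += Size[j]

 # Small usage example: after these unions, node 0 sits at depth 2.
 for x in range(4):
     make_set(x)
 union(0, 1); union(2, 3); union(0, 2)
 print(find(0))   # prints the root of 0's tree (3 with this tie-breaking)

Note that find() does no path compression, so its cost is proportional to the depth of the queried node.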

2A. Hint: Do N/3 MakeSet operations, N/3 Union operations, then N/3 Find operations. Prove that your particular choice of Union operations produces a tree of depth log₂(N/3). Perform the Find operations on the deepest node in the tree.

2B. Hint: To start, prove by induction that any tree of depth D produced by Unions has size at least 2^D. From this, what can you conclude about the maximum depth of any tree produced by at most N Union, Find, and MakeSet operations?

Note: the depth of a tree is the maximum distance from the root to any leaf.

3. O-notation, sums

Recall from class the following arguments:

Since

  ∑_{i=1..n} i² ≤ ∑_{i=1..n} n² = n·n² = n³,

it follows that ∑_{i=1..n} i² = O(n³).

Since

  ∑_{i=1..n} i² ≥ ∑_{i=n/2..n} i² ≥ ∑_{i=n/2..n} (n/2)² ≥ (n/2)·(n/2)² = n³/8,

it follows that ∑_{i=1..n} i² = Ω(n³).

Since ∑_{i=1..n} i² = O(n³) and ∑_{i=1..n} i² = Ω(n³), it follows that ∑_{i=1..n} i² = Θ(n³).

Use these kinds of arguments to solve 3A-3C.
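
As a quick numerical sanity check (not a substitute for the argument itself), you can compare the sum against its claimed growth rate; a small Python sketch:

 # Sanity check: sum_{i=1..n} i^2 compared against n^3.
 for n in (10, 100, 1000, 10000):
     s = sum(i * i for i in range(1, n + 1))
     print(n, s / n**3)   # ratio approaches 1/3, consistent with Theta(n^3)

If the ratio tended to 0 or grew without bound, the Θ(n³) claim would be wrong.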

4. Recurrence relations

Review the lectures on recurrence relations and the on-line notes.

When I ask you to describe the recurrence tree, what I am interested in is: the depth of the tree, the number of children of each node, the size of the subproblems associated with the nodes at each level, and the work done for the subproblems at each level.
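
For example (a standard recurrence, not necessarily one assigned in the homework): for T(n) = 2T(n/2) + n, each node of the recurrence tree has 2 children, the nodes at level i correspond to 2^i subproblems of size n/2^i, the work done at level i is 2^i · (n/2^i) = n, and the depth is log₂ n, so the total work is Θ(n log n).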

