You may turn in another solution to homework 1 by Monday 4/26 at the start of class. If you do, that solution will be the one used to determine your grade, and after your homework is scored by your TA, a 15% penalty will be deducted as the cost of redoing it. In redoing it, the following hints may be useful. Lab this week will also be devoted to discussion of this homework.

If you don't turn in another solution Monday, then the solution you turned in last Wednesday will be used to determine your score (with no deduction).

Some basic definitions:

- f(n) is O(g(n)) if, for some positive constant C and some integer N_{0}, f(n) ≤ C g(n) for every integer n ≥ N_{0}.

- f(n) is Ω(g(n)) if g(n) is O(f(n)). Equivalently, for some positive constant C and some integer N_{0}, f(n) ≥ C g(n) for every integer n ≥ N_{0}.

- f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)). Equivalently, for some positive constants C and C' and some integer N_{0}, C' g(n) ≤ f(n) ≤ C g(n) for every integer n ≥ N_{0}.
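
These definitions can be explored numerically. Below is a minimal sketch using a hypothetical function f(n) = 3n^{2} + 5n, which is O(n^{2}) with witnesses C = 4 and N_{0} = 5. Note that checking finitely many values is an illustration, *not* a proof; a proof must cover every n ≥ N_{0}.

```python
# Hypothetical example: f(n) = 3n^2 + 5n is O(n^2), witnessed by
# C = 4 and N0 = 5, since 3n^2 + 5n <= 4n^2 exactly when n >= 5.
def f(n):
    return 3 * n**2 + 5 * n

C, N0 = 4, 5

# Spot-check the inequality on a finite range (illustration, not a proof).
assert all(f(n) <= C * n**2 for n in range(N0, 10000))
```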

Common mistakes:

- If the problem says "show that there is a sequence of N operations that make the data structure take Ω(N^{2}) time," then you need to show that *for all N* there is such a sequence. It is not enough to show it for a particular N (for example, for N=4).

- Similarly, if you want to show (using the definition) that a function f(N) is O(g(N)) (for example), you have to find a constant C (such as 1/2) and then show that for *all* large enough N, the inequality f(N) ≤ C g(N) holds. It is not enough to show the inequality for a *particular* N.

- Make sure you write enough down so that your idea is clear to the reader. Often students will put down only a diagram or a few lines explaining something about the idea, but that is not enough for the reader to see what the idea is. If you are unsure whether what you've written is clear, try putting it aside until the idea is no longer fresh in your mind (maybe for a day or so) then go back and re-read it. Or, explain the problem (not the idea) to someone else, then hand them the solution and see if it is clear to them.

General guidelines:

- First, understand the meaning of the problem. What precisely is the problem asking for? If you are unsure, think about it some more or ask. Even if you think you understand what the problem is asking for, you can ask a TA or the instructor for confirmation that you've got it.
- Second, think of a possible solution. Specify it (to yourself) as precisely as you can.
- Third, try to understand whether or not your solution is correct. Try to prove (to yourself) that it is correct, or prove (again to yourself) that it is not.
- Fourth, if you are convinced that your solution is correct, figure out how to explain your reasoning (that you used to see yourself why it is correct) in such a way that others can understand it. First, try explaining it to someone in words, allowing them to ask questions. Next, try to write down your line of reasoning fully, so that anyone familiar with the concepts that we've taught so far in class will be able to follow it. At this point, try to be precise and complete. Use formalism (mathematical notation, defining new terms) as necessary to make sure what you are writing is unambiguous and clear. If the solution is complex, break your ideas and reasoning down into independent and easily described sub-parts, describe the overview of how the parts fit together, then prove each part.

Figure 1.19(a) in the book (page 41) may help you with this exercise.

Here is an attempt at a solution to 1A that does *not* work.

We need to show that, for *any* positive integer N, there is a sequence of N stack operations,
each of which is a push() or a pop(), that makes the implementation described
there take total time at least Ω(N^{2}).

The sequence we propose is the following sequence:

push(), push(), push(), ..., push(), pop(), pop(), pop(), ..., pop()

where the number of push() operations is N/2, as is the number of pop() operations. That is, the sequence consists of N/2 pushes followed by N/2 pops.

For this particular sequence of N operations, the time taken by the implementation is O(N).
We can prove this just as we did for our analysis of the growable array in the second lecture:
the time spent growing is proportional to S + 2S + 4S + 8S ... + 2^{i}S where S is the initial
table size and 2^{i}S is the table size after the last push() (so 2^{i} S ≤ N).
This sum is geometric, and so is proportional to its largest term, which is O(N).
So the total time spent growing is O(N).

Likewise,
the time spent shrinking is proportional to T + T/2 + T/4 + ... + T/2^{i}
where T is the table size after the last push, so T is O(N).
This sum is geometric, so the sum is proportional to its largest term,
which is T, so the sum is O(N). Thus, the total time spent shrinking is O(N).
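
The geometric-sum step used in both paragraphs can be sanity-checked numerically. Here is a small sketch (with an illustrative value of S that is not part of the problem statement) confirming that such a sum is always less than twice its largest term:

```python
# A geometric series S + 2S + 4S + ... + 2^i * S is at most twice its
# largest term: the sum equals (2^(i+1) - 1) * S < 2 * 2^i * S.
S = 4  # illustrative initial table size
for i in range(1, 20):
    terms = [S * 2**k for k in range(i + 1)]
    assert sum(terms) < 2 * terms[-1]  # sum is proportional to its last term
```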

Thus, the total time spent in grow() or shrink() is O(N). Other than time spent in grow() or shrink(), each push() or pop() operation takes constant time, so the time spent outside grow() and shrink() is also O(N).
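
The O(N) claim for this particular sequence can also be checked by simulation. The sketch below is a hypothetical model of the resizing policy (doubling when a push finds the table full, halving when a pop leaves it half full); it counts only the copying work done by grow() and shrink():

```python
# Count the copying work for N/2 pushes followed by N/2 pops under a
# hypothetical double-when-full / halve-when-half-full resizing policy.
def total_copy_work(ops, initial_size=4):
    count, cap, work = 0, initial_size, 0
    for op in ops:
        if op == "push":
            if count == cap:            # table full: grow by doubling
                work += count           # cost of copying every element
                cap *= 2
            count += 1
        else:                           # "pop"
            count -= 1
            if cap > initial_size and count == cap // 2:
                work += count           # half full: shrink by halving
                cap //= 2
    return work

for N in [8, 64, 1024, 4096]:
    ops = ["push"] * (N // 2) + ["pop"] * (N // 2)
    assert total_copy_work(ops) <= 2 * N   # total resizing work is O(N)
```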

To answer 1A, you need to come up with a different sequence of N push() and pop() operations,
and show that the total time spent for the entire sequence is Ω(N^{2}).
You will need to find a sequence that makes grow() and shrink() happen much more often,
probably by intermixing the two operations.

Say a push() or pop() operation is *constant-time* if it does not cause
the array to grow or shrink. Since there are at most N constant-time operations,
and each one takes O(1) time, the total time spent for these is O(N).

Next we argue that the time spent in the non-constant-time push or pop operations is also O(N).

Consider a non-constant-time push() operation. Let's say it causes the array to grow to some size 2T.
The time spent for the operation is O(T). Preceding this push(), there must have
been at least T/2 constant-time push() operations since the last time
the table was resized. (This is because the previous resizing
must have been a grow, and it must have grown the table to size T,
leaving T/2 free cells in the table.) Thus, *the time taken for any non-constant-time push() operation
is proportional to the number of constant-time push() operations that immediately preceded it.*

The latter observation implies that the *total* time taken for *all* non-constant-time push() operations
is proportional to the total number of constant-time push() operations. Since there are at most
N such operations, the total time for non-constant-time push() operations is O(N).

Next consider a non-constant-time pop() operation, other than the first non-constant-time pop() operation. Let's say the pop() causes the array to shrink to some size T. The time spent for the operation is O(T). Preceding this pop(), there must have been at least T constant-time pop() operations since the last time the table was resized. (This is because the previous resizing must have been a shrink(), and it must have shrunk the table to size 2T, leaving no empty cells.) Thus, the time spent for this pop() operation is proportional to the number of constant-time pop() operations preceding it (since the last non-constant-time pop()).

The latter observation implies that the total time taken for non-constant-time pop() operations is proportional to the number of constant-time pop() operations. Since there are at most N such operations, the total time spent for non-constant-time pop operations is O(N).

In summary, the total time spent for all operations is

- O(number of constant-time operations)
- + O(time spent for non-constant-time push() operations)
- + O(time spent for non-constant-time pop() operations other than the first)
- + O(time spent for first non-constant-time pop() operation).

Since each of these four terms is O(N), the total time is O(N).

First, *read section 4.2.3* of the text. This question is about
a different implementation than the one we discussed in class.
Here is pseudo-code for the implementation we are asking about:

MakeSet(i) { Parent[i] = i; Size[i] = 1; }

Find(i) { if Parent[i] == i then return i else return Find(Parent[i]) }

Union(i, j) {
  i = Find(i); j = Find(j);
  if Size[i] <= Size[j] then { Parent[i] = j; Size[j] = Size[j] + Size[i]; }
  else { Parent[j] = i; Size[i] = Size[j] + Size[i]; }
}
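
For experimentation, here is a sketch of the same union-by-size structure in Python (a direct translation of the pseudo-code above, except that it adds a guard so that a Union of two elements already in the same set does nothing; without the guard, Size would be double-counted):

```python
# Union-by-size disjoint sets, translated from the pseudo-code above.
parent, size = {}, {}

def make_set(i):
    parent[i] = i
    size[i] = 1

def find(i):
    while parent[i] != i:       # iterative version of the recursive Find
        i = parent[i]
    return i

def union(i, j):
    i, j = find(i), find(j)
    if i == j:                  # already in the same set: nothing to do
        return
    if size[i] <= size[j]:      # smaller tree becomes child of larger
        parent[i] = j
        size[j] += size[i]
    else:
        parent[j] = i
        size[i] += size[j]
```

For example, after make_set on 0..3 and union(0, 1), union(2, 3), union(0, 2), all four elements share one root.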

2A. Hint: Do N/3 makeset operations, N/3 Union operations, then N/3 Find operations.
Prove that your particular choice of Union operations produces a tree of depth log_{2}(N/3).
Perform the Find operations on the deepest node in the tree.

2B. Hint: To start, prove by induction that any tree of depth D produced by Unions has size at least 2^{D}.
From this, what can you conclude about the maximum depth of any tree produced by at most N Union, Find, and MakeSet
operations?

Note: the *depth* of a tree is the maximum distance from the root to any leaf.
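
The size-versus-depth claim in the 2B hint can be explored empirically before you prove it. Below is a self-contained sketch (random unions; an experiment, not a proof) checking that under union by size, every node at depth d belongs to a tree of size at least 2^{d}:

```python
import random

# Empirical check: for trees built by union-by-size, any node at depth d
# is in a tree of size at least 2^d.  Self-contained copy of the structure.
def experiment(n, trials=200):
    for _ in range(trials):
        parent = list(range(n))
        size = [1] * n

        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i

        for _ in range(n - 1):          # random unions
            a, b = find(random.randrange(n)), find(random.randrange(n))
            if a == b:
                continue
            if size[a] <= size[b]:
                parent[a], size[b] = b, size[b] + size[a]
            else:
                parent[b], size[a] = a, size[a] + size[b]

        for i in range(n):              # depth of i = distance to its root
            d, j = 0, i
            while parent[j] != j:
                d, j = d + 1, parent[j]
            assert size[find(i)] >= 2 ** d

experiment(64)
```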

Recall from class the following arguments:

Since

- ∑_{i=1..n} i^{2} ≤ ∑_{i=1..n} n^{2} = n^{3},

we have ∑_{i=1..n} i^{2} = O(n^{3}).

Since

- ∑_{i=1..n} i^{2} ≥ ∑_{i=n/2..n} i^{2} ≥ ∑_{i=n/2..n} (n/2)^{2} = n^{3}/8,

we have ∑_{i=1..n} i^{2} = Ω(n^{3}).

Since
∑_{i=1..n} i^{2} = O(n^{3})
and ∑_{i=1..n} i^{2} = Ω(n^{3}),
it follows that ∑_{i=1..n} i^{2} = Θ(n^{3}).
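
A quick numerical sanity check of these bounds (illustrative only; the inequalities above are the actual argument):

```python
# Check n^3 / 8 <= sum_{i=1..n} i^2 <= n^3 for a few values of n.
for n in [1, 10, 100, 1000]:
    s = sum(i * i for i in range(1, n + 1))
    assert n**3 / 8 <= s <= n**3
```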

Use these kinds of argument to solve 3A-3C.

Review the lectures on recurrence relations and the on-line notes.

When I ask you to describe the recurrence tree, what I am interested in is: the depth of the tree, the number of children of each node, the size of the subproblems associated with the nodes at each level, and the work done for the subproblems at each level.
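
As a hypothetical illustration (not one of the homework recurrences), consider T(n) = 2T(n/2) + n: at level k the tree has 2^{k} nodes, each holding a subproblem of size n/2^{k}, so every level does n total work, and the tree has depth log_{2} n. A sketch tabulating exactly these quantities:

```python
import math

# Tabulate, level by level, the recurrence tree for T(n) = 2*T(n/2) + n:
# number of nodes, subproblem size, and total work at each level.
def describe_tree(n):
    levels = []
    depth = int(math.log2(n))
    for k in range(depth + 1):
        nodes = 2**k
        size = n // 2**k
        levels.append((k, nodes, size, nodes * size))
    return levels

for k, nodes, size, work in describe_tree(16):
    print(f"level {k}: {nodes} nodes of size {size}, total work {work}")
```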