For a large system to be reliable, its components must be super-reliable.

If your company is writing an operating system with a thousand subroutines, and each subroutine independently has even a 1/1000 chance of being buggy, then the operating system has a good chance of being buggy: the probability that at least one subroutine is buggy is 1 - (1 - 1/1000)^1000, which is about 63%.
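That probability is easy to check directly. A minimal sketch (the numbers 1000 and 1/1000 are just the illustrative figures from above):

```python
# Probability that a system of 1000 subroutines, each independently
# buggy with probability 1/1000, contains at least one buggy subroutine.
n = 1000          # number of subroutines (illustrative)
p = 1 / 1000      # per-subroutine bug probability (illustrative)

p_all_correct = (1 - p) ** n       # every subroutine is bug-free
p_some_bug = 1 - p_all_correct     # at least one is buggy

print(f"P(at least one buggy subroutine) = {p_some_bug:.3f}")  # about 0.632
```

This is the familiar fact that (1 - 1/n)^n approaches 1/e, so the system is buggy with probability roughly 1 - 1/e.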

If you write a code library that is used by thousands of people, and your code library has a subtle bug, you can bet that someone out there will likely be bitten by it.

A more subtle example is academic knowledge. In scientific and mathematical domains, knowledge builds on other knowledge. Suppose your work relies on previous work, and that work relies on some other previous work, and so on. Then, if there's a flaw somewhere down the line, it can invalidate all the later work that depends on it.

So, in these domains, certainty is desirable. Absolute certainty may be impossible, but where a high degree of certainty is possible, this may enable us to build large and complex systems that function with a high degree of reliability.

At a more personal level, having code that "just works" is important. Imagine what it would be like if you had to debug your phone every other time you made a phone call. Ideally, once you've solved an algorithmic problem, you should be able to use your solution as a black box, without having to worry about how it works. It should "just work".

For example, consider the problem of computing the g.c.d. of two positive integers, from /Lecture1.

Note that, to start, the problem itself is well-defined. We have a high degree of confidence that we can all agree on what the meaning of this problem is.

Next, note that the algorithms that we've proposed for this problem are well-defined. With some basic expertise in algorithms, a person will know with a high degree of certainty what each algorithm is supposed to do on any given input.

Next, the meaning of the statement "this algorithm is correct" (for a particular algorithm) is clear. It means that on any input, the algorithm produces the right output.

Next, note that (and this is the key point I want to make)
*it is possible to know that an algorithm is correct with a
very high degree of certainty*.
For example, consider the first algorithm described in /Lecture1:

- for k=1..i, check if k evenly divides i and j. return the largest k that does.
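The pseudocode above translates directly into code. A minimal sketch (the function name `gcd_bruteforce` is chosen here for illustration):

```python
def gcd_bruteforce(i, j):
    """Return the greatest common divisor of positive integers i and j
    by checking every candidate k from 1 up to i."""
    best = 1
    for k in range(1, i + 1):
        if i % k == 0 and j % k == 0:
            best = k  # k divides both; later hits are always larger
    return best

print(gcd_bruteforce(12, 18))  # prints 6
```

Its correctness is nearly self-evident: the loop examines every possible common divisor and remembers the largest one it finds.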

For the next algorithm considered (based on gcd(i,j) = gcd(i,j-i)), correctness is not so clear. However, with some work, it is possible to verify for oneself that this algorithm is correct. Because the verification is a little more complicated, the confidence in correctness may be a little less. Still, it is possible to reach a state where you see with certainty that the algorithm is correct. (Not "I think it is probably correct; I tried it on a bunch of examples" but instead "Yes, I see why it works. I'm quite sure that it will give the correct output on any input.")
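For concreteness, here is one way the subtraction-based algorithm might look, with the key invariant stated as a comment (a sketch; the name `gcd_subtract` and the exact loop structure are choices made here, not taken from the lecture):

```python
def gcd_subtract(i, j):
    """Compute gcd(i, j) for positive integers i, j using the identity
    gcd(i, j) = gcd(i, j - i) when j > i.

    Why the identity holds: any common divisor of i and j also divides
    j - i, and any common divisor of i and j - i also divides j.  So the
    set of common divisors, and hence the gcd, is unchanged."""
    while i != j:
        if i > j:
            i, j = j, i   # keep i <= j so the subtraction stays positive
        j = j - i          # gcd(i, j) is preserved by the identity above
    return i               # gcd(i, i) = i

print(gcd_subtract(12, 18))  # prints 6
```

The invariant in the docstring is the heart of the correctness argument: every loop iteration preserves the gcd, the values stay positive, and j + i strictly decreases, so the loop terminates with i = j = gcd of the original inputs.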

*One of the most important skills you can get out of this course
is this ability: to be able to understand the correctness of an algorithm
with this degree of confidence, and to know when you do or do not
have that degree of confidence.*

In your education so far, if you are like most people, you have usually been in the following situation: you have been asked to gain confidence in correctness of things when you already know that other people know that those things are correct. That is, you've been learning things that other people already know.

But when you are designing algorithms in the future, it is likely that you will need to be able to decide for yourself whether an algorithm you design is correct. You will only be able to proceed with confidence if you have the skill of being able to verify, with confidence, on your own, that an algorithm is correct.

It is possible to exercise this skill now if you try. In the course, I'll be asking you to do that.

"Proof" is just another name for an understanding that gives this high degree of certainty.

The first part of giving a proof that an algorithm is correct:
convincing *yourself* that the algorithm is correct.
That is, understanding the algorithm, then thinking about
it until you see with certainty that it is (or is not, perhaps) correct.
Until you yourself see it, you cannot give a proof to anyone else.

The second part of giving a proof is this: figuring out how to explain why the algorithm is correct to someone else. That is, you want them to be able to see the reasoning that you understand (the one that convinces you that the algorithm is correct). A proof is an explanation that achieves this.

Note that this means that the way you present a proof depends on your audience. You may need to go into more or less detail depending on how much the other person knows.

Since this part of giving a proof is about communication, you also need to think about how to organize what you say or write so that it is easily understood. But, the bottom line is, giving a proof is a form of communication. A successful proof is one that communicates to the other person enough information for them to see with certainty that what you are trying to prove is in fact true.