CMSC 27100 — Lecture 5

Uniqueness

So now that we know that our desired integers have to be out there somewhere, we need to show that they're the only ones. Recall that uniqueness is proved as a statement of the form \[\forall y (P(y) \rightarrow x = y),\] where $x$ is the object whose existence we have already established.

This gives us an idea of how to prove that our integers are unique: we assume that we have two solutions, and then arrive at the conclusion that these two solutions were the same to begin with.

Let $n$ and $d \gt 0$ be arbitrary integers. By Lemma 4.3, there exist integers $q$ and $r$ such that $n = d \cdot q + r$ and $0 \leq r \lt d$. We want to show that $q$ and $r$ are unique.

Suppose that there exist integers $q_1, r_1, q_2, r_2$ such that $n = d \cdot q_1 + r_1$ and $n = d \cdot q_2 + r_2$ with $0 \leq r_1, r_2 \lt d$. Without loss of generality, let $r_1 \geq r_2$ and consider the following system of equations: \begin{align*} n &= q_1 \cdot d + r_1 \\ n &= q_2 \cdot d + r_2. \end{align*} Subtracting gives us $0 = (q_1 - q_2) \cdot d + (r_1 - r_2)$. Rearranging this equation gives us $(q_2 - q_1) \cdot d = r_1 - r_2$. Since $q_2 - q_1$ is an integer, we have $d \mid (r_1 - r_2)$ by definition of divisibility. Since $0 \leq r_1 - r_2 \lt d$ and the only multiple of $d$ in this range is $0$, we have $r_1 - r_2 = 0$ and therefore $r_1 = r_2$. From above, this gives us $(q_2 - q_1) \cdot d = 0$. Since $d \gt 0$, we have $q_2 - q_1 = 0$ and therefore $q_1 = q_2$. Thus, we have shown that $q$ and $r$ are unique.
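
Although the argument above is purely about existence and uniqueness, it can help to see the Division Theorem computationally. The sketch below (in Python, not part of the lecture) computes the unique $q$ and $r$ for a few sample inputs; the helper name div_mod is made up for illustration.

    # A minimal sketch: compute the unique q and r from the Division Theorem.
    def div_mod(n, d):
        """Return (q, r) with n = d*q + r and 0 <= r < d, assuming d > 0."""
        q = n // d         # Python's floor division gives the quotient
        r = n - d * q      # the remainder then lies in [0, d)
        return q, r

    for n, d in [(17, 5), (-17, 5), (0, 3)]:
        q, r = div_mod(n, d)
        assert n == d * q + r and 0 <= r < d
        print(n, d, q, r)  # e.g. 17 = 5*3 + 2, -17 = 5*(-4) + 3, 0 = 3*0 + 0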

Divisors

You might recall that all of the numbers we discussed in the Division Theorem actually have names (which is why we use certain letters to name them): in the equation $n = d \cdot q + r$ with $0 \leq r \lt d$, we call $n$ the dividend, $d$ the divisor, $q$ the quotient, and $r$ the remainder.

We are particularly interested in divisors. One thing that is of particular interest is when certain numbers share divisors; this gives us one way to relate numbers to each other.

Let $a,b,c$ be integers. If $c \mid a$ and $c \mid b$, then $c$ is a common divisor of $a$ and $b$. The integer $d$ is the greatest common divisor of $a$ and $b$ if $d$ is a common divisor of $a$ and $b$ and every common divisor of $a$ and $b$ divides $d$. Equivalently, $c \leq d$ for all common divisors $c$ of $a$ and $b$. The greatest common divisor of $a$ and $b$ is denoted $\gcd(a,b)$.
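
For example, the common divisors of $12$ and $18$ are $\pm 1$, $\pm 2$, $\pm 3$, and $\pm 6$, so $\gcd(12, 18) = 6$; note that every one of these common divisors does indeed divide $6$.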

From the Division Theorem, we get the following result about the gcd, due to Étienne Bézout in the mid-1700s.

For all integers $m$ and $n$, there exist integers $a$ and $b$ such that $\gcd(m,n) = a \cdot m + b \cdot n$.
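
For example, $\gcd(12, 18) = 6$, and indeed $6 = (-1) \cdot 12 + 1 \cdot 18$, so $a = -1$ and $b = 1$ work. (The coefficients are not unique: $a = 2$ and $b = -1$ also work, since $2 \cdot 12 + (-1) \cdot 18 = 6$.)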

Non-constructive proof

The proof that we'll use is a non-constructive proof. When we proved the Division Theorem, we showed the existence of integers $q$ and $r$ by showing explicitly how to compute them. Here, we'll use a different tactic, a proof by contradiction.

Let's outline the general logical foundations for this approach. First, we have to ask ourselves what it means to prove that a statement is false. With other propositions we've seen, like $p \wedge q$ or $p \rightarrow q$, we construct their proofs using proofs of $p$ and $q$, the constituent parts of the proposition. However, for a proposition like $\neg p$, we only have the proposition $p$ and it's clear that a proof of $p$ would not let us prove $\neg p$.

Are there things that we know definitely can't be true? Yes! Consider the proposition $p \wedge \neg p$. This proposition can't be true because it can never be the case that both $p$ and $\neg p$ are true. Any proposition of this form is called a contradiction.

For any proposition $p$, the proposition $p \wedge \neg p$ is said to be a contradiction, and is denoted $\bot$.

The approach to proving $\neg p$ is to show that it cannot be the case that $p$ is true. What does that look like?

  1. Assume that $p$ is true.
  2. Reason using the assumption that $p$ is true and arrive at a contradiction.
  3. Contradictions cannot occur. The assumption that $p$ is true created the conditions that allowed us to come up with a contradiction. Therefore, $p$ cannot be true.

The inference rule for this is \[\frac{\boxed{\begin{matrix}p \\ \vdots \\ \bot \end{matrix}}}{\neg p} \neg i\] In other words, a proof of $\neg p$ is a proof of the implication $p \rightarrow \bot$. One way to think of this is that $\neg p$ is a refutation of $p$.

If $m = n = 0$, then any $a$ and $b$ work. So suppose that at least one of $m$ and $n$ is non-zero; without loss of generality, say $m \neq 0$. Consider the set $S = \{am + bn \mid a,b \in \mathbb Z\}$. Since $m \neq 0$, we have $|m| = (\pm 1) \cdot m + 0 \cdot n \in S$, so $S$ contains a positive integer. Let $d$ be the least positive integer in $S$ and let $a$ and $b$ be integers such that $d = a \cdot m + b \cdot n$. First, we will show that $d$ is a common divisor of $m$ and $n$.

Suppose, to the contrary, that $d$ isn't a common divisor of $m$ and $n$. Without loss of generality, say $d \nmid m$. Then by the Division Theorem, there exist integers $q$ and $r$ such that $m = q \cdot d + r$ and $0 \leq r \lt d$; moreover, since $d \nmid m$, we have $r \neq 0$, so $0 \lt r \lt d$. Then we have \begin{align*} r &= m - q \cdot d \\ &= m - q \cdot (am + bn) \\ &= (1-qa)m + (-qb)n. \end{align*} This gives us $r \in S$. But since $0 \lt r \lt d$, this contradicts our choice of $d$ as the least positive integer in $S$. Therefore, $d$ must be a common divisor of $m$ and $n$.

We'll finish off showing that $d$ is the greatest common divisor of $m$ and $n$ next class.
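
Incidentally, although the proof above is non-constructive, the coefficients $a$ and $b$ can actually be computed. The following Python sketch is not the argument from the lecture; it is the extended Euclidean algorithm, included only as an illustration (and, for simplicity, it assumes $m, n \geq 0$).

    # A sketch of the extended Euclidean algorithm: it returns g = gcd(m, n)
    # together with integers a, b such that a*m + b*n = g.
    def extended_gcd(m, n):
        old_r, r = m, n     # remainders
        old_a, a = 1, 0     # coefficients of m
        old_b, b = 0, 1     # coefficients of n
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_a, a = a, old_a - q * a
            old_b, b = b, old_b - q * b
        return old_r, old_a, old_b

    g, a, b = extended_gcd(12, 18)
    print(g, a, b)                    # 6 -1 1
    assert a * 12 + b * 18 == g       # (-1)*12 + 1*18 = 6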

"Without loss of generality" and proof by cases

Our proofs today used the statement "without loss of generality" a few times. What does this mean? Consider the scenarios in which we used it:

  1. In the uniqueness proof, we assumed that $r_1 \geq r_2$.
  2. In the proof of Bézout's result, we assumed that $m$ was non-zero.
  3. Later in that proof, we assumed that $d \nmid m$.

In each of these cases, we were pre-emptively dealing with these questions: what if it had been the case that $r_2 \geq r_1$, that $n$ was the non-zero integer, or that $d \nmid n$?

Why do we need to consider each of these scenarios? It's because what we're really dealing with here is a statement of the form $p \vee q$: we know that $r_1 \geq r_2$ or $r_2 \geq r_1$, that $m \neq 0$ or $n \neq 0$, and that $d \nmid m$ or $d \nmid n$.

How does one make use of a statement of the form $p \vee q$? Unlike a statement of the form $p \wedge q$, we can't say that $p$ is definitely true or $q$ is definitely true. We know that at least one of them is true but we don't know which. So we have to assume each of $p$ and $q$ and show that from either assumption, we can construct an appropriate proof. This is sometimes known as proof by cases. Here, $p$ and $q$ are our cases. There's an inference rule for this: \[\frac{\begin{matrix} \\ \\ p \vee q \end{matrix} \quad \boxed{\begin{matrix}p \\ \vdots \\ r \end{matrix}} \quad \boxed{\begin{matrix}q \\ \vdots \\ r \end{matrix}}}{r} \vee\!e\] What is this saying? Suppose we want to prove some proposition $r$ by using the proposition $p \vee q$. What we need to do is consider the $p$ case and show that we can arrive at $r$ and then consider the $q$ case and show that we can also arrive at $r$.

In other words, we do the following:

  1. Consider case $p$—assume that $p$ holds.
  2. Carry out proof of $r$ assuming $p$.
  3. Conclude that $r$ holds.
  4. Consider case $q$—assume that $q$ holds.
  5. Carry out proof of $r$ assuming $q$.
  6. Conclude that $r$ holds.
  7. Therefore, in either case $p$ or $q$, we can conclude $r$.

So, strictly speaking, in each of our three situations we need to consider two cases: $r_1 \geq r_2$ and $r_2 \geq r_1$; $m \neq 0$ and $n \neq 0$; and $d \nmid m$ and $d \nmid n$.

But what does "without loss of generality" mean here? We make the observation that if we carried out a proof for the other case, the proof would look exactly the same! The only things that would change would be the positions of $r_1$ and $r_2$ or $m$ and $n$. It's not very interesting to see the exact same proof twice. So mathematicians invented some language, "without loss of generality", to say to the reader that, look, the other case will basically give you the same proof but with the names swapped, so we'll only do it once.