CMSC 27100 — Lecture 3

Let's see an example of universal quantification by starting a proof of Proposition 2.13, which was

For all integers $a$, $b$, and $c$, if $a \mid b$, then $a \mid bc$.

This had the formula $\forall a,b,c (a \mid b \rightarrow a \mid bc)$.

Let $a$, $b$, and $c$ be arbitrary integers. \[\cdots\]


Now, we would proceed with our proof by using the $a,b,c$ we chose. This means constructing a proof for the sentence $a \mid b \rightarrow a \mid bc$. To do that, we need to know how to prove an implication.

Implication

As stated earlier, most theorems are implications. This corresponds to the inference rule $\rightarrow$-introduction: \[\frac{\boxed{\begin{matrix}p \\ \vdots \\ q\end{matrix}}}{p \rightarrow q} \rightarrow\!i\] What this rule means is that in order to prove such statements, we assume $p$ and then use that assumption to show that the conclusion $q$ is true. Again, we use a box to show that $p$ exists as an assumption inside the box, since it's not one of our given facts.

Let's continue our proof from above. Since we dealt with the universal quantification, we are left with the statement $a \mid b \rightarrow a \mid bc$. This means we need to assume $a \mid b$ is true and we can use this assumption to complete our proof.

Let $a$, $b$, and $c$ be arbitrary integers. We assume that $a \mid b$. So there exists an integer $n$ such that $a \cdot n = b$. Then \begin{align*} bc &= (a \cdot n) \cdot c &\text{substitution} \\ &= a \cdot (n \cdot c) &\text{associativity} \end{align*} We note that $n \cdot c$ is an integer, since the integers are closed under multiplication. Therefore, there exists an integer $n'$ (which is our new name for $n \cdot c$) such that $a \cdot n' = bc$. Therefore, $a \mid bc$.

From this, we can see that a proof of $p \rightarrow q$ involves

  1. Assume that $p$ holds.
  2. State known facts.
  3. Conclude that $q$ holds.

Here, step (2) is a bit thin on details, but this is where the clever combining of mathematical facts and knowledge happens, so there's not much we can say regarding the structure of the proof.
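Returning to the divisibility proof, the witness construction can be sanity-checked numerically. The sketch below is an illustration, not a proof: for sample integers $a$, $b$, $c$ with $a \mid b$, it extracts the witness $n$, forms $n' = n \cdot c$ as in the argument above, and confirms $a \cdot n' = bc$. The helper `divides` and the sample values are our own choices for illustration.

```python
def divides(a: int, b: int) -> bool:
    """True iff there is an integer n with a * n == b (assumes a != 0)."""
    return b % a == 0

a, b, c = 3, 12, 5
assert divides(a, b)
n = b // a           # the witness from a | b: a * n == b
n_prime = n * c      # the new witness n' = n * c from the proof
assert a * n_prime == b * c   # hence a | bc
```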

Existential quantification

You may have noticed we did something sneaky in the above proof. When we applied the definition of divisibility, in a sense, we replaced $a \mid b$ with the statement $\exists n (a\cdot n = b)$. This means that we were working with an existentially quantified statement. Here's what happened:

  1. Start with $a \mid b$.
  2. Apply definition, so that $a \mid b$ becomes $\exists n(a\cdot n = b)$.
  3. Since there exists an appropriate integer, we name this integer $n$ (a bit confusing to use the same name) and now we have the statement $a \cdot n = b$, for a specific integer $n$.

The inference rule for using an existentially quantified statement is \[\frac{\begin{matrix} \quad \\ \quad \\ \exists x P(x) \end{matrix} \quad \boxed{\begin{matrix} P(u), \text{sub $u$ for $x$} \\ \vdots \\ Q \end{matrix}}}{Q} \exists e\] In other words, if we want to use an existentially quantified statement $\exists x P(x)$, we simply choose a specific element $u$ and work with the statement $P(u)$. Notice that, like the other rules we saw earlier, the chosen $u$ exists only within a specific scope. But we can use this fact to arrive at a proof of some other statement $Q$.

However, this is only half the story, because we end up having to arrive at a proof for an existentially quantified statement:

  1. Show that $bc = a \cdot (n \cdot c)$.
  2. Observe that $n \cdot c$ is an integer.
  3. Because $n' = n \cdot c$ is an integer with $a \cdot n' = bc$, conclude that $\exists n (a \cdot n = bc)$.
  4. Apply definition, so that $\exists n (a \cdot n = bc)$ becomes $a \mid bc$.

That is, we showed that there exists an appropriate integer and used this to construct an existentially quantified statement. This is the inference rule \[\frac{P(t)}{\exists x P(x)} \exists i\] In essence, this is a proof that looks like this:

  1. Prove that $P(t)$ holds for some $t$ in our domain.
  2. Since we showed that there's some $t$ that works, we conclude that $\exists x P(x)$.

Back to induction

Let's go back to induction. Here, we have a sentence $\forall k (P(k) \rightarrow P(\operatorname{succ}(k)))$. What we need to do then is to show for an arbitrary element, say $t$, that $P(t) \rightarrow P(\operatorname{succ}(t))$. So now, we are left with an implication that we need to come up with a proof for. Then our inference rule says that we need to assume $P(t)$ and use that assumption to construct a proof for $P(\operatorname{succ}(t))$.

We can now put all of this together.

  1. Base case. Prove the statement $P(z)$.
  2. Inductive case. Prove the statement $\forall k (P(k) \rightarrow P(\operatorname{succ}(k)))$:
    1. Choose an arbitrary natural number $t$.
    2. Assume $P(t)$ holds. You should clearly state what $P(t)$ is. This is called the inductive hypothesis.
    3. Use this assumption together with known facts to prove $P(\operatorname{succ}(t))$. You should clearly state when you use the inductive hypothesis.

A common use for induction is for proving properties of sums or sequences of integers. The following is a common example.

For every $n \in \mathbb N$, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$.

If you've been taught this formula before, then you've probably been told the story of Gauss solving this problem when he was a child—a story which has been embellished over the last century and a half. However, his teachers in this story, in assigning $n = 100$, gave him an easy out since he only needed to consider one instance of the problem, so his solution does not use induction. We don't have that luxury, since we want to prove that this formula holds for all $n$.

We will prove this by induction on $n$.

Base case. If $n = 0$, then $\sum \limits_{i=0}^0 i = 0$ and $\frac{0 \cdot (0+1)}2 = 0$. Therefore, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for $n = 0$.

Inductive case. Let $k \in \mathbb N$ be arbitrary and assume that $\sum \limits_{i=0}^k i = \frac{k(k+1)}2$. We will show that $\sum \limits_{i=0}^{k+1} i = \frac{(k+1)(k+2)}2$. Then, we have the following: \begin{align*} \sum_{i=0}^{k+1} i &= \sum_{i=0}^k i + (k+1) \\ &= \frac{k(k+1)}2 + (k+1) & \text{by ind. hyp.} \\ &= (k+1) \cdot \left(\frac k 2 + 1 \right) \\ &= (k+1) \cdot \left(\frac k 2 + \frac 2 2 \right) \\ &= \frac{(k+1)(k+2)}2 \end{align*}

Thus, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for $n = k+1$. Therefore, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for all $n \in \mathbb N$.
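The closed form we just proved can also be checked empirically. This is only an illustration (no amount of finite testing replaces the induction above), but it compares $0 + 1 + \cdots + n$ against $n(n+1)/2$ for the first few values of $n$; the name `triangular` is our own.

```python
def triangular(n: int) -> int:
    """Closed form n(n+1)/2; integer division is exact since n(n+1) is even."""
    return n * (n + 1) // 2

# Compare the explicit sum against the closed form for small n.
for n in range(100):
    assert sum(range(n + 1)) == triangular(n)
```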

Notice that even though we were proving a fact about a series, we were really performing induction on the structure of the natural numbers. This is typically how induction is learned—as a property of the natural numbers. But recall that the structure of the proof mirrors the structure of the recursive definition of the natural numbers. This suggests that induction can be applied to other recursive structures, beyond just the natural numbers.

Strings

Let's consider the structure of strings.

Let $\mathcal A$ be a finite set of symbols, called an alphabet. Then a string over $\mathcal A$ is defined recursively: the empty string $\varepsilon$ is a string, and if $a \in \mathcal A$ and $\mathbf w$ is a string, then $a\mathbf w$ is a string.

To combine strings, we define the concatenation operation.

Let $\mathbf u$ and $\mathbf v$ be strings. Then the concatenation of $\mathbf u$ and $\mathbf v$, denoted $\mathbf u \cdot \mathbf v$, is defined recursively: if $\mathbf u = \varepsilon$, then $\mathbf u \cdot \mathbf v = \mathbf v$; and if $\mathbf u = a\mathbf w$ for some $a \in \mathcal A$ and string $\mathbf w$, then $\mathbf u \cdot \mathbf v = a(\mathbf w \cdot \mathbf v)$.
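The recursive definition of concatenation can be transcribed directly into code. Python's built-in `+` on strings already does this job; the sketch below just mirrors the two cases of the definition, with the name `concat` chosen for illustration.

```python
def concat(u: str, v: str) -> str:
    if u == "":
        # Base case: epsilon . v = v
        return v
    else:
        # Recursive case: u = a w, so u . v = a (w . v)
        a, w = u[0], u[1:]
        return a + concat(w, v)
```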

Let's consider the length of a string. Informally, it's just the number of characters in the string. Easy! But suppose we want to compute the length of a string; then we need a formal definition.

Let $\mathbf s$ be a string. Then the length of $\mathbf s$, denoted $|\mathbf s|$, is defined recursively: $|\varepsilon| = 0$, and $|a\mathbf w| = 1 + |\mathbf w|$ for $a \in \mathcal A$ and string $\mathbf w$.

This leads to the following program.

def length(s: str) -> int:
    # Base case: the empty string has length 0.
    if s == "":
        return 0
    else:
        # Recursive case: drop the first character and add 1.
        x: str = s[1:]
        return 1 + length(x)

Maybe that seems obvious. Something you can try to prove is the following.

For all strings $\mathbf u$ and $\mathbf v$, $|\mathbf u \cdot \mathbf v| = |\mathbf u| + |\mathbf v|$.
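The claim can be checked empirically before attempting the induction (the real proof proceeds by induction on the structure of $\mathbf u$). The sketch below redefines the recursive `length` function from above and tests the identity on a handful of sample strings of our own choosing.

```python
def length(s: str) -> int:
    # Recursive length, following the definition: |epsilon| = 0, |a w| = 1 + |w|.
    if s == "":
        return 0
    return 1 + length(s[1:])

# Check |u . v| = |u| + |v| on sample pairs (evidence, not a proof).
samples = ["", "a", "ab", "abc"]
for u in samples:
    for v in samples:
        assert length(u + v) == length(u) + length(v)
```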