CMSC 27100 — Lecture 6

Modular arithmetic

Now, we'll define a system of arithmetic on integers based around remainders. We often want to do calculations that wrap around at multiples of a certain number, as with clock time. Modular arithmetic formalizes these notions. One of the things we'll see is that in certain cases, working in these structures gives us a notion of "division" that is well-defined. The system of modular arithmetic was first developed by Gauss.

Definition 6.1. Let $m$ be a positive integer. For integers $a$ and $b$, we say that $a$ is congruent to $b$ modulo $m$, written $a \equiv b \pmod m$ or $a \equiv_m b$, if $m \mid (a-b)$. Equivalently, $a \equiv b \pmod m$ if and only if $a \mathop{\mathbf{mod}} m = b \mathop{\mathbf{mod}} m$.

Here, we need to be careful to distinguish the congruence relation $a \equiv b \pmod m$ from the function $a \mathop{\mathbf{mod}} m$, which produces the remainder of $a$ upon division by $m$.
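To make the two characterizations concrete, here is a small Python sketch (mine, not part of the course materials). Note that for a positive modulus, Python's `%` operator always returns a value in $\{0, \dots, m-1\}$, even for negative arguments, so it matches the $\mathbf{mod}$ function used here.

```python
def congruent(a: int, b: int, m: int) -> bool:
    """True exactly when m divides (a - b)."""
    return (a - b) % m == 0

# The remainder formulation agrees: a mod m == b mod m.
assert congruent(17, 5, 12) and 17 % 12 == 5 % 12
assert congruent(-3, 7, 10) and (-3) % 10 == 7 % 10
assert not congruent(4, 9, 3) and 4 % 3 != 9 % 3
```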

Ultimately, we want to be able to talk about integers that are equivalent to each other. An easy example of this is when we think about integers modulo 10, since our entire number system is built around 10s. We can formally define what it means to be equivalent.

Definition 6.2. A relation $\sim$ is an equivalence relation if $\sim$ satisfies the following:

  1. Reflexivity: For all $a$, $a \sim a$.
  2. Symmetry: For all $a, b$, if $a \sim b$, then $b \sim a$.
  3. Transitivity: For all $a, b, c$, if $a \sim b$ and $b \sim c$, then $a \sim c$.

Equivalence relations are called as such because they capture relationships that are similar to equality. For instance, if I have two formulas $\varphi$ and $\neg \neg \varphi$, we can't say they're equal because they aren't: they contain different symbols and one is longer than the other. However, we can say that they're logically equivalent because they mean the same thing. One can define the notion of logical equivalence more formally and then show that it satisfies the conditions for equivalence relations.

Theorem 6.3. For $m \gt 0$, $\equiv_m$ is an equivalence relation.

Proof.

  1. To see that $\equiv_m$ is reflexive, observe that $m \mid (a-a)$ for all integers $a$.
  2. To see that $\equiv_m$ is symmetric, if $a \equiv_m b$, then $m \mid (a - b)$. This means there is an integer $n$ such that $a - b = mn$. Then we get $b - a = m\cdot (-n)$ and we have $m \mid (b-a)$.
  3. To see that $\equiv_m$ is transitive, consider integers $a,b,c$ such that $a \equiv_m b$ and $b \equiv_m c$. We have $m \mid (a-b)$ and $m \mid (b-c)$, which gives us $m \mid (a-b) + (b-c)$ and therefore, $m \mid (a-c)$ and $a \equiv_m c$.
$$\tag*{$\Box$}$$

Using the notion of an equivalence relation, we can divide $\mathbb Z$ into sets that contain equivalent members. For instance, if we choose $m = 2$, then all even numbers are equivalent to each other (they are $0 \pmod 2$) and all odd numbers are equivalent to each other (they are $1 \pmod 2$). These sets are called equivalence classes.

Definition 6.4. For all $m \gt 0$ and $a \in \mathbb Z$, we define the equivalence class modulo $m$ of $a$ to be the set of integers $$[a]_m = \{b \in \mathbb Z \mid b \equiv_m a\}.$$

Typically, we refer to an equivalence class by its "obvious" name, which is the member of the class that is between 0 and $m-1$. This is called the canonical representative of the class. Of course, we should keep in mind that $[0]_m = [m]_m = [2m]_m = [-m]_m$ and so on. In addition, sometimes the $[\,\cdot\,]_m$ gets dropped for convenience's sake, and we have to determine from context whether "2" means $2 \in \mathbb Z$ or $[2]_m$. Usually, this becomes clear with the usage of $\pmod m$ and we will try to make that explicit, but outside of this course, that's not always guaranteed.
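As an illustrative aside (assuming Python, with helper names of my own choosing), the canonical representative is exactly what `a % m` computes, and the other members of $[a]_m$ differ from it by multiples of $m$:

```python
def canonical(a: int, m: int) -> int:
    """The canonical representative of [a]_m, i.e. the member in {0, ..., m-1}."""
    return a % m

def some_members(a: int, m: int, k: int = 2):
    """A few members of the class [a]_m; they all differ by multiples of m."""
    r = canonical(a, m)
    return [r + i * m for i in range(-k, k + 1)]

print(canonical(23, 10))       # 3, so 23 lies in the class [3]_10
print(some_members(-7, 10))    # [-17, -7, 3, 13, 23], all in [3]_10
```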

Now, we'll define how to do arithmetic on these things.

Theorem 6.5. For $m \gt 0$, define operations $+$ and $\cdot$ on the equivalence classes modulo $m$ by $$[a]_m + [b]_m = [a+b]_m \quad \text{and} \quad [a]_m \cdot [b]_m = [a \cdot b]_m.$$ These operations are well-defined: the resulting class does not depend on the choice of representatives $a$ and $b$.

All of this seems a bit obvious, but we should think about what we're really doing here. We've defined operations $+$ and $\cdot$ that look like our usual operations on the integers. However, observe that we're not adding and multiplying integers; we've defined a notion of adding and multiplying sets of integers.

Based solely on this, there is no reason that what we've defined is guaranteed to work. For instance, how do we know that adding two classes in this way gives an answer that doesn't depend on which representatives we happened to pick? Of course, we have to prove this, and it will turn out that our definitions of equivalence classes and of addition and multiplication on those classes are such that everything works out intuitively, almost without a second thought.

Proof. We have to show that for $a_1 \equiv a_2 \pmod m$ and $b_1 \equiv b_2 \pmod m$, we have $a_1 + b_1 \equiv a_2 + b_2 \pmod m$ and $a_1 \cdot b_1 \equiv a_2 \cdot b_2 \pmod m$.

First, by definition, we have that $m \mid (a_1 - a_2)$ and $m \mid (b_1 - b_2)$. Then we have $m \mid ((a_1 - a_2) + (b_1 - b_2))$. We can easily rearrange this to get $m \mid ((a_1 + b_1) - (a_2 + b_2))$ and therefore, $a_1 + b_1 \equiv a_2 + b_2 \pmod m$.

Next, consider $a_1 b_1 - a_2 b_2$. Since $m \mid (a_1 - a_2)$ and $m \mid (b_1 - b_2)$ there exist integers $k$ and $\ell$ such that $km = a_1 - a_2$ and $\ell m = b_1 - b_2$. Then, \begin{align*} a_1 b_1 - a_2 b_2 &= (a_2 + km) (b_2 + \ell m) - a_2 b_2 \\ &= a_2 b_2 + a_2 \ell m + b_2 k m + k \ell m^2 - a_2 b_2 \\ &= m(a_2 \ell + b_2 k + k \ell m) \end{align*} and therefore, $m \mid (a_1 b_1 - a_2 b_2)$ and $a_1 b_1 \equiv a_2 b_2 \pmod m$. $$\tag*{$\Box$}$$
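A quick numerical spot-check of Theorem 6.5, as a sketch rather than a proof: picking different representatives of the same classes does not change the class of the sum or the product.

```python
m = 7
a1, a2 = 3, 3 + 5 * m     # two representatives of [3]_7
b1, b2 = 5, 5 - 2 * m     # two representatives of [5]_7

# The class of the sum and of the product is the same either way.
assert (a1 + b1) % m == (a2 + b2) % m
assert (a1 * b1) % m == (a2 * b2) % m
```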

Now, we can define our structure.

Definition 6.6. Let $\mathbb Z_m = \{[0]_m, [1]_m, \dots, [m-1]_m\}$. The integers mod $m$ are the set $\mathbb Z_m$ together with the binary operations $+$ and $\cdot$ defined above. This structure is denoted by $\mathbb Z/\equiv_m$ or simply by $\mathbb Z_m$.

Up to now, we have been working implicitly in the structure $\mathbb Z$, the integers. As I've briefly alluded to before, we're not only talking about the domain $\mathbb Z$ but also how we interpret operations like $+$ and $\cdot$. The integers mod $m$, $\mathbb Z_m$, is another structure, whose basic elements are the equivalence classes with respect to $\equiv_m$.

These kinds of structures—a set together with binary operations $+$ and $\cdot$ satisfying the usual arithmetic laws, with identities for both operations—are called rings.
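To see the structure as an object in its own right, here is a minimal Python sketch of $\mathbb Z_m$ elements carrying the operations from Theorem 6.5; the class name `Zmod` and its interface are illustrative choices of mine, not anything defined in the notes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Zmod:
    value: int   # canonical representative, reduced to {0, ..., m-1}
    m: int       # the modulus

    def __post_init__(self):
        # Reduce to the canonical representative on construction.
        object.__setattr__(self, "value", self.value % self.m)

    def __add__(self, other: "Zmod") -> "Zmod":
        assert self.m == other.m, "operands must share a modulus"
        return Zmod(self.value + other.value, self.m)

    def __mul__(self, other: "Zmod") -> "Zmod":
        assert self.m == other.m, "operands must share a modulus"
        return Zmod(self.value * other.value, self.m)

print(Zmod(3, 4) + Zmod(2, 4))   # Zmod(value=1, m=4)
print(Zmod(3, 4) * Zmod(3, 4))   # Zmod(value=1, m=4)
```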

The notation $\mathbb Z/\equiv_m$ gives us a hint at what's happening. We took $\mathbb Z$ and partitioned it into equivalence classes by the relation $\equiv_m$. This idea of taking an algebraic structure and constructing another structure based on an equivalence relation is something that comes up a lot in algebra and is called a quotient structure, where quotient in the algebraic context just means equivalence class.

I mentioned earlier that one of the things this structure allows us to do is, under certain conditions, "divide" things, in the sense that there is an operation we can perform on elements of our structure that reverses multiplication. I say "divide" because the operation we perform is not really division. It's more accurate to say that we'll be showing that multiplicative inverses exist.

First, we need the following notions.

Definition 6.7. Two integers $a$ and $b$ are relatively prime (or coprime) if $\gcd(a,b) = 1$.

Recall that a prime number is defined as follows.

Definition 6.8. An integer $p$ greater than 1 is called prime if the only positive divisors of $p$ are 1 and $p$. Otherwise, $p$ is called composite.

Example 6.9. A prime is relatively prime to every positive integer less than it, since it has no positive divisors except 1 and itself. However, composite numbers can also be relatively prime to each other. The numbers 10 and 21 are not prime, since $10 = 2 \cdot 5$ and $21 = 3 \cdot 7$. However, they are relatively prime, since $\operatorname{Div}(10) = \{\pm 1, \pm 2, \pm 5, \pm 10\}$ and $\operatorname{Div}(21) = \{\pm 1, \pm 3, \pm 7, \pm 21\}$, so their only common divisors are $\pm 1$.
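If you want to check coprimality mechanically, Python's standard library provides `math.gcd`; for instance (a small sketch, not part of the notes):

```python
from math import gcd

print(gcd(10, 21))   # 1, so 10 and 21 are relatively prime
print(gcd(10, 15))   # 5, so 10 and 15 are not
```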

Theorem 6.10. If integers $m \gt 0$ and $a$ are relatively prime, then $a$ has a multiplicative inverse mod $m$. That is, there exists an integer $b$ such that $a \cdot b \equiv 1 \pmod m$.

Proof. By Theorem 5.2 (Bézout's lemma), there exist integers $n$ and $b$ such that $n \cdot m + b \cdot a = 1$. Then, \begin{align*} [1]_m &= [n \cdot m + b \cdot a]_m \\ &= [n]_m \cdot [m]_m + [b]_m \cdot [a]_m \\ &= [n]_m \cdot [0]_m + [b]_m \cdot [a]_m \\ &= [b]_m \cdot [a]_m \end{align*} $$\tag*{$\Box$}$$
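The proof is constructive: the Bézout coefficients come from the extended Euclidean algorithm, and the coefficient of $a$ is the inverse. Below is a sketch of that computation in Python (my own implementation, with hypothetical helper names `extended_gcd` and `inverse_mod`); in Python 3.8+, the built-in `pow(a, -1, m)` computes the same inverse.

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a: int, m: int) -> int:
    """The inverse of a modulo m, provided gcd(a, m) = 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {m}")
    return x % m

print(inverse_mod(3, 7))    # 5, since 3 * 5 = 15 ≡ 1 (mod 7)
```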

Example 6.11. Consider $\mathbb Z_4$. There are four equivalence classes, represented by $0,1,2,3$. Since 1 and 3 are each coprime to 4, they have inverses: $1^{-1} = 1$ (this is obvious) and $3^{-1} = 3$, which we get by observing that $3 \cdot 3 = 9 \equiv 1 \pmod 4$. However, 2 has no inverse: \begin{align*} 2 \cdot 0 &= 0 &\pmod 4 \\ 2 \cdot 1 &= 2 &\pmod 4 \\ 2 \cdot 2 &= 4 \equiv 0 &\pmod 4 \\ 2 \cdot 3 &= 6 \equiv 2 &\pmod 4 \end{align*}
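The table above can be reproduced by brute force; a short sketch (not from the notes):

```python
m = 4
for a in range(m):
    # Collect every b in Z_4 with a * b ≡ 1 (mod 4).
    inverses = [b for b in range(m) if (a * b) % m == 1]
    print(a, inverses)   # prints: 0 [], 1 [1], 2 [], 3 [3]
```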

One might ask whether there is any integer $m$ for which every non-zero element of $\mathbb Z_m$ has a multiplicative inverse. Our discussion about prime numbers gives us a hint.

Theorem 6.12. If $a$ and $m$ are relatively prime and $m \gt 1$, then the multiplicative inverse of $a$ modulo $m$ is unique modulo $m$.

Proof. By Theorem 6.10, since $a$ and $m$ are relatively prime, $a$ has a multiplicative inverse modulo $m$. Suppose that $b$ and $c$ are multiplicative inverses of $a$. Then, \begin{align*} b &= b \cdot 1 &\pmod m \\ &= b \cdot (c \cdot a) &\pmod m \\ &= b \cdot (a \cdot c) &\pmod m \\ &= (b \cdot a) \cdot c &\pmod m \\ &= 1 \cdot c &\pmod m \\ &= c &\pmod m \\ \end{align*} $$\tag*{$\Box$}$$

Corollary 6.13. If $p$ is prime and $a \not\equiv 0 \pmod p$, then $a$ has a multiplicative inverse mod $p$.

This is easy to see: since $p$ is prime, its only positive divisors are 1 and $p$. If $a \not\equiv 0 \pmod p$, then $p \nmid a$, so $\gcd(a, p) = 1$ and Theorem 6.10 applies.

Definition 6.14. When it exists, we denote the multiplicative inverse of $a$ by $a^{-1}$.

Up until now, we've been working in $\mathbb Z$, where "division" doesn't really work. However, we've proved sufficient conditions for structures where "division" does work in a sense, in that multiplicative inverses are guaranteed to exist for every non-zero element. This means that we can solve linear congruences like $$6x \equiv 144 \pmod{17}$$ using our usual algebraic techniques. In fact, assuming we were working with the same modulus, it's not hard to solve a system of such equations in the usual way (you will be asked to do this on this week's problem set).
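For instance, here is one way to solve that congruence programmatically, multiplying both sides by $6^{-1} \pmod{17}$ (a sketch assuming Python 3.8+, where `pow(6, -1, 17)` computes the modular inverse):

```python
m = 17
inv6 = pow(6, -1, m)            # 3, since 6 * 3 = 18 ≡ 1 (mod 17)
x = (inv6 * 144) % m
print(x)                        # 7
assert (6 * x) % m == 144 % m   # both sides reduce to 8 mod 17
```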

However, what if we throw in an additional twist and are presented with a system of linear congruences with different moduli? Consider the following: \begin{align*} x &\equiv 2 &\pmod 3 \\ x &\equiv 3 &\pmod 5 \\ x &\equiv 2 &\pmod 7 \end{align*}

This problem, solving for $x$, was posed by the Chinese mathematician Sunzi in the third century AD. The following theorem is due to him.

Theorem 6.15 (Chinese Remainder Theorem). Consider integers $a_1, a_2, \dots, a_k$ and suppose that $m_1, m_2, \dots, m_k$ are positive, pairwise coprime moduli. Then the system of congruences \begin{align*} x & \equiv a_1 & \pmod{m_1} \\ x & \equiv a_2 & \pmod{m_2} \\ & \vdots \\ x & \equiv a_k & \pmod{m_k} \end{align*} has a unique solution modulo $m_1 \cdot m_2 \cdots m_k$.
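A standard proof of this theorem is constructive, and the construction is easy to carry out in code. Here is a sketch (my own implementation, assuming Python 3.8+ for `math.prod` and `pow(x, -1, m)`) applied to Sunzi's system above:

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ residues[i] (mod moduli[i]) for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i               # product of the other moduli
        y_i = pow(M_i, -1, m_i)      # inverse of M_i modulo m_i; exists since gcd(M_i, m_i) = 1
        x += a_i * M_i * y_i
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))     # 23, and indeed 23 ≡ 2 (mod 3), 3 (mod 5), 2 (mod 7)
```

Each term $a_i M_i y_i$ is congruent to $a_i$ modulo $m_i$ and to $0$ modulo every other modulus, so the sum satisfies all $k$ congruences at once.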