CMSC 27200 — Lecture 7

The earliest deadline first schedule $S$ is optimal.

Let $S^*$ be an optimal schedule with the fewest inversions. We assume $S^*$ has no downtime. If $S^*$ has no inversions, then $S^* = S$.

So suppose that $S^*$ has an inversion. Then by Lemma 3.12, there exist two adjacent inverted jobs, say $i$ and $j$. By Lemma 3.13, we can exchange these two jobs to reduce the number of inversions by 1 without increasing the maximum lateness.

But this new schedule is an optimal schedule with fewer inversions, contradicting our assumption that $S^*$ has the fewest inversions among optimal schedules.

This proof uses a slightly different argument than "greedy stays ahead". Here, we show that any optimal solution can be transformed, by swapping pieces, into the one found by following the greedy strategy. This is called an exchange argument.
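As a concrete illustration, here is a minimal Python sketch of the earliest deadline first strategy; the representation of a job as a (processing time, deadline) pair is an assumption made for this example, not something fixed by the notes.

    # Earliest deadline first: schedule jobs in nondecreasing order of deadline.
    # Assumed representation: each job is a (processing_time, deadline) pair.
    def edf_schedule(jobs):
        order = sorted(range(len(jobs)), key=lambda j: jobs[j][1])
        time, max_lateness = 0, 0
        for j in order:
            processing, deadline = jobs[j]
            time += processing                          # finishing time of job j
            max_lateness = max(max_lateness, time - deadline)
        return order, max_lateness

    # Example: three jobs given as (processing time, deadline) pairs.
    print(edf_schedule([(3, 6), (2, 8), (1, 9)]))       # -> ([0, 1, 2], 0)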

Single-Source Shortest Paths

Now, we'll take a look at a graph problem. Recall the definition of a graph.

A graph is a pair $G = (V,E)$ of vertices $V$ and edges $E$. A graph is undirected if edges are unordered pairs of vertices. A graph is directed if edges are ordered pairs of vertices.

Informally, edges in undirected graphs don't have a direction associated with them, while direction matters in the edges of directed graphs. Graphs are fundamental structures for representing all sorts of models and networks.

You may recall the notion of paths in a graph. We will be considering the following problem.

The shortest path problem is: Given a directed graph $G = (V,E)$ with an edge weight function $w:E \to \mathbb N$, a source vertex $s \in V$ and a destination vertex $t \in V$, find a shortest (least weight) directed path from $s$ to $t$.
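To make the later examples concrete, we can fix a representation for weighted directed graphs. The adjacency-dictionary form below is just one reasonable choice, assumed for illustration throughout these examples.

    # Assumed representation for the examples below: a dict mapping each
    # vertex to a dict of {neighbor: edge weight} for its outgoing edges.
    graph = {
        1: {2: 4, 3: 1},
        2: {4: 2},
        3: {2: 1, 4: 5},
        4: {},
    }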

Consider the following graph. We want to get from vertex 1 to vertex 10.

Here, the shortest path from 1 to 10 is highlighted in yellow.

The applicability of finding shortest paths is almost entirely obvious.

One observation worth noting is that finding the shortest path from $s$ to $t$ turns out to be about the same amount of work as finding shortest paths from $s$ to every vertex.

There are several algorithms for solving the shortest path problem. The one we'll look at, which uses a greedy strategy, is due to Edsger Dijkstra in 1959.

Dijkstra's algorithm

The strategy is to maintain a set of vertices for which we already know a shortest path from our source vertex $s$. At each step, we choose the unexplored vertex whose best path through an already-explored vertex has minimum total weight.

    \begin{algorithmic}
    \PROCEDURE{Dijkstra}{$G,s$}
        \STATE $S = \{s\}$
        \STATE $d[s] = 0$
        \STATE $\pi[s] = \varnothing$
        \WHILE{$S \neq V$}
            \STATE Choose $v \not \in S$ minimizing $r(v) = \min\limits_{u \in S} \left( d[u] + w((u,v)) \right)$, and let $u \in S$ attain this minimum
            \STATE $S = S \cup \{v\}$
            \STATE $d[v] = r(v)$
            \STATE $\pi[v] = u$
        \ENDWHILE
    \ENDPROCEDURE
    \end{algorithmic}

We are given a source vertex $s \in V$ as part of the input, along with the graph $G$. We then need to keep track of the following information: the set $S$ of explored vertices, the length $d[v]$ of the shortest path we have found to each explored vertex $v$, and the predecessor $\pi[v]$ of each explored vertex along that path.

We begin with $S = \{s\}$, $d[s] = 0$, and $\pi[s] = \varnothing$. We then repeatedly explore the graph by visiting vertices $v \not \in S$. We add to $S$ the vertex $v$ that minimizes \[r(v) = \min_{u \in S} \left( d[u] + w((u,v)) \right),\] and set $d[v] = r(v)$ and $\pi[v] = u$, where $u \in S$ is a vertex attaining this minimum.
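As a rough translation of the pseudocode into Python, one might write the following sketch; it uses the adjacency-dictionary representation assumed earlier and is meant as an illustration, not an official implementation.

    import math

    # Sketch of the pseudocode above: G maps each vertex to {neighbor: weight},
    # with nonnegative weights; returns the distance table d and predecessors pi.
    def dijkstra(G, s):
        S = {s}
        d = {s: 0}
        pi = {s: None}
        while S != set(G):
            best_v, best_u, best_r = None, None, math.inf
            for u in S:                              # scan edges leaving S
                for v, w_uv in G[u].items():
                    if v not in S and d[u] + w_uv < best_r:
                        best_v, best_u, best_r = v, u, d[u] + w_uv
            if best_v is None:                       # remaining vertices unreachable
                break
            S.add(best_v)                            # explore v, with d[v] = r(v)
            d[best_v] = best_r
            pi[best_v] = best_u
        return d, pi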

Once we've explored the entire graph, we will have information that allows us to construct the shortest path from $s$ to every other vertex in $G$. To recover a shortest path from $s$ to $v$, we simply follow the sequence of predecessors from $\pi[v]$ until we get back to $s$.
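For example, recovering a shortest path from the predecessor table might look like the following, continuing with the sample graph and dijkstra sketch above.

    # Follow predecessors from v back to s, then reverse to get the path s -> v.
    def shortest_path(pi, s, v):
        path = [v]
        while path[-1] != s:
            path.append(pi[path[-1]])
        path.reverse()
        return path

    d, pi = dijkstra(graph, 1)
    print(d[4], shortest_path(pi, 1, 4))   # -> 4 [1, 3, 2, 4]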

Consider again our example from above.

Here, we've highlighted in blue the edges that were selected by Dijkstra's algorithm.

The question we have to deal with is whether this strategy actually works. In other words, how can we be sure that once we've added a vertex to $S$ that we can conclude we found the shortest path to it from $s$?

For each $u \in S$, $d[u]$ is the length of a shortest path from $s$ to $u$.

We will show this by induction on $|S|$. First, we have $|S| = 1$, in which case $S = \{s\}$ and $d[s] = 0$.

Now suppose this holds for some $S$ with $|S| \geq 1$. Let $v$ be the next vertex added to $S$ and let $(u,v)$ be the edge chosen by the algorithm, so $u \in S$ attains the minimum in $r(v)$. Since $u \in S$, $d[u]$ is the length of a shortest path from $s$ to $u$, so there is a path of length $r(v) = d[u] + w((u,v))$ from $s$ to $v$. We want to show that this is a shortest path from $s$ to $v$.

Consider some other path $P$ from $s$ to $v$ and let $(x,y)$ be the first edge of $P$ that leaves our explored set $S$, so $x \in S$ and $y \not \in S$. By our inductive hypothesis, the portion of $P$ from $s$ to $x$ has length at least $d[x]$, so the portion of $P$ from $s$ to $y$ has length at least $d[x] + w((x,y))$. By definition of $r(y)$, we have $r(y) \leq d[x] + w((x,y))$. But since $v$ was the vertex chosen by Dijkstra's algorithm, we have $r(v) \leq r(y)$. Finally, because the edge weights are nonnegative, the remainder of $P$ from $y$ to $v$ cannot decrease its length, so $P$ is no shorter than $r(v)$.
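Putting these inequalities together, writing $\ell(P)$ for the length of $P$, \[ \ell(P) \geq d[x] + w((x,y)) \geq r(y) \geq r(v). \]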

Since the algorithm adds a vertex to $S$ at each step, we eventually have $S = V$, so this proposition establishes the correctness of the algorithm.