MPCS 55001 — Lecture 18

Getting over it

In the context of a course like this, we are interested in solving problems efficiently. And in less academic settings, part of what you'll have to do is somehow deal with these difficult problems. While you may not be cooking up reductions yourself, you'll need to recognize when you might have a hard problem on your hands and know what to do about it. Unfortunately, we don't have time to get into a lot of this, but it is useful to know that we (mostly) haven't just given up and curled into the fetal position.

So suppose you have an $\mathbf{NP}$-hard problem that you have to solve. How do you deal with it? You have a few options.

Look for special cases

The first step is to figure out whether you can solve your problem on a restricted class of inputs. Often, a problem is difficult in general but admits efficient algorithms for special cases. This can involve restricting the problem space (instead of general graphs, consider only bipartite graphs or trees) or modifying the problem itself (instead of arbitrary cost functions, impose some restriction on the cost function). For example, Independent Set is $\mathbf{NP}$-hard in general but solvable in polynomial time on trees and on bipartite graphs, and SAT becomes polynomial-time solvable when restricted to 2-SAT.
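
To make the tree case concrete, here is a minimal sketch (in Python; my own illustration, not from lecture) of the standard dynamic program for maximum independent set on a tree: each vertex reports the best it can do with and without itself in the set.

```python
# Maximum independent set on a tree via dynamic programming (illustrative sketch).
# The problem is NP-hard on general graphs, but on a tree each vertex only needs
# to report its best answer with and without itself in the set.

def max_independent_set_tree(adj, root=0):
    """adj: adjacency lists of a tree; returns the size of a maximum independent set."""
    def dfs(u, parent):
        take = 1   # u is in the set, so its children must stay out
        skip = 0   # u is out of the set, so each child may go either way
        for v in adj[u]:
            if v == parent:
                continue
            t, s = dfs(v, u)
            take += s
            skip += max(t, s)
        return take, skip

    return max(dfs(root, -1))

# Example: the path 0-1-2-3-4; the answer is 3 (take vertices 0, 2, 4).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(max_independent_set_tree(path))  # 3
```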

Consider approximation

If that doesn't work, the next thing to figure out is whether you really must solve your problem optimally, or whether there's room for some slack. Many optimization problems are difficult to solve exactly, but if you're willing to put up with some degree of suboptimality and accept a "good enough" solution, they can often be handled efficiently. This approach leads to the study of approximation algorithms: algorithms that guarantee a solution within a certain factor of the optimum (e.g., a vertex cover that's at most twice as large as the smallest one).

A classic example of this is Vertex Cover. Here's an easy way to approximate a minimum vertex cover on any graph:

  1. Find a maximal matching greedily: repeatedly pick any edge whose endpoints are both still unmatched (this takes polynomial time).
  2. For each edge in your matching, choose both endpoints of the edge to be in the vertex cover.

This can be expressed more directly as a single greedy pass (greedy algorithms get quite a lot of use in approximation algorithms). Here's the idea: every edge of the graph shares an endpoint with some matching edge (otherwise that edge could have been added to the matching), so taking both endpoints of every matching edge covers everything. On the other hand, any vertex cover needs at least one vertex per matching edge, because the matching edges don't share endpoints; so the optimal cover has at least $|M|$ vertices while ours has exactly $2|M|$. This means that at worst, you've got a vertex cover that's twice as large as the actual optimal solution. Maybe that's good enough.
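
Here is a minimal sketch of this 2-approximation (in Python; my own illustration), with the maximal matching built greedily:

```python
# 2-approximation for Vertex Cover: greedily build a maximal matching and take
# both endpoints of every matching edge. The cover has size at most 2*|M| <= 2*OPT,
# since any cover needs at least one endpoint of each matching edge.

def vertex_cover_2approx(edges):
    """edges: iterable of (u, v) pairs; returns a set of vertices covering every edge."""
    cover = set()
    for u, v in edges:
        # The edge (u, v) joins the matching only if neither endpoint is used yet.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: a star plus one extra edge. The optimum is {0, 3} (size 2);
# this returns {0, 1, 3, 4} (size 4), within the factor-2 guarantee.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
print(vertex_cover_2approx(edges))
```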

Finding such approximations is often but not always possible, and there is a whole spectrum of guarantees, depending on how close to optimal you need to be and how much running time you're willing to trade for extra accuracy. For instance, Vertex Cover is approximable to within a factor of 2, but Set Cover can only be approximated to within a factor of $O(\log n)$: the greedy algorithm achieves that, and doing substantially better is impossible unless $\mathbf P = \mathbf{NP}$. In other words, you can't get a constant-factor approximation for it.
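
For reference, the $O(\log n)$ guarantee for Set Cover comes from a very simple greedy rule: repeatedly pick the set covering the most still-uncovered elements. A minimal sketch (in Python; my own illustration):

```python
# Greedy Set Cover: repeatedly choose the set covering the most uncovered elements.
# This achieves an O(log n) approximation ratio, essentially the best possible in
# polynomial time unless P = NP.

def greedy_set_cover(universe, sets):
    """universe: set of elements; sets: dict name -> set of elements.
    Returns a list of set names whose union is the universe (assumes a cover exists)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set that covers the largest number of still-uncovered elements.
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
sets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {5}}
print(greedy_set_cover(universe, sets))  # ['A', 'C']
```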

Try exponential algorithms anyway

But there are also many problems that cannot be approximated efficiently unless $\mathbf P = \mathbf{NP}$. So what can you do then? At this point you cry and get ready to design a non-polynomial-time algorithm. But even here, picking the right algorithm design strategy can make a big difference. We've already seen an example of this: the 0-1 knapsack problem. By employing dynamic programming, we got an $O(nW)$-time solution, where $W$ is the knapsack capacity. That's only pseudo-polynomial (it's exponential in the number of bits of $W$), but it's a significant improvement over the brute-force approach of checking all $2^n$ subsets of items.
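
As a reminder of what that dynamic program looks like, here is a minimal sketch (in Python; my own illustration) of the $O(nW)$ table:

```python
# 0-1 knapsack by dynamic programming: O(n * W) time, where W is the capacity.
# Pseudo-polynomial, but far better than trying all 2^n subsets when W is modest.

def knapsack(values, weights, W):
    """Returns the maximum total value achievable with total weight at most W."""
    n = len(values)
    # best[w] = best value achievable with capacity w using the items seen so far
    best = [0] * (W + 1)
    for i in range(n):
        # Iterate capacities downward so each item is used at most once.
        for w in range(W, weights[i] - 1, -1):
            best[w] = max(best[w], best[w - weights[i]] + values[i])
    return best[W]

# Example: capacity 10; the best choice takes the items of weight 4 and 6, value 13.
print(knapsack(values=[6, 5, 7], weights=[4, 3, 6], W=10))  # 13
```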

One can also leverage developments in technologies like SAT solvers, which are programs engineered to solve very large instances of SAT very quickly. While there are no theoretical guarantees on performance (the worst case is still not polynomial), they often work well in practice on reasonably large instances. Taking advantage of this relies on recognizing that you have an $\mathbf{NP}$-complete problem in the first place and knowing how to transform it into SAT.
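
For example, to hand a graph 3-coloring instance to an off-the-shelf SAT solver, you only need to write down the clauses. The sketch below (in Python; my own illustration) emits them in the standard DIMACS CNF format that most solvers accept:

```python
# Encode graph 3-coloring as CNF and print it in DIMACS format, which standard
# SAT solvers read. Variable 3*v + c + 1 means "vertex v gets color c", c in {0,1,2}.

def three_coloring_to_dimacs(n, edges):
    var = lambda v, c: 3 * v + c + 1
    clauses = []
    for v in range(n):
        # Each vertex gets at least one color...
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])
        # ...and at most one color.
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])
    for u, v in edges:
        # Adjacent vertices never share a color.
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    lines = [f"p cnf {3 * n} {len(clauses)}"]
    lines += [" ".join(map(str, clause)) + " 0" for clause in clauses]
    return "\n".join(lines)

# Example: a triangle (3-colorable, so the resulting formula is satisfiable).
print(three_coloring_to_dimacs(3, [(0, 1), (1, 2), (0, 2)]))
```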

Beyond $\mathbf{NP}$-completeness

There's a lot left even beyond just dealing with $\mathbf{NP}$-completeness. Here's a bit of a roundup of things we've seen that might be of interest.

Hopefully, this is enough to give you a picture of what else is out there, whether you decide to continue studying theoretical computer science or not.