MPCS 55001 — Lecture 21

Getting over it

In an algorithms course, we are interested in solving problems efficiently. In less academic settings, part of what you'll have to do is somehow deal with these difficult problems. While you may not be cooking up reductions, you'll need to recognize when you might have a hard problem on your hands and what to do when you do. Unfortunately, we don't have time to get into much of this, but it is useful to know that we (mostly) haven't just given up and curled into the fetal position.

So suppose you have an $\mathbf{NP}$-hard problem that you have to solve. How do you deal with it? You have a few options.

The first step is to figure out whether you can solve your problem for a specific case. Often, a problem is difficult in general but admits efficient algorithms for special cases. This can involve restricting the problem space (instead of general graphs, consider only bipartite graphs or trees) or modifying the problem itself (instead of arbitrary cost functions, impose some restriction on the cost function).
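As a small sketch of this idea: maximum independent set is $\mathbf{NP}$-hard on general graphs, but on trees it falls to a linear-time dynamic program (each vertex is either in the set, excluding its children, or out of it). The representation below — an adjacency list with a chosen root — is just one convenient setup, not the only one.

```python
def max_independent_set_tree(adj, root=0):
    """Size of a maximum independent set in a tree, by DP.

    adj: adjacency list of an undirected tree (list of lists).
    Runs in O(n) time, despite the problem being NP-hard in general.
    """
    n = len(adj)
    parent = [-1] * n
    order = []            # vertices in DFS discovery order
    seen = [False] * n
    seen[root] = True
    stack = [root]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
    incl = [1] * n        # best size if vertex is included
    excl = [0] * n        # best size if vertex is excluded
    for u in reversed(order):   # process children before parents
        p = parent[u]
        if p != -1:
            incl[p] += excl[u]              # including p forbids u
            excl[p] += max(incl[u], excl[u])
    return max(incl[root], excl[root])
```

For example, on the path $0$–$1$–$2$–$3$ the answer is $2$, and on a star with three leaves it is $3$ (take all the leaves).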

If that doesn't work, the next thing you can do is figure out whether you must solve your problem optimally, or whether there's room for some slack. An optimization problem may be difficult to solve exactly, but if you're willing to put up with some degree of suboptimality and accept a "good enough" solution, then many problems can be solved efficiently. This approach leads to the study of approximation algorithms, which are algorithms that guarantee a solution within a certain factor of the optimum (e.g., a vertex cover that is at most twice the size of a minimum one).
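The vertex cover guarantee mentioned above comes from a classic and very short algorithm: greedily pick a maximal matching and take both endpoints of each matched edge. A minimal sketch (assuming edges are given as pairs of vertices):

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover via a maximal matching.

    Take any uncovered edge and add BOTH endpoints to the cover. Since any
    cover must contain at least one endpoint of each matched edge, the
    result is at most twice the size of an optimal cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

On a triangle this returns a cover of size 2, which happens to be optimal; on a path $0$–$1$–$2$ it returns $\{0, 1\}$, twice the optimum $\{1\}$ — the worst case the guarantee allows.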

But there are also many problems that are not approximable efficiently unless $\mathbf P = \mathbf{NP}$. So what can you do then? At this point you cry and get ready to design a non-polynomial-time algorithm. But even here, making use of the right algorithm design strategies can make a big difference. We've already seen an example of this: the 0-1 knapsack problem. By employing dynamic programming, we got an algorithm that runs in $O(nW)$ time, where $W$ is the knapsack capacity. This is pseudo-polynomial rather than truly polynomial (since $W$ can be exponential in the input size), but when $W$ is modest it is a significant improvement over the brute-force approach of checking all $2^n$ subsets of items.
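As a reminder of how compact that knapsack DP is, here is a one-dimensional version of it (one of several equivalent formulations of the table we saw):

```python
def knapsack(values, weights, W):
    """0-1 knapsack by dynamic programming in O(n * W) time.

    best[c] = maximum value achievable with total weight at most c.
    Capacities are scanned in decreasing order so each item is used once.
    """
    best = [0] * (W + 1)
    for v, w in zip(values, weights):
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]
```

With values $(60, 100, 120)$, weights $(10, 20, 30)$, and $W = 50$, this returns $220$ (the last two items), found without ever enumerating the $2^3$ subsets.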

Beyond $\mathbf{NP}$-completeness

There's a lot left to explore even beyond dealing with $\mathbf{NP}$-completeness. Here's a bit of a roundup of things we've seen that might be of interest.

Hopefully, this is enough to give you a picture of what else is out there, whether you decide to continue studying theoretical computer science or not.