In the context of an algorithms course, we are interested in solving problems efficiently. In less academic settings, part of what you'll have to do is deal with these difficult problems somehow: while you may not be cooking up reductions, you'll need to recognize when you might have a hard problem on your hands and know what to do when you do. Unfortunately, we don't have time to get into much of this, but it is useful to know that we (mostly) haven't just given up and curled into the fetal position.
So suppose you have an $\mathbf{NP}$-hard problem that you have to solve. How do you deal with it? You have a few options.
The first step is to figure out if you can solve your problem for a specific case. Oftentimes, a problem might be difficult in general but exhibit efficient algorithms for special cases. This can involve modifying the problem space (instead of general graphs, consider only bipartite graphs or trees) or modifying the problem itself (instead of arbitrary cost functions, impose some restriction on the cost function). For example, maximum independent set is $\mathbf{NP}$-hard on general graphs, but it can be solved in polynomial time on bipartite graphs and in linear time on trees (see the sketch below).
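As a concrete illustration of the "restrict the inputs" strategy, here is a minimal sketch of the tree case in Python. The function name and the adjacency-list representation are my own choices, not something from the notes; the point is just that a simple two-state dynamic program handles on trees what is $\mathbf{NP}$-hard in general.

```python
def max_independent_set_tree(adj, root=0):
    """Maximum independent set on a tree via a two-state DP.
    adj: adjacency list (dict mapping vertex -> list of neighbors)."""
    def dfs(v, parent):
        include = 1      # best value if v is in the set
        exclude = 0      # best value if v is left out
        for u in adj[v]:
            if u == parent:
                continue
            inc_u, exc_u = dfs(u, v)
            include += exc_u              # children of an included vertex must be excluded
            exclude += max(inc_u, exc_u)  # otherwise take the better option per child
        return include, exclude

    return max(dfs(root, -1))

# A path on 5 vertices: the optimum takes the 3 alternating vertices.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(max_independent_set_tree(path))  # 3
```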
If that doesn't work, the next thing to figure out is whether you must solve your problem optimally, or whether there's room for some slack. An optimization problem may be difficult to solve exactly, but if you're willing to put up with some degree of suboptimality and accept a "good enough" solution, then many problems can be solved efficiently. This approach leads to the study of approximation algorithms, which are algorithms that guarantee a solution within a certain factor of the optimum (e.g., a vertex cover at most twice the size of the smallest one).
Vertex cover is the standard easy example. Here's a simple way to approximate a minimum vertex cover on any graph: find a maximal matching and take both endpoints of every matched edge.
This can be expressed more directly as a greedy algorithm (greedy algorithms get quite a lot of use in approximation algorithms), as in the sketch below. Here's the idea: each edge in the matching covers all edges incident to its endpoints, so the result really is a cover. But any cover needs at least one vertex per edge of the matching, since edges in the matching don't share endpoints. This means that, at worst, you've got a vertex cover that's twice as large as the optimal one. Maybe that's good enough.
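Here is that idea as a short greedy sketch in Python (the function name and edge-list representation are mine). Edges whose endpoints are both still uncovered form the matching, and both endpoints go into the cover.

```python
def approx_vertex_cover(edges):
    """2-approximation for vertex cover: greedily build a maximal matching
    and take both endpoints of every matched edge."""
    cover = set()
    for u, v in edges:
        # If neither endpoint is covered yet, (u, v) joins the matching.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: a star plus one extra edge.
edges = [(0, 1), (0, 2), (0, 3), (2, 3)]
print(approx_vertex_cover(edges))  # {0, 1, 2, 3}; the optimum {0, 2} has size 2
```

On this example the greedy cover has exactly twice the optimal size, so the factor-2 bound is tight.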
Finding such approximations is often, but not always, possible, and a wide range of schemes can arise, depending on whether you want a certain degree of approximation or whether you're willing to sacrifice a bit of performance for more accuracy. For instance, while Vertex Cover is approximable to within a factor of 2, Set Cover is approximable to within a factor of $O(\log n)$ but not to within any constant factor unless $\mathbf P = \mathbf{NP}$. In other words, you can't get a constant-factor approximation for it.
But there are also many problems that are not approximable efficiently unless $\mathbf P = \mathbf{NP}$. So what can you do then? At this point you cry and get ready to design a non-polynomial-time algorithm. But even here, making use of the correct algorithm design strategies can make a big difference. We've already seen an example of this: the 0-1 knapsack problem. By employing dynamic programming, we got a solution that runs in $O(nW)$ time for $n$ items and capacity $W$, as opposed to the brute-force approach of checking all $2^n$ subsets of items. That's a significant improvement, even though it's only pseudo-polynomial: $W$ can be exponential in the size of the input.
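For reference, here is a compact sketch of that dynamic program (the interface is mine, not the version from lecture). It keeps one row of the table and sweeps capacities downward so each item is used at most once.

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack by dynamic programming: O(n * W) time, O(W) space."""
    # best[w] = best total value achievable with capacity w
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is taken at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```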
One can also leverage developments in technologies like SAT solvers, which are programs designed to solve very large instances of SAT very quickly. While there are no theoretical guarantees on performance (the worst case is still not polynomial), they often work well in practice on reasonably large instances. Taking advantage of this relies on recognizing that you have an $\mathbf{NP}$-complete problem in the first place and knowing how to transform it into SAT.
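To make the "transform it into SAT" step concrete, here is a small sketch (the encoding and helper names are mine) that writes a graph 3-coloring instance in the standard DIMACS CNF format that most SAT solvers accept: one Boolean variable per (vertex, color) pair, with clauses saying every vertex gets exactly one color and adjacent vertices get different colors.

```python
def three_coloring_to_dimacs(n, edges):
    """Encode 3-colorability of an n-vertex graph as CNF in DIMACS format.
    Variable var(v, c) is true iff vertex v gets color c."""
    def var(v, c):
        return 3 * v + c + 1          # DIMACS variables are numbered from 1

    clauses = []
    for v in range(n):
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])   # at least one color
        for c in range(3):                                   # at most one color
            for d in range(c + 1, 3):
                clauses.append([-var(v, c), -var(v, d)])
    for u, v in edges:                                       # endpoints differ
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])

    lines = [f"p cnf {3 * n} {len(clauses)}"]
    lines += [" ".join(map(str, clause)) + " 0" for clause in clauses]
    return "\n".join(lines)

# A triangle is 3-colorable; feed the output to any DIMACS-compatible solver.
print(three_coloring_to_dimacs(3, [(0, 1), (1, 2), (0, 2)]))
```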
There's a lot left even beyond just dealing with $\mathbf{NP}$-completeness. Here's a bit of a roundup of things we've seen that might be of interest.
Probabilistic and randomized algorithms. We touched very briefly on randomized algorithms, but they show up in many different places where insisting on deterministic choices makes problems difficult to solve. A big open question in complexity theory is whether randomization actually gives us more computational power; i.e., are there problems that can be solved in polynomial time using some randomness that can't be solved deterministically in polynomial time?
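One classic illustration (my example, not something from lecture) is Freivalds' algorithm: it checks whether $AB = C$ in $O(n^2)$ time per trial by multiplying with a random vector, whereas the obvious deterministic check recomputes the whole product.

```python
import random

def freivalds(A, B, C, trials=20):
    """Randomized check that A @ B == C for n x n matrices (lists of lists).
    Each trial costs O(n^2); a wrong C is caught with probability >= 1/2 per trial."""
    n = len(A)

    def mat_vec(M, x):
        return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, x)) != mat_vec(C, x):
            return False        # definitely not equal
    return True                 # equal with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]        # the true product
print(freivalds(A, B, C))       # True
```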
Data structures. One aspect of the class that we skimmed over was our use of data structures. Though data structures play an important role, their design is a bit orthogonal to algorithm design. However, there are a lot of data structures out there and it's definitely worth exploring them. The study of data structures also lends itself to more implementation-minded approaches, since you do have to think about representation. They also make great programming practice.
Space. We didn't talk a lot about space. There are interesting questions in both directions. If we ask what we can solve with a polynomial amount of space, the answer is a lot. This corresponds to the complexity class $\mathbf{PSPACE}$, which contains all of $\mathbf{NP}$. Surprisingly, we don't know whether $\mathbf{NP} \neq \mathbf{PSPACE}$, and maybe even more surprisingly, we don't even know whether $\mathbf{P} \neq \mathbf{PSPACE}$!
Sublinear algorithms. This traditionally means sublinear-time algorithms, but we can also throw in sublinear-space algorithms. Roughly speaking, a sublinear algorithm can't even read or store its entire input. This sounds a bit strange, but we encountered sublinear-space requirements with the vote-counting algorithms, while binary search is the typical example of a sublinear-time algorithm. This has become quite important as we deal with very large amounts of data.
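As a sketch of the sublinear-space flavor (assuming the vote-counting reference is to majority-style counting; the code and names are illustrative), the Boyer-Moore majority vote algorithm finds a candidate majority element in one pass with $O(1)$ extra space.

```python
def majority_candidate(stream):
    """Boyer-Moore majority vote: one pass, O(1) extra space.
    Returns the majority element if one exists; otherwise the result is arbitrary
    and should be verified with a second pass."""
    candidate, count = None, 0
    for item in stream:
        if count == 0:
            candidate, count = item, 1
        elif item == candidate:
            count += 1
        else:
            count -= 1
    return candidate

votes = ["A", "B", "A", "C", "A", "A", "B"]
print(majority_candidate(votes))  # "A" (appears 4 of 7 times)
```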
Online algorithms. Similar to the space concerns above, there are many settings where we have to solve problems or make decisions with incomplete, on-the-fly information. Such algorithms are called online algorithms: one receives a stream of information and must commit to decisions based only on the partial information seen at each step.
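The classic toy example (mine, not from lecture) is ski rental: each day you either rent skis for one unit or buy them for $B$ units, without knowing how long the season will last. The break-even strategy sketched below never pays more than about twice what you would pay knowing the season length in advance.

```python
def ski_rental_cost(buy_price, season_length):
    """Online break-even strategy: rent until rentals would reach the buy price,
    then buy. Total cost is at most (2 - 1/buy_price) times the offline optimum."""
    cost = 0
    for day in range(1, season_length + 1):
        if day < buy_price:
            cost += 1           # keep renting
        else:
            cost += buy_price   # buy on the break-even day
            break
    return cost

buy_price, season = 10, 25
online = ski_rental_cost(buy_price, season)
offline = min(season, buy_price)   # the optimum knows the season length in advance
print(online, offline)             # 19 10 -> competitive ratio 1.9
```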
Parallelism. We've looked at traditional serial algorithms in this course. But there is a lot of theoretical work on parallel algorithms and parallel models of computation. The issues include how to split problems up, how to manage communication, and how to measure its cost.
Decidability. If you want to go way beyond the realm of practical problems that we'd want to solve, you can ask whether it's possible to solve certain problems at all. This is where we get into classifying decidable vs. undecidable problems. The answers to these questions were among the first results that established computer science as a field.
Hopefully, this is enough to give you a picture of what else is out there, whether you decide to continue studying theoretical computer science or not.