In the context of an algorithms course, we are interested in solving problems efficiently. And in less academic settings, part of what you'll have to do is somehow deal with these difficult problems. While you may not be cooking up reductions, you'll need to recognize when you might have a hard problem on your hands and what to do when you do. Unfortunately, we don't have time to get into a lot of this, but it is useful to know that we (mostly) haven't just given up and curled into the fetal position.
So suppose you have an $\mathbf{NP}$-hard problem that you have to solve. How do you deal with it? You have a few options.
The first step is to figure out whether you really need to solve your problem in full generality. Often, a problem is difficult in general but admits efficient algorithms for special cases. This can involve restricting the problem space (instead of general graphs, consider only bipartite graphs or trees) or modifying the problem itself (instead of arbitrary cost functions, impose some restriction on the cost function).
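As a hedged sketch of this idea: maximum independent set is $\mathbf{NP}$-hard on general graphs, but on trees a simple dynamic program solves it in linear time. The function name and tree representation below are illustrative, not from the course.

```python
# Sketch: maximum independent set is NP-hard in general, but on a tree
# it falls to dynamic programming over subtrees in linear time.

def tree_max_independent_set(adj, root):
    """Size of a maximum independent set of a tree.

    adj: dict mapping node -> list of neighbors; root: any node.
    """
    # Iterative DFS to get an ordering where parents precede children.
    order, parent = [], {root: None}
    stack = [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)

    # incl[v]: best size in v's subtree if v is in the set;
    # excl[v]: best size if v is excluded.
    incl, excl = {}, {}
    for v in reversed(order):          # children before parents
        incl[v], excl[v] = 1, 0
        for u in adj[v]:
            if u != parent[v]:
                incl[v] += excl[u]                  # v taken: children must be out
                excl[v] += max(incl[u], excl[u])    # v not taken: children free
    return max(incl[root], excl[root])
```

The same two-state recurrence (node in vs. out of the set) is the template for many problems that become easy once the graph is a tree.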
If that doesn't work, the next thing you can do is figure out whether you must solve your problem optimally, or whether there's room for some slack. An optimization problem may be difficult to solve exactly, but if you're willing to put up with some degree of suboptimality and accept a "good enough" solution, then many problems can be solved efficiently. This approach leads to the study of approximation algorithms, which guarantee a solution within a certain factor of optimal (e.g., a vertex cover that is at most twice the size of the smallest one).
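To make the guarantee concrete, here is a sketch of the classic 2-approximation for vertex cover (names illustrative): repeatedly take both endpoints of an uncovered edge. Any optimal cover must contain at least one endpoint of each edge we pick, and we pick at most two per such edge, so the result is at most twice optimal.

```python
# Sketch: 2-approximation for vertex cover via a greedy maximal matching.

def approx_vertex_cover(edges):
    """Return a vertex cover at most twice the optimal size.

    edges: list of (u, v) pairs.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge (u, v) is uncovered: take both endpoints.
            cover.add(u)
            cover.add(v)
    return cover
```

Note how cheap the guarantee is: one linear pass, no optimization machinery, yet provably within a factor of 2.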
But there are also many problems that cannot be approximated efficiently unless $\mathbf P = \mathbf{NP}$. So what can you do then? At this point, you cry and get ready to design a non-polynomial-time algorithm. But even here, using the right algorithm design strategies can make a big difference. We've already seen an example of this: the 0-1 knapsack problem. By employing dynamic programming, we got a solution that runs in $O(nW)$ time, where $W$ is the knapsack capacity. This is pseudo-polynomial rather than truly polynomial, but it is still a significant improvement over the brute-force approach of checking all $2^n$ subsets of items.
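A hedged sketch of that dynamic program, assuming integer weights and capacity; the one-dimensional table shown here is a standard space-saving variant of the $O(nW)$ recurrence:

```python
# Sketch: 0-1 knapsack in O(n*W) time and O(W) space via dynamic programming.

def knapsack(values, weights, W):
    """Maximum total value of a subset of items with total weight <= W."""
    best = [0] * (W + 1)                 # best[w] = max value with capacity w
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]
```

For modest capacities this is entirely practical, even though the running time is exponential in the number of bits of $W$.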
There's a lot left even beyond just dealing with $\mathbf{NP}$-completeness. Here's a bit of a roundup of things we've seen that might be of interest.
Space. We didn't talk much about space, but there are interesting questions in both directions: what we can do with a lot of it, and what we can do with very little. If we ask what we can solve with a polynomial amount of space, the answer is a lot. This corresponds to the class of problems $\mathbf{PSPACE}$, which contains all of $\mathbf{NP}$. Surprisingly, we don't know whether $\mathbf{NP} \neq \mathbf{PSPACE}$, and maybe even more surprisingly, we don't even know whether $\mathbf{P} \neq \mathbf{PSPACE}$!
Sublinear algorithms. This traditionally means sublinear-time algorithms, but we can throw in sublinear-space algorithms as well. Roughly speaking, a sublinear algorithm cannot even read or store the entirety of its input. This sounds a bit strange, but we encountered sublinear-space requirements with the vote counting algorithms, while binary search is the typical example of a sublinear-time algorithm. This area has become quite important as we deal with very large amounts of data.
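The vote counting reference is presumably to something like the Boyer-Moore majority-vote algorithm, which makes a nice illustration of sublinear space: one pass over the stream, two variables of state, and it finds the majority element whenever one exists (a second pass can verify the candidate if needed).

```python
# Sketch: Boyer-Moore majority vote -- O(1) extra space, one pass.
# If some element occurs more than half the time, it is returned;
# otherwise the returned candidate is arbitrary and needs verification.

def majority_candidate(stream):
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1      # adopt a new candidate
        elif x == candidate:
            count += 1                   # candidate gains a vote
        else:
            count -= 1                   # candidate loses a vote
    return candidate
```

The point is that the algorithm never stores the stream: its memory footprint is constant no matter how much data flows past.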
Online algorithms. Similar to space concerns, there are many settings where we have to make decisions with incomplete, on-the-fly information. Algorithms for such settings are called online algorithms: one receives a stream of information and must make a decision based on the partial information received at each step.
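A standard toy example (not one from the course) is the "ski rental" problem: renting skis costs 1 per day, buying costs $B$, and we learn how long the season lasts only as it unfolds. The break-even strategy, rent until the total rent paid would reach $B$ and then buy, never pays more than twice what an offline optimum (which knows the season length in advance) would pay.

```python
# Sketch: the break-even rule for ski rental. Rent while cumulative rent
# stays below the purchase price B, then buy. Total cost is at most
# 2B - 1, while the offline optimum pays min(days, B) -- a ratio below 2.

def ski_rental_cost(days, buy_price):
    """Total cost of the break-even strategy over a season of `days` days."""
    cost = 0
    for day in range(1, days + 1):
        if day < buy_price:
            cost += 1              # keep renting: rent so far < B
        else:
            cost += buy_price      # day B arrives: buy and stop paying
            break
    return cost
```

This factor-of-2 guarantee against an all-knowing adversary is the online analogue of an approximation ratio, usually called a competitive ratio.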
Parallelism. We've looked at traditional serial algorithms in this course, but there is a lot of theoretical work on parallel algorithms and parallel models of computation. The issues include how to split problems up, how to manage communication, and how to measure its cost.
Decidability. If you want to go way beyond the realm of practical problems that we'd want to solve, you can ask whether it's possible to solve certain problems at all. This is where we get into classifying problems as decidable vs. undecidable. The answers to these questions were among the first results that established computer science as a field.
Hopefully, this is enough to give you a picture of what else is out there, whether you decide to continue studying theoretical computer science or not.