Consider the two words computer and science. The first’s gifts to the world are huge and range from the lifesaving ubiquity of electronic medical records to the opportunity to get in a virtual bar fight at any time of the day on Twitter. The second word, science, represents an intellectual movement that’s delivered great advances like the polio vaccine, a few trips to the moon and too many others to list. So why does combining the two words into computer science come up so short, especially when we’re looking to staff up our programming teams?
It’s not that the field hasn’t delivered. There are petabytes of LaTeX files out there filled with brilliant ideas like new programming languages, clever search algorithms, machine vision algorithms and a gazillion notions in between. Many of these are wonderful.
The problem is that few of us really need much of it. One friend confessed that he’s watched a development team flourish by passing over CS graduates and hiring only physicists, accountants, and other math-savvy people. These new employees are more practical and focused on getting the machines to deliver, which is what almost every business wants.
It’s not that CS degrees are bad; it’s just that they’re not going to speak to the problems that most of us need to solve. So here is an admittedly incomplete list of reasons why we may be better off ignoring them.
Theory distracts and confuses
Many computer scientists are mathematicians at heart and the theorem-obsessed mindset permeates the discipline. One theoretician told me that all mathematical proofs are really just programs and vice versa, at least in his mind. He’s not really interested in delivering code that does the work as much as proving his code is correct. Okay.
It’s rare for a CS major to graduate without getting a healthy dose of NP-completeness and Turing machines, two beautiful areas of theory that would be enjoyable if they didn’t end up creating bad instincts. One biologist asked me to solve a problem in DNA sequence matching and I came back to him with the claim that it was NP-complete, a class of problems for which no efficient worst-case algorithm is known. He didn’t care. He needed to solve it anyway. And it turns out that most NP-complete problems are pretty easy to solve most of the time. There are just a few pathological instances that gum up our algorithms. But theoreticians obsess over the thin slice of instances that confound the simple algorithms, even though those instances rarely show up in everyday work.
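To make that concrete, here’s a minimal Python sketch using subset sum as a stand-in for the biologist’s matching problem (my choice of example, not his): an instance that is NP-complete on paper but falls to a routine dynamic-programming pass in practice.

```python
import random

def subset_sum(values, target):
    """Return True if some subset of `values` sums to `target`.

    Subset sum is NP-complete, yet this textbook dynamic-programming
    pass dispatches typical instances quickly because the set of
    reachable sums stays manageable. The pathological cases are the
    ones with astronomically large numbers, and they rarely show up.
    """
    reachable = {0}
    for v in values:
        reachable |= {r + v for r in reachable}
        if target in reachable:
            return True
    return target in reachable

# An "intimidating" NP-complete instance that is trivial in practice.
values = [random.randint(1, 500) for _ in range(100)]
print(subset_sum(values, sum(values[:30])))  # True, almost instantly
```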
The same problem happens with Turing machines. Dutiful CS students learn nihilistic results like Rice’s Theorem, which says that no algorithm can decide any non-trivial property of what arbitrary programs do. But Turing machines are a pretty bad model for how our regular machines operate, and it’s often quite easy to build software that does smart things with our code. Any CS major who doesn’t take these theoretical results with a healthy grain of salt risks giving up when a perfectly usable answer is close at hand.
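In the same spirit, Rice’s Theorem forbids a perfect analyzer, not a useful one. Here is a toy sketch of the kind of shallow-but-helpful check any of us can write in an afternoon; the pattern it hunts for is my own invented example, not a rule from any real linter.

```python
import ast

SOURCE = """
def spin():
    while True:
        pass
"""

class ObviousLoopFinder(ast.NodeVisitor):
    """Flag `while True:` loops that contain no break or return.

    Rice's Theorem says no tool can decide non-trivial behavior for
    every possible program; nothing stops us from catching the easy,
    common cases that actually bite people.
    """
    def visit_While(self, node):
        looks_infinite = isinstance(node.test, ast.Constant) and node.test.value is True
        has_exit = any(isinstance(child, (ast.Break, ast.Return))
                       for child in ast.walk(node))
        if looks_infinite and not has_exit:
            print(f"line {node.lineno}: probable infinite loop")
        self.generic_visit(node)

ObviousLoopFinder().visit(ast.parse(SOURCE))  # flags the loop in spin()
```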
Academic languages are rarely used
We shouldn’t be surprised that the academy breeds snobbery and a love for arcane solutions. It does this in every field. When I asked one MIT graduate his favorite language, he proudly told me that he was sure I had no idea what it was. Only after I pushed a bit did he relent and tell me it was CLU. He was right.
There are many great ideas out there from those obsessed with languages, but sometimes these ideas end up creating clutter and confusion. If one team member loves some odd feature and starts including it in the code base, everyone else is going to have to learn it. If everyone does the same, the time to get up to speed is brutal.
This is why Google chose the low road when it built Go. The creators insisted that the language have few constructs and be simple enough to learn in the shortest possible time. That simplicity paid off for everyone, because the whole team knew the small core of the language well.
Many CS professors are mathematicians, not programmers
One of the dirty secrets about most computer science departments is that most of the professors can’t program computers. Their real job is giving lectures and wrangling grants. They understand spreadsheets and grant proposals, not the day-to-day work of doing the research. This is why god gave them grad students. The last time many of them actually programmed a computer was when they themselves were in graduate school. Since then, bit rot and cobwebs have set in, and the compilers on their machines probably won’t even start.
Many required subjects are rarely used
Data structures are often the main topic of the second course students take in computer science. It’s too bad few of us use many data structures anymore. We either plop things into hash tables or push them into a database that does the thinking for us. It’s still useful for everyone to think a bit about algorithmic complexity, but very few need to worry about B-trees or even linked lists. Not only that, but many of us have realized that we’re better off trusting a standard library than fiddling with data structures ourselves. It’s too easy to make mistakes. Many organizations explicitly forbid roll-your-own data structures, with good reason.
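A rough sketch of what that looks like in practice, with made-up data: the standard library’s containers cover almost everything a typical business application reaches for, no pointer juggling required.

```python
from collections import deque
import heapq

# A hash table for lookups: no hand-rolled balanced tree needed.
index = {"user:42": {"name": "Ada"}}
print(index["user:42"]["name"])

# A bounded queue of recent events instead of a homemade linked list.
recent = deque(maxlen=100)
recent.append("page viewed")

# A priority queue via the built-in binary heap.
jobs = []
heapq.heappush(jobs, (2, "send email"))
heapq.heappush(jobs, (1, "charge card"))
print(heapq.heappop(jobs))  # (1, 'charge card'): lowest number wins
```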
There are any number of other examples of subjects in the classic curriculum that just aren’t that important anymore. Compilers are complex and essential, but the only people who write them are students forced to create toy versions in a semester-long course. Even Apple used stock open source tools when it created the compiler for Swift.
Mathematical models take us down the wrong path
Anyone who’s learned database theory has discovered the cleverness of Boyce–Codd Normal Form, the way we break an elaborate data structure down into small tables. It’s all very elegant and efficient, right up until you wait forever for a response to a SQL query filled with JOIN clauses.
Most developer teams quickly learn to “denormalize” their databases to improve performance. In other words, they strip away all of the cleverness and stick the data in one huge table. It’s kind of ugly and wasteful, but it’s often screamingly fast. As for the bloat, disk space is cheap.
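Here is a toy sqlite3 sketch of the trade-off, with invented table and column names: the normalized version needs a JOIN on every read, while the denormalized one answers the same question from a single wide table.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Normalized, by the book: small tables, joined at query time.
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, total_cents INTEGER);

    -- Denormalized: one wide table with the name repeated on every row.
    CREATE TABLE orders_flat (customer_name TEXT, total_cents INTEGER);

    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (1, 999), (1, 2500);
    INSERT INTO orders_flat VALUES ('Ada', 999), ('Ada', 2500);
""")

# The textbook read: a JOIN across the normalized tables.
print(db.execute("""
    SELECT c.name, SUM(o.total_cents)
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
""").fetchall())   # [('Ada', 3499)]

# The denormalized read: one scan of one wide table, no JOIN at all.
print(db.execute("""
    SELECT customer_name, SUM(total_cents)
    FROM orders_flat GROUP BY customer_name
""").fetchall())   # [('Ada', 3499)]
```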
Once they start putting their education into practice, many developers spend a few years unlearning all of the mathematical cleverness from their CS courses.
Institutions breed arrogance
We all think we’re right, but the very nature of an academic degree is designed to give graduates the ability to argue their superiority with authority. This may be true sometimes, but even on the best days it’s hard to know what’s really right, especially in a fast-moving field.
One person I worked with loved the “coding standards” he brought to the department and seized every opportunity to cite them during code review. These standards amounted to fussy opinions about where to place white space, but once he started speaking of them with academic precision, he wielded them like cudgels. He would ding any code with different white space, ominously declaring that it didn’t meet standards. And so we were all stuck counting spaces so everything could fit some quasi-academic standard.
Many modern skills are ignored
Many of the modern skills just aren’t covered in many departments. If you want to understand Node.js, React, game design or cloud computation, you’ll find very little of it in the average curriculum. The average school’s course list concentrates on the fundamentals: deep concepts like race conditions that will be part of computing long after words like Node.js or React are forgotten. That’s a noble goal, but 99 percent of what most programmers do is wrestle with the idiosyncrasies of the buzzword du jour.
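To be fair, those fundamentals do matter; a race condition is exactly the kind of thing the curriculum teaches well. A small Python sketch, with an artificial yield added so the lost updates show up reliably:

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        value = counter       # read
        time.sleep(0)         # yield: another thread can sneak in here
        counter = value + 1   # write back a stale value, losing updates

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # the read-modify-write is now atomic
            counter += 1

def run(worker, n_threads=8, n=1_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{worker.__name__}: expected {n_threads * n}, got {counter}")

run(unsafe_increment)  # usually well short of 8000
run(safe_increment)    # exactly 8000
```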
It’s very common for computer science departments to produce deep thinkers who understand some of the fundamental challenges but lack the shallow knowledge of details that dominates the average employee’s day. This is why companies find it just as worthwhile to hire someone from a physics lab who has been using Python to massage data streams from an instrument. They can learn the shallow details just as readily as the CS genius.
The academic cutting edge is slow to arrive
Machine learning and artificial intelligence are all the rage, and many are racing to experiment with them. But CS departments have been studying them for decades. Does it make sense to chase every idea emanating from the schools today, or to wait patiently until the ideas are finally ready for general use?
Tenure can breed complacency
There are many wonderful reasons behind the institution of tenure, and most professors who enjoy its protections fully deserve the benefits for their contributions way back when. The problem is that in fast-moving fields, today’s students have little need for the insights of even a decade or so ago. Yet tenure guarantees that many professors will be ten, twenty or even thirty years past the days when they had their greatest insights.
And then there are the distractions that can lead professors away. One tenured genius seems to blog endlessly about poker tournaments and the bad beats that sent him home. Another has invested in a minor league baseball team and brags about hanging out with the ball players and shagging balls while they take batting practice.
Intellectualism rarely produces results
When I told one tenured faculty member that one of my students landed a job thanks to a few lectures on Angular and React, he smiled and said, “The last thing I want to do is turn this into a trade school.”
That’s fine, but there aren’t many who can justify spending close to half a million dollars on wondering how many polynomial or exponential angels can dance on the head of a pin. The liberal arts tradition is a wonderful thing, but it encourages a disdain for practical knowledge. It’s all about deep, eternal truths. But when your company has to ship something by next week’s deadline, no one has time to navel-gaze about eternal truths.