Why Isn’t Security a Required Course for All CS Degrees?
I studied information security for my master’s degree. For two years, I took courses in economics, decision making under uncertainty, security architecture, information warfare, and more. I took zero courses in software development. It wasn’t a requirement. It should have been.
A survey of computer science curricula reveals a similar absence in the opposite direction. Core courses in computer science programs typically cover algorithms and data structures, operating systems, programming languages, calculus, and more. But security is usually an elective, rarely a requirement. (If your school is an exception, that's wonderful! Reply or comment and let me know.) It should be required.
Treating security as an after-the-fact add-on has made the arms race between adversaries and defenders worse. The defenders are almost always behind, trying to graft on patches and update configurations in response to new threats targeting flaws that are far too numerous.
There will still be attackers even if software is more secure.
It’s true that secure software development wouldn’t eliminate all need for after-the-fact patches, because software security involves asymmetric risk: an adversary only has to succeed once, whereas defenders have to succeed every time to prevent a breach.
However, at present, defenders are often forced to accommodate how software was designed by developers who weren't security experts. They could be in a far stronger position, and we're missing the opportunity to put them there: if developers had a baseline understanding of security best practices and secure coding, taught as part of foundational curricula (in bootcamps as well as universities), the software they write would almost certainly have fewer security flaws.
That would put defenders—security experts—in a position of strength at the outset by giving them a much stronger fortress to defend. Defending a fortress built on high ground is much easier than defending a fortress built in the middle of a flat plain, no matter how many cannons you build after the fact. Any general or military history buff will tell you that.
Let’s not just rag on developers.
Absolutely not. This argument goes both ways. If security experts understood software development better, they could be better partners in threat hunting, patching, upgrading, and writing and performing tests.
In short, neither software development nor security management can function optimally without the other field’s active assistance. And education for future professionals in these fields should take this into account. Security should be an integral part of software development, and security specialists should understand software development at the practitioner level, meaning they’ve actually written software before—even if just a class project.
These two professions are complementary, yet they often approach problems from artificially separated perspectives. Bringing those perspectives and understandings closer together could close security gaps and strengthen the resilience of organizations as a whole.
-<>-<>-<>-
Extra, Extra!
Three links from the depths of my bookmark archives; think of these as tangential extras for curious readers:
1. Is giving the secret to getting ahead? - by Susan Dominus in The New York Times - timeless advice at a time when the world really needs it.
2. The definitive story of Steve Wozniak, Steve Jobs, and Phone Phreaking - by Phil Lapsley in The Atlantic, excerpted from the book Exploding the Phone - a slice of tech history from a time that feels really innocent and bygone now.
3. Whatever happened to the phone phreaks? - by Chris Baraniuk in The Atlantic - feels like a companion piece to #2 above.