The Programmed Programmer to End All Programmers
A reflection on choosing a major in the age of AI.

Luke Taylor
I have a big decision to make. I’m a spring-semester freshman at the University of Missouri, and the importance of fully committing to a major is top of mind. I’m not just choosing which books I’ll read or which professors I’ll listen to for the next three years. I’m choosing my profession, or at the very least, I am selecting my traits and building a set of skills that will suit some jobs better than others. If this is all a video-game-like simulation, I am on the character selection screen, but there is no restart button to press if I lose interest.
To make things particularly tough, I must make this decision in the face of rising artificial intelligence platforms and the applications built on top of them. Past major technological advances (mainly the Industrial Revolution) threatened blue-collar labor more than they did the paper pushers and the “knowledge” workers. But the rise of artificial intelligence threatens the latter class: the engineers, scientists, doctors, lawyers and operatives. In the early 1900s, college students didn’t have this same concern when selecting majors. It was the farmers and the people working in trades who carried the displacement anxiety. College students felt comfy in their impenetrable ivory towers. I don’t feel that same comfort.
As far as historical analogies go, it is comforting to look back at the Industrial Revolution. On the farms, the jobs got easier, but not fewer. Humans were augmented by the machines, not replaced. Farm and factory hands were each individually far more productive than their predecessors, but the suits didn’t hire fewer of them; they hired more and massively increased output. I have to hope that, in general, the AI revolution will follow the same course. But I’m not sold enough on the analogy to pacify my concerns.
My foremost interest has always been computer programming. And you’d intuitively think that an AI revolution would make a computer science major the obvious choice for someone so inclined. Obviously, once-in-a-lifetime computer science advances are great for computer scientists, right?.........right?
But fate is sometimes ironic, and it’s possible that we programmed the programmer to end all programmers. Consider that Sam Altman recently said “o3 is currently the 175th best programmer in the world,” and that there is no reason to expect the progress to stop: models from the previous couple of years started out ranked merely in the top million and quickly advanced into the top 10,000. More directly, he tells people that large language models will displace knowledge workers on a grand scale. And we are to believe that AGI is on the horizon, essentially a man-made god that can do everything we can do, but better. It is as if the technologists are drawing a line that climbs ever more steeply upward, and at the end of the line is the singularity. At that point humans don’t work anymore; instead we hold index funds, collect UBI payments and just chill, or maybe are enslaved, depending on how the alignment efforts go. That’s pretty f’ed from the perspective of a college student who hopes to contribute to the world one day.
But thankfully there is good reason to take Sam’s statements (and those of his contemporaries) with a grain of salt. First, OpenAI’s massive valuation rests more on hand-wavy expectations about its future capabilities than on its current capabilities or revenue (expectations that Microsoft’s Satya Nadella recently threw cold water on, despite Microsoft’s major investment in OpenAI). With a valuation that large, and revenue that is tiny in comparison, Sam has to juice up those expectations to justify it. The incentive structure forces partisan promotion over neutral analysis. Second, Sam is ranking o3 against human programmers based on its performance on standardized benchmark tests. That’s like raising Lightning McQueen on a NASCAR track and never letting him leave. He’ll excel at driving extremely fast on an oval track under consistent conditions, but take him off-roading or drop him in a city and tell him to get from point A to point B, and he’ll flounder. Similarly, o3 is nearing the top of the class in a particular controlled environment, but it is far from ready to navigate the real world without a human in the loop. A model that’s really good at solving a specific set of problems is a long way from a general intelligence that can adapt to the wide range of challenges a human programmer faces.
Okay, so the programmed programmer to end all programmers is not here yet, but it’s not out of the question. If that timeline comes to fruition, it is worth noting that all knowledge work is equally at risk. AI will be the lawyer to end all lawyers, the financial analyst to end all financial analysts, the only paper pusher in the paper-pushing universe and the final winner of the Nobel Prize. In the AGI future, selecting a major is like choosing whether to get gas at the Conoco or the 7-Eleven across the street. It’s all gas and it’s all the same price.
Let’s get realistic, though. Complete job replacement is a crazy expectation. The real concern is a high level of job displacement: fewer jobs, tougher job markets, fancy pieces of paper in fancier frames, and lots of unhappy college graduates carrying a higher debt-to-EBITDA ratio than a sh***y PE portco. We’d still have jobs, but they wouldn’t be the jobs we imagined for ourselves when we went to school.
To prepare for the AI future, and to avoid that fate, one can plan along two separate but related dimensions. First, one can select a major that is more resistant to AI-driven job displacement than others. But that is not an easy task. AI threatens knowledge work in general. One piece of paper with words on it is not that different from another piece of paper with different words, or from a computer screen displaying machine logic. It’s all letters, words and screens where those letters and words are organized into coherent thoughts. Sure, one could argue (and this argument is put forward often) that jobs with more of a client-services bent are less at risk. But is that really true? Chatbots are increasingly good at handling customer service. And even if it is true, will one major be significantly better than any other for developing client-services skills? General electives like public speaking handle people-skills development, and so do the group projects and internships common to all majors. Major-level classwork is about acquiring domain-specific knowledge, not developing client-services skills. So I am left to conclude that there is no way to improve my displacement resistance by optimizing along this first dimension.
Second, one can develop a skill for using AI applications. For LLMs, the relevant skill is prompt engineering. The better you are at communicating with the model (redirecting, pushing, prodding), the more likely it is to produce your desired output. Knowledge workers who have a better-than-average command of AI tools will be the most productive in the AI future, just as the farmers who could get the most out of a tractor were the most productive farmers at the outset of the Industrial Revolution.
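To make that concrete, here is a minimal sketch of what the skill looks like in practice, assuming the OpenAI Python SDK and an API key in the environment; the model name and both prompts are purely illustrative, not a recommendation:

```python
# Illustrative only: contrasting a vague prompt with a directed one.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

vague = "Write a function that sorts a list."

directed = (
    "Write a Python function that sorts a list of dicts by their 'date' key "
    "(ISO 8601 strings), newest first, placing entries with a missing 'date' "
    "last. Include type hints and a short docstring. Return only the code."
)

# Send each prompt to the same model and compare what comes back.
for prompt in (vague, directed):
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The vague prompt leaves the model to guess the language, the tie-breaking rules and the output format; the directed prompt pins all three down. That gap, multiplied across a workday, is the productivity edge I’m talking about.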
But these two dimensions are related. Relative to others, computer science majors are naturally inclined to understand and use AI applications, which raises an interesting point: it may be harder to gain an edge over other computer scientists by using AI than it is for, say, a lawyer to gain an edge over non-AI-using lawyers. But thankfully, prompt engineering is a natural-language endeavor. And if you’ve ever heard a computer science nerd talk in public, you’d know that computer scientists on average don’t have a competitive advantage there.
Netting everything out, I don’t see any single field as clearly more or less exposed to AI-induced displacement. A level-headed look at current events leaves plenty of room for hope, and I think that if I play my cards right by adapting to AI instead of resisting it, a computer science major is my best bet.