My academic funding was running out and my patience with the delirious pace of start-up culture had run thin. You might say I got fired; I might reply that I was never really hired in the first place. But the point was, I needed to go back on the job hunt. So I logged into LinkedIn and started networking.
Going only by the strength of my notifications, the world was whipsawing back and forth between the thrill of revolutionary new discovery and the gloom of the imminent devastation of everything we hold dear. It was the best of times to look for a job; it was the worst of times to be a young professional. It was the age of unbounded wisdom at our fingertips; it was the age of foolishness subverting our best intentions. It was the epoch of belief in the arrival of a higher power; it was the epoch of incredulity at the credulous fascination with tawdry tricks. It was the season of the dawn of a new Light; it was the season of fall into a hitherto unknown Darkness. It was the spring of hope and change; it was the winter of despair and rapine. We had everything before us to discover, we had nothing before us to ever do again, we were all going direct to the Heaven of plenty, we were all going direct to the Hell of plunder. In short, the period was so much like Dickensian London that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
After a large number of applications and a small number of interviews, I found myself with a job offer at a large, relatively benign local company. And in the latter part of last year, I began a new career as an AI Scientist.
Many things about the new job confound me; many other things bore me; a few things have surprised me; but, mostly, what I have experienced has felt familiar and comfortable.
My experience at my new employer, which I'll refer to as MegaCorp, contrasts starkly with the grindset startup culture of yAI. There, the consistent and omnipresent feeling was that we weren't producing fast enough. There was so much to do! My manager would send me voice memos on his way to and from the office; he took calls in the airport surrounded by his family, and from the beach in the midst of their nominal vacation.
In response, I produced, churned, and sent stuff out, all the way from initial, "exploratory" analysis to the nigh-mystical status of "prod," or production, at which point my work would be maximally accessible to everyone in the company. I, and everyone, made mistakes running along this hamster wheel—which, it was made clear, was OK, as long as we stuck around to fix the inevitable mistakes and keep the machine humming. But the strategy and purpose that motivated our running were rarely thought out, and much of the effort was wasted in recovering from failure that could have been avoided through testing and more careful design.
At MegaCorp, I am insulated from so much of this. I am sitting at a low position in a vast hierarchy. I am replaceable not only in the grand capitalistic sense of performing replicable and transferable units of work, but in the more local and literal sense that my work overlaps to a great degree with something else done by someone else at some analogous but displaced position in our exponentially large hierarchy of people.
The deal here is different from the one at yAI: What the big, slow corporate culture allows is that things take their time, evolve, and, frequently, get abandoned for reasons opaque to the lower-downs. At some level hidden from me, managers duke it out and pitch products internally. Much effort will be forgotten, and a new assignment will be given. This can be frustrating to people who get attached to the significance and import of the work they've been assigned, but the redundancy seems to me intentional rather than pathological.
In both of these jobs, the general goal has been to turn data into a predictive model. In both cases, AI has been a core part of the description. But the actual experience of it has been quite different.
At yAI, it always felt like there was more work than time or people (or, if you will, time times people). Adding sufficiently capable AI to the loop of work makes the loop superficially faster, and, since speed is the only imperative, it becomes an essential part of the workflow and expectations. Such an environment sensitively depends on the capability of the AI. If it's faster than the median programmer they can hire, then they'll cease to have reason to hire people. And so, to people in a startup, the arrival of AI might really be the falling of the sky.
At MegaCorp, however, there are far more people than real needs. Many of the people are managers, existing in a hierarchy that extends so far that, from the point of view I enjoy, the CEO might as well be an infallible, inscrutable god. The daily work at MegaCorp does not consist of finding some solution, any solution, to technical problems, nor deploying those solutions to prod as quickly as possible, nor staying up all night to patch the problems introduced by the unavoidable frictions and problems created by this rush to produce. Instead, the issue is the existence of quarterly reports, and shareholders, and performance reviews. (My first one went well, thanks.)
What will the introduction of AI do to this ecosystem? The question spawns another: What does the manager want? Sure, AI has changed the corporate world in the sense that my manager now sends me dumb ideas and pseudocode typed up (quickly!) by Copilot, and has defaulted all our video calls to what I refer to as "surveillance mode," so that even working sessions with long stretches of silence can be recorded, transcribed, and summarized. But ultimately a manager measures their worth by the number of people they employ, and the complexity of the problems their team can take on. There is a lot of inertia in corporate America that AI isn't going to meaningfully alter.
Of the laundry list of logical fallacies in a recently viral opinion piece from startup founder Matt Shumer, the one that most demands dismantling is one of the early ones. In describing the whirlwind pace of technological evolution, he writes:
> The AI labs made a deliberate choice. They focused on making AI great at writing code first ... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers ... it was just a side effect of where they chose to aim first.
>
> They've now done it. And they're moving on to everything else.
This is simultaneously and variously:
- False: AI labs started off writing shitty poems and boring summaries of cricket matches, not blocks of code;
- Misleading: AI is pretty good at increasing the efficiency of a very limited set of the tasks that are relevant to training AI models, such as distributed computing, but not necessarily making meaningful strides in algorithms or structures the way this suggests;
- Misguided: What things other than "writing more code" does writing code "unlock," exactly?;
- Unsupported: Moving on to "everything else" is a pretty big leap!; and
- Sad: I'm sorry you made a dumb app, fam.
The fact is that AI products are good at writing code now not for any deep or sinister reason but rather because writing code is easy, and training a model on it is easier. There are many valid examples of code in the wild; if you're trying to get your AI to do something, it is easy for you to write an infinitude of additional examples to train your model on. And, crucially, for all of these examples, including the ones you're generating yourself, the correct answers are easy to evaluate and characterize in any number of ways (Did it get the right answer? And: How long did it take? And: How much memory did it require? And: How many recursive calls did it make?). The key is the ability to make concrete, objective judgments of the chatbot's output. This means that you can quantitatively tell your model "You see that difference? I want more of that," and it will reliably do it.
What I've described here is the scheme of reinforcement learning. The "one weird trick" that made a model capable of doing superficially impressive things with Atari games and defeating Lee Se-dol in Go is now being applied to code. This ability to make quantitative comparisons between two model outputs is crucially different from, say, constructing a legal brief. Given two briefs, which is better? There is some degree of baseline ability that allows quick and easy discernment, since there are certain characteristics each must have (like, for example, summarizing all of the material facts), and certain things they must not contain (like, for example, falsehoods, or descriptions of events that never happened). Assuming you can solve for those basic criteria—which itself has proven somewhat more difficult than expected—how do you move any further, from "not false" to "good"? The path isn't as clear because not every task can be converted to a reinforcement learning task.
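To make the point concrete, here is a toy sketch of the kind of objective grading that makes code such a tractable reinforcement-learning target. Everything in it is invented for illustration: the `reward` function, the two candidate "model outputs," and the test cases are all hypothetical, not anyone's actual training pipeline.

```python
import time

def reward(candidate_fn, test_cases):
    """Score a candidate solution: correctness is required, speed breaks ties."""
    start = time.perf_counter()
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0.0  # wrong answer: no partial credit
        except Exception:
            return 0.0      # a crash counts as a failure too
    elapsed = time.perf_counter() - start
    # Any correct solution scores above 1.0; faster ones score higher.
    return 1.0 + 1.0 / (1.0 + elapsed)

# Two hypothetical model outputs for "sum the integers 1..n":
looped = lambda n: sum(range(n + 1))
closed_form = lambda n: n * (n + 1) // 2
broken = lambda n: n * n

tests = [((10,), 55), ((1000,), 500500)]
assert reward(broken, tests) == 0.0       # objectively wrong, scored as such
assert reward(closed_form, tests) > 1.0   # objectively right, scored as such
```

The numbers coming out of `reward` are exactly the "concrete, objective" signal described above: the difference between a legal brief and a sorting routine is that, for the routine, a function like this can be written at all.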
The lunch table talk at MegaCorp isn't about AI chatbots taking our jobs. It's about the possibility of an AI bubble bursting soon, and how that might impact what we do and why.
As an AI scientist, working with AI engineers on the AI team of a large corporation that controls its own models, we're pretty well insulated from, say, the possible failure of OpenAI, or the zillion silly startups that make calls to OpenAI's models or the other big ones on offer. This is true both because we aren't serving models externally, aside from our deep corporate entanglement with Microsoft, and because our work uses different data for different purposes. If you insisted on anthropomorphizing the situation, you could say that AI took its baby steps in 2022, and has made it through high school, and just left for college. Now every company like mine is simultaneously making AI pick its own narrow concern as its major.
AI is here; this is it; you've seen it; ecce AI. What comes next will be euphemistically sold to you as "personalization" or assigned to me as inculcating MegaCorp's point of view in the models it provides to its customers. (A point of view, I feel like I might as well state, that I don't totally object to, and which I feel far more comfortable with than the one that yAI was trying to bring to market.) My work isn't interesting or meaningful except insofar as it matters to my manager, and my manager's manager, up to the very concrete endpoints of CEO, board, and shareholders. It's very finite.
But an AI bubble bursting, precipitated by the sudden rise to (electric, scintillating) competence of coding chatbots, would absolutely have an outsize impact in one place: the startup ecosystem. Filling in the MadLibs of "Uber for __" or "Changing the world with __" will, sad to say, probably not be remunerative for much longer, if it ever was. And since that's where a lot of money is circulating in this economy, the knock-on effects could be significant.
This does matter to some people, and not to others, so you may be asking: What's the bottom line for MegaCorp? At lunch, I chuckle or nod sagely when people say things like "It's a bad time to be in B2B SaaS," which, well, is probably true. Beyond that, what it means to me—speaking personally, and not on behalf of MegaCorp—is that AI is likely to facilitate a new era of corporate consolidation and the expansion of corporate power. The AI isn't going to become sentient, but, as the cost of software falls more or less to zero, startups will fail and the larger AI companies will lose their monopoly on the abilities of cutting-edge models. Big companies will absorb any of the ideas they've seen that are profitable for them to keep, and will repackage and resell them, along with some fancy mumbo-jumbo about empowering customers, if I may dabble in prognosticating sales strategy as well.
From the day-to-day point of view, what I think this means is that people without the means to choose otherwise, or without access to these services at all now, are going to start getting more and more interactions from chatbots. This will look like more and more legal contracts that are 95-percent airtight, more and more healthcare or education mediated by layers of technology. I'm not saying this is good; I'm just saying it isn't a revolution.
By and large, hoeing the row of AI science hasn't felt like cracking open the seal of Pandora's box. My current job has felt more like science than like what I read about AI on LinkedIn or (horrors!) Substack, much more so than, for example, at yAI. As in my scientific life, I spend my days reading papers, writing code, and making plots, only now with the added burden of occasionally complying with some inane demand for corporate ephemera. Every once in a while I ask Copilot to help debug a problem (how to check whether the elements of a pandas column belong to an iterable just won't stick in my mind, heaven help me), but it's not writing whole blocks of code for me, mostly because I'm not under any pressure to write whole blocks of code as fast as I can. This isn't so different from how my scientific life felt, or how it would have evolved.
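For the record, the pandas gotcha in question looks something like this (the DataFrame and column names here are made up for illustration): the natural-seeming `in` operator checks a Series's index, not its values, and the elementwise idiom is `.isin()`.

```python
import pandas as pd

df = pd.DataFrame({"ticker": ["AAPL", "MSFT", "TSLA", "NVDA"]})
watchlist = {"MSFT", "NVDA"}

# Tempting but wrong: `"MSFT" in df["ticker"]` checks the *index*
# (0, 1, 2, 3), not the values, so it returns False here.
assert "MSFT" not in df["ticker"]

# The idiom that won't stick: .isin() does elementwise membership.
mask = df["ticker"].isin(watchlist)
print(df[mask]["ticker"].tolist())  # ['MSFT', 'NVDA']
```

Exactly the kind of thing a chatbot is genuinely useful for: a five-second answer to a question I've asked five times before.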
The main difference is structural. Instead of writing papers and applying to conferences and waiting for invitations to give seminars, I might give an internal presentation or two. My work, even if my managerial representatives win their pitched internal battles and a little sliver of my labor is eventually borne as fruit on the tree of prod, won't carry outward in any significant way. I won't get the satisfaction of feeling like I'm building on the edifice of all human knowledge according to my own judgment and curiosity, which is how my academic career felt at its best. I liked that, of course, and was proud to put my name on what I did, but it came with its own frustrations, and those moments became more fleeting and infrequent. So, in more ways than the one that is obvious here, I'm reconciling myself to being in my anonymous era now.
It's hard to conclude an article about AI, because it's hard to write about AI, because it's easy to write about AI, because it's easy to make any number of valid observations about AI, but all the facially true statements don't quite feel like they add up to the big things that feel necessary to say. The sad and scary truth is that no one knows what the future holds. Plus ça change. But just in case, I'd start thinking of a Plan B if you're in B2B SaaS.