The Machines

The AI Floodwaters Are Rising Fast

[Image: website of the search engine Lycos. Photo: Schöning/ullstein bild via Getty Images]

My first favorite search engine was Lycos. Why? There’s no single explanation. I know it had a dog logo, which very possibly made a positive impression on my younger self. It had an exotic-sounding name, so that was a point in its favor. But I just felt like it fit my vibe. AOL? We didn't pay for that. Ask Jeeves? OK, boomer (is what I would have said if I were sassy and/or prescient). AltaVista? Yahoo? (Apologies—Yahoo!?) I dunno; they weren't for me.

In some small way, there was some identity-making going on. But nothing that happened on a computer at that time felt very consequential to me. What were computers for, after all? I didn't really play video games, aside from a brief phase of getting pretty into Civilization (I or II or III? I have no idea). I remember a cousin showing me Myst and being impressed but confused; I remember playing Doom, probably with a sibling or a neighbor. But all of the stuff that happened on the computer seemed a little pointless, a little dull, a little like work. Observationally, people used it to stare at spreadsheets, and to write stuff. (Lame!) Games were on disks (actual floppies, nominal floppies, or CD-ROMs). Otherwise, it was just a boring work box.

The internet seemed like the exception. First of all, it required a bunch of clanky MRI/subway sounds to connect to, and it also required all members of the household to temporarily forfeit the ability to communicate via telephone. Writing that out now, in this era, feels psychotic. But that tradeoff seemed somehow full of promise. Searching the internet felt different than video games or spreadsheets: it opened up possibilities, even if many of them were dumb. So using the dog-logo search engine over the exclamation-point one to navigate that openness felt like a way of expressing myself, however slightly.

I do remember a classmate who was pretty adamant that Google was superior to the other search engines. My first impression, if I am a reliable narrator, was that it had a nice, clean landing page. I also got the sense that it found information somewhat more reliably than the other search engines, but not much beyond that. "A little bit better" than "really sucky" still sucked. Circa Y2K Google wasn't, like, jaw-dropping. You had to figure out how to ask it a question—it didn't glean that information from you. (Also lame!) At the time, this preference felt like something to argue about. Now, with the benefit of hindsight and some personal growth, I can finally admit (a Defector exclusive) that Lycos was perhaps a bit worse.

But little did I know that we were pioneering something new. It might be hard now to fully inhabit the feeling that “search” was a tech development separate from the development of the web itself, but it was. And many people were having the same dissatisfying experiences I was. In her book Close to the Machine, with charmingly antiquated capitalization that gives away the reveal that this was written in 1997, Ellen Ullman reflects on the culmination of a big project. She has begrudgingly let a client convince her that the project should have a web interface, and she observes users trying to connect to her work:

When I watch the users try the Internet, it slowly becomes clear to me that the Net represents the ultimate dumbing-down of the computer. The users seem to believe that they are connected to some vast treasure trove—all the knowledge of our times, an endless digitized compendium, some electronic library of Alexandria—if only they could figure out how to search it properly. They sit and click, and look disconcertedly at the junk that comes back at them. Surely it must be their fault, they reason; surely if they just followed the right links, expressed their query more accurately, used another search tool, then pages and pages of interesting information would soon be theirs.

Good news/bad news, Ellen: that was not quite the ultimate dumbing-down.

As the internet grew, searching became essential, then habitual, and then one of the most fundamental things we do with it: something we do all the time, in many different ways, almost without thinking about it. But over time, we adapted to search, and we incorporated our mishits into our experience of the internet. We figured out how to ask the search engine the right question to make it give us the information we wanted. Or we assisted elderly relatives who didn't want to figure this out, or snarkily sent lmgtfy links to friends who asked us dumb shit. Younger readers, you may have to trust me here: There was a time when responding to a question with lmgtfy was recognized as funny.

In the process of adapting to the ability to search, all of us became programmers a little bit. We let the artificial into our lives: Each of us spent time figuring out how to elicit the information we wanted from the machine. Nowadays you could call it "prompt engineering." But it's just something we did. Typing a little bit of non-grammatical English, with some quotes scattered here and there, and maybe an AND or a NOT or a "site:" sprinkled in—it didn't feel like it had much to do with anything real.


Like it or not, our worlds are much more tech-based now than they were in 1997. Programmers and non-programmers alike have daily (or more frequent) experiences with technology which, cumulatively, are changing how we think and behave.

For example, I don't use speech activation on my phone; maybe you do. It's convenient! I get it. But it deepens a psychosocial bond that doesn't reciprocally exist. In particular, hearing a human(oid) voice leads people to thank their chatbots. And this in turn teaches us that saying "please" is a tool to get a better or faster response. Which is unfortunate, because "please" and "thank you" mean something ... else. They mean what they mean, and those meanings evolve over time. Their meanings are tonal and nuanced. You might be rolling your eyes and saying “please” right now, in fact. But the premise of the large language models underlying these chatbots is that these words—all words—mean something simple. (For my nerds: These words are first cast into an embedding space; the transformer architecture then acts on the entire sentence contextually, in such a way as to potentially unravel superpositions of meanings embedded in each word vector. My assertion is that this whole process involves more flattening of context and meaning than, to take an example, talking + gesticulating.) And the more we insist on assigning a transparent value to a word, or a small number of fixed values, the more we cheapen our language. And the more we interact with artificial intelligence in these ways, the more we contribute to that flattening.
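
For my fellow nerds, here is a toy sketch of that parenthetical. It is not any real model, and every word and number in it is made up for illustration: a static lookup table assigns each word one fixed vector, and a single self-attention pass then remixes those vectors using the rest of the sentence as context, so the same word ends up represented differently in different sentences.

```python
# Toy sketch (not any real model): a static embedding lookup plus one
# self-attention pass. All vectors here are random, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["please", "pass", "the", "salt", "read", "this", "scan"]
dim = 4
embedding = {w: rng.normal(size=dim) for w in vocab}  # one fixed vector per word

def contextualize(sentence):
    """Each word's vector becomes a context-weighted mix of every vector in the sentence."""
    X = np.stack([embedding[w] for w in sentence])    # (words, dim)
    scores = X @ X.T / np.sqrt(dim)                   # pairwise similarity between positions
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over the sentence
    return weights @ X                                # context-mixed vectors

polite = contextualize(["please", "pass", "the", "salt"])
curt = contextualize(["please", "read", "this", "scan"])

# "please" starts from the same static vector in both sentences, but its
# contextual vector differs once the rest of the sentence is mixed in.
print(np.allclose(polite[0], curt[0]))  # False
```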

To put it another way, search is to the internet as chatbots are to artificial intelligence. In both cases, the substrate is vast and monumental and difficult to comprehend. The portal to the substantive thing may seem insubstantial and insignificant, easily dismissed; that would be a mistake. The big changes are underway: They are happening, now and rapidly, and our access to those changes will overwhelm us if we aren’t prepared.

Here’s an example in the code-writing workplace. In the past (meaning: a year ago), the process of understanding how a piece of someone else’s code worked followed a certain trajectory. First, you’d use it on some simple test cases. Maybe it did something you couldn’t do on your own, like convert an image document to a bit string. Then if it did something unexpected, like crap out when you fed it a multipage image, or if it gave you an error when you didn’t expect one, like when you used a different file format, you’d try to read the documentation. Ideally, documentation is a prose version of the code that is so complete that you, the reader, could rewrite the code from scratch given only the documentation—we’d call it pseudocode. But a lot of the time, the documentation would be imperfect (it feels a bit absurd to rewrite everything twice), incomplete (no one documents every single function they write), outdated (when you update the code functionality you often forget to change the documentation, or postpone it if you think more changes are soon to come), or just wrong (more often than you might think). If the documentation let you down, as it inevitably did, and you really felt stuck, you’d ask a knowledgeable friend or colleague. Then if none of them knew the answer, you’d post on StackOverflow (which grappled with the AI problem in its own, interesting way). If your question was simple, an army of nerds would race to answer it quickly (in search of those most valuable of commodities: upvotes). If it wasn’t, or if it was poorly phrased (because you didn’t know what you were doing), it might take a while, or never be answered.
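
To make that "did something unexpected" step concrete, here is a minimal sketch of the kind of surprise I mean, with the Pillow imaging library standing in for someone else's code; the filename is hypothetical, and the lesson about the other pages is exactly the sort of thing you only learn from the documentation, a colleague, or the forum of last resort.

```python
# A minimal sketch of the "unexpected behavior" step, using the Pillow
# imaging library as a stand-in; the filename is hypothetical.
from PIL import Image, ImageSequence

img = Image.open("scan.tiff")   # a multipage TIFF
data = img.tobytes()            # quietly gives you bytes for the first page only

# Only after digging through the docs (or asking someone) do you learn that
# the other pages are there, reachable by iterating over the frames:
pages = [frame.tobytes() for frame in ImageSequence.Iterator(img)]
```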

Today, the single most useful thing that a chatbot does for me is act as interactive documentation. I can import a new package, run some tests, encounter a problem, tell the chatbot, “I’m getting this error, why?” and it will give me an answer. In a lot of immediate ways, this is great—poring through poorly documented code is a pain. Having something CliffsNotes it for me is a relief. But I have noticed that the quality of these answers has increased dramatically over the past year. Surely part of this is capability. But, analogous to how I learned how to Google something, surely part of this is a change I and my colleagues are undergoing. Now, when I ask colleagues a question, they’re increasingly likely to shrug and tell me to ask a chatbot.

AI is entering the tech workplace in many more ways, ways that will probably feel even more distant from non-tech reality but are increasingly meaningful for actual tech workers. But the key point to emphasize is that as it stands now, the penetration of AI into the tech domain is no longer a question of further development, or dim prognostication of very long trajectories, or pure hype—let alone some mystical, distant breakthrough. All of this is a distraction from the fact that the tablecloth is being pulled right now. All tech is going to become AI in some way, because we are going to let it, because it is convenient, but mostly because AI doesn't need coffee breaks. If it seems like it's going slowly, or if you haven't noticed yet, that's only because of (corporate) inertia, not because of sensitivity or care or concern, nor because the underlying rate of change is slow, nor because anybody is concerned about setting money on fire to gamble on cornering this potentially vast market.

The implications of this are potentially profound. My productivity has surely gone up as a result of having a chatbot interpret and smooth over poorly documented code—but my familiarity with codebases a little bit removed from my bailiwick will surely go down (if it hasn’t already). I won’t really have to understand what PackageA does if all I need from it for my current work is Function1. But, from a productivity point of view, what if there’s a really sick Function2 that I’ll never notice? What will I not think of in the future when I’m designing something new? And from the human point of view, what am I losing by not talking to my colleagues or the StackOverflow army about my problem?
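
Here is a hypothetical sketch of that Function2 worry, with the real python-dateutil package standing in for "PackageA": the chatbot answers the narrow question I asked, and the neighboring feature I never asked about stays invisible.

```python
# Hypothetical sketch: python-dateutil stands in for "PackageA".
# The chatbot tells me exactly how to parse one timestamp (my "Function1")...
from dateutil import parser

deadline = parser.parse("2026-03-03 17:00")

# ...and I move on, never noticing that the same package also ships
# dateutil.rrule, a whole little language for recurring dates: the
# "Function2" I'll never think of when I'm designing something new.
from dateutil.rrule import rrule, WEEKLY
meetings = list(rrule(WEEKLY, count=4, dtstart=deadline))
```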


Writing code, as I said in my last piece, is formulaic. In some sense, it is language as formula. You need to use your keyboard to write something so unambiguous, so definite, so comprehensive, that, when you let it loose on the data you've prepared, it will not break, or try to divide something by zero, or access more memory than you want it to, or commit any number of other catastrophes, no matter what data it encounters.
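
Here is a small, hypothetical sketch of what I mean by language as formula: nothing can be left to tone or goodwill, and every edge case has to be named before the data ever arrives. The function and its rules are made up for illustration.

```python
# A small, hypothetical sketch of "language as formula": every edge case
# spelled out in advance, nothing left to tone or goodwill.
def average_rating(ratings: list[float]) -> float:
    """Mean of the ratings, defined to be 0.0 when there are none."""
    if not ratings:                           # the divide-by-zero case, named explicitly
        return 0.0
    if any(r < 0 or r > 5 for r in ratings):  # the garbage-input case, named explicitly
        raise ValueError("ratings must be between 0 and 5")
    return sum(ratings) / len(ratings)        # unambiguous, whatever data shows up
```

But we are at a precipice right now.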

In a missive on a site I won't link to, Andrej Karpathy wrote "the hottest new programming language is English." To some dorks, this was an earth-shaking concept: We wouldn’t have to call up all the facts and tricks we kept stowed away in our programmer brains to write a new piece of code (or document or fix or update an old one). In fact, we wouldn’t have to have a separate programmer brain at all. But the corollary to that is: Our language brains would start mingling with our programming brains. By saying this in 2023, Karpathy was quite a bit premature (what a friend called pulling a de Tocqueville: make so many wild predictions that some of the big ones hit, and everyone is so impressed by those that they forget the wrong ones), and it is still not entirely true as of this writing—but it is close to not being wrong. And this is worrying, because language does—should do—so much more than that, and blurring the boundaries between language and code flattens the former more than it enlivens the latter.

Every step in the development of this technology has had tradeoffs. Signing onto the internet used to mean a temporary refusal of all other communication while the telephone line was engaged. Searching the internet required rewiring some part of ourselves. Having instant access to infinitely re-scalable and re-orientable maps has surely changed our sense of direction and place. And using chatbots—in work or in the kitchen—will change things yet more. And, consequently, part of me fears for the compromises we will start to make to elicit the right responses from the chatbots, and the spillover this will cause from the realm of tech to the real one, insofar as we can distinguish those anymore—the implications of pleading now for a really efficient piece of code, O great bot! Or, perhaps soon: please read this CT scan very precisely, radiology assistant!

If it feels like AI is encroaching on our turf, the reality is that we met it halfway. This has been prepared for a long time, and now the first skirmishes are breaking out. We have met the enemy, and maybe it feels dumb, easily vanquished. Maybe it feels like chatbots are silly, and pointless, and fizzling out. But they are the epiphenomenon, not the phenomenon itself. And it seems to me like a very big tide is coming in. It’s going to lift some boats, in the same way that search made knowledge more accessible, maps prevented people from getting lost, and social media facilitated the opening of many people's worlds—but it’s going to erode some of the shoreline and swamp some habitat, just as having search in our pockets made party conversation more boring, relying on maps for directions has stressed infrastructure in unexpected ways, and social media first fragmented and then exploded a shared notion of truth.

The underlying changes are big, and real, and ongoing, and chatbots are only our first glimpse of whatever is on the way. As the tide comes in, maybe it’s time to start thinking about what we want to move to higher ground.

