Scott Smitelli

You don’t have to if you don’t want to.

I’m sure you’re wondering why I called you all here today. Well, to be honest, I didn’t actually call for all of you; some of you probably shouldn’t be here. You folks might want to quietly find your way to the exits right here and right now.

For those readers who are sticking around, I suspect I’m going to lose a fair number of you along the way. We’re going to attempt to thread a pretty tough needle here and I won’t be too terribly offended if some browser tabs get closed before we get there.

And hell, may as well say it: The opinions expressed here belong solely to me and do not reflect the views of my employer, and they probably don’t reflect the views of anyone else’s employer either.

This is for the people who look at the world and feel like everything and everybody has gone crazy. People who maybe have this nagging feeling deep inside that something isn’t right here, but may be unsure about speaking up for fear of being ostracized at work or in social circles. Those of us who pause, look at this barrage of everything happening around us every day, and simply want to say no.

If anyone out there has thought for the very first time that this might finally be the new trick that makes them feel like an old dog, or if you’re a fresh graduate who spent years working through school toward an almost-certain career path that seems now to be crumbling into dust underfoot, you are the intended audience.

I’m writing this to tell you that it’s not just you. But more importantly, you’re not wrong. You are free to feel what you feel and think what you think. And I, for one, am tired of listening to people who tell me to suppress those parts of myself and surrender to a new way of being that I never asked for and frankly do not want.

Pull up a chair and endure yet another goddamn article about generative AI.

Gotta blame it on something

In 1989, a musical act called Milli Vanilli had a number one hit titled “Baby Don’t Forget My Number.” Its success, alongside the earlier single “Girl You Know It’s True,” secured them a spot on the Club MTV Tour. During one of the performances, a technical glitch caused their pre-recorded vocal track to skip and repeat, clearly betraying the fact that they had been lip-syncing to the voices of other singers. It didn’t take long before news broke that the performers on the stage that night had never sung any of their music—live or album versions.

In the fallout from this incident, their record label severed ties with them, their Grammy award was rescinded, and their albums were pulled from shelves. The performers, Fab Morvan and Rob Pilatus, never found any lasting success after the dissolution of the group. Lawsuits were filed by groups of customers who felt that they were sold fraudulent goods using deceptive practices. To this day, the name Milli Vanilli remains tainted by the scandal.

Maybe it’s just me, but I feel that same kind of treachery when somebody tries to pass off a piece of AI-generated work as if it were their own voice. There’s always that moment—whether reading the text, examining the image, or listening to the spoken language—where I clock the presentation as “not quite right.” Then the realization hits that I’ve been engaging with nothing of any particular substance, and I become a little pissed off at having my time and attention wasted by somebody who didn’t care enough about what they were doing to actually do that thing.

Recently that feeling of indignation has started giving way to something a bit more melancholy. AI restaurant reviews praising a dish that definitely is not offered on the menu, and the attentive service provided by a named member of the staff who doesn’t exist. Video recommendations featuring an algorithmic guess at the swimming motions of a newborn sea otter or the simulated smile of a down-on-his-luck war vet reuniting with a service dog. Has anybody with a mouth ever tasted this cookie recipe? Has anybody with an actual butt contributed any of the five-star reviews listed next to this dining chair?

We’ve got enough content to infinite-scroll for the rest of our waking lives. Yet so much of it is pointless, undesirable, lacking substance, or actively harmful. It is above all joyless—like listening to a Breaking Rust album while eating a sleeve of unsalted saltines and somehow forgetting that it wasn’t always this way.

(There has been plenty of commentary regarding the apparently 100% AI-produced track “Walk My Walk,” but there is some real gold in the Singles and EPs category on one of Breaking Rust’s artist pages on Spotify. In particular, a relatively recent cover of “Photograph” that answers the question “What if we got Ben E. King’s backing band, put a Temu clone of Chris Stapleton’s voice on top of it, and did an Ed Sheeran song that went for eight and a half minutes?” It is not pleasant and it does not need to exist. If that’s not to your taste, there’s also an eleven-minute version of Passenger’s “Let Her Go” up there. Everything okay with your end-of-sequence tokens, buddy? Just concerned is all.)

Look at LinkedIn for chrissakes. “A senior engineer was tasked with such-and-such,” “The consultant billed for sixty hours,” “Our backlog had grown to over a thousand tickets,” “When I saw the 350,000 line pull request the junior engineer had opened,” and on and on. These are not real things that have happened to living people. They are the result of somebody prompting a chat bot to “make me sound smart and insightful to a room full of people who have long since turned their brains off” while conveying nothing novel or actionable. I have recently started referring to these “thought” “leadership” social media posts collectively as AIslop’s Fables.

If you’ll indulge me, I’d like to take a brief tangent and try my hand at writing one of these ridiculous things myself:

Two engineers—the systems architect Ass and the solution consultant Weasel—chanced to travel in company down a forest aisle. The Ass walked purposefully several paces ahead of the Weasel, who dawdled merrily along. At last they paused at a fork in the codebase.

The Ass, ever considerate of the broader consequences of his actions, contemplated deeply the choice before him. The Weasel, uninterested in such unrewarding efforts, took rest at the base of a widespreading trie, finding great pleasure in the cool shade of its stable branches.

So offended was the Ass at this display of poor work ethic, he deigned to question the Weasel’s technical ability. “How foolish you should be,” he brayed, “to remain unbothered by the fact that your house contains one appliance called a washing machine and another called a dishwasher. Until you see the absurdity of it, you have no business designing an API!”

It was in that moment, and all at once, when Jupiter cast down a fierce reallocation of resources that made both of their roles redundant.

The AIslopica 001. The Ass and the Weasel

Anyway. I find that when I mentally filter out all the obvious AI flourishes—the empty fluff, the excessive emoji, the formatting smells, the Ghibli-inspired scenes that really, really love using every available shade of brown—there’s sometimes not a whole lot of genuine human connection left in there. And in that empty space, I begin to wonder: Where did all the people go? What are they all doing behind the artifice they’re showing here? Why am I wasting my time and energy wading through these shallow yet unbounded seas of nothing?

For a while I was able to navigate around it by really curtailing my use of social media. I only looked at LinkedIn for the S-tier trolling of a select group of professional shitposters. Reddit no longer occupied any appreciable amount of my time. There were entire classes of Hacker News submissions that I refused to read the comments on. (Including the comments about this article, should such comments ever materialize.) I experimented with a small set of browser customizations to eliminate YouTube video recommendations and the site’s ill-conceived Shorts.

In place of all that, I started spending much more time in smaller private communities of people with shared interests, personalities, or life experience. We’d swap links to things we thought might be interesting to the group. We wouldn’t chase trends, we didn’t let ourselves get overloaded with shiny distractions from outside the circle, and we evolved it into a shared space that suited all of us best. Spending time in these small isolated groups really laid bare how dogshit the social landscape on the broader internet felt in comparison.

I was all set to leave that part of my life behind me. Then it followed me to work.

If a clod be washed away by the C

There are lots of different jobs out there, and a fair number of them are done on computers. As a matter of fact, when people ask me what I do for a living, I sometimes simply wave my hand and say “Computers” in a kind of no-you-really-don’t-want-to-know-what-Kubernetes-is kind of way. I suspect most office workers have a similar aspect of their job that they don’t like to dredge up during polite conversation.

I’m a software engineer on paper, and that’s the framing I’ll use here because it’s the one I know best. But I’m sure that those working in design, marketing, visual art, customer service, writing, all sorts of disciplines have felt it too. This ever-louder voice booming from on high to integrate large language models (LLMs) and other generative AI products into every conceivable point in your workstream. Maybe against your will. Probably against your own interests.

In the software engineering world, at least until quite recently, we all wrote computer code. Some of us wore the title “coder” like a badge of honor to describe our profession. I never much cared for it. We spent basically the entire 2010s loudly promoting “Learn to code!” as the cure for all of the ills of that era. And coders entered the profession in droves. With a teeny-tiny bit of help from years and years of interest rates near zero, the industry flourished.

There are a bunch of different types of computer code, and one way to visualize the different categories is to imagine them as the decks on a ship. At the bottom, in the sweltering boiler room, we have the machine code—the ones and zeros that computers are real good at but humans are real bad at. Above that we have an assembly language layer, still cryptic as all hell but at least comprehensible to a skilled and attentive person who subsists entirely on coffee. Above that we have systems languages like C and Rust (and maybe Go, if you wanna get into fights with certain people), which strike a pretty good balance between raw control over the computer and ease of getting things done.

One deck higher, we find ourselves among the interpreted languages like Python and JavaScript which hide a whole bunch of underlying concepts away in the name of making the code easier to write and reason about. And yet above that, a whole ecosystem of low-code products which boast point-and-click interfaces for building entire applications on top of hosted platforms. These would be things like Framer and Squarespace. Heck, let’s throw Excel and Power BI into that pile as well.

In this hierarchy, the upper level languages mechanically generate the code at the layer(s) beneath. If you are working in assembly code, a tool called an assembler translates it into machine code without you needing to think about it very much. If you write C or Rust, a compiler takes care of the assembly aspects of the program. Python and JavaScript are executed by interpreters, which were each written in compiled languages, and so forth. Whichever layer you work in, your code ultimately needs to be translated all the way down to machine code for a computer to make any direct use of it.
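If you’d like to watch this translation happen one deck below your own feet, Python’s standard library ships a module called `dis` that prints the bytecode the CPython interpreter actually executes. A quick sketch:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions that CPython runs on our
# behalf: the deck directly below the Python source we wrote.
dis.dis(add)
```

The exact opcodes you see vary between CPython versions, which is rather the point: the deck below is allowed to change out from under you, as long as the deck you live on keeps behaving the same.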

What we’ve got now is perhaps the uppermost deck of the ship: prompt-driven development. The apparent goal here is to eventually remove all direct manipulation of all types of code, obviating even the pointing and clicking to move different pieces of the app around to suit one’s visual preferences. No, in this paradigm you type out exactly what you want in plain English (you can certainly try to prompt in any of the world’s languages, but results are not guaranteed), and an LLM chews on that prompt for a little while then writes the necessary code in one of the lower-level languages on your behalf. Don’t like what comes out? Prompt it again, and the code will change to something else. Precisely how does it change? Don’t concern yourself with such trivial matters. Just like when you change a line of Python code, you really don’t need to see what happens at the assembly level either.

There is a key difference here: When you change that line of Python code, you can know what is affected. Every step in the program’s execution adheres to a rigid set of rules that is followed precisely, each and every time. You can know that extending a string past a certain length triggers a memory allocation, and in this one degenerate case it causes a register spill leading to a performance hitch that can be reliably replicated, tested for, and worked around. It’s deeply tedious to trudge through the lower decks of the ship this way, but it is feasible.

How does an LLM produce its output? Nobody knows. I mean, we know in the abstract sense, but not to the extent where we can attribute a certain behavior to a given machine state. It’s simply not possible to trace everything that occurred across the hundreds of billions of parameters that underpin each model and replay even a fraction of the steps that transformed a given piece of context into a specific output token. It’s easy to think of this as an inherent form of nondeterminism, where the output of a system varies based on factors beyond what was provided as input (compare this with the quote “Insanity is repeating the same mistakes and expecting different results,” which evidently originated in a twelve-step pamphlet), but it’s more attributable to an intentional setting called temperature—a controlled introduction of randomness that reduces the likelihood that the model parrots back such a large fragment of training data that it gets itself stuck or veers into plagiarism territory. Turns out this property has a tendency to break existing code during attempts to make unrelated changes. To defend against that, the industry has turned to coding agents that incrementally make these changes in a loop, ideally finding a stable equilibrium where the new thing works and none of the existing things break. This is pretty much where we are now, with some engineers experimenting with running multiple agents concurrently (maybe using something like Clawdbot… I mean Moltbot… I mean OpenClaw) to observe and correct each other’s work.
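For the curious, here’s a minimal sketch of what that temperature knob does mathematically: temperature-scaled softmax sampling over a model’s raw output scores. This is the general mechanism, not any particular vendor’s implementation, and the function name is my own:

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    # Dividing each score by the temperature sharpens the
    # distribution as temperature approaches zero (almost always
    # picking the top token) and flattens it toward uniform as
    # temperature grows (more surprising picks).
    scaled = [x / temperature for x in logits]
    # Numerically stable softmax weights.
    top = max(scaled)
    weights = [math.exp(x - top) for x in scaled]
    # Draw one token index at random, according to those weights.
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]
```

Run it at a temperature of 0.01 and it behaves almost greedily; run it at 2.0 and the also-rans start winning draws. That deliberate wobble is exactly what makes “prompt it again and hope” a workflow.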

But you didn’t come here for a slapdash and oversimplified explanation of eighty years of programming history. You’re here for boat metaphors!

Long time ago, probably before either you or I got into this game, the old RMS Software Engineering struck an iceberg that tore a big gaping hole down the side of her hull. The Machine Code deck flooded first, and like literal rats escaping a literal sinking ship, the machine programmers sought higher ground on the Assembly Language deck. Water began flooding the assembly programmers out, and aside from a few air pockets where the embedded systems programmers sought refuge, everyone hurried up to C deck.

By this point, news had spread that “up near the top” was where everybody wanted to be—although the reasons why got a little lost in the hustle—so a huge number of programmers arrived at the Interpreted Language decks. This was pretty much where I came in. Some hung around for a good fifteen or twenty years, but the rising water waits for nobody. Higher still we climbed, skipping right past the Low-Code mezzanine and straight up to the Prompting promenade on the main deck. It is here that we’ve reached our apex; there’s nothing above us but the twinkling stars punctuating the pitch black sky. From here, all the decks below do our bidding. With just a flick of the wrist, huge quantities of lower-level code are willed into existence without intervention. Simply marvelous.

Except… the ship is still taking on water. Sooner or later every deck—including the one the prompters are now standing on—will go under. If you subscribe to the idea that technology will invariably improve and build on top of everything that has come before it, often to the point of completely encapsulating it, why is “writing all that prompt text” such a bright red line of magical job security? Text was the first thing language models ever produced! You really can’t envision a future where people think it’s quaint and old-timey to write LLM prompts by hand?

And what happens to us in that metaphor? Just chillin’, clinging to flotsam, slowly freezing to death in the North Atlantic.

yolo;dr

Let me be crystal clear: Not everybody wants to write software in paragraph form.

In fact, I might propose the possibility that the techniques emerging today are a totally distinct discipline from traditional software engineering. Perhaps the two might coexist peacefully if we could simply agree to stay the hell out of each other’s lanes.

But let’s be real, that’s not how this seems to be going.

At multiple points in my life, I’ve had the opportunity to debate my peers over the following question: Is software engineering an art, or a science? The science argument is bolstered by the fact that a disheartening proportion of people who work in this industry seem utterly incapable of appreciating art in any capacity (an interesting counterexample: software engineers are often fans of anime, a form of art; as an exercise, try to get an engineer to articulate exactly what they appreciate about it), let alone creating any. On the art side, the affirmation “code is poetry” (to date, the only form of poetry for which MITRE has assigned multiple CVEs) has been emblazoned on the footer of the WordPress.org site for nearly a quarter century. I’ve long believed—and still believe—that there is a spectrum and things can be both. Look at a cable-stayed bridge, or a Tim Hunkin sculpture, or some of the stupid shit the contestants on The Great British Baking Show have been asked to make over the past couple of seasons, and tell me that these didn’t require the unique touch of an artist and an engineer. In the right light, software is no different.

Engineers work in specifications and requirements. The end product needs to behave a certain way, adhere to certain external constraints, meet relevant regulatory criteria, not kill anybody (unless the requirements say it should), that type of thing. Artists work in the ambiguous undefined spaces that permeate all the little in-between areas of the work. Somebody has to make the countless tiny decisions that aren’t otherwise spelled out in black and white on the ticket. The sum total of all these little decisions becomes something that looks and feels a lot like style, even taste. The choices being made here are influenced by a working lifetime of personal experiences. Perhaps one could go as far as suggesting that it involves a certain intuition to do it well. We’re really okay with outsourcing this aspect of our work to some tensor core randomly burping out floating point rounding errors?

There are countless ways to express the same idea. Are some expressions of a particular idea better than others? Arguably!

“But wait,” I hear you cry. “Once the product ships, nobody sees the code anyway! Who cares how it looks?” Take a long, hard look around you. Look at your phone. Look at one of the dozens of other web pages you have open. Look at the app launcher on your smart TV. Look at the ordering kiosks at Panera. Look at every furtive data broker who mishandled your personal information and offered a $125 credit monitoring voucher (you never bothered to redeem that, did you?) as restitution for their negligence. Look at every hateful airline website or clunky automotive touchscreen or inkjet printer driver and tell me with a straight face that code quality doesn’t matter—that whatever makes these products so miserable to interact with day in and day out could not in any way be improved by exercising more care in the expression of the underlying code.

Look around you. You don’t believe that this could be true?

When I hear people say “I ship code that I don’t read,” what I hear in my head is “I really do not care about what the end-user of this product experiences, I do not care about whichever poor soul needs to maintain this thing after I’ve gotten bored with paying attention to it, I do not care about anybody on my team who has to support the ongoing operation of it, and frankly I didn’t even care about making it in the first place. I just wanted to be done for the sake of being done.” It is a vulgar display of apathy, a shameful dereliction of sound engineering practices, and quite frankly it makes the practitioner sound like an insufferable asshat. To wield something so limitlessly powerful as software without so much as the slightest reverence for the craft is simply offensive. Truly. I am viscerally disgusted by what I am watching transpire in this space.

“Aha! I’ve got you now,” you bellow, in a different style of voice for some reason. “This is gatekeeping! You are trying to protect your little clubhouse from outsiders who hold viewpoints that differ from yours!” Guilty as charged; I am indeed gatekeeping. I’m gatekeeping in the same way all those pesky government officials gatekeep when they tell us we’re not allowed to work toward achieving nuclear fission in the shed. Turns out we live in a society, and the actions of one person have the capacity to directly and indirectly affect countless others. If there’s an activity that has a high likelihood of causing preventable injury or tying up first responders unnecessarily, those activities tend to get restricted in some way.

Software runs the world. It can make (or lose) untold sums of money. It can decide who gets a favorable mortgage rate or a critical job interview. It can know our deepest secrets, and it can surveil us against our will. It can bombard us with advertisements targeted at personality traits we’re not even aware we’re revealing. It can cause accidental (or deliberate) death. More commonly, it can really frustrate, annoy, inconvenience, and subtly chip away at the mental well-being of vast swaths of the inhabitants of this planet we all share. Stop treating it like it’s anything less.

I owe my soul to the company store

Not everybody is okay with the idea of becoming dependent on an outside party in order to do their work.

Tools. That’s the word that everybody likes to volley around. AI models are tools, the same kind of thing as what I get from Home Depot when I rent a wallpaper steamer. Same as what companies get when they pay Adobe for a Photoshop subscription. Gotta spend money to make money, as the saying goes. “LLM coding agents are tools—nothing more—so just pay for a license seat and use it like you would any other tool.”

You ever get the feeling that something is different about the AI case, but you can’t quite put your finger on it? You ever say a word like ‘couch’ out loud too many times, and the sound momentarily loses its meaning? There’s a term for the phenomenon, by the way: semantic satiation. Did we all collectively forget precisely what a tool even is?

Suppose there’s a slotted-head screw somewhere in your environment that you wish to tighten. You could extend the index finger on your dominant hand, insert the free edge of your nail plate into the slot, and turn your wrist. Depending on the state of the screw, this will probably cause some amount of injury to your fingernail. The mental image certainly made me cringe a little.

A far more effective technique would be to find a slotted screwdriver, grasp it in your hand, and perform substantially the same motion to accomplish the task. Brains have the rather remarkable ability to treat handheld objects as an extension of the body without a whole lot of cognitive overhead. Both a fingernail and a screwdriver permit a being of agency and free will to impart some change to their environment. Both the fingernail and the screwdriver are pretty unambiguously tools.

The elegant simplicity of the screwdriver makes it useful for more than just driving screws. One could—in a pinch—use a screwdriver to pry two pieces apart, or one could use it to shave a piece of wood or other malleable material down, or one could bring it down to the mess hall to finally shank Jimmy the Rat. Whatever a person might want to do with a hand-held object having a screwdriver’s material properties, they are free to do it.

Somewhere in another part of town, on the fourth floor of a three-star chain hotel, an ice machine hums dutifully in the elevator lobby. This is a self-contained appliance that draws utility power and water from the building and converts it into ice cubes (along with noise and waste heat). That’s all it does, all day every day, whether anyone is paying attention to it or not. Is the ice machine a tool like the screwdriver is? Does the answer to that question change in any way if nobody ever passes by and presses its button?

Conceptually between these two objects, we have the power screwdriver. This is a handheld device that converts electrical power into torque (along with noise and waste heat) that turns a slotted bit. The operator manipulates this in a way that’s not too different from the regular screwdriver, moving it around and positioning the bit in the screw head, but instead of doing the difficult work of repeatedly twisting their wrist to move the screw, all they need to do is press a button to release stored energy and channel it into the act of turning. The operator also needs to know when to release the button, a lesson that comes from stripped threads, cammed-out heads, and splintered wood. This is a far more efficient way to drive screws. It allows more screws to be driven over a given span of time, and it conserves the operator’s energy and really saves their wrist from long-term injury.

Yet in removing the most laborious and time-consuming part of turning screws, the power screwdriver has lost some of the generality that the regular screwdriver enjoys. Its awkward form factor and stubby bit–chuck arrangement prevents it from being useful at prying or chiseling, and it’s definitely not the best way to deal with our pesky cellmate.

In fact, the narrow specialization of the power screwdriver is part of the reason why regular screwdrivers still exist. I would guess that almost every person with a power screwdriver in their garage also has multiple regular screwdrivers as well. They each have their time and their place. I would not use a power screwdriver to change a watch battery, and I would not use a regular screwdriver to mount a 65-inch TV to a pair of wall studs. Both of these items are in my toolbox; I see no difficulty calling both of them tools.

So, what is generative AI? Is it a power screwdriver that removes sporadic moments of fleeting agony from a much larger home improvement project? Is it an ice machine loudly grinding unrequested ice cubes onto the tile floor at two o’clock in the morning? Or is it a stick of dynamite—light the fuse, plug your ears, and run the hell away from whatever happens next?

It can be all of these things, it seems. I guess AI really does satisfy the definition of a tool as long as it’s being used as one.

This unlocks a common refrain from the booster class: “A true craftsperson uses every tool at their disposal!” Which, if you think about it for more than three seconds, is ridiculous on its face. Gotta dig some holes for fence posts? Okay! Bring along every shovel on the truck, the Ditch Witch, a box of ANFO and the Bagger 293. Have the people who echo this kind of stuff ever built anything in the physical world? Your average craftsperson has one real good compound miter saw that they use for basically every cut on the jobsite. They’ll use it until it breaks down, then they’ll replace it with a newer model of substantially the same thing. You know why they stick with what they know? Because they know it. They’ve invested a lot of effort learning quirks, shortcuts, and building up muscle memory on it. In what world is constantly switching tools for the sake of switching tools a remotely smart use of time?

But there’s a deeper theme I wanted to touch on here. There’s a random piece of advice that’s been floating around my head for years: Don’t rent your livelihood. I wish I knew where I picked this up from so I could properly attribute the source, but my research so far has turned up nothing. That is to say, if there is a tool or system on which your entire professional success rests, do everything you can to ensure that it’s not something you need to repeatedly pay other people to acquire. For example, if you own and operate a moving company, it would be wise to not need to rely on a rental company to provide a truck for you to use every day. Such reliance would effectively put you entirely at the rental company’s mercy. If they want to jack up their day rates, that hurts your bottom line. If you show up the morning before a big move and they already rented all their trucks out to other customers, your entire operation stops cold.

Obviously there is nuance to it. If you’re a landscaping company and you only need the Ditch Witch on certain days, it’s less risky and more fiscally responsible to rent that. If you can’t get it, it’s not like the business completely shuts down. I’m talking about the fundamental everyday stuff here: the plumber’s wrench, the musician’s cello, the photographer’s camera body.

Paying a subscription fee for AI feels like something between outsourcing the work and renting a brain. Neither one is a particularly great feeling, at least to me. The outsourcing aspect is pretty clear-cut: If you are doing this as an employee, the company hired you and everything you bring to the table—your judgement, your experience, your personality, and so on. If you wouldn’t have commissioned some dude off Fiverr to do your job for you in 2021, what’s suddenly different now? Even if the business paid for the license on your behalf, does it really change the fact that it ain’t really you doing that work anymore? What do they even need you for, anyway?

The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush. Perhaps there’s a bit of a correlation–causation aspect, but I do not like the effects I’m seeing out there.

Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence. “I gambled away my life savings, I can’t stop now.” “I’m now addicted to heroin, why would I give that up?” “I lost a piece of myself that I can’t even remember ever having, adapt or get left behind! 🚀” Do you actually want to end up in a place where you feel you can no longer escape?

If you do knowledge work, your brain is your livelihood. Don’t rent your brain from some third party, especially a third party that may not have your best interests at heart. How much do you suppose these LLM tokens actually cost once you factor in the amortized expense of training multiple models over all the content that has ever been exposed to the internet? What do they cost once the VC firm needs to see a return on their investment, or the financing deals come due, or the shareholders get spooked and somebody’s stock price tanks? How much will you need to pay each month to rent your livelihood back from these companies once their offerings aren’t so heavily subsidized and your brain has completely atrophied? You think you’re just gonna self-host an open weight model like GLM-5 on your personal hardware and cut out the hosting costs? Well, alright, hope you have 1,727 GB of VRAM lying around. What will you do when the frontier models start inserting “sponsored thoughts” into the work you tasked them with? Will you respond to a token rate limit message the same way a junkie goes through withdrawal?

We live in a capitalist society. The line must go up at all costs. For a while this was easy in the technology space because computers kept gaining new capabilities. “Finally, we can do color!” “This one can play CD-ROMs!” “Now this one can run the hottest new game!” “It streams online videos!” Barely. It felt like every couple of years there was a truly compelling reason to re-buy your computer. You ever notice how that kinda petered out at least a decade ago? Computers today feel pretty much like the computers did in 2016. We stopped seeing worthwhile advances in features that anybody was clamoring for. Virtual reality is a niche platform at best, cryptocurrencies and NFTs didn’t meaningfully change much of anything outside of the grift industry, and the sole motivating factor for me buying my last three smartphones was “the new one isn’t running out of storage space and the battery works.”

I doubt anybody really wants to face this fact, but technology is in a terrible slump. The industry desperately needs something—anything—to entice that almighty line to go up. If AI can’t do it, there aren’t a whole lot of other things right now to be optimistic about. Web3 didn’t work. The Metaverse turned into a seemingly obscure multi-player game called Horizon Worlds. Did you know? I didn’t. We’ve already built all the unregulated taxi companies (Lyft, Uber) and illegal hotel conversions (Airbnb, Vrbo). None of humanity’s actual problems appear sexy (or tractable) enough for the tech industry to concern itself with. So this is what we’ve got. AI is the one basket into which we have feverishly dumped hundreds of billions of dollars worth of our eggs.

And best of luck to us with all that.

Nothing for money

A single DGX B200 AI server is rated to consume 14,300 watts of electrical power at peak. You can cram about four of these on a rack if you like to live on the edge, and these four units might draw something like 200 amps of current combined. For a point of comparison, a typical single-family home in the United States will have wires from the utility company that are thick enough to provide a 200 amp service. And much like a single-family home, each of these servers costs a couple hundred thousand dollars. Your average datacenter will contain one or two dozen of these racks along each of its aisles, with aisles spaced about every ten feet, filling up a floor area that dwarfs the warehouse at the end of Raiders of the Lost Ark. Such datacenters are sprouting up in cities all across the globe, as fast as their tenants are able to pay for them. Just kidding. They’re going up faster than their tenants can pay for them.
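The rack-current figure above holds up to a back-of-envelope check. This is a sketch, not an electrical engineering claim; the 277 V phase voltage is my assumption (typical of US 480/277 V datacenter distribution), not something stated in the text:

```python
# Rough check on "four servers ... something like 200 amps combined."
# Assumes 277 V phase voltage (typical US datacenter power distribution).
servers_per_rack = 4
watts_per_server = 14_300                           # DGX B200 peak rating
total_watts = servers_per_rack * watts_per_server   # 57,200 W per rack
phase_voltage = 277
amps = total_watts / phase_voltage                  # ≈ 206 A
print(round(amps))
```

At that assumed voltage, four servers work out to roughly 206 amps—squarely in “something like 200 amps” territory, and right at the capacity of a typical single-family home’s utility service.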

As far as the physical world is concerned, computers don’t really do anything useful. They don’t move around, they don’t produce much light, there are no important chemical reactions occurring inside, and they’re specifically engineered not to emit strong radio waves or electromagnetic fields. The only thing they can really do with the electrical energy they consume is convert it into heat. So they do. Most of the 14.3 kW taken from utility power is converted to heat, with the number-crunching stuff as its byproduct. A typical household space heater might be rated at 1.5 kW. A single B200 server dissipates nearly ten times that amount. These things are belching heat into their surroundings.

Each server is packed with ventilation fans that emit sound pressure levels approaching 100 decibels—approximately as loud as a jackhammer as perceived by its operator. A datacenter aisle packed with AI servers is so deafening that even wearing earplugs underneath earmuffs does not provide enough attenuation to protect against long-term hearing damage. To keep the servers from baking themselves, huge industrial-scale air conditioning systems pump the heat away and dump it into the ambient air outside the building. When citizens raise concerns about datacenters depleting local water supplies, this is where that water is going: The air conditioning systems apply the heat to the water, which causes the water to evaporate into the atmosphere. See, they’re not using water, they’re merely converting it into clouds. Which then causes rain. Which either falls on the ocean so we can’t easily use that water again, or it falls on another part of the country as catastrophic flooding. Totally renewable, don’t worry about it. The evaporated water carries the heat away, and what’s left on the ground is cooled as a result. The pumps that drive this process consume yet more electricity.

A six-inch Spicy Italian Sub from Subway contains sufficient caloric content to run a person’s 20-watt brain for a day and a half.
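The sandwich arithmetic checks out, too. A sketch, assuming roughly 600 kcal for the sub (my estimate, not an official nutritional figure):

```python
# How long ~600 kcal runs a 20 W brain.
kcal = 600                    # assumed calorie content of the sub
joules = kcal * 4184          # 1 kcal = 4184 J
brain_watts = 20
seconds = joules / brain_watts
days = seconds / 86_400       # 86,400 seconds per day
print(round(days, 2))         # ≈ 1.45 -- about a day and a half
```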

I wouldn’t consider myself to be somebody you should be turning to if you’re looking for environmental insights. Global energy production and consumption is a whole big thing and frankly I’m probably better off, mood-wise, knowing as little about it as I practically can. I will say that it is probably a net negative that AI datacenters use as much power as they do, and it’s probably in our best interests to get that power consumption to go down even if it results in a net decrease in throughput and worse latency.

What I do feel qualified to comment on is the increase of entropy caused by running these systems. I mean this in more than just the strict scientific sense of releasing heat into the environment. I’m also talking about the material that is produced as the stated goal of the whole enterprise—the generative part of generative AI. The text and computer code, the images, the videos—these are increasing the entropy of codebases, the internet, its social spaces, and our own finite attention. As much heat as the servers are venting into the air, at least that much “content” permeates our screens and headphones at the same time.

Not everybody is okay with that.

It may or may not surprise you, but I never got into the 3D printing hobby. Don’t get me wrong, it’s an impressive technology and an undeniable boon to tinkerers and professional prototypers alike. My position on it has always been rather pragmatic: I’ve got too much crap in my living spaces—more crap than I have places to put it or the ambition to organize it—so why on earth would I want to own a machine whose sole purpose was to make more crap?

One doesn’t need to make too many mental leaps to draw a parallel between generative AI and 3D printing. You want it, you can have it. It’s pretty cheap, and it doesn’t take anywhere near as long as trying to find somebody to make it for you using the old techniques. If it comes out deformed, throw it away and try again. Nobody put any substantial effort into making it, so it’s not like you’re hurting anybody’s feelings by discarding it. As soon as a new design leaves the machine and you carry it away, you’re holding the only instance of a brand new thing, a bespoke build just for you. That should make you feel… something, right?

“When you get something for free, it has no value.” (Tribunal Justice, “The Dumped Dog,” season 1, episode 27.) This line has been spoken—on more than one occasion—by Adam Levy, son of Judge Judy Sheindlin, on the reality court show Tribunal Justice. The fact that he made this statement on a free TV show streaming through an app called Freevee is an irony not lost on me. In the original context, he was explaining why animal rescues charge fees as a kind of filtering mechanism to weed out low-effort adoption attempts. I think about his quote a lot.

We’ve got a technology here that can produce basically anything that can be represented symbolically—written and spoken language, music, images, video—along with any information that can be encoded within one of those mediums. If it weren’t for limitations in human-computer interfaces, there’s no plausible reason why it couldn’t generate scents and flavors as well. The price for any of these outputs is, well, whatever AI training and inference goes for these days. It’s apparently cheaper than actually staging a video shoot or bugging your friend to serenade you with a guitar or mustering up the mental wherewithal to write a four-sentence email to your boss or finally learning which order the arguments to the ln command go in. If this is something you regularly struggle with, I’m gonna blow your mind. ln works the same as cp:
$ cp WhereItIs WhereYouWantIt      # copy: source first, then destination
$ ln -s WhereItIs WhereYouWantIt   # symlink: target first, then link name
Why do the difficult, expensive, time-consuming old thing when you could just do the easy, cheap, fast AI thing? In fact, why strive to do anything of value when you could have a passable knockoff for basically free?

You ever see somebody make a social media post like “Look what I made using AI!” and hardly anybody gives a shit? Perhaps if the work had some value, others would value it more.

I will concede that it is frankly incredible that we have gotten so close to empirically testing the infinite monkey theorem, and I don’t mean that sardonically. It is a miracle of human ingenuity that we can etch 100 billion transistors onto a piece of rock we dug out of the ground. The fact that we don’t really understand how it works, yet it does work, is astonishing. It’s frighteningly good at search and retrieval. This hardware is capable of doing some genuinely cool stuff that might actually set us up for untold prosperity for generations to come. But if we’re just going to use it to make another lazy Stable Diffusion image of a smooth beige coffee shop or a commit message peppered with green checkbox emoji and a waffling description of the code changes (“This pull request appears to…” Appears to? You wrote it, my guy!)—all in the hopes that it will make us all a lot of money someday, somehow—just put that rock back where you found it.

Oh, how about this one: “In order to truly serve you, these systems need to know all there is to know about you.” LOL. It’s like we somehow forgot how to view things through a lens of suspicion.

We live in a society with other people, and as much as it might be an annoyance sometimes, those other people deserve to enjoy a certain amount of dignity and respect. You’ve got family, coworkers, friends, neighbors; your life is a complex web of social interconnections. You are well within your rights to open your address book, your email archive, your photos and videos up to third-party services to help you manage your life… but did you ever bother to ask if the people represented in that content were okay with you sharing it? Maybe your good friend doesn’t want their name/address/phone number to be given away so hastily. Maybe your coworker would’ve written that off-the-cuff text message a little differently if they knew you were going to ship it off to be summarized and preserved in perpetuity. Maybe you’ve got a family member who thinks it’s downright creepy that there’s a timestamped image of their face along with GPS coordinates nestled away in the EXIF data floating around out there somewhere, just because you snapped an innocent photo at their party and then freely gave it to an amoral technology company. Hooray, you’ve built a digital assistant to help organize your calendar, and now it sees that I responded ‘yes’ to the invitation to be in a place at a time to do a thing. Why does it need to know that about me? May I politely ask you to stop telling it that without my consenting to it?

Now, let me just nip this one in the bud: I’m going to fucking keelhaul the next person who calls me a Luddite for feeling this way. I’m not arguing that this technology should be unilaterally destroyed; I am arguing that we are collectively using it in the dumbest possible way, causing the most self-inflicted injury, and maximizing the amount of angst and suffering we’ll all have to contend with. I am angry at generative AI because it seems to be making us think and act like complete idiots.

But if you wanna talk Luddites, here’s something to chew on. The Luddites were 19th-century textile workers who opposed the introduction of machinery that automated what had traditionally been the tedious process of hand-finishing cloth. They didn’t enjoy the tedium per se; what they liked was the ability to earn a livable wage in exchange for enduring that tedium. Part of their argument was, although the machines were touted as being able to do the same job quicker and cheaper, the end product was of inferior quality due to the lack of skilled attention given to the work. They might’ve said that quality was something worth paying a little more and waiting a little longer for. I happen to agree with that, but it doesn’t mean I’m going to get all smashy about it.

I bet that somewhere in your closet or dresser you’ve got a shirt that you don’t wear anymore. Maybe it was nice when you bought it, but after one or two washes it started to pill, or it faded, or some of the stitching began to unravel. If it looked nicer, maybe you’d donate it to a thrift store. Then again, if it looked nicer you’d still be wearing it regularly. Maybe you should just throw it out; it’s not even the kind of fabric that’s good to wash your car with. You’re not even really mad about it, considering how cheap it was. Maybe you should just leave it where it hangs; that seems like the least sad option.

Do you ever wonder, had the Luddites not failed, if we’d be so awash in cheap garbage clothing today?

Is anybody else tired of living in a world filled with abject crap?

Live, work, and play

Toby made his first chair when he was twelve years old. It was built from scraps he found behind the barn, some weathered 2×4s and the remnants of a card table. It was a boxy kind of thing, no armrests, and not a single curved surface anywhere on it. But the cuts were square, the pieces fit together well, and it felt like something that might last for a while. He never cared much about the finish, never sanded any of the surfaces, he slathered it in Rust-Oleum red and put it into service at the desk where he did his homework.

He made another a few weeks later, and gave that one to his granddad as a gift. Pretty much the same design, same color, although this one was a little taller to better suit the recipient’s towering stature. Then he made another one for his best friend to sit on when they hung out.

In the years since then, Toby’s made more chairs than he can even recall. It’s easily over a thousand at this point. He’s got a whole workshop with a few staff helping him out. And they’ve branched out into other pieces too—end tables and three-shelf bookcases, those kinds of things. Toby’s shop has enjoyed a modest amount of regional fame and there’ve been plenty of orders to keep his crew busy.

He was recently interviewed by a student for the local middle school’s newspaper. Toby loved talking about his work—evidenced by the joyful smile peeking through his bushy salt-and-pepper beard—and he loved inspiring kids to set ambitious goals in their own lives. When asked what keeps him going, he replied simply: “Every day that I come in here and pick up my tools, I have the opportunity to give somebody in the world a comfortable place to sit.” He actually had to say it twice; the student was not the fastest with a pen and pad.

Across the shop, Lyle looked up just in time to catch Toby delivering that quote. He had just finished clamping a long wooden dowel into his well-worn lathe and was now hunting around his cluttered workbench for his safety glasses. He bristled slightly at the words his boss was essentially dictating to the kid. This was a wood shop; they made chairs. Ain’t nothing lofty or aspirational about that.

Lyle found what he was looking for, slipped the glasses over his eyes and flipped the switch on his lathe. The dowel came up to speed rapidly as the din of motor noise surrounded him like an old familiar friend. He picked up his favorite chisel and set to work. Back and forth along the length of the piece, sending swarf and chips of wood fluttering to the shop floor, he molded this raw bit of material into one chair spindle in no time flat. He switched the machine off and lifted his glasses to rest above the hairline of his taut man-bun. The spinning blur of the workpiece slowed, then stopped, and he unclamped it. In its place, a brand new dowel from a small stack. The chair being created here called for nine spindles in total, but he wasn’t counting them. It’d be done whenever the stack ran out.

If there was an encyclopedia article for “being in one’s element,” the picture would depict Lyle doing this work. Outside of his day job, however, his biggest problem was that he never finished anything. He also never really started anything either. His garage was filled with clutter—lumber for projects he thought he wanted to attempt, scraps of half-cut materials he “might need” for another job, a pile of experimental and practice cuts, and the occasional assembled piece brought 90% of the way to completion and then left to languish under spiderwebs and dust. He had a tendency to only pick up things that allowed him to spend time operating the lathe. Once the task progressed past the point of needing lathe work, his ambition for continuing dropped precipitously. He had at one time semi-seriously considered going into business for himself making custom baseball bats, but the closest potential customer was a low-A ball club about 100 miles away that he never actually worked up the nerve to contact. He didn’t even want that for himself; the world told him that he was supposed to want that.

In the five or so years he’d worked at the shop, Lyle never really had any lengthy or deep conversations with Toby. They had always been cordial enough—how’s the family, have you ever been camping, can you believe our team blew the playoff game—those types of chats. But nothing more involved than that. Lyle had always respected Toby and the way he chose to run the business, but he never quite wanted to get too into the weeds talking about orders and customer success and vision. That kind of stuff required way too much people-pleasing and time spent talking on the phone for Lyle’s tastes. It was frankly better for everybody that he stayed back behind the lathe churning out chair parts, table legs, even the occasional set of balusters. It was all the same from where he sat.

One of Toby’s earliest realizations during the infancy of the business was that, although he was skilled enough at every aspect of furnituremaking to build everything himself, he wasn’t passionate about many of the low-level aspects of the work. He’d barely been in business for two months when he hired Beau to take over sanding and finishing—Toby’s least favorite tasks of them all. Janice was hired to do shipping and receiving, another set of responsibilities that he couldn’t stand sacrificing time to. Lyle was probably the fourth or fifth hire once production started seriously picking up. The lathe was Lyle’s whole domain of expertise; he never had the slightest ambition to branch out to anything beyond that single passion. He was a literal artist when it came to turning wood, and Toby appreciated how reliably Lyle’s parts matched and fit together.

Above all, Toby remained the idea guy. The designer of the various product lines, the head of sales, the one-man public relations department and the self-proclaimed mastermind of the whole thing. He was always full of big ideas and big plans, and they all led steadfastly to a world filled with handcrafted furniture—his name and crest branded on the underside of each piece by Beau’s scorching hot iron. Toby made furniture for everyone and everybody, dammit.

As far as Lyle was concerned, Toby was a bit of a pompous, self-aggrandizing windbag. His wood shop made chairs, not cancer cures. Still, it was a pretty good gig for this small town on the Appalachian side of the Rust Belt and the work suited him well. All he ever wanted in life was to gradually shape wood into axially symmetric curves and forms. It was the closest thing he ever found to feeling like he had a purpose on this earth. He never even thought too hard about what all those finished pieces ended up going into. He just loved using the lathe, and he was a rare master at it. Lyle was born to be a woodturner, dammit.


Not everybody views shipping products as a personal value.

Here we come perhaps to the meat of why everybody seems to think everybody else is some kind of contemptible fool. People have certain core values—sometimes held subconsciously—and these can seem utterly bizarre to others who don’t share them. I’m starting to believe that many arguments about AI use (and probably a bunch of other things) boil down to people forming opinions and preferences from a set of values and principles that are incompatible with those held by the other party.

Put simply, Toby deeply values shipping things and providing value to society, and Lyle doesn’t. Lyle values expertise in a skill carried out with utmost care and craft, and Toby doesn’t. Toby’s ultimate dream, beyond the whole furniture thing, is to eventually make enough money—having never completely settled on what nebulous sum of money constitutes “enough” for him—that he can retire into peaceful old age. Lyle wants to keep doing what he does until his body finally gives out, and then to continue doing it for a little while longer. Any discussion between the two of them that brings these incompatible sets of values into tension is going to end up in some kind of argument that neither side will concede. And who would expect them to? These are the very foundations of their personalities.

This, right here, is why everybody is fighting all the time. We’re all trying to argue positions based on mismatched values. Not wrong values. Mismatched.

On the one side, you’ve got folks whose entire sense of self-worth and meaning is inexorably tied to the act of generating economic value for the benefit of outside parties. In the other corner, you’ve got people who aren’t primarily focused on creating anything of any appreciable value for others and who are motivated instead by some internal passion or belief. Perhaps they would be more likely to be viewed as selfish or childish for putting their own interests first. Any economic value that the latter group manages to produce is either a useful byproduct of that internal drive or just a straight-up accident.

Neither of these is wrong. Although the former certainly fits into capitalism’s mold a lot more readily. They are simply different.

When we visit sites like the LinkedIns or the Hackers News, we’re stepping into environments that are already disproportionately skewed toward the “generate economic value” side. LinkedIn is a venue for cosplaying a character who might be wise or insightful about business and that swole grindset hustle, and Hacker News is an echo chamber for technology enthusiasts who deliberately inserted themselves into orbit around a startup accelerator (Y Combinator).

It’s no wonder why a person in the “do what I like” camp might feel alienated and outnumbered in spaces like those. It’s not surprising that voicing an opinion that runs counter to the broad themes of “generate economic value” would sow discord. It doesn’t mean anybody is wrong per se, but it does suggest that somebody should at least try to point out a possible reason why this keeps happening.

Having said that, as a proud citizen of the “do what I like” tribe, I do feel qualified to point out exactly what bothers me about the other side’s talking points. Feel free to take it or leave it.

You ever notice how the big selling point of AI is how much more productive it makes everybody? “Saves time,” “Lets me do 10× what I did before,” “I can work on five different things simultaneously,” and so forth. I’m skeptical of the exact numbers—especially that 10× figure—but I’m broadly convinced that the use of AI is actually making some people faster at certain tasks. Sometimes in a moment of vulnerable transparency you’ll hear things like “It makes me more tired than I used to be at the end of the day.” Sign me up. /s

You know what I never hear in these discussions? “It makes me happier.” “It gives me a sense of fulfillment and meaning.” “It has provided me with higher income or a shorter workday.” “I am finally respected.” You’d think that if somebody attained these, that would get top billing over some olive-drab tripe about productivity and KPIs… right? Or am I just supposed to take it as implied that productivity naturally and inevitably leads to all those other good things, and clearly we all know this, so why should it need to be stated explicitly?

I dunno, man. I don’t think I see it.

Another piece of unpleasantness that I thought we as a species had outgrown involves any metric focused on lines of code (LoC). This is a measure of exactly what it sounds like—the number of lines of program code you can scroll through on the screen. We’ve been trying for decades to finally eradicate this profoundly stupid measure of engineering productivity, and now it’s back. Hooray, LLMs can poop out more LoC than humans can. Work harder, not smarter, I guess.

If you’re working on a software project and you find that the requirements call for there to be 25 whoozits on a particular page of the app, there are (broadly) two ways to achieve that:

1. Write out the code for each whoozit individually, copy-pasted 25 times with minor tweaks.
2. Write the whoozit code once and run it in a loop 25 times.

The first option requires ≈25 times the LoC compared to the second option. By the way some people are evaluating AI performance, the author of the first option is 25 times more productive. But if you ever find yourself in the position of needing to go back after the fact and change something on all the whoozits, you’d probably appreciate the original author a lot more if they kept things lean and mean with the second, loop-based option.
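To make that concrete, here’s a minimal Python sketch of the two options. Everything here—the whoozit markup, the function names—is hypothetical, invented purely for illustration:

```python
# Option 1: the markup is written out by hand for every whoozit -- ~25x the LoC.
def page_copy_paste() -> list[str]:
    return [
        "<whoozit id=1>",
        "<whoozit id=2>",
        "<whoozit id=3>",
        # ...22 more near-identical lines in the real thing...
        "<whoozit id=25>",
    ]

# Option 2: write the markup once and loop -- one line to touch later.
def page_loop() -> list[str]:
    return [f"<whoozit id={i}>" for i in range(1, 26)]
```

If the whoozit markup ever needs to change, the loop version is one edit; the hand-rolled version is 25 coordinated edits, each a chance for a typo.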

“But wait!” Oh, great, this guy again. “AI is the perfect solution here because it does all that rote correction for you! It shouldn’t matter which way the code was structured because coding agents can make the change just as easily either way.” Sure, and if you want to wear clean clothes, you could either do the laundry or you could throw your wardrobe away after one wear and have fresh replacements drop-shipped from China. Rob Rhinehart, the mind behind the Soylent meal replacement drink, apparently did exactly this. The Luddites would probably have had some choice words for him. The two approaches are only equivalent to maintain if you are willing to ignore the massive Rube Goldberg machine of complexity that one side requires that the other side doesn’t.

I don’t know if some people just flat-out don’t understand where care and craft come from. I’m not even sure I can put into words exactly how strongly some of us feel their pull in all facets of our lives. It is kind of like being in a position of being asked by society to fell a metaphorical tree, being given the option of using a chainsaw or an axe, and sometimes deliberately choosing the axe despite it being more difficult and time-consuming in every measurable way. I could’ve cut through that trunk easily with the chainsaw, yet I chose to do it the harder way for a reason that was deeply important to me, and I succeeded at it. The tree coming down was the part that was important to society, but the hard axe work was important to me. Any ol’ schmuck could’ve bungled their way through a tree trunk using a chainsaw, but I saw no merit in taking that easier path. Just look at how long this article is, for God’s sake. And you’re just reading it—I made myself write it! The challenge of doing things the hard way has led me to some of my proudest achievements in life.

You can perhaps see why I might respond poorly to losing the axe option.

The act of solving meaningless puzzles can itself be meaningful to individuals who hold challenge or the accumulation of skills and knowledge as core personal values. That is how their reward systems are wired. Burning cycles on a challenging problem with limited-to-no broader practical utility is a noble pastime for them. And I guess now I gotta reevaluate my moral objection to the proof-of-work consensus algorithms that the blockchain bros are always touting. Even mindlessly repetitive work—for instance renaming a file or source code object then patching up all references to the old name to use the new name instead—can serve as an easy brain hack for a quick dopamine hit. Separately from that, the act of manually cleaning up bad patterns reinforces what they look like and might help good patterns stand out by comparison in the future. Sometimes we get sent down a fruitful rabbit hole during one of these mental jaunts, and that can pay dividends later on. That’s what pushed me to sit down and properly learn AWK; I experienced one too many instances of needing to reformat some hideous data and desperately wanted a better way of tackling it. This can backfire, however: I’m sure you have encountered the type of person who appears to spend all their time reinstalling different Linux distributions or endlessly tweaking their Oh My Zsh plugins to no specific end—not even as practice or meditative work. It takes a certain self-awareness to notice the point where pulling a given intellectual thread stops serving a purpose and devolves into aimless fidgeting.

I travel through the world at speeds approaching 30 mph all the time and nobody cares. Usain Bolt does it and he gets an Olympic gold medal. The difference is that I use a Toyota, and he uses his freakin’ legs and feet. Are we suggesting he’s wrong for choosing to do something challenging that runs afoul of what society views as maximally efficient?

“But the company—” Will you shut up. Companies value velocity and new launches and shipping first at all costs because of course they do; it’s table stakes. Speed of delivery is basically the number one corporate value of every organization whether they admit to it or not. They’ll say they value experimentation, but not as much as shipping. Or they’ll claim to be really into investing in people and fostering growth, but not to the extent that it should let the schedule slip. Maybe they’ll say something about embracing failure, unless it’s a failure to get the app redesign in users’ hands before the end of the fiscal quarter. Some places really, really mean it when they say “ship or GTFO,” and those organizations have a tendency to grind employees into dust and then burn the dust for fuel. But going purely on outcomes, it sure seems like most places don’t actually value velocity as much as they claim to, just like all the other corporate values that pad out their Careers page.


“Wait a second, I need a pen.” Janice rifled through the top drawer, uncharacteristically frazzled. “Non-ST elevation… myocardial? Speak English!” She was not the type of person to lose her composure on the phone. “Oh my god.”

Lyle heard the news along with everyone else in the shop: Toby had had what they called a “mini” heart attack and was being treated at the medical center in the city. He was awake and conversational with the nurses, but it could have been much worse under different circumstances. Janice continued her announcement, something about prayers and taking time if you need it, but Lyle retreated inward. A flurry of thoughts ran through his mind.

Toby was the kind of guy who clearly looked like he took care of himself. He was among the most physically fit people in the shop—built like a tank—probably capable of running a 10K, beating the living hell out of somebody, and eating a tomahawk ribeye all before noon. Still, he was always worked up about something or other, the lines on his furrowed brow permanently etched like carved mahogany, and he never seemed to be completely at peace. As readily as the man laughed at life’s trials and tribulations, the moment the laughter faded there was a palpable worry in his eyes. There was so much he needed to do, never any time to do it, and this little medical setback was probably the last thing he needed right now.

For no reason that he was able to discern, Lyle remembered his own garage at home. He briefly imagined a world in which he had died unexpectedly and how his family might handle dealing with all of his accumulated junk. He thought about the walls and shelves and piles of stuff he tossed into the attic or basement and how his grieving relatives would have to make all the decisions that he could never bring himself to make in life—what to organize, what to donate, what to save, what to destroy. The idea of forcing the consequences of his own inaction on other people, people he loved, filled him with guilt.

A second wave of guilt washed over him when he realized that he was now thinking of himself during a time when his friend needed— Wait, was Toby even his friend? They’d worked basically shoulder to shoulder for almost six years… What were they in each other’s view?

He wasn’t sure which of those thoughts finally did it, but the tears began to well up in his eyes as he returned to his workbench.

Skill issue

Kenny Rogers once sang, and I am paraphrasing a bit, “You got to know when to GPT 5.2 Codex, know when to Gemini 3.1 Pro, know when to Opus 4.6, and know when to DeepSeek V3.” Knowing all the minute differences between all of these models is the key to your success. If you don’t pick the right one for the job, the game is over before it even starts!

Sometimes it’s better to prompt an LLM to think hard about this explicitly. But other times that irrelevant context messes it up more than if you had left it out. You might occasionally improve a model’s output accuracy by threatening it. But don’t threaten it so much that it turns around and tries to blackmail you.

Sometimes you need “a source form for workflows, Formulas, in TOML format, which are ‘cooked’ into protomolecules and then instantiated into wisps or mols in the Beads database.” Something’s “cooked” alright, and I suspect it’s brains. You’re using a reasoning model, right? You gotta remember to give it enough context. No, way more than that. But stay within your token budget. Keep the important stuff far away from the system prompt at the start of the context window. But also keep it far from the end. Run precisely the correct number of agents at all times. It’s basically a thinking model, if you squint!

Not everybody wants to keep up with the breakneck pace of industry press releases, social media anecdata, and superstitious horseshit.

If you didn’t get good results, or the techniques just didn’t work for you, a common retort is that you must’ve done something wrong. You didn’t pick the right model or mode, didn’t include the right shibboleths, something was wrong with the prompt or the context, you didn’t put your rally cap on the right way… It’s gotta be you that’s the problem. Because it works great for everybody else!

There exists a certain personality type for whom it is very, very important that they are right. But more than that, they need to be seen being right. Their conversations will tend to follow an arc of proclaiming that Thing A is better than Thing B, laying out an extremely detailed case for Thing A, and then waiting around for that setup to pay off by Thing A beating Thing B by some objective metric. When this person is right, it can be a vindicating experience for them. When they are wrong, they’ll tend to suffer a kind of very specific amnesia about what they said or believed in the past.

It is perhaps similar to the type of person who constructs a whole narrative around the idiotic parlay they just wagered $50 on. If all the legs hit and the bet pays off big, this person will claim that they knew exactly what they were doing and that their analytical skill, deep understanding of the game, and genius-level intuition were what allowed them to see the outcome so clearly. When they lose, however, it’s dismissed as somebody else’s fault—the player choked, the official made a bad call, the field was slick, etc. For these people, betting and winning alone are not enough to bring a sense of satisfaction. They must be seen placing that bet and winning. It is important for bystanders to see that they are Very Smart. The same kind of braggadocio might be seen in a pool player calling a complex shot before sinking it. But the pool player is respectable because it really is their skill that allows them to call a shot and then execute it.

Betting and winning is hard to do—so some folks simply go all-in on whatever seems to be winning right now. This, perhaps, may go a long way toward explaining the relationship that many people now have with national politics in certain countries. I suspect that a fair chunk of the AI-boosting discourse online isn’t being written by people expressing a sincerely-held belief or opinion about anything pertaining to AI itself, but rather by people who want something they can point back to when/if AI takes over the world so they can say, “Look, I knew it was going to happen and then it happened! I am Very Smart indeed!”

Occasionally you’ll get the person who claims that one of the models was able to completely reimplement an entire piece of obscure software from nothing but three screenshots of its UI. This will come with fanciful claims like a full test suite or, my personal favorite, complete binary compatibility with the data file format that the original software produced. “May I see it?” the reply asks with an almost Superintendent Chalmers–like sincerity. “You just gotta try it yourself. Come up with an idea, just prompt something on your own. Join us.”

You know what else people will do when they want to appear like they’re cool or smart? They’ll lie. It’s like we somehow forgot about the very first row of the Periodic Table of Human Social Behaviors.

I was eleven years old and in elementary school when South Park first aired. As far as I could tell, every kid in the class was watching it except for my sheltered ass. But I wanted to seem cool and part of the “in” group, which led to my contributing the following one-half of a conversation: “Yeah, I saw South Park last night… Oh yeah, that was mad funny… Mm-hmm, I love Carmen too. She’s probably my favorite.” About two years later, having learned nothing from this experience, I had substantially the same conversation regarding Eminem. For the life of me I can’t remember who I said that to, but I clearly remember saying it.

Carl Sagan is quoted as saying “extraordinary claims require extraordinary evidence.” I bet he never fabricated a story about watching a cartoon show to fit in with a bunch of kids who were probably also lying about having seen it.

Reimplementing the wheel

Here’s a question that I don’t think anybody would be able to even estimate an answer to: How many different implementations of a login page do you suppose humanity has collectively written?

Consider every website that ever made you register an account using a user name and a password. Some of them offer single sign-on with services like Google or Facebook, but they’ll generally have some kind of password fallback for people who don’t have or don’t want to use those services. Some will send two-factor codes via SMS (and I hate that they do), others send a “magic” sign-in link to an email inbox. Some integrate with biometric authentication devices, and others think that security questions and ridiculous password restrictions are still really swell ideas. They’ll all gladly send you a password reset email that expires in ten minutes and arrives in twelve.

How many development teams sat down and, as one of the first tasks in the creation of a greenfield project, built and tested yet another user registration and login/logout flow? How many of these login pages are essentially identical, independently created in isolated vacuums, and jealously guarded as priceless nuggets of intellectual property by the organizations that commissioned them? Why is writing a login page such a universally shared experience that it has been immortalized in the lyrics of a Jonathan Coulton song? Who sits down first thing in the morning with a cup of coffee and says, “Oh boy, today I get to write a login page!”

In other engineering disciplines, the kinds where you have to behave like a credentialed adult at work, there are standards bodies that assemble volumes upon volumes of knowledge about the professionally accepted ways of doing things. If you wanna run a beam from here to there made out of such-and-such material, it can hold this much weight. This is the bolt to fasten it with, and here is the torque specification. Stay within the parameters and it’s unlikely the structure will fall down. These standards are written by some of the most experienced and knowledgeable people on the topics, and deviating from them is a surefire way to build something that will eventually be described using the word ‘boondoggle.’

The computer industry actually does have standards bodies, if you can believe that. These tend to focus on hardware and networking layers, and thanks to these standards you can be reasonably sure that any gadget you might buy will successfully connect to your Wi-Fi network or pair with your Bluetooth speaker. Thanks to things like the USB-C standards, we’re starting to no longer need to worry about which charger came with which device anymore. Standards made that possible.

Software, on the other hand, can be a total free-for-all. Wanna write a login page? Only a fool would build one from scratch; better to find an existing open-source library that does the heavy lifting. You could head over to npm and browse through the 1,000+ projects that match the search term login. Or bop on over to PyPI and spend some time examining 10,000+ of them. Why has nobody developed one universal standard that covers everyone’s use cases?
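To drive home just how mundane the exercise is, here is the password-verification core that countless teams have independently written, sketched with nothing but the Python standard library. The function names are mine, and a real system would bolt on sessions, lockouts, rate limits, and those ten-minute reset emails—but this is the nugget of “priceless intellectual property” at the center of it all.

```python
# The ten-thousandth hand-rolled login flow, reduced to its essence.
# All names here are illustrative, not any particular company's code.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable hash from a password using PBKDF2 (stdlib only)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Compare a candidate password against the stored hash in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

# Registration stores (salt, digest); login re-derives and compares.
salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("hunter2", salt, stored)
```

Twenty-odd lines, and yet every organization keeps a jealously guarded variant of it somewhere in a private repo.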

You won’t need to go around on the corporate merry-go-round for too long before encountering a certain type of engineer. This person tends to score closer to the “do what I like” end of the spectrum, which wouldn’t be so bad if they weren’t also a narcissistic man-child. This is the type of person for whom no existing solution is adequate, so only a custom build will solve the challenging needs of the business. It will adhere to modern best practices and repeat no past mistakes; it will be built from the freshest technology with no legacy baggage; it will be grand and complicated, and it will succeed and look good on their résumé. These are the kinds of people who fancy themselves as a Ken Thompson, Rob Pike, or Steve Wozniak—creating something novel and important to humanity—when in reality they’re just a garden-variety engineer, good enough at what they do to make a not-as-good version of something that already exists. Management permits this work, at least in places where management doesn’t know to question the intentions. Or when the engineer in question throws a calculated tantrum and threatens to leave if they don’t get their way.

AI assisted coding—and particularly vibe coding, where all development occurs via prompting and the underlying code is not reviewed at all—ticks a lot of these same boxes. If the person prompting the LLM needs a not-as-good clone of something that’s been done a thousand times before, bam, there it is. Why do they need it? Doesn’t matter, there it is. The prompter has now assumed the role of the manager who can’t see if or how their subordinate might be distorting the facts of the project’s implementation.

If they wanna be a manager like that, they should just go into management. It almost certainly pays better.

In his essay “Machines of Loving Grace,” Anthropic CEO Dario Amodei lays out his visions of the role humanity will play in an age of what he calls “powerful AI.” He goes out of his way to not call it “artificial general intelligence” because that would send an unwelcome Google Alert to his investors, or something. He rejects the notion that these systems will be relegated to chewing through data looking for patterns, and instead proposes a future where the AI calls the shots and humans are akin to its graduate students. That’s what he said; those were his actual words:

If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on.

Dario Amodei, “Machines of Loving Grace”

Ah yes, that most sought-after career, graduate student. Can’t wait to spend my stipend on a pallet of Cup Noodles and a car with hand-crank windows. (I do unironically enjoy cars with window cranks as a nostalgia thing, but I also like having choice in the matter.) How can both of these visions of the future be true? How are humans supposed to orchestrate massive swarms of agentic AI to do all their bidding while simultaneously following the orders of the big boss AI as it demands that they stir some stuff in a flask? Which of these two camps is wrong? Who is the liar here?

Personally, I don’t know how to fix the software engineering profession because I can’t even articulate exactly what’s broken about it. But I think that the insistence on continually yelling “build, build, build!” while producing exactly the same thing that countless organizations have already built and hoarded for seemingly no reason is a fairly important symptom.

“Code is our most valuable asset, but also our biggest liability in terms of maintenance burden.” “The code is proprietary and secret and must never leak, so we install a device management profile that disables everybody’s USB ports. But we also require all staff to upload the entire codebase to the hosted model du jour so that it can make even more code that we don’t read or value.” “Slap a copyright notice on every file, despite the fact that LLM output is probably not actually copyrightable.” “Our product is superior to our competitors’ because we were the ones who vibed it into being instead of them.” This industry seems manifestly sick.

If AI coding assistants can do a task well, it probably isn’t a novel task. Why have we set things up in such a way that we need to expend so much time and energy doing the same thing over and over without contributing the fruits of those efforts back to the commons? Why is “paying it forward” such a nonstarter to companies who got where they are on the backs of open-source developers who paid it forward in their own time? Why was from stackoverflow import quick_sort a joke somebody made instead of a sincere attempt at improving engineering standards?

The next time you find yourself in a coding interview being asked to solve some problem that somebody else already solved, pause to consider if part of what they’re testing for is your ability to not let that bother you. Maybe they want to ensure that their potential hires don’t yearn to be somebody more, doing something better, as those kinds of candidates might be perceived as flight risks.

The loading spinner covering the greyed-out login form appears, disappears, reappears, and disappears again. Behind the scenes an OIDC grant is handed off to the legacy SAML provider, which redirects through YouTube despite that acquisition happening 19 years ago, then via XML-RPC to a COBOL program that is only spoken of in reverent whispers. At every layer of the stack, nobody knows what’s going on.

So no one told you life was gonna be this way

Not everybody wants to choose between “adapt or get left behind.”

Perhaps we are indeed facing a fundamental shift in labor on a scale not seen since the Industrial Revolution. If so, the worker pool must upskill, reskill, uproot, reinvent, and it will all work itself out… eventually. That’s not much solace when you’re 40 years old, 40% of the way through your working years, facing the existential threat of having entire sectors of the workforce vanish in the span of a few quarters. What is a person supposed to do right now, prepare to spend the next 10,000 hours as an electrician’s apprentice? Will the banners hanging over the for-profit bootcamps be hastily modified to say “Learn to code weld!”? Will I find myself driving from town to town in a work van, rebalancing garage doors or doing warranty repairs on major appliances? How many of us will be competing for that job, anyway?

This issue cuts through basically the entire white-collar economy to a certain extent—from programming to publishing and government to graphic design. The specifics of each case are different, but the themes are quite similar. It’s demoralizing to spend all day around people—ostensibly peers—and hear them proclaim that they fundamentally don’t respect the thing you have devoted your professional life to. It’s basically impossible to feel a sense of psychological safety when higher-ups all appear to be eyeing ways to devalue your skills. It’s hard to plan for the future when the entire trajectory of that future rests in the hands of some yahoo who views their fellow humans with contempt by default.

I don’t actually believe that AI is coming for everybody’s jobs, by the way. I rather believe that greed and mismanagement are coming for the jobs, and that workers are being laid off as a direct consequence of leadership failures. AI is simply a convenient smokescreen for these feckless cowards to hide behind. See, if a CEO announces layoffs because the company’s earnings are in the toilet, that CEO looks incompetent. If a CEO announces a reallocation of resources to better leverage the rapidly evolving landscape of workplace automation and AI, they sound Very Smart. Who doesn’t want to sound Very Smart? Here’s your token budget, now go do three people’s work.

As much as movies and other narrative works would make you believe otherwise, a person’s life doesn’t hinge on any single pivotal decision that permanently makes or breaks their future. What it does have is a constant stream of hundreds of tiny decisions made over the course of each day that move the person slightly closer or further from their goals. These micro-decisions often require very little consideration to make, they are often trivially reversible, and they are generally insignificant in the long run.

Choosing to embrace or reject AI for any given task is not an irreversible decision. You could try to ignore everything about it until you feel it’s time to change your mind, cram real hard for a few days, and come out ahead of the median employee. This is approximately how I learned systemd. I know you are capable of this, because—among other things—you had the attention span to get this far into the article. Our choices about AI today are not a “last helicopter out of Saigon” type of situation. You don’t have to listen to people who want you to do something you’re not ready to do yet. If somebody at work is scrutinizing your AI engagement metrics, you can use tokens without using the tokens, if you know what I’m saying. Log file analysis, as one example, can get a person pretty dang high on a usage leaderboard. There are still engineers out there who are choosing to wait and see where this roller coaster ultimately goes.

If it’s not painfully obvious from everything I’ve written here, I do not use AI coding assistants. For a while I steadfastly refused to use any LLM at all, but I have softened my stance since then. I came to the realization that, for most of my career, I was perfectly okay with typing a search query into Google, clicking the first Stack Overflow result, and copy/pasting a few lines of code from the highest-voted answer on the page. I would still take the time to try to understand how that code worked and adjust it to match my own style and program flow, and I felt no shame in having done any of that. It was indeed faster than trying to find the information in the source documentation.

That’s not so different from typing the same query into ChatGPT and copy/pasting a few lines of code from its response. It takes about the same amount of time and effort, it doesn’t appear to make me any lazier or dumber, and since I never fully trusted anything I read on Stack Overflow I was well primed to scrutinize the model’s possibly-hallucinated answers in the same way. Once I warmed up to doing this, I started preferring ChatGPT to Google and Stack Overflow for certain kinds of queries. I suppose I’m causing a kind of harm to Stack Overflow by no longer giving them the traffic—and that harm is augmented by the fact that ChatGPT was trained to produce its responses by aggressively scraping Stack Overflow answers en masse—but I would be more sympathetic to the site’s plight if their moderators and power users hadn’t spent fifteen years making me feel unwelcome for daring to ask bad questions in good faith.

If your only use of AI today is “Google Search but it works,” I am certainly in no position to criticize.

AI may very well be a clear and obvious strategy to break myself out of lifelong self-defeating patterns of never starting and never finishing things. But at the same time, it may blow right past a legitimate reason why I stand in my own way sometimes. My quirks regarding productivity may be part of a self-regulating mechanism—by requiring significant effort to implement an idea, it ensures that only ideas meeting a high enough standard of quality make it to the development or completion stages. I would much rather make one good thing than dozens of pieces of throwaway junk. And I would much rather live in a world where I am held accountable for my actions and decisions, rather than always running back to an LLM to clean up a mess that I shouldn’t have allowed myself to create in the first place.

Perhaps I should open my mind to the next logical step, which I suppose would be having an LLM review my code to find potential bugs or edge cases that I hadn’t considered. Personally, I’m not there yet, for a multitude of reasons that I’ve touched on throughout this article. Mostly, I think I don’t want to risk becoming dependent on it, to start getting sloppy in my work with the assumption that the AI will catch whatever I missed, because I’m still not convinced that what we have today is always going to be available in an affordable and useful form once things in the industry aren’t so frothy. When the world feels more stable, I suppose I’ll give it another look.

And that is perfectly okay.

You’re allowed to change your mind in the future. If the AI landscape changes, if your personal circumstances change, if you manage to find a balance that feels right to you, there is no shame in saying “I thought that then, and I think something else now.” That’s a measure of personal growth, which seems to be a foreign concept to some people. If you’re not embarrassed by something you believed in the past, you’re not growing.

The day may come when I look back on this article and feel a deep sense of shame for the person I used to be. Perhaps some of this is overly critical or even cynical (more than that, I’m pretty sure a lot of this stuff is actual Cynicism—the ancient Greek philosophy), but that’s how I currently see it. I primarily wrote this to make sense of my own thoughts, and to try to figure out how a person like myself might navigate some seemingly rough waters ahead. It’s pretty unlikely that I’ll ever want to point back here and say “Aha! I was right! I am Very Smart for thinking and saying that!” I don’t want to be right; I just want to be happy.

I do think it’s a real shame to paper over all forms of human expression with something that is decidedly not human and isn’t even fun to consume or engage with. Would it be nice if all this stuff shrank back to a reasonable scale, didn’t suck all the oxygen out of every conversation, didn’t send shock waves through global economies, and wasn’t rife with grifters and assholes? Certainly. Will we get to wherever we’re ultimately going in a gentle, orderly fashion? Hell if I know. It’d sure be nice if we could.

The one thing I know is that I’m no longer willing to listen to those who want to tell me that my values are flawed or my way of being is wrong. I don’t think you should either, if that is how you feel about it. So, you hereby have my permission to keep doing what you’re doing (or not doing). I will caution, however, that not every peer, manager, or employer understands and agrees with this. So you might eventually find yourself having to make a decision or two, and that can feel consequential and scary. I won’t tell you what to do, aside from this one thing: Don’t doubt yourself.

Keep the axe in the toolbox

As an experiment, I asked ChatGPT to finish this article. It produced 716 words, including six instances of the word ‘drift[ing]’, which it evidently thought was important to include despite that word appearing nowhere else in the document. The text it produced wasn’t exactly bad, but it did feel deeply unsatisfying. (source)


Yeah, I think I’ll stick with my own voice, at least for the time being.


I did want to take a moment to thank everybody who talked with me as I was compiling my thoughts on these topics. I won’t list every name because I’m sure I’ll omit somebody and offend them, so instead I’ll omit and offend everybody all at once. But just know that I’m genuinely grateful to each and every one of you for sharing your insights, and I hope I took from those conversations what you hoped I would.
