2025-06-19

Average

I managed to survive the rise and fall of the blockchain without writing a single goddamn word about it. So it is with a heavy heart I take to the keyboard this time, because as you probably surmised upon reading the B-word, this post is me finally waving my white flag and writing something about AI. I'm sorry, and I wish things could be different, but this is apparently what the world wants and I am powerless in the face of it.

I've been dark on social media for a few months now, and it's been treating me fairly well. This wasn't my first rodeo, and I actually knew what to expect: withdrawal symptoms followed by euphoria followed by the Internet slowly seeping back through every tiny crack you mistakenly left open, and finally the realization that a well-curated feed is probably healthier than randomly stumbling across the vast expanse of human idiocy guided by nothing but desperate newsrooms and their algorithms. I've been trying to subsist on a diet of RSS and public broadcasting (which has honestly been pretty okay thanks to SVT being a lot better about not reporting on Donny Drump's every burp and fart this time around) but RSS and public news is all protein and no carbs. I entered rabbit starvation within about a month. Cutting the silly shit of your own choosing out of your life really doesn't result in that renewed energy for side projects and hobbies you might have expected, it results in reading fucking reddit on the fucking toilet. And, I figure, better the devil you know! So I'm going to put the quote posts, sardonic quips, and thought-leader flamewars back into my daily media diet somehow. How that happens and when that happens is still undecided, but at least I feel like I'm in good enough shape to enter the Thunderdome again.

I have a confession to make, however. In my lethargic, meme-deprived state, I would sometimes scroll... LinkedIn.

Reader, that was a mistake.

Oh, the Linkydink, perennial scourge of sanity

I don't think I'm inflating my numbers when I say that fully half of that execrable feed is somehow about AI. Look, there's some sales schmuck pushing their new vibe-coded product. And over there you can see someone quantifying how much more "productive" they are as an exact percentage, decimals and all. Sci-fi speculation about AGI, with a straight face. Confident, swaggering predictions about which features are just around the corner and about to change entire industries, or which class of workers is most likely to be rounded up and murdered by Joseph Schumpeter or Roko's Basilisk within the year. It's a speedrun of everything we've been through this last decade—cryptocurrencies, web3, self-driving cars—but somehow even more bombastic, even more inevitable, even more blink-and-you'll-be-left-behind, and it's all accompanied by a selfie because that's what gets the algorithm horny and DTF. The hype couldn't be more repellent to me if it were synthesized in a lab out of a mixture of pure spite and cursed organic matter harvested from Peter Thiel's perineum. I hate it. Hate it, hate it, hate it.

But as with all of these technologies, one has to hold one's nose and try to see beyond the putrid steam being vented by a thousand chipper LinkedInfluencers. What are the facts on the ground? Is it useful? If so, HOW useful? How much of the hype is warranted? Can I really make myself 61.22% more productive by learning to promptineer an engification model?

Wild-eyed conviction as the default

Tim Bray wrote the mic-drop piece on blockchains eight years ago, and he nailed it with this sentence:

But here’s the thing. I’m an old guy: I’ve seen wave after wave of landscape-shifting technology sweep through the IT space: Personal computers, Unix, C, the Internet and Web, Java, REST, mobile, public cloud. And without exception, I observed that they were initially loaded in the back door by geeks, without asking permission, because they got shit done and helped people with their jobs.

Am I seeing that happen with AI? Sort of, I suppose. But not the way Tobi Lütke or Steve Yegge think it's happening, where you glue a chatbot to an employee's computer and they instantly start doing The Macarena and ascend to a higher plane of existence. Left to their own devices, people will use it to do stuff they're not great at, or really don't enjoy doing. Write report emails or one-off shell scripts. Do fuzzy documentation searches, like "I know Rails has a method to generate ids from a model, what's it called again". Get help with naming things. (LLMs are actually amazing at that, it's like having a really weird friend who memorized the entire dictionary front-to-back.) You can also "pair" with an LLM in the sense that you can ask stuff like "is normalizing this data a good idea" and it might suggest another way of doing it, which could provide inspiration. These things can absolutely increase someone's output. Jury's definitely still out on letting the things write actual production code, which strikes me as kind of ominous? I mean, ChatGPT exploded onto the scene fully three years ago now, and we're still at the stage where the LLM tends to go "Sorry, you are correct, that won't work because..." every second try.

Remember: this is all being rammed down our throats at the speed of a hundred billion dollars. There are truly jaw-dropping sums being invested, and all that moolah needs to generate more moolah for Big Tech somehow. If recent history has taught us anything, it's that the people at the top aren't quite as sharp as we perhaps thought they were, and they're herd animals at their core. Modern management is all about anxiously eyeing everyone else, making sure you're wearing the correct color Patagonia vest and firing the proper percentage of people every year. Maybe another thousand layoffs will get you invited to a chat with the bros by one of Marc Andreessen's seven fireplaces? So much of the AI hype at the upper echelon of tech seems purely performative at this point. Your stock will suffer if you're not foaming at the mouth about AI, so you just adjust the collar of your $2000 button-down shirt, sigh, and start foaming.

Tobi's problems aren't your problems

Anyway, back to my LinkedIn feed! Is the state of AI really such that you have to adopt it wholesale, become a certified expert, or be left for dead? Well, if the CEOs want you to do it, ignoring it altogether might hurt your job prospects. Given today's labor market we shouldn't be surprised to see people standing astride the networks bellowing about their devotion to AI. As long as they're being rewarded, they're gonna keep doing it, and with the platforms boosting every signal about it, it's no wonder it feels ever-present. But I don't think most AI is meant for you, dear reader! You're on my blog, which means you're smart and attractive and an expert in your field. (Plus, honestly, you have some world-class glutes.)

We may in the end have to give a few of our anxious CEOs some credit. There's a train of thought that goes like this: LLMs are literal mediocrity machines. They chew through inconceivable amounts of text and code, and when you input something they will furnish that input with statistical averages which should mostly turn out okay, because the statistical average is basically okay. Now imagine you're the boss of, what, twenty thousand people? Well, you're already at the mercy of the law of averages. There's a bell curve there, and roughly half your employees are going to fall on the lower half of it. If you were to arm each one of your below-the-curve employees with a machine that magically turns their work into whatever the average is, your company should see a pretty impressive boost in productivity! It's hard to define who's on the bottom half, so might as well tell everyone to adopt their new chatbot tamagotchi in everything they do and see how it shakes out. The tech C-suite of today are mostly bean-counters, and this would make sense to them.

But, listen, would it make sense to you? Any company with fewer than, say, fifty employees should be incredibly wary of deploying too much AI. I've had people ask me what my "AI strategy" was and my brain instantly translates that to my "mediocrity strategy" which, at six employees, I really don't think I need. We should all aspire to be above average.

When I played football, I wanted to improve the things I was bad at. If I could have had a magic device that turned my left foot into an "average" foot instead of the useless club of meat I had on the pitch, I would have taken it in a heartbeat. But here's the thing: my game was rarely defined by what I could do with my left foot. It had a lot more to do with vision and positioning, and so the positions where I was deployed were defined by that. If the magic machine that made my left foot viable also turned my strengths into "averages" I would have been a lot less useful. The more you work on something, the better you become at it. Seems worth remembering in 2025.

Use it or don't, but the FOMO feels unnecessary

I haven't even touched on the energy use, corporate ethics, and confidentiality parts of the issue. Nothing is ever simple anymore. There are places where you can get a lot of value out of an AI. It can eliminate the blank-page problem, it can do some chores for you, it can name stuff, it can shore up things you suck at and don't want to improve. But I think it's wise to try to tune out the hype. Maybe these tools make a lot of sense to you, maybe they don't, but the ultimate arbiter of that is you and not Satya Nadella. Better to pit your own judgement against the tech, rather than that of a handful of signal-juiced billionaires and their vision of the future. You're in no rush. We've been through three years of it already and shit changes daily anyway. If it comes to pass, it comes to pass, and then you can adjust your cheap button-down shirt, sigh, and start foaming at the mouth like everyone else.