
And Another Thing: Pianos

I thought I had said everything I had to say about that Crush ad, but… I keep thinking about the Piano.

One of the items crushed by the hydraulic press into the new iPad was an upright piano. A pretty nice looking one! There was some speculation at first about how much of that ad was “real” vs CG, but since the apology didn’t include Apple reassuring everyone that it wasn’t a real piano, I have to assume they really did sacrifice a perfectly good upright piano for a commercial. Which is sad, and stupid expensive, but not the point.

I grew up in a house with, and I swear I am not making this up, two pianos. One was an upright not unlike the one in the ad—that piano has since found a new home, and lives at my uncle’s house now. The other piano is a gorgeous baby grand. It’s been the centerpiece of my parents’ living room for forty-plus years now. It was the piano in my mom’s house when she was a little girl, and I think it belonged to her grandparents before that. If I’m doing my math right, it’s pushing 80 or so years old. It hasn’t been tuned since the mid-90s, but it still sounds great. The pedals are getting a little soft, there’s some “battle damage” here and there, but it’s still incredible. It’s getting old, but barring any Looney Tunes–style accidents, it’ll still be helping toddlers learn chopsticks in another 80 years.

My point is: This piano is beloved. My cousins would come over just so they could play it. We’ve got pictures of basically every family member for four generations sitting at, on, or around it. Everyone has played it. It’s currently covered in framed pictures of the family, in some cases with pictures of little kids next to pictures of their parents at the same age. When estate planning comes up, as it does from time to time, this piano gets as much discussion as just about everything else combined. I am, by several orders of magnitude, the least musically adept member of my entire extended family, and even I love this thing. It’s not a family heirloom so much as a family member.

And, are some ad execs in Cupertino really suggesting I replace all that with… an iPad?

I made the point about how fast Apple obsoletes things last time, so you know what? Let’s spot them that, and while we’re at it, let’s spot them how long we know that battery will keep working. Hell, let’s even spot them the “playing notes that sound like a piano” part of being a piano, just to be generous.

Are they seriously suggesting that I can set my 2-year-old down on top of the iPad and take the camera from my dad to get a picture while my mom shows my 4-year-old how to play chords? That we’re all going to stand in front of the iPad to get a group shot at Thanksgiving? That the framed photos of the wedding are going to sit on top of the iPad? That the iPad is going to be something there will be tense negotiations about who inherits?

No, of course not.

What made that ad so infuriating was that they weren’t suggesting any such thing, because it never occurred to them. They just thought they were making a cute ad, but instead they (accidentally?) perfectly captured the zeitgeist.

One of the many reasons why people are fed up with “big tech” is that as “software eats the world” and tries to replace everything, it doesn’t actually replace everything. It just replaces the top line thing, the thing in the middle, the thing that’s easy. And then abandons everything else that surrounds it. And it’s that other stuff, the people crowded around the piano, the photos, that really actually matters. You know, culture. Which is how you end up with this “stripping the copper out of the walls” quality the world has right now; it’s a world being rebuilt by people whose lives are so empty they think the only thing a piano does is play notes.


Last Week In Good Sentences

It’s been a little while since I did an open tab balance transfer, so I’d like to tell you about some good sentences I read last week.

Up first, old-school blogger Jason Kottke links to a podcast conversation between Ezra Klein and Nilay Patel in The Art of Work in the Age of AI Production. Kottke quotes a couple of lines that I’m going to re-quote here because I like them so much:

EZRA KLEIN: You said something on your show that I thought was one of the wisest, single things I’ve heard on the whole last decade and a half of media, which is that places were building traffic thinking they were building an audience. And the traffic, at least in that era, was easy, but an audience is really hard. Talk a bit about that.

NILAY PATEL: Yeah first of all, I need to give credit to Casey Newton for that line. That is something — at The Verge, we used to say that to ourselves all the time just to keep ourselves from the temptations of getting cheap traffic. I think most media companies built relationships with the platforms, not with the people that were consuming their content.

“Building traffic instead of an audience” sums up the last decade and change of the web perfectly. I don’t even have anything to add there, just a little wave and “there you go.”

Kottke ends by linking out to The Revenge of the Home Page in The New Yorker, talking about the web starting to climb back towards a pre–social media form. And that’s a thought that’s clearly in the air these days, because another old-school blogger, Andy Baio, linked to We can have a different web.

I have this theory that we’re slowly reckoning with the amount of cognitive space that was absorbed by twitter. Not “social media”, but twitter, specifically. As someone who still mostly consumes the web via his RSS reader, and has been the whole time, I’ve had to spend a lot of time re-working my feeds over the last several months; a bunch of feeds had rotted away without me noticing, because those sites had shifted to posting updates as tweets.
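(Tangent: checking for rotted feeds is easy enough to script, if you want to save yourself the manual archaeology. Here’s a rough sketch of the approach, assuming the third-party feedparser library; the feed URLs are placeholders rather than my actual subscriptions.)

```python
# A rough sketch of a feed-rot checker, assuming the third-party
# "feedparser" library (pip install feedparser). The URLs below are
# placeholders, not real subscriptions.
import calendar
import time

import feedparser

FEEDS = [
    "https://example.com/feed.xml",
    "https://example.org/rss",
]

ONE_YEAR = 365 * 24 * 60 * 60

for url in FEEDS:
    feed = feedparser.parse(url)
    # Entries may carry a published and/or updated timestamp (as UTC
    # struct_time); collect whichever is present.
    stamps = [
        calendar.timegm(t)
        for e in feed.entries
        for t in [e.get("published_parsed") or e.get("updated_parsed")]
        if t
    ]
    if not stamps:
        print(f"dead? no dated entries: {url}")
    elif time.time() - max(stamps) > ONE_YEAR:
        print(f"rotted: nothing new in over a year: {url}")
```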

Twitter absorbed so much oxygen, and there was so much stuff that migrated from “other places” onto twitter in a way that didn’t happen with other social media systems. And now that twitter is mostly gone, all that creativity and energy is out there looking for new places to land.

If you’ll allow me a strained metaphor, last summer felt like last call before the party at twitter fully shut down; this summer really feels like that next morning, where we’ve all shaken off the hangover and now everyone is looking at each other over breakfast asking “okay, what do you want to go do now?”


Jumping back up the stack to Patel talking about AI for a moment, a couple of extra sentences:

But these models in their most reductive essence are just statistical representations of the past. They are not great at new ideas. […] The human creativity is reduced to a prompt, and I think that’s the message of A.I. that I worry about the most, is when you take your creativity and you say, this is actually easy. It’s actually easy to get to this thing that’s a pastiche of the thing that was hard, you just let the computer run its way through whatever statistical path to get there. Then I think more people will fail to recognize the hard thing for being hard.

(The whole interview is great, you should go read it.)

But that bit about ideas and reducing creativity to a prompt brings me to my last good sentences, in this depressing-but-enlightening article over on 404 Media: Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

A small publisher for speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions. […] “The problem with AI is the people who use AI. They don't respect the written word,” [founder Julie Ann] Dawson told me. “These are people who think their ‘ideas’ are more important than the actual craft of writing, so they churn out all these ‘ideas’ and enter their idea prompts and think the output is a story. But they never bothered to learn the craft of writing. Most of them don't even read recreationally. They are more enamored with the idea of being a writer than the process of being a writer. They think in terms of quantity and not quality.”

And this really gets to one of the things that bothers me so much about The Plagiarism Machine—the sheer, raw entitlement. Why shouldn’t they get to just have an easy copy of something someone else worked hard on? Why can’t they just have the respect of being an artist, while bypassing the work it takes to earn it?

My usual metaphor for AI is that it’s asbestos, but it’s also the art equivalent of steroids in pro sports. Sure, you hit all those home runs or won all those races, but we don’t care, we choose to live in a civilization where those don’t count, where those are cheating.

I know several people who have become enamored with the Plagiarism Machines over the last year—as I imagine all of us do now—and I’m always struck by a couple of things whenever they accidentally show me their latest works:

First, they’re always crap, just absolute dogshit garbage. And I think to myself, how did you make it to adulthood without being able to tell what’s good or not? There’s a basic artistic media literacy that’s just missing.

Second, how did we get to the point where you’ve got the nerve to be proud that you were cheating?


Antiderivatives

I’ve been thinking all week about this MacBook Air review by Paul Thurrott: Apple MacBook Air 15-Inch M3 Review - Thurrott.com (via Daring Fireball).

For a little bit of context: Thurrott has spent most of the last couple decades as The Windows Guy. I haven’t really kept up on the folks in the Windows ecosystem now that I’m not in that ecosystem as much anymore, so it’s wild to see him give a glowing review to a MacBook.

It’s a great review, and his observations really mirrored mine when I made the switch fifteen or so years ago (“you mean, I can just close the lid, and that works?”). And it’s interesting to see what the MacBook looks like to someone who still has a Windows accent. But that’s not what I keep thinking about. What I keep thinking about is this little aside in the middle:

From a preload perspective, the MacBook Air is bogged down with far too many Apple apps, just as iPhones and iPads are. And I’m curious that Apple doesn’t get dinged on this, though you can, at least, remove what you don’t want, and some of the apps are truly useful. Sonoma includes over 30 (!) apps, and while none are literally crapware, many are gratuitous and unnecessary. I spent quite a bit of time cleaning that up, but it was a one-time job. And new users should at least examine what’s there. Some of these apps—Safari, Pages, iMovie, and others—are truly excellent and may surprise you. Obviously, I also installed a lot of the third-party apps I need.

And this is just the perfect summary of the difference in Operating System Philosophy between Redmond and Cupertino.

Microsoft has always taken the world-view that the OS exists to provide a platform for other people to sell you things. A new PC with just the Microsoft OS of the day, DOS, Windows 3, Win 95, Windows 11, whichever, is basically worthless. That machine won’t do anything for you that you want to do; you have to go buy more software from what Microsoft calls an “independent software vendor” or from Microsoft itself, but they’re not gonna throw Word in for free, that’s crazy. PCs are a platform to make money.

Apple, on the other hand, thinks of computers as appliances. You buy a computing device from Apple, a Mac, iPhone, whatever, out of the box that’ll do nearly everything you need it to do. All the core needs are met, and then some. Are those apps best-in-class? In some cases, yes, but mostly if you need something better you’re probably a professional and already know that. They’re all-in-one appliances, and 3rd party apps are bonus value, but not required.

And I think this really strikes to the heart of a lot of the various anti-monopoly regulatory cases swirling around Apple, and Google, and others. Not all, but a whole lot of them boil down to basically, “Is Integration Bad?” Because one of the core principles of the last several decades of tech product design has been essentially “actually, it would be boss if movie studios owned their own theatres”.

And there’s a lot more to it than that, of course, but also? Kinda not. We’ve been doing this experiment around tightly integrated product design for the last couple of decades, and how do “we” (for certain values of “we”) feel about it?

I don’t have a strong conclusion here, so this feels like one of those posts where I end by either linking to Libya is a land of contrasts or the dril tweet about drunk driving.

But I think there’s an interesting realignment happening, and you can smell the winds shifting around what kinds of tradeoffs people are willing to accept. So maybe I’ll just link to this instead?


Electronics Does What, Now?

A couple months back, jwz dug up this great interview of Bill Gates conducted by Terry Pratchett in 1996, which includes this absolute gem: jwz: "Electronics gives us a way of classifying things"

TP: OK. Let's say I call myself the Institute for Something-or-other and I decide to promote a spurious treatise saying the Jews were entirely responsible for the Second World War and the Holocaust didn't happen. And it goes out there on the Internet and is available on the same terms as any piece of historical research which has undergone peer review and so on. There's a kind of parity of esteem of information on the Net. It's all there: there's no way of finding out whether this stuff has any bottom to it or whether someone has just made it up.

BG: Not for long. Electronics gives us a way of classifying things. You will have authorities on the Net and because an article is contained in their index it will mean something. For all practical purposes, there'll be an infinite amount of text out there and you'll only receive a piece of text through levels of direction, like a friend who says, "Hey, go read this", or a brand name which is associated with a group of referees, or a particular expert, or consumer reports, or the equivalent of a newspaper... they'll point out the things that are of particular interest. The whole way that you can check somebody's reputation will be so much more sophisticated on the Net than it is in print today.

“Electronics gives us a way of classifying things,” you say?

One of the most maddening aspects of this timeline we live in is that all our troubles were not only “foreseeable”, but actually actively “foreseen”.

But we already knew that; that’s not why this has been, as they say, living rent-free in my head. I keep thinking about this because it’s so funny.

First, you just have the basic absurdity of Bill Gates and Terry Pratchett in the same room, that’s just funny. What was that even like?

Then, you have the slightly sharper absurdity of PTerry saying “so, let me exactly describe 2024 for you” and then BillG waving his hands and being all “someone will handle it, don’t worry.” There’s just something so darkly funny about BillG patronizing Terry Pratchett of all people, whose entire career was built around imagining ways people could misuse systems for their own benefit. Just a perfect example of the people who understood people doing a better job predicting the future than the people who understood computers. It’s extra funny that it wasn’t thaaat long after this that he wrote his book satirizing the news?

Then, PTerry fails to ask the really obvious follow-up question, namely “okay great, who’s gonna build all that?”

Because, let’s pause and engage with the proposal on its own merits for a second. That’s a huge system Bill is proposing that “someone” is gonna build. Who’s gonna build all that, Bill? Staff it? You? What’s the business model? Is it going to be grassroots? That’s probably not what he means, since this is the mid-90s and MSFT still thinks that open source is a cancer. Instead: magical thinking.

Like the plagiarism thing with AI, there’s just no engagement with the fact that publishing and journalism have been around for literally centuries and have already worked out most of the solutions to these problems. Instead, we had guys in business casual telling us not to worry about bad things happening, because someone in charge will solve the problem, all while actively setting fire to the systems that were already doing it.

And it’s clear there’s been no thought given to “what if someone uses it in bad faith”. You can tell that even in ’96, Terry was getting more email chain letters than Bill was.

But also, it’s 1996, baby, the ship has sailed. The fuse is lit, and all the things that are making our lives hard now are locked in.

But mostly, what I think is so funny about this is that Terry is talking to the wrong guy. Bill Gates is still “Mister Computer” to the general population, but “the internet” happened in spite of his company, not due to any work they actually did. Instead, very shortly after this interview, Bill’s company is going to get shanked by the DOJ for trying to throttle the web in its crib.

None of this “internet stuff” is going to center around what Bill thinks is going to happen, so even if he was able to see the problem, there wasn’t anything he could do about it. The internet was going strong well before MICROS~1 noticed, and it just routed around them and kept going. There were some Stanford grad students Terry needed to get to instead.

But I’m sure Microsoft’s Electronic System for classifying reputation will ship any day now.

I don’t have a big conclusion here other than “Terry Pratchett was always right,” and we knew that already.


The Dam

Real blockbuster from David Roth on the Defector this morning, which you should go read: The Coffee Machine That Explained Vice Media

In a large and growing tranche of wildly varied lines of work, this is just what working is like—a series of discrete tasks of various social function that can be done well or less well, with more dignity or less, alongside people you care about or don't, all unfolding in the shadow of a poorly maintained dam.

It goes on like that until such time as the ominous weather upstairs finally breaks or one of the people working at the dam dynamites it out of boredom or curiosity or spite, at which point everyone and everything below is carried off in a cleansing flood.

[…]

That money grew the company in a way that naturally never enriched or empowered the people making the stuff the company sold, but also never went toward making the broader endeavor more likely to succeed in the long term.

Depending on how you count, I’ve had that dam detonated on me a couple of times now. He’s talking about media companies, but everything he describes applies to a lot more than just that. More than once I’ve watched a functional, successful, potentially sustainable outfit get dynamited because someone was afraid they weren’t going to cash out hard enough. And sure, once you realize that to a particular class of ghoul “business” is a synonym for “high-stakes gambling”, a lot of the decisions make more sense, at least on their own terms.

What always got me, though, was this:

These are not nurturing types, but they are also not interested in anything—not creating things or even being entertained, and increasingly not even in commerce.

What drove me crazy was that these people didn’t use the money for anything. They all dressed badly, drove expensive but mediocre cars—mid-list Acuras or Ford F-250s—and didn’t seem to care about their families, didn’t seem to have any recognizable interests or hobbies. This wasn’t a case of “they had bad taste in music”, it was “they don’t listen to music at all.” What occasional flickers of interest there were—fancy bicycles or golf clubs or something—were always more about proving they could spend the money than about wanting whatever it was.

It’s one thing if the boss cashes out and drives up to lay everyone off in a Lamborghini, but it’s somehow more insulting when they drive up in the second-best Acura, you know?

I used to look at these people and wonder, what did you dream about when you were young? And now that you could be doing whatever that was, why aren’t you?


Implosions

Despite the fact that basically everyone likes movies, video games, and reading things on websites, every company that does one of those seems to continue to go out of business at an alarming rate?

For the sake of future readers, today I’m subtweeting Vice and Engadget both getting killed by private equity vampires in the same week, but also Coyote vs Acme, and all the video game layoffs, and Sports Illustrated becoming an AI slop shop and… I know “late-stage capitalism” has been a meme for years now, and the unsustainable has been wrecking out for a while, but this really does feel like we’re coming to the end of the whole neoliberal project.

It seems like we’ve spent the whole last two decades hearing about how something valuable or well-liked went under because “their business model wasn’t viable”, but on the other hand, it sure doesn’t seem like anyone was trying to find a viable one?

Rusty Foster asks What Are We Dune 2 Journalism? while Josh Marshall asks over at TPM: Why Is Your News Site Going Out of Business?. Definitely click through for the graph on TPM’s ad revenue.

What I find really wild is that all these big implosions are happening at the same time as folks are figuring out how to make smaller, subscription-based coöps work.

Heck, just looking in my RSS reader alone, you have Defector, 404 Media, Aftermath, Rascal News, 1900HOTDOG, and a dozen other substacks or former substacks. Achewood has a Patreon!

It’s more possible than ever to actually build a (semi?) sustainable business out there on the web if you want to. Of course, all those sites combined employ fewer people than Sports Illustrated ever did. Because we’re talking less about “scrappy startups”, and more “survivors of the disaster.”

I think those Defector-style coöps, and substacks, and patreons are less about people finding viable business models than they are the kind of organisms that survive a major plague or extinction event, having evolved specifically around increasing their resistance to that threat. The only thing left as the private equity vultures turn everything else and each other into financial gray goo.

It’s tempting to see some deeper, sinister purpose in all this, but the Instant Pot wasn’t threatening the global order, Sports Illustrated really wasn’t speaking truth to power, and Adam Smith’s invisible hand didn’t shutter everyone’s favorite toy store. Batgirl wasn’t going to start a socialist revolution.

But I don’t think the ghouls enervating everything we care about have any sort of viewpoint beyond “I bet we could loot that”. If they were creative enough to have some kind of super-villain plan, they’d be doing something else for a living.

I’ve increasingly taken to viewing private equity as the economic equivalent of Covid; a mindless disease ravaging the unfortunate, or the unlucky, or the insufficiently supported, one that we’ve failed as a society to put up sufficient public health protections against.


“Hanging Out”

For the most recent entry in asking if ghosts have civil rights, the Atlantic last month wonders: Why Americans Suddenly Stopped Hanging Out.

And it’s an almost perfect Atlantic article, in that it looks at a real trend, finds some really interesting research, and then utterly fails to ask any obvious follow-up questions.

It has all the usual howlers of the genre: it recognizes that something changed in the US somewhere around the late 70s or early 80s without ever wondering what that was, it recognizes that something else changed about 20 years ago without wondering what that was, it displays no curiosity whatsoever around the lack of “third places” and where, exactly, kids are supposed to actually go when they try to hang out. It’s got that thing where it has a chart of (something, anything) social over time, and you can perfectly pick out Reagan’s election and the ’08 recession, and not much else.

There’s lots of passive-voice sentences about how “Something’s changed in the past few decades,” coupled with an almost perverse refusal to look for a root cause, or connect any social or political actions to this. You can occasionally feel the panic around the edges as the author starts to suspect that maybe the problem might be “rich people” or “social structures”, so instead of talking to people, the piece inspects a bunch of data about what people do, instead of why people do it. It’s the exact opposite of that F1 article; this has nothing in it that might cause the editor to pull it after publication.

In a revelation that will shock no one, the author instead decides that the reason for all this change must be “screens”, without actually checking to see what “the kids these days” are actually using those screens for. (Spoiler: they’re using them to hang out). Because, delightfully, the data the author is basing all this on tracks only in-person socializing, and leaves anything virtual off the table.

This is a great example of something I call “Alvin Toffler Syndrome”, where you correctly identify a really interesting trend, but are then unable to get past the bias that your teenage years were the peak of human civilization, and therefore anything different is bad. Future Shock.

I had three very strong reactions to this, in order:

First, I think that header image is accidentally more revealing than they thought. All those guys eating alone at the diner look like they have a gay son they cut off; maybe we live in an age where people have lower tolerance for spending time with assholes?

Second, I suspect the author is just slightly younger than I am, based on a few of the things he says, but also the list of things “kids should be doing” he cites from another expert:

“There’s very clearly been a striking decline in in-person socializing among teens and young adults, whether it’s going to parties, driving around in cars, going to the mall, or just about anything that has to do with getting together in person”.

Buddy, I was there, and “going to the mall, driving around in cars” sucked. Do you have any idea how much my friends and I would have rather hung out in a shared Minecraft server? Are you seriously telling me that eating a Cinnabon or drinking too much at a high school house party full of college kids home on the prowl was a better use of our time? Also: it’s not the 80s anymore, what malls?

(One of the funniest giveaways is that unlike these sorts of articles from a decade ago, “having sex” doesn’t get listed as one of the activities that teenagers aren’t doing anymore. Like everyone else between 30 and 50, the author grew up in a world where sex with a stranger can kill you, and so that’s slipped out of the domain of things “teenagers ought to be doing, like I was”.)

But mostly, though, I disagree with the fundamental premise. We might have stopped socializing the same ways, but we certainly didn’t stop. How do I know this? Because we’re currently entering year five of a pandemic that became uncontrollable because more Americans were afraid of the silence of their own homes than they were of dying.


AI Pins And Society’s Immune Responses

Apparently “AI Pins” are a thing now? Before I could come up with anything new and rude to say after the last one, the Aftermath beat me to it: Why Would I Buy This Useless, Evil Thing?

I resent the imposition, the idea that since LLMs exist, it follows that they should exist in every facet in my life. And that’s why, on principle, I really hate the rabbit r1.

It’s as if the cultural immune response to AI is finally kicking in. To belabor the metaphor, maybe the social benefit of blockchain is going to turn out to have been to act as a societal inoculation against this kind of tech bro trash fire.

The increasing blowback makes me hopeful, as I keep saying.

Speaking of, I need to share with you this truly excellent quote lifted from jwz: The Bullshit Fountain:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be *right*, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is *not to swim in it*.


Fully Automated Insults to Life Itself

In 20 years time, we’re going to be talking about “generative AI”, in the same tone of voice we currently use to talk about asbestos. A bad idea that initially seemed promising which ultimately caused far more harm than good, and that left a swathe of deeply embedded pollution across the landscape that we’re still cleaning up.

It’s the final apotheosis of three decades of valuing STEM over the Humanities, in parallel with the broader tech industry being gutted and replaced by a string of venture-backed pyramid schemes, casinos, and outright cons.

The entire technology is utterly without value and needs to be scrapped, legislated out of existence, and the people involved need to be forcibly invited to find something better to spend their time on. We’ve spent decades operating under the unspoken assumption that just because we can build something, that means it’s inevitable and we have to build it first before someone else does. It’s time to knock that off, and start asking better questions.

AI is the ultimate form of the joke about the restaurant where the food is terrible and also the portions are too small. The technology has two core problems, both of which are intractable:

  1. The output is terrible
  2. It’s deeply, fundamentally unethical

Probably the definitive article on generative AI’s quality, or profound lack thereof, is Ted Chiang’s ChatGPT Is a Blurry JPEG of the Web; that’s almost a year old now, and everything that’s happened in 2023 has only underscored his points. Fundamentally, we’re not talking about vast cyber-intelligences, we’re talking Sparkling Autocorrect.

Let me provide a personal anecdote.

Earlier this year, a coworker of mine was working on some documentation, and had worked up a fairly detailed outline of what needed to be covered. As an experiment, he fed that outline into ChatGPT, intending to publish the output, and I offered to look over the result.

At first glance it was fine. Digging in, though, it wasn’t great. It wasn’t terrible either—nothing in it was technically incorrect, but it had the quality of a high school book report written by someone who had only read the back cover. Or like documentation written by a tech writer who had a detailed outline they didn’t understand and a word count to hit? It repeated itself, it used far too many words to cover very little ground. It was, for lack of a better word, just kind of a “glurge”. Just room-temperature tepidarium generic garbage.

I started to jot down some editing notes, as you do, and found that I would stare at a sentence, then the whole paragraph, before crossing the paragraph out and writing “rephrase” in the margin. To try and be actually productive, I took a section and started to rewrite it in what I thought was a better, more concise manner—removing duplicates, omitting needless words. De-glurgifying.

Of course, I discovered I had essentially reconstituted the outline.

I called my friend back and found the most professional possible way to tell him he needed to scrap the whole thing and start over.

It left me with a strange feeling, that we had this tool that could instantly generate a couple thousand words of worthless text that at first glance seemed to pass muster. Which is so, so much worse than something written by a junior tech writer who doesn’t understand the subject, because this was produced by something that you can’t talk to, you can’t coach, that will never learn.

On a pretty regular basis this year, someone would pop up and say something along the lines of “I didn’t know the answer, and the docs were bad, so I asked the robot and it wrote the code for me!” and then they would post some screenshots of ChatGPT’s output, full of terribly wrong answers. Humane’s AI Pin demo was full of wrong answers, for heaven’s sake. And so we get this trend where ChatGPT manages to be an expert in things you know nothing about, but a moron about things you’re an expert in. I’m baffled by the responses to the GPT-n “search” “results”; they’re universally terrible and wrong.

And this is all baked into the technology! It’s a very, very fancy set of pattern recognition based on a huge corpus of (mostly stolen?) text, computing the most probable next word, but not in any way considering if the answer might be correct. Because it has no way to; that’s totally outside the bounds of what the system can achieve.
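If you want to see the shape of the thing in miniature, here’s a toy sketch. To be clear, this is nothing remotely like a real transformer, just a bigram counter over a made-up corpus, but the core move is the same: emit whichever word most often came next in the training text, with no representation anywhere of whether the output is true.

```python
# Toy illustration of "compute the most probable next word."
# Real models are incomparably fancier, but the objective has the
# same shape: likelihood of the next token, not truth.
from collections import Counter, defaultdict

corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, steps=6):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        # The most frequent continuation wins: popularity, not accuracy.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # "the moon is made of cheese ."
```

The corpus said “cheese” twice and “rock” once, so cheese it is. The model is working exactly as designed; “correct” just never enters into it.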

A year and a bit later, and the web is absolutely drowning in AI glurge. Clarkesworld had to suspend submissions for a while to get a handle on blocking the tide of AI garbage. Page after page of fake content with fake images, content no one ever wrote and only meant for other robots to read. Fake articles. Lists of things that don’t exist, recipes no one has ever cooked.

And we were already drowning in “AI” and “machine learning” glurge, and it all sucks. The autocorrect on my phone got so bad when they went from the hard-coded list to the ML one that I had to turn it off. Google’s search results are terrible. The “we found this answer for you” thing at the top of the search results is terrible.

It’s bad, and bad by design; it can’t ever be more than a thoughtless mashup of the material it pulled in. Or even worse, it’s not wrong so much as it’s all bullshit. Not outright lies, but vaguely truthy-shaped “content”, freely mixing copied facts with pure fiction, speech intended to persuade without regard for truth: Bullshit.

Every generated image would have been better and funnier if you gave the prompt to a real artist. But that would cost money—and that’s not even the problem, the problem is that would take time. Can’t we just have the computer kick something out now? Something that looks good enough from a distance? If I don’t count the fingers?

My question, though, is this: what future do these people want to live in? Is it really this? Swimming in a sea of glurge? Just endless mechanized bullshit flooding every corner of the Web? Who looked at the state of the world here in the Twenties and thought “what the world needs right now is a way to generate Infinite Bullshit”?

Of course, the fact that the results are terrible-but-occasionally-fascinating obscure the deeper issue: It’s a massive plagiarism machine.

Thanks to copyleft and free & open source, the tech industry has a pretty comprehensive—if idiosyncratic—understanding of copyright, fair use, and licensing. But that’s the wrong model. This isn’t about “fair use” or “transformative works”, this is about Plagiarism.

This is a real “humanities and the liberal arts vs technology” moment, because STEM really has no concept of plagiarism. Copying and pasting from the web is a legit way to do your job.

(I mean, stop and think about that for a second. There’s no other industry on earth where copying other people’s work verbatim into your own is a widely accepted technique. We had a sign up a few jobs back that read “Expert level copy and paste from stack overflow” and people would point at it when other people had questions about how to solve a problem!)

We have this massive cultural disconnect that would be interesting or funny if it wasn’t causing so much ruin. This feels like nothing so much as the end result of valuing STEM over the Humanities and Liberal Arts in education for the last few decades. Maybe we should have made sure all those kids we told to “learn to code” also had some, you know, ethics? Maybe had read a couple of books written since they turned fourteen?

So we land in a place where a bunch of people convinced they’re the princes of the universe have sucked up everything written on the internet and built a giant machine for laundering plagiarism; regurgitating and shuffling the content they didn’t ask permission to use. There’s a whole end-state libertarian angle here too; just because it’s not explicitly illegal, that means it’s okay to do it, ethics or morals be damned.

“It’s fair use!” Then the hell with fair use. I’d hate to lose the wayback machine, but even that respects robots.txt.
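(And respecting robots.txt is about as low a bar as the web has; Python has shipped a parser for it in the standard library for ages. A minimal sketch, with a made-up bot name and site:)

```python
# Checking robots.txt before crawling, via the standard library.
# The bot name and site here are made up for illustration.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch("ExampleBot", "https://example.com/comics/page1.html"):
    print("polite crawlers may proceed")
else:
    print("the site said no, so a polite crawler stops here")
```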

I used to be a hard core open source, public domain, fair use guy, but then the worst people alive taught a bunch of if-statements to make unreadable counterfeit Calvin & Hobbes comics, and now I’m ready to join the Butlerian Jihad.

Why should I bother reading something that no one bothered to write?

Why should I bother looking at a picture that no one could be bothered to draw?

Generative AI and its ilk are the final apotheosis of the people who started calling art “content”, and meant it.

These are people who think art or creativity are fundamentally a trick, a confidence game. They don’t believe or understand that art can be about something. They utterly reject the concept of “about-ness”; the basic concept of “theme” is beyond their comprehension. The idea that art might contain anything other than its most surface qualities never crosses their mind. The sort of people who would say “Art should soothe, not distract”. It’s entirely surface aesthetics over everything else.

(To put that another way, these are the same kind people who vote Republican but listen to Rage Against the Machine.)

Don’t respect or value creativity.

Don’t respect actual expertise.

Don’t understand why they can’t just have what someone else worked for. It’s even worse than not wanting to pay for it; these creatures actually think they’re entitled to it for free because they know how to parse a JSON file. It feels like the final end-point of a certain flavor of free software thought: no one deserves to be paid for anything. A key cultural and conceptual step past “information wants to be free” and “everything is a remix”. Just a machine that endlessly spits out bad copies of other work.

They don’t understand that these are skills you can learn, that you have to work at, become an expert in. Not one of these people who spend hours upon hours training models or crafting prompts ever considered using that time to learn how to draw. Because if someone else can do it, they should get access to that skill for free, with no compensation or even credit.

This is why those machine generated Calvin & Hobbes comics were such a shock last summer; anyone who had understood a single thing about Bill Watterson’s work would have understood that he’d be utterly opposed to something like that. It’s difficult to fathom someone who liked the strip enough to do the work to train up a model to generate new ones while still not understanding what it was about.

“Consent” doesn’t even come up. These are not people you should leave your drink uncovered around.

But then you combine all that with the fact that we have a whole industry of neo-philes, desperate to work on something New and Important, terrified their work might have no value.

(See also: the number of abandoned javascript frameworks that re-solve all the problems that have already been solved.)

As a result, tech has an ongoing issue with cool technology that’s a solution in search of a problem, but ultimately is only good for some kind of grift. The classical examples here are the blockchain, bitcoin, NFTs. But the list is endless: so-called “4th generation languages”, Rational Rose, the CueCat, basically anything that ever got put on the cover of Wired.

My go-to example is usually bittorrent, which seemed really exciting at first, but turned out to only be good at acquiring TV shows that hadn’t aired in the US yet. (As they say, “If you want to know how to use bittorrent, ask a Doctor Who fan.”)

And now generative AI.

There’s that scene at the end of Fargo, where Frances McDormand is scolding The Shoveler for “all this for such a tiny amount of money”, and that’s how I keep thinking about the AI grift carnival. So much stupid collateral damage we’re gonna be cleaning up for years, and it’s not like any of them are going to get Fuck You(tm) rich. No one is buying an island or founding a university here, this is all so some tech bros can buy the deluxe package on their next SUV. At least crypto got some people rich, and was just those dorks milking each other; here we all gotta deal with the pollution.

But this feels weirdly personal in a way the dunning-krugerrands were not. How on earth did we end up in a place where we automated art, but not making fast food, or some other minimum wage, minimum respect job?

For a while I thought this was something along the lines of one of the asides in David Graeber’s Bullshit Jobs, where people with meaningless jobs hate it when other people have meaningful ones. The phenomenon of “If we have to work crappy jobs, we want to pull everyone down to our level, not pull everyone up”. See also: “waffle house workers shouldn’t make 25 bucks an hour”, “state workers should have to work like a dog for that pension”, etc.

But no, these are not people with “bullshit jobs”, these are upper-middle class, incredibly comfortable tech bros pulling down a half a million dollars a year. They just don’t believe creativity is real.

But because all that apparently isn’t fulfilling enough, they make up ghost stories about how their stochastic parrots are going to come alive and conquer the world, how we have to build good ones to fight the bad ones, but they can’t be stopped because it’s inevitable. Breathless article after article about whistleblowers worried about how dangerous it all is.

Just the self-declared best minds of our generation failing the mirror test over and over again.

This is usually where someone says something about how this isn’t a problem and we can all learn to be “prompt engineers”, or “advisors”. The people trying to become a prompt advisor are the same sort who would be proud they convinced Immortan Joe to strap them to the back of the car instead of the front.

This isn’t about computers, or technology, or “the future”, or the inevitability of change, or the march or progress. This is about what we value as a culture. What do we want?

“Thus did a handful of rapacious citizens come to control all that was worth controlling in America. Thus was the savage and stupid and entirely inappropriate and unnecessary and humorless American class system created. Honest, industrious, peaceful citizens were classed as bloodsuckers, if they asked to be paid a living wage. And they saw that praise was reserved henceforth for those who devised means of getting paid enormously for committing crimes against which no laws had been passed. Thus the American dream turned belly up, turned green, bobbed to the scummy surface of cupidity unlimited, filled with gas, went bang in the noonday sun.” ― Kurt Vonnegut, God Bless You, Mr. Rosewater

At the start of the year, the dominant narrative was that AI was inevitable, this was how things are going, get on board or get left behind.

That’s… not quite how the year went?

AI was a centerpiece in both Hollywood strikes, and both the Writers and Actors basically ran the table, getting everything they asked for, and enshrining a set of protections from AI into a contract for the first time. Excuse me, not protection from AI, but protection from the sort of empty suits that would use it to undercut working writers and performers.

Publisher after publisher has been updating their guidelines to forbid AI art. A remarkable number of other places that support artists instituted guidelines to ban or curtail AI. Even Kickstarter, which plunged into the blockchain with both feet, seemed to have learned their lesson and rolled out some pretty stringent rules.

Oh! And there’s some actual high-powered lawsuits bearing down on the industry, not to mention investigations of, shall we say, “unsavory” material in the training sets?

The initial shine seems to be off; where last year was all about sharing goofy AI-generated garbage, there’s been a real shift in the air as everyone gets tired of it and starts pointing out that it sucks, actually. And that the people still boosting it all seem to have some kind of scam going. Oh, and in a lot of cases, it’s literally the same people who were hyping blockchain a year or two ago, and who seem to have found a new use for their warehouses full of GPUs.

One of the more heartening and interesting developments this year was the (long overdue) start of a re-evaluation of the Luddites. Despite the popular stereotype, they weren’t anti-technology, but anti-technology-being-used-to-disenfranchise-workers. This seems to be the year a lot of people sat up and said “hey, me too!”

AI isn’t the only reason “hot labor summer” rolled into “eternal labor september”, but it’s pretty high on the list.

There’s an argument that’s sometimes made that we don’t have any way as a society to throw away a technology that already exists, but that’s not true. You can’t buy gasoline with lead in it, or hairspray with CFCs, and my late lamented McDLT vanished along with the Styrofoam that kept the hot side hot and the cold side cold.

And yes, asbestos made a bunch of people a lot of money and was very good at being children’s pyjamas that didn’t catch fire, as long as that child didn’t need to breathe as an adult.

But, we've never done that for software.

Back around the turn of the century, there was some argument around if cryptography software should be classified as a munition. The Feds wanted stronger export controls, and there was a contingent of technologists who thought, basically, “Hey, it might be neat if our compiler had first and second amendment protection”. Obviously, that didn’t happen. “You can’t regulate math! It’s free expression!”

I don’t have a fully developed argument on this, but I’ve never been able to shake the feeling like that was a mistake, that we all got conned while we thought we were winning.

Maybe some precedent for heavily “regulating math” would be really useful right about now.

Maybe we need to start making some.

There’s been a persistent belief in computer science since computers were invented that brains are a really fancy, powerful computer, and if we can just figure out how to program them, intelligent robots are right around the corner.

There’s an analogy that floats around that says if the human mind is a bird, then AI will be a plane: flying, but a very different application of the same principles.

The human mind is not a computer.

At best, AI is a paper airplane. Sometimes a very fancy one! With nice paper and stickers and tricky folds! But the key is that a hand has to throw it.

The act of a person looking at bunch of art and trying to build their own skills is fundamentally different than a software pattern recognition algorithm drawing a picture from pieces of other ones.

Anyone who claims otherwise has no concept of creativity except as an abstraction. The creative impulse is fundamental to the human condition. Everyone has it. In some people it’s repressed, or withered, or undeveloped, but it’s always there.

Back in the early days of the pandemic, people posted all these stories about the “crazy stuff they were making!” It wasn’t crazy, that was just the urge to create, it’s always there, and capitalism finally got quiet enough that you could hear it.

“Making Art” is what humans do. The rest of society is there so we stay alive long enough to do so. It’s not the part we need to automate away so we can spend more time delivering value to the shareholders.

AI isn’t going to turn into skynet and take over the world. There won’t be killer robots coming for your life, or your job, or your kids.

However, the sort of soulless goons who thought it was a good idea to computer automate “writing poetry” before “fixing plumbing” are absolutely coming to take away your job, turn you into a gig worker, replace whoever they can with a chatbot, keep all the money for themselves.

I can’t think of anything more profoundly evil than trying to automate creativity and leaving humans to do the grunt manual labor.

Fuck those people. And fuck everyone who ever enabled them.


Covering the Exits

So! Adobe has quietly canceled their plans to acquire Figma. For everyone playing the home game, Figma is a startup that makes web-based design tools, and was one of the first companies to make some actual headway into Adobe’s domination of the market. (At least, since Adobe acquired Macromedia, anyway.) Much ink has been spilled on Figma “disrupting” Adobe.

Adobe cited regulatory concerns as the main reason to cancel the acquisition, which tracks with the broader story of the antitrust and regulatory apparatus slowly awakening from its long slumber.

On the one hand, this was blatantly a large company buying up their only outside competition in a decade. On the other hand, it’s not clear Figma had any long-term business plan other than “sell out to Adobe?”

Responses to this have been muted, but there’s a distinct set of “temporarily embarrassed” tech billionaires saying things like “well, tut tut, regulations are good in theory, but I can still sell my startup, right?”

There’s an entire business model that’s emerged over the last few decades, fueled by venture capital and low interest rates, where the company itself is the product. Grow fast, build up a huge user-base, then sell out to someone. Don’t worry about the long term, take “the exit.”

This is usually described in short-hand as “if you’re not paying for something, you’re not the customer, you’re the product”, which isn’t wrong, but it’s not totally right either. There’s one product: the company itself. The founders are making one thing, then they’re going to sell it to someone else.

And sure, if that’s the plan, things get so easy. Who cares what the long-term vacation accrual schedule is, or the promotional tracks, or how we’re going to turn a profit? In five years, that’ll be Microsoft/Adobe/Facebook/Google’s problem, and we’ll be on a beach earning twenty percent.

And there’s a real thread of fear out there now that the “sell the company” exit might not be as easy a deal as it used to be.

There’s nothing I can think of that would have a more positive effect on the whole tech industry than taking “…and then sell the company” off the table as a startup exit. Imagine if that just… wasn’t an option? If startups had to start with “how are we going to become self-funding”, if VCs knew they weren’t going to walk away with a couple billion dollars of cash from Redmond?

I was going to put a longer rant here, but there must be something in the water today because Ed Zitron covered all the same ground but in more detail and angrier today—Software Is Beating The World:

Every single stupid, loathsome, and ugly story in tech is a result of the fundamentally broken relationship between venture capital and technology. And, as with many things, it started with a blog.

While I’m here, though, I’m going to throw one elbow that Ed didn’t: I’m not sure any book has had more toxic, unintended consequences than The Innovator’s Dilemma. While “Disruption Theory” remains an intellectually attractive description of how new products enter the market, it turns out it only had useful explanatory power once: when the iPhone shipped. Here in the twenties, if anyone is still using the term “Disruption” with a straight face they’re almost certainly full of crap, and are probably about to tell you about their “cool business hack” which actually ends up being “ignore labor laws until we get acquired.”

It’s time to stop talking about disruption, and start talking about construction. Stop eyeing the exits. What would it look like if people started building tech companies they expected their kids to take over?


Friday Linkblog, don’t-be-evil edition

I’ve been meaning to link to these for a while, but keeping some thematic unity with this week, the Verge has a whole series of articles on Google at 25. My favorites were: The end of the Googleverse and The people who ruined the internet.

(Also, that second article links to Ed Zitron’s excellent The Internet is Already Broken, which I also recommend)

As someone who was both a legal adult and college graduate before Google happened, it’s deeply strange to realize that I lived through the entire era where Google “worked”; before it choked out on SEO content-farm garbage, advertising conflicts of interest, and general enshittification.

And then, Google lost the antitrust case against Epic; see: The Verge, Ars.

(As an aside a certain class of nerd are losing their damn minds that Google lost but Apple didn’t. The Ars comment thread in particular is a showcase of Dunning-Kruger hallucinations of what they wish the law was instead of what it really is.)

I bring this all up so I can tell this story:

Back in the early 2000s, probably 2003 or 4 based on where I was and who I was talking to, I remember a conversation I had about the then-new “don’t be evil” Google. The people I was talking to were very enthusiastic about them. Recall, there was still the mood in the room that “we” had finally beat Microsoft, they’d lost the antitrust case, the web was going to defeat Windows, and so on.

And I distinctly remember saying something like “Microsoft just operated like an old railroad monopoly, so we already knew how to be afraid of them. We haven’t learned how to be afraid of companies like google yet.”

And, reader: “LOL”. “LMAO”, even. Because, go back and read the stuff in Epic’s lawsuit against Google—Google was doing the exact same stuff that Microsoft got nailed for twenty years ago. To call myself out here on main, we already DID know how to be afraid of google, we just bought their marketing hook, line, and sinker.

We were all so eager to get past Microsoft’s stranglehold on computers that we just conned ourselves into handing even more control to an even worse company. Unable to imagine computers not being dominated by a company, so hey, at least this one isn’t Microsoft, or IBM, or AT&T!

(This is the same fallacy that bugs me about Satanists—they want to rebel, but buy into all the same fundamental assumptions about the universe, but they just root for the other team. Those people never actually go outside the box they started in, and become, say, Buddhists.)

A decade ago this is where I would have 800 words endorsing FOSS as the solution, but I think at this point, deep down, we all know that isn’t the answer either.

Maybe this time, let’s try regulating the hell out of all of this, and then try hard to pay attention and not get scammed by the next big company that comes along and flirts with us? Let's put some energy into getting out of the box instead of just finding one with nicer branding.


Layoff Season(s)

Well, it’s layoff season again, which pretty much never stopped this year? I was going to bury a link or two to an article in that last sentence, but you know what? There’s too many. Especially in tech, or tech-adjacent fields, it’s been an absolute bloodbath this year.

So, why? What gives?

I’ve got a little personal experience here: I’ve been through three layoffs now, lost my job once, shoulder-rolled out of the way for the other two. I’ve also spent the last couple decades in and around “the tech industry”, which here we use as shorthand for companies that are either actually a Silicon Valley software company, or a company run by folks that used to/want to be from one, with a strong software development wing and at least one venture capital–type on the board.

In my experience, Tech companies are really bad at people. I mean this holistically: they’re bad at finding people, bad at hiring, and then when they do finally hire someone, they’re bad at supporting those people—training, “career development”, mentoring, making sure they’re in the right spot, making sure they’re successful. They’re also bad at any kind of actual feedback cycle, either to reward the excellent or terminate underperformers. As such, they’re also bad at keeping people. This results in the vicious cycle that puts the average time in a tech job at about 18 months—why train them if they’re gonna leave? Why stay if they won’t support me?

There are pockets where this isn’t true, of course; individual managers, or departments, or groups, or even glue-type individuals holding the scene together who handle this well. I think this is all a classic “don’t attribute to malice what you can attribute to incompetence” situation. I say this with all the love in the world, but people who are good at tech-type jobs tend to be low-empathy lone wolf types? And then you spend a couple decades promoting the people from that pool, and “ask your employees what they need” stops being common sense and is suddenly some deep management koan.

The upshot of all this is that most companies with more than a dozen or two employees have somewhere between 10–20% of the workforce that isn’t really helping out. Again—this isn’t their fault! The vast majority of those people would be great employees in a situation that’s probably only a tiny bit different than the one you’re in. But instead you have the one developer who never seems to get anything done, the other developer whose work always fails QA and needs a huge amount of rework, the person who only seems to check hockey scores, the person who’s always in meetings, the other person who’s always in “meetings.” That one guy who always works on projects that never seem to ship.1 The extra managers that don’t seem to manage anyone. And, to be clear, I’m talking about full-time salaried people. People with a 401(k) match. People with a vesting schedule.

No one doing badly enough to get fired, but not actually helping row the boat.

As such, at basically any point any large company—and by large I mean over about 50 people—can probably do a 10% layoff and actually move faster afterwards, and do a 20% layoff without any significant damage to the annual goals—as long as you don’t have any goals about employee morale or well-being. Or want to retain the people left.

The interesting part—and this is the bad interesting, to be clear—is if you can fire 20% of your employees at any time, when do you do that?

In my experience, there’s two reasons.

First, you drop them like a submarine blowing the ballast tanks. Salaries are the biggest expense center, and in a situation where the line isn’t going up right, dropping 20% of the cost is the financial equivalent of the USS Dallas doing an emergency surface.

Second, you do it to discipline labor. Is the workforce getting a little restless? Unhappy about the stagnant raises? Grumpy about benefits costing more? Is someone waving around a copy of Peopleware?2 Did the word “union” float across the courtyard? That all shuts down real fast if all those people are suddenly sitting between two empty cubicles. “Let’s see how bad they score the engagement survey if the unemployment rate goes up a little!” Etc.

Again—this is all bad! This is very bad! Why do any of this?

The current wave feels like a combo plate of both reasons. On the one hand, we have a whole generation of executive leaders that have never seen interest rates go up, so they’re hitting the one easy panic button they have. But mostly this feels like a tantrum by the c-suite class reacting to “hot labor summer” becoming “eternal labor september.”

Of course, this is where I throw up my hands and have nothing to offer except sympathy. This all feels so deeply baked into the world we live in that it seems unsolvable short of a solution that ends with us all wearing leather jackets with only one sleeve.

So, all my thoughts with everyone unexpectedly jobless as the weather gets cold. Hang on to each other, we’ll all get through this.


  1. At one point in my “career”, the wags in the cubes next to mine made me a new nameplate that listed my job as “senior shelf-ware engineer.” I left it up for months, because it was only a little bit funny, but it was a whole lot true.

  2. That one was probably me, sorrryyyyyy (not sorry)

Gabriel L. Helman

Re-Capturing the Commons

The year’s winding down, which means it’s time to clear out the drafts folder. Let me tell you about a trend I was watching this year.

Over the last couple of decades, a business model has emerged that looks something like this:

  1. A company creates a product with a clear sales model, but one that doesn’t have value without a strong community
  2. The company then fosters such a community, which then steps in and shoulders a fair amount of the work of running said community
  3. The community starts creating new things on top of the original work of the parent company, and—this is important—those new things belong to the community members, not the company
  4. This works well enough that the community starts selling additional things to each other—critically, these aren’t competing with the parent company; instead we have a whole “third party ecosystem”.

(Hang on, I’ll list some examples in a second.)

These aren’t necessarily “open source” from a formal OSI “Free & Open Source Software” perspective, but they’re certainly open source–adjacent, if you will. Following the spirit, if not the strict legal definition.

Then, this year especially, a whole bunch of those types of companies decided that they wouldn’t suffer anyone else making things they don’t own in their own backyard, and tried to reassert control over the broader community efforts.

Some specific examples of what I mean:

  • The website formerly known as Twitter eliminates 3rd party apps, restricts the API to nothing, and blocks most open web access.
  • Reddit does something similar, effectively eliminates 3rd party clients and gets into an extended conflict with the volunteer community moderators.
  • StackOverflow and the rest of the StackExchange network also gets into an extended set of conflicts with its community moderators, tries to stop releasing the community-generated data for public use, revises license terms, and descends into—if you’ll forgive the technical term—a shitshow.
  • Hasbro tries to not only massively restrict the open license for future versions of Dungeons and Dragons, but also makes a move to retroactively invalidate the Open Game License that covered material created for the 3rd and 5th editions of the game over the last 20 years.

And broadly, this is all part of the Enshittification Curve story. And each of these examples has a whole set of unique details. Tens, if not hundreds of thousands of words have been written on each of these, and we don’t need to re-litigate those here.

But there’s a specific sub-trend here that I think is worth highlighting. Let’s look at what those four have in common:

  • Each had, by all accounts, a successful business model. After-the-fact grandstanding notwithstanding, none of those four companies was in financial trouble, and each had a clear story about how it got paid. (Book sales, ads, etc.)
  • They all had a product that was absolutely worthless without an active community. (The D&D player’s handbook is a pretty poor read if you don’t have people to play with, reddit with no comments is just an ugly website, and so on)
  • Community members were doing significant heavy lifting that the parent company was literally unable to do. (Dungeon Mastering, community moderating. Twitter seems like the outlier here at first glance, but recall that hashtags, threads, the word “tweet” and literally using a bird as a logo all came from people not on twitter’s payroll.)
  • There were community members that made a living from their work in and around the community, either directly or indirectly. (3rd party clients, actual play streams, turning a twitter account about things your dad says into a network sitcom. StackOverflow seems like the outlier on this one, until you remember that many, many people use their profiles there as a kind of auxiliary outboard resume.)
  • They’ve all had recent management changes; more to the point, the people who designed the open source–adjacent business model are no longer there.
  • These all resulted in huge community pushback.

So we end up in a place where a set of companies decided that no one but them can make money in their domains, and set their communities on fire. There was a lot of handwaving about AI as an excuse, but mostly that’s just “we don’t want other people to make money” with extra steps.

To me, the most enlightening one here is Hasbro, because it’s not a tech company and D&D is not a tech product, so the usual tech excuses for this kind of behavior don’t fly. So let’s poke at that one for an extra paragraph or two:

When the whole OGL controversy blew up back at the start of the year, certain quarters made a fair amount of noise about how this was a good thing, because actually, most of what mattered about D&D wasn’t restrict-able, or was in the public domain, and good old fair use was a better deal than the overly-restrictive OGL, and that the community should never have taken the deal in the first place. And this is technically true, but only in the ways that don’t matter.

Because, yes. The OGL, as written, is more restrictive than fair use, and strict adherence to the OGL prevents someone from doing things that should otherwise be legal. But that misses the point.

Because what we’re actually talking about is an industry with one multi-billion dollar company—the only company on earth that has literal Monopoly money to spend—and a whole bunch of little tiny companies with less than a dozen people. So the OGL wasn’t a crummy deal offered between equals, it was the entity with all the power in the room declaring a safe harbor.

Could your two-person outfit selling PDFs online use stuff from Hasbro’s book without permission legally? Sure. Could you win the court case when they sue you before you lose your house? I mean, maybe? But not probably.

And that’s what was great about it. For two decades, that was the deal: accept these slightly more restrictive terms, and you can operate with the confidence that your business, and your house, are safe. And an entire industry formed inside that safe harbor.

Then some mid-level suit at Hasbro decided they wanted a cut?

And I’m using this as the example partly because it’s the most egregious. But 3rd party clients for twitter and reddit were a good business to be in, until they suddenly were not.

And I also like using Hasbro’s Bogus Journey with D&D as the example because that’s the only one where the community won. With the other three here, the various owners basically leaned back in their chairs and said “yeah, okay, where ya gonna go?” and after much rending of cloth, the respective communities of twitter, reddit, and StackOverflow basically had to admit there wasn’t an alternative; they were stuck on that website.

Meanwhile, Hasbro asked the same question, and the D&D community responded with, basically, “well, that’s a really long list, how do you want that organized?”

So Hasbro surrendered utterly, to the extent that more of D&D is now under a more irrevocable and open license than it was before. It feels like there’s a lesson in competition being healthy here? But that would be crass to say.

Honestly, I’m not sure what all this means; I don’t have a strong conclusion here. Part of why this has been stuck in my drafts folder since June is that I was hoping one of these would pop in a way that would illuminate the situation.

And maybe this isn’t anything more than just what corporate support for open source looks like when interest rates start going up.

But this feels like a thing. This feels like it comes from the same place as movie studios making record profits while saying their negotiation strategy is to wait for underpaid writers to lose their houses?

Something is released into the commons, a community forms, and then someone decides they need to re-capture the commons because if they aren’t making the money, no one can. And I think that’s what stuck with me. The pettiness.

You have a company that’s making enough money, bills are paid, profits are landing, employees are taken care of. But other people are also making money. And the parent company stops being a steward and burns the world down rather than suffer someone else making a dollar they were never going to see. Because there’s no universe where a dollar spent on Tweetbot was going to go to twitter, or one spent on Apollo was going to go to reddit, or one spent on any “3rd party” adventure was going to go to Hasbro.

What can we learn from all this? Probably not a lot we didn’t already know, but: solidarity works, community matters, and we might not have anywhere else to go, but at the same time, they don’t have any other users. There’s no version where they win without us.

Gabriel L. Helman

Friday linkblog, war-criminal-obituary-roundup edition

Why yes, I am going to open with that Anthony Bourdain quote everyone else is using, because it’s perfect:

Once you’ve been to Cambodia, you’ll never stop wanting to beat Henry Kissinger to death with your bare hands.

The best headline goes to Rolling Stone: Henry Kissinger, War Criminal Beloved by America’s Ruling Class, Finally Dies. This one via jwz’s Now that is how you write a headline, from which I also obtained the header image.

Josh Marshall over at TPM asks an interesting question, though: Why Did So Many People Hate Henry Kissinger So Much?.

Why did Kissinger collect all the animus while the other guys who should have been in shackles in The Hague next to him—Nixon, McNamara, Ford, etc.—didn’t so much?

I don’t think it’s that complicated: It’s because he took the credit! Kissinger made sure everyone knew he was the guy. All the other architects of the Vietnam catastrophe had the good sense to keep quiet or express remorse; Kissinger went to his grave acting like the Christmas Bombing was the greatest act of foreign policy of all time.

Look, it’s not like Nixon spent decades bitching that later presidents didn’t call for advice on how to win elections, you know?

Gabriel L. Helman

Two things that are always true

I don’t have any particular insight into the weekend’s OpenAI shenanigans, other than to note two things I have observed to be universally true in our industry:

  1. If you and your boss don’t get along, it doesn’t matter what your job is, one of you is going to have to go. CEOs frequently forget that the board is actually their boss? (I’ve personally had two different CEOs of places I worked step on this rake and end up spending more time with their families.)
  2. If you have something that Microsoft wants, they will move instantly to exploit any opportunity to get their hands on it. (Doesn’t matter if they’re friendly now, and maybe an investor.)
Gabriel L. Helman

You call it the “AI Nexus”, we call it the “Torment Pin”

There’s a class of nerd who, when looking at a potential concept, can’t tell the difference between “actually cool” and “only seemed cool because it was in something I read/saw when I was 14.”

Fundamentally, this is where the Torment Nexus joke comes from. This is why Zuckerberg burned zillions of dollars trying to build “The Metaverse” from Snow Crash, having never noticed that 1) the main character of the book is one of the architects of the metaverse and it left him broke, and 2) the metaverse gets hijacked to deliver a deadly mind virus to everyone in it, both of which are just a little too close to home here.

Normally, this is where I would say this is what you get after two or three decades of emphasizing STEM education over the humanities, but it’s not just that. When you’re fourteen, you’re supposed to only engage at the surface aesthetic level. The problem is when those teenagers grow up and never think about why those things seemed cool. Not just about what the authors were trying to say, but a failure to consider that maybe it seemed so cool because it was a narrative accelerant, a shortcut to get the story to the next dramatic point.

Anyway, Humane announced their AI Pin.

And, look, it’s the TNG com-badge + the Enterprise computer. And that’s cool, I guess, but it totally fails to engage (pun intended) with the reason the com-badge seems so cool: it’s a storytelling device, a piece of narrative accelerant.

My initial reaction, given the number of former Apple employees at the company, is that this whole product is blatantly something that Tim Apple rejected, so they took their pitch deck and started their own damn company, you’ll be sorry, etc.

I don’t understand who this product is for. And it’s not that I don’t get it, it’s just that it seems to start from a premise I don’t buy. There’s a core worldview here that isn’t totally expressed, but that seems to extend from a position that people like to talk more than they like to look at things, and I disagree. Sure, there’s a privacy angle to needing to talk out loud to get things done, but I think that’s a sideshow. Like the Apple Cyber Goggles, it’s a new way to be alone. As far as I’m concerned, any device that you can’t use to look at a menu together, or show other people memes, or pictures of your kids, is a non-starter. There’s a weird moral angle to the design, where Humane seems to think that all the things I just listed are things we shouldn’t be doing, that they’re here to rescue us from our terrible fate of being able to read articles saved for later while in the hospital waiting room. The marketing got right up to the line of saying that reading text messages from your kids on the go was going to give you hairy palms, and I don’t think that’s going to go over as well as they think. More than anything, it reminded me of those weird Reagan-era anti-drug campaigns that totally failed to engage with or notice why people were doing drugs? Just Say No to… sending pictures of the kids to my mom?

It also suffers from the guessing-when-you-can-ask fallacy. It has a camera, and can take pictures of things you ask it to, but doesn’t have a viewfinder? Instead of letting you take the picture, it tries to figure it out on its own? Again, the reason that the images they look at in Star Trek are so nice to look at is they were built by an entire professional art department, and not by a stack of if-statements running in the com-badge.

And speaking of that “AI” “agent”, we’re at a weird phase of the current AI grift carnival, where the people who are bought into the concept have rebuilt their personality around being a true believer, and are still so taken with the fact that “my com-badge talked to me!” that they ship a marketing video full of AI hallucinations & errors and don’t notice. This has been a constant thing since LLMs burst onto the scene last year; why do the people showing them off ask questions they don’t know the answers to, and then not fact-check? Because they’re AI True Believers, and getting Any Answer from the robot is more important than whether it’s true.

I don’t know if voice agents and “VUIs” are going to emerge as a significant new interaction paradigm or not, but I know a successful one won’t come from a company that builds their marketing around an incorrect series of AI answers they don’t bother to fact check. You can’t build a successful anything if you’re too blinded by what you want to build to see what you actually built.

I’d keep going, but Charlie Stross already made all these points better than I did, about why using science fiction as a source of ideas is a bad idea, and why tech bros keep doing it anyway: We're sorry we created the Torment Nexus

Did you ever wonder why the 21st century feels like we're living in a bad cyberpunk novel from the 1980s?

It's because these guys read those cyberpunk novels and mistook a dystopia for a road map. They're rich enough to bend reality to reflect their desires. But we're not futurists, we're entertainers! We like to spin yarns about the Torment Nexus because it's a cool setting for a noir detective story, not because we think Mark Zuckerberg or Andreesen Horowitz should actually pump several billion dollars into creating it.

It’s really good! You should go read it, I’ll meet you under the horizontal line:

And this is something of a topic shift, but in a stray zing Stross manages to nail why I can’t stand WIRED magazine:

American SF from the 1950s to the 1990s contains all the raw ingredients of what has been identified as the Californian ideology (evangelized through the de-facto house magazine, WIRED). It's rooted in uncritical technological boosterism and the desire to get rich quick. Libertarianism and its even more obnoxious sibling Objectivism provide a fig-leaf of philosophical legitimacy for cutting social programs and advocating the most ruthless variety of dog-eat-dog politics. Longtermism advocates overlooking the homeless person on the sidewalk in front of you in favour of maximizing good outcomes from charitable giving in the far future. And it gels neatly with the Extropian and Transhumanist agendas of colonizing space, achieving immortality, abolishing death, and bringing about the resurrection (without reference to god). These are all far more fun to contemplate than near-term environmental collapse and starving poor people. Finally, there's accelerationism: the right wing's version of Trotskyism, the idea that we need to bring on a cultural crisis as fast as possible in order to tear down the old and build a new post-apocalyptic future. (Tommaso Marinetti and Nick Land are separated by a century and a paradigm shift in the definition of technological progress they're obsessed with, but hold the existing world in a similar degree of contempt.)

And yeah, that’s what always turned me off from WIRED: the attitude that any technology was axiomatically a Good Thing, and that any “short term” social disruption, injustice, climate disaster, or general inequality was uncouth to mention, because the future where the sorts of people who read WIRED would all become fabulously wealthy and go to space was so inevitable that it absolved them of any responsibility for the consequences of their creations. Anyone asking questions, or objecting to being laid off, or suggesting regulations, or bringing up social obligations, or even just asking for benefits as a gig worker, was just standing in the way of Progress! Progress towards the glorious future on the Martian colonies! Where they’ll get to leave “those people” behind.

While wearing “AI Pins”.

Gabriel L. Helman

What the heck happened to Boing Boing?

Back during the Heroic Age of the indie web, after the dot-com crash and before the web shrank to a group of five websites each consisting of screenshots of text from the other four, Boing Boing felt absolutely essential. Nerd culture! The beginning of the maker movement! The EFF’s battles against big tech! Counterculture! “Wonderful things!”

Now it’s like a failed downtown mall—choked with sales for low-quality grift-y products, and lower-quality writing. Far from being at the front of internet culture, the whole site seems increasingly out of touch; not just stale, but from a worldview completely decoupled from the world we live in now.

(And, it’s absolutely none of our business why Cory Doctorow or Xeni Jardin left the site, but I’ll just casually mention that Cory Doctorow’s Pluralistic continues to be the same sort of essential reading boing boing used to be. I’m sure boing boing becoming a seedy sales channel and Doctorow starting his own site are completely unrelated phenomena.)

What finally pushed me over the edge, though, was the endless videos of “look how stupid these redhats are!” This isn’t even the usual brain rot of “if we show them the truth they’ll change their minds”; instead it’s just post after post drenched in their own smug superiority that some old white dude in a red hat is being “hypocritical”.

It’s not the summer of 2016 anymore, guys. They’re not hypocrites, they’re white supremacist fascists. They know exactly what they’re saying, quit acting like you don’t so you can, what, score points, with… someone? Making fun of them on a website and nothing else was how we lost that election. Everyone else has figured this out, but no, boing boing is still stuck in the middle of the last decade.

Usually this is the point where someone counters by talking about the value of humor speaking truth to power, or some other such self-aggrandizing justification. When that happens I always pull out this quote from a Norm Macdonald interview:

They say humor is the ray of light that illuminates the evil or whatever, but I was reading that in Germany and Adolf Hitler times, everybody was making fun of Hitler. Every cartoon was against Hitler, there were comedy troupes doing sketches about Hitler being an idiot with a stupid mustache and what a stupid little idiot he was. So anyway, there goes that theory about the power of comedy. It doesn’t work at all.

Ron Gilbert thinks boing boing are all sellouts, but that’s not quite it somehow. Like a lot of turn-of-the-century Gen-X vaguely-edgelordy (mostly white) counterculture, it has a borderline-nihilistic attitude that nothing really matters, that the worst thing you can do is be caught caring about something, and that the only morally correct thing to do is snark at anyone who does.

And, just, that was a crappy attitude in 2010, but then we elected a racist gameshow host as president, wikileaks turned out to be an op by the Russians, literal nazis started marching in the streets, and a million people and counting died of the plague. The world has changed since the early teens, or rather, things that were already there became impossible to ignore.

It’s not so much that they got old, it’s that they failed to grow.

Gabriel L. Helman

Three and a half years

Well, it took three and a half years, but COVID finally caught us. We’re all fully vaxed and boosted, and by all accounts we had a pretty mild time of it, but my goodness, that’s by far the sickest I’ve ever been. It’s a hard disease to complain too much about, because while sure, I was as sick as I’ve ever been, this thing has killed something like 27 million people worldwide, and mostly all I did was sleep for a week?

I only seem to have two lasting effects, and I’m not totally sure either one is directly COVID’s fault. Weeks later, I’ve still got this lingering cough, but it’s the sort of cough where I’m coughing because my lungs are irritated, and they’re irritated because I’ve been coughing so much, and that’s gone full recursive. As a result, I’ve been living on Ricola cough drops. My second lingering symptom is that my stomach is constantly upset, but I’m not sure that’s the virus so much as the fact that it’s been permanently full of the contents of a Swiss apothecary.

One positive lasting effect of the pandemic, if you’re willing to work to turn the frown upside down, is that it is way easier to be sick than it used to be. The home grocery delivery infrastructure is still in place, and you can still genuinely stay inside, not interact with anyone, and get everything you need delivered. (As long as you don’t look at the bill.) The kids’ school has a well-tuned system for reporting that the kids had COVID and would be out for a while, and even work was an easy conversation, to the extent of “sure, take the time, let us know when you’re better.” This was not the experience we had when we all got the flu in ’18!

But.

The reason we got it in the first place was that the schools have been stripped of any meaningful way to prevent the spread, and so in a period where cases are spiking they had a gym full of teenagers without masks in close quarters. The only thing worse than shivering through a multi-day fever is knowing you only have it because people you never met don’t care enough to keep it from spreading.

All through the main pandemic, and the “cold pandemic” we’re in now, I’ve been pretty determined not to catch it. And hey, anecdotally, three and a half years is the best run of anyone I know. But now that I have had it, I’m even more determined not to catch it again. I don’t understand anyone who could go through this and then not think “wow, I’m doing whatever I can to keep that from happening again.” If it weren’t for the fact that the school is the vector, I might never go outside again!

So. People. It doesn’t have to be like this. It’s still not too late to choose a different future.

Gabriel L. Helman

With enough money, you don’t have to be good at anything

Following up on our previous coverage, I’ve been enjoying watching the reactions to Isaacson’s book on twitter’s new owner.

My favorite so far has been Dave Karpf’s mastodon live-toot turned substack post. Credit where credit is due, I saw this via a link on One Foot Tsunami, and I’m about to jump on the same quote that both Dave Karpf and Paul Kafasis did:

[Max] Levchin was at a friend’s bachelor pad hanging out with Musk. Some people were playing a high-stakes game of Texas Hold ‘Em. Although Musk was not a card player, he pulled up to the table. “There were all these nerds and sharpsters who were good at memorizing cards and calculating odds,” Levchin says. “Elon just proceeded to go all in on every hand and lose. Then he would buy more chips and double down. Eventually, after losing many hands, he went all in and won. Then he said, ‘Right, fine, I’m done.’” It would be a theme in his life: avoid taking chips off the table; keep risking them.

That would turn out to be a good strategy. (page 86)

And, man, that’s just “Masterful gambit, sir”, but meant sincerely.

But this quote is it. Here’s a guy who found the closest thing to the infinite money cheat in Sim City as exists in real life, and he’s got a fleet of people who think that’s the same as being smart. And then he finds himself a biographer possessed of such infinite credulity that he can’t tell the difference between being actually good at poker and being someone who found the poker equivalent of typing IDDQD before playing.

With enough money, you don’t have to be good at anything. With infinite lives, you’ll eventually win.

My other favorite piece of recent days is How the Elon Musk biography exposes Walter Isaacson by Elizabeth Lopatto. The subhead sums it up nicely: “One way to keep Musk’s myth intact is simply not to check things out.”

There’s too much good stuff to pull out a single quote, but it does a great job outlining not only the book’s reflexive responses of “Masterful gambit”, but also the way Isaacson breezes past the labor issues, racism, sexism, transphobia, right-wing turn, or anything vaguely “political”, seeming to treat those things as beside the story. They’re not! That IS the story!

To throw one more elbow at the Steve Jobs book, something that was really funny about it was that Isaacson clearly knew that Jobs had a “reality distortion field” that let him talk people into things, so when Jobs told Isaacson something, Isaacson would go find someone else to corroborate or refute that thing. The problem was, Isaacson would take whatever that other person said as the unvarnished truth, never seeming to notice that he was talking to heavily biased people, like, say, Bill Gates.

With this book, he doesn’t even go that far, just writing down whatever Elno Mork tells him without checking it out, totally looking past the fact that he’s talking to a guy who absolutely loves to just make stuff up all the time.

Like Lopatto points out, this is maddening for many reasons, not the least of which being that Isaacson has been handed a great story: it turns out the vaunted business technical genius spaceships & cars guy is a jerk who’s been dining out on infinite money, a compliant press, and other people’s work for his whole life. “How in the heck did he get this far?” would have been a hell of a book. Unfortunately, the guy with access failed to live up to the moment.

The tech/silicon valley-focused press has always had a problem with an enthusiasm for new tech and charismatic tech leaders that trends towards the gullible. Why check things out if this new startup is claiming something you really want to be true? (This isn’t a new problem, I still have the Cue Cat Wired sent me.)

But even more than recent reporting failures like Theranos or the Crypto collapse, Musk’s last year in the wreckage of twitter really seems to be forcing some questions around “Why did you all elevate someone like this for so long? And why are people still carrying water for him?”

Gabriel L. Helman

Saturday Night Linkblog, “This has all happened before” edition

There was a phrase I was grasping for while I was being rude about Mitt Romney yesterday, something half remembered from something I’d read over the last few years.

It was this, from “Who Goes Nazi?” by Dorothy Thompson, in the August 1941 issue of Harper’s Magazine:

Sometimes I think there are direct biological factors at work—a type of education, feeding, and physical training which has produced a new kind of human being with an imbalance in his nature. He has been fed vitamins and filled with energies that are beyond the capacity of his intellect to discipline. He has been treated to forms of education which have released him from inhibitions. His body is vigorous. His mind is childish. His soul has been almost completely neglected.

Those who haven’t anything in them to tell them what they like and what they don’t—whether it is breeding, or happiness, or wisdom, or a code, however old-fashioned or however modern, go Nazi.

Haven’t anything in them to tell them what they like and what they don’t.
