And we’re back.

In Part One, we looked at the basic technology underneath modern “artificial intelligence,” along with possible use cases professional writers see across industries. This part, also lengthy (sorry, there’s a lot out there!), will explore the business of AI, as well as my favorite developments in the public eye.

Even in the short time since I put out the first part of this essay, so much has happened in the space. The big elephant in the attic now is Deepseek’s r1 reasoning model, which not only performs at a similar level to the fanciest western models, but costs a fraction of the price if you use it in the cloud and can be run locally.

We’ll finish with some predictions and recommendations, as if my hot takes will be good and valuable. Again, no one else went to the trouble of writing something like this, so now I have to do it.

THE PRODUCT PROBLEM

For all the fancy technology, I’ve found LLMs end up being good at things I’m great at, bad at things I’m good at, and literally unable to do the things I want to offload. Even the cutting-edge Deepseek r1 model running on my MacBook Pro can’t be trusted to do something simple, like list out character and location tags from a story.

Though unlike many LLMs, it can actually count how many ‘r’s are in the word ‘strawberry.’
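
(If you want to try that at home, here’s a minimal sketch of the kind of thing I mean. It assumes you run local models through Ollama and have pulled one of the distilled r1 variants; the exact model tag and the little ollama Python package are my assumptions, not a prescription, so swap in whatever local setup you actually use.)

```python
# A minimal sketch, not a recipe: assumes Ollama is running locally and a
# distilled DeepSeek r1 model has been pulled (e.g. `ollama pull deepseek-r1:8b`).
# The "ollama" Python package and the exact model tag are assumptions; adjust
# to whatever you actually have installed.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",
    messages=[
        {
            "role": "user",
            "content": "How many times does the letter 'r' appear in 'strawberry'?",
        },
    ],
)

# Reasoning models think out loud before they answer, so expect a ramble.
print(response["message"]["content"])
```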

Unfortunately, I am not the person who pays me for most of my work. I have bosses. They are largely writers, but their bosses aren’t. And what those people think about the value of this generation of AI is a lot more important to the health of this career. We are, in essence, held hostage by the whims of the manager class.

Depending on where they are on the ladder, and at what size of company, managers have some version of one job: the number must go up. 

A lady in a good satirical movie once said, “To beat the bug, you must first understand the bug.” Perhaps we should try and understand how numbers go up and down, and whether the numbers are going up or down with this generation of AI.

In Part One, I said I have a background in product strategy, sales, and marketing. That’s still true! My hope is I can use some of that business experience to provide context, so the next time someone says some obvious nonsense, you can laugh at them, like they deserve.

A PRIMER ON PRODUCT STRATEGY

Let’s start small.

There are two questions I want you to add to your arsenal, right after that HOW question from Part One.

The first is “Would I pay for this?” 

The second is “How much?”

Deep down inside, you know the answers to these. You answer them automatically with action. I noticed for the first time that I had an answer for both when my 3rd gen iPod broke in a year that isn’t important, and I went out and bought a gen 4 the same day. I was a broke college grad, but that was such a necessary part of my life that there was no hesitation. 

I should have bought Apple stock right then, but I spent my stock dollars on the iPod. Womp womp.

But why am I talking about iPods?

These questions are relatives of the legendary sales and marketing uber-question: “What’s in it for me?” This question is extraordinarily difficult to answer, but if you can come up with a good one, you might have a successful product on your hands.

So in my case, I might answer vis-à-vis my ancient iPod:

What’s in it for me? I get my everyday carry music player back.

Would I pay for this? Immediately, yes.

How much? $400 ($648 in 2025 dollars, dang)

Entire companies and industries are bad at answering these. Like, extraordinarily bad. Go read Why We Buy like I had to.

Anyway, we’re not product managers, we’re writers. We don’t have to answer any of these questions, we just need to understand what good answers look like so that we can think critically about this generation’s AI bubble.

And there’s a classic example you’ve probably seen. An all-time great example of an audience hearing an answer to “What’s in it for me?” and FREAKING OUT.

I point you to that clip from the 2007 iPhone keynote because I want to set a reference point, a high water mark for tech. THIS is what a real product revolution looks like. This is what it looks like when an audience GETS in a second what’s in it for them, and they are BOUGHT IN.

I wasn’t there for that iPhone keynote (I watched it online), but I had thousands and thousands of conversations with cinematographers, students, and videographers when the Canon 5D Mark II and Nikon D90 hit, and I clearly remember not just the types of conversations we had, but the undercurrent of excitement. The answer to “would I pay for this?” came so quickly that the question felt forgotten.

In a real tech revolution, the conversation almost instantly becomes a negotiation about price: how to get The Thing into people’s hands as soon as possible.

Have you ever seen an AI demo and responded with an audible gasp?

Which brings us back to what we’re told is the next big iPhone moment. But is it? Did Apple’s tepid Apple Intelligence WWDC keynote leave you shaking, goosebumps on your arms? What about NVIDIA’s multiple utterly awful AI NPC demos? Are you thrilled Google Workspaces now forces their (suddenly free) AI features on you?

Is there something useful-seeming to you about AI chatbots? Perhaps a chance to ask it questions for research, or for structure for a new venture, like getting in shape? Okay, great.

Would you pay to use AI? Hmm, I notice you’re quiet. You’re not paying for it now, are you?

How much? So I guess that’s zero?

You have the questions now. You can use your own experience and the behavior of people around you to begin to investigate whether something works as a product. This is an anecdotal exploration. It’s not always accurate, of course, but we’re starting here because god damn, we have got to practice thinking critically. And thinking has real material benefits, like discovering when someone is pissing on your shoe.

The Hype-Revenue Gap

At some point in 2023, I noticed my personal experience with AI products involved paying for a single month of a subscription, a couple of times a year. Much of the testing I’ve done had no money changing hands.

So to answer the questions…

What’s in it for me? I want to mess around with AI products and see if they are useful.

Would I pay for this? Once or twice.

How much? Maybe $60 a year.

And this was before I installed Deepseek r1 locally and found it free and indistinguishable from cloud-based models. Now the answers are “I’m not sure,” “no,” and “nothing.”

The issue here is not that I gave them money (though they lost a ton of money making that $60, we’ll get to that), it’s that the amount isn’t remotely what I should be spending, based on the cataclysmic marketing hype around AI. There is a gap here between what we are being told and the real numbers.

These people are promising a revolution on the scale of the smartphone, or the personal computer, or air conditioning. A mass market product. I said in Part One that I’m a power user and early adopter. Why is it, then, that I was auto-buying iPods and pre-ordering expensive VR headsets, but my spend on AI across multiple years is less than I spent last month on 3D printer filament?

But it’s great at making memes!

Would you pay for a meme generator? How much?

Your answers are all still anecdotal, but like I said, you can do this experiment at home. 

The issues below are not anecdotal.

AI IS LOSING CATASTROPHIC AMOUNTS OF MONEY AND IT HAS NO CLEAR PATH TO PROFITABILITY.

My $60 annual AI spend lost money for the companies involved. What if I told you they spent $120 on operating expenses to make my $60?

Pretty bad, right?

1. The revenue is small and fragile.

In 2024 OpenAI made $3.7B in revenue but had $8.7B in OpEx (operating expenses, the day-to-day costs of running a company). That means the biggest AI startup with the most popular products spent about $2.35 for every $1 it made in 2024.

And I won’t spend too much time on this because we’re not pursuing an MBA, but only $1B of that revenue was from enterprise accounts. Enterprise accounts are the backbone of a healthy software revenue stream: the revenue is hundreds or thousands of times more than what a regular customer pays, and it’s much harder to cancel a big annual account. For reference, Microsoft’s 365 services (Office, Teams, etc.) make over 80% of their revenue from enterprise customers. OpenAI makes 27% from them.
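
If you want to check that back-of-the-envelope math yourself, here’s the whole thing: nothing but the figures cited above and some division, as a minimal sketch.

```python
# Back-of-the-envelope math using the 2024 figures cited above.
revenue = 3.7e9             # OpenAI's reported 2024 revenue
opex = 8.7e9                # reported 2024 operating expenses
enterprise_revenue = 1.0e9  # the enterprise slice of that revenue

spend_per_dollar = opex / revenue                # ~$2.35 spent per $1 made
enterprise_share = enterprise_revenue / revenue  # ~27% of revenue from enterprise

print(f"Spent per $1 of revenue: ${spend_per_dollar:.2f}")
print(f"Enterprise share of revenue: {enterprise_share:.0%}")
```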

I used an LLM to do that research, and dutifully checked the numbers somewhere else. 🫡

Protip: for a business to be sustainable, you need to make more money than you spend.

We’re talking about the traditional profit model. I know there’s private equity corpse stripping stuff that makes money out of destroying healthy businesses (Toys R Us, Party City, Red Lobster). We’re not talking about that.

But I heard AI will make 70 trillion dollars a month!

Microsoft’s 365 suite, which is used the world over, brings in “only” $56B in revenue a year. So the next time someone says there is a trillion dollar upside to AI that you can ask to make spreadsheets, PLEASE mock them. Shame is a powerful tool.

2. The funding is mind-bogglingly huge and will have to continue.

In Part One, I snarkily asked how OpenAI continues operating if they lose billions of dollars a year. That answer is…funding! Like this enormous round that ended in October 2024.

That’s not just funding, that’s the mother of all funding rounds. OpenAI had to raise more money than any startup has ever raised, and will have to do so again soon. And SoftBank is in there again, a company famous for making bad calls.

There’s a question lurking under all this: when do you stop putting money in? Better Offline’s Ed Zitron loves to end his Howard Beale-like tirades with a weary, sad section where he reminds us that tech… doesn’t have anything else. Will they white knuckle this hype cycle right into a global depression?

I keep harping on asking questions and critical thinking because every seemingly good sounding bit of news leaves other problems unanswered. If you read about the 2025 funding round, you’ll note it’s supposed to be for $40 billion buckaroos, and has the backing of Oracle and President Trump, and blah blah blah. But…none of it explains how OpenAI is going to become profitable. Throwing more money at AI doesn’t solve that problem. Spending more only makes sense when the demand is there. Build a second factory because you keep selling out of your popular water bottles.

Why is it Silicon Valley feels like a middle-aged man desperate to reclaim lost glory?

3. The promised growth is enormous.

Sam Altman predicted OpenAI revenue will jump to $11.6B in 2025. That’s more than a 3x revenue jump.

If you’ve ever worked in sales (I have) and had management proclaim that you’ll be growing the business by a huge number in a single year (been there), your first question is “what new products are we going to sell?” And if the new products are unremarkable and badly priced, like the ones OpenAI announced in their 12 Days of OpenAI event, maybe you start updating your resume (I did).

Uh, you can guess what I’d say if those products were then undermined by identical cheap and free alternatives.

4. And the OpEx is not getting better.

AI requires insane computational power. And if you think a Sora video prompt costs anything like a ChatGPT-4 text prompt? Oh no, it costs a lot more.

That $200/month subscription they announced on the first of the 12 days was an attempt to make SOMETHING profitable, but Sam blew the pricing and even that tier loses OpenAI money.

But what about Lyft/Uber??? What about THEM??????

Hey Westin, startups operate at a loss ALL THE TIME! You idiot. You waste of blood.

Whoa, that’s so mean. Waste of blood?!

I know you’re smart enough not to directly compare ridesharing to AI products. Uber was a revelation when it hit. I’m an elder millennial, I was there. It was incredible. Between personal and work travel, I spent thousands of dollars a year on ride sharing. When I use ChatGPT I get annoyed and depressed.

We are years into generative A.I. Where is the horizontal enablement? Where is the thing it’s enabling? Two years. Show me one thing which you use that you go, “Oh, damn, I’m so glad I have this.” Show me the AirPlay; show me the Apple Pay. Show me the thing that you’re like, “Goddamn, I’m glad this is here.”

Ed Zitron, interviewed by Slate

5. Oh right, also commoditization.

Commoditization is when a product stops being unique because everyone else has their own identical thing. It becomes a commodity.

I’d argue AI products are largely commoditized right now. There’s no “moat” between Claude and GPT4, or between reasoning models like OpenAI’s o1 and Deepseek’s r1. Every time someone puts out an innovative new thing, there’s a very, very similar thing that hits days later.

What about the video generators? OpenAI has Sora, but that mediocre car chase movie was made in Google’s Veo 2. The Blumhouse films from their partnership (I’ll cover that below) are made with Meta’s tools. You could tell, right?

Commoditization drives down prices, because you can’t compete on anything else. You don’t care if your gasoline comes from one place or another, right? All you care about is…price. It’s the ONLY thing on gas station signs. Once an industry commoditizes, it becomes a race to the bottom.

This is a massive problem if you’re already losing money. Are AI companies already losing money? Oh right, I already wrote that section. They are.

Now, do I know when the funding stops? Not really. I don’t know when or if the bottom falls out. Money keeps pouring in and there are just slivers of evidence that things are slowing. Nothing seems to stick.

But I also see CyberTrucks pretty regularly, and if Tesla is selling so few of them they’re cutting production, we’re probably at peak CyberTruck. This right now, in February 2025, is the most CyberTruck saturation there may ever be. But we’ll only know for sure later.

Anyway, to close out this long look at the business, here’s a writeup where Benjamin Riley compares OpenAI’s business to Enron’s, something he has some experience with.

WHAT’S THE WORD ON THE STREET?

“…There was VR and then there was 3D printing and then there was crypto and then there was the metaverse and there’s VR again. I have been doing this long enough to now understand this cycle where the cool new thing is not going to change everything. Or even happen. And it’s going to be full of fucking grifters. All of them are full of grifters.”

From Better Offline: Watching the Watchmen with Jason Koebler, Dec 17, 2024

What if we took a look at the actual business and news happening around AI? If you’re like me, this is where things get interesting.

THE TEA

The discourse around tech and creatives has been raging for a bit. Here’s the stuff I flagged:

Pro Writers Hit Back at AI (with Mixed Success)

Ted Chiang (who is a very good writer) did that essay in The New Yorker about how AI could never imitate a human writer in its current form. True, but not super helpful as it doesn’t directly address the business arguments for AI.

Xalavier Nelson got warmer when he said AI produces “oatmeal” that isn’t very useful. A predecessor to the “slop” we all decry today! He wins the messaging game so far.

For subtler, funnier digs, I also want to shout out Josh Sawyer (at the same link), who wryly pointed out that the problems in game storytelling aren’t solved by more writing.

Consumer AI products get an ongoing muted response

Marques Brownlee had not-great stuff to say about Sora and Apple Intelligence. Neither did CNET.

I already mentioned that OpenAI’s “12 Days of OpenAI” needed to launch big, exciting products in both the enterprise and consumer space, and the response was…muted.

Nobody seems very excited about “AI” PCs. Like, what even is an AI PC? They’re not running local models; that stuff is all free anyway, and it requires a ton of compute power and RAM.

CES ghouls cackle about people losing their jobs to AI

I linked to this podcast episode (which contains actual audio of the CES snakes) but I’m linking it again because I want you to be as angry as I am. I hope their portfolios burn.

The game devs are getting to where the writers are at.

In my experience, AI screenwriting hysteria peaked in 2023, and while it’s still present enough to include screenwriting in this screed, it ain’t what it was. Too many people have used it. All a would-be innovator can bring to the table is help for beginners and speed, which we’ll talk about later.

But in gamedev? The skeptics are myriad there too. The annual Game Developers Conference State of the Game Industry survey hit, and among its bleak results, just 13% of respondents think AI will have a positive impact on the industry. The sense, stop me if you’ve heard this one before, is that AI can’t replace real people and that quality will take a hit.

AI companies steal 120,000 teleplays and screenplays for training.

This may become a lawsuit, but since it’s currently just outrage and despair, it goes here.

Late last year, screenwriters found that teleplays and screenplays had been lifted as training data by the tens of thousands, and because screenwriters hand over the copyright to their work, there’s nothing they can do to directly defend it. The studios have to step in, and it’s not clear they will.

The Ankler podcast interview with screenwriter Robert King does a great job of explaining the state of things and expressing outrage over this theft.

The 12/17/2024 episode of screenwriting podcast Scriptnotes, “They Ate Our Scripts,” also goes into the reaction from writers, and boy howdy is it a downer. Let’s do a palate cleanse:

A reformed data scientist does a big, long rant in the style of The Last Psychiatrist.

Even in places where AI is supposed to be helpful, like software, people have strong negative opinions. Enter “I Will Fucking Piledrive You If You Mention AI Again,” a very funny and specific look at the business from a different perspective. It’s long, be warned.

I have opinions on whether customer service will truly be disrupted long term like Nikhil predicts, but that’s a digression I’ll spare you from since it is neither exciting nor relevant.

PROJECTS, DEALS, AND LAWSUITS

Here’s a sampling of the released demos and media, big partnership announcements between studios and tech, and of course, some of the many ongoing lawsuits against AI companies.

Ubisoft says they love AI writing and it totally won’t eliminate Junior roles.

What is it? Project

There’s been a lot of bold talk from leadership at Ubisoft and elsewhere about using AI to do entry-level writing work: barks and side quests and so on. Look, barks suck and are not fun, but they’re also a key thing beginner game writers do. And learning how to make barks good makes you a better game writer.

I have not personally seen anything beyond Ubisoft’s 2023 PR video, which portrays their tool as a sort of sounding board, which you’ll remember from Part One is one of the few use cases people seem to have for LLMs. 

The video also emphasizes that the generator only makes first drafts, which…yeah, let me know what you think about those hot NVIDIA demos below.

Are you at a studio that’s started using AI writing in the process? Let me know how it’s going.

The AI Toys R Us commercial.

There’s likely a ton more to say in a dedicated essay about AI filmmaking, but I had to touch on some key non-writing events here. So that we may suffer in our cursed knowledge together.

What is it? Project

This was the first “commercial” made with AI. Backlash was intense, because it looks quite bad. An article from last summer here includes the spot and the drama.

This spot feels doubly gross when you learn Toys R Us had clawed its way back to profitability before it was bought by (wait for it) private equity, saddled with paying off its own purchase debt, and crushed into giraffe paste.

Like the marketing hype seemingly directed at creative pros in this hype cycle, this commercial wasn’t actually for Toys R Us customers, by the way. While there is technically a Toys R Us retail presence today, it’s a Macy’s in-store popup run by WHP Global. This commercial was made for…wait for it…investors! Native Foreign (the agency) partnered with OpenAI to produce it as a Sora demo. When no healthy brand would dare be the first to risk its public standing, Geoffrey’s corpse was trotted out with the power of the uncanny valley.

I’d say thanks, but he can’t hear us. He’s dead.

The Coca-Cola holiday commercial.

What is it? Project

This mediocre Coca-Cola holiday commercial full of trucks of different shapes and sizes was created by three (?!) commercial agencies: Secret Level, Silverside AI, and Wild Card. It’s not nearly as good as the horny Folgers siblings spot, but few holiday ads are.

The marketing math here doesn’t add up for me. If your brand is most of your company’s value, why would you risk public perception for a small reduction in production costs? As with so many of these deals, I need to know more to get a clearer picture of how this got greenlit and why.

This made it into normal broadcast rotation despite the controversy, and Coca-Cola is pretty unrepentant. As many people noted online, and as I saw myself, the viewers I was with couldn’t tell it was AI, but they also had nothing positive to say about it. I think the best metric for whether it was successful for Coca-Cola is whether they do another one.

Minecraft AI/DOOM AI.

What is it? Project

They’ve figured out how to make a diffusion model interact with your control inputs and hallucinate a thing that resembles a video game, so long as said game looks exactly like one of the biggest video games ever and you don’t mind that none of the systems work.

Seems familiar, right? The functionality and limitations here are not so different from the ones in text-based mediums, except that someone figured out how to do control inputs. That one detail is honestly really interesting, and the true innovation here is being overshadowed by the traditional hype nonsense about how every video game will be one of these instantly cooked-up codeless marvels.

David Goyer tries the crowdsourced franchise thing with AI help.

What is it? Project

David Goyer (who has made many things, and most recently worked on the Apple TV adaptation of Foundation) wants to crowdsource and compensate creators in a shared sci-fi universe called Incention. Atlas is an LLM that will act as…a dramaturge, a way to help sort through and guide a massive slush pile. Neat. It’s not doing any writing itself; it’s fact-checking and summarizing the slush pile. That makes sense.

He also wants to strip out nonsense and bloat from the traditional blockbuster model, including merchandising and so on. I’ve heard of business models like this popping up, and it’s hard to tell if any of them will go anywhere. He needs both for creators to show up and make things, and for content made from the franchise to be successful. BUT the design and intent are both healthier and bottom-up.

Right wing extremists use LLMs to spit out cruel executive orders.

What is it? Project, I guess.

Uh, that deluge of destructive and chaotic executive orders coming from the Trump administration appears to have been made, in part, with AI.

This is as good a place as any to drop my bleakest read, which is that one of the few places AI seems like it could work as a business is as a tool for authoritarians. I’m not the only one who clocked this: unhooking art from humans and making it look terrible is extraordinarily useful to that crowd.

It’s sort of heartening that it sucks and is unreliable, but a thing that sucks and is unreliable can still cause a lot of suffering. It may be that the business failings of AI mean nothing in the face of adoption by the rich and powerful, but you can always…repeat after me…laugh at them, like they deserve.

The NVIDIA AI NPC Demos.

What is it? Project(s)

Ah, NVIDIA. You and your NPC demos. I’d recommend watching one, if you haven’t yet.

Two obvious problems. Yeah, it’s bad. The guy is a weird robot delivering a boring side quest. Dialogue sequences already get skipped. Why develop something so uninteresting that it begs even more for that skip? This is why deliberate narrative design, good writing, and a good performance are so important.

And you have to talk to this guy. With your voice. Hmm. Maybe that’s three problems.

The goal here is to streamline AAA development, by getting us MORE content for less labor. But remember that cheeky Josh Sawyer quote? People don’t actually want more middling narrative.

“The appeal for our players is the characters feel very specific,” Sawyer said. “I’m not looking to make a lot of generic dialogue.”

Ubisoft and NVIDIA also had a similar demo at GDC 2024. It’s worth watching the video of the interactions in the article. I legitimately don’t think the current model of game design should be forcing player speech into the game. People don’t seem to want to do it.

There’s also the Covert Protocol demo, the one where you’re a spy in a hotel lobby. The writer’s behavior while trying out the demo is telling. Here, an attempt was made to do something more thoughtful and interactive, but that requires far more involvement from the player. Is a video game entertainment, or a job?

And much love to the author, but going into a game world to derail the gameplay sounds cute and is, again, something no one wants. You don’t go into Sleep No More to break the immersion of the play, you don’t go to Disneyland to force your way out of the Guardians/Twilight Zone ride and knock over hologram mirrors. That stuff sounds fun because of the subversive power trip, like pouring your freshly opened Coke Zero onto the ground. But now you don’t have a refreshing beverage.

These demos hint at an interest in next generation game design that isn’t supported by the business. Someone is going to need to make such a game first, and then everyone else will copy it when it makes a ton of money. That’s how things work right now. It’s why the industry is obsessed with live service games even though they’re extremely hard to make successful.

Alas, all of these reads are subtle, and easy to grasp only if you’ve gotten into either game design or the critical analysis of games. So the only takeaway that will land with people is that none of this is fun to play. That’s fine by me.

Microsoft has a big game narrative AI patent.

What is it? Is a patent a project?

Windows Central has discovered a cool new patent Microsoft filed.

A bunch of narrative devs (myself included) had a laugh about this, mostly because what LLMs can do (which I covered extensively in…Part One) doesn’t help much with narrative. We don’t need more narrative, as Josh Sawyer pithily said in that NPR interview. 

Or as Nathan Savant said to me in that Bluesky thread, “Most of procedural narrative is about grappling with the 10,000 Bowls of Oatmeal problem. AI is very very good at generating 10,000 bowls of…something. At best it’s oatmeal. AI has never even begun to grapple with the thing we designers have been wrangling since the start.”

Runway AI partners with Lionsgate.

What is it? Deal

This hit last year and the press release is full of CEO fluff. It’s not clear at all what the deal is or what it means. I’m including it in case there’s…I don’t know…legally sanctioned training on screenplays and teleplays happening inside it.

Meta partners with Blumhouse.

What is it? Deal

Blumhouse and Meta are working together in a vague way, but here at least we see some output from Blumhouse filmmakers.

The highlighted first film to come out of the partnership, Aneesh Chaganty’s “I h8 ai,” has the filmmaker using sloshy AI to transform his childhood movies into the settings and locations he’d imagined for them when he was 10 years old.

As a fellow filmmaker who started in the 1990s with video cameras, uh, that would have been pretty neat.

But is this what Meta was hoping for? That their cutting-edge technology, which is supposed to be the backbone of their hojillion-dollar future businesses, is portrayed as a fun toy for children making terrible movies? Children are broke, man; that’s literally a point made in “I h8 ai.”

The NY Times got scraped.

What is it? Lawsuit

Like almost all of the lawsuits I’ve seen, this one is ongoing. I guess models somehow got their hands on the entirety of the New York Times for training data. It’s tough to know what will come of it, which is one of the many reasons it can feel like there’s no way to push back against the inevitability of the technology.

While we wait and see how these go, I’ll just point out that it’s extremely funny how egregious the training data theft is, and yet the refrain remains “it’s not really theft, and also Deepseek stole from us!”

Lots of writers sued Meta and the stuff coming out of the case is damning.

What is it? Lawsuit

Not only did Meta download massive torrents of copyrighted books to train on, their employees talked about doing so on company laptops, and expressed concern because they knew what they were doing was not only wrong, but potentially super illegal.

Jason Kint posted the scan of the email confirming that Zuck himself signed off on this to Bluesky. Oh, and one of their lawyers quit in January.

You can stop parroting that stupid marketing hook about how training AI on movies and books isn’t stealing. Even the companies doing the training know it is.

Now, how will the trial go? No idea. But there’s one glimmer of hope that hit the week this post went live…

Reuters just won a ‘fair use’ ruling against an AI company.

What is it? Lawsuit

This is the first promising evidence that cases like the above have some weight in favor of the creators whose work was stolen and used as training data.

Hell yes. But yeah, not a guarantee as to how the full lawsuit will ultimately go.

FACE PLANTS

Ah, public embarrassment and shame. If you don’t see your favorite one here, please please please send it over.

Tech giants force the future on us, a sign everything is going great.

In short, the big tech companies are not getting the traction they’d hoped for, so they’re just putting AI buttons into their existing paid services. This is honestly happening on so many platforms that I don’t feel like capturing the scope of it, so here’s some blowback from the spreadsheet factory.

LinkedIn thought leader lies to our faces.

This swill crossed my LinkedIn feed as I was drafting this essay. It’s…let’s say indicative of the sort of thing you can expect on there.

Note that the first prediction, paraphrased, is “AI products will become useful.” Then note the bullet about how AI will enable a four-day workweek, which is outrageously condescending.

Look, LinkedIn is dense with performative nonsense. It’s one of the reasons why r/LinkedInLunatics is such a cathartic read. It takes something to cross a line with me on LinkedIn, but that workweek prediction did it.

I’m begging you (if you haven’t before) to have a conversation with your next rideshare driver. Ask them how fun and good it is to be a tech gig worker. Ask your Amazon delivery person. Maybe for fun, ask your UPS driver how being a union worker is for contrast.

C-suite publishing man does not understand.

This one is cute. That’s all I’ve got. 

Screenwriters do a little contest for the youngs, it is a confusing mess.

Did you see that NoFilmSchool AI screenwriting challenge?

NFS did a contest where a well-meaning staff screenwriter faced off against a team of writers using AI in a ten-page challenge that readers would then vote on blind. Jason Hellerman has been produced and had a script on the Black List, and he seems to have walked into the contest not expecting to go up against a team that rewrites AI output, made up of people who are also produced screenwriters. Uh?

Look, I think Jason’s heart was in the right place, but doing a John Henry against people trying to market their AI writing services is a weird choice. It didn’t help that (sorry, fam) neither entry was strong, both in my opinion and in the almost-tied poll they ran on :shudders: screenwriting Twitter. I read the postmortem blog feeling like J.K. Simmons’s fed character at the end of Burn After Reading.

Even after all the research I did for this essay, I’m still struggling to find the demand for an AI screenwriting team that can write a script in a day. That’s not to say it isn’t there; more than one pro screenwriter I talked to assumed there was demand for high-volume (lower-quality) script pages, but I have yet to hear concrete evidence it’s happening.

Y’all have no idea how many crazy things I cut from that list. Maybe I really should write a followup about AI and filmmaking.

PREDICTIONS AND RECOMMENDATIONS

If it wasn’t clear from that mass pile of news, AI doesn’t feel “inevitable” when you look closely at what deals are happening and where. It feels…present. And interesting in some ways. So if the hype gap remains, even after a look at all the legal and business interactions, what now?

Prediction: The bubble pops and it’s a huge mess.

Let’s get the apocalyptic one out of the way.

The AI bubble will probably pop, unless the government bails out the tech industry. If the bubble goes, we get another AI Winter, which is a real thing I wish people knew about. Take a biiiig sip of coffee and look at that Wikipedia article on AI Winters…it’s super weird that all these AI experts never once mention the multiple times funding for AI in tech has dried up.

The only thing is, if the bubble goes, it’ll take the stock market with it. That seems bad.

Prediction: AI dents the royalty-free media market, but people continue to easily tell when it is used.

I think the image and video generation stuff will continue to see some demand as a turnkey media generation resource. Think headline images for speed dating events and mixers, and those YouTube thumbnails I won’t shut up about.

I think it’s going to gain a reputation as a low cost option, though. Gen Z is wary of AI and can more easily detect it.

This assumes AI companies aren’t torn apart by lawsuits from copyright holders, of course. Without access to training data, the generation is dead.

Prediction: AI is going to be like VR.

Here’s a fun development, it’s the night before this post goes live and I’m changing a prediction.

Once upon a time, it felt obvious to assume the tech standard bearers (your Apples, your Adobes, your Alphabets) would be the last men standing with AI, but the proliferation of commoditized and open source tech makes me think that AI will stick around as a sort of resonant hum under the noise of the world moving forward. You know, while people figure out what to do with it.

Does that sound familiar? If you’ve been into VR in the last ten years, it should. Glimmers of interesting things and a lot of false promises. Waning funding, but the innovators are still plugging away on the margins and under the direction of the occasional obsessed billionaire while we wait for the next Beat Saber or Half-Life: Alyx to hit and genuinely move things forward.

An example: I was about to point to the Rabbit R1 as a small company disaster, but a quick search showed people are pleased with how the little Teenage Engineering-designed gizmo has improved, even if it’s still a far cry from what was promised ahead of launch.

I could probably do another one of these on VR. Oy. Maybe next year.

Prediction: A bunch of apps will pull their AI buttons.

Even if the costs are getting freaky cheap with all the competition, it’s not clear any of the AI use cases are going to be substantial revenue generators. As prices go up (we already have optional higher-priced tiers, like OpenAI’s $200/mo ChatGPT sub), companies will have to evaluate whether their customers actually want those AI buttons.

If there’s a bubble pop, a lot of buttons will vanish soon after. Maybe even in big spaces like iOS and Android.

Prediction: AI will continue to (help) eat entry level jobs, for now.

I hate this one. This is more of a prediction for games, but I could see it impacting other media. And it’s not because the quality is there, it’s because of existing market conditions and the perceptions of managers. 

There’s a perfect storm of problems in games and Hollywood. We were already struggling as TV writers to keep the pipeline alive, what with the streaming boom’s reduced writers room sizes and shorter TV seasons. It was a big part of the 2023 strike. And in games, if you’ve looked for anything writing-related in the last two years, you know how many roles are Senior or Lead and how almost none are Junior or Associate. Same deal: securing funding isn’t as easy, the line didn’t go up steeply enough, and the new hire is the first to get cut.

It’ll be hard to tell the job market is finally improving until we’re on the way back up. When that happens, the theory is you’ll see the junior roles return to that huge game jobs spreadsheet. Fingers crossed it happens this year.

Prediction: Audiences and users will continue to be indifferent (or even hostile) to AI.

It’s become a trend to dunk on people who cry out that AI is the future while offering only tepid examples, even on LinkedIn, the Safe Space For Thought Leaders™. But that dynamic sits largely between hype bros (who have no aesthetic or literary taste) and the people who get paid to make said aesthetic and literary things. It has little to do with the real world, where people might enjoy the novelty of AI but are at best indifferent to its creative output and the feasibility of the larger business.

Note I’m not talking here about an AI meme going viral. That’s one of the rare use cases AI can be good at. I’m sure that will happen. The jank will serve it. But will the next Coca-Cola AI commercial win a Clio? I’m saying nah.

Prediction: Commoditization will continue to make differentiation…murky.

Gemini will look like ChatGPT will look like Claude. And probably, they’ll all look like open source models you can put on your home computer. Outside of programming, I never see people claim one is better than the other.

I’ve watched with curiosity as open source teams work to replicate the research models the big companies have released. Agents? Yeah Google and OpenAI have semi-working ones now. It’s easy to expect open source options to come into existence there, too.

Alright. Here’s the part I wrote this thing to get to.

Recommendation: Ask questions.

Ask how, ask why, ask if what you’re being told is true or not. Spot check your instincts around paid products, as well as the behavior of people around you. 

You have the power. I believe in you.

And for the love of god, when this stuff tanks grandly and the leaderboard on the S&P 500 is down 50% and American Giant hoodie-clad CEOs are talking about quantum computing as they pack entire canisters of Zyn into their cheeks like rodents, please keep asking questions.

Recommendation: Take the job.

Hey, writing is an extremely tough career, if you’re lucky enough to suffer through it. Our responsibility as writers is to get that bread.

Just remember, if Uncle Clive’s AI Emporium wants you to train their model, your rate is triple your standard one, on account of how volatile the business is. I’m being completely serious: many of these companies probably aren’t going to exist in a year. The risk is real and should be factored into your compensation.

Recommendation: Have a drink, do not get drunk.

Squeezing in just under the wire, a new paper shows us that an over-reliance on generative AI results in a deterioration of cognitive faculties, surprising no one.

I’ve tried to avoid the sort of moralizing others have fallen into amid the deluge of noise the AI hype wave has brought us, but since this one also has a caramel center of self-interest, I’ll let it through.

Play with the tech, see what it can and can’t do today. Share your cheeky AI anecdotes on the business posturing app, but maybe don’t change your name to “The AI Imperator” or something else that’ll be a painful and expensive tattoo removal.

Recommendation: Take the reins of the design side.

This one is for game devs, but there’s a version of it authors and screenwriters could use.

Why is it these AI game demos all suck so profoundly?

I mean the easy answer is the truly skilled storytellers in the industry aren’t working on them, and the tech isn’t useful for the problems said storytellers are out to solve. Say what you will about the extraordinarily mixed bag that is AI generated commercials and short films, but at least that stuff is occasionally made by someone who knows what they’re doing.

There’s a next generation of storytelling in games coming. I can feel it every time I find an NPC who wants me to kill ten of something for them. Likely, this generation of AI won’t help much in that arena, and so if the “thought leaders” don’t have the sense or profit motive to home in on the opportunity, it’s up to us.

It’s a shame tech as a whole has abdicated the quest for innovation, but maybe it can still happen if the experts push for it.

IN CONCLUSION…

We made it. 

This monster of an essay was born out of love for the writer community. Well, and annoyance at the flimflam from the tech industry. I hope that it’s been a valuable high-altitude look at the state of things. Calming, even.

I want to thank my beta readers, who haven’t given permission yet to have their names in here. This part came in hot.

A Reminder:

I love technology.

I love gadgets and buttons and software that do cool things. I want technology to be useful. It’s deeply frustrating to watch the industry obsess over solutions in search of problems. Y’all are just going to fuel yet another Douglas Rushkoff book. And maybe destroy the planet.

There’s a final thought I had here, sort of a paean to an AI assistant that works. A product that actually does something. Something I can trust to research things and do tasks for me, that’s mine, that doesn’t turn around and feed my data into some oligarch’s database.

Maybe there’s a glimmer of hope, with the flavor of the old internet, when you weren’t a product to be sold to advertisers. Maybe there’s a world where LLMs become more effective but they’re also so cheap and available that they live on your devices. Maybe the schadenfreude the industry is facing turns their endless failures into a net benefit for humanity. For once.

Maybe the disintegration of this bubble leaves us free to fund and appreciate interesting innovations that promise a long term future for creative industries. The next generation of narrative design? It’s waiting for us to find it. Unreal Engine virtual puppets? Yeah, people are doing amazing stuff there. Free open source game engines that make real Game Boy games? Indie television?

The future is out there, waiting for a moment when we aren’t being screamed at.

Want to complain or tell me I’m clever or good? Hit me up on Bluesky.

Want a notification for when the next blog hits? Join the newsletter below.

xoxo Westin