- INTRO
- WHY IT’S CONFUSING (AND DEPRESSING) OUT THERE
- WHY YOU SHOULD LISTEN TO ME
- HOW AI ACTUALLY WORKS
- HOW WRITERS USE AI
- PART TWO
INTRO
Here we are in what promises to be a fun, sunshine-y 2025, with years now of the current generation of AI technology out in the market. We’ve been inundated with hype at a scale seemingly greater than the last five tech industry hype waves combined. We’ve been told all sorts of great-sounding lines about how revolutionary and inevitable this technology is, especially for creative endeavors like writing.
I don’t know about you, but it’s been difficult keeping up with the deluge. In addition to regularly updated core LLMs like ChatGPT (and its very similar competition) and diffusion-based image generators like Midjourney (and its very similar competition), we now have video generators like the just-released Sora, and a massive pile of API-driven products on offer for screenwriters, novelists, copywriters, and so on.
On the consumer side, seemingly every piece of software with a user base over 1,000 has received shiny new AI functions, whether we like it or not. Apple is finally in on the game, with a suite of sort-of interesting tools and a truly bizarre integration of ChatGPT.
If you don’t know what all those letters up there mean, keep reading. You’re not alone. I’ve seen LinkedIn posts from professional creatives talking up the inevitability of the technology. I’ve had conversations with aspiring writers who parrot the hype machine, and who swear by the new version of one model or another, who are adamant that this thing or that thing is really good now. They promise.
But is it? Are they??
My goal with this write-up is to help fellow pro and aspiring pro writers establish a basic understanding of the technology, the use cases for writers (as I understand them), and the business of AI as it stands at the start of 2025. Yes, the second part of this lengthy write-up includes lawsuits and deals and public embarrassments, which I call faceplants!
You may open your Topo Chicos. Let’s begin.
WHY IT’S CONFUSING (AND DEPRESSING) OUT THERE
The hype is loud.
Let me know if any of these sound familiar:
- AI models learn like humans, and thus the theft of training data isn’t theft, it’s just the same process of ingesting media that you as a human do.
- AI won’t take your job, but someone using AI might.
- AI has an upside, depending on the industry, of billions or hundreds of billions of dollars.
- This is the worst it’s ever going to be.
- AI will let you generate custom movies and games so you can make your very own Minecraft-Skyrim hybrid (and people will want that).
You’ve heard some of these parroted, right? These are some of the AI marketing hooks that have stuck. Are they true? It doesn’t matter; these hooks exist to sound good. Repeating them makes this generation of Artificial Intelligence feel inevitable and profoundly useful. If you’re a creative pro and you see other creatives throw these out, you might feel like you need to learn some AI tools, or that it’s time to pack it in and get a “real” job.
Is AI actually inevitable and useful? Some of it might be, sometimes, but that’s not the narrative.
This messaging isn’t meant for creatives, by the way; it’s meant for potential investors, and to make those who have invested feel it was a smart move. This messaging is part of the Silicon Valley marketing playbook and has been for the last several hype cycles. There is so much money to be made in a hype cycle that the grafters and grifters hop onto whatever is gaining traction, and it gets LOUD. Imagine, if you will, an army of LED screen-paneled Cybertrucks headed to where the billionaires are (San Jose, Manhattan, bunkers in the desert, etc.).
And it seems like it’s never been this loud. NFTs weren’t this loud. Web3 wasn’t. The Metaverse? 3D Printing? VR? No.
There aren’t that many of us.
It doesn’t help that those hype bros far outnumber professional writers (certainly on LinkedIn). Even if you fold in the aspiring crowd that’s on the cusp, the number is relatively small. It doesn’t help that the craft gets nuanced and sophisticated, such that most people don’t know whether or not the current generation of AI can even help with (or do) what writers do. And it really doesn’t help that we are all busy scrambling for the next job, or in the case of the aspiring pro, fighting for ever-dwindling junior roles. It’s been rough out there.
The tech is byzantine.
There is some remarkably sophisticated technology underneath these products. Odds are high that you, the reader, literally do not understand how transformers or diffusion models work, even in the abstract. So if someone excitedly says AI is conscious, and its “thought process” resembles a human’s, who are you to doubt them (even though they are wrong)?
The want is real.
One thing is undeniably true about the companies selling AI: they really do want to remove labor from their customers’ payrolls. That saves money, baby! Profit for the money trough. They laughed about laying people off at CES. Skilled workers and creatives are a liability to the billionaires and the millionaires they anoint to lead their empires.
There’s some good news to be had about how well their grand scheme is actually going, but I want you to remember that.
WHY YOU SHOULD LISTEN TO ME
Honestly, I’m at best a medium understander of AI, but no one else has done this type of write-up to my satisfaction. So here we are:
- I’m a professional writer. I’ve done the most paid work in games, mostly in the AAA (big-budget) space. I have three indie novels out, I’ve worked as a screenwriter, and I was lead writer on the new website for one of the biggest cinema lighting companies in the world. So that’s professional work experience in games, print, screenwriting, and copywriting.
- Because of that work experience, I’ve used and been around people who use AI products in a professional setting. I first experimented with image generators back in 2021 and LLMs (large language models) in 2022. I’ve personally experimented with and seen people use AI across mediums and on projects large and small.
- I’m a tech power user and early adopter. I had an iPod in 2003, a Twitter handle in 2009, multiple VR headsets in 2015, made my Blender donut in 2020, and so on. Yes, I just bought my first 3D printer. I’m not an extreme early adopter; I get in when things are a year or three from breaking out. Make your way to the bottom of a rabbit hole and you’ll find me there reading 15-year-old threads on REDuser.
- In a past life, I spent 15 years in product strategy, sales, and marketing in cinema gear, content creator accessories, and consumer electronics. I’ve launched products into Apple stores and embarrassed myself in front of (among others) Canon’s C-suite. Thanks to that experience…
- I’ve seen firsthand what a real tech revolution looks like. I was at Redrock Micro when the company was first to market with video rigs for DSLR cameras. You’re welcome, YouTube. I’ve seen things you people wouldn’t believe. Not crazy apocalyptic things, just weird things if you know how trade shows for video cameras used to look.
That last one is a blessing and a curse because living through actual real tech revolutions firsthand makes me skeptical that this generation of AI has the juice. But my evidence for it is based on experiences your average writer or tech grafter has never had.
HOW AI ACTUALLY WORKS
I need to make sure we all have a baseline understanding of how these models operate. We’re not gonna get into extreme detail, because we’re not setting up our own Large Language Model; we just need to share the same foundation.
You’ve probably seen or typed up pushback against AI from a moral standpoint. What if I told you that learning the basics would give you something a million times more powerful than morality (at least when talking to hustle bros): business acumen?
What if, the next time someone climbed out of their Cybertruck and screamed “This is the worst it’s ever going to be!!”, you laughed at them like they deserve?
Oh, and because this is such a sprawling field, we’re going to focus on the text products (Large Language Models, or ‘LLMs’) as much as possible. Maybe I’ll do a separate write-up on filmmakers and AI later. That side of things is strange, noisy, and fascinating in its own right.
A brief-ish explainer.
I found what I consider to be a strong overview of how Large Language Models operate. It’s 8 minutes long, but I promise it’s not too obtuse. In short, Large Language Models abstract incoming words into data, and compare this data abstraction to a model it created by training on data it got from somewhere. Then the LLM can guess what a next word (or part of a word) might be. Rinse, repeat. Repeat many, many times. Drain a lake dissipating the heat from all that repeating.
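If that loop sounds abstract, here’s a toy sketch of it. This is not remotely how a real LLM works under the hood (real models use neural networks over token vectors trained on trillions of words, not word counts over fourteen), but the guess-the-next-word, rinse-and-repeat shape is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which: a cartoonishly tiny stand-in for training."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word` during training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Our entire "training data." A real model drains a lake; we drain a sentence.
corpus = "the cat sat on the mat so the cat sat on the rug"
model = train_bigrams(corpus)

# Rinse, repeat: each guess becomes the input for the next guess.
word, story = "the", ["the"]
for _ in range(4):
    word = predict_next(model, word)
    story.append(word)
print(" ".join(story))  # the model confidently regurgitates "the cat sat on the"
```

Notice that nothing in there understands cats, mats, or sitting. It’s counting and guessing, scaled up to absurd size.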
You might notice that this process resembles something between smartphone autocomplete and the baseline predictive engine in a human mind, what improvisers call your ‘lizard brain.’ That means there are plenty of layers of human cognition not represented: AI does not filter through survival mechanisms and memories of lived experience. To dip into craft: It’s missing the ability to anticipate audience tension and release. Which, of course, is the core tool of comedy, music, and storytelling.
File that one away. I think it explains a lot.
Is there an example of this lack of understanding and high-level thought? Totally! For a kind of dense example with explanation, check out this Bluesky thread from Benjamin Riley about temperature (slideshow below, since you need to be logged in to view).
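For reference, “temperature” is the dial that rescales the model’s confidence before it picks a word: low values make it near-deterministic, high values make it flail. A minimal pure-Python sketch, using made-up scores for three candidate next words:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into sampling probabilities.
    Lower temperature sharpens the distribution (predictable text);
    higher temperature flattens it (weirder, riskier text)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words, e.g. "cat", "mat", "rug"
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top word dominates
hot = softmax_with_temperature(logits, 2.0)   # flattened: real chance of any word

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At temperature 0.2 the top word gets over 99% of the probability; at 2.0 it drops to about half. Either way, the model is rolling weighted dice, not deciding anything.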
Incidentally, Benjamin Riley was one of the first people to question Enron’s business before, well, something happened and they needed to rename the Astros’ stadium. Does he see any concerning parallels? Check out Part Two!
“I seriously think a lack of expertise is why LLMs seem useful…”
You don’t see this cognitive gap discussed as it relates to writing work much because, well, it’s very Inside Baseball, and LLMs can sort of fake tension and release because the training data contains the entire internet. Your average hype bro skipped humanities classes like leg days and will never be able to tell. Cue Princess Jane videos.
Really though, I dunk on the awful AI filmmaking online, but I seriously think a lack of expertise on the user’s part explains why LLMs seem useful (at least for high-level writing) regardless of their actual ability. I went into ChatGPT the other day to ask it about a Python use case and it spat out lengthy answers complete with possible lines of code. It looked GREAT! I could take the information there and easily go build my database. But, and I cannot be clearer, I have NEVER USED PYTHON.
WIRED’s recent article supports this, by the way. It is titled, “The Less People Know About AI, the More They Like It.”
“But,” you say, “This is the worst it’s ever going to be!” Very good. Like a Large Language Model, you have selected an appropriate-sounding response to the above information. LLMs could get better, right? After all, that’s how technology is supposed to work. It gets better in a consistent fashion over time.
Is that still true, though? Does it? Are you sure? Because the experts aren’t. More on that in Part Two.
As a somewhat chonky preview, here’s some research-based thinking on how to make LLMs less bad from Ars Technica. The researchers netted modest improvements in LLM performance on a specific type of logic puzzle. So, you know, not only is none of this close to hitting a consumer product, it looks like when it does, all it will do is help grade school students cheat on standardized tests.
By the way, while we’re talking about how AI seems versus how it actually is…
Stop despairing about AI worship.
Dan Olson: “…the thing that you see in these circles is people…praying to ChatGPT. That they have a question.”
Adam Conover: “You think that’s what they’re doing?”
Dan: “Yeah, they have a question in their mind and they petition this thing that they place as a higher power. They approach it as an oracle and ask it questions and simply trust what it gives them. They don’t understand that it doesn’t know what a pineapple is.”
– Dan Olson, from Factually! with Adam Conover, Jun 7, 2023
You might be experiencing an existential crisis over just how reverent and obsessive people are about this generation of AI. Maybe you heard how Silicon Valley loves Anthropic’s Claude because it is nicer than the other LLMs. Perhaps you were subjected to Casey Newton’s huffy apologia, “The Phony Comforts of AI Skepticism,” where he points to people developing “intimate relationships with chatbots” as evidence of AI’s accomplishments. Or maybe…you still have a New York Times subscription.
But some of us are children of the 1990s and remember AOL’s chatbots. And some of us are even game writers, and remember the chapter in Hamlet on the Holodeck about ELIZA, the original chatbot, and how even with primitive 1960s software, people were treating computers as more human than human.
This is just more of that. We’re parakeets, trying to feed our reflection in a mirror. We’ve always done it. People feel like there’s a thinking being in there, and don’t understand that feelings don’t make things true.
At least this generation of AI is genuinely sophisticated and human-sounding. Maybe petition your school board to do more classes on self-awareness and emotional intelligence? I don’t know.
But What About A.G.I.?
I know, the latest news from the hype cycle is that AI has achieved (or will achieve soon) Artificial General Intelligence, a la Data from Star Trek: The Next Generation. That is, LLMs in all their predictive glory have evolved into thinking, reasoning beings. AGI is how we’re all supposed to get agents, AI that can operate websites and do complex tasks for us.
New reasoning models like OpenAI’s o-series seem to be better at more complex tasks, but there are lots of reasons to be skeptical. For one, why are all the thought leaders saying AGI is here but won’t be any good? I know I talk a ton of shit when I’m launching a revolutionary new product. Strongly discourage people from trying it.
Devin sounds on paper like the platonic ideal of an “agent,” the exciting new AI products that are supposed to be able to see computer screens and interact with apps for you. Devin lives in Slack chat and goes and does stuff when you ask it to. But Devin ain’t cutting it. Would you pay for (or even trust) an assistant with a >70% failure rate?
Tasks it can do are those that are so small and well-defined that I may as well do them myself, faster, my way. Larger tasks where I might see time savings I think it will likely fail at. So no real niche where I’ll want to use it.
– Johno Whitaker, from the answer.ai article linked above
Incidentally, OpenAI’s new agent has hit the market, and even an AI apologist struggles to make it sound much better.
Hmm. I wonder if I’ll find something similar about LLMs as writing tools, over several thousand words in an upcoming section.
At the risk of over-linking, Benjamin Riley also has an interesting look at this ongoing discussion, though be warned it’s more in the weeds than previous explainers.
If you can’t tell, the jury is very much out on what AGI is, so much so that OpenAI and Microsoft don’t even define it by a test regimen, they measure it by OpenAI’s revenue. We don’t need to take the AGI discussion seriously if they don’t, right?
Let’s talk shop.
HOW WRITERS USE AI
Those of you who haven’t played in-depth with AI products might be wondering how you actually use this stuff to help with your writing.
From now on, our favorite question will be…HOW?
- How does this work?
- How specifically are writers using AI?
- How does OpenAI continue operating if it loses billions of dollars a year and there’s no clear path to profitability?
…and so on.
With that in mind, here are the use cases I’ve seen and tried myself over the last few years. They all start in very similar ways, with a prompt to a chatbot. To kick things off, you might type something like “I want to work on a short story.” An example with Anthropic’s Claude:
I realize there isn’t much upside in short stories, but I enjoy writing them. Plus, they’re great for marketing and experimentation.
Alright Claude, sounds like you need some detail to get the process started. Here’s some basic ideas I had…
Oh. Claude ignored me and just wrote the story on its own. Nice. You know what? Let’s start here.
USING AI FOR ACTUAL WRITING
“I’ve been told junior engineers shouldn’t be using it, but senior engineers can because they know what’s wrong and they can fix it and the output that it’s given.”
– The Vergecast: AGI is coming and nobody cares, Dec 6, 2024
Yes, you can have an LLM predict up actual prose and screenwriting for you to either rework or subject other humans to. I don’t recommend it and I haven’t personally seen any paid work firsthand where a writer took LLM outputs and used them, but you could. I’m sure someone somewhere has gotten paid doing this.
1. Claude writes me a short story.
Since Claude is a tad precocious, let’s take a look at that short story. Here are all 639 words.
Let’s be clear, you saw the prompt. I had basically no input in this story. Some guy once said “Ideas are cheap, like table salt,” and this is three ideas smushed together. Claude made the rest of the choices for me, and they are blandly generic and skew away from my instincts, if you can believe that.
As for the writing, individual sentences are okay, but the writing is devoid of tension, pacing, and depth. The gulf grows the further you get into the piece. There’s no detail to the characters. Why is this story happening to this girl? Facts also change in the story in a frustrating dream logic. It looks like a car but when you lift the hood there’s no engine.
“Through the Tunnel” it is not. I’d put this at “drunk grandparent makes something up for a bedtime story” level of quality.
Still, let’s pretend I want to continue working on “The Sea Glass Collector.” Rather than show every step of a process that is usually tedious, I’ll make a list of things I’d attempt to fix with prompts and a list of things I’d rewrite to turn this into a functioning short.
Here’s what I would try to do with prompts:
- Expand the story to >2k words.
- Change the title and central magical element to something more my speed, like what appears to be a young person locked up in a strange sleeping pod in the attic. (some concerns as to what AI would consider appropriate here, I’m betting it’d require a lot of guidance)
- Add a try-fail cycle tied to the magical element. Claude will probably need to be told exactly what it is in some detail.
- Change the broad strokes of the ending based on how the cycle above and the strange element contrast with my main character’s central problem.
- Add more sensory details, fix contradictions, remove bad writing (Glass smells metallic? The bizarre way the attic and bedroom mix up? The parents just vanish?)
Let’s assume Claude nailed each of those with an average of five prompts each. Now I can start working.
Here’s my list of items to rewrite:
- Fix the opening, it’s devoid of depth (a character in a place with a problem). Feels like what’s there is actually paragraph three or four. Probably a total rewrite and expansion to establish how MC feels about arriving again at this place, and build out the space through their senses as their parents drop them off.
- Fix Grandma. Grandma is dull and lifeless. She needs specifics and some kind of twist reveal at the end, so we’ll go through the story touching up her presence with an eye towards what will be revealed about her in the end (as a character, not her connection to the supernatural thing), and what that means for our lead.
- As the characters settle in and expand, the magical element and the try/fail cycle tied to it will also require tweaking, continuity fixes, and a substantial punch-up to match.
- A fuller ending is needed, something with stakes. You might have heard of stakes, they are the reason stories work. LLMs do not know this, we have to help them. More writing, probably new.
- Last, as I work I’ll conform voice and add pacing, break up and process the AI junk tone into something decent.
My guess is the above would take me around four hours and would touch every sentence in the story. That’s very much a back-of-napkin guess; these models aren’t exactly renowned for their iterative qualities. You’ll see examples of that in a moment.
There’s an elephant in this attic: I can write a 2,500-word short story in four hours, especially one that treads such familiar territory. And it’ll be a more enjoyable experience where I’m using my creative voice, so I’ll have fun and be surprised by things I come up with.
But hey, what if you’ve been drinking and your granddaughter needs a bedtime story?
By the way, I did scold Claude for ignoring me and it was funny:
2. I try ChatGPT for screenwriting.
At some point in 2024, I was at a writing mixer where a random was adamant that ChatGPT was much improved since I’d last experimented with it. Always happy to be wrong, I paid for the subscription and brought work on a pilot in to experiment with.
My first test was to see if it could assist in drafting a pivotal police interrogation scene. The pilot’s an ensemble sci-fi puzzle box adventure, so the plan was to play like this was a standard, trope-y interrogation and then have things go off the rails, with both our hero and the audience realizing something super strange is happening.
And so, we prompt.

The LLM’s first attempt at an outline was boring, basically the interrogation scene any American has seen dozens or hundreds of times on TV. And fair enough! Again, my prompt has a tiny fraction of the choices I would have made doing the work myself.
We’re going to talk outlines in the next section so let’s skip all that. How did the scene draft turn out?
Hah. Okay, let’s stop with that half page. Not doing much for me, and again, even this short part of the scene is lousy with things I’d need to fix. I don’t think I can use any of it.
Writing this out, I realize my mistake: this is a new kind of scene that just looks like the stereotype. The tension between how they’re “supposed” to act and their actions is a key part of what I want to do here. That’s something that will inform every word on the page subtly, and that…is a lot to ask of this generation of AI.
If you’re a writer doing more traditional writing, is the output more useful? I’m betting no, but hey, let me know.
3. I try to use ChatGPT to write like three lines from a play, surely it can do this.
After the above experience, I thought, “Hey, maybe I need to give the AI smaller things to work on.” I then prompted it for help with dialogue from a creepy, banned play-within-a-show. Think The King in Yellow.
We get glimpses of this play as police rush to shut down illegal performances and it’s super weird and ominous. Great fun. Surely AI can give me some useful ideas for crazy things the monstrous skeletal wraith character might say!
The response goes on and on, with multiple scenes. They all use a rhyming scheme like this:
WIZARD:
So here we stand, at the end of all roads,
Your machines against my ancient codes.
For each spell you silence, ten more shall scream,
Through the valleys, the winds, and in each dreamer's dream.
(He struggles against the overwhelming noise, his voice a beacon in the dark.)
You may bury my bones 'neath your cold, iron tracks,
But can you silence the truth? Ah, it always fights back.
For every free spirit, you chain in your quest,
Will rise in rebellion, putting your steel to the test!
Again, simple mistake on my part here as a prompt auteur: I did not specify the character is villainous. Also, I do not want rhymes. Maybe I can prompt it with an example of dialogue and it’ll match it. LLMs are very good at parroting what you give them, right?
And the result?
WRAITH:
Behold, my reign – endless as the void!
From the cosmic dust was my empire forged,
A dominion sired by shadow, not by right.
Under my decree, stars flee the night!
(He stands, his form casting an immense shadow over the city model below.)
Tremble, O fleeting lives of clay and breath,
For I am the harbinger of everlasting death.
Your cries are but whispers beneath my storms,
As I reshape worlds and defy their forms.
Good grief, those stupid rhymes. It doesn’t even understand what I’m asking. I should not trust randoms at mixers and neither should you.
AN ASIDE: The Best Writer Chatbot Confirmed
You might be wondering which chatbot is best, or at least, the most useful, for our purposes. In my experience, all the LLMs blend together as writing tools. It’s not a surprise; the interface is similar, the underlying technology is similar, and the training data they got from somewhere is frequently the same.
You might then ask yourself, “Is it bad when a product is identical-feeling to its competitors?” Yes. It is bad. That’s called “commoditization,” and commoditization is very bad when none of the products are profitable. More on that in a section in Part Two titled “The Product Problem.”
At the time of publishing, you can go get Deepseek’s open source models and install them locally on a computer. I’ve done it, and it’s a tad involved (you need to type command lines into Terminal), but there are plenty of easy-to-follow tutorials out there. This is a new wrinkle in the bigger zeitgeist, and who knows what developments will hit between now and when I publish Part Two of this write-up (which contains plenty of news and business gossip).
Not only does Deepseek perform similarly to the fanciest paid models, running it locally gives you complete control over your chats. Is it a good writer? Not really. But it’s a fun novelty, occasionally useful, private, and completely free.
AN ASIDE DEUX: Ode to APIs in Wrappers
There are quite a few specialized writing products out there: Jasper for copywriting, Sudowrite for prose, Story Prism for screenwriting, and so on. There are even hybrid tools like Nolan.AI or ScreenplayIQ that claim to help with the development and pre-production process for film and TV by combining LLMs and image generators.
I know I’ve been snarky, but I at least like the intent behind these tools. They’re bringing details for how to write in one medium or another to the table, clearly thinking about how writers work in different industries.
Still, there are four issues I’ve seen with these specialized products: structure, the buttons, cost, and whether your new favorite tool will be around long term.
Like with books and classes on writing, you have to be down with whatever structure the teacher/book/guru is applying to the process. I love a good model when I’m new to something, but at this point, I don’t need The Coffee Break Screenwriter to write a screenplay. One app, that shall remain nameless, makes you type in your story’s theme first, which is a miserable and academic way to write that I do not recommend.
And then there are the buttons: These apps use APIs – Application Programming Interfaces – for their AI functions. In other words, when you click the magic wand button, it’s just talking to ChatGPT (or a very similar competitor) to get you a response. Were you underwhelmed by the results you got from the chatbots? Well, I’ve got bad news.
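To make that concrete, here’s a hedged sketch of roughly what a magic wand button does behind the scenes: wrap your text in a canned prompt and build a request for someone else’s LLM. The payload shape mirrors OpenAI-style chat endpoints, but the model name and system prompt here are invented for illustration, and nothing is actually sent anywhere:

```python
def magic_wand_payload(user_text):
    """Build the API request a hypothetical wrapper app might send when you
    click its magic wand. The secret sauce is... a canned system prompt."""
    return {
        "model": "gpt-4o",  # the wrapper's choice of model, not yours
        "messages": [
            {"role": "system",
             "content": "You are an award-winning screenwriting assistant."},
            {"role": "user", "content": user_text},
        ],
    }

payload = magic_wand_payload("Punch up this logline for me.")
# Everything you typed ends up here, wrapped in the app's house prompt:
print(payload["messages"][1]["content"])
```

In other words, much of what the wrapper’s subscription fee buys is that system prompt and the buttons around it, stapled to the same chatbot you could talk to directly.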
And since these products sit on top of APIs, you have to think about cost. Midjourney and ChatGPT pro together will run you about $40 a month, but Nolan.AI wants $100 a month. Are the wrapper, integration, and additional breakdown tools worth the extra cost? Remember, I just said you can download and run a solid chatbot for free on a reasonably modern computer.
And last, will your paid writing tool be available long term? I don’t think LLMs are completely going away after this generation’s AI bubble bursts, but a specific app, that uses a specific API, in a niche discipline, is inherently risky.
Writing tools already struggle with upkeep. On a recent NoFilmSchool podcast, John August mentioned my beloved Highland 2 has a mere 7,000 regular users, and we all paid $40 once for it over 5 years ago. That’s not a ton of revenue to keep your app updated and evolving. Scrivener’s sole reliance on Dropbox for sync is dating the program. I don’t use Dropbox for anything else at this point. This is a minor gripe, but it’s getting me to experiment with alternatives like Ulysses, where this blog was written.
So will the AI buttons stick around when the API underneath them goes away, as Gartner predicted would happen to 30% of AI products by the end of the year? AI products are dying right now, and I guarantee you are a lot less valuable than a customer who buys an $800 emotional development toy for their kid.
USING AI FOR DEVELOPMENT
dramaturge /drăm′ə-tûrj″, -tûrg″, drä′mə-/ noun
A position within a theatre that deals mainly with research and development.
What if we take a step back and use LLMs to help organize story plans and cook up plot and worldbuilding and character names? Development is a massive part of screenwriting and game writing, after all.
Producer Eric Barmack writes a series for The Ankler exploring how AI is being used in Hollywood. An early post had him developing an adult animated comedy about a professional soccer team. The article was one of the first cases I can remember of someone finally showing prompts and responses, even if he didn’t once mention the long-running animated pro soccer comedy The Champions in his write-up on developing an animated pro soccer comedy.
I’m teasing, this is a useful article, and he does what I declined to do above and shows every tedious prompt step. Eric demonstrates the most common use case I’ve seen when you get down to actual specifics: the sounding board, where you take an LLM’s constant stream of bad ideas, come up with better ones yourself, and eventually filter that into a proper pitch or outline. He also uses image generation to make character mockups.
The “sounding board” is one of the few actual professional use cases I keep hearing about, just not by writers. Like Hollywood producers, consultants in games use AI to quickly iterate on a pool of ideas. Once a client is happy with some high-level ideas, they bring on professional writers to turn a brief into a proper pitch. Like using AI image generation for pitch decks, I’m not sure anyone was paying a pro for this work before. So…no harm no foul?
There’s one major problem I have with the sounding board, and that’s how little the LLM contributes to the process in my experience. Sharp-eyed users will notice that the responses parrot back what you fed the model, wrapped in a basic understanding of storytelling structure. I already have a basic understanding of storytelling structure! Why am I using this thing?
I’ll keep saying this, but this all still tracks with our newfound understanding of what the technology behind LLMs is and isn’t.
USING AI FOR RESEARCH
There is one place where at least a few writers use AI professionally. This recent Reddit post from a copywriter breaks down how their agency basically forced them into using AI because of the workload. Like my findings, the actual copy was frequently unusable, but u/Hoomanbeanzzz found LLMs valuable for quick research.
A primary value proposition from AI acolytes is that AI can generate a volume of “output” at lightning speed. Is the output useful? If it’s high-quality reader-facing words, probably not, but if it’s summarizing a niche in the skin care industry because you need to get a social campaign written up in an hour? Sure, just promise me you’ll read the warning below…
By the way, more on this in Part Two, but I’m legitimately confused by screenwriters who pitch that they can turn around AI-assisted pages in an hour. Who wants that? No, really. Modern series and movies cost a fortune and take years to put out.
Maybe your Asylums and soap operas do, with their high output and lower quality target? I haven’t heard anything concrete one way or another. The highest volume outlets I’m aware of are the podcasting companies adapting international soaps into English, and none of the people I know toiling in the podcast mines have reported LLMs making an appearance. Get in touch if you know something.
USING AI FOR BUSINESS ADVICE
I know some of you are querying ChatGPT for hot tips and contract reviews. Go nuts. I’ve done it. Ask it to look over a statement of work and summarize it. That’s one of the things this technology seems to be good at, with one serious caveat:
Do not use AI in a situation where a mistake will get you fired, bankrupt you, or send you to jail.
Once more for the seniors in the back:
Hire a lawyer or pay the price.
I have a bunch of fun news and embarrassments in a big long list in Part Two of this lengthy review, but I’ll put the Lionsgate movie marketing snafu here because, oh my god: do not put a quote AI gave you into a customer-facing ad without double-checking whether that quote is even real.
In addition to confident mistakes, I’ve found LLMs to be weirdly compliant in a way I really wouldn’t want an advisor to be. I once laid out my possible projects in a chat and asked an LLM what I should work on next. When it recommended one, I pushed back with a question about the state of the market, and it promptly agreed that I should do something else instead. Wow. Thank you SO much.
In hindsight, I’m a little embarrassed I expected a useful response. Remember what the podcast quote said: it doesn’t understand what a pineapple is.
USING AI TO PISS OFF OTHER CREATIVE PROFESSIONALS
I’ll break my own rule about sticking to LLMs here to mention a couple of overlapping use cases I am guilty of (alongside plenty of other pro writers, for shame!).
1. Making cheeky little placeholder pictures
This one is for the game developers.
Who among us hasn’t spat out an AI image for a character bio or previz document before shipping it off to the art team? I’ve done it.
I’ve had the equivalent done to me too, where a brief comes down with a boilerplate AI junk bio to go alongside some character art. I think there was a Justin Timberlake song about this dynamic.
Our garbage pass here is supposed to be looked at once and discarded, but it still feels a little icky. Should we go back to putting screenshots from The Matrix into our pitch documents?
Someone get a producer to resolve this, please.
2. Making cheeky little pictures for your pitch decks/book covers/LinkedIn screeds/crowdfunding
Likewise, what if your PowerPoint was full of shiny, seven-fingered AI space marines?
Or one could whip out an ebook cover in about an hour, using AI for at least a chunk of the process, and then put that sucker up for sale on KDP. If one were so inclined.
I used AI artwork for part of a book Kickstarter campaign in late 2022. It was early days for AI, and by the end of the campaign I’d resolved to make sure all the art in the book was human-made. Long story; the campaign is still on Kickstarter if you want to look at it. I’m glad I tossed the AI stuff: the “art” I’d initially liked now feels cheap, even if I still sort of like some of it.
The professional author communities (20booksto50k, Indie Cover Project, etc.) see a lot of authors posting terrible-looking AI covers, and the community shames them into hiring a professional. The result is always better, and as a bonus you avoid getting scammed by companies currently under FTC investigation.
Is this community pushback warranted? It’s not yet clear that your average reader/viewer/player is clocking that they’re being fed AI slop, consciously or not, but I think it’s smart to be cautious. Certainly, the people who can give you money in other venues are catching on: editors know, and your average HR manager knows.

IN CONCLUSION, FOR NOW
We made it. That was a lot and I’m tired.
This write-up continues in an even longer (currently) second part to be released on February 13, 2025. It’s longer than this part, on account of the tea.
In it, I’ll get y’all caught up on how tech hype cycles work, how businesses (are supposed to) work, and then we’ll have a romp through a ton of amusing and embarrassing stuff that’s happened around this AI bubble.
Want to complain or tell me I’m clever or good? Hit me up on Bluesky.
Want a notification for when the next part hits? Join the newsletter below.
xoxo Westin