Further forays into AI
Mar. 13th, 2026 11:33 am

I've been doing more stuff with AI, to learn how to use it (because it really looks like if I ever get another job, it'll be AI-related in some way; also, I can list this learning on my résumé) and to see what I can do with it. But first, a funny.
My husband picked up another litRPG book for free on Amazon, and in his opinion, he paid too much for it. It's a Royal Road book that the writer decided to self-publish. The story and world were good enough, but the writing was terrible. Here's a paragraph. For context, it's in first person, and the narrator is a human thrown into this new world as a lizard-like creature. Currently, they're waiting for something.
But beneath the switched-off scales, the fire pulsed. Constant. Patient. Waiting for dawn the way something waits that knows dawn is the beginning of what comes after dawn. What comes after is the fight. And what comes after the fight -- if I survive -- is everything else.
No, I have double-checked my typing and that is exactly what the paragraph says.
I will also note that I used the word "they" to refer to the narrator because the character switches gender during the story. They're not actually transgender, however. The writer simply referred to the character as male or female at random times throughout the novel.
This novel has sparked a lot of laughter in our household, and my husband now has a tendency to say things like, "I'm waiting for dinner the way I wait when I know that dinner is the beginning of what comes after dinner." But a little sleuthing revealed that the author says he used AI "a little" to write it. (Which is why I'm talking about it here.) It has all the hallmarks of AI-written prose: repeated sections, inconsistencies like the gender-switching and saying that his agility increased by 30% in one paragraph and then saying that it "more than doubled" in the next paragraph, generic descriptions, etc. One of the Amazon reviewers said, "Look, we can *tell* this was written by AI. You didn't just use it 'a little'. You've got a great idea here, but this book is terrible. If you'd just write it yourself, it'd be really good."
Though I have to say, there are tons of typos in the book, which isn't common with AI, and the grammar is often strange (such as in that "dawn" sentence), which makes me think that a good chunk of the book was actually written by the author himself, and that he's simply not a good writer, though much of that is obscured by the poor AI work.
I'm primarily using AI to explore my writing, and I've come up with a few rules for my AI use.
Rule 1: AI is only a tool. That means it's only there to help me accomplish what I want; it's not there to do it for me. Also, like any other tool, I have to learn how to use it and what its limitations are, or it will be harmful.
Rule 2: This comes from a friend and I thought it was particularly insightful. Don't have the AI do the things I enjoy doing. I'm not going to tell the AI to write my story, because I enjoy doing that, both the designing and the actual writing. I'm not going to tell the AI to research a topic for me, either, because I enjoy doing that too. I might ask it to critique my story, because it's another pair of eyes (sorta) and can supply insights that, as the author, I'm blind to, but I won't let it rewrite my story using its suggestions, because that's still part of my enjoyment. I'll do it myself.
Rule 3: ALWAYS doubt what the AI tells me and double-check, double-check, double-check. AI does make mistakes. It does misinterpret what it sees. I've seen it say that this bit is wrong when all conventional wisdom says that this is how to do it. I've seen it make changes that completely reverse the meaning of the sentence.
But most importantly, AI, or at least ChatGPT, tries its hardest to not hurt your feelings and will tell you something is good when it's not. So, it's doubly important for me to throw away any compliments it gives me, because it is %$ing lying, and tell it, "No, try again and focus on what was done poorly".
So, with that in mind...
Last time, I mentioned that I fed the intro to the bowling story into ChatGPT to see what it would say about it, and it was rather disappointing: mostly positive, with a couple of "could improve" bits here and there. I started playing with that and was pleased with the amount of insight it gave about why the intro worked. Some examples:
* It said that the first sentence, where Jon tells Donna to keep her eyes shut (especially the word "reminded"), set up the "surprise destination" idea perfectly without describing that they'd been driving and Jon had been telling her to keep her eyes shut. It also noted that this revealed Jon's playful side.
* It pointed out that the way Jon described the work on the telescope was consistent with how hobbyists talk and showed that I have some familiarity with the subject. It did say that his speech there was too long, more professor-like and not conversational, and that it interrupted the flow of the scene. It said it might lose readers, and suggested that I shorten it.
* It liked the mention of Jon's not wanting to attend Sylvia's bowling night, because that's how couples talk, flitting back and forth between topics, so it showed how comfortable they are with each other, and it was short enough not to distract from the main narrative.
I liked having these things pointed out because I don't think about them when I'm writing -- I'm just writing what I think feels right. But it's good to know why they feel right.
Then I thought, "How would ChatGPT write the intro itself?" So I wrote a summary of the intro that included why they were going bowling (to repay Dave for his work on Wilf's telescope) and why Donna was surprised Jon agreed to it (because he'd refused to go on Sylvia's bowling night out of embarrassment) and told it to write it. I also noted that it was meant to be an intro to a longer piece and not a story or scene in itself.
The results were quite eye-opening.
First, it was well-written, grammar-wise as well as entertainment-wise. I hadn't told it that it was a DW AU fanfic, so the Jon and Donna it wrote were very American. The banter was hilarious. One bit:
“You don’t bowl.”
“I could bowl.”
“You have actively not bowled,” she corrected. “You turned down my mother’s ‘Cosmic Seniors League Invitational.’ Those women had matching shirts, Jon. They had bedazzled shirts. You said—and I quote—‘I will not be emotionally hazed by retirees.’”
I loved that the AI made up Jon's actual words when he refused to go. I would never have thought of doing that. But I also noticed that it didn't make much sense. No one would say, "You have actively not bowled." They'd say, "You could, but you don't," or perhaps, "Maybe, but we've been here before and you refused." Then after that, the whole thing about matching bedazzled shirts doesn't fit either -- a real person would have gone directly from "Cosmic Seniors League Invitational" to "You said, and I quote..."
The thing that the AI produced was also over three times as long as what I wrote. Interestingly, though it criticized my version of Jon's explanation as being too long and detailed and for interrupting the larger scene, its version was much longer (though interspersed with sarcastic comments from Donna) and introduced a bunch of astronomy terms. The part about Sylvia's bowling night was also long.
I think that it basically had a different intention for its version, as it felt like the banter was the centerpiece, while mine was more focused on communicating to the reader why they were going bowling.
Last time, I also mentioned that I fed some poor fanfics into ChatGPT to see what it would say, and in general, it only talked about the good things and "some things you might improve", and didn't mention any of the glaring problems (or sometimes, actually pointed at the problems and said they were good). So, I tried again, with multiple fics and one of my own (though I did not identify who wrote any of it -- the AI assumed I wrote all of them), but this time I phrased the request like this:
Critique this story, focusing on what was done poorly and how it might be improved.
This phrasing got much better results. The AI listed "Problems: (stuff)" then "Improvements: (suggestions)", which is exactly what I wanted to see -- can the AI actually identify problems in story structure, flow, and style? Yes, it can. It just won't tell you about them unless you force it to.
Just to test that, I took one of the fics with a bunch of "Problems" and fed it to ChatGPT in a new chat and asked it just to critique the story, no qualifiers on focusing on bad stuff. Sure enough, it said, "This is a solid character-driven scene, and it reads very much like an episode cold-open from (it correctly identified the show, but I've removed it). The voices of (the listed characters) are recognizable, which is the most important benchmark for fanfiction in an established property." Then it went on to list the strengths of the piece and some minor improvements that could be made.
Though in the focused critique, it said,
* "(it's) hard to visualize the environment or the characters’ physical reactions"
* "humor comes almost entirely from (slapstick) and it loses impact because the stakes never rise"
* "The spatial layout is confusing... it’s hard to picture the exact room or danger"
* "the pacing is uneven"
the unfocused critique said,
* "Strengths: Excellent character voices, humor consistent with the show, clear premise, good dialogue rhythm"
* "Weaknesses: Slightly unclear spatial description, occasional narrator intrusion, a few overlong sentences"
* "Overall, this reads like competent, experienced fanfiction rather than beginner work. The characters are well understood, which is usually the hardest part."
It's interesting that the unfocused critique said that the story "read very much like an episode cold-open" when the things that a cold open needs to do -- set the scene clearly and raise the stakes so that the viewer wants to return after the credits to see what happens -- are not done at all here. Cold opens are not character moments, which the AI felt was the good part of the story.
I will note, by the way, that I didn't think it read like a cold open. It was more of a middle-of-the-episode, exploring a new place kind of scene. Which, of course, is why I'm very hesitant to listen to what AI says, and thus, my third rule above.
Interestingly, I got a totally different response from ChatGPT when I asked for a focused critique of my story, the one with the Doctor talking to Donna about circles. Rather than the "Problems" and "Improvements" format that I got when I fed other fics (which I chose because I thought they were weak) into it, ChatGPT gave me a response formatted with "What Works" and "What Could Be Improved" -- in other words, though ChatGPT had no idea that the pieces I gave it weren't all written by me, the only one that it didn't identify any problems with was the one I actually wrote. I suppose I should be flattered?
For the "things that could be improved", they all pretty much boiled down to one thing: ChatGPT felt that the Doctor's little talk about the symbolism of circles was too long, too professorial, and too expository. It broke its response into five sections (like character, dialogue, pacing, etc.) and in each one, it brought that up and explained why it was bad and how it could be improved.
However, I had actually written it that way deliberately, because the Doctor is often professorial, expository, and, well, gobby. So, I pointed that out to ChatGPT, and it responded, "You’re absolutely right—long philosophical monologues are very much in keeping with the Doctor’s character, especially in moments of reflection or teaching." Yes, it even italicized "are". But the thing I hate hate hate is that "You're absolutely right!" If you correct ChatGPT, it always says that (at least in my minimal experience), and I hate that pandering attitude. Stop treating me like a child, ChatGPT. Just say, "Yes, that's true."
It then went on to suggest a really good improvement: rather than having one long paragraph, let Donna respond, just to break it up. (Which, of course, is what I used to say would have really helped Thirteen's lectures, having anyone actually respond to them.) Nice one. I'll keep that in mind. :)
Anyway, long story short, I'm still learning how to get AI to do what I actually want, rather than what it thinks I want, and I absolutely do not let anything it says go without double-checking it.
I've also been starting to use AI for work tasks, like building tools and testing, and I wanted to talk about that, but I've already spent way too long writing the above, so next time!
no subject
Date: 2026-03-14 01:39 am (UTC)
I do appreciate your rule that AI is a tool and that you're not using it for the parts you enjoy doing. It's actually a nice change to see something other than people complaining about AI slop. (Though I've seen some pretty bad AI-generated art.)
… That banter is pretty funny! 😆