Which AI to use for professional writing?
September 2, 2024 10:08 PM Subscribe
I know, I know. Here is the deal--I was brought in as an additional author on a textbook. The previous edition has many virtues, but the writing style is not among them. It is turgid, clumsy, and difficult to read. Even Grammarly would choke on it. Can AI fix it, or at least get me started?
I was reading Teaching with AI (it is pretty good) and it occurred to me that maybe I can run the chapters through AI for a first pass at disenshittifying the prose. I want to train the AI on some of my previous writing, including my earlier book and my blog, and set it loose. Has anyone done something like this? Which AI did you use? How much cleanup afterward? I am open to step-by-step guides, software recs, or cautionary tales.
confluency is 100% right here. Listen to them.
posted by Paul Slade at 12:35 AM on September 3 [3 favorites]
Nthing that this is a bad use case for AI.
Back when I was a technical editor, if a document was so poorly written that it would be a massive chore to edit it, I would simply rewrite the whole thing. As in, I'd put the original on one screen and open a new Word doc on another and start typing.
I'd preserve all the meaning that the author had been trying to convey, but in correct English and with significantly more clarity.
As long as you type decently quickly, it's much easier than trying to edit all the errors and confusing phrasing out of the original document.
posted by Jacqueline at 1:41 AM on September 3 [22 favorites]
Grammarly uses AI. You can try going to ChatGPT, uploading as much of your writing as possible, and asking it to do this, but the secret is that even that takes work. I was chastised at work for bringing this up because of the belief that AI was a magic money maker. If you truly are hitting a wall, it can be helpful to brainstorm against, but I'd time-box it. As someone who has done hard work in this area: it is going to look great at first, then get progressively worse.
posted by geoff. at 2:01 AM on September 3 [2 favorites]
In the spirit of answering the question, I’d suggest ClaudeAI. We have been using it and I’m pleased with how it has saved us time. It’s been pretty good at imitating different writing styles.
Notes
- You must still commit to fully proofreading with a critical eye. This part is the hardest because it makes it tantalising to get lazy. But you MUST commit to this.
- Start small; the training takes time. You have to spoon-feed it and correct it as you go. Even then, I've found it sometimes forgets what I trained it on.
- You also need to train it on the big themes, goals, tone of voice you are after. It won’t get it right just a paragraph at a time.
- train train train.
What I will caveat is that this might not be a timesaver for you for a one-off project. We are investing our time and training for a long-term productivity payoff. All the work you put in to teach the damn thing might be better spent doing the damn thing yourself.
posted by like_neon at 4:59 AM on September 3 [6 favorites]
At best, your plan would take longer than a traditional edit, because you have to do a full careful edit after, and also learn whatever AI you're using as a randomizer. And at the end, it will have a definite stink of AI tone to anyone who's read a bit of AI output.
posted by SaltySalticid at 5:19 AM on September 3 [2 favorites]
You need to have a serious talk with your publisher before you even touch an AI on this project. It may nullify their (and your) copyright of the new edition. Case law on this topic is still evolving.
posted by heatherlogan at 5:59 AM on September 3 [17 favorites]
If, despite the very good advice above, you still decide to go forward with AI, use restricted-scope AI tools such as Grammarly, LanguageTool, or Microsoft Editor to check the document only for spelling, grammar, or tone. All three will do this in a traditional flag-by-flag way, allowing you to work your way through a document and decide one item at a time whether the tool has validly flagged something or not. In this way you will improve the document but will not blindly allow it to be rewritten. What you most need to avoid is loading the entire thing into AI, having AI change it all in one go, and *then* reviewing it — you need to see every single thing that it thinks needs to be changed *before* it is changed, not after.
posted by Mo Nickels at 7:00 AM on September 3 [8 favorites]
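The flag-by-flag review Mo Nickels describes can be approximated even when a tool hands back a wholesale rewrite: diff the machine output against the original so every proposed change is visible before you accept it. A minimal stdlib sketch (the example sentences are invented for illustration):

```python
# Sketch: surface an AI rewrite change by change instead of accepting it wholesale.
# Assumes you already have the original and the machine-edited text as strings;
# nothing here is specific to any one tool.
import difflib

def review_diff(original: str, rewritten: str) -> list[str]:
    """Return a unified diff so every proposed change is visible for review."""
    return list(
        difflib.unified_diff(
            original.splitlines(),
            rewritten.splitlines(),
            fromfile="original",
            tofile="rewritten",
            lineterm="",
        )
    )

# Example: one flagged change, shown before you decide whether to keep it.
before = "The data was analyzed.\nResults is shown below."
after = "The data was analyzed.\nResults are shown below."
for line in review_diff(before, after):
    print(line)
```

Lines prefixed with `-`/`+` are the removals and additions you would accept or reject one at a time, which keeps the human in the loop the way the flag-by-flag tools do.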
I write reports for my organization, combining content that different sectors give me on their subject matter. Their writing is often terrible and since I am not a subject matter expert it's sometimes difficult to know how to re-write it in a clear way. I sometimes plug in the particularly terrible sections to ChatGPT and ask it to re-write them in a simpler way and then I use that to help me understand what the writing is getting at and as a starting-off point for editing. It is definitely useful for that, but I always have the sectors review the final document and approve the full reports with the edited content before I move them forward to ensure there are no factual errors.
posted by urbanlenny at 7:40 AM on September 3 [1 favorite]
Before investing time into it, I would just paste some especially gnarly chunks into Claude 3.5 Sonnet and just ask it to rewrite it to be clearer. If the result is at all passable then you might move on to trying to coax a particular style out of Claude.
I would agree with all the above that it almost certainly isn't going to be able to do this unsupervised. At best, these tools work at the level of an eager intern with very good grammar who has a dim understanding of the topic. At worst, it's an overwhelmed intern that resorts to making stuff up when it reaches the edge of its understanding. That doesn't mean they can't be useful; I've found Claude to be a surprisingly useful coding assistant (and even so it sometimes completely confabulates answers).
posted by BungaDunga at 8:52 AM on September 3
I'd use Claude for this. Specifically I'd pay for a Pro plan and use the "Opus" model (the largest model). Claude has a larger context window than other models, and Opus seems to handle text analysis better than anything else I've tried to date.
You should also read Anthropic's documentation for working with large contexts for some guidance on how best to feed your content into Claude. You probably want to read the whole prompt engineering section at some point if you decide you want to use Claude in this way.
posted by dorothy hawk at 9:24 AM on September 3 [1 favorite]
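If you do go the route dorothy hawk suggests, the core workflow — pair a sample of your own prose with a chapter and ask Claude to rewrite in that style — can be sketched in a few lines with Anthropic's Python SDK. Everything below (the model id, the prompt wording, the function names) is an illustrative assumption, not something from the thread; check Anthropic's current documentation before relying on any of it:

```python
# Sketch: feed a style sample plus one chapter to Claude for a first-pass rewrite.
# Model id and prompt wording are assumptions; the output still needs a full
# human edit, per every warning in this thread.

def build_prompt(style_sample: str, chapter: str) -> str:
    """Pair a sample of your own prose with the chapter to be rewritten."""
    return (
        "Here is a sample of my writing style:\n\n"
        + style_sample
        + "\n\nRewrite the following chapter for clarity in that style. "
        "Preserve every factual claim; change only the prose:\n\n"
        + chapter
    )

def rewrite_chapter(style_sample: str, chapter: str) -> str:
    """One request per chapter, sized well under the model's context window."""
    import anthropic  # pip install anthropic; needs ANTHROPIC_API_KEY set

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-opus-20240229",  # assumed model id; check current docs
        max_tokens=4096,
        messages=[{"role": "user", "content": build_prompt(style_sample, chapter)}],
    )
    return response.content[0].text
```

Working a chapter at a time (rather than the whole book) keeps each request inside the context window and makes the mandatory human review after each pass tractable.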
Something no one else has touched on: it's unclear what topic your textbook covers. You may have declined to share that for privacy reasons, but there are strong reasons to double down on some of the warnings above, depending on the potential for significant real world consequences if the AI-ified text is inaccurate: structural engineering, medicine, etc.
posted by cupcakeninja at 8:24 AM on September 4 [2 favorites]
Even if whatever model you use makes the text superficially sound good, you literally cannot trust the output to be coherent or correct. You (or the original author) would have to go over it word for word and compare it to the original to verify that it hadn't subtly (or unsubtly) changed the meaning. There are multiple cautionary tales about AI "summaries" failing to summarise text accurately, and obscuring meaning. And this is a textbook! Do you want to do this? Do you think the original author would want to do this? To be frank, if a collaborator suggested automatically generated copy edits to words that I, a human, had written, I would be not just frustrated but enraged.
Generative AI is not the magic shortcut that a lot of people want it to be. Using it is not going to be easier than doing the edits from scratch yourself, and may make the situation exponentially worse.
posted by confluency at 12:08 AM on September 3 [70 favorites]