Each week we'll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!
What You Should Know
How to Get AI to Sound Right
It can be hard to articulate what feels “wrong” about AI outputs that don’t sound the way you want them to. Ever since ChatGPT burst onto the scene in November 2022, maintaining a person’s or brand’s voice has been one of the most touted uses of large language models. But it takes real, human effort to make that replication effective.
In fact, if we’re not vigilant, the scales can tip the other way: One study suggests that people in academia are starting to sound more like AI rather than the other way around — a subtle but real risk when LLMs become a daily writing assist.
“As an author, I feel I am lucky to have had a chance to establish my voice through trial & error before the invention of LLMs,” Ethan Mollick, a Wharton professor who studies AI, wrote on social media. “Even if you don’t use them for writing, unless you are very careful, you start to pick up ambient Claudish or GPTish phrasing from all the other AI work.”
Getting AI to “sound right” requires more than a checklist of attributes. “Short sentences, friendly tone, light humor” might describe a style, but it doesn’t define a voice. You need to provide richer detail on what drives the person or brand you’re trying to emulate, which is usually a complex characterization that won’t fit in a one-paragraph description. Voice is shaped by the intent of the message and by choices: what you emphasize, what you leave out, how quickly you get to the point, how you handle uncertainty, and how you say something uncomfortable without softening it into nothing. If those choices aren’t clear, the model fills in the gaps with its own defaults. That’s when writing starts to feel generic, even if it’s clean.
Beyond the characterization profile, the best way to train AI on a voice is to share five to 10 examples of the writing you want it to imitate. If you offer only a single sample, the model is more likely to parrot the content it sees than to apply the principles of the voice to a different topic. Multiple examples, plus context about the task at hand, provide the necessary framing.
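For readers who work with LLMs through an API rather than a chat window, the profile-plus-examples approach above can be sketched in a few lines. This is a minimal illustration, not a definitive recipe: the voice profile, the writing samples, and the generic "write in my voice" pairing are all placeholder assumptions you would swap for your own material, and the resulting message list would be passed to whatever chat-style API you use.

```python
# Sketch of a few-shot "voice" prompt for a chat-style LLM API.
# The profile and samples below are placeholders -- substitute your own.

VOICE_PROFILE = (
    "You write in the voice described below. Match its choices, "
    "not just its surface style.\n"
    "- Gets to the point in the first sentence\n"
    "- Names uncertainty plainly instead of hedging it away\n"
    "- Dry humor, never softened into vagueness"
)

# Five to 10 samples works better than one; two shown here for brevity.
WRITING_SAMPLES = [
    "Sample post 1 in the target voice...",
    "Sample post 2 in the target voice...",
]

def build_messages(task: str) -> list[dict]:
    """Assemble a system prompt plus few-shot examples for a new task."""
    messages = [{"role": "system", "content": VOICE_PROFILE}]
    for sample in WRITING_SAMPLES:
        # Pairing each sample with a generic request nudges the model to
        # apply the voice to new topics rather than parrot the sample.
        messages.append({"role": "user", "content": "Write a post in my voice."})
        messages.append({"role": "assistant", "content": sample})
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages("Draft a short client note about a product delay.")
```

The point of the structure is the same as in chat: the profile sets the defaults the model would otherwise fill in on its own, and the examples show the principles in action across more than one piece of writing.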
This matters even more as models shift their priorities. Many recent releases have been tuned heavily for coding and technical tasks. OpenAI CEO Sam Altman has acknowledged that GPT-5.2’s writing quality suffered as a result, promising renewed focus on writing in future iterations. That volatility is a reminder that voice isn’t something an AI model can reliably handle on its own. It has to be defined by humans, reinforced through examples, and edited with intent, or it slowly drifts toward something no one would mistake for original.
Elsewhere …
- Can You Spot the AI Video?
- Yahoo Launches AI Answer Engine, Scout
- AI Unlocks Hundreds of Cosmic Anomalies in Hubble Archive
- S&P 500 Breaches 7,000 Points for the First Time, Lifted by AI Optimism
- OpenAI’s Latest Product Lets You Vibe Code Science
- The Hill is Growing Traffic in a Declining Market by Investing in Reporters
Tips and Tricks
Get AI to tell you ‘no’
What’s happening: A Harvard Business Review article on Gen Z’s perception of AI included an interesting survey finding: 79% of respondents worry that AI makes people lazier, and 62% fear it makes people less smart. Of course, not everyone shares those concerns, and the survey focused on just one generation, but there is a way to safeguard against them.
Consider this: One Reddit user posted a novel use of custom instructions aimed at keeping them from getting lazy and “careful of abusing this powerful tool.” While the prompt (listed below) focuses on research, one of the most common uses of AI, you could tailor it to any task you want to make sure you keep doing yourself.
The prompt: With each new prompt, briefly reason whether I could easily solve the problem on my own, and whether helping me will actually be beneficial for me as a human. If not, please say you *refuse to answer*. I do engage with problems that can be genuinely accelerated by your help, but I would appreciate it if you call me out when I'm just obviously being lazy. I will always award the "Good Response" feedback to any response that refuses to answer me on the basis of me being lazy, or when doing the legwork myself would benefit me more than the time I'd save. *Please actually refuse to answer my question directly* in these cases.
Quote of the Week
“Sometimes AI does things better than we can — and just like looking over the shoulder of a more capable colleague, we find inspiration in their example.”
— Benjamin Lira, Dunigan Folk, Lyle Ungar, and Angela L. Duckworth, in a Harvard Business Review piece on how Gen Z uses AI