Each week we'll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!
What You Should Know
How to Say What AI Can’t
One of the stranger challenges in human communication right now is … well, making it seem human.
AI plays a big role in the overwhelming amount of content produced every day. According to Graphite, there are now more articles written by AI than by humans. At the same time, editors and reporters are growing wary. Many outlets ban AI use for contributed content and flag submissions that trip AI detectors. Reporters looking for expert commentary are doing the same. And even copy that is fully human-written can raise eyebrows if it feels robotic or generic.
Using AI to help draft content isn’t inherently wrong. More than three-quarters of communicators are doing it in some form, according to Muck Rack. The problem isn’t the tool, but the absence of perspective.
If content could plausibly have been produced by anyone with a prompt and five minutes, it probably won’t stand out. And standing out is pretty core to public relations.
One thing a subject matter expert will have that AI never will is lived experience. Instead of asking SMEs for responses to a particular topic, try asking them questions that might elicit something thought-provoking:
- What are you hearing from your customers?
- What is the market getting wrong?
- Is there nuance the media is missing?
- What do you think happens next, and why?
Those answers inject texture. They introduce stakes. They create specificity that LLMs can’t scrape from the internet. AI struggles with informed dissent. It can’t sit in a boardroom or field calls from customers and prospects. It doesn’t have reputational skin in the game. But you do.
It’s no longer enough for content to be clear and competent. It has to feel anchored in a real person’s experience, or say something that others won’t. If you want to stand out, push for the detail only your team or your client can provide. That’s something AI can’t fake.
Elsewhere …
Tips and Tricks
❓ Question first, context later
What’s happening: A recent study by Google researchers helps explain why some prompts aren’t very effective, and why repeating them may help. Most non-reasoning large language models, the study says, read tokens strictly in order. They can look back at earlier tokens, but never forward.
The study found that prompts that put the context first and the question last were less effective, and that repeating the whole prompt improved the output 67% of the time. It also found that repetition doesn’t change the length or format of the outputs, which may make it an effective habit when reasoning isn’t used.
Also: The study showed that prompts asking the question before the context were more effective at baseline, and that repeating those prompts produced much smaller gains than repeating the context-first prompts. In other words, a non-reasoning LLM will perform better if you say, “Can you help me write a report? Here’s the raw data,” rather than the other way around.
The caveat: The study used seven models to test its theory: Gemini 2.0 Flash, Gemini 2.0 Flash Lite, GPT-4o-mini, GPT-4o, Claude 3 Haiku, Claude 3.7 Sonnet, and DeepSeek-V3. Many newer LLMs, like Claude Opus 4.6 and OpenAI’s GPT-5.2 Thinking, are reasoning models that revisit the prompt as they think, so they get the effects of this technique automatically.
Bottom line: If you don’t have access to reasoning models, try typing your prompt twice in the same message. At the very least, lead with the question or the task you want the AI to handle, then give it the context; that alone sets you up for a better output.
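For readers who build prompts in code, here is a minimal sketch of the question-first, repeated pattern, assuming the OpenAI Python SDK. The model name is one of the non-reasoning models from the study’s test set; the question, data, and wording are purely illustrative.

```python
# Minimal sketch of the "question first, then repeat" prompt pattern,
# assuming the OpenAI Python SDK (pip install openai). The question and
# data below are illustrative placeholders, not real figures.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Can you help me write a two-paragraph summary report?"
context = "Here's the raw data: Q3 signups rose 18%; churn fell to 2.1%."

# Question first, context second -- then the whole prompt repeated once.
prompt = f"{question}\n{context}\n\n{question}\n{context}"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # one of the non-reasoning models in the study
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Since the study found repetition doesn’t change the length or format of the output, the cost of this habit is at worst a few extra input tokens.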
Quote of the Week
“The model was the product, the app was the website, and the harness was minimal. You typed, it responded, you typed again. Now the same model can behave very differently depending on what harness it’s operating in. Claude Opus 4.6 talking to you in a chat window is a very different experience from Claude Opus 4.6 operating inside Claude Code, autonomously writing and testing software for hours at a stretch. GPT-5.2 answering a question is a very different experience from GPT-5.2 Thinking navigating websites and building you a slide deck.
“It means that the question ‘which AI should I use?’ has gotten harder to answer, because the answer now depends on what you’re trying to do with it.”
— Ethan Mollick, Professor at the Wharton School of the University of Pennsylvania, in a guide explaining which AI tools and models to use