Each week we'll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!
What You Should Know
OpenAI Says You Can Get More out of AI
If you were betting on which states are using AI at work the most, New York and California might feel like safe picks. Arizona or Colorado? Probably not, but recent Census Bureau data shows them at the top of the list, highlighting just how uneven and unexpected adoption looks across the country.
That state-by-state spread is useful context, but OpenAI says that regardless of where you work or how often you use AI, you’re probably leaving some toothpaste in the tube.
OpenAI calls it a “capability overhang” — the gap between what AI systems can do and the value users actually reap. Many users stop at surface-level tasks like drafting emails or content, summarizing notes, or brainstorming without a full set of instructions and context. In those cases, the results are likely underwhelming.
That idea echoed through this year’s World Economic Forum in Davos, Switzerland. PwC Global Chair Mohamed Kande told Fortune that 56% of companies are getting “nothing out of” their AI investments, largely because “you have to go to the basics” to see a benefit. In other words, access alone doesn’t drive value. How people are trained, encouraged, and given room to experiment is what creates ROI on that access.
AI is only as good as you prompt it to be. Kande noted that “AI moves so fast,” and new tools, platforms, and capabilities are constantly rolling out, but those innovations alone aren’t silver bullets. The magic ingredient is the human touch.
Asking for a blog post on 2026 trends in wealth management or cybersecurity won’t yield quality content unless there’s context. What patterns have emerged in those spaces? What challenges are customers in those industries citing? If you don’t provide that context, you’ll only ever get a generic result.
Beyond content, specificity and context can help AI help you in other ways. Rather than asking for basic shortcuts, walk through your day-to-day work and where it slows down. With enough collaboration, AI can move from assisting with tasks to helping design processes and tools that are genuinely useful.
Iterating on prompts, challenging outputs, and editing aggressively is where value compounds and “capability overhang” starts to shrink. The opportunity now is less about catching up to the technology and more about making ourselves a valuable component of an AI process. There’s still plenty of toothpaste left — as long as you’re willing to keep squeezing.
Elsewhere …
- LISTEN: How a Home Remodeler Used AI to Scale from $10M to $1B in 10 years
- OpenAI to Begin Testing Ads on ChatGPT in the US
- Anthropic, Google, and Microsoft Fight to Win Teachers
- AI at Davos 2026: What Tech Leaders Hope and Fear This Year
- OpenAI Plans the Debut of Its First AI Hardware Device in 2026
Tips and Tricks
A new way to recall
What’s happening: Over the past year, OpenAI says it has improved memory capabilities in ChatGPT. It hasn’t always been clear how those improvements work, but the latest update at least offers an example of how you might benefit.
How so?: If you’re looking for a specific conversation, you can use the search function or rely on a naming convention for your conversations, but an OpenAI software engineer showed in a post on X that you can also conversationally ask ChatGPT about old conversations. You can pick up where you left off in that conversation from last year, or even reference it at the start of a new one.
Quote of the Week
“The automation of knowledge work using LLMs is the key focus of many enterprise generative AI pilots. For certain tasks, the accuracy of the LLM may not be ‘good enough,’ and it may be tempting to conclude that the task is not a good fit for automation using LLMs. But rather than comparing the LLM’s accuracy to the best possible (i.e., 100%), it is better to compare it to the accuracy of humans doing the work right now and to track the changing human-LLM accuracy gap for that task. Maybe humans achieve 95% accuracy and the LLM achieves ‘only’ 90%.
“The key thing to remember is that as frontier LLMs get more capable, their accuracy will continue to improve, while human accuracy will likely be unchanged. So it is quite possible that LLM accuracy surpasses human accuracy in 2026 for many enterprise tasks.
“What are these tasks? How much business value do they represent? How much employment is at risk? These are some of the questions on my radar for 2026.”
— Rama Ramakrishnan, Professor of the Practice, AI/Machine Learning at MIT Sloan, in a post about what 2026 will bring in the AI era



