What You Should Know
Setting the Record Straight About AI in News Coverage
As AI increasingly becomes part of the mainstream news cycle, outlets scramble to cover the latest developments. One recent trend is coverage of specific AI hallucinations, instances where large language models produce false or nonsensical information. These reports paint a picture of unreliable, potentially dangerous technology run amok. What they fail to convey is that hallucinations are not a new phenomenon in AI.
Since the early days of generative models, researchers and developers have grappled with the challenge of occasional inaccurate outputs. That's why the human element remains critical to using AI: building oversight and validation into every workflow reduces the risk of blunders. Sensationalized coverage also obscures the steady progress being made in reducing hallucinations and improving model reliability.
This skewed portrayal of AI risks misinforming the public and policymakers about the technology’s true state and trajectory. By focusing on isolated incidents instead of informing audiences about how to mitigate them, news coverage potentially hampers productive discussions about the real benefits and challenges of AI development.
Communications professionals should take a more nuanced approach, acknowledging both the long-standing nature of hallucinations and the ongoing efforts to address them. Positioning the conversation that way shows that you understand how the technology works and that your brand is using AI responsibly.
Elsewhere …
- Amazon’s Dr. Angela Shippy Wants to Help AI Transform the Healthcare System
- PODCAST: Taking the ‘Temperature’ of Creativity with AI
- Anthropic Releases Updated Claude Model ‘Exceptional at Writing High-Quality Content’
- Central Banks Must Prepare for Profound Impact of AI, BIS Says
- New Startup Helps Authors Strike Licensing Deals with AI Companies
Tips and Tricks
🔧 How to use Claude’s new Artifacts tool
What’s happening: Last week, Anthropic unveiled Claude 3.5 Sonnet, the first model in its new Claude 3.5 family and an upgrade to the Claude 3 line. The new model is twice as fast as Claude 3 Opus, and Anthropic says it “shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone.” Claude 3.5 Sonnet also ships with a new feature called Artifacts, which changes how users interact with AI.
How it works: When you enable Artifacts and prompt Claude, you see two windows. On the left is the area where you prompt Claude as you would in most other AI tools. On the right is where Claude generates what you ask for; so far it can produce code, text documents, and web designs. Text documents are probably most useful to communicators, but Artifacts also shows how easy it is to create something unique, like a basic 8-bit video game, using only natural language.
Try it out: Once you’ve enabled Artifacts, simply tell Claude what you want to create, and it will appear on the right-hand side of your screen. (You can toggle the feature back off by clicking the beaker icon when you start a new thread.) At the bottom of that window, arrows let you click between the versions you’ve iterated on without scrolling up to find them. You can also quickly copy the text to your clipboard or download it as a file.
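Note that Artifacts is a feature of the claude.ai web interface, not something you reach programmatically. If your team would rather work with the underlying Claude 3.5 Sonnet model in its own tools, Anthropic publishes a Messages API. The sketch below is illustrative only: it builds an example request and sends it only if the `anthropic` Python package is installed and an `ANTHROPIC_API_KEY` environment variable is set (the prompt text is our own invention; the model identifier is the one Anthropic published for Claude 3.5 Sonnet).

```python
import os

# Example request shape for Anthropic's Messages API.
# The prompt is a hypothetical comms use case, not from the newsletter.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Draft a 100-word intro for a press release "
                       "announcing our company's AI usage policy.",
        }
    ],
}

if os.environ.get("ANTHROPIC_API_KEY"):
    # Only attempted when a key is configured; requires `pip install anthropic`.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(**payload)
    print(message.content[0].text)
else:
    print("Set ANTHROPIC_API_KEY to send this request to", payload["model"])
```

This reaches the same model that powers the chat experience, so the writing-quality gains Anthropic describes apply either way; only the Artifacts side panel stays exclusive to the web app.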
Quote of the Week
“AI is sort of like bacon. On its own, it’s not a meal, but when you mix it with other things, it can make them taste a lot better.”
— Adam C. Fisher, Director of Enterprise Project Staff in the Office of Pharmaceutical Quality at the FDA’s Center for Drug Evaluation and Research, during a Q&A session with global regulatory leaders at Gregory FCA client DIA’s Global Annual Meeting last week
