This Week in AI — 💻 Why picking the right model matters more than the tool

Mar 4, 2026 | AI

Each week we'll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!

What You Should Know 


Why the AI Model Matters More than the Tool 

Most people using AI at work think in terms of tools. ChatGPT. Claude. Gemini. Pick one, type a prompt, get an answer. Like much about AI these days, that way of thinking is already outdated.

The real choice goes beyond just the platform to the model running within it. So far this week, OpenAI rolled out GPT-5.3 Instant, and Google introduced Gemini 3.1 Flash-Lite. That makes Anthropic’s default model, Claude Sonnet 4.6, the elder statesman at ::checks notes:: the ripe old age of two weeks. Understanding the differences between these models can save time and produce better outputs, depending on the task at hand.

GPT-5.3 Instant is designed to respond quickly while producing more grounded answers and 26.8% fewer hallucinations than previous models when using the web. OpenAI even called out that this model is “more accurate, less cringe” than its predecessor GPT-5.2, which would often come across as “overbearing or making unwarranted assumptions” about the user. You may notice an improvement in research, too, because this model is “less likely to overindex on web results,” a habit that often led GPT-5.2 to cite questionable sources.

Claude Sonnet 4.6 is designed to handle complex reasoning and extremely long context windows, meaning it can process large documents or multiple sources in a single conversation. That’s useful when the task involves analysis rather than quick drafting, like feeding it a long report and asking for key insights or stress-testing messaging against potential criticism. However, you might prefer Opus 4.6 (released in early February), which is lauded for its writing capabilities.

Then there’s Gemini 3.1 Flash-Lite, which Google built for speed and efficiency and is meant for developers. So far, it’s only available via API, but it could soon become Gemini’s default model, replacing Gemini 3 Flash, which debuted in December.

With new models arriving every few weeks, your workflow isn’t something you can set once and forget for months on end. It’s worth getting in the habit of checking the model selector before you start a task and thinking about what the work actually requires: speed, reasoning, writing quality, or reliability. The model that works best for quick drafting may not be the one you want analyzing a 60-page report or generating dozens of variations for social. 

Picking ChatGPT, Claude, or Gemini used to be the entire decision. Increasingly, it’s just one consideration.


Tips and Tricks

📚 Build a stronger knowledge base in ChatGPT projects

What’s happening: Projects are one way to organize chats on the same topic in ChatGPT, and the platform recently expanded its capabilities. Previously, the projects’ knowledge base came only from uploaded documents, but now you can also link to files and folders within Google Drive.

Why it matters: Most communicators use AI one conversation at a time. You open a chat, paste context, get an answer, and move on. Projects change that by letting you build a running library of source material that the AI can reference whenever you’re working on that topic.

For example, you could create a project for a client and add background documents from Drive and briefing notes you paste in yourself. Instead of re-explaining the context every time, the AI already has it.

How to use it: Start thinking of projects less like folders and more like a living reference library. If you keep evolving material in a Google Drive folder, like a strategy doc, research report, or outreach tracker, the project will automatically consider those documents once you’ve linked that folder in the source area.

Over time, that project becomes a shared context the AI can pull from, which makes your prompts shorter and the outputs more useful.

Quote of the Week

“Now there is a flood of relatively low-quality content. Because the quantity is so large, it congests the recommendation systems, so it gets harder to encounter the truly high-quality content.

“For professionals, the best thing for them to do is learn to use generative AI and combine it into their workflow. At the same time, they have to pay attention to whether consumers like the way they incorporate generative AI.”

— Tianxin Zou, Ph.D., Professor of Marketing at the University of Florida’s Warrington College of Business, on a paper he co-authored about the effects of AI slop

How Was This Newsletter?

😀 😐 🙁

Dave Isaac

Want to stay up on the latest in AI?

Subscribe to stay up to date on how you can grow your brand and build an audience through media, content, social media, and digital marketing by harnessing AI.