Headlines You Should Know
OpenAI Announces "Preparedness Framework" to Track and Mitigate AI Risks
Forget the Friday at 5 news dump – this is the just-before-you-slam-the-laptop-shut-for-the-holidays news dump.
The ChatGPT creator published a blog post yesterday with the ominous title "Preparedness." Within, OpenAI unveiled its "processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models." But why? Is this the precursor to artificial general intelligence (AGI), which has been the company's goal since Day 1? Some have theorized that. Is it just mop-up duty after safety concerns were voiced around the time of Sam Altman's temporary firing? Perhaps so, now that the board has veto power over launching new systems.
While it feels like there's another shoe waiting to drop here, there is a widespread recognition that innovation in the AI space needs to be tempered to the speed of safe deployment.
Elsewhere …
- LISTEN: Redefining PR in the Age of AI
- Pakistan's Former Prime Minister Uses AI to Campaign From Prison
- REVIEW: Google's New AI Music Generator, MusicFX
- GPT and Other AI Models Can't Analyze an SEC Filing, Researchers Find
Tips and Tricks
Prompt engineering, according to OpenAI
What's happening: The term "prompt engineering" may seem better suited for April or May in this year of swift development for AI. After all, getting good output from chatbots is less about formulaic instructions than plain ol' solid context these days. But OpenAI is throwing it back to a few months ago, using that title for a blog post published in the documentation section of its website.
OpenAI says: The blog outlines six strategies, and if you've been visiting this space each week, many will look familiar. Here's one we haven't tackled: Giving the model time to "think." OpenAI puts it this way: "If asked to multiply 17 by 28, you might not know it instantly, but can still work it out with time. Similarly, models make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Asking for a 'chain of thought' before an answer can help the model reason its way toward correct answers more reliably."
Our take: This is an interesting way to reduce hallucinations. Sometimes, even with reference material, AI doesn't pick up on the context you've laid out for it and goes down the wrong path. This is why it's so important to review outputs thoroughly – not all hallucinations will be obvious.
If you recognize a part of the content you're building isn't going the way you want, ask the AI to talk through just that specific section. Working in bite-size chunks generates better responses from the AI anyway, and makes editing a draft less cumbersome. In addition to asking for a chain of thought, you can also ask the AI to take another run at that section with the specific context that it was missing.
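For readers who work with chatbots through the API rather than the web interface, the "give the model time to think" tip can be baked into the prompt itself. Below is a minimal sketch, assuming OpenAI's standard chat-message format (a list of role/content dictionaries); `build_cot_messages` and the instruction wording are our own illustrative choices, not taken from OpenAI's blog post.

```python
# Sketch: wrap a question in a chain-of-thought instruction using the
# chat-message format (list of {"role": ..., "content": ...} dicts).
# build_cot_messages is a hypothetical helper name, and the system
# instruction text is illustrative, not OpenAI's exact wording.

def build_cot_messages(question: str, context: str = "") -> list[dict]:
    """Build a message list that asks the model to reason step by step
    before committing to a final answer."""
    system = (
        "Before answering, work through the problem step by step. "
        "Show your reasoning, then state your final answer on its own line."
    )
    # Prepend any reference material so the model reasons over it too.
    user = f"{context}\n\n{question}".strip() if context else question
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Using OpenAI's own example from the blog post as the question:
messages = build_cot_messages("What is 17 multiplied by 28?")
```

The resulting `messages` list can be passed to a chat-completions call. The same idea works in the browser, too: simply ask the model to "think step by step" before it answers.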
Quote of the Week
"If I had to make a prediction, in high-income countries like the United States, I would guess that we are 18–24 months away from significant levels of AI use by the general population. In African countries, I expect to see a comparable level of use in three years or so. That's still a gap, but it's much shorter than the lag times we've seen with other innovations."
– Bill Gates' 2024 predictions on his blog, GatesNotes
