The AI Model Risk Catalog reveals a troubling disconnect: what AI developers and researchers worry about isn't what's actually harming people in the real world. By analyzing nearly half a million model cards from Hugging Face, this research exposes how developers fixate on technical glitches and bias (44% of their concerns) while largely ignoring the fraud and manipulation that accounts for 22% of actual AI incidents.
We extracted 2,863 unique risks from developer documentation and compared them against both researcher predictions (from MIT's Risk Repository) and real-world harms (from the AI Incident Database). The findings are stark: researchers focus heavily on governance and societal impacts that, so far, rarely materialize as incidents, while both groups miss the human-interaction risks—like deepfakes scamming immigrants or AI-generated political disinformation—that dominate news headlines.
This isn't just another risk taxonomy. The AI Model Risk Catalog links specific risks to individual models, creating a granular reference that shows, for instance, how a text-to-video model's "outputs realistic faces" warning connects to actual fraud cases. It reveals how developers write vague warnings like "increases risk to users if used irresponsibly" while missing concrete dangers like social engineering attacks.
The catalog serves as a reality check for the AI community: developers need to look beyond code bugs to consider how bad actors exploit their systems, researchers should ground their concerns in actual incident data, and everyone needs clearer, structured risk reporting that bridges the gap between theoretical worries and real-world harms. Like comparing a weather forecast to actual storm damage, this work shows where our risk predictions align with reality—and where we're worryingly off the mark.