> At a Glance
> – Instagram chief Adam Mosseri says AI will make authenticity infinitely reproducible
> – New credibility signals needed to identify who to trust online
> – Technical solutions include cryptographic photo signing
> Why it matters: Readers will need new ways to verify what’s real as AI-generated content floods platforms
Instagram head Adam Mosseri’s year-end message signals a fundamental shift in how we determine what’s real online.
The Authenticity Crisis
Mosseri warns that AI now makes it impossible to distinguish real photos from AI-generated images. In his 2025 year-end post, he argues that because the markers of authenticity have become infinitely reproducible, platforms must change their approach.
Key changes ahead:
- Focus shifts from what is said to who is posting
- Technical solutions like cryptographic signing to verify real photos
- An adaptation period of years, as users accustomed to trusting their eyes adjust
Technical Solutions
Manufacturers may implement cryptographic signing to prove photos aren’t AI-generated. Mosseri is also critical of equipment makers who, competing to make everyone look professional, help amateurs create polished images.
Implementation timeline:
| Solution | Status |
|---|---|
| Cryptographic photo signing | Predicted for camera makers |
| Photo verification at capture | Proposed |
| AI content labeling | Ongoing |
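The signing idea above can be sketched in a few lines. This is a minimal illustration, not any camera maker's actual scheme: real capture-time provenance systems (e.g. C2PA-style content credentials) use public-key signatures provisioned in device hardware, while this sketch substitutes a stdlib HMAC with a hypothetical per-device key.

```python
import hashlib
import hmac

# Hypothetical per-device secret; in a real scheme this would be a
# hardware-held private key, and verification would use a public key.
CAMERA_KEY = b"device-secret-key"

def sign_photo(image_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the raw capture at shutter time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

photo = b"raw sensor bytes"
sig = sign_photo(photo)
print(verify_photo(photo, sig))            # True: untouched capture
print(verify_photo(photo + b"edit", sig))  # False: pixels changed
```

Any edit to the pixels invalidates the signature, which is what lets a platform distinguish "came off a camera unmodified" from everything else.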
Platform Response
Instagram plans to surface credibility signals showing who is posting, so users can decide whom to trust. Planned steps include:
- Build its own AI tools to compete with AI-generated content
- Clearly label AI-generated content
- Work with camera manufacturers on verification
- Improve ranking of original content
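The labeling step above amounts to mapping a post's provenance metadata to a user-facing label. A minimal sketch, with field names (`provenance`, `signed`) that are purely illustrative and not Instagram's actual schema:

```python
# Hypothetical labeling pass: decide what a feed shows next to a post
# based on whatever provenance metadata accompanied the upload.
def content_label(metadata: dict) -> str:
    provenance = metadata.get("provenance")
    if provenance == "ai_generated":
        return "AI-generated"
    if provenance == "camera" and metadata.get("signed"):
        return "Verified capture"
    return "Unverified"

print(content_label({"provenance": "ai_generated"}))            # AI-generated
print(content_label({"provenance": "camera", "signed": True}))  # Verified capture
print(content_label({}))                                        # Unverified
```

The notable design point is the default: anything without verifiable provenance falls into "Unverified" rather than being presumed real, which matches the shift away from trusting appearance alone.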

Key Takeaways
- Authenticity becomes a technical problem requiring cryptographic solutions
- Focus shifts to verifying the source rather than the content itself
- Adaptation will span years as users adjust their trust mechanisms
Platforms are building the infrastructure for credibility verification while users acclimate to new methods that replace the assumption that seeing is believing.

