AI Summary
TLDR: Author headshots have become a parsed E-E-A-T signal for AI search engines in 2026. AI crawlers extract author photos via Person schema’s image property, validate them against other sources (LinkedIn profile photos, conference photos), and use the consistency to score author trustworthiness. Generic stock-photo headshots, missing photos, or inconsistent photos across platforms suppress citation authority. The standard: real photo, consistent across all platforms, properly tagged in Person schema, ideally with metadata indicating who took it and when.
Why author photos became an AI signal
AI engines have been investing in author identity verification for two years. The goal is to distinguish real authors with track records from generated personas (which became a problem when LLMs made it trivial to create fake author bios at scale).
Photos are one verification signal in a stack. The engine asks: does this author photo match the LinkedIn photo, the conference speaker photo, the podcast guest photo? If yes, the author is more likely real and trustworthy. If no – or if there is no photo at all – the trust score drops.
The photo standard for AI citation authority
Five characteristics of a citation-grade author photo:
- Real person (not stock photo or AI-generated).
- Clear face (not a logo, avatar, or distant shot).
- Consistent across platforms (same photo or photos from same session on LinkedIn, About page, social profiles).
- Recent (within 2 years; older photos suggest stale identity).
- Professional context (well-lit, neutral background, not a vacation selfie).
This is not vanity. It is a parsed signal. Stock photo headshots are detectable via reverse image search, and AI engines do reverse-search author photos at scale.
Person schema with image property: the markup
Required Person schema fields for image:
- image: URL of the photo (use the highest resolution version).
- image with ImageObject: use ImageObject to add width, height, caption, and creator (photographer name).
- sameAs: links to the author’s other profiles (LinkedIn, Twitter/X, etc.) where the same photo appears.
Adding an ImageObject with creator metadata documents provenance, a small but real positive signal that most sites skip.
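A minimal sketch of the combined markup, using placeholder names and example.com URLs (swap in the real author, photo, and profile links):

```html
<!-- Sketch only: names, URLs, and dimensions are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "url": "https://example.com/about/jane-doe",
  "image": {
    "@type": "ImageObject",
    "url": "https://example.com/images/jane-doe-headshot-2025.jpg",
    "width": 1200,
    "height": 1200,
    "caption": "Jane Doe, photographed by Sam Lee, 2025",
    "creator": {
      "@type": "Person",
      "name": "Sam Lee"
    }
  },
  "sameAs": [
    "https://www.linkedin.com/in/janedoe",
    "https://x.com/janedoe"
  ]
}
</script>
```

Width and height are given in pixels, the convention most parsers expect. The sameAs URLs are what let an engine cross-reference this photo against the LinkedIn and X profile photos, which is exactly the consistency check described above.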
Multi-photo strategy: variety with consistency
Best-in-class authors have 3 to 5 photos in active circulation:
- Primary headshot (used on About page and most platforms).
- Casual photo (LinkedIn alternative, podcast guest features).
- Speaking photo (from a conference or panel).
- Working photo (at desk, lab, etc., for editorial features).
- Group photo with team or co-founder (for company About page).
All photos should be from the same era (within a year of each other) and clearly recognisable as the same person. Variety signals professional presence; consistency signals identity stability.
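Schema.org allows the image property to take an array, so the photo set can be exposed in markup with a caption describing each context. A sketch with placeholder URLs, primary headshot listed first:

```html
<!-- Sketch only: URLs and captions are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "image": [
    {
      "@type": "ImageObject",
      "url": "https://example.com/images/jane-doe-headshot-2025.jpg",
      "caption": "Primary headshot, 2025"
    },
    {
      "@type": "ImageObject",
      "url": "https://example.com/images/jane-doe-exampleconf.jpg",
      "caption": "Speaking at ExampleConf 2025"
    },
    {
      "@type": "ImageObject",
      "url": "https://example.com/images/jane-doe-desk.jpg",
      "caption": "At work, for editorial features"
    }
  ]
}
</script>
```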
Common photo mistakes that suppress E-E-A-T
- No photo at all. Author appears as text-only entity. Highest suppression risk.
- Stock photo headshot. Detectable via reverse image search; flags as low-trust.
- Cartoon or avatar. Acceptable for some contexts (Twitter, gaming) but signals informality on professional content.
- Very old photo (5+ years). Suggests stale or non-current identity.
- Different photo on every platform. Confuses identity verification.
- Group photo with no labelling. Engine cannot determine which person is the author.
Most B2B sites have at least 2 of these issues. Fixing them is a one- to two-hour task that pays off over years of citations.
Photo provenance: documenting where photos came from
A higher-trust signal is to include provenance metadata with author photos:
- Photographer name (in image alt text or schema).
- Date taken (in EXIF or in caption).
- Studio or context (e.g., ‘Headshot from 2024 Series B announcement’).
- Copyright notice (signals professional photography).
This is overkill for most sites but matters for high-stakes content (medical, financial, legal) where E-E-A-T scrutiny is highest.
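For those high-stakes cases, each provenance item maps onto a standard schema.org ImageObject property: creator for the photographer, dateCreated for when it was taken, caption for context, and copyrightNotice for the copyright line. A sketch with placeholder values:

```html
<!-- Sketch only: all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "url": "https://example.com/images/jane-doe-headshot-2024.jpg",
  "caption": "Headshot from 2024 Series B announcement",
  "dateCreated": "2024-03-15",
  "creator": {
    "@type": "Person",
    "name": "Sam Lee"
  },
  "copyrightNotice": "© 2024 Sam Lee Photography"
}
</script>
```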
AI-generated photos: a 2026 trap to avoid
Some founders have started using AI-generated headshots (Midjourney portraits and similar generated images). This is detectable and harmful:
- AI-generated photos have detectable artefacts (eye asymmetry, hand issues, background inconsistencies).
- AI engines have classifier models specifically for detecting AI-generated images.
- When detected, the trust score drops sharply because it suggests author identity manipulation.
Use real photographs. Even an iPhone selfie in good light is better than an AI-generated ‘professional’ headshot.
Frequently Asked Questions
How recent should author photos be?
Can I use the same photo on every platform?
Will black-and-white photos hurt my E-E-A-T?
Do team author pages need individual photos?
What about anonymous or pseudonymous authors?
Want this implemented for your brand?
I help growth-stage companies own their category in AI search. Get an author E-E-A-T audit.