## ⚙️ Active vs Passive Intermediaries
A critical distinction affecting safe harbour eligibility emerged from e-commerce litigation:
### Factors Indicating "Active" Status (Louboutin Factors, from *Christian Louboutin SAS v. Nakul Bajaj*, Delhi High Court, 2018)
- Payment processing/facilitation
- Delivery/logistics handling
- Product curation/selection
- Customer service provision
- Authenticity guarantees
- Pricing involvement
- Editorial control over listings
## 📋 The "Enablement" Test
Recent judgments have introduced an "enablement" test under Section 79(3)(a), examining whether the intermediary has conspired, abetted, aided, or induced the unlawful act rather than merely hosting third-party content.
## 🤖 AI Content Moderation
Platforms increasingly use AI for content moderation, and the IT Rules 2021 mandate that Significant Social Media Intermediaries (SSMIs) deploy technology-based measures for proactive identification of certain content:
### AI Moderation Challenges
| Challenge | Impact |
|---|---|
| False Positives | Legitimate content incorrectly removed — affects free speech |
| False Negatives | Harmful content missed — platform liability risk |
| Context Blindness | AI struggles with sarcasm, satire, and news-reporting context |
| Evolving Content | New forms of harmful content evade detection |
| Language Diversity | Indian languages poorly supported by AI tools |
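The false-positive/false-negative trade-off in the table above can be illustrated with a minimal sketch. The classifier scores, threshold values, and function names here are hypothetical, not drawn from any rule or real platform:

```python
# Minimal sketch of threshold-based moderation triage.
# Scores and thresholds are illustrative only; real systems combine
# ML classifiers, human review queues, and written policy rules.

def triage(score: float, flag_threshold: float = 0.8) -> str:
    """Route content based on a hypothetical classifier confidence score."""
    if score >= flag_threshold:
        return "remove"        # high confidence: risk of false positives
    if score >= 0.5:
        return "human_review"  # ambiguous: context (satire, news) needs a human
    return "allow"             # low confidence: risk of false negatives

# Lowering flag_threshold catches more harmful content (fewer false
# negatives) but also removes more legitimate content (more false positives).
print(triage(0.9))   # remove
print(triage(0.6))   # human_review
print(triage(0.2))   # allow
```

The single threshold makes the tension concrete: no setting eliminates both error types, which is why the table treats false positives and false negatives as distinct risks.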
## 🌐 AI as Intermediary: Emerging Issues
Generative AI platforms (ChatGPT, Gemini, etc.) present novel intermediary liability questions:
### When AI Platforms May Be Intermediaries
- Allowing users to share AI-generated content with others
- Hosting user-created AI chatbots accessible to public
- AI search tools aggregating third-party content
- Platforms generating content from user prompts using web data
### When AI Platforms May NOT Be Intermediaries
- AI autonomously generating content from proprietary training data
- No third-party content hosting — purely computational output
- Direct AI-to-user interaction without sharing features
## 📊 Platform Transparency Reports
SSMIs must publish monthly compliance reports under Rule 4(1)(d):
| Disclosure Requirement | Purpose |
|---|---|
| Complaints received from users | Volume of grievances |
| Actions taken on complaints | Response rate |
| Content removed proactively | AI moderation effectiveness |
| Government/court orders received | State intervention transparency |
| Content removed per orders | Compliance rate |
| Accounts suspended | Enforcement actions |
These reports are published on platform websites (e.g., Google Transparency Report, Meta Transparency Center) and provide valuable data on content moderation at scale.
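The disclosure fields above can be sketched as a simple report record. The field names and figures are illustrative only; Rule 4(1)(d) prescribes the disclosures, not any data format:

```python
from dataclasses import dataclass, asdict

# Sketch of a monthly transparency report record mirroring the
# Rule 4(1)(d) disclosure fields in the table above.
# Field names and all numbers are illustrative, not a prescribed schema.

@dataclass
class MonthlyComplianceReport:
    month: str
    user_complaints_received: int
    actions_taken_on_complaints: int
    content_removed_proactively: int
    government_or_court_orders: int
    content_removed_per_orders: int
    accounts_suspended: int

report = MonthlyComplianceReport(
    month="2024-01",
    user_complaints_received=1200,
    actions_taken_on_complaints=1150,
    content_removed_proactively=5400,
    government_or_court_orders=37,
    content_removed_per_orders=35,
    accounts_suspended=210,
)
print(asdict(report))
```

Structuring the disclosures this way makes the ratios the Rules are interested in (response rate, compliance rate) directly computable from the published fields.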
## ⚖️ Practical Compliance Checklist
All intermediaries (Rule 3 due diligence):
- Publish Terms of Service and Privacy Policy prominently
- Inform users about prohibited content categories
- Appoint a Grievance Officer (name and contact details made public)
- Acknowledge complaints within 24 hours
- Resolve complaints within 15 days
- Remove content within 36 hours of a court or government order
- Retain user data for 180 days post-account closure
- Assist law enforcement within 72 hours

Additional obligations for SSMIs (Rule 4):
- Appoint a Chief Compliance Officer (India-resident senior employee)
- Appoint a Nodal Contact Person (24x7 law enforcement coordination)
- Appoint a Resident Grievance Officer
- Enable first originator traceability (messaging services; for specified content)
- Deploy technology for CSAM detection
- Publish monthly compliance reports
- Offer a voluntary user verification mechanism
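The timelines in the checklist can be collected as constants a compliance tracker might use. The durations come from the checklist above; the dictionary keys and function name are illustrative:

```python
from datetime import datetime, timedelta

# Sketch of the IT Rules 2021 timelines from the checklist above,
# expressed as constants for a hypothetical compliance tracker.
# Durations are from the Rules; variable names are illustrative.

DEADLINES = {
    "acknowledge_complaint": timedelta(hours=24),
    "resolve_complaint": timedelta(days=15),
    "remove_on_order": timedelta(hours=36),
    "assist_law_enforcement": timedelta(hours=72),
    "retain_data_post_closure": timedelta(days=180),
}

def due_by(event_time: datetime, obligation: str) -> datetime:
    """Return the deadline for an obligation triggered at event_time."""
    return event_time + DEADLINES[obligation]

# An order received at 09:00 on 1 Jan must be acted on by 21:00 on 2 Jan.
print(due_by(datetime(2024, 1, 1, 9, 0), "remove_on_order"))
```

Keeping the durations in one table-like structure mirrors how the Rules state them and makes each deadline auditable against the text of the relevant rule.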