⚙️ Active vs Passive Intermediaries

A critical distinction affecting safe harbour eligibility emerged from e-commerce litigation:

Christian Louboutin SAS v. Nakul Bajaj (Darveys.com)
2018 Delhi High Court — Single Judge
Landmark distinction: the Court found Darveys.com was an "active" intermediary because it processed payments, arranged deliveries, curated products, provided customer support, and guaranteed authenticity. Such platforms go beyond "mere hosting" and may lose S.79 protection. The Court listed multiple factors for assessing intermediary status.

Factors Indicating "Active" Status (Louboutin Factors)

  • Payment processing/facilitation
  • Delivery/logistics handling
  • Product curation/selection
  • Customer service provision
  • Authenticity guarantees
  • Pricing involvement
  • Editorial control over listings

Amazon v. Amway India Enterprises
2020 Delhi HC Division Bench
Overturned the distinction: The Division Bench held that Section 79 does NOT distinguish between "active" and "passive" intermediaries. The safe harbour applies to ALL intermediaries meeting the statutory conditions. The Christian Louboutin single-judge ruling was effectively set aside on this point.
✅ Current Position
Per Amazon v. Amway (Division Bench), Section 79 protection is available to all intermediaries complying with due diligence requirements. The active/passive distinction is NOT a statutory requirement. However, platforms with greater control may still face scrutiny under S.79(2)(b) — "does not select or modify information."

📋 The "Enablement" Test

Recent judgments have introduced an "enablement" test under Section 79(3)(a) — examining whether platforms enable infringement:

Google LLC v. DRS Logistics
2023 Delhi HC Division Bench
Google's Ads Programme, which allows advertisers to purchase trademarked terms as keywords, constitutes "use" under trademark law. If such use causes confusion, Google cannot claim S.79 safe harbour because its programme "enables" the infringement. The court applied an enablement analysis going beyond traditional "actual knowledge."
Puma SE v. Indiamart Intermesh Ltd.
2024 Delhi HC
Indiamart's platform, which lets sellers select brand names (including Puma) when listing products without any IP verification, "enables" trademark infringement. Because the platform's design facilitates infringement, it may fall outside the protection of S.79.
⚠️ Implications of Enablement Test
The enablement test goes beyond "actual knowledge." If a platform's design or features inherently facilitate infringement, safe harbour may be denied. This places greater burden on platforms to implement safeguards proactively — contrary to the Shreya Singhal "no proactive monitoring" principle.

🤖 AI Content Moderation

Platforms increasingly use AI for content moderation. The IT Rules 2021 require SSMIs to endeavour to deploy technology-based measures to proactively identify certain content (such as child sexual abuse material):

AI Moderation Challenges

  • False Positives: legitimate content incorrectly removed, affecting free speech
  • False Negatives: harmful content missed, creating platform liability risk
  • Context Blindness: AI struggles with sarcasm, satire, and news reporting
  • Evolving Content: new forms of harmful content evade detection
  • Language Diversity: Indian languages are poorly supported by AI tools
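The false positive/false negative trade-off above is commonly managed by routing AI verdicts through confidence thresholds, with borderline cases (where context blindness bites) escalated to human reviewers. A minimal sketch, with hypothetical labels and thresholds that are not drawn from any platform's actual system:

```python
# Illustrative triage of an AI classifier's verdict on a post.
# The labels ("harmful") and thresholds (0.95, 0.60) are hypothetical.

def triage(label: str, confidence: float) -> str:
    """Decide what to do with an AI moderation verdict."""
    if label == "harmful" and confidence >= 0.95:
        return "auto_remove"    # high confidence: act, but keep an audit log
    if label == "harmful" and confidence >= 0.60:
        return "human_review"   # borderline: sarcasm/satire/news risk
    return "keep"               # benign or low confidence: leave content up

assert triage("harmful", 0.99) == "auto_remove"
assert triage("harmful", 0.70) == "human_review"
assert triage("benign", 0.99) == "keep"
```

Raising the auto-removal threshold reduces false positives at the cost of more false negatives; the human-review band is where "best efforts" moderation absorbs the ambiguity.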
✅ Best Efforts, Not Strict Liability
Per IT Rules, proactive monitoring is based on "best efforts" — platforms aren't strictly liable for AI failures. Section 79 immunity remains if platforms act expeditiously upon actual knowledge (court/govt order). However, gross negligence in deploying moderation could still be scrutinized.

🌐 AI as Intermediary — Emerging Issues

Generative AI platforms (ChatGPT, Gemini, etc.) present novel intermediary liability questions:

When AI Platforms May Be Intermediaries

  • Allowing users to share AI-generated content with others
  • Hosting user-created AI chatbots accessible to public
  • AI search tools aggregating third-party content
  • Platforms generating content from user prompts using web data

When AI Platforms May NOT Be Intermediaries

  • AI autonomously generating content from proprietary training data
  • No third-party content hosting — purely computational output
  • Direct AI-to-user interaction without sharing features
Moffatt v. Air Canada
2024 British Columbia Civil Resolution Tribunal (Canada)
Air Canada argued its chatbot was a "separate legal entity" — rejected. The tribunal held that the company deploying AI remains responsible for the AI's outputs. AI is not a separate legal person capable of bearing liability independently.
⚠️ MeitY Advisory on AI
MeitY has advised that organizations deploying AI models must comply with intermediary obligations, including preventing illegal content generation. The operator/deployer is treated as the intermediary, not the AI itself.

📊 Platform Transparency Reports

SSMIs must publish monthly compliance reports under Rule 4(1)(d):

  • Complaints received from users: volume of grievances
  • Actions taken on complaints: response rate
  • Content removed proactively: effectiveness of AI moderation
  • Government/court orders received: transparency about state intervention
  • Content removed pursuant to orders: compliance rate
  • Accounts suspended: enforcement actions
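For illustration, the disclosure categories above can be modelled as a simple record. The field names and figures below are hypothetical placeholders, not language from Rule 4(1)(d):

```python
from dataclasses import dataclass

@dataclass
class MonthlyComplianceReport:
    # Hypothetical schema mirroring the Rule 4(1)(d) disclosure categories.
    month: str
    complaints_received: int
    complaints_actioned: int
    proactive_removals: int
    orders_received: int
    removals_under_orders: int
    accounts_suspended: int

# Sample figures for illustration only.
report = MonthlyComplianceReport(
    month="2024-01",
    complaints_received=1200,
    complaints_actioned=1150,
    proactive_removals=34000,
    orders_received=18,
    removals_under_orders=17,
    accounts_suspended=210,
)

# Sanity check: a platform cannot action more complaints than it received.
assert report.complaints_actioned <= report.complaints_received
```

Published in this structured form month over month, the same fields let researchers compare response rates and proactive-removal volumes across platforms.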

These reports are published on platform websites (e.g., Google Transparency Report, Meta Transparency Center) and provide valuable data on content moderation at scale.

⚖️ Practical Compliance Checklist

✅ For All Intermediaries
  • Publish Terms of Service, Privacy Policy prominently
  • Inform users about prohibited content categories
  • Appoint Grievance Officer (name, contact details public)
  • Acknowledge complaints within 24 hours
  • Resolve complaints within 15 days
  • Remove content within 36 hours of court/govt order
  • Retain user data for 180 days post-account closure
  • Assist law enforcement within 72 hours
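The timelines in the checklist above lend themselves to simple deadline tracking. A minimal sketch, assuming naive timestamps (a real compliance system would need time zones and audit logging):

```python
from datetime import datetime, timedelta

# Statutory windows from the IT Rules 2021 checklist above.
ACK_WINDOW = timedelta(hours=24)        # acknowledge complaint
RESOLVE_WINDOW = timedelta(days=15)     # resolve complaint
TAKEDOWN_WINDOW = timedelta(hours=36)   # remove content after court/govt order
LEA_WINDOW = timedelta(hours=72)        # assist law enforcement
RETENTION_PERIOD = timedelta(days=180)  # retain data post-account closure

def grievance_deadlines(complaint_at: datetime) -> dict:
    """Compute the acknowledgment and resolution deadlines for a complaint."""
    return {
        "acknowledge_by": complaint_at + ACK_WINDOW,
        "resolve_by": complaint_at + RESOLVE_WINDOW,
    }

d = grievance_deadlines(datetime(2024, 1, 1, 9, 0))
assert d["acknowledge_by"] == datetime(2024, 1, 2, 9, 0)
assert d["resolve_by"] == datetime(2024, 1, 16, 9, 0)
```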
⚠️ Additional SSMI Requirements (50L+ Users)
  • Appoint Chief Compliance Officer (India-resident senior employee)
  • Appoint Nodal Contact Person (24x7 law enforcement coordination)
  • Appoint Resident Grievance Officer
  • Enable first originator traceability (for specified content)
  • Deploy technology for CSAM detection
  • Publish monthly compliance reports
  • Offer a voluntary user verification mechanism