Part 6 of 8

AI Ethics, Bias & Fairness

Understand legal implications of algorithmic bias, discrimination in AI systems, constitutional dimensions under Article 14, fairness auditing requirements, and DPDPA provisions on automated decision-making.

~90 minutes · 5 Sections · Audit Frameworks

6.1 Understanding Algorithmic Bias

Algorithmic bias occurs when AI systems produce systematically unfair outcomes for certain groups. Understanding bias sources is essential for compliance and ethical AI deployment.

Types of AI Bias

  • Historical Bias: training data reflects past discrimination. Example: a hiring AI trained on historical data in which women were underrepresented.
  • Representation Bias: training data underrepresents certain groups. Example: facial recognition trained mostly on lighter skin tones.
  • Measurement Bias: flawed metrics or proxies. Example: using zip code as a proxy for creditworthiness, which correlates with race.
  • Aggregation Bias: a one-size-fits-all model applied to diverse groups. Example: medical AI trained on one population, deployed on another.
  • Evaluation Bias: testing on non-representative data. Example: AI performs well on the test set but fails on real-world diversity.

High-Risk Areas for AI Bias

  • Hiring & Recruitment: Resume screening, interview evaluation
  • Lending & Credit: Loan approvals, credit scoring
  • Healthcare: Diagnosis, treatment recommendations, triage
  • Criminal Justice: Risk assessment, predictive policing
  • Insurance: Underwriting, claims processing
  • Education: Admissions, grading, proctoring
Key Insight

AI bias is not only a technical problem; it also has legal consequences. Discriminatory AI outcomes can violate constitutional equality guarantees, anti-discrimination laws, consumer protection statutes, and sector-specific regulations.

6.2 Constitutional Framework

Algorithmic discrimination engages fundamental rights under the Indian Constitution, particularly Article 14 (equality) and Article 15 (non-discrimination).

Article 14 - Right to Equality

"The State shall not deny to any person equality before the law or the equal protection of the laws within the territory of India." Article 14, Constitution of India

Application to AI Decisions

  • State Action: Government AI systems directly bound by Article 14
  • Horizontal Application: private AI systems are also constrained indirectly, through the DPDPA and consumer protection laws
  • Reasonable Classification: AI classifications must have rational nexus to purpose

Article 15 - Prohibited Discrimination

Article 15(1)
The State shall not discriminate against any citizen on grounds only of religion, race, caste, sex, place of birth or any of them.

AI systems must not discriminate based on protected characteristics, even indirectly through proxy variables.

Proxy Discrimination Problem

AI may use facially neutral variables that correlate with protected characteristics:

  • Pin Code → Caste/Religion: Residential segregation patterns
  • Name → Religion/Caste: Names often indicate community
  • Language → Region: Language preferences correlate with origin
  • Browsing History → Religion: Content consumption patterns
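The proxy problem above can be made concrete with a simple screening test: check how well a facially neutral feature predicts the protected attribute. This is a minimal sketch on synthetic data; `proxy_strength` and the records are illustrative, not part of any real audit toolkit:

```python
from collections import Counter, defaultdict

def proxy_strength(records, feature, protected):
    """Fraction of records whose protected attribute could be guessed
    correctly just by looking at the candidate proxy feature.
    A value near 1.0 means the feature nearly reveals the attribute."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature]][r[protected]] += 1
    # For each feature value, assume the most common community is guessed.
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

# Synthetic records: pin code tracks community fairly closely.
records = [
    {"pin": "110001", "community": "A"}, {"pin": "110001", "community": "A"},
    {"pin": "110001", "community": "B"}, {"pin": "560001", "community": "B"},
    {"pin": "560001", "community": "B"}, {"pin": "560001", "community": "A"},
]
print(round(proxy_strength(records, "pin", "community"), 2))
```

A value close to 1.0 suggests the feature is a near-perfect stand-in for the protected attribute and deserves scrutiny before it is used in a model.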
Legal Risk

Even if AI developers do not intend discrimination, using variables that serve as proxies for protected characteristics can constitute indirect discrimination. Courts may apply disparate impact analysis.

Article 21 - Privacy & Dignity

AI profiling and automated decisions also implicate Article 21 rights:

  • Puttaswamy (Justice K.S. Puttaswamy v. Union of India, 2017): informational privacy includes control over one's personal data
  • Dignity: AI decisions reducing persons to algorithmic scores may violate dignity
  • Due Process: Consequential AI decisions may require procedural safeguards

6.3 DPDPA & Automated Decision-Making

The Digital Personal Data Protection Act, 2023 contains provisions relevant to AI-based automated decision-making and profiling.

Sections 11-13 - Rights of the Data Principal

Data principals have rights that impact AI processing:

  • Right to Access: Know what personal data is being processed by AI
  • Right to Correction: Correct inaccurate data feeding AI decisions
  • Right to Erasure: Request deletion of data used for AI profiling
  • Grievance Redress: Challenge AI decisions affecting them

Consent Requirements

Under the DPDPA, AI processing of personal data generally requires valid consent (unless a specified legitimate use applies):

  1. Specific Purpose: Consent for AI processing must specify the purpose
  2. Informed: Data principal must understand AI will be used
  3. Freely Given: Not conditioned on unrelated services
  4. Withdrawable: Can withdraw consent from AI processing
Practical Implication

Privacy notices must disclose: (1) AI is used in processing, (2) Purpose of AI use, (3) Categories of data processed by AI, (4) Consequences of AI decisions. Vague disclosures may invalidate consent.

Significant Data Fiduciaries

Large-scale AI processors may be designated as Significant Data Fiduciaries with enhanced obligations:

  • Data Protection Impact Assessment: Mandatory for high-risk AI processing
  • Periodic Audits: Independent verification of compliance
  • Data Protection Officer: Dedicated officer for oversight

6.4 Fairness Auditing

Fairness audits assess AI systems for discriminatory outcomes. While not yet mandatory in India, they are increasingly expected as best practice and may become regulatory requirements.

Fairness Metrics

  • Demographic Parity: equal positive-outcome rates across groups. Use case: hiring, lending approvals.
  • Equal Opportunity: equal true positive rates across groups. Use case: risk assessment, detection systems.
  • Equalized Odds: equal true positive and false positive rates (TPR and FPR) across groups. Use case: criminal justice, medical diagnosis.
  • Calibration: predicted probabilities equally accurate across groups. Use case: credit scoring, insurance.
  • Individual Fairness: similar individuals treated similarly. Use case: personalized recommendations.
Impossibility Theorem

Mathematical research shows that, except in special cases (such as equal base rates across groups or a perfect predictor), different fairness metrics cannot all be satisfied simultaneously. Achieving demographic parity may sacrifice calibration, and vice versa. Organizations must choose, and document, which fairness criterion is most appropriate for their context.
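The tension between metrics is easy to demonstrate on toy data. The sketch below (illustrative function and numbers, not a real evaluation) computes each group's positive-prediction rate (the demographic-parity view) and true positive rate (the equal-opportunity view); here parity holds while TPRs diverge:

```python
def group_rates(y_true, y_pred, groups):
    """Per-group positive-prediction rate and true positive rate."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        # Demographic parity compares this rate across groups.
        pos_rate = sum(y_pred[i] for i in idx) / len(idx)
        # Equal opportunity compares TPR: predictions among true positives.
        pos_idx = [i for i in idx if y_true[i] == 1]
        tpr = sum(y_pred[i] for i in pos_idx) / len(pos_idx) if pos_idx else None
        out[g] = {"positive_rate": pos_rate, "tpr": tpr}
    return out

# Toy outcomes for two groups X and Y.
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
print(group_rates(y_true, y_pred, groups))
```

Both groups receive positive predictions at the same rate (0.5), yet group X's TPR is 1.0 against group Y's 0.5: the system satisfies demographic parity while violating equal opportunity.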

Audit Framework

  1. Define Protected Groups: Identify relevant characteristics (gender, caste, religion, region)
  2. Select Fairness Metrics: Choose appropriate measures for the use case
  3. Collect Disaggregated Data: Gather outcome data by protected groups
  4. Measure Disparities: Calculate metric differences across groups
  5. Root Cause Analysis: Identify sources of bias (data, features, model)
  6. Mitigation Strategies: Implement corrections (resampling, feature removal, algorithmic adjustments)
  7. Continuous Monitoring: Ongoing tracking of fairness metrics
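Step 4 (measuring disparities) is often operationalized as a disparate impact ratio. This is a minimal sketch with hypothetical numbers; the 0.8 cutoff is the US "four-fifths rule" convention, used here only as an illustrative benchmark since Indian law prescribes no fixed ratio:

```python
def disparate_impact_ratio(selected, total, reference_group):
    """Each group's selection rate divided by the reference group's rate.
    Ratios well below 1.0 flag a disparity worth root-cause analysis."""
    rates = {g: selected[g] / total[g] for g in total}
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical hiring-screen outcomes by group.
selected = {"group_a": 45, "group_b": 27}
total = {"group_a": 100, "group_b": 100}
ratios = disparate_impact_ratio(selected, total, "group_a")
print({g: round(r, 2) for g, r in ratios.items()})
```

Here group_b's ratio is 0.6, below the illustrative 0.8 benchmark, so under this framework the disparity would proceed to root-cause analysis and mitigation.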
Practice Advisory

Advise clients to conduct fairness audits before deployment and periodically thereafter. Document audit methodology, findings, and remediation actions. This creates evidence of good faith compliance with anti-discrimination obligations.

6.5 Ethical Frameworks & Governance

Beyond legal compliance, ethical AI governance helps organizations anticipate regulatory requirements and build stakeholder trust.

NITI Aayog Responsible AI Principles

India's seven principles for responsible AI provide an ethical framework:

  1. Safety & Reliability: Robust, safe AI throughout lifecycle
  2. Equality: Non-discriminatory AI outcomes
  3. Inclusivity: AI benefits accessible to all
  4. Privacy & Security: Data protection in AI systems
  5. Transparency: Explainable AI decisions
  6. Accountability: Clear responsibility for AI outcomes
  7. Positive Human Values: AI aligned with societal well-being

AI Ethics Committee

Organizations deploying high-risk AI should consider establishing AI ethics committees:

  • Composition: cross-functional (legal, technical, business, and external experts)
  • Mandate: Review AI projects for ethical risks, approve high-risk deployments
  • Authority: Power to halt or modify AI projects
  • Reporting: Regular reports to board/senior management

Bias Mitigation Strategies

Pre-Processing Approaches

  • Resampling training data for balance
  • Removing or modifying biased features
  • Synthetic data generation for underrepresented groups
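The first pre-processing approach, resampling for balance, can be sketched as simple random oversampling. This is an illustrative stdlib-only version (real pipelines typically use dedicated libraries):

```python
import random

def oversample_to_balance(records, group_key, seed=0):
    """Duplicate-sample smaller groups until every group matches the
    size of the largest group (simple random oversampling sketch)."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Draw with replacement to top the group up to the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [{"g": "A"}] * 8 + [{"g": "B"}] * 2
balanced = oversample_to_balance(data, "g")
counts = {g: sum(1 for r in balanced if r["g"] == g) for g in ("A", "B")}
print(counts)
```

After balancing, both groups contribute eight records, so the model no longer sees group B at a 4:1 disadvantage during training.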

In-Processing Approaches

  • Fairness constraints in optimization
  • Adversarial debiasing
  • Fair representation learning

Post-Processing Approaches

  • Threshold adjustment per group
  • Calibration adjustments
  • Human review for borderline cases
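The first post-processing approach, per-group threshold adjustment, converts model scores to decisions using a different cutoff for each group. A minimal sketch with hypothetical scores and thresholds:

```python
def apply_group_thresholds(scores, groups, thresholds):
    """Turn raw model scores into binary decisions using a per-group
    cutoff, e.g. thresholds tuned after an audit to narrow a TPR gap."""
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

scores = [0.62, 0.55, 0.58, 0.71]
groups = ["A", "A", "B", "B"]
# Hypothetical cutoffs chosen during fairness remediation.
decisions = apply_group_thresholds(scores, groups, {"A": 0.6, "B": 0.5})
print(decisions)
```

Note that group-conditional thresholds make explicit use of the protected attribute, which can itself raise legal questions; such adjustments should be documented and reviewed with counsel before deployment.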

Key Takeaways

  • AI bias has multiple sources: historical, representation, measurement, aggregation, evaluation
  • Constitutional: Article 14/15 apply to discriminatory AI, including proxy discrimination
  • DPDPA: Requires consent, transparency, and rights for AI-based processing
  • Fairness metrics are often mathematically incompatible; choose appropriate criteria for the context
  • Conduct fairness audits before deployment and periodically thereafter
  • Establish AI ethics governance: committees, policies, review processes
  • Bias mitigation through pre-processing, in-processing, and post-processing techniques
  • Document all bias assessment and mitigation efforts for compliance evidence