admissions@cyberlawacademy.com | +91-XXXXXXXXXX
Part 3 of 5

Prompt Types: Zero-shot, Few-shot, Chain-of-Thought

Different tasks demand different prompting strategies. Master the three fundamental approaches and learn when to apply each for optimal results in legal and professional contexts.

~70 minutes | 5 Sections | 9 Examples

3.1 Overview of Prompting Strategies

Prompting strategies determine how much guidance and context you provide to the AI. The right strategy depends on task complexity, desired output format, and how much the AI needs to "learn" your specific requirements within the conversation.

  • Zero-shot (0 examples): Direct request with no examples. Relies entirely on the model's pre-existing knowledge.
  • Few-shot (1-5 examples): Provide examples of desired input-output pairs. Teaches the pattern through demonstration.
  • Chain-of-Thought: Guide step-by-step reasoning. Essential for complex analysis and multi-part problems.

💡 Key Principle

These strategies aren't mutually exclusive. Advanced prompts often combine few-shot examples WITH chain-of-thought instructions. Start simple and add complexity only when needed.

3.2 Zero-shot Prompting

Zero-shot prompting asks the AI to perform a task without providing any examples. You rely entirely on the model's training and clear instructions. This is often sufficient for straightforward tasks where the expected output format is standard.

When to Use Zero-shot

  • Standard tasks: Summarization, translation, simple Q&A
  • Common formats: When the output format is well-established
  • Quick queries: When you need fast answers without setup
  • Exploration: Initial attempts before adding complexity
Zero-shot Example
Prompt:

Summarize the key provisions of Section 79 of the IT Act, 2000 regarding intermediary liability in India. Focus on the safe harbour protection requirements.
Why This Works
The task is straightforward (summarization), the subject is well-documented in the model's training data, and the desired format (summary of key provisions) is standard. No examples needed.

Zero-shot Limitations

  • May not match your specific format preferences
  • Inconsistent output structure across multiple queries
  • Less effective for specialized or nuanced tasks
  • May miss context-specific requirements
⚖️ Legal Practice Application

Zero-shot works well for: defining legal terms, explaining standard provisions, translating legal documents, and initial research queries. It's your starting point -- add complexity only if results aren't satisfactory.
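The zero-shot pattern can be sketched in a few lines of Python. This is a minimal illustration only; the `zero_shot_prompt` helper and its parameters are assumptions for this course, not part of any library:

```python
def zero_shot_prompt(task: str, focus: str = "") -> str:
    """Build a zero-shot prompt: one direct instruction, no examples.

    The model must rely entirely on its pre-existing knowledge.
    """
    prompt = task.strip()
    if focus:
        prompt += f" Focus on {focus}."
    return prompt


# Mirrors the Section 79 example above.
prompt = zero_shot_prompt(
    "Summarize the key provisions of Section 79 of the IT Act, 2000 "
    "regarding intermediary liability in India.",
    focus="the safe harbour protection requirements",
)
```

The returned string would then be sent to whichever model you use. No examples are attached, which is exactly what makes it zero-shot.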

3.3 Few-shot Prompting

Few-shot prompting provides the AI with examples of the desired input-output pattern before presenting your actual task. This "teaches" the model your specific requirements through demonstration, dramatically improving consistency and format adherence.

In-Context Learning
The ability of LLMs to learn new tasks from examples provided within the prompt itself, without any parameter updates or retraining. Few-shot prompting leverages this capability.

Structure of Few-shot Prompts

  1. Task description: Explain what you want done
  2. Example 1: Input -> Desired Output
  3. Example 2: Input -> Desired Output (optional)
  4. Example 3: Input -> Desired Output (for complex tasks)
  5. Actual input: Your real query
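The five-part structure above can be assembled mechanically. A minimal Python sketch, where the function name and the (scenario, labelled output) tuple format are illustrative assumptions:

```python
def few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, then worked
    input-output examples, then the actual input."""
    parts = [task.strip(), ""]
    for i, (scenario, labelled_output) in enumerate(examples, 1):
        parts += [f"Example {i}:", f"Scenario: {scenario}", labelled_output, ""]
    parts += ["Now analyze:", f"Scenario: {query}"]
    return "\n".join(parts)


# The issue-spotting examples from this section.
examples = [
    ("A company emails confidential customer data to the wrong recipient.",
     "Issue: Data breach and unauthorized disclosure\n"
     "Area: Data Protection Law (DPDPA/IT Act)"),
    ("A website uses a celebrity's photo without permission for advertising.",
     "Issue: Unauthorized use of personality rights\n"
     "Area: Intellectual Property / Tort Law"),
]
prompt = few_shot_prompt(
    "Identify the primary legal issue in the following scenarios and "
    "classify the applicable area of law.",
    examples,
    "An employee posts negative comments about their employer on social "
    "media, revealing trade secrets about an upcoming product.",
)
```

Because the examples demonstrate the `Issue:` / `Area:` format, the model is far more likely to reproduce it for the final scenario.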
Few-shot Example: Issue Spotting
Task: Identify the primary legal issue in the following scenarios and classify the applicable area of law.

Example 1:
Scenario: A company emails confidential customer data to the wrong recipient.
Issue: Data breach and unauthorized disclosure
Area: Data Protection Law (DPDPA/IT Act)

Example 2:
Scenario: A website uses a celebrity's photo without permission for advertising.
Issue: Unauthorized use of personality rights
Area: Intellectual Property / Tort Law

Now analyze:
Scenario: An employee posts negative comments about their employer on social media, revealing trade secrets about an upcoming product.
Expected Output Format
The AI will follow the demonstrated pattern:
Issue: [Identified issue]
Area: [Applicable law area]

How Many Examples?

Examples   When to Use                                   Trade-off
1-shot     Simple format requirements, standard tasks    Minimal token cost; may be insufficient
2-3 shot   Most practical applications, custom formats   Good balance of guidance and efficiency
4-5 shot   Complex patterns, high consistency needs      Higher token cost, diminishing returns
Best Practice

Make your examples diverse. If all examples are too similar, the AI may overfit to those specific patterns. Include variety in your examples to demonstrate the range of acceptable inputs and outputs.

3.4 Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting asks the AI to show its reasoning process step by step before arriving at a conclusion. This dramatically improves performance on complex reasoning tasks, mathematical problems, and multi-factor legal analysis.

Why CoT Works

When LLMs generate intermediate reasoning steps, they effectively "think aloud," which:

  • Breaks complex problems into manageable sub-problems
  • Reduces errors by making each step explicit and checkable
  • Allows you to identify where reasoning goes wrong
  • Produces more reliable conclusions on multi-step analysis
Chain-of-Thought Example: Legal Analysis
Question: Is this non-compete clause enforceable under Indian law?

Clause: "Employee agrees not to work for any competitor within India for 3 years after termination."

Think through this step by step:
1. First, identify the applicable legal provisions
2. Analyze the temporal scope (3 years)
3. Analyze the geographic scope (all of India)
4. Consider the restraint of trade doctrine under Section 27 of the Indian Contract Act
5. Apply relevant case law principles
6. Reach a conclusion with caveats
Why This Works Better
By explicitly requesting step-by-step analysis, you ensure the AI considers each factor systematically rather than jumping to a conclusion. This mirrors proper legal analysis methodology.

CoT Trigger Phrases

Simple phrases that activate chain-of-thought reasoning:

  • "Let's think through this step by step"
  • "Walk me through your reasoning"
  • "Analyze this systematically, showing your work"
  • "First... then... finally..."
  • "Break this down into components"
💡 Research Insight

Studies show that simply adding "Let's think step by step" to prompts can improve accuracy on reasoning tasks by 10-40%. The improvement is most significant for complex, multi-step problems.
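Both CoT styles, an explicit numbered step list and the bare trigger phrase, can be generated from one small helper. A sketch; the function name is an illustrative assumption:

```python
def cot_prompt(question, steps=None):
    """Wrap a question in a chain-of-thought scaffold: either an
    explicit numbered step list or the generic trigger phrase."""
    lines = [question.strip(), ""]
    if steps:
        lines.append("Think through this step by step:")
        lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    else:
        lines.append("Let's think step by step.")
    return "\n".join(lines)


# Explicit steps, abbreviated from the non-compete example above.
structured = cot_prompt(
    "Is this non-compete clause enforceable under Indian law?",
    steps=[
        "First, identify the applicable legal provisions",
        "Analyze the temporal scope (3 years)",
        "Reach a conclusion with caveats",
    ],
)
# Bare trigger phrase, per the research insight above.
simple = cot_prompt("What is the limitation period for this claim?")
```

Use the explicit step list when you already know the analytical sequence; fall back to the bare trigger phrase when you want the model to decompose the problem itself.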

3.5 Choosing the Right Strategy

Strategy selection depends on task complexity, format requirements, and the nature of the problem. Use this decision framework to choose appropriately.

Strategy Selection Guide

Task Characteristic       Zero-shot   Few-shot   CoT
Simple, standard task     Best        OK         Overkill
Custom output format      Risky       Best       OK
Multi-step reasoning      Poor        OK         Best
Classification task       OK          Best       OK
Legal analysis            Risky       OK         Best
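If you build prompt tooling, the selection guide can be transcribed into a small lookup table. A sketch; the ratings simply mirror the table above:

```python
# Suitability ratings transcribed from the Strategy Selection Guide.
SELECTION_GUIDE = {
    "simple, standard task": {"zero-shot": "Best",  "few-shot": "OK",   "cot": "Overkill"},
    "custom output format":  {"zero-shot": "Risky", "few-shot": "Best", "cot": "OK"},
    "multi-step reasoning":  {"zero-shot": "Poor",  "few-shot": "OK",   "cot": "Best"},
    "classification task":   {"zero-shot": "OK",    "few-shot": "Best", "cot": "OK"},
    "legal analysis":        {"zero-shot": "Risky", "few-shot": "OK",   "cot": "Best"},
}


def best_strategy(task_characteristic: str) -> str:
    """Return the strategy rated 'Best' for a given task characteristic."""
    ratings = SELECTION_GUIDE[task_characteristic.lower()]
    return next(s for s, rating in ratings.items() if rating == "Best")
```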

Combining Strategies

For complex legal work, combine strategies for optimal results:

Combined Strategy: Few-shot + CoT
Task: Analyze contract clauses for enforceability issues.

Example with reasoning:
Clause: "Disputes shall be resolved exclusively in Singapore courts."
Analysis:
Step 1: Identify clause type -> Jurisdiction clause
Step 2: Check if parties can contractually agree to foreign jurisdiction -> Generally yes, per Indian law
Step 3: Consider if exclusive jurisdiction ousts Indian courts entirely -> May be problematic
Step 4: Assess reasonableness and connection to transaction -> Singapore has banking connection
Conclusion: Likely enforceable but recommend non-exclusive wording as safer alternative.

Now analyze this clause with the same step-by-step approach:
Clause: [Your actual clause]
⚖️ Legal Practice Recommendation

For legal analysis tasks, default to Chain-of-Thought. The explicit reasoning lets you verify each analytical step and catch errors in logic, and it ensures the AI considers all relevant factors rather than jumping to conclusions.
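The combined pattern extends the few-shot idea so that each worked example carries its own reasoning steps. A minimal sketch; the function name and tuple layout are assumptions:

```python
def few_shot_cot_prompt(task, worked_examples, query):
    """Few-shot + CoT: each example demonstrates its reasoning,
    so the model imitates both the format and the analysis style."""
    parts = [task.strip(), ""]
    for clause, steps, conclusion in worked_examples:
        parts += ["Example with reasoning:", f"Clause: {clause}", "Analysis:"]
        parts += [f"Step {i}: {step}" for i, step in enumerate(steps, 1)]
        parts += [f"Conclusion: {conclusion}", ""]
    parts += ["Now analyze this clause with the same step-by-step approach:",
              f"Clause: {query}"]
    return "\n".join(parts)


# The jurisdiction-clause example from this section, abbreviated.
prompt = few_shot_cot_prompt(
    "Analyze contract clauses for enforceability issues.",
    [("Disputes shall be resolved exclusively in Singapore courts.",
      ["Identify clause type -> Jurisdiction clause",
       "Assess reasonableness and connection to transaction"],
      "Likely enforceable but recommend non-exclusive wording.")],
    "[Your actual clause]",
)
```

The worked example teaches the format and the reasoning depth at once, which is why this combination is so effective for legal review work.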

Key Takeaways

  • Zero-shot: Use for simple, standard tasks where format is established; fast but less controllable
  • Few-shot: Use when you need specific output formats or consistent classification; 2-3 examples usually suffice
  • Chain-of-Thought: Essential for complex reasoning, legal analysis, and multi-step problems
  • Strategies can be combined -- few-shot examples that demonstrate CoT reasoning are a powerful pattern
  • "Let's think step by step" is often the single most impactful phrase for improving complex task performance
  • Start simple (zero-shot), add complexity only when results are unsatisfactory