3.1 Overview of Prompting Strategies
Prompting strategies determine how much guidance and context you provide to the AI. The right strategy depends on task complexity, desired output format, and how much the AI needs to "learn" your specific requirements within the conversation.
- Zero-shot: Direct request with no examples. Relies entirely on the model's pre-existing knowledge.
- Few-shot: Provide examples of desired input-output pairs. Teaches the pattern through demonstration.
- Chain-of-Thought: Guide step-by-step reasoning. Essential for complex analysis and multi-part problems.
These strategies aren't mutually exclusive. Advanced prompts often combine few-shot examples with chain-of-thought instructions. Start simple and add complexity only when needed.
3.2 Zero-shot Prompting
Zero-shot prompting asks the AI to perform a task without providing any examples. You rely entirely on the model's training and clear instructions. This is often sufficient for straightforward tasks where the expected output format is standard.
When to Use Zero-shot
- Standard tasks: Summarization, translation, simple Q&A
- Common formats: When the output format is well-established
- Quick queries: When you need fast answers without setup
- Exploration: Initial attempts before adding complexity
Summarize the key provisions of Section 79 of the IT Act, 2000 regarding intermediary liability in India. Focus on the safe harbour protection requirements.
Zero-shot Limitations
- May not match your specific format preferences
- Inconsistent output structure across multiple queries
- Less effective for specialized or nuanced tasks
- May miss context-specific requirements
Zero-shot works well for: defining legal terms, explaining standard provisions, translating legal documents, and initial research queries. It's your starting point -- add complexity only if results aren't satisfactory.
3.3 Few-shot Prompting
Few-shot prompting provides the AI with examples of the desired input-output pattern before presenting your actual task. This "teaches" the model your specific requirements through demonstration, dramatically improving consistency and format adherence.
Structure of Few-shot Prompts
- Task description: Explain what you want done
- Example 1: Input -> Desired Output
- Example 2: Input -> Desired Output (optional)
- Example 3: Input -> Desired Output (for complex tasks)
- Actual input: Your real query
Example 1:
Scenario: A company emails confidential customer data to the wrong recipient.
Issue: Data breach and unauthorized disclosure
Area: Data Protection Law (DPDPA/IT Act)
Example 2:
Scenario: A website uses a celebrity's photo without permission for advertising.
Issue: Unauthorized use of personality rights
Area: Intellectual Property / Tort Law
Now analyze:
Scenario: An employee posts negative comments about their employer on social media, revealing trade secrets about an upcoming product.
Issue: [Identified issue]
Area: [Applicable law area]
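The structure above can be sketched as a small helper that assembles a few-shot prompt from worked examples. This is an illustrative sketch only: the function name, the `scenario`/`issue`/`area` keys, and the bracketed placeholders are assumptions for demonstration, not part of any library's API.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the real query."""
    parts = [task, ""]
    for i, ex in enumerate(examples, 1):
        parts += [
            f"Example {i}:",
            f"Scenario: {ex['scenario']}",
            f"Issue: {ex['issue']}",
            f"Area: {ex['area']}",
            "",
        ]
    # End with the actual input, leaving the output fields for the model to fill in.
    parts += [
        "Now analyze:",
        f"Scenario: {query}",
        "Issue: [Identified issue]",
        "Area: [Applicable law area]",
    ]
    return "\n".join(parts)

# The two examples from the text, expressed as data (illustrative structure).
examples = [
    {"scenario": "A company emails confidential customer data to the wrong recipient.",
     "issue": "Data breach and unauthorized disclosure",
     "area": "Data Protection Law (DPDPA/IT Act)"},
    {"scenario": "A website uses a celebrity's photo without permission for advertising.",
     "issue": "Unauthorized use of personality rights",
     "area": "Intellectual Property / Tort Law"},
]

prompt = build_few_shot_prompt(
    "Classify each scenario by legal issue and applicable area of law.",
    examples,
    "An employee posts negative comments about their employer on social media, "
    "revealing trade secrets about an upcoming product.",
)
print(prompt)
```

Keeping the examples as data makes it easy to swap in a different example set, or to add a third example when two prove insufficient.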
How Many Examples?
| Examples | When to Use | Trade-off |
|---|---|---|
| 1-shot | Simple format requirements, standard tasks | Minimal token cost, may be insufficient |
| 2-3 shot | Most practical applications, custom formats | Good balance of guidance and efficiency |
| 4-5 shot | Complex patterns, high consistency needs | Higher token cost, diminishing returns |
Make your examples diverse. If all examples are too similar, the AI may overfit to those specific patterns. Include variety in your examples to demonstrate the range of acceptable inputs and outputs.
3.4 Chain-of-Thought Prompting
Chain-of-Thought (CoT) prompting asks the AI to show its reasoning process step by step before arriving at a conclusion. This dramatically improves performance on complex reasoning tasks, mathematical problems, and multi-factor legal analysis.
Why CoT Works
When LLMs generate intermediate reasoning steps, they effectively "think aloud," which:
- Breaks complex problems into manageable sub-problems
- Reduces errors by making each step explicit and checkable
- Allows you to identify where reasoning goes wrong
- Produces more reliable conclusions on multi-step analysis
Clause: "Employee agrees not to work for any competitor within India for 3 years after termination."
Think through this step by step:
1. First, identify the applicable legal provisions
2. Analyze the temporal scope (3 years)
3. Analyze the geographic scope (all of India)
4. Consider the restraint of trade doctrine under Section 27 of the Indian Contract Act
5. Apply relevant case law principles
6. Reach a conclusion with caveats
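A checklist like the one above can be turned into a reusable template. The sketch below is an assumption-laden illustration (the function name and step list are ours, not a standard API): it simply numbers the reasoning steps and prepends the clause.

```python
def build_cot_prompt(clause, steps):
    """Wrap a contract clause in a chain-of-thought prompt with numbered reasoning steps."""
    lines = [f'Clause: "{clause}"', "", "Think through this step by step:"]
    # Number each reasoning step so the model's answer can mirror the structure.
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)

# Reasoning steps for non-compete analysis, taken from the checklist above.
steps = [
    "First, identify the applicable legal provisions",
    "Analyze the temporal scope",
    "Analyze the geographic scope",
    "Consider the restraint of trade doctrine under Section 27 of the Indian Contract Act",
    "Apply relevant case law principles",
    "Reach a conclusion with caveats",
]

print(build_cot_prompt(
    "Employee agrees not to work for any competitor within India for 3 years after termination.",
    steps,
))
```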
CoT Trigger Phrases
Simple phrases that activate chain-of-thought reasoning:
- "Let's think through this step by step"
- "Walk me through your reasoning"
- "Analyze this systematically, showing your work"
- "First... then... finally..."
- "Break this down into components"
Studies show that simply adding "Let's think step by step" to prompts can improve accuracy on reasoning tasks by 10-40%. The improvement is most significant for complex, multi-step problems.
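In code, the cheapest form of chain-of-thought is appending one of these trigger phrases to an otherwise zero-shot prompt. A minimal sketch (function name is illustrative):

```python
COT_TRIGGER = "Let's think through this step by step."

def with_cot(prompt, trigger=COT_TRIGGER):
    """Append a chain-of-thought trigger phrase to any prompt."""
    return f"{prompt.rstrip()}\n\n{trigger}"

print(with_cot(
    "Is a 3-year, India-wide non-compete enforceable against a former employee?"
))
```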
3.5 Choosing the Right Strategy
Strategy selection depends on task complexity, format requirements, and the nature of the problem. As a rule of thumb: start with zero-shot for standard tasks, move to few-shot when the output format matters, and use chain-of-thought when the reasoning itself matters.
Combining Strategies
For complex legal work, combine strategies for optimal results:
Example with reasoning:
Clause: "Disputes shall be resolved exclusively in Singapore courts."
Analysis:
Step 1: Identify clause type -> Jurisdiction clause
Step 2: Check if parties can contractually agree to foreign jurisdiction -> Generally yes, per Indian law
Step 3: Consider if exclusive jurisdiction ousts Indian courts entirely -> May be problematic
Step 4: Assess reasonableness and connection to transaction -> Singapore has banking connection
Conclusion: Likely enforceable but recommend non-exclusive wording as safer alternative.
Now analyze this clause with the same step-by-step approach:
Clause: [Your actual clause]
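The combined pattern above (one worked reasoning example, then the real clause) can also be scripted. The helper below is a sketch under the same illustrative assumptions as before; the worked example is reproduced verbatim from the text.

```python
# Worked chain-of-thought example, copied from the text above.
WORKED_EXAMPLE = """\
Clause: "Disputes shall be resolved exclusively in Singapore courts."

Analysis:
Step 1: Identify clause type -> Jurisdiction clause
Step 2: Check if parties can contractually agree to foreign jurisdiction -> Generally yes, per Indian law
Step 3: Consider if exclusive jurisdiction ousts Indian courts entirely -> May be problematic
Step 4: Assess reasonableness and connection to transaction -> Singapore has banking connection
Conclusion: Likely enforceable but recommend non-exclusive wording as safer alternative."""

def few_shot_cot_prompt(worked_example, clause):
    """Combine one worked chain-of-thought example with the clause to analyze."""
    return (
        "Example with reasoning:\n"
        f"{worked_example}\n\n"
        "Now analyze this clause with the same step-by-step approach:\n"
        f'Clause: "{clause}"'
    )

print(few_shot_cot_prompt(
    WORKED_EXAMPLE,
    "All disputes shall be referred to arbitration seated in London.",
))
```

One worked example showing the reasoning format is often enough here; the chain-of-thought steps themselves carry most of the guidance.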
For legal analysis tasks, default to Chain-of-Thought. The explicit reasoning lets you verify each analytical step and catch errors in logic, and it makes the AI more likely to consider all relevant factors rather than jumping to conclusions.
Key Takeaways
- Zero-shot: Use for simple, standard tasks where format is established; fast but less controllable
- Few-shot: Use when you need specific output formats or consistent classification; 2-3 examples usually suffice
- Chain-of-Thought: Essential for complex reasoning, legal analysis, and multi-step problems
- Strategies can be combined -- few-shot examples showing CoT reasoning is a powerful pattern
- "Let's think step by step" is often the single most impactful phrase for improving complex task performance
- Start simple (zero-shot), add complexity only when results are unsatisfactory
