Part 5 of 5

Common Pitfalls and Debugging Prompts

Even well-crafted prompts can fail. Learn to identify the most common prompting mistakes, diagnose problems in AI outputs, and systematically improve your prompts through iterative refinement.

~55 minutes · 5 Sections · Debug Checklist

5.1 The Seven Deadly Prompting Sins

These common mistakes derail AI interactions. Learning to recognize and avoid them will dramatically improve your results and save time on prompt iterations.

1. Vague Instructions

Ambiguous requests force the AI to guess what you want, leading to off-target responses.

Vague: "Help me with this contract."
Specific: "Review this contract's indemnification clause (Section 8) and identify provisions that expose the service provider to unlimited liability."
Fix

Always specify: (1) What action to take, (2) What section/aspect to focus on, (3) What you want to learn or achieve.
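The three-element checklist above can be enforced in tooling with a small helper that refuses to build a vague prompt. This is an illustrative sketch; the function and field names are not from any particular library.

```python
def build_prompt(action: str, focus: str, goal: str) -> str:
    """Assemble a specific prompt from the three required elements.

    Raises ValueError if any element is missing, so vague prompts
    fail fast instead of reaching the model.
    """
    for name, value in [("action", action), ("focus", focus), ("goal", goal)]:
        if not value.strip():
            raise ValueError(f"Missing prompt element: {name}")
    return f"{action} Focus on {focus}. I want to {goal}."

prompt = build_prompt(
    action="Review this contract's indemnification clause.",
    focus="Section 8",
    goal="identify provisions exposing the service provider to unlimited liability",
)
```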

2. Missing Context

Without sufficient background, the AI applies generic knowledge instead of domain-specific expertise.

No Context: "Is this force majeure clause valid?"
With Context: "Under Indian contract law, analyze whether this force majeure clause would be enforceable. The contract is between two Indian companies for IT services, and the clause was triggered due to a ransomware attack."
Fix

Include jurisdiction, party types, relevant facts, and the specific legal question you need answered.
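The same context fields can be templated so no legal question goes out without them. A minimal sketch; the field names are illustrative, not a standard schema.

```python
def legal_prompt(question: str, jurisdiction: str, parties: str, facts: str) -> str:
    """Bundle jurisdiction, parties, and facts ahead of the legal question."""
    return (
        f"Under {jurisdiction}, {question}\n"
        f"Parties: {parties}\n"
        f"Relevant facts: {facts}"
    )

prompt = legal_prompt(
    question="analyze whether this force majeure clause would be enforceable.",
    jurisdiction="Indian contract law",
    parties="two Indian companies contracting for IT services",
    facts="the clause was triggered due to a ransomware attack",
)
```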

3. Overloaded Prompts

Asking for too many things at once overwhelms the model and produces incomplete responses.

Overloaded: "Review this contract, identify all issues, suggest fixes, compare to market standard, provide a risk score, draft a negotiation letter, and summarize for the client."
Fix

Break complex tasks into sequential prompts. First identify issues, then address each issue type in follow-up prompts. Quality over quantity.
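The sequential approach can be sketched as an ordered pipeline of focused prompts, where each step's answer feeds the next turn. The prompt texts and names below are illustrative.

```python
# Break the overloaded request into one focused prompt per step.
PIPELINE = [
    "Identify all issues in the attached contract. List them only; no fixes yet.",
    "For each issue identified above, suggest revised clause language.",
    "Compare the revised clauses to market-standard terms.",
    "Summarize the findings for the client in plain language.",
]

def next_prompt(step: int) -> str:
    """Return the prompt for the given pipeline step (0-based)."""
    if not 0 <= step < len(PIPELINE):
        raise IndexError("pipeline finished")
    return PIPELINE[step]
```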

4. Assuming Knowledge Cutoff Awareness

Asking about recent events, new laws, or current information that post-dates the model's training.

Problematic: "What are the latest amendments to the IT Act passed this year?"
Fix

Always specify the relevant time period. Better: "What amendments to the IT Act were made between 2020-2023?" Even better: provide the recent amendment text and ask for analysis.

5. Trusting Citations Without Verification

Accepting case names, section numbers, or legal references without independently verifying them.

Critical Rule

NEVER submit AI-generated legal citations to court without verification. Ask the AI to explain the legal principle WITHOUT citing specific cases, then find the citations yourself. Multiple lawyers have faced sanctions for fake AI citations.

6. Ignoring Output Format

Not specifying how you want information structured leads to unusable responses.

No Format: "Tell me about the issues in this contract."
Formatted: "Identify issues in this contract. For each issue: (1) Quote the problematic clause, (2) Explain the risk, (3) Suggest revised language. Present as a numbered list."

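Format instructions like the example above are easy to generate programmatically so they stay consistent across prompts. A sketch under illustrative names:

```python
def with_format(task: str, fields: list[str], layout: str = "a numbered list") -> str:
    """Append explicit per-item format instructions to a task prompt."""
    # Builds "(1) ..., (2) ..., (3) ..." from the requested fields.
    steps = " ".join(f"({i}) {f}," for i, f in enumerate(fields, 1)).rstrip(",")
    return f"{task} For each issue: {steps}. Present as {layout}."

p = with_format(
    "Identify issues in this contract.",
    ["Quote the problematic clause", "Explain the risk", "Suggest revised language"],
)
```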
7. Single-Shot Mindset

Expecting perfect results from one prompt instead of iterating and refining.

Fix

Treat AI interaction as a conversation. Start with an initial prompt, evaluate the response, then ask follow-up questions or request modifications. "That's helpful, but can you expand on point 3?" is a valid and useful follow-up.
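Treating the interaction as a conversation means accumulating turns rather than rewriting a single prompt. The message shape below mirrors the role/content structure common to chat APIs; the content strings are illustrative.

```python
# A conversation is a growing list of messages; each follow-up refines
# the model's understanding rather than starting over.
conversation = [
    {"role": "user", "content": "Review this NDA's confidentiality clause."},
    {"role": "assistant", "content": "...model response..."},
]

def follow_up(history: list[dict], text: str) -> list[dict]:
    """Append a refinement turn to the existing conversation."""
    return history + [{"role": "user", "content": text}]

conversation = follow_up(conversation, "That's helpful, but can you expand on point 3?")
```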

5.2 Diagnosing Problem Outputs

When the AI's response misses the mark, systematic diagnosis helps identify what went wrong. Different symptoms point to different causes.

Symptom | Likely Cause | Solution
Response is too generic | Insufficient context or specifics | Add jurisdiction, facts, constraints
Wrong topic or angle | Ambiguous instructions | Clarify exactly what you want analyzed
Too short/superficial | No depth or detail requested | Ask for comprehensive analysis, specific length
Too long/rambling | No constraints on length | Specify word count or "be concise"
Wrong format | No format specified | Explicitly state desired structure
Factually incorrect | Hallucination or outdated info | Verify independently; provide source material
Contradicts itself | Conflicting instructions | Review prompt for conflicting requirements
Refuses to answer | Safety filters triggered | Reframe request; provide legitimate context

Debugging Technique

Ask the AI to explain its interpretation: "Before you answer, tell me how you understood my question." This reveals misalignments between your intent and the AI's understanding before it commits to a potentially wrong response.
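The symptom-to-solution mapping can be captured as a small lookup for triaging failed outputs mechanically. The entries paraphrase the table; the names are illustrative.

```python
# Map observed symptoms to the targeted fix from the diagnosis table.
DIAGNOSIS = {
    "too generic": "Add jurisdiction, facts, constraints",
    "wrong topic": "Clarify exactly what you want analyzed",
    "too short": "Ask for comprehensive analysis, specific length",
    "too long": 'Specify word count or "be concise"',
    "wrong format": "Explicitly state desired structure",
    "factually incorrect": "Verify independently; provide source material",
    "contradicts itself": "Review prompt for conflicting requirements",
    "refuses to answer": "Reframe request; provide legitimate context",
}

def suggest_fix(symptom: str) -> str:
    """Fall back to the interpretation-check technique for unknown symptoms."""
    return DIAGNOSIS.get(symptom, "Ask the model how it interpreted the question")
```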

5.3 The Iterative Refinement Process

Expert prompt engineers rarely get perfect results on the first try. They follow a systematic process of testing, evaluating, and refining their prompts.

The TEAR Cycle: Test, Evaluate, Adjust, Repeat

  • Test: Run your prompt and observe the full output. Don't stop reading at the first good paragraph -- issues often appear later.
  • Evaluate: Compare output to your ideal. What's good? What's missing? What's wrong? What's the most critical gap?
  • Adjust: Make ONE focused change to address the most critical issue. Don't change multiple things at once -- you won't know what worked.
  • Repeat: Test again with the adjusted prompt. Continue until output meets requirements or you identify the prompt's limits.
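The cycle can be sketched as a loop that makes exactly one adjustment per round. All three callables here are illustrative stubs standing in for the model call and your own judgment.

```python
from typing import Callable, Optional

def tear_cycle(
    prompt: str,
    run: Callable[[str], str],        # stub: sends the prompt to the model
    evaluate: Callable[[str], Optional[str]],  # returns the most critical gap, or None
    adjust: Callable[[str, str], str],         # rewrites the prompt for that ONE gap
    max_rounds: int = 5,
) -> str:
    """Test -> Evaluate -> Adjust -> Repeat, one focused change per round."""
    output = run(prompt)              # Test
    for _ in range(max_rounds):
        gap = evaluate(output)        # Evaluate
        if gap is None:
            return output             # requirements met
        prompt = adjust(prompt, gap)  # Adjust: one change only
        output = run(prompt)          # Repeat
    return output                     # reached the prompt's limits
```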

Effective Follow-up Prompts

  • "Expand on point X" - Get more detail on a specific aspect
  • "That's not quite right because..." - Correct a misunderstanding
  • "Now apply this to the specific facts I provided" - Ground abstract analysis
  • "Can you restructure this as..." - Change output format
  • "What did you assume about X?" - Surface hidden assumptions
  • "Be more specific about..." - Reduce generality
💡 Key Insight

The conversation history IS part of your prompt. Each follow-up refines the AI's understanding. A 5-turn conversation with targeted refinements often produces better results than any single "perfect" prompt.

5.4 Verification and Quality Control

Professional use of AI requires systematic verification. Develop habits that catch errors before they cause problems.

The Verification Checklist

  1. Factual claims: Can each factual statement be independently verified?
  2. Legal citations: Does each cited case/statute actually exist? Does it say what the AI claims?
  3. Logic flow: Does the reasoning follow logically from premises to conclusion?
  4. Completeness: Are there obvious issues or factors the AI missed?
  5. Currency: Is the analysis based on current law, or outdated provisions?
  6. Bias check: Is the analysis balanced, or does it favor one interpretation?
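The six-question checklist can be made a hard gate in a review workflow: any unanswered or failed item blocks the draft. A minimal sketch; the item wording paraphrases the checklist and the function names are illustrative.

```python
# The six verification questions; any "no" blocks the draft from leaving review.
CHECKLIST = [
    "Can each factual statement be independently verified?",
    "Does each cited case/statute exist and say what is claimed?",
    "Does the reasoning follow logically from premises to conclusion?",
    "Are there obvious issues or factors the AI missed?",
    "Is the analysis based on current law?",
    "Is the analysis balanced across interpretations?",
]

def passes_review(answers: list) -> bool:
    """Require a yes/no answer for every item; pass only if all are yes."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("answer every checklist item")
    return all(answers)
```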
⚠️ Professional Responsibility

You, not the AI, are responsible for work product. "The AI said so" is not a defense to professional liability. Every AI output used in professional work must be reviewed and verified to the same standard as work from a junior associate.

Using AI to Check AI

A useful technique: ask a second AI (or the same AI in a new conversation) to critique the first response.

  • "Review this analysis and identify any logical flaws or missing considerations"
  • "Play devil's advocate: what's wrong with this conclusion?"
  • "What would a judge likely question about this argument?"
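The second-pass critique can be templated so the first response is always wrapped in a fresh, skeptical framing. A sketch with illustrative names and default wording:

```python
def critique_prompt(analysis: str,
                    angle: str = "logical flaws or missing considerations") -> str:
    """Wrap a first-pass analysis in a fresh-context critique request."""
    return (
        f"Review the following analysis and identify any {angle}.\n"
        "Do not assume the analysis is correct; be specific about weaknesses.\n"
        "---\n"
        f"{analysis}"
    )

review = critique_prompt("The indemnity clause is enforceable as drafted.")
```

Running this in a new conversation (or a second model) avoids the first session's context anchoring the critique.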

5.5 Best Practices Summary

Consolidating the lessons from this module into actionable best practices for daily use.

Before Prompting

  • Clearly define what you need from this interaction
  • Identify the minimum context required for a useful response
  • Decide on the output format that will be most useful
  • Consider which prompting strategy fits the task (zero-shot, few-shot, CoT)

While Prompting

  • Start with the most important information first
  • Be explicit about what you want and don't want
  • Use clear structure (numbered lists, sections)
  • Specify constraints (length, format, jurisdiction)
  • For complex tasks, use chain-of-thought reasoning

After Receiving Output

  • Read the entire response, not just the beginning
  • Verify all factual claims and citations independently
  • Ask follow-up questions to address gaps
  • Iterate if the response doesn't meet your needs
  • Document successful prompts for reuse

"The goal is not to write the perfect prompt on the first try. The goal is to quickly converge on a prompt that reliably produces the output you need."
-- Prompt Engineering Best Practice

Key Takeaways

  • The "Seven Deadly Sins": vague instructions, missing context, overloading, cutoff unawareness, trusting citations, ignoring format, single-shot mindset
  • Diagnose problems systematically: match symptoms to causes for targeted fixes
  • Use the TEAR cycle: Test, Evaluate, Adjust, Repeat -- change ONE thing at a time
  • NEVER trust legal citations without independent verification -- hallucination is common
  • Professional responsibility remains with you; AI is a tool, not a substitute for judgment
  • Build a library of successful prompts for common tasks to improve efficiency over time