5.1 The Seven Deadly Prompting Sins
These common mistakes derail AI interactions. Learning to recognize and avoid them will dramatically improve your results and save time on prompt iterations.
Vague Instructions
Ambiguous requests force the AI to guess what you want, leading to off-target responses.
Always specify: (1) What action to take, (2) What section/aspect to focus on, (3) What you want to learn or achieve.
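The three required elements can be enforced mechanically with a small template. This sketch is illustrative only; the function name, fields, and example values are invented for this module, not part of any library.

```python
def build_prompt(action: str, focus: str, goal: str) -> str:
    """Combine the three required elements into one unambiguous instruction."""
    return (
        f"Task: {action}.\n"
        f"Focus on: {focus}.\n"
        f"Goal: {goal}."
    )

# Example: a contract-review request with all three elements stated explicitly.
prompt = build_prompt(
    action="Summarize the indemnification clause",
    focus="Section 7 of the attached services agreement",
    goal="identify which party bears third-party IP claims",
)
```

Filling the template forces you to articulate each element before sending the prompt; if you cannot fill a slot, the request is not yet specific enough.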
Missing Context
Without sufficient background, the AI applies generic knowledge instead of domain-specific expertise.
Include jurisdiction, party types, relevant facts, and the specific legal question you need answered.
Overloaded Prompts
Asking for too many things at once overwhelms the model and produces incomplete responses.
Break complex tasks into sequential prompts. First identify issues, then address each issue type in follow-up prompts. Quality over quantity.
Assuming Knowledge Cutoff Awareness
Asking about recent events, new laws, or current information that post-dates the model's training.
Always specify the relevant time period. Better: "What amendments to the IT Act were made between 2020 and 2023?" Even better: provide the recent amendment text and ask for analysis.
Trusting Citations Without Verification
Accepting case names, section numbers, or legal references without independently verifying them.
NEVER submit AI-generated legal citations to court without verification. Ask the AI to explain the legal principle WITHOUT citing specific cases, then find the citations yourself. Multiple lawyers have faced sanctions for fake AI citations.
Ignoring Output Format
Not specifying how you want information structured leads to unusable responses. State the structure you need up front: a comparison table, a numbered list of issues, a client-ready memo, or bullet points under headings.
Single-Shot Mindset
Expecting perfect results from one prompt instead of iterating and refining.
Treat AI interaction as a conversation. Start with an initial prompt, evaluate the response, then ask follow-up questions or request modifications. "That's helpful, but can you expand on point 3?" is a valid and useful follow-up.
5.2 Diagnosing Problem Outputs
When the AI's response misses the mark, systematic diagnosis helps identify what went wrong. Different symptoms point to different causes.
| Symptom | Likely Cause | Solution |
|---|---|---|
| Response is too generic | Insufficient context or specifics | Add jurisdiction, facts, constraints |
| Wrong topic or angle | Ambiguous instructions | Clarify exactly what you want analyzed |
| Too short/superficial | No depth or detail requested | Ask for comprehensive analysis, specific length |
| Too long/rambling | No constraints on length | Specify word count or "be concise" |
| Wrong format | No format specified | Explicitly state desired structure |
| Factually incorrect | Hallucination or outdated info | Verify independently; provide source material |
| Contradicts itself | Conflicting instructions | Review prompt for conflicting requirements |
| Refuses to answer | Safety filters triggered | Reframe request; provide legitimate context |
Ask the AI to explain its interpretation: "Before you answer, tell me how you understood my question." This reveals misalignments between your intent and the AI's understanding before it commits to a potentially wrong response.
5.3 The Iterative Refinement Process
Expert prompt engineers rarely get perfect results on the first try. They follow the TEAR cycle: Test a prompt, Evaluate the output, Adjust one variable, Repeat. Changing only one thing at a time makes it clear which change produced which improvement.
Effective Follow-up Prompts
- "Expand on point X" - Get more detail on a specific aspect
- "That's not quite right because..." - Correct a misunderstanding
- "Now apply this to the specific facts I provided" - Ground abstract analysis
- "Can you restructure this as..." - Change output format
- "What did you assume about X?" - Surface hidden assumptions
- "Be more specific about..." - Reduce generality
The conversation history IS part of your prompt. Each follow-up refines the AI's understanding. A 5-turn conversation with targeted refinements often produces better results than any single "perfect" prompt.
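In chat-based interfaces the history is literally resent with every turn, typically as a list of role-tagged messages. The sketch below follows the common chat-completions convention (`role`/`content` dictionaries); exact field names vary by provider, and the message text is invented for illustration.

```python
# A multi-turn conversation as a message list; every turn stays in context.
conversation = [
    {"role": "user", "content": "Identify the key risks in this license clause: ..."},
    {"role": "assistant", "content": "The main risks are (1) unlimited liability, ..."},
    {"role": "user", "content": "That's helpful, but can you expand on point 1?"},
]

def add_turn(history: list, role: str, content: str) -> list:
    # Each follow-up is appended, so the model sees every prior refinement.
    history.append({"role": role, "content": content})
    return history

add_turn(conversation, "user", "Now apply this to the facts I provided earlier.")
```

Because the whole list is sent each turn, a targeted follow-up is cheaper and more reliable than restarting with a longer single prompt.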
5.4 Verification and Quality Control
Professional use of AI requires systematic verification. Develop habits that catch errors before they cause problems.
The Verification Checklist
- Factual claims: Can each factual statement be independently verified?
- Legal citations: Does each cited case/statute actually exist? Does it say what the AI claims?
- Logic flow: Does the reasoning follow logically from premises to conclusion?
- Completeness: Are there obvious issues or factors the AI missed?
- Currency: Is the analysis based on current law, or outdated provisions?
- Bias check: Is the analysis balanced, or does it favor one interpretation?
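The checklist above can be tracked as data so nothing is skipped under deadline pressure. This is a minimal sketch; the check names simply track the bullet list and carry no standard meaning.

```python
# Pre-filing verification checks, mirroring the checklist above.
CHECKS = [
    "factual_claims_verified",
    "citations_exist_and_support_claims",
    "reasoning_follows_from_premises",
    "no_obvious_factors_missed",
    "based_on_current_law",
    "analysis_is_balanced",
]

def outstanding_checks(completed: set) -> list:
    """Return the checks still open; an empty list means the draft may proceed."""
    return [c for c in CHECKS if c not in completed]
```

Running the function before sign-off turns "I think I checked everything" into an explicit, auditable list of what remains.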
You, not the AI, are responsible for work product. "The AI said so" is not a defense to professional liability. Every AI output used in professional work must be reviewed and verified to the same standard as work from a junior associate.
Using AI to Check AI
A useful technique: ask a second AI (or the same AI in a new conversation) to critique the first response.
- "Review this analysis and identify any logical flaws or missing considerations"
- "Play devil's advocate: what's wrong with this conclusion?"
- "What would a judge likely question about this argument?"
5.5 Best Practices Summary
This section consolidates the lessons from this module into actionable best practices for daily use.
Before Prompting
- Clearly define what you need from this interaction
- Identify the minimum context required for a useful response
- Decide on the output format that will be most useful
- Consider which prompting strategy fits the task (zero-shot, few-shot, CoT)
While Prompting
- Start with the most important information first
- Be explicit about what you want and don't want
- Use clear structure (numbered lists, sections)
- Specify constraints (length, format, jurisdiction)
- For complex tasks, use chain-of-thought reasoning
After Receiving Output
- Read the entire response, not just the beginning
- Verify all factual claims and citations independently
- Ask follow-up questions to address gaps
- Iterate if the response doesn't meet your needs
- Document successful prompts for reuse
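A documented prompt library can be as simple as named string templates with placeholders. The template names, placeholder fields, and wording below are hypothetical examples, not prescribed formats.

```python
# A reusable prompt library; placeholders are filled per matter.
LIBRARY = {
    "issue_spotting": (
        "You are reviewing a {document_type} under {jurisdiction} law. "
        "List every legal issue you can identify, one per line, most serious first."
    ),
    "clause_summary": (
        "Summarize clause {clause} of the attached {document_type} in plain "
        "English, in under {word_limit} words."
    ),
}

def render(name: str, **fields) -> str:
    """Fill a stored template; raises KeyError if a placeholder is missing."""
    return LIBRARY[name].format(**fields)

prompt = render("clause_summary", clause="7.2",
                document_type="lease", word_limit=150)
```

Storing prompts this way makes successful phrasings reusable and lets you refine one template centrally instead of retyping it from memory each time.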
"The goal is not to write the perfect prompt on the first try. The goal is to quickly converge on a prompt that reliably produces the output you need." (Prompt Engineering Best Practice)
Key Takeaways
- The "Seven Deadly Sins": vague instructions, missing context, overloading, cutoff unawareness, trusting citations, ignoring format, single-shot mindset
- Diagnose problems systematically: match symptoms to causes for targeted fixes
- Use the TEAR cycle: Test, Evaluate, Adjust, Repeat -- change ONE thing at a time
- NEVER trust legal citations without independent verification -- hallucination is common
- Professional responsibility remains with you; AI is a tool, not a substitute for judgment
- Build a library of successful prompts for common tasks to improve efficiency over time
