Master specification writing to produce precise, unambiguous instructions that yield consistent, accurate AI outputs.
Specification writing is the practice of creating precise, complete, and unambiguous descriptions of desired outputs. It transforms vague requests into structured instructions that define exactly what should be produced, how it should be formatted, and what constraints must be satisfied.
A specification answers five questions: what output is needed, what structure it should have, what rules it must follow, what constraints limit the solution, and how success will be measured. Unlike general requirements gathering, specification writing focuses on eliminating interpretation gaps.
Without precise specifications, AI systems produce inconsistent outputs. Two requests that seem similar to a human may generate completely different results. The system must guess at details not explicitly stated, leading to variation in quality, format, and scope.
Poor specifications cause repeated iterations. You might receive output that's technically correct but doesn't match your needs because critical constraints were omitted. The system follows your instructions literally, not your intent. When you don't specify edge cases, it makes arbitrary choices. When you don't define the output structure, it improvises.
Strong specifications reduce back-and-forth by addressing ambiguities upfront. They establish clear expectations and measurable criteria. This matters most at scale: when generating thousands of outputs, variation becomes expensive to fix.
Explicitness: Every requirement must be stated directly. Avoid implying dependencies or assuming shared context. If something matters, write it down. Implicit requirements become sources of variation.
Constraints: Boundaries that limit the solution space. Constraints include format requirements, length limits, forbidden patterns, mandatory inclusions, and structural rules. Each constraint eliminates an entire category of incorrect outputs.
Edge Cases: Boundary conditions and special scenarios that break general rules. Edge cases include empty inputs, maximum values, conflicting requirements, and unusual combinations. Addressing them prevents failures in rare but critical situations.
Validation Criteria: Objective tests that determine whether output meets specifications. Criteria must be measurable and binary—pass or fail. Subjective measures like "high quality" cannot be validated reliably.
Decomposition: Breaking complex specifications into smaller, independently verifiable components. Each component addresses one aspect of the output. Decomposition makes specifications easier to write, test, and modify.
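Taken together, these concepts suggest a concrete shape: decompose a specification into small components, each with a binary validation check. A minimal Python sketch (the two checks and the sample input are illustrative assumptions, not part of any particular specification):

```python
# Each component of a decomposed specification is one binary check.
def check_word_count(text: str, low: int = 150, high: int = 200) -> bool:
    """Explicit, measurable length constraint: pass/fail, no judgment call."""
    n = len(text.split())
    return low <= n <= high

def check_no_markdown_headers(text: str) -> bool:
    """Format constraint: forbids a whole category of incorrect outputs."""
    return not any(line.lstrip().startswith("#") for line in text.splitlines())

def validate(text: str) -> dict:
    """Run every check independently; the result is objective and inspectable."""
    return {
        "word_count": check_word_count(text),
        "no_markdown_headers": check_no_markdown_headers(text),
    }

report = validate("word " * 160)
```

Because each check is independent, adding or tightening one constraint never forces a rewrite of the others.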
**Ideal Scenarios**:

**Not Ideal For**:

**Decision Criteria**:
Use specification writing when:
1. Task will be executed multiple times
2. Output format/structure is critical
3. Errors have significant costs
4. Multiple people/systems depend on the output
5. Requirements are stable enough to document
### Use Case 1: API Response Specification

**Context**: Building an integration that consumes AI-generated API responses.

**Challenge**: AI generates responses in varying formats, breaking the integration parser.

**Solution**: Write a precise output schema specification.

**Example Prompt**:
Generate API responses for product search results.
## Output Specification
**Format**: JSON only, no markdown code blocks
**Schema**:
```json
{
  "results": [
    {
      "id": "string (product SKU)",
      "name": "string (product name, max 100 chars)",
      "price": "number (USD, 2 decimal places)",
      "inStock": "boolean",
      "category": "string (one of: electronics, clothing, home, other)"
    }
  ],
  "totalCount": "integer",
  "pageNumber": "integer (1-based)"
}
```

**Constraints**:

**Validation**: Response must validate against this schema: [JSON Schema provided]
**Result**: Integration receives consistently formatted responses that parse reliably.
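Under a schema like the one in this prompt, the consuming integration can reject malformed responses before they reach the parser. A minimal Python sketch (field names and constraints mirror the schema above; the validator itself is illustrative, not part of the original):

```python
ALLOWED_CATEGORIES = {"electronics", "clothing", "home", "other"}

def validate_search_response(resp: dict) -> list:
    """Return a list of violations; an empty list means the response conforms."""
    errors = []
    if not isinstance(resp.get("results"), list):
        errors.append("results must be a list")
        return errors
    for i, item in enumerate(resp["results"]):
        if not isinstance(item.get("id"), str):
            errors.append(f"results[{i}].id must be a string")
        name = item.get("name")
        if not isinstance(name, str) or len(name) > 100:
            errors.append(f"results[{i}].name must be a string of max 100 chars")
        if not isinstance(item.get("price"), (int, float)):
            errors.append(f"results[{i}].price must be a number")
        if not isinstance(item.get("inStock"), bool):
            errors.append(f"results[{i}].inStock must be a boolean")
        if item.get("category") not in ALLOWED_CATEGORIES:
            errors.append(f"results[{i}].category must be one of {sorted(ALLOWED_CATEGORIES)}")
    if not isinstance(resp.get("totalCount"), int):
        errors.append("totalCount must be an integer")
    if not isinstance(resp.get("pageNumber"), int) or resp.get("pageNumber", 0) < 1:
        errors.append("pageNumber must be a 1-based integer")
    return errors
```

Returning a list of violations rather than a single boolean makes failed validations actionable: each error names the field and the rule it broke.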
---
### Use Case 2: Code Generation Specification
**Context**: Generating TypeScript functions for a production codebase.
**Challenge**: Generated code lacks proper error handling and type safety.
**Solution**: Specify exact code structure and requirements.
**Example Prompt**:
Generate a TypeScript function to fetch user data.

## Code Specification

**Function Signature**:

```typescript
async function fetchUserData(userId: string): Promise<UserData>
```

**Requirements**:

**Error Handling**:

**Style Constraints**:

**Output Format**: Only the function code, no usage examples or explanations.
**Result**: Generated code meets production standards without manual refactoring.
---
### Use Case 3: Data Analysis Specification
**Context**: Analyzing customer feedback and generating weekly reports.
**Challenge**: Reports vary in format and miss key metrics each week.
**Solution**: Specify exact report structure and analysis requirements.
**Example Prompt**:
Analyze customer feedback and generate weekly summary report.
## Report Specification
**Structure** (in order):
1. Executive Summary (150-200 words)
2. Key Metrics (table format)
3. Top 5 Issues (bulleted list)
4. Representative Feedback (3-5 quotes)
**Key Metrics Table**:
Columns: Metric, Value, Week-over-week change
Required rows:
- Total feedback count
- Average rating (1-5 scale)
- Response time (average hours)
- Issue resolution rate (%)
**Analysis Requirements**:
- Classify feedback into: Product, Service, Billing, Other
- Identify themes (must include at least 3 themes)
- Flag issues appearing >3 times
- Quote selection: 1 positive, 1 neutral, 1 negative minimum
**Format Constraints**:
- Use plain text (no markdown headers)
- Sections separated by blank lines
- Decimal numbers: 2 places max
- Percentages: whole numbers only
**Result**: Consistent weekly reports that enable trend analysis.
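The numeric format constraints in the report specification above (two decimal places maximum, whole-number percentages) can be enforced mechanically when the metrics table is assembled. A small Python sketch (the sample values are invented for illustration):

```python
def format_metric(value: float, is_percentage: bool = False) -> str:
    """Apply the report's format constraints: 2 decimals max, whole-number percentages."""
    if is_percentage:
        return f"{round(value)}%"
    return f"{value:.2f}"

row_rating = format_metric(4.267)                         # average rating, 1-5 scale
row_resolution = format_metric(87.4, is_percentage=True)  # issue resolution rate
```

Centralizing formatting in one function means the constraint lives in exactly one place, so a change to the spec is a one-line change to the code.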
Determine what category of output you're specifying.
Categories:
Why it matters: Each type has different specification requirements.
Ask yourself:
Specify the exact organization of the output.
For structured data:
For code:
For natural language:
Define boundaries on acceptable outputs.
Constraint categories:
Content constraints:
Format constraints:
Quality constraints:
Style constraints:
Create objective tests for specification compliance.
Types of validation:
Structural validation:
Content validation:
Quality validation:
Make criteria binary: Each check should pass or fail objectively.
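A structural criterion such as "required sections appear in the specified order" is a good example of a binary check. A Python sketch using the section names from the weekly-report use case above (the matching strategy is a simplified assumption):

```python
REQUIRED_SECTIONS = ["Executive Summary", "Key Metrics", "Top 5 Issues", "Representative Feedback"]

def sections_in_order(report: str) -> bool:
    """Binary structural check: every required section present, in the specified order."""
    positions = [report.find(s) for s in REQUIRED_SECTIONS]
    return all(p >= 0 for p in positions) and positions == sorted(positions)
```

The check fails for a missing section and for sections out of order, with no room for interpretation either way.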
Specify behavior for boundary conditions and unusual inputs.
Edge cases to address:
Input edge cases:
Output edge cases:
Example specifications:
Include concrete examples of valid and invalid outputs.
Example types:
Complete examples: Full output demonstrating all requirements
Partial examples: Specific patterns or structures
Positive examples: What correct output looks like
Negative examples: What incorrect output looks like (and why)
Examples should:
Critically examine the specification for interpretation gaps.
Ambiguity checks:
Test the specification:
Confirm specification matches actual needs.
Validation questions:
✅ Explicitness: All requirements stated directly, no implied needs
✅ Completeness: All necessary aspects of output specified
✅ Unambiguity: Single correct interpretation of requirements
✅ Testability: Every requirement has corresponding validation method
✅ Edge case coverage: Boundary conditions explicitly addressed
✅ Constraint balance: Sufficient constraints without over-specification
✅ Example quality: Examples illustrate all critical requirements
✅ Clarity: Someone unfamiliar can produce correct output
🚩 Subjective language: Terms like "user-friendly", "intuitive", "engaging"
🚩 Implicit requirements: Assumptions not explicitly stated
🚩 Missing edge cases: Boundary conditions not addressed
🚩 Over-specification: Implementation details constrained unnecessarily
🚩 No validation: Requirements without corresponding tests
🚩 Vague formats: "Be consistent", "follow best practices"
🚩 Conflicting constraints: Requirements that cannot be simultaneously satisfied
🚩 Ambiguous priorities: No guidance on which requirements take precedence
## Output Specification
**Type**: [structured data / code / natural language / mixed]
**Format**: [exact format requirements]
**Structure**:
[Detailed structural requirements]
**Constraints**:
- Must: [positive requirements]
- Must not: [negative requirements]
- Limits: [boundaries]
**Validation Criteria**:
1. [Checkable criterion 1]
2. [Checkable criterion 2]
3. [Checkable criterion 3]
**Edge Cases**:
- [Edge case 1]: [specified behavior]
- [Edge case 2]: [specified behavior]
**Examples**:
[2-3 representative examples]
Structured Data (JSON/XML):
- Schema: [explicit schema definition]
- Required fields: [list]
- Optional fields: [list]
- Data types: [field -> type mapping]
- Value constraints: [ranges, patterns, enums]
Code:
- Language: [specific version]
- Style guide: [link or reference]
- Error handling: [explicit pattern]
- Testing: [test requirements]
- Dependencies: [allowed/prohibited libraries]
Natural Language:
- Tone: [formal/casual/technical]
- Length: [word/character limits]
- Reading level: [grade level or complexity]
- Structure: [sections, headers, formatting]
- Format: [markdown/plain/HTML]
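For the structured-data template, JSON Schema lets one artifact serve as both the schema documentation and an executable validator. A sketch of what the product-search response from Use Case 1 might look like in JSON Schema (illustrative, not a normative schema):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["results", "totalCount", "pageNumber"],
  "properties": {
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["id", "name", "price", "inStock", "category"],
        "properties": {
          "id": { "type": "string" },
          "name": { "type": "string", "maxLength": 100 },
          "price": { "type": "number" },
          "inStock": { "type": "boolean" },
          "category": { "enum": ["electronics", "clothing", "home", "other"] }
        }
      }
    },
    "totalCount": { "type": "integer", "minimum": 0 },
    "pageNumber": { "type": "integer", "minimum": 1 }
  }
}
```

Every template field above (required fields, data types, value constraints) maps to a schema keyword, so the specification and its validation cannot drift apart.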
| Bad Criterion | Good Criterion |
|---|---|
| "High quality" | "No spelling errors, fewer than 5 grammar errors per 1000 words" |
| "User-friendly" | "Reading grade level ≤ 8, no jargon without definition" |
| "Fast performance" | "Response time under 200ms for 95th percentile requests" |
| "Good summary" | "150-200 words, covers 3 main points" |
| "Clean code" | "Functions under 50 lines, cyclomatic complexity under 10" |
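The "functions under 50 lines" criterion from the table is directly automatable. A Python sketch using the standard-library `ast` module (the limit and the return shape are illustrative choices):

```python
import ast

def functions_over_limit(source: str, max_lines: int = 50) -> list:
    """Enforce the 'functions under 50 lines' criterion: list offenders by name and length."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append((node.name, length))
    return offenders
```

An empty result means the criterion passes; a non-empty result names exactly which functions to fix, which is what separates a good criterion from "clean code".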
Tip 1: Write specifications before writing prompts—specifications define what you want, prompts define how to ask for it
Tip 2: Use JSON Schema for structured data—it provides both documentation and validation
Tip 3: Treat specifications as code—version them, review them, test them
Tip 4: Add "why" comments to specifications to explain rationale behind constraints
Tip 5: Create specification templates for recurring output types in your domain
Tip 6: Test specifications by having someone else try to follow them
Tip 7: Maintain a "specification debt" list for things that need clarification
Tip 8: Review specifications after each failure—errors reveal missing requirements
**Q: How detailed should a specification be?**
A: Detailed enough that someone unfamiliar with your task can produce correct output without asking questions. Practical test: Give the specification to a colleague (or another AI instance). If they ask clarification questions, your specification needs more detail. Aim for the "Goldilocks zone": specific enough to ensure correctness, general enough to allow valid implementation flexibility.

**Q: Should I specify how the output is implemented?**
A: Specify requirements (what), not implementation (how). Implementation details constrain solutions unnecessarily and make specifications fragile. Example: Good: "Function must complete in under 100ms". Bad: "Use async/await with parallel processing". The bad specification prescribes the solution; the good specification defines the requirement. Let the system determine optimal implementation within requirements.

**Q: What if constraints conflict with each other?**
A: Explicitly specify priority hierarchies when constraints might conflict. Use language like "If A and B conflict, prioritize A" or "Constraints in priority order: 1. Security, 2. Performance, 3. Usability". Also consider whether conflicts indicate a fundamental problem with the requirements—sometimes you need to revisit the core goals rather than specifying around contradictions.

**Q: How does specification writing relate to prompt engineering?**
A: Specification writing defines WHAT you want (requirements, constraints, acceptance criteria). Prompt engineering defines HOW to ask for it (instruction structure, examples, context management). Specifications are the content; prompts are the delivery mechanism. Good specifications make prompt engineering easier because they provide clear content to communicate. You should write specifications first, then craft prompts to convey them effectively.

**Q: How do I know whether a requirement is testable?**
A: Every requirement should correspond to a binary check you can perform. Ask: "Can I write an automated test for this?" If the answer is no, the requirement isn't specific enough. "User-friendly output" isn't testable. "Output requires fewer than 3 clicks to complete task" is testable. "High performance" isn't testable. "Response time under 200ms for 95th percentile" is testable. If you can't measure it, you can't enforce it—refine the requirement.
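The "response time under 200ms for 95th percentile" criterion is directly automatable. A minimal Python sketch (latency values are invented; the percentile is computed with the simple nearest-rank rule):

```python
import math

def p95(latencies_ms: list) -> float:
    """Nearest-rank 95th percentile: the value 95% of requests fall at or below."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest-rank position
    return ordered[rank - 1]

def meets_latency_criterion(latencies_ms: list, limit_ms: float = 200.0) -> bool:
    return p95(latencies_ms) < limit_ms

sample = [120] * 95 + [250] * 5  # 95% fast requests, 5% slow outliers
```

A vague goal like "fast performance" becomes a one-line assertion once the percentile and threshold are pinned down.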
Specification writing relies on context management to understand what information is available and what must be specified. Without context awareness, you might redundantly specify known information or fail to establish necessary background.
Decomposition breaks complex specifications into manageable components, each specified and validated independently. This modular approach makes large specifications tractable and enables iterative refinement.
Validation strategies define how to check outputs against specifications. The specification determines what validation is possible; validation reveals specification gaps. They evolve together.
Prompt engineering translates specifications into system instructions. The specification defines what you want; prompt engineering defines how to ask for it. Strong specifications make prompting more effective.
Error analysis identifies patterns in output failures, revealing missing or unclear specifications. Each error exposes a gap in constraints, criteria, or edge case handling.
Specification writing cannot compensate for fundamentally unclear goals. If you don't know what you want, specifications won't help. They translate intent into instructions, not create intent.
Specifications don't guarantee execution quality. They define the target, not the path. A perfect specification still produces incorrect output if the system cannot fulfill the requirements. Capability differs from specification.
Over-specification can backfire by constraining the system unnecessarily. Micromanaging every detail prevents optimization and creates fragile specifications that break with small requirement changes. Balance specificity with flexibility.
Specifications don't substitute for testing. They define expected behavior, but only validation confirms actual behavior. Specifications and testing are complementary quality assurance mechanisms.
Static specifications cannot handle dynamic requirements well. If needs change rapidly, the overhead of maintaining specifications may exceed their value. Specifications work best for stable or slowly evolving requirements.
Note: This skill is not yet in the main relationship map. Relationships will be defined as the skill library evolves.
Context Management: Specification writing relies on context management to understand what information is available and what must be specified.
Task Decomposition: Decomposition breaks complex specifications into manageable components, each specified and validated independently.
Constraint Encoding: Specifications provide the requirements that constraint encoding translates into explicit rules.