Specification Writing Framework - Precise Requirements Definition | AI Skill Library

Master specification writing to create precise, unambiguous specifications that enable consistent, accurate AI outputs.

intermediate
20 min read

Specification Writing

What is Specification Writing

Specification writing is the practice of creating precise, complete, and unambiguous descriptions of desired outputs. It transforms vague requests into structured instructions that define exactly what should be produced, how it should be formatted, and what constraints must be satisfied.

A specification answers five questions: what output is needed, what structure it should have, what rules it must follow, what constraints limit the solution, and how success will be measured. Unlike general requirements gathering, specification writing focuses on eliminating interpretation gaps.

Why This Skill Matters

Without precise specifications, AI systems produce inconsistent outputs. Two requests that seem similar to a human may generate completely different results. The system must guess at details not explicitly stated, leading to variation in quality, format, and scope.

Poor specifications cause repeated iterations. You might receive output that's technically correct but doesn't match your needs because critical constraints were omitted. The system follows your instructions literally, not your intent. When you don't specify edge cases, it makes arbitrary choices. When you don't define the output structure, it improvises.

Strong specifications reduce back-and-forth by addressing ambiguities upfront. They establish clear expectations and measurable criteria. This matters most at scale: when generating thousands of outputs, variation becomes expensive to fix.

Core Concepts

Explicitness: Every requirement must be stated directly. Avoid implying dependencies or assuming shared context. If something matters, write it down. Implicit requirements become sources of variation.

Constraints: Boundaries that limit the solution space. Constraints include format requirements, length limits, forbidden patterns, mandatory inclusions, and structural rules. Each constraint eliminates an entire category of incorrect outputs.

Edge Cases: Boundary conditions and special scenarios that break general rules. Edge cases include empty inputs, maximum values, conflicting requirements, and unusual combinations. Addressing them prevents failures in rare but critical situations.

Validation Criteria: Objective tests that determine whether output meets specifications. Criteria must be measurable and binary—pass or fail. Subjective measures like "high quality" cannot be validated reliably.

Decomposition: Breaking complex specifications into smaller, independently verifiable components. Each component addresses one aspect of the output. Decomposition makes specifications easier to write, test, and modify.

When to Use This Skill

Ideal Scenarios:

  • Batch processing: Generating multiple outputs that must follow identical rules
  • Automated workflows: Downstream systems require predictable input formats
  • High-stakes outputs: Security configs, financial calculations, medical recommendations
  • Integration scenarios: APIs, databases, file formats with strict contracts
  • Collaborative work: Teams need shared understanding of requirements
  • Recurring tasks: Repeated executions benefit from upfront specification investment

Not Ideal For:

  • One-off exploratory tasks: Where flexibility matters more than consistency
  • Rapid prototyping: Early stages where requirements are evolving quickly
  • Creative work: Where some variability is desirable
  • Simple requests: Trivial tasks where specification overhead exceeds value

Decision Criteria:

Use specification writing when:
1. Task will be executed multiple times
2. Output format/structure is critical
3. Errors have significant costs
4. Multiple people/systems depend on the output
5. Requirements are stable enough to document

Common Use Cases

Use Case 1: API Response Specification

Context: Building an integration that consumes AI-generated API responses.

Challenge: AI generates responses in varying formats, breaking the integration parser.

Solution: Write precise output schema specification.

Example Prompt:

Generate API responses for product search results.

## Output Specification

**Format**: JSON only, no markdown code blocks

**Schema**:
```json
{
  "results": [
    {
      "id": "string (product SKU)",
      "name": "string (product name, max 100 chars)",
      "price": "number (USD, 2 decimal places)",
      "inStock": "boolean",
      "category": "string (one of: electronics, clothing, home, other)"
    }
  ],
  "totalCount": "integer",
  "pageNumber": "integer (1-based)"
}
```

Constraints:

  • Always return valid JSON (no trailing commas)
  • Maximum 10 results per response; minimum 1 result when matches exist
  • If no results, return empty array (not null)
  • Results sorted by price ascending
  • Total count reflects all matching items (not just returned page)

Validation: Response must validate against this schema: [JSON Schema provided]


Result: Integration receives consistently formatted responses that parse reliably.

Use Case 2: Code Generation Specification

Context: Generating TypeScript functions for a production codebase.

Challenge: Generated code lacks proper error handling and type safety.

Solution: Specify exact code structure and requirements.

Example Prompt:

Generate a TypeScript function to fetch user data.

## Code Specification

**Function Signature**:
```typescript
async function fetchUserData(userId: string): Promise<UserData>
```

Requirements:

  1. Use strict TypeScript (no 'any' types)
  2. Implement proper error handling with try/catch
  3. Add JSDoc comments with @param, @returns, @throws
  4. Return type must match UserData interface exactly

Error Handling:

  • Throw TypeError for invalid userId format
  • Throw NetworkError for fetch failures
  • Throw ValidationError for API errors
  • Include error messages with context

Style Constraints:

  • Maximum 50 lines per function
  • Use async/await, no promise chains
  • No console.log or console.error (throw instead)

Output Format: Only the function code, no usage examples or explanations.


Result: Generated code meets production standards without manual refactoring.

Use Case 3: Data Analysis Specification

Context: Analyzing customer feedback and generating weekly reports.

Challenge: Reports vary in format and miss key metrics each week.

Solution: Specify exact report structure and analysis requirements.

Example Prompt:
Analyze customer feedback and generate weekly summary report.

## Report Specification

**Structure** (in order):
1. Executive Summary (150-200 words)
2. Key Metrics (table format)
3. Top 5 Issues (bulleted list)
4. Representative Feedback (3-5 quotes)

**Key Metrics Table**:
Columns: Metric, Value, Week-over-week change
Required rows:
- Total feedback count
- Average rating (1-5 scale)
- Response time (average hours)
- Issue resolution rate (%)

**Analysis Requirements**:
- Classify feedback into: Product, Service, Billing, Other
- Identify themes (must include at least 3 themes)
- Flag issues appearing >3 times
- Quote selection: 1 positive, 1 neutral, 1 negative minimum

**Format Constraints**:
- Use plain text (no markdown headers)
- Sections separated by blank lines
- Decimal numbers: 2 places max
- Percentages: whole numbers only

Result: Consistent weekly reports that enable trend analysis.

Step-by-Step Guide

Step 1: Identify Output Type

Determine what category of output you're specifying.

Categories:

  • Structured data: JSON, XML, CSV, database records
  • Code: Functions, classes, modules, scripts
  • Natural language: Reports, summaries, explanations
  • Mixed: Multiple output types combined

Why it matters: Each type has different specification requirements.

Ask yourself:

  • What format must the output be in?
  • How will the output be consumed or processed?
  • What structural requirements exist?

Step 2: Define Output Structure

Specify the exact organization of the output.

For structured data:

  • Schema definition (field names, types, constraints)
  • Required vs optional fields
  • Nested structures and relationships
  • Ordering requirements

For code:

  • Function/class signatures
  • File organization
  • Import/export requirements
  • Dependencies

For natural language:

  • Section structure and order
  • Paragraph/section length limits
  • Formatting requirements (headers, lists, etc.)

Step 3: Specify Constraints

Define boundaries on acceptable outputs.

Constraint categories:

Content constraints:

  • What must be included
  • What must be excluded
  • Minimum/maximum quantities

Format constraints:

  • File format (JSON, CSV, plain text)
  • Character encoding
  • Whitespace handling
  • Date/time formats

Quality constraints:

  • Accuracy requirements
  • Completeness criteria
  • Performance characteristics

Style constraints:

  • Tone and voice
  • Reading level
  • Terminology usage
  • Language variants (US vs UK English)

Step 4: Establish Validation Criteria

Create objective tests for specification compliance.

Types of validation:

Structural validation:

  • Schema validation for data
  • Syntax checking for code
  • Format verification for text

Content validation:

  • Field presence/absence
  • Value range checks
  • Pattern matching (regex)
  • Reference validation

Quality validation:

  • Completeness checks
  • Consistency verification
  • Accuracy sampling

Make criteria binary: Each check should pass or fail objectively.

Step 5: Document Edge Cases

Specify behavior for boundary conditions and unusual inputs.

Edge cases to address:

Input edge cases:

  • Empty input
  • Minimum/maximum values
  • Null/undefined values
  • Conflicting requirements

Output edge cases:

  • Empty result sets
  • Over-constrained requirements
  • Impossible combinations

Example specifications:

  • "If input is empty, return empty array (not null)"
  • "If no results match, return success with empty data field"
  • "If constraints conflict, prioritize: A > B > C"

Step 6: Provide Examples

Include concrete examples of valid and invalid outputs.

Example types:

Complete examples: Full output demonstrating all requirements

Partial examples: Specific patterns or structures

Positive examples: What correct output looks like

Negative examples: What incorrect output looks like (and why)

Examples should:

  • Cover typical use cases
  • Illustrate edge cases
  • Demonstrate constraint compliance
  • Show format requirements

Step 7: Review for Ambiguity

Critically examine the specification for interpretation gaps.

Ambiguity checks:

  • Can two different outputs both satisfy the spec?
  • Are there undefined terms or concepts?
  • Could requirements be interpreted multiple ways?
  • Are there implicit assumptions?

Test the specification:

  • Give it to someone unfamiliar with the task
  • Can they produce correct output without clarification?
  • Where do they have questions?

Step 8: Validate Against Requirements

Confirm specification matches actual needs.

Validation questions:

  • Does this specification produce what we actually need?
  • Are all critical requirements captured?
  • Have we over-constrained and eliminated valid solutions?
  • Is this specification testable and enforceable?

Measuring Success

Quality Checklist

Explicitness: All requirements stated directly, no implied needs

Completeness: All necessary aspects of output specified

Unambiguity: Single correct interpretation of requirements

Testability: Every requirement has corresponding validation method

Edge case coverage: Boundary conditions explicitly addressed

Constraint balance: Sufficient constraints without over-specification

Example quality: Examples illustrate all critical requirements

Clarity: Someone unfamiliar can produce correct output

Red Flags 🚩

🚩 Subjective language: Terms like "user-friendly", "intuitive", "engaging"

🚩 Implicit requirements: Assumptions not explicitly stated

🚩 Missing edge cases: Boundary conditions not addressed

🚩 Over-specification: Implementation details constrained unnecessarily

🚩 No validation: Requirements without corresponding tests

🚩 Vague formats: "Be consistent", "follow best practices"

🚩 Conflicting constraints: Requirements that cannot be simultaneously satisfied

🚩 Ambiguous priorities: No guidance on which requirements take precedence

Quick Reference

Specification Template

## Output Specification

**Type**: [structured data / code / natural language / mixed]

**Format**: [exact format requirements]

**Structure**:
[Detailed structural requirements]

**Constraints**:
- Must: [positive requirements]
- Must not: [negative requirements]
- Limits: [boundaries]

**Validation Criteria**:
1. [Checkable criterion 1]
2. [Checkable criterion 2]
3. [Checkable criterion 3]

**Edge Cases**:
- [Edge case 1]: [specified behavior]
- [Edge case 2]: [specified behavior]

**Examples**:
[2-3 representative examples]

Common Constraints by Output Type

Structured Data (JSON/XML):

- Schema: [explicit schema definition]
- Required fields: [list]
- Optional fields: [list]
- Data types: [field -> type mapping]
- Value constraints: [ranges, patterns, enums]

Code:

- Language: [specific version]
- Style guide: [link or reference]
- Error handling: [explicit pattern]
- Testing: [test requirements]
- Dependencies: [allowed/prohibited libraries]

Natural Language:

- Tone: [formal/casual/technical]
- Length: [word/character limits]
- Reading level: [grade level or complexity]
- Structure: [sections, headers, formatting]
- Format: [markdown/plain/HTML]

Validation Criteria Examples

| Bad Criterion | Good Criterion |
| --- | --- |
| "High quality" | "No spelling errors, fewer than 5 grammar errors per 1000 words" |
| "User-friendly" | "Reading grade level ≤ 8, no jargon without definition" |
| "Fast performance" | "Response time under 200ms for 95th percentile requests" |
| "Good summary" | "150-200 words, covers 3 main points" |
| "Clean code" | "Functions under 50 lines, cyclomatic complexity under 10" |

Pro Tips 💡

Tip 1: Write specifications before writing prompts—specifications define what you want, prompts define how to ask for it

Tip 2: Use JSON Schema for structured data—it provides both documentation and validation

Tip 3: Treat specifications as code—version them, review them, test them

Tip 4: Add "why" comments to specifications to explain rationale behind constraints

Tip 5: Create specification templates for recurring output types in your domain

Tip 6: Test specifications by having someone else try to follow them

Tip 7: Maintain a "specification debt" list for things that need clarification

Tip 8: Review specifications after each failure—errors reveal missing requirements

FAQ

Q1: How detailed should my specification be?

A: Detailed enough that someone unfamiliar with your task can produce correct output without asking questions. Practical test: Give the specification to a colleague (or another AI instance). If they ask clarification questions, your specification needs more detail. Aim for the "Goldilocks zone": specific enough to ensure correctness, general enough to allow valid implementation flexibility.

Q2: Should I specify implementation details or just requirements?

A: Specify requirements (what), not implementation (how). Implementation details constrain solutions unnecessarily and make specifications fragile. Example: Good: "Function must complete in under 100ms". Bad: "Use async/await with parallel processing". The bad specification prescribes the solution; the good specification defines the requirement. Let the system determine optimal implementation within requirements.

Q3: How do I handle conflicting constraints?

A: Explicitly specify priority hierarchies when constraints might conflict. Use language like "If A and B conflict, prioritize A" or "Constraints in priority order: 1. Security, 2. Performance, 3. Usability". Also consider whether conflicts indicate a fundamental problem with the requirements—sometimes you need to revisit the core goals rather than specifying around contradictions.

Q4: What's the difference between specification writing and prompt engineering?

A: Specification writing defines WHAT you want (requirements, constraints, acceptance criteria). Prompt engineering defines HOW to ask for it (instruction structure, examples, context management). Specifications are the content; prompts are the delivery mechanism. Good specifications make prompt engineering easier because they provide clear content to communicate. You should write specifications first, then craft prompts to convey them effectively.

Q5: How do I know if my specification is testable?

A: Every requirement should correspond to a binary check you can perform. Ask: "Can I write an automated test for this?" If the answer is no, the requirement isn't specific enough. "User-friendly output" isn't testable. "Output requires fewer than 3 clicks to complete task" is testable. "High performance" isn't testable. "Response time under 200ms for 95th percentile" is testable. If you can't measure it, you can't enforce it—refine the requirement.

How This Skill Connects to Other Skills

Specification writing relies on context management to understand what information is available and what must be specified. Without context awareness, you might redundantly specify known information or fail to establish necessary background.

Decomposition breaks complex specifications into manageable components, each specified and validated independently. This modular approach makes large specifications tractable and enables iterative refinement.

Validation strategies define how to check outputs against specifications. The specification determines what validation is possible; validation reveals specification gaps. They evolve together.

Prompt engineering translates specifications into system instructions. The specification defines what you want; prompt engineering defines how to ask for it. Strong specifications make prompting more effective.

Error analysis identifies patterns in output failures, revealing missing or unclear specifications. Each error exposes a gap in constraints, criteria, or edge case handling.

Skill Boundaries

Specification writing cannot compensate for fundamentally unclear goals. If you don't know what you want, specifications won't help. They translate intent into instructions, not create intent.

Specifications don't guarantee execution quality. They define the target, not the path. A perfect specification still produces incorrect output if the system cannot fulfill the requirements. Capability differs from specification.

Over-specification can backfire by constraining the system unnecessarily. Micromanaging every detail prevents optimization and creates fragile specifications that break with small requirement changes. Balance specificity with flexibility.

Specifications don't substitute for testing. They define expected behavior, but only validation confirms actual behavior. Specifications and testing are complementary quality assurance mechanisms.

Static specifications cannot handle dynamic requirements well. If needs change rapidly, the overhead of maintaining specifications may exceed their value. Specifications work best for stable or slowly evolving requirements.

Note: This skill is not yet in the main relationship map. Relationships will be defined as the skill library evolves.

Complementary Skills

Context Management: Specification writing relies on context management to understand what information is available and what must be specified.

Task Decomposition: Decomposition breaks complex specifications into manageable components, each specified and validated independently.

Constraint Encoding: Specifications provide the requirements that constraint encoding translates into explicit rules.
