
ETL Specification

Required · data · etl_spec
Agent Prompt Snippet
Define extraction sources, transformation rules, loading targets, data validation checkpoints, and error handling for the complete ETL pipeline.

Purpose

The ETL specification defines the extraction sources, transformation rules, loading targets, data validation checkpoints, and error handling strategy for the entire pipeline.

This is a Required document — every project of this type should have one. Without it, the team risks misalignment, rework, or undetected issues that compound over time.

Key Sections to Include

  • Extraction sources
  • Transformation rules
  • Loading targets
  • Data validation checkpoints
  • Error handling for the complete ETL pipeline (see the sketch after this list)
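
To make these sections concrete, here is a minimal sketch of how all five concerns can map onto code. It is stdlib-only Python and every name is an assumption: a CSV extraction source with hypothetical columns (id, email, amount) and a SQLite payments table as the loading target. Your spec should pin down the real sources, rules, and targets.

# Minimal ETL pipeline sketch (stdlib-only Python; all names illustrative)
import csv
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(path):
    # Extraction source: a CSV export with assumed columns id, email, amount
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    # Transformation rules: normalize the email, store the amount in cents
    return {
        "id": int(row["id"]),
        "email": row["email"].strip().lower(),
        "amount_cents": round(float(row["amount"]) * 100),
    }

def validate(record):
    # Validation checkpoint: enforce the data contract before loading
    if record["amount_cents"] < 0:
        raise ValueError(f"negative amount for id={record['id']}")
    if "@" not in record["email"]:
        raise ValueError(f"malformed email for id={record['id']}")

def load(conn, record):
    # Loading target: idempotent upsert so reruns do not duplicate rows
    conn.execute(
        "INSERT OR REPLACE INTO payments (id, email, amount_cents) "
        "VALUES (:id, :email, :amount_cents)",
        record,
    )

def run(path, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments "
        "(id INTEGER PRIMARY KEY, email TEXT NOT NULL, amount_cents INTEGER NOT NULL)"
    )
    loaded = skipped = 0
    for row in extract(path):
        try:
            record = transform(row)
            validate(record)
            load(conn, record)
            loaded += 1
        except (KeyError, ValueError) as exc:
            # Error handling: quarantine bad rows; do not abort the whole run
            log.warning("skipping row %r: %s", row, exc)
            skipped += 1
    conn.commit()
    log.info("loaded %d rows, skipped %d", loaded, skipped)

Keeping validate() separate from transform() makes each checkpoint independently testable and lets the spec reference it by name.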

What Makes It Good vs Bad

A strong version of this document:

  • Defines clear data ownership, lineage, and quality expectations
  • Includes schema documentation with field-level descriptions (one lightweight format is sketched after this list)
  • Specifies retention policies, archival rules, and deletion procedures
  • Documents data access patterns and query performance expectations
  • Addresses privacy requirements (PII handling, anonymization, consent)
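
One lightweight way to meet the schema-documentation bar is to express field-level metadata as data, so lineage, nullability, and PII flags live next to the code. The sketch below reuses the hypothetical payments table from the pipeline example; every field name, range, and lineage string is illustrative.

# Field-level schema documentation expressed as data (illustrative names)
SCHEMA = {
    "payments.id": {
        "type": "INTEGER",
        "nullable": False,
        "description": "Surrogate key carried over from the billing export",
        "lineage": "billing_export.csv:id -> transform() -> payments.id",
        "pii": False,
    },
    "payments.email": {
        "type": "TEXT",
        "nullable": False,
        "description": "Customer email, trimmed and lowercased in transform()",
        "lineage": "billing_export.csv:email -> transform() -> payments.email",
        "pii": True,  # triggers anonymization, consent, and retention rules
    },
    "payments.amount_cents": {
        "type": "INTEGER",
        "nullable": False,
        "valid_range": (0, 10_000_000),
        "description": "Payment amount in integer cents; negatives rejected",
        "lineage": "billing_export.csv:amount -> transform() -> payments.amount_cents",
        "pii": False,
    },
}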

Warning signs of a weak version:

  • Schema exists but fields are undocumented or ambiguously named
  • No retention policy — data grows indefinitely without governance
  • Missing data lineage — unclear where data originates and how it transforms
  • No privacy analysis for personally identifiable information
  • Query patterns undocumented, leading to performance surprises

Common Mistakes

  • Treating ‘we’ll figure out the schema later’ as a viable strategy
  • Not planning for data migration when schemas evolve
  • Ignoring data quality until downstream consumers report problems
  • Assuming all data access patterns are known at design time

How to Use This Document

Document schemas as living artifacts that evolve with the system. Include field-level descriptions, valid value ranges, and nullability constraints. Define a data classification scheme (public, internal, confidential, restricted) and label every data store accordingly. Plan for schema evolution from day one.
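
As one possible shape for that classification scheme, the sketch below encodes the four labels as an enum and tags each store; the store names here are hypothetical.

# One encoding of the classification scheme (store names hypothetical)
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

DATA_STORES = {
    "payments": Classification.CONFIDENTIAL,  # holds customer email (PII)
    "daily_revenue_rollup": Classification.INTERNAL,
    "public_pricing": Classification.PUBLIC,
}

# Governance checkpoint: every store must carry a valid label
assert all(isinstance(label, Classification) for label in DATA_STORES.values())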

For AI agents: When modifying data models or queries, reference the data documentation to understand field semantics, access patterns, and privacy constraints. Ensure migrations preserve data integrity and backward compatibility.
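
A backward-compatible migration typically adds before it tightens: introduce a nullable column, backfill, and only later enforce constraints. A minimal sketch against the hypothetical payments table, where the currency column and its 'USD' default are assumptions:

# Backward-compatible migration sketch: additive change, then backfill
import sqlite3

def migrate_add_currency(conn):
    existing = {row[1] for row in conn.execute("PRAGMA table_info(payments)")}
    if "currency" not in existing:
        # Step 1: additive, nullable column — old readers and writers still work
        conn.execute("ALTER TABLE payments ADD COLUMN currency TEXT")
    # Step 2: backfill the historical default so consumers never see NULL gaps
    conn.execute("UPDATE payments SET currency = 'USD' WHERE currency IS NULL")
    conn.commit()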

Starter Template

SpecBase includes a ready-to-use template for this document: kb/templates/data/etl_spec.md.tmpl. Use the SpecBase CLI or MCP integration to generate it pre-filled for your project.

# Generate stubs via CLI
specbase init <archetype> --features <features> --dir ./docs

Further Reading

  • Designing Data-Intensive Applications by Martin Kleppmann — Comprehensive guide to data modeling, storage engines, and distributed data systems.
  • The Data Warehouse Toolkit by Ralph Kimball & Margy Ross — The standard reference for dimensional modeling and data warehouse design.
  • Data Management at Scale by Piethein Strengholt — Modern approaches to data architecture, governance, and organizational data management.
