
Evaluation Dataset Specification

Status: Recommended · Category: data · Template ID: eval_dataset_spec
Agent Prompt Snippet
Document the held-out test splits, label provenance, freshness guarantees, and stratification criteria applied to the evaluation dataset.

Purpose

The evaluation dataset specification documents the held-out test splits, label provenance, freshness guarantees, and stratification criteria for fair evaluation.

This is a Recommended document — most projects benefit significantly from having one. While not strictly essential for every situation, its absence often leads to gaps in team understanding or quality.

Key Sections to Include

  • Held-out test splits
  • Label provenance
  • Freshness guarantees
  • Stratification criteria applied to the evaluation dataset

Agent hint: Document the held-out test splits, label provenance, freshness guarantees, and stratification criteria applied to the evaluation dataset.
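Two of the sections above, held-out test splits and stratification criteria, can be made concrete and reproducible in code. The sketch below is a minimal illustration using only the Python standard library; the `stratified_holdout` helper and its field names are assumptions for this example, not part of SpecBase.

```python
import random
from collections import defaultdict

def stratified_holdout(records, stratum_key, test_frac=0.2, seed=42):
    """Deterministically carve a held-out test split, stratified by one key.

    Illustrative helper: `records` is a list of dicts and `stratum_key`
    names the field to stratify on (e.g. the label or a user segment).
    A fixed seed and sorted stratum order make the split reproducible,
    which is what the spec should guarantee.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for rec in records:
        by_stratum[rec[stratum_key]].append(rec)
    train, test = [], []
    for stratum in sorted(by_stratum):  # sorted for reproducibility
        group = list(by_stratum[stratum])
        rng.shuffle(group)
        n_test = max(1, round(len(group) * test_frac))  # every stratum appears in test
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Toy dataset: 10 "pos" and 20 "neg" records.
records = [{"id": i, "label": "pos" if i % 3 == 0 else "neg"} for i in range(30)]
train, test = stratified_holdout(records, "label")
```

A spec that pins the seed, the stratification key, and the split fractions lets anyone regenerate the identical held-out set and audit it.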

What Makes It Good vs Bad

A strong version of this document:

  • Defines clear data ownership, lineage, and quality expectations
  • Includes schema documentation with field-level descriptions
  • Specifies retention policies, archival rules, and deletion procedures
  • Documents data access patterns and query performance expectations
  • Addresses privacy requirements (PII handling, anonymization, consent)
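The first, second, and fifth points above can be enforced mechanically: keep the field-level schema as data and lint it. A minimal sketch follows; the field names, `pii` flag, and `lint_schema` helper are assumptions for illustration, not a SpecBase API.

```python
# Machine-checkable field spec (illustrative field names).
FIELDS = [
    {"name": "query_id",   "type": "str", "nullable": False, "pii": False,
     "description": "Stable identifier for the eval example."},
    {"name": "label",      "type": "str", "nullable": False, "pii": False,
     "description": "Gold label; provenance: expert annotation."},
    {"name": "user_email", "type": "str", "nullable": True,  "pii": True,
     "description": "Contact field; must be anonymized before export."},
]

def lint_schema(fields):
    """Flag fields that lack a description or an explicit PII declaration."""
    problems = []
    for f in fields:
        if not f.get("description"):
            problems.append(f"{f['name']}: missing description")
        if "pii" not in f:
            problems.append(f"{f['name']}: PII status undeclared")
    return problems

problems = lint_schema(FIELDS)  # empty list when every field is documented
```

Running such a lint in CI turns "undocumented or ambiguously named fields" from a review comment into a build failure.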

Warning signs of a weak version:

  • Schema exists but fields are undocumented or ambiguously named
  • No retention policy — data grows indefinitely without governance
  • Missing data lineage — unclear where data originates and how it transforms
  • No privacy analysis for personally identifiable information
  • Query patterns undocumented, leading to performance surprises

Common Mistakes

  • Treating "we'll figure out the schema later" as a viable strategy
  • Not planning for data migration when schemas evolve
  • Ignoring data quality until downstream consumers report problems
  • Assuming all data access patterns are known at design time
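The migration mistake above has a simple defensive pattern: make every schema change additive and idempotent, so old readers keep working while new records are upgraded lazily. A minimal sketch, assuming a hypothetical v1-to-v2 change that introduces a `label_source` field:

```python
# Backward-compatible migration sketch: add a new field with a safe
# default without touching any v1 fields. `label_source` is a
# hypothetical field introduced in v2 of the eval dataset schema.
def migrate_v1_to_v2(record):
    """Upgrade a v1 record to v2; a no-op on records already migrated."""
    out = dict(record)                         # never mutate the input
    out.setdefault("label_source", "unknown")  # default for legacy rows
    return out

old = {"id": 1, "label": "pos"}
new = migrate_v1_to_v2(old)
```

Because the migration is idempotent, it can run safely on every read path until the whole dataset is rewritten.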

How to Use This Document

Document schemas as living artifacts that evolve with the system. Include field-level descriptions, valid value ranges, and nullability constraints. Define a data classification scheme (public, internal, confidential, restricted) and label every data store accordingly. Plan for schema evolution from day one.
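The classification scheme described above is easy to audit in code: enumerate the allowed levels and reject any data store whose label is missing or unrecognized. A minimal sketch, where the store names and helper are illustrative assumptions:

```python
# Classification levels taken from the guidance above; store names
# are hypothetical examples, not a SpecBase API.
CLASSIFICATION_LEVELS = ("public", "internal", "confidential", "restricted")

DATA_STORES = {
    "eval_dataset_v3": "internal",
    "raw_user_logs": "restricted",
}

def unlabeled_or_invalid(stores):
    """Return store names whose classification label is absent or unknown."""
    return [name for name, level in stores.items()
            if level not in CLASSIFICATION_LEVELS]

audit = unlabeled_or_invalid(DATA_STORES)  # empty when every store is labeled
```

Pairing this check with the schema lint keeps "label every data store accordingly" an invariant rather than a one-time exercise.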

For AI agents: When modifying data models or queries, reference the data documentation to understand field semantics, access patterns, and privacy constraints. Ensure migrations preserve data integrity and backward compatibility.

Starter Template

SpecBase includes a ready-to-use template for this document: kb/templates/data/eval_dataset_spec.md.tmpl. Use the SpecBase CLI or MCP integration to generate it pre-filled for your project.

# Generate stubs via CLI
specbase init <archetype> --features <features> --dir ./docs
Further Reading

  • Designing Data-Intensive Applications by Martin Kleppmann — Comprehensive guide to data modeling, storage engines, and distributed data systems.
  • The Data Warehouse Toolkit by Ralph Kimball & Margy Ross — The standard reference for dimensional modeling and data warehouse design.
  • Data Management at Scale by Piethein Strengholt — Modern approaches to data architecture, governance, and organizational data management.
