mattpocock/tdd

Test-driven development with red-green-refactor loop. Use when user wants to build features or fix bugs using TDD, mentions "red-green-refactor", wants integration tests, or asks for test-first development.

v1.0 · Saved May 2, 2026

Test-Driven Development

Philosophy

Core principle: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.

Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.

Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
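A minimal sketch of the contrast, assuming a hypothetical `Cart` class (the names are illustrative, not part of this skill's files):

```typescript
// Hypothetical shopping-cart module used to illustrate the contrast.
class Cart {
  private items: { price: number; qty: number }[] = [];

  add(price: number, qty: number): void {
    this.items.push({ price, qty });
  }

  // Public behavior: the total owed at checkout.
  checkout(): number {
    return this.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }
}

// GOOD: exercises the public interface and asserts observable behavior.
// This test survives any internal refactor of how items are stored.
const cart = new Cart();
cart.add(5, 2);
cart.add(3, 1);
if (cart.checkout() !== 13) throw new Error("user can checkout with valid cart");

// BAD (don't do this): reaching into private state couples the test to
// the implementation. Renaming `items` breaks it; behavior is unchanged.
// if ((cart as any).items.length !== 2) throw new Error("...");
```

The good assertion reads like the specification in the paragraph above; the bad one reads like a description of the current data structure.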

See tests.md for examples and mocking.md for mocking guidelines.

Anti-Pattern: Horizontal Slices

DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."

This produces crap tests:

  • Tests written in bulk test imagined behavior, not actual behavior
  • You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
  • Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
  • You outrun your headlights, committing to test structure before understanding the implementation

Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.

WRONG (horizontal):
  RED:   test1, test2, test3, test4, test5
  GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical):
  RED→GREEN: test1→impl1
  RED→GREEN: test2→impl2
  RED→GREEN: test3→impl3
  ...

Workflow

1. Planning

When exploring the codebase, use the project's domain glossary so that test names and interface vocabulary match the project's language, and respect any architecture decision records (ADRs) in the area you're touching.

Before writing any code:

  • Confirm with user what interface changes are needed
  • Confirm with user which behaviors to test (prioritize)
  • Identify opportunities for deep modules (small interface, deep implementation)
  • Design interfaces for testability
  • List the behaviors to test (not implementation steps)
  • Get user approval on the plan

Ask: "What should the public interface look like? Which behaviors are most important to test?"

You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.

2. Tracer Bullet

Write ONE test that confirms ONE thing about the system:

RED:   Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes

This is your tracer bullet - proves the path works end-to-end.
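One such cycle might look like this in TypeScript (the `slugify` feature and the assertion helper are illustrative assumptions, not part of this skill's files):

```typescript
// Tiny assertion helper so the cycle is self-contained.
function assertEqual(actual: string, expected: string, behavior: string): void {
  if (actual !== expected) throw new Error(`${behavior}: got "${actual}"`);
}

// GREEN: the minimal implementation that makes exactly this test pass.
// No trimming, no unicode handling, no edge cases we haven't tested yet.
function slugify(title: string): string {
  return title.toLowerCase().split(" ").join("-");
}

// RED came first: this assertion was written before slugify existed,
// and it failed. The line above is only what was needed to turn it green.
assertEqual(slugify("Hello World"), "hello-world", "lowercases and hyphenates");
```

The point of the tracer bullet is not the feature's completeness; it is proof that one test can drive one path through real code end-to-end.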

3. Incremental Loop

For each remaining behavior:

RED:   Write next test → fails
GREEN: Minimal code to pass → passes

Rules:

  • One test at a time
  • Only enough code to pass current test
  • Don't anticipate future tests
  • Keep tests focused on observable behavior
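A sketch of one later cycle, assuming a hypothetical discount feature: the new test drives exactly one new behavior, and the implementation stays minimal.

```typescript
// Cycle 2 of a hypothetical pricing feature. Cycle 1 proved a flat total;
// this cycle adds exactly one behavior: a percentage discount.
function totalWithDiscount(subtotal: number, discountPct: number): number {
  // Minimal on purpose: no input validation, no rounding rules.
  // Those wait until a test demands them.
  return subtotal * (1 - discountPct / 100);
}

// Cycle 1's behavior must stay green:
if (totalWithDiscount(100, 0) !== 100) throw new Error("no discount applied");
// This cycle's new test, written RED first:
if (totalWithDiscount(200, 25) !== 150) throw new Error("25% discount applied");
```

Each assertion names an observable behavior; neither cares how the arithmetic is organized internally.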

4. Refactor

After all tests pass, look for refactor candidates:

  • Extract duplication
  • Deepen modules (move complexity behind simple interfaces)
  • Apply SOLID principles where natural
  • Consider what new code reveals about existing code
  • Run tests after each refactor step

Never refactor while RED. Get to GREEN first.
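For example, extracting duplicated formatting behind a small function; the behavior-level assertion stays green before and after the refactor (all names here are hypothetical):

```typescript
// Before the refactor, two call sites each inlined `$${(cents / 100).toFixed(2)}`.
// After: the duplication lives behind one small function (a shallow step
// toward a deeper module), and callers depend only on its interface.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

function receiptLine(name: string, cents: number): string {
  return `${name}: ${formatPrice(cents)}`;
}

// Behavior-level test: it passed before the extraction and still passes
// after, because it never knew where the formatting logic lived.
if (receiptLine("Coffee", 350) !== "Coffee: $3.50") throw new Error("receipt line format");
```

If this test had instead spied on an internal formatting helper, the extraction would have broken it despite identical output.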

Checklist Per Cycle

[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added
Files

6 files · 6.3 KB

Overall Score

88/100, Grade A (Excellent)

Safety: 95 · Quality: 87 · Clarity: 88 · Completeness: 82

Summary

A test-driven development (TDD) skill that guides agents through the red-green-refactor loop with emphasis on behavior-driven testing and avoiding implementation-detail coupling. The skill teaches vertical slicing (tracer bullets), integration-style tests, interface design for testability, and refactoring practices. It references a set of supporting markdown files covering deep modules, mocking strategies, and good/bad test patterns.

Static Analysis Findings

1 finding

Patterns detected by deterministic static analysis before AI scoring.

Credential Exposure

SEC-020: Direct .env file access (in mocking.md)

Detected Capabilities

  • TDD workflow guidance (red-green-refactor)
  • Test planning and behavior identification
  • Interface design for testability
  • Mocking strategy and system boundary identification
  • Refactoring techniques and candidates
  • Code review patterns (distinguishing good vs bad tests)
  • Tracer bullet (vertical slice) methodology

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

test-driven development, red-green-refactor, tracer bullet, integration tests, behavior-driven testing, interface design, test-first development, vertical slicing

Risk Signals

INFO

Reference to .env file in mocking.md context

mocking.md | reference to process.env.STRIPE_KEY in code examples

Use Cases

  • Build features using red-green-refactor workflow
  • Fix bugs with test-first approach
  • Design testable interfaces and APIs
  • Refactor code while maintaining test coverage
  • Write integration tests instead of unit test mocks
  • Plan test strategy and behavior prioritization

Quality Notes

  • Excellent philosophical grounding: skill clearly articulates why certain testing patterns matter (behavior vs implementation)
  • Strong use of anti-patterns: horizontal vs vertical slicing comparison with ASCII diagrams makes the core concept immediately clear
  • Comprehensive supporting files: deep-modules.md, interface-design.md, tests.md, mocking.md cover essential concepts with concrete TypeScript examples
  • Well-structured workflow: four-phase planning→tracer→loop→refactor with checkpoints keeps agents on track
  • Clear guardrails: explicit checklist per cycle prevents common pitfalls (speculative features, implementation testing)
  • Domain vocabulary emphasis: planning section instructs agents to use project glossary, respecting existing ADRs — shows maturity
  • Good/bad test examples: side-by-side comparisons in tests.md and mocking.md make patterns unambiguous
  • Minor: refactoring.md is very brief (387 B) — could benefit from expanded examples or linking to specific refactoring patterns (e.g., extract method, introduce value object)
Model: claude-haiku-4-5-20251001 · Analyzed: May 2, 2026

Reviews

No reviews yet.