affaan-m/iterative-retrieval

Pattern for progressively refining context retrieval to solve the subagent context problem

v1.2 · Saved Apr 20, 2026

Iterative Retrieval Pattern

Solves the "context problem" in multi-agent workflows where subagents don't know what context they need until they start working.

When to Activate

  • Spawning subagents that need codebase context they cannot predict upfront
  • Building multi-agent workflows where context is progressively refined
  • Encountering "context too large" or "missing context" failures in agent tasks
  • Designing RAG-like retrieval pipelines for code exploration
  • Optimizing token usage in agent orchestration

The Problem

Subagents are spawned with limited context. They don't know:

  • Which files contain relevant code
  • What patterns exist in the codebase
  • What terminology the project uses

Standard approaches fail:

  • Send everything: Exceeds context limits
  • Send nothing: Agent lacks critical information
  • Guess what's needed: Often wrong

The Solution: Iterative Retrieval

A 4-phase loop that progressively refines context:

┌─────────────────────────────────────────────┐
│                                             │
│   ┌──────────┐      ┌──────────┐            │
│   │ DISPATCH │────▶│ EVALUATE │            │
│   └──────────┘      └──────────┘            │
│        ▲                  │                 │
│        │                  ▼                 │
│   ┌──────────┐      ┌──────────┐            │
│   │   LOOP   │◀────│  REFINE  │            │
│   └──────────┘      └──────────┘            │
│                                             │
│        Max 3 cycles, then proceed           │
└─────────────────────────────────────────────┘

Phase 1: DISPATCH

Initial broad query to gather candidate files:

// Start with high-level intent
const initialQuery = {
  patterns: ['src/**/*.ts', 'lib/**/*.ts'],
  keywords: ['authentication', 'user', 'session'],
  excludes: ['*.test.ts', '*.spec.ts']
};

// Dispatch to retrieval agent
const candidates = await retrieveFiles(initialQuery);
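
retrieveFiles is left abstract here. As a minimal sketch (the glob and keyword matching is illustrative, not the skill's actual implementation), assuming candidate files are already loaded as { path, content } objects:

```javascript
// Hypothetical in-memory retrieveFiles: filters candidates by glob-like
// include patterns, exclusion rules, and keyword presence. A real
// retrieval agent would use the filesystem plus ripgrep or embeddings.
const globToRegex = (glob) =>
  new RegExp('^' + glob.split('**/').map(part =>
    part.replace(/\./g, '\\.').replace(/\*/g, '[^/]*')
  ).join('(?:.*/)?') + '$');

async function retrieveFiles(query, allFiles) {
  const includes = query.patterns.map(globToRegex);
  const excludes = (query.excludes || []).map(globToRegex);
  return allFiles.filter(f => {
    const name = f.path.split('/').pop(); // match excludes like *.test.ts
    return includes.some(re => re.test(f.path)) &&
      !excludes.some(re => re.test(f.path) || re.test(name)) &&
      query.keywords.some(kw => f.content.toLowerCase().includes(kw));
  });
}
```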

Phase 2: EVALUATE

Assess retrieved content for relevance:

function evaluateRelevance(files, task) {
  return files.map(file => ({
    path: file.path,
    relevance: scoreRelevance(file.content, task),
    reason: explainRelevance(file.content, task),
    missingContext: identifyGaps(file.content, task)
  }));
}

Scoring criteria:

  • High (0.8-1.0): Directly implements target functionality
  • Medium (0.5-0.7): Contains related patterns or types
  • Low (0.2-0.4): Tangentially related
  • None (0-0.2): Not relevant, exclude
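
scoreRelevance, explainRelevance, and identifyGaps are left abstract in this pattern. One hedged baseline for scoreRelevance is plain keyword overlap between the task description and the file content, producing a 0-1 score the bands above can be applied to; production implementations would more likely use embeddings or an LLM judge:

```javascript
// Illustrative scoreRelevance baseline: fraction of significant task
// keywords (length > 3) that appear anywhere in the file content.
function scoreRelevance(content, task) {
  const words = task.toLowerCase().split(/\W+/).filter(w => w.length > 3);
  if (words.length === 0) return 0;
  const text = content.toLowerCase();
  const hits = words.filter(w => text.includes(w)).length;
  // Round to two decimals so scores line up with the bands above
  return Math.round((hits / words.length) * 100) / 100;
}
```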

Phase 3: REFINE

Update search criteria based on evaluation:

function refineQuery(evaluation, previousQuery) {
  return {
    // Add new patterns discovered in high-relevance files
    patterns: [...previousQuery.patterns, ...extractPatterns(evaluation)],

    // Add terminology found in codebase
    keywords: [...previousQuery.keywords, ...extractKeywords(evaluation)],

    // Exclude confirmed irrelevant paths
    excludes: [...previousQuery.excludes, ...evaluation
      .filter(e => e.relevance < 0.2)
      .map(e => e.path)
    ],

    // Target specific gaps
    focusAreas: evaluation
      .flatMap(e => e.missingContext)
      .filter((gap, i, arr) => arr.indexOf(gap) === i)  // dedupe
  };
}

Phase 4: LOOP

Repeat with refined criteria (max 3 cycles):

async function iterativeRetrieve(task, maxCycles = 3) {
  let query = createInitialQuery(task);
  let bestContext = [];

  for (let cycle = 0; cycle < maxCycles; cycle++) {
    const candidates = await retrieveFiles(query);
    const evaluation = evaluateRelevance(candidates, task);

    // Check if we have sufficient context
    const highRelevance = evaluation.filter(e => e.relevance >= 0.7);
    if (highRelevance.length >= 3 && !hasCriticalGaps(evaluation)) {
      return highRelevance;
    }

    // Refine and continue
    query = refineQuery(evaluation, query);
    bestContext = mergeContext(bestContext, highRelevance);
  }

  return bestContext;
}
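
Note that the loop returns whatever bestContext holds after the third cycle, even if it is empty. One way to make that degraded case explicit (the function and field names here are hypothetical, not part of the pattern):

```javascript
// Hypothetical fallback policy: if no file clears the 0.7 bar, surface
// the top medium-relevance files (>= 0.5) and flag the result as
// degraded, rather than silently returning an empty context.
function finalizeContext(evaluation, minRelevance = 0.7, fallbackN = 3) {
  const high = evaluation.filter(e => e.relevance >= minRelevance);
  if (high.length > 0) return { files: high, degraded: false };
  const fallback = [...evaluation]
    .sort((a, b) => b.relevance - a.relevance)
    .filter(e => e.relevance >= 0.5)
    .slice(0, fallbackN);
  return { files: fallback, degraded: true };
}
```

A caller can then warn the user, widen the query, or abort instead of proceeding with weak context.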

Practical Examples

Example 1: Bug Fix Context

Task: "Fix the authentication token expiry bug"

Cycle 1:
  DISPATCH: Search for "token", "auth", "expiry" in src/**
  EVALUATE: Found auth.ts (0.9), tokens.ts (0.8), user.ts (0.3)
  REFINE: Add "refresh", "jwt" keywords; exclude user.ts

Cycle 2:
  DISPATCH: Search refined terms
  EVALUATE: Found session-manager.ts (0.95), jwt-utils.ts (0.85)
  REFINE: Stop - sufficient context (4 high-relevance files across both cycles)

Result: auth.ts, tokens.ts, session-manager.ts, jwt-utils.ts

Example 2: Feature Implementation

Task: "Add rate limiting to API endpoints"

Cycle 1:
  DISPATCH: Search "rate", "limit", "api" in routes/**
  EVALUATE: No matches - codebase uses "throttle" terminology
  REFINE: Add "throttle", "middleware" keywords

Cycle 2:
  DISPATCH: Search refined terms
  EVALUATE: Found throttle.ts (0.9), middleware/index.ts (0.7)
  REFINE: Need router patterns

Cycle 3:
  DISPATCH: Search "router", "express" patterns
  EVALUATE: Found router-setup.ts (0.8)
  REFINE: Stop - sufficient context (3 high-relevance files)

Result: throttle.ts, middleware/index.ts, router-setup.ts

Integration with Agents

Use in agent prompts:

When retrieving context for this task:
1. Start with broad keyword search
2. Evaluate each file's relevance (0-1 scale)
3. Identify what context is still missing
4. Refine search criteria and repeat (max 3 cycles)
5. Return files with relevance >= 0.7
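
Those five steps can be rendered into a concrete subagent prompt with a small helper (the function name and parameters are illustrative, not part of the skill):

```javascript
// Hypothetical helper: embeds the retrieval instructions above into a
// subagent prompt for a concrete task, with tunable loop parameters.
function buildRetrievalPrompt(task, maxCycles = 3, minRelevance = 0.7) {
  return [
    `Task: ${task}`,
    'When retrieving context for this task:',
    '1. Start with broad keyword search',
    "2. Evaluate each file's relevance (0-1 scale)",
    '3. Identify what context is still missing',
    `4. Refine search criteria and repeat (max ${maxCycles} cycles)`,
    `5. Return files with relevance >= ${minRelevance}`,
  ].join('\n');
}
```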

Best Practices

  1. Start broad, narrow progressively - Don't over-specify initial queries
  2. Learn codebase terminology - First cycle often reveals naming conventions
  3. Track what's missing - Explicit gap identification drives refinement
  4. Stop at "good enough" - 3 high-relevance files beats 10 mediocre ones
  5. Exclude confidently - Low-relevance files won't become relevant
Related

  • The Longform Guide - Subagent orchestration section
  • continuous-learning skill - For patterns that improve over time
  • Agent definitions bundled with ECC (manual install path: agents/)
Files: 1 file · 1.0 KB


Overall Score: 78/100 · Grade: B (Good)

Safety: 95 · Quality: 75 · Clarity: 82 · Completeness: 68

Summary

A design pattern for progressively refining context retrieval in multi-agent workflows. The skill teaches a 4-phase loop (dispatch, evaluate, refine, loop) that enables subagents to iteratively discover and fetch only the context they need, solving the problem of either sending too much context (exceeding limits) or too little (missing critical information). The pattern includes relevance scoring, gap identification, and feedback-driven query refinement with a maximum of 3 cycles.

Detected Capabilities

  • Agent orchestration pattern guidance
  • Context evaluation and relevance scoring methodology
  • Query refinement algorithms
  • Gap identification and closure strategies
  • Token optimization for multi-agent systems
  • Terminology discovery in codebases

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • multi-agent orchestration
  • context retrieval optimization
  • subagent context problem
  • progressive query refinement
  • token usage optimization

Risk Signals

INFO: External domain reference (x.com) in SKILL.md, Related section (bottom of file)

Referenced Domains

External domains referenced in skill content, detected by static analysis.

x.com

Use Cases

  • Spawning subagents that need dynamic codebase context
  • Multi-agent workflows with progressive context discovery
  • Handling 'context too large' or 'missing context' failures
  • Optimizing token usage in agent orchestration
  • Building RAG-like retrieval pipelines for code exploration

Quality Notes

  • Excellent structure with clear problem statement, solution diagram, and 4-phase breakdown
  • Well-documented with practical, realistic examples (bug fix, feature implementation) that demonstrate the pattern in action
  • Code examples are illustrative and pseudocode-like, making them language-agnostic and easy to translate to any agent framework
  • Scoring criteria for relevance (0.8-1.0 high, 0.5-0.7 medium, etc.) provides concrete guidance for implementation
  • Best practices section is actionable and reinforces key insights (start broad, learn terminology, stop at 'good enough')
  • Diagram effectively visualizes the loop structure and max 3-cycle boundary
  • Missing concrete implementation details for the `scoreRelevance()`, `explainRelevance()`, and `identifyGaps()` functions — these are critical to actual agent implementation
  • No error handling guidance for edge cases (e.g., what happens if no high-relevance files are found after 3 cycles, or how the agent should handle ambiguous terminology)
  • No explicit guidance on how to set initial query parameters or how broad/narrow they should be — this is left to interpretation
  • Related section links to external sources but does not clearly explain how this skill integrates with `continuous-learning` or the referenced Longform Guide
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews


No reviews yet


Version History

v1.2 (Latest) - Content updated - 2026-04-20
v1.1 - Content updated - 2026-04-12
v1.0 - Seeded from github.com/affaan-m/everything-claude-code - 2026-03-16
