Catalog

affaan-m/agent-introspection-debugging

Structured self-debugging workflow for AI agent failures using capture, diagnosis, contained recovery, and introspection reports.

global · 0 installs · 0 uses · ~1.4k
v1.1 · Saved Apr 20, 2026

Agent Introspection Debugging

Use this skill when an agent run is failing repeatedly, consuming tokens without progress, looping on the same tools, or drifting away from the intended task.

This is a workflow skill, not a hidden runtime. It teaches the agent to debug itself systematically before escalating to a human.

When to Activate

  • Maximum tool call / loop-limit failures
  • Repeated retries with no forward progress
  • Context growth or prompt drift that starts degrading output quality
  • File-system or environment state mismatch between expectation and reality
  • Tool failures that are likely recoverable with diagnosis and a smaller corrective action

Scope Boundaries

Activate this skill for:

  • capturing failure state before retrying blindly
  • diagnosing common agent-specific failure patterns
  • applying contained recovery actions
  • producing a structured human-readable debug report

Do not use this skill as the primary source for:

  • feature verification after code changes; use verification-loop
  • framework-specific debugging when a narrower ECC skill already exists
  • runtime promises the current harness cannot enforce automatically

Four-Phase Loop

Phase 1: Failure Capture

Before trying to recover, record the failure precisely.

Capture:

  • error type, message, and stack trace when available
  • last meaningful tool call sequence
  • what the agent was trying to do
  • current context pressure: repeated prompts, oversized pasted logs, duplicated plans, or runaway notes
  • current environment assumptions: cwd, branch, relevant service state, expected files

Minimum capture template:

## Failure Capture
- Session / task:
- Goal in progress:
- Error:
- Last successful step:
- Last failed tool / command:
- Repeated pattern seen:
- Environment assumptions to verify:
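The capture template above can also be held as a structured record so later phases can consume it. A minimal sketch, assuming a Python-based harness; the `FailureCapture` name and its fields are illustrative, not part of any real agent API:

```python
from dataclasses import dataclass, field

@dataclass
class FailureCapture:
    """Structured record of one failure, mirroring the capture template."""
    session: str
    goal: str
    error: str
    last_successful_step: str
    last_failed_command: str
    repeated_pattern: str = ""
    env_assumptions: list = field(default_factory=list)

    def to_markdown(self) -> str:
        # Render the same "## Failure Capture" block a human would write.
        lines = [
            "## Failure Capture",
            f"- Session / task: {self.session}",
            f"- Goal in progress: {self.goal}",
            f"- Error: {self.error}",
            f"- Last successful step: {self.last_successful_step}",
            f"- Last failed tool / command: {self.last_failed_command}",
            f"- Repeated pattern seen: {self.repeated_pattern}",
            f"- Environment assumptions to verify: {', '.join(self.env_assumptions)}",
        ]
        return "\n".join(lines)
```

Rendering the record back to markdown keeps the machine-readable and human-readable views of the failure in sync.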

Phase 2: Root-Cause Diagnosis

Match the failure to a known pattern before changing anything.

| Pattern | Likely cause | Check |
| --- | --- | --- |
| Maximum tool calls / repeated same command | loop or no-exit observer path | inspect the last N tool calls for repetition |
| Context overflow / degraded reasoning | unbounded notes, repeated plans, oversized logs | inspect recent context for duplication and low-signal bulk |
| ECONNREFUSED / timeout | service unavailable or wrong port | verify service health, URL, and port assumptions |
| 429 / quota exhaustion | retry storm or missing backoff | count repeated calls and inspect retry spacing |
| File missing after write / stale diff | race, wrong cwd, or branch drift | re-check path, cwd, git status, and actual file existence |
| Tests still failing after "fix" | wrong hypothesis | isolate the exact failing test and re-derive the bug |
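The first check in the table, "inspect the last N tool calls for repetition," reduces to a windowed count. A sketch under assumed shapes: each tool call is a hashable `(tool_name, args)` tuple, and the window and threshold values are illustrative defaults:

```python
from collections import Counter

def detect_loop(tool_calls, window=10, threshold=3):
    """Return the (tool, args) pair repeated at least `threshold` times
    within the last `window` calls, or None if no loop is suspected."""
    recent = tool_calls[-window:]
    if not recent:
        return None
    repeated, count = Counter(recent).most_common(1)[0]
    return repeated if count >= threshold else None
```

A hit from this check is evidence for the "loop or no-exit observer path" diagnosis; a miss pushes the diagnosis toward the other table rows.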

Diagnosis questions:

  • is this a logic failure, state failure, environment failure, or policy failure?
  • did the agent lose the real objective and start optimizing the wrong subtask?
  • is the failure deterministic or transient?
  • what is the smallest reversible action that would validate the diagnosis?
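The deterministic-vs-transient question can often be answered with one controlled re-run instead of blind retries. A sketch; `run_check` is a hypothetical zero-argument callable supplied by the caller, not part of this skill's interface:

```python
def classify_failure(run_check, retries=1):
    """Re-run the failing check under identical conditions.
    Same error every time -> likely deterministic; any pass -> likely
    transient; differing errors -> unstable, needs more isolation."""
    outcomes = []
    for _ in range(1 + retries):
        try:
            run_check()
            outcomes.append("pass")
        except Exception as exc:
            outcomes.append(type(exc).__name__)
    if "pass" in outcomes:
        return "transient"
    return "deterministic" if len(set(outcomes)) == 1 else "unstable"
```

This is itself a small reversible action: it changes nothing except the evidence available for the diagnosis.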

Phase 3: Contained Recovery

Recover with the smallest action that changes the diagnosis surface.

Safe recovery actions:

  • stop repeated retries and restate the hypothesis
  • trim low-signal context and keep only the active goal, blockers, and evidence
  • re-check the actual filesystem / branch / process state
  • narrow the task to one failing command, one file, or one test
  • switch from speculative reasoning to direct observation
  • escalate to a human when the failure is high-risk or externally blocked
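"Re-check the actual filesystem / branch / process state" can be made concrete with direct observation calls. A sketch, assuming a git checkout and a POSIX-ish shell environment; the expected values and the function name are illustrative:

```python
import os
import subprocess
from pathlib import Path

def verify_environment(expected_cwd, expected_branch, expected_files):
    """Compare environment assumptions against observed reality and
    return the list of mismatches (an empty list means they hold)."""
    mismatches = []
    cwd = os.getcwd()
    if Path(cwd).resolve() != Path(expected_cwd).resolve():
        mismatches.append(f"cwd is {cwd}, expected {expected_cwd}")
    try:
        branch = subprocess.run(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if branch != expected_branch:
            mismatches.append(f"branch is {branch}, expected {expected_branch}")
    except (subprocess.CalledProcessError, FileNotFoundError):
        mismatches.append("not inside a git checkout")
    for path in expected_files:
        if not Path(path).exists():
            mismatches.append(f"missing expected file: {path}")
    return mismatches
```

Each mismatch is a concrete, checkable fact, which keeps the recovery action grounded in observation rather than memory.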

Do not claim unsupported auto-healing actions like “reset agent state” or “update harness config” unless you are actually doing them through real tools in the current environment.

Contained recovery checklist:

## Recovery Action
- Diagnosis chosen:
- Smallest action taken:
- Why this is safe:
- What evidence would prove the fix worked:

Phase 4: Introspection Report

End with a report that makes the recovery legible to the next agent or human.

## Agent Self-Debug Report
- Session / task:
- Failure:
- Root cause:
- Recovery action:
- Result: success | partial | blocked
- Token / time burn risk:
- Follow-up needed:
- Preventive change to encode later:
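The report format above can be enforced mechanically so a run cannot end with a bare claim of success. A sketch; the field keys and the `render_report` helper are illustrative, not a defined interface:

```python
# The only results the Output Standard allows.
VALID_RESULTS = {"success", "partial", "blocked"}

def render_report(fields: dict) -> str:
    """Render the self-debug report; rejects results outside the
    allowed set so vague 'I fixed it' endings cannot slip through."""
    if fields.get("result") not in VALID_RESULTS:
        raise ValueError(f"result must be one of {sorted(VALID_RESULTS)}")
    labels = [
        ("session", "Session / task"),
        ("failure", "Failure"),
        ("root_cause", "Root cause"),
        ("recovery_action", "Recovery action"),
        ("result", "Result"),
        ("burn_risk", "Token / time burn risk"),
        ("follow_up", "Follow-up needed"),
        ("preventive_change", "Preventive change to encode later"),
    ]
    body = "\n".join(f"- {label}: {fields.get(key, '')}" for key, label in labels)
    return "## Agent Self-Debug Report\n" + body
```

Validating the `result` field at render time is what turns the Output Standard from a convention into a check.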

Recovery Heuristics

Prefer these interventions in order:

  1. Restate the real objective in one sentence.
  2. Verify the world state instead of trusting memory.
  3. Shrink the failing scope.
  4. Run one discriminating check.
  5. Only then retry.

Bad pattern:

  • retrying the same action three times with slightly different wording

Good pattern:

  • capture failure
  • classify the pattern
  • run one direct check
  • change the plan only if the check supports it

Integration with ECC

  • Use verification-loop after recovery if code was changed.
  • Use continuous-learning-v2 when the failure pattern is worth turning into an instinct or later skill.
  • Use council when the issue is not technical failure but decision ambiguity.
  • Use workspace-surface-audit if the failure came from conflicting local state or repo drift.

Output Standard

When this skill is active, do not end with “I fixed it” alone.

Always provide:

  • the failure pattern
  • the root-cause hypothesis
  • the recovery action
  • the evidence that the situation is now better or still blocked
Files

1 file · 1.0 KB


Overall Score

87/100

Grade

A

Excellent

Safety

92

Quality

88

Clarity

87

Completeness

80

Summary

A structured self-debugging workflow that teaches AI agents to systematically diagnose and recover from repeated failures, token waste, and context drift. The skill guides agents through failure capture, root-cause diagnosis, contained recovery, and introspection reporting before escalating to humans.

Detected Capabilities

  • Failure state capture and documentation
  • Pattern matching against known agent failure types
  • Root-cause diagnosis (logic, state, environment, policy failures)
  • Contained recovery actions with safety checks
  • Structured introspection reporting for human handoff
  • Context audit and optimization guidance
  • Integration with companion ECC skills (verification-loop, continuous-learning, council)

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • agent loop recovery
  • debug repeated failures
  • context overflow diagnosis
  • token waste investigation
  • environment state mismatch
  • infinite retry pattern

Risk Signals

INFO

No file writes, shell execution, or external network calls detected

SKILL.md content
INFO

Skill explicitly prevents unsupported auto-healing claims (e.g., 'reset agent state' without real tools)

Phase 3: Contained Recovery section
INFO

No credential access, environment variable reads for secrets, or sensitive file operations

Full document scope

Use Cases

  • Recover from infinite tool-call loops or repeated retries
  • Diagnose context overflow or prompt drift degrading agent reasoning
  • Identify and verify environment state mismatches (cwd, branch, services)
  • Isolate deterministic vs. transient failures before retry
  • Document failure patterns for prevention and later skill development

Quality Notes

  • Excellent scope boundaries: explicitly defines what to activate for and what to delegate to other skills (verification-loop, continuous-learning-v2, council, workspace-surface-audit)
  • Strong pedagogical structure with four clear phases, diagnostic tables, and concrete examples (good vs. bad patterns)
  • Well-documented recovery heuristics with ordered preference list (restate goal → verify world → shrink scope → discriminate → retry)
  • Practical diagnostic checklist templates provided for each phase, reducing ambiguity about what 'done' looks like
  • Integration guidance clear: explicitly names downstream skills and their responsibilities, reducing duplicate or conflicting activations
  • Recovery safety emphasized through 'contained' language and explicit caution against unsupported auto-healing claims
  • Output standard enforces legibility: demands failure pattern + hypothesis + action + evidence, not vague 'it's fixed' claims
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews


No reviews yet


Version History

v1.1

Content updated

2026-04-20

Latest
v1.0

No changelog

2026-04-12
