Yeachan-Heo/learner
Extract a learned skill from the current conversation

v1.1 · Saved Apr 20, 2026

Learner Skill

This is a Level 7 (self-improving) skill. It has two distinct sections:

  • Expertise: Domain knowledge about what makes a good skill. Updated automatically as patterns are discovered.
  • Workflow: Stable extraction procedure. Rarely changes.

Only the Expertise section should be updated during improvement cycles.


Expertise

This section contains domain knowledge that improves over time. It can be updated by the learner itself when new patterns are discovered.

Core Principle

Reusable skills are not code snippets to copy-paste, but principles and decision-making heuristics that teach Claude HOW TO THINK about a class of problems.

The difference:

  • BAD (mimicking): "When you see ConnectionResetError, add this try/except block"
  • GOOD (reusable skill): "In async network code, any I/O operation can fail independently due to client/server lifecycle mismatches. The principle: wrap each I/O operation separately, because failure between operations is the common case, not the exception."
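
A minimal sketch of that principle, using only standard-library asyncio; the relay handler and all names are illustrative, not from any particular codebase:

```python
import asyncio

async def relay(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes from reader to writer until either side goes away."""
    while True:
        try:
            chunk = await reader.read(4096)
        except ConnectionResetError:
            break  # peer vanished mid-read
        if not chunk:
            break  # orderly EOF
        try:
            writer.write(chunk)
            await writer.drain()
        except (ConnectionResetError, BrokenPipeError):
            break  # peer vanished between the read and the write: the common case
    writer.close()
```

The point is the shape, not the snippet: each await gets its own failure handling, because the two peers' lifecycles are independent.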

Quality Gate

Before extracting a skill, ALL three must be true:

  • "Could someone Google this in 5 minutes?" → NO
  • "Is this specific to THIS codebase?" → YES
  • "Did this take real debugging effort to discover?" → YES

Recognition Signals

Extract ONLY after:

  • Solving a tricky bug that required deep investigation
  • Discovering a non-obvious workaround specific to this codebase
  • Finding a hidden gotcha that wastes time when forgotten
  • Uncovering undocumented behavior that affects this project

What Makes a USEFUL Skill

  1. Non-Googleable: Something you couldn't easily find via search

    • BAD: "How to read files in TypeScript" ❌
    • GOOD: "This codebase uses custom path resolution in ESM that requires fileURLToPath + specific relative paths" ✓
  2. Context-Specific: References actual files, error messages, or patterns from THIS codebase

    • BAD: "Use try/catch for error handling" ❌
    • GOOD: "The aiohttp proxy in server.py:42 crashes on ClientDisconnectedError - wrap StreamResponse in try/except" ✓
  3. Actionable with Precision: Tells you exactly WHAT to do and WHERE

    • BAD: "Handle edge cases" ❌
    • GOOD: "When seeing 'Cannot find module' in dist/, check tsconfig.json moduleResolution matches package.json type field" ✓
  4. Hard-Won: Took significant debugging effort to discover

    • BAD: Generic programming patterns ❌
    • GOOD: "Race condition in worker.ts - the Promise.all at line 89 needs await before the map callback returns" ✓

Anti-Patterns (DO NOT EXTRACT)

  • Generic programming patterns (use documentation instead)
  • Refactoring techniques (these are universal)
  • Library usage examples (use library docs)
  • Type definitions or boilerplate
  • Anything a junior dev could Google in 5 minutes

Workflow

This section contains the stable extraction procedure. It should NOT be updated during improvement cycles.

Step 1: Gather Required Information

  • Problem Statement: The SPECIFIC error, symptom, or confusion that occurred

    • Include actual error messages, file paths, line numbers
    • Example: "TypeError in src/hooks/session.ts:45 when sessionId is undefined after restart"
  • Solution: The EXACT fix, not general advice

    • Include code snippets, file paths, configuration changes
    • Example: "Add null check before accessing session.user, regenerate session on 401"
  • Triggers: Keywords that would appear when hitting this problem again

    • Use error message fragments, file names, symptom descriptions
    • Example: ["sessionId undefined", "session.ts TypeError", "401 session"]
  • Scope: Almost always Project-level; reserve User-level for insights that hold across codebases and toolchains
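
Assembled into the frontmatter format required below, the session example from this step might look like this (values taken from the examples above; the skill name is illustrative):

```yaml
---
name: session-id-undefined-after-restart
description: Null-check session.user and regenerate the session on 401 after a server restart
triggers:
  - "sessionId undefined"
  - "session.ts TypeError"
  - "401 session"
---
```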

Step 2: Quality Validation

The system REJECTS skills that are:

  • Too generic (no file paths, line numbers, or specific error messages)
  • Easily Googleable (standard patterns, library usage)
  • Vague solutions (no code snippets or precise instructions)
  • Poor triggers (generic words that match everything)
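
The skill does not specify what performs this rejection. Purely as an illustration of the criteria, a hypothetical validator might look like this:

```python
import re

GENERIC_TRIGGERS = {"error", "bug", "fix", "code", "issue"}

def rejection_reasons(body: str, triggers: list[str]) -> list[str]:
    """Return reasons to reject a candidate skill; empty means it passes."""
    reasons = []
    # Too generic: no file path with an extension (and optional line number).
    if not re.search(r"\w+\.(py|ts|js|json|md)(:\d+)?", body):
        reasons.append("no file paths, line numbers, or specific errors")
    # Vague solution: no inline code or concrete instruction.
    if "`" not in body:
        reasons.append("no code snippets or precise instructions")
    # Poor triggers: generic words that match everything, or too few of them.
    if any(t.lower() in GENERIC_TRIGGERS for t in triggers):
        reasons.append("a trigger is too generic")
    if len(triggers) < 2:
        reasons.append("too few triggers to match reliably")
    return reasons
```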

Step 3: Classify as Expertise or Workflow

Before saving, determine if the learning is:

  • Expertise (domain knowledge, pattern, gotcha) → Save as {topic}-expertise.md
  • Workflow (operational procedure, step sequence) → Save as {topic}-workflow.md

This classification ensures expertise can be updated independently without destabilizing workflows.

Step 4: Save Location

  • User-level: ${CLAUDE_CONFIG_DIR:-~/.claude}/skills/omc-learned/<skill-name>.md - Rare. Only for truly portable insights.
  • Project-level: .omc/skills/<skill-name>.md - Default. Version-controlled with repo.

Required File Format

Every learned skill file MUST start with YAML frontmatter so learned-skill flat-file discovery can load it. Do not write plain markdown without frontmatter.

Minimum required frontmatter:

```yaml
---
name: <skill-name>
description: <one-line description>
triggers:
  - <trigger-1>
  - <trigger-2>
---
```

Skill Body Template

```markdown
---
name: <skill-name>
description: <one-line description>
triggers:
  - <trigger-1>
  - <trigger-2>
---

# [Skill Name]

## The Insight
What is the underlying PRINCIPLE you discovered? Not the code, but the mental model.

## Why This Matters
What goes wrong if you don't know this? What symptom led you here?

## Recognition Pattern
How do you know when this skill applies? What are the signs?

## The Approach
The decision-making heuristic, not just code. How should Claude THINK about this?

## Example (Optional)
If code helps, show it - but as illustration of the principle, not copy-paste material.
```
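
For concreteness, a hypothetical filled-in example; the scenario and names are illustrative, drawn from the async I/O principle above:

```markdown
---
name: async-io-lifecycle-mismatch
description: Wrap each async I/O operation separately; the peer can vanish between calls
triggers:
  - ConnectionResetError
  - proxy crash on client disconnect
  - StreamResponse write failed
---

# Async I/O Lifecycle Mismatch

## The Insight
In async network code, the peer's lifecycle is independent of yours, so any single await can be the one that fails.

## Why This Matters
Wrapping a whole handler in one try/except hides WHICH operation failed and masks bugs unrelated to disconnects.

## Recognition Pattern
ConnectionResetError or BrokenPipeError deep inside a handler that passed manual testing but fails under real client churn.

## The Approach
Wrap each read and each write in its own try/except; treat failure between operations as the common case and exit the loop cleanly.
```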

Key: A skill is REUSABLE if Claude can apply it to NEW situations, not just identical ones.

Related commands:

  • /oh-my-claudecode:note - Save quick notes that survive compaction (less formal than skills)
  • /oh-my-claudecode:ralph - Start a development loop with learning capture

Overall Score

84/100 · Grade B (Good)

Safety 95 · Quality 82 · Clarity 87 · Completeness 75

Summary

A meta-skill that teaches Claude how to extract, validate, and save learned skills from conversations. It defines quality gates for skill extraction (non-Googleable, context-specific, hard-won insights) and provides a stable workflow for capturing domain knowledge and codebase-specific gotchas as reusable principles rather than code snippets.

Detected Capabilities

  • Skill validation and quality gating
  • Expertise vs. workflow classification
  • Trigger keyword extraction from problem context
  • Skill file generation with YAML frontmatter
  • Project-level and user-level skill storage guidance
  • Anti-pattern detection for skill extraction

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • extract learned skill
  • capture codebase pattern
  • save debugging insight
  • skill validation gate
  • undocumented gotcha
  • non-googleable knowledge

Use Cases

  • Extract codebase-specific patterns discovered during debugging into reusable skills
  • Validate whether a discovery is worth saving as a skill or should remain a note
  • Capture non-obvious gotchas and undocumented behaviors specific to a project
  • Distinguish between generic programming patterns (not skills) and hard-won insights (skills)
  • Guide skill authorship to focus on principles and decision-making heuristics, not copy-paste code

Quality Notes

  • POSITIVE: Clear quality gate with three binary criteria (non-Googleable, codebase-specific, hard-won) that prevent low-value skill extraction.
  • POSITIVE: Excellent distinction between BAD (generic) and GOOD (specific) examples throughout. The ConnectionResetError vs. async I/O principle example effectively teaches the difference.
  • POSITIVE: Recognition signals section provides clear signals for WHEN to extract (solving tricky bugs, discovering workarounds, finding gotchas) rather than just HOW.
  • POSITIVE: Anti-patterns section is explicit and actionable — developers know what NOT to extract (refactoring techniques, library examples, boilerplate).
  • POSITIVE: Template body structure (The Insight, Why This Matters, Recognition Pattern, The Approach) forces authors to articulate principles, not just solutions.
  • POSITIVE: Workflow section is clearly marked as stable, with expertise section marked as updatable — good separation of concerns.
  • POSITIVE: Explicit guidance on file location (project-level default, user-level rare) and required frontmatter format ensures consistency.
  • MINOR WEAKNESS: Skill body template says 'Example (Optional)' but does not show what good vs. bad examples look like. A canonical example of a well-extracted skill would strengthen this.
  • MINOR WEAKNESS: Step 2 says 'The system REJECTS skills' but does not specify WHO/WHAT does the rejecting or what the user experience is when validation fails.
  • MINOR WEAKNESS: No guidance on trigger keyword quality or quantity — authors need clearer heuristics for how many triggers are sufficient, and what makes a bad trigger.
  • MINOR WEAKNESS: 'Scope: Almost always Project-level unless it's a truly universal insight' — the definition of 'truly universal' is vague and could lead to inconsistent decisions.
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews

No reviews yet.

Version History

  • v1.1 (2026-04-20, latest): Content updated
  • v1.0 (2026-04-12): No changelog
