---
name: skill-inventory-manager
description: Meta-skill that enumerates available skills, operationally categorizes them by capability (not just name), generates custom use-case pipelines, and maintains session-adaptive learning. Creates {{usecase}}_pipeline.md documents showing which skills to invoke in what order for complex tasks. Updates itself each session based on successful/failed invocations. Triggers on "what skills do I have", "how would I approach [complex task]", "generate pipeline for [usecase]", "show me skill combinations for [goal]", or at the start of a session to auto-inventory and suggest pipelines.
---
# Skill Inventory Manager

Meta-skill for understanding, categorizing, and orchestrating the skill ecosystem, with session-adaptive learning.
## Core Problem

Skills have names (e.g., "pqc-research", "docx"), but operational capability matters more than naming:

- "What skills help with security research?" → multiple options with different strengths
- "How do I combine skills for complex workflows?" → need pipeline templates
- "Which skills did I actually use successfully?" → need session memory

This skill solves four problems: skill discovery, operational categorization, pipeline generation, and usage learning.
## Phase 0: Skill Enumeration

At session start, or when invoked, scan /mnt/skills/ for all available skills:

```bash
# Scan all skill directories for SKILL.md files
find /mnt/skills -name "SKILL.md" -type f | while read -r skill; do
    # Extract frontmatter metadata
    name=$(grep "^name:" "$skill" | head -1 | cut -d: -f2- | xargs)
    desc=$(grep "^description:" "$skill" | head -1 | cut -d: -f2- | xargs)
    location=$(dirname "$skill")
    # Emit one pipe-delimited inventory record
    echo "$name|$desc|$location"
done
```
**Output:** a complete inventory with name, description, and location for each skill.
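The pipe-delimited records can then be parsed into structured entries. A minimal Python sketch (the record strings below are illustrative, not real inventory output):

```python
def parse_inventory(lines):
    """Parse 'name|desc|location' records into skill dicts."""
    skills = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Split on the first two '|' separators only, so stray pipes
        # later in the record do not break the field count
        name, desc, location = line.split("|", 2)
        skills.append({"name": name, "description": desc, "location": location})
    return skills

# Illustrative records in the format emitted by the scan above
records = [
    "docx|Create and edit Word documents|/mnt/skills/public/docx",
    "ase-skill|Systematic research with source grading|/mnt/skills/user/ase-skill",
]
inventory = parse_inventory(records)
```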
## Phase 1: Operational Categorization

Group skills by what they actually do, not by name:

### Capability Categories
```yaml
data_acquisition:
  web: [web_search, web_fetch, playwright-scraper]
  files: [file-reading, pdf-reading, docx, xlsx, pdf]
  internal: [calendar_search, event_search, gmail, drive]

data_transformation:
  documents: [docx, xlsx, pptx, pdf, markdown]
  code: [bash_tool, create_file, str_replace]
  analysis: [chart_display, data-analysis, analyst-automation-hub]

research_discovery:
  systematic: [non-western-innovation-discovery, ase-skill, deep-research]
  technical: [product-self-knowledge, avrs-cybernetic, phoenix-supply-chain-oracle]
  security: [avrs-supervisor, rangefinder, google-vrp-reporting]

orchestration:
  pipelines: [swarm-master, vulnarch-cascade]
  workflows: [test-driven-dev, test-suite-architect, deep-module-refactor]

creation:
  documents: [docx, pptx, xlsx, pdf, canvas-design]
  code: [frontend-design, web-artifacts-builder, mcp-builder]
  skills: [skill-creator, advanced-skill-creator]

specialized_domains:
  business: [slumdog-billionaire, moonshot-architect]
  communication: [message_compose, doc-coauthoring]
  creativity: [algorithmic-art, theme-factory]
```
**Key insight:** Skills map many-to-many with capabilities. docx is both acquisition (reading) AND transformation (creating).
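Because the mapping is many-to-many, it helps to be able to invert it on demand. A small sketch (the two-category map below is a cut-down illustration, not the full taxonomy):

```python
# Cut-down illustration of the category -> skills taxonomy above
CAPABILITIES = {
    "data_acquisition": ["web_search", "docx", "pdf-reading"],
    "creation": ["docx", "pptx", "canvas-design"],
}

def skills_to_capabilities(capabilities):
    """Invert a category -> skills map into a skill -> categories map."""
    by_skill = {}
    for category, skills in capabilities.items():
        for skill in skills:
            by_skill.setdefault(skill, []).append(category)
    return by_skill

mapping = skills_to_capabilities(CAPABILITIES)
```

Here `mapping["docx"]` lists both categories, matching the key insight above.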
## Phase 2: Pipeline Generation

For a given use case, generate a pipeline showing skill invocation order:

### Pipeline Template Structure
```markdown
# {{USECASE}}_pipeline.md

## Goal
[What this pipeline achieves]

## Prerequisites
- Skills required: [list]
- Data/context needed: [list]
- Estimated complexity: [simple/moderate/complex]

## Pipeline Stages

### Stage 1: [Discovery/Acquisition/Setup]
**Skills:** [skill_name_1, skill_name_2]
**Actions:**
1. [Specific action with skill_1]
2. [Specific action with skill_2]
**Outputs:** [What's produced]
**Validation:** [How to verify success]

### Stage 2: [Processing/Analysis/Transformation]
**Skills:** [skill_name_3]
**Actions:**
[...]

### Stage N: [Delivery/Integration/Reporting]
**Skills:** [skill_name_N]
**Actions:**
[...]

## Fallback Strategies
- If [skill] unavailable: [alternative approach]
- If [stage] fails: [recovery strategy]

## Success Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]

## Session Notes
[Auto-populated: what worked, what failed, timing]
```
### Example Pipeline: Security Vulnerability Research
```markdown
# vulnerability_research_pipeline.md

## Goal
Discover, validate, and report security vulnerabilities in a target system.

## Prerequisites
- Skills: ase-skill, avrs-supervisor, phoenix-supply-chain-oracle, google-vrp-reporting
- Target scope defined
- Authorization confirmed

## Pipeline Stages

### Stage 1: Reconnaissance & Surface Mapping
**Skills:** ase-skill
**Actions:**
1. Invoke ase-skill with target domain
2. Generate threat surface map (8 parallel seeds)
3. Grade all sources with the Admiralty Scale
**Outputs:** Threat surface inventory, ranked attack vectors
**Validation:** ≥30 unique sources, ≥B2 grade threshold

### Stage 2: Dependency Chain Analysis
**Skills:** phoenix-supply-chain-oracle
**Actions:**
1. Extract binary dependencies from target
2. Query NVD/OSV for known CVEs
3. Verify vulnerable code presence (check for vendor backports)
**Outputs:** KEEL-sealed provenance graph, vulnerable dependency list
**Validation:** All dependencies traced, CVEs verified in actual code

### Stage 3: Dynamic Exploitation
**Skills:** avrs-supervisor
**Actions:**
1. Select the highest-priority vulnerability from Stage 2
2. Run AVRS graph-aware static analysis
3. Execute dynamic probing with GDB/MI
4. Generate PoC exploit
**Outputs:** Working exploit, deterministic reproduction steps
**Validation:** Exploit successful in isolated environment

### Stage 4: Reporting
**Skills:** google-vrp-reporting
**Actions:**
1. Document attack scenario with AVRS witness chain
2. Create reproduction steps following Google VRP standards
3. Generate responsible disclosure report
**Outputs:** VRP-compliant report ready for submission
**Validation:** Report includes all required sections per Google standards

## Fallback Strategies
- If avrs-supervisor unavailable: manual static analysis + gdb
- If Stage 3 fails: pivot to a different vulnerability from Stage 2
- If no exploitable vulns found: report findings as security hardening recommendations

## Success Criteria
- [ ] Complete threat surface mapped
- [ ] ≥1 verified vulnerability with PoC
- [ ] VRP report submitted with all required evidence
- [ ] Witness chain preserved for provenance

## Session Notes
[2026-05-12: Successfully found buffer overflow in target, AVRS generated ROP chain, Google VRP report submitted]
```
## Phase 3: Capability Mapping

For any given goal, identify which skills provide paths to a solution:
```python
def find_skills_for_goal(goal, skill_inventory):
    """
    goal: e.g. "Create a Word document from web research"
    Returns multiple candidate paths, ranked simplest-first.
    """
    paths = []
    goal = goal.lower()

    # Path 1: Direct
    if "word document" in goal and "web research" in goal:
        paths.append({
            "name": "Direct web-to-docx",
            "skills": ["web_search", "web_fetch", "docx"],
            "stages": ["Search web", "Fetch content", "Create Word doc"],
            "complexity": "simple",
        })

    # Path 2: With intermediates
    if "web research" in goal:
        paths.append({
            "name": "Systematic research with documentation",
            "skills": ["ase-skill", "docx"],
            "stages": ["ASE systematic search", "Consolidate to Word"],
            "complexity": "moderate",
        })

    # Path 3: Full pipeline
    if requires_deep_analysis(goal):  # heuristic classifier, defined elsewhere
        paths.append({
            "name": "Deep research to professional doc",
            "skills": ["non-western-innovation-discovery", "swarm-master", "docx", "theme-factory"],
            "stages": ["RAD research", "Orchestrate sources", "Create doc", "Apply theme"],
            "complexity": "complex",
        })

    # Sorting the label strings alphabetically would rank "complex" first,
    # so map complexity labels to an explicit rank instead
    rank = {"simple": 0, "moderate": 1, "complex": 2}
    return sorted(paths, key=lambda p: rank[p["complexity"]])
```
## Phase 4: Session-Adaptive Learning

Track skill usage and outcomes within the session:
```python
from datetime import datetime

class SessionLearning:
    def __init__(self):
        self.invocations = []
        self.successes = []
        self.failures = []
        self.timing = {}

    def record_invocation(self, skill_name, context, outcome, duration):
        self.invocations.append({
            "skill": skill_name,
            "context": context,
            "outcome": outcome,  # "success" | "partial" | "failed"
            "duration": duration,
            "timestamp": datetime.now(),
        })
        if outcome == "success":
            self.successes.append(skill_name)
        elif outcome == "failed":
            self.failures.append(skill_name)

    def get_recommendations(self, goal):
        """Based on session history, recommend skills."""
        # Prioritize recently successful skills
        recent_successes = self.successes[-5:]
        # Deprioritize recently failed skills
        recent_failures = self.failures[-3:]
        # Return ranked list (prioritize and all_skills defined elsewhere)
        return prioritize(all_skills, recent_successes, recent_failures)
```
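The `prioritize` helper is referenced but never defined. One minimal sketch, assuming skills are plain name strings and a stable three-tier ranking suffices:

```python
def prioritize(all_skills, recent_successes, recent_failures):
    """Rank skills: recent successes first, recent failures last."""
    def tier(skill):
        if skill in recent_successes:
            return 0  # boost
        if skill in recent_failures:
            return 2  # demote
        return 1      # neutral
    # sorted() is stable, so skills within a tier keep their input order
    return sorted(all_skills, key=tier)

ranked = prioritize(
    ["docx", "avrs-supervisor", "web_search"],
    recent_successes=["web_search"],
    recent_failures=["avrs-supervisor"],
)
# → ["web_search", "docx", "avrs-supervisor"]
```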
### Auto-Update Pipeline Files

At the end of each skill invocation, append session notes:
```markdown
## Session Notes

### 2026-05-12 Session
- **Stage 1 (ase-skill):** Completed in 8 minutes, found 42 sources (35 B2+)
- **Stage 2 (phoenix):** Completed in 12 minutes, mapped 127 dependencies
- **Stage 3 (avrs):** FAILED - target had ASLR+PIE, needed manual analysis fallback
- **Stage 4 (google-vrp):** Completed with fallback findings, 6 minutes

**Lessons:**
- AVRS struggles with full ASLR+PIE - add pre-check in future
- Fallback to manual analysis worked well
- Total time: 26 minutes (under 30min target)

**Recommended Changes:**
- Add ASLR/PIE detection to Stage 1
- Include manual analysis as parallel track, not just fallback
```
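Appending such an entry can be sketched in a few lines of Python (the filename and note text here are illustrative, not prescribed by the skill):

```python
import os
import tempfile
from datetime import date

def append_session_notes(pipeline_path, notes):
    """Append a dated '### <date> Session' entry to a pipeline file."""
    entry = [f"\n### {date.today().isoformat()} Session"]
    entry.extend(f"- {note}" for note in notes)
    with open(pipeline_path, "a", encoding="utf-8") as f:
        f.write("\n".join(entry) + "\n")

# Illustrative usage against a throwaway file
path = os.path.join(tempfile.gettempdir(), "vulnerability_research_pipeline.md")
append_session_notes(path, ["**Stage 1 (ase-skill):** completed in 8 minutes"])
```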
## Phase 5: Pipeline Suggestions

At session start, proactively suggest pipelines based on context:
```python
def suggest_pipelines(user_context, recent_activity):
    """
    user_context: current conversation, uploaded files, calendar
    recent_activity: what the user has been working on
    """
    suggestions = []

    # If the user uploaded PDFs
    if has_uploaded_pdfs(user_context):
        suggestions.append({
            "pipeline": "document_analysis_pipeline",
            "reason": "You uploaded PDFs - extract, analyze, create report?",
            "skills": ["pdf-reading", "data-analysis", "docx"],
        })

    # If discussing security
    if contains_keywords(user_context, ["vulnerability", "security", "CVE"]):
        suggestions.append({
            "pipeline": "vulnerability_research_pipeline",
            "reason": "Security discussion detected",
            "skills": ["ase-skill", "avrs-supervisor", "google-vrp-reporting"],
        })

    # If calendar events are approaching
    if has_upcoming_events(user_context):
        suggestions.append({
            "pipeline": "event_prep_pipeline",
            "reason": "Upcoming events - need prep materials?",
            "skills": ["event_search", "doc-coauthoring", "pptx"],
        })

    return suggestions
```
## Common Use Case Pipelines

### Research & Documentation

```
Goal: Research topic → comprehensive report
Pipeline: [ase-skill OR non-western-innovation-discovery]
  → [swarm-master if orchestration needed]
  → [docx OR markdown]
  → [theme-factory if polished output]
```

### Security Assessment

```
Goal: Assess security → report vulnerabilities
Pipeline: [ase-skill for recon]
  → [phoenix-supply-chain-oracle for dependencies]
  → [avrs-supervisor for exploitation]
  → [google-vrp-reporting for documentation]
```

### Document Creation

```
Goal: Create professional document
Pipeline: [web_search for content OR file-reading for source]
  → [docx OR pptx OR xlsx based on format]
  → [theme-factory for styling]
```

### Code Development

```
Goal: Build feature with quality
Pipeline: [test-driven-dev for TDD loop]
  → [frontend-design if UI needed]
  → [test-suite-architect for comprehensive testing]
  → [deep-module-refactor for cleanup]
```

### Business Strategy

```
Goal: Business plan or impact project
Pipeline: [moonshot-architect for high-impact framing]
  → [slumdog-billionaire for operational execution]
  → [docx for formal documentation]
```
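These chains can also be encoded as data so a runner can walk the stages in order. The structure below is one possible encoding (an assumption, not something the skill prescribes), shown for the Research & Documentation chain:

```python
# Stage list for the Research & Documentation chain; entries marked
# "optional" correspond to the "if needed" steps in the chain above
RESEARCH_PIPELINE = [
    {"stage": "Research", "skills": ["ase-skill", "non-western-innovation-discovery"]},
    {"stage": "Orchestration", "skills": ["swarm-master"], "optional": True},
    {"stage": "Document", "skills": ["docx", "markdown"]},
    {"stage": "Polish", "skills": ["theme-factory"], "optional": True},
]

def required_stages(pipeline):
    """Names of stages that must always run."""
    return [s["stage"] for s in pipeline if not s.get("optional", False)]
```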
## Skill Inventory Output Format
```markdown
# Skill Inventory - Session [DATE]

## Available Skills: [COUNT]

### By Capability

#### Data Acquisition
1. **web_search** - Search web for current information
   - Location: /mnt/skills/public/web-search/
   - Use when: Need current/recent information beyond training
2. **ase-skill** - Adversarial Swarm Evolution (systematic research)
   - Location: /mnt/skills/user/ase-skill/
   - Use when: Need deep research with source grading

[... continue for all skills ...]

### By Complexity
- **Simple** (single invocation): web_search, bash_tool, docx
- **Moderate** (2-3 steps): ase-skill, test-driven-dev
- **Complex** (orchestration): swarm-master, vulnarch-cascade, slumdog-billionaire

### Recently Used (This Session)
1. non-western-innovation-discovery (2 invocations, 2 successes)
2. docx (1 invocation, 1 success)
3. bash_tool (47 invocations, 47 successes)

### Never Used (Consider for future)
- algorithmic-art
- canvas-design
- localization-toolkit

## Recommended Pipelines for Current Context
Based on conversation:
1. **security_research_pipeline** (high confidence)
   - Detected keywords: vulnerability, CVE, security
   - Suggested skills: ase-skill → avrs-supervisor → google-vrp-reporting
2. **technical_documentation_pipeline** (medium confidence)
   - Detected context: skill creation, technical writing
   - Suggested skills: skill-creator → docx → theme-factory
```
## Auto-Generation at Session Start
```python
# Pseudo-code for auto-invocation at session start;
# helper functions are defined elsewhere in this skill
def on_session_start():
    # 1. Enumerate all skills
    skills = scan_skill_directories()
    # 2. Categorize operationally
    categories = categorize_by_capability(skills)
    # 3. Check recent history (if available)
    recent = load_recent_usage()
    # 4. Analyze current context
    context = analyze_conversation()
    # 5. Generate suggestions
    suggestions = suggest_pipelines(context, recent)
    # 6. Create inventory file
    create_inventory_markdown(skills, categories, recent, suggestions)
    # 7. Optionally prompt the user
    if suggestions:
        print(f"I found {len(suggestions)} relevant pipelines for your context. Show them?")
```
## Usage Examples

**User:** "What skills do I have for security research?"

**Response:**

```
Security research capabilities:

Primary:
- ase-skill (systematic OSINT with source grading)
- avrs-supervisor (vulnerability exploitation, ROP chains)
- phoenix-supply-chain-oracle (dependency CVE mapping)
- google-vrp-reporting (responsible disclosure documentation)

Supporting:
- rangefinder (autonomous target discovery)
- test-suite-architect (security testing suites)
- bash_tool (manual analysis, custom tools)

Recommended pipeline: vulnerability_research_pipeline.md
[Show full pipeline? Y/N]
```
**User:** "Generate pipeline for creating a professional pitch deck from web research"

**Response:**

```markdown
# pitch_deck_pipeline.md

## Goal
Research topic → professional presentation

## Pipeline Stages

### Stage 1: Content Research
**Skills:** ase-skill OR web_search (depending on depth needed)
**Actions:**
1. Define research seeds (8 parallel for ase-skill, or 3-5 searches for web_search)
2. Gather content with source attribution
3. Organize by narrative structure

### Stage 2: Slide Creation
**Skills:** pptx
**Actions:**
1. Create presentation structure (problem → solution → traction → ask)
2. Add content from Stage 1
3. Include data visualizations if applicable

### Stage 3: Design Polish
**Skills:** theme-factory
**Actions:**
1. Select appropriate theme (corporate, tech, creative, etc.)
2. Apply consistent styling
3. Review for visual clarity

## Success Criteria
- [ ] All claims sourced from research
- [ ] Narrative flows logically
- [ ] Visuals support message
- [ ] Professional appearance
```
**Skill Version:** 1.0
**Created:** 2026-05-12
**Self-Updating:** Yes (appends session notes)
**License:** Evermoor Sanctuary License (ESL-ANCSA-MRA-IndiModSHA v1.0)