<Use_When>
- You already have a mission and evaluator from /deep-interview --autoresearch
- You want persistent single-mission improvement with strict evaluation
- You need durable experiment logs under .omc/autoresearch/
- You want a supported path for periodic reruns via Claude Code native cron </Use_When>
<Do_Not_Use_When>
- You need evaluator generation at runtime; use /deep-interview --autoresearch first
- You need multiple missions orchestrated together; v1 forbids that
- You want the deprecated omc autoresearch CLI flow; it is no longer authoritative </Do_Not_Use_When>
<Required_Artifacts>
Canonical persistent storage lives under .omc/autoresearch/<mission-slug>/ and/or .omc/logs/autoresearch/<run-id>/.
Minimum required artifacts:
- mission spec
- evaluator script or command reference
- per-iteration evaluation JSON
- markdown decision logs
Recommended canonical shape:
.omc/autoresearch/<mission-slug>/
  mission.md
  evaluator.json
  runs/<run-id>/
    evaluations/
      iteration-0001.json
      iteration-0002.json
    decision-log.md
Reuse existing runtime artifacts when available rather than duplicating them unnecessarily. </Required_Artifacts>
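As a concrete illustration of the recommended shape, here is a minimal Python sketch that appends a per-iteration evaluation JSON under that layout. The helper name `write_iteration_eval` and the fields inside `result` are hypothetical examples, not part of the contract; only the directory structure follows the shape above.

```python
import json
from pathlib import Path

def write_iteration_eval(root: Path, mission_slug: str, run_id: str,
                         iteration: int, result: dict) -> Path:
    """Write one per-iteration evaluation JSON under the canonical layout.

    The keys in `result` are whatever the evaluator contract defines;
    only the directory shape here mirrors the recommended structure.
    """
    eval_dir = (root / ".omc" / "autoresearch" / mission_slug
                / "runs" / run_id / "evaluations")
    eval_dir.mkdir(parents=True, exist_ok=True)
    path = eval_dir / f"iteration-{iteration:04d}.json"
    path.write_text(json.dumps(result, indent=2))
    return path

# Example: record the second iteration of a hypothetical run
p = write_iteration_eval(Path("."), "example-mission", "run-20240101",
                         2, {"score": 0.82, "passed": True})
```

The zero-padded `iteration-%04d` naming keeps evaluation files in lexicographic order, which matches the `iteration-0001.json` / `iteration-0002.json` shape shown above.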
<Cron_Integration> Claude Code native cron is a supported integration point for periodic mission enhancement. In v1, prefer documenting/configuring cron inputs over building a large scheduler UI.
If cron is used:
- keep one mission per scheduled job
- preserve the same mission/evaluator contract
- append new run artifacts rather than overwriting prior experiments </Cron_Integration>
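The append-rather-than-overwrite rule can be sketched as a small helper a cron-invoked job might call before doing any work. The timestamp-based run-id format here is an assumption for illustration, not part of the mission/evaluator contract.

```python
import time
from pathlib import Path

def start_new_run(mission_dir: Path) -> Path:
    """Create a fresh run directory rather than overwriting a prior one.

    `mission_dir` is the .omc/autoresearch/<mission-slug>/ root; the
    timestamp-based run-id shown here is an illustrative choice.
    """
    run_id = time.strftime("run-%Y%m%dT%H%M%S")
    run_dir = mission_dir / "runs" / run_id
    # exist_ok=False guarantees we never clobber an earlier run's artifacts
    (run_dir / "evaluations").mkdir(parents=True, exist_ok=False)
    return run_dir
```

Because `mkdir` is called with `exist_ok=False`, a collision with an existing run directory raises instead of silently reusing it, which enforces the append-only policy for prior experiments.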
<Execution_Policy>
- Do not hand execution back to omc autoresearch
- Do not create multi-mission orchestration
- Prefer reusing src/autoresearch/* runtime/schema helpers where they already match the stricter contract
- Keep logs useful to humans, not only machines </Execution_Policy>