affaan-m/deep-research
Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.

global
0 installs · 0 uses · ~1.1k
v1.1 · Saved Apr 20, 2026

Deep Research

Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.

When to Activate

  • User asks to research any topic in depth
  • Competitive analysis, technology evaluation, or market sizing
  • Due diligence on companies, investors, or technologies
  • Any question requiring synthesis from multiple sources
  • User says "research", "deep dive", "investigate", or "what's the current state of"

MCP Requirements

At least one of:

  • firecrawl: firecrawl_search, firecrawl_scrape, firecrawl_crawl
  • exa: web_search_exa, web_search_advanced_exa, crawling_exa

Both together give the best coverage. Configure in ~/.claude.json or ~/.codex/config.toml.
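
A minimal firecrawl entry in `~/.claude.json` might look like this (the `npx firecrawl-mcp` command and `FIRECRAWL_API_KEY` variable are illustrative; check your MCP server's documentation for the exact invocation):

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "<your-key>" }
    }
  }
}
```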

Workflow

Step 1: Understand the Goal

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research

Break the topic into 3-5 research sub-questions. Example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?
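
A lightweight way to track the plan is as plain data. This sketch (the `ResearchPlan` class is hypothetical, not part of any MCP tool) uses the example topic above:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    topic: str
    sub_questions: list[str]
    findings: dict = field(default_factory=dict)  # sub-question -> sourced notes

plan = ResearchPlan(
    topic="Impact of AI on healthcare",
    sub_questions=[
        "What are the main AI applications in healthcare today?",
        "What clinical outcomes have been measured?",
        "What are the regulatory challenges?",
        "What companies are leading this space?",
        "What's the market size and growth trajectory?",
    ],
)
```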

Step 3: Execute Searches

For EACH sub-question, search using available MCP tools:

With firecrawl:

firecrawl_search(query: "<sub-question keywords>", limit: 8)

With exa:

web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix general and news-focused queries
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
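
The dedupe-and-prioritize step can be sketched as follows (the helper names and the crude TLD heuristic are illustrative; real source assessment should also weigh the publisher, not just the domain):

```python
from urllib.parse import urlparse

TIER = {"edu": 0, "gov": 0, "org": 1}  # lower = higher priority; rough heuristic

def source_rank(url: str) -> int:
    """Rank a URL by top-level domain: academic/official first, then the rest."""
    tld = urlparse(url).netloc.rsplit(".", 1)[-1]
    return TIER.get(tld, 2)

def dedupe_and_rank(urls: list[str]) -> list[str]:
    """Drop duplicate URLs (ignoring scheme and trailing slash), sort by tier."""
    seen, unique = set(), []
    for url in urls:
        key = urlparse(url).netloc + urlparse(url).path.rstrip("/")
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return sorted(unique, key=source_rank)
```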

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

With firecrawl:

firecrawl_scrape(url: "<url>")

With exa:

crawling_exa(url: "<url>", tokensNum: 5000)

Read 3-5 key sources in full for depth. Do not rely only on search snippets.
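
The fetch loop can be sketched like this, with a `fetch` callable standing in for `firecrawl_scrape` or `crawling_exa` (names and error handling are illustrative); failed or paywalled sources are skipped rather than aborting the run:

```python
def deep_read(urls: list[str], fetch, limit: int = 5) -> dict[str, str]:
    """Fetch full content for up to `limit` URLs, skipping any that fail."""
    docs = {}
    for url in urls:
        if len(docs) >= limit:
            break
        try:
            docs[url] = fetch(url)
        except Exception:
            continue  # skip unreachable/paywalled sources; note the gap in the report
    return docs
```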

Step 5: Synthesize and Write Report

Structure the report:

# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Deliver

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, save full report to a file

Parallel Research with Subagents

For broad topics, use Claude Code's Task tool to parallelize:

Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes

Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
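
Conceptually, the fan-out looks like this Python sketch (Claude Code's Task tool handles the actual agent dispatch; `research_one` stands in for a subagent call):

```python
from concurrent.futures import ThreadPoolExecutor

def research(sub_questions: list[str], research_one) -> list:
    """Split sub-questions into three batches and run one worker per batch."""
    batches = [sub_questions[0:2], sub_questions[2:4], sub_questions[4:]]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(lambda batch: [research_one(q) for q in batch], batches))
    # Flatten per-batch findings for the main session to synthesize
    return [finding for batch in results for finding in batch]
```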

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
  6. Separate fact from inference. Label estimates, projections, and opinions clearly.
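
Rule 2 can be applied mechanically once findings are collected. A sketch, assuming claims are tracked as a mapping from claim text to supporting source URLs:

```python
def label_claims(claims: dict[str, list[str]]) -> dict[str, str]:
    """Label each claim by how many independent sources support it."""
    return {
        claim: "unverified (single source)" if len(sources) < 2 else "corroborated"
        for claim, sources in claims.items()
    }
```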

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
Files

1 file · 1.0 KB

Overall Score: 78/100 · Grade: B (Good)

Safety: 82 · Quality: 72 · Clarity: 85 · Completeness: 68

Summary

A multi-source web research skill that uses firecrawl and exa MCP tools to search the web, synthesize findings from multiple sources, and deliver cited research reports. The skill guides agents through breaking topics into sub-questions, executing parallel searches, deep-reading key sources, and structuring findings with proper attribution and methodology documentation.

Detected Capabilities

  • Web search via firecrawl and exa MCPs
  • Full-page content scraping and crawling
  • Multi-source query execution with keyword variation
  • Parallel research execution using subagents
  • Structured report generation with citations
  • Source prioritization and quality assessment
  • Research methodology documentation

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

research topic, competitive analysis, market sizing, due diligence, technology evaluation, deep dive, investigate trends, synthesize sources

Risk Signals

  • INFO: Network access to external web sources via firecrawl_search, firecrawl_scrape, web_search_exa, crawling_exa (Step 3, Step 4, MCP Requirements section)
  • WARNING: Reliance on external MCP tools (firecrawl, exa) that are not bundled; skill cannot function without them (MCP Requirements section)
  • WARNING: No validation of URL safety or source reputation before scraping; relies on user trust of sources (Step 4: Deep-Read Key Sources)

Use Cases

  • Competitive analysis and market sizing
  • Technology evaluation and due diligence
  • In-depth topic research with source attribution
  • Synthesizing evidence across multiple web sources
  • Writing cited research reports

Quality Notes

  • Positive: Clear, step-by-step workflow with numbered progression from planning through delivery.
  • Positive: Explicit quality rules (every claim needs source, cross-reference, recency prioritization, gap acknowledgment) that guide agents toward high-integrity research.
  • Positive: Concrete examples provided for both workflow (AI healthcare impact) and use cases, making instructions actionable.
  • Positive: Well-structured report template with sections for executive summary, themes, takeaways, sources, and methodology.
  • Positive: Addresses parallelization strategy for broad topics using subagents, enabling scalability.
  • Negative: No error handling guidance — what should the agent do if a search returns no results, or if all sources are behind paywalls?
  • Negative: No guidance on handling conflicting information between sources or assessing source reliability beyond 'academic, official, reputable > blogs > forums'.
  • Negative: Missing guidance on token/cost limits for large scraping operations (firecrawl and exa may have usage constraints).
  • Negative: No instructions for handling rate limiting, timeouts, or failed requests during multi-source searches.
  • Negative: Vague on 'confidence levels' in report header — no criteria for assigning High/Medium/Low confidence.
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews

No reviews yet

Version History

  • v1.1 (Latest) · 2026-04-20 · Content updated
  • v1.0 · 2026-03-16 · Seeded from github.com/affaan-m/everything-claude-code
