affaan-m/scholar-evaluation

Structured scholarly-work evaluation for papers, proposals, literature reviews, methods sections, evidence quality, citation support, and research-writing feedback.

global · 0 installs · 0 uses · ~1.2k
v1.0 · Saved May 15, 2026

Scholar Evaluation

Use this skill to evaluate academic or scientific work with a repeatable rubric.

When to Use

  • Reviewing a research paper, proposal, thesis chapter, or literature review.
  • Checking whether claims are supported by cited evidence.
  • Evaluating methodology, study design, analysis, or limitations.
  • Comparing two or more papers for quality or relevance.
  • Producing structured feedback for revision.

Evaluation Scope

Start by identifying the artifact:

  • empirical research paper
  • theoretical paper
  • technical report
  • systematic or narrative literature review
  • research proposal
  • thesis or dissertation chapter
  • conference abstract or short paper

Then choose scope:

  • comprehensive: all rubric dimensions
  • targeted: one or two dimensions, such as method or citations
  • comparative: rank multiple works against the same rubric

Rubric

Score each applicable dimension from 1 to 5:

  • 5: excellent; clear, rigorous, and publication-ready
  • 4: good; minor improvements needed
  • 3: adequate; meaningful gaps but usable
  • 2: weak; substantial revision needed
  • 1: poor; major validity or clarity problems

Use N/A for dimensions that do not apply.

1. Problem and Research Question

  • Is the problem clear and specific?
  • Is the contribution meaningful?
  • Are scope and assumptions explicit?
  • Does the question match the claimed contribution?

2. Literature and Context

  • Is relevant prior work covered?
  • Does the work synthesize rather than merely list sources?
  • Are gaps accurately identified?
  • Are recent and foundational sources balanced?

3. Methodology

  • Does the method answer the research question?
  • Are design choices justified?
  • Are variables, datasets, participants, or materials described clearly?
  • Could another researcher reproduce the work?
  • Are ethical and practical constraints acknowledged?

4. Data and Evidence

  • Are data sources credible and appropriate?
  • Is sample size or corpus coverage adequate?
  • Are inclusion, exclusion, and preprocessing decisions documented?
  • Are missing data and bias risks discussed?

5. Analysis

  • Are statistical, qualitative, or computational methods appropriate?
  • Are baselines and controls fair?
  • Are uncertainty, sensitivity, or robustness checks included when needed?
  • Are alternative explanations considered?

6. Results and Interpretation

  • Are results clearly presented?
  • Do claims stay within the evidence?
  • Are figures, tables, and metrics understandable?
  • Are negative or null results handled honestly?

7. Limitations and Threats to Validity

  • Are limitations specific rather than generic?
  • Are internal, external, construct, and conclusion-validity risks addressed?
  • Does the paper distinguish speculation from demonstrated results?

8. Writing and Structure

  • Is the argument easy to follow?
  • Are sections organized around the research question?
  • Are definitions and notation clear?
  • Is the tone precise and scholarly?

9. Citations

  • Do cited papers support the claims attached to them?
  • Are primary sources used where possible?
  • Are reviews labeled as reviews?
  • Are preprints labeled as preprints?
  • Are citation metadata and links correct?

Review Process

  1. Read the abstract, introduction, figures, and conclusion for claimed contribution.
  2. Read methods and results for evidence quality.
  3. Check the strongest claims against cited sources.
  4. Score each applicable dimension.
  5. Separate critical blockers from revision suggestions.
  6. End with concrete next edits.

Output Template

# Scholar Evaluation: <Artifact>

## Overall Assessment

- Overall score: <1-5 or N/A>
- Confidence: <high | medium | low>
- Summary: <3-5 sentences>

## Dimension Scores

| Dimension | Score | Evidence | Revision priority |
| --- | ---: | --- | --- |
| Problem and question |  |  |  |
| Literature and context |  |  |  |
| Methodology |  |  |  |
| Data and evidence |  |  |  |
| Analysis |  |  |  |
| Results and interpretation |  |  |  |
| Limitations |  |  |  |
| Writing and structure |  |  |  |
| Citations |  |  |  |

## Critical Issues

## Recommended Revisions

## Evidence Checks Needed

Pitfalls

  • Do not use the score as a substitute for concrete feedback.
  • Do not penalize a paper for omitting a dimension outside its scope.
  • Do not treat citation count, venue, or author reputation as proof of quality.
  • Do not accept unsupported claims just because they appear in the abstract.

Overall Score: 86/100 — Grade A (Excellent)

  • Safety: 95
  • Quality: 85
  • Clarity: 88
  • Completeness: 82
Summary

This skill provides a structured rubric and review process for evaluating academic and scientific work across nine dimensions (problem clarity, literature, methodology, data quality, analysis rigor, results interpretation, limitations, writing, and citations). It guides agents to assess research artifacts systematically, produce dimension-specific scores (1–5), identify critical blockers, and deliver concrete revision feedback using a standardized markdown output template.

Detected Capabilities

  • document analysis and reading
  • structured scoring and rubric application
  • textual evaluation and feedback generation
  • comparative analysis of multiple works
  • markdown output generation

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • evaluate research paper
  • review methodology rigor
  • assess literature review
  • check citation support
  • provide research feedback
  • score scholarly work
  • compare research papers

Use Cases

  • Evaluate empirical research papers for methodology rigor and evidence quality
  • Review research proposals and grant applications for clarity and feasibility
  • Assess literature reviews for synthesis and gap identification
  • Compare multiple papers against the same rubric for ranking or selection
  • Provide structured feedback on thesis chapters and dissertation work
  • Check claims against cited sources for citation accuracy and support
  • Evaluate conference abstracts and short papers before submission

Quality Notes

  • Clear, well-structured rubric with nine distinct dimensions that align with academic standards for research quality
  • Explicit scoring guidance (1–5 scale) with clear definitions for each level, reducing ambiguity in evaluation
  • Review process is methodical and prescriptive: abstract/intro/figures/conclusion first, then methods/results, then evidence checks
  • Comprehensive output template with dimension scores table, critical issues section, and concrete revision recommendations
  • Pitfalls section proactively addresses common evaluation biases (citation count, venue prestige, author reputation)
  • Instructions are read-only — no file writes, shell commands, or external requests, making this a low-risk analysis skill
  • Artifact type taxonomy helps agents categorize the input before applying the rubric
  • Scope selector (comprehensive, targeted, comparative) allows flexible evaluation depth matching user needs
  • Edge-case handling is explicit: N/A scores for inapplicable dimensions prevent forced assessments
Model: claude-haiku-4-5-20251001 · Analyzed: May 15, 2026
