affaan-m/dashboard-builder


Build monitoring dashboards on Grafana, SigNoz, and similar platforms that answer real operator questions. Use when turning a metrics list into a working dashboard instead of a vanity board.

v1.1 · Saved Apr 20, 2026

Dashboard Builder

Use this when the task is to build a dashboard people can operate from.

The goal is not "show every metric." The goal is to answer:

  • is it healthy?
  • where is the bottleneck?
  • what changed?
  • what action should someone take?

When to Use

  • "Build a Kafka monitoring dashboard"
  • "Create a Grafana dashboard for Elasticsearch"
  • "Make a SigNoz dashboard for this service"
  • "Turn this metrics list into a real operational dashboard"

Guardrails

  • do not start from visual layout; start from operator questions
  • do not include every available metric just because it exists
  • do not mix health, throughput, and resource panels without structure
  • do not ship panels without titles, units, and sane thresholds

Workflow

1. Define the operating questions

Organize around:

  • health / availability
  • latency / performance
  • throughput / volume
  • saturation / resources
  • service-specific risk
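As a sketch of this step in code: bucket a raw metric list into the five question categories before doing any layout work. The metric names here are hypothetical examples, not a real exporter's output:

```python
# Bucket raw metrics into the five operator-question categories.
# Metric names below are hypothetical examples, not a real exporter's output.
CATEGORIES = {
    "health": ["up", "cluster_status"],
    "latency": ["request_duration_seconds"],
    "throughput": ["requests_total", "bytes_in_total"],
    "saturation": ["cpu_usage_ratio", "disk_used_ratio"],
    "service_risk": ["under_replicated_partitions"],
}

def categorize(metrics):
    """Return ({category: [metric, ...]}, [metrics that answer no question])."""
    buckets = {cat: [] for cat in CATEGORIES}
    leftovers = []
    for metric in metrics:
        for cat, names in CATEGORIES.items():
            if metric in names:
                buckets[cat].append(metric)
                break
        else:
            leftovers.append(metric)  # candidates to drop, not to force onto the board
    return buckets, leftovers

buckets, leftovers = categorize(["up", "requests_total", "build_info"])
```

Anything left over after bucketing is a candidate to cut, not a reason to add a panel.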

2. Study the target platform schema

Inspect existing dashboards first:

  • JSON structure
  • query language
  • variables
  • threshold styling
  • section layout
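For Grafana, inspecting an existing dashboard usually means reading its exported JSON. A minimal inspection sketch, assuming Grafana's standard export fields (`title`, `panels`, `templating.list`, `time`, `refresh`); verify against the target version:

```python
import json

def summarize_dashboard(path):
    """Collect the structural facts worth copying from an existing board."""
    with open(path) as f:
        dash = json.load(f)
    return {
        "title": dash.get("title"),
        "panels": [(p.get("type"), p.get("title")) for p in dash.get("panels", [])],
        "variables": [v.get("name") for v in dash.get("templating", {}).get("list", [])],
        "time": dash.get("time"),
        "refresh": dash.get("refresh"),
    }
```

Running this over two or three established boards surfaces the house conventions (variable names, panel types, refresh defaults) before you write a single panel.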

3. Build the minimum useful board

Recommended structure:

  1. overview
  2. performance
  3. resources
  4. service-specific section
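That structure can be sketched directly in Grafana's JSON model, using `row` panels as section dividers. Field names follow Grafana's export format but should be checked against the target version; the query string is a placeholder:

```python
def section(title, y):
    """A Grafana 'row' panel used as a section divider."""
    return {"type": "row", "title": title, "gridPos": {"h": 1, "w": 24, "x": 0, "y": y}}

def stat_panel(title, expr, unit, y):
    """A single stat panel; 'expr' is a placeholder query."""
    return {
        "type": "stat",
        "title": title,
        "gridPos": {"h": 4, "w": 6, "x": 0, "y": y},
        "fieldConfig": {"defaults": {"unit": unit}},
        "targets": [{"expr": expr}],
    }

dashboard = {
    "title": "Service Overview",
    "time": {"from": "now-6h", "to": "now"},  # sensible default range
    "refresh": "30s",
    "panels": [
        section("Overview", 0),
        stat_panel("Error rate", "sum(rate(errors_total[5m]))", "percentunit", 1),
        section("Performance", 5),
        section("Resources", 10),
        section("Kafka", 15),  # service-specific section
    ],
}
```

Four sections and a handful of panels is a complete first version; everything else earns its place later.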

4. Cut vanity panels

Every panel should answer a real question. If it does not, remove it.
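One way to make the cut mechanical: require each panel definition to declare which operator question it answers, and drop any that cannot. The `answers` field is a convention invented for this sketch, not a Grafana property:

```python
def cut_vanity(panels):
    """Keep panels annotated with the operator question they answer; name the rest."""
    kept = [p for p in panels if p.get("answers")]
    dropped = [p["title"] for p in panels if not p.get("answers")]
    return kept, dropped

panels = [
    {"title": "Consumer lag", "answers": "where is the bottleneck?"},
    {"title": "JVM version"},  # answers no operator question: vanity
]
kept, dropped = cut_vanity(panels)
```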

Example Panel Sets

Elasticsearch

  • cluster health
  • shard allocation
  • search latency
  • indexing rate
  • JVM heap / GC

Kafka

  • broker count
  • under-replicated partitions
  • messages in / out
  • consumer lag
  • disk and network pressure

API gateway / ingress

  • request rate
  • p50 / p95 / p99 latency
  • error rate
  • upstream health
  • active connections

Quality Checklist

  • valid dashboard JSON
  • clear section grouping
  • titles and units are present
  • thresholds/status colors are meaningful
  • variables exist for common filters
  • default time range and refresh are sensible
  • no vanity panels with no operator value

Related Skills

  • research-ops
  • backend-patterns
  • terminal-ops
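The machine-checkable parts of the quality checklist can be linted against the exported dashboard JSON. A sketch assuming Grafana's field names; whether thresholds and status colors are actually meaningful still needs human review:

```python
def lint_dashboard(dash):
    """Return checklist violations that can be detected from the JSON alone."""
    problems = []
    for p in dash.get("panels", []):
        if p.get("type") == "row":
            continue  # section dividers need no units
        if not p.get("title"):
            problems.append("panel missing title")
        if not p.get("fieldConfig", {}).get("defaults", {}).get("unit"):
            problems.append(f"panel '{p.get('title')}' missing unit")
    if not dash.get("templating", {}).get("list"):
        problems.append("no variables for common filters")
    if not dash.get("refresh"):
        problems.append("no default refresh interval")
    return problems

problems = lint_dashboard({"panels": [{"type": "stat", "title": "Errors"}]})
```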
Files: 1 file · 1.0 KB


Overall Score: 82/100

Grade: B (Good)

Safety: 95 · Quality: 78 · Clarity: 85 · Completeness: 72

Summary

A skill that guides AI agents to build operational monitoring dashboards for platforms like Grafana and SigNoz. Rather than creating visually comprehensive but operationally useless dashboards, it teaches agents to structure dashboards around real operator questions (health, performance, throughput, resources) and to include only panels that answer those questions.

Detected Capabilities

  • Dashboard design methodology (organizing by operational questions rather than available metrics)
  • Platform-agnostic schema analysis (Grafana JSON, SigNoz UI, query language patterns)
  • Panel design with thresholds, units, and meaningful status indicators
  • Vanity panel elimination (scope reduction based on operator value)
  • Example panel patterns for common infrastructure services (Elasticsearch, Kafka, ingress controllers)

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • build monitoring dashboard
  • grafana dashboard design
  • operational metrics visualization
  • kafka dashboard
  • elasticsearch monitoring
  • signoz setup
  • turn metrics into dashboard

Use Cases

  • Build a Kafka cluster monitoring dashboard with broker health, replication status, and consumer lag panels
  • Create a Grafana dashboard for Elasticsearch cluster operations including shard allocation and JVM metrics
  • Design a SigNoz dashboard for API gateway monitoring with latency percentiles, error rates, and upstream health
  • Convert a raw metrics list into a structured, operator-focused dashboard with clear sections and actionable thresholds

Quality Notes

  • Positive: Clear philosophical guardrails that prioritize operator intent over metric coverage — establishes a principled boundary against scope creep.
  • Positive: Well-structured workflow with four concrete steps (define questions → study schema → build minimum board → cut vanity).
  • Positive: Domain-specific examples (Elasticsearch, Kafka, API gateway) with realistic panel sets, helping agents understand what 'operational value' looks like.
  • Positive: Quality checklist is actionable and specific (JSON validity, sections, units, thresholds, variables, time range, no vanity).
  • Positive: Scope is clearly bounded to dashboard design; does not claim to teach metrics collection, alerting rules, or platform administration.
  • Limitation: No guidance on querying — skill assumes the agent or user will know how to write queries for the target platform (PromQL, ES query DSL, etc.). Could benefit from a note like 'consult platform-specific query documentation' or examples of simple queries.
  • Limitation: Does not address multi-datasource dashboards or cross-platform correlation — assumes single-service or single-platform focus.
  • Limitation: No error handling guidance — what if the target system lacks certain metrics, or if thresholds cannot be determined from data? Skill could acknowledge these edge cases.
  • Limitation: Related skills are listed but not explained — agent does not know when to defer to `research-ops` or `backend-patterns`.
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews


No reviews yet


Version History

v1.1 · 2026-04-20 (Latest)

Content updated

v1.0 · 2026-04-12

No changelog
