Master any domain with a custom AI coach

Kampé generates a weighted rubric that coaches your agent through structured evaluation. Export it to Claude Code, ChatGPT, or OpenClaw, or train right here.

Try an ignite

Forge your coach here. Deploy it everywhere.

OpenAI
Anthropic
Grok
OpenClaw
JSON

Export as CLAUDE.md, Custom GPT instructions, SKILL.md, or raw JSON — your coach works wherever your agent lives.

Describe

Tell us what your AI needs to be great at. Smart intake asks 2-3 questions to sharpen the rubric.

Coach forged

We generate a domain-specific rubric with weighted criteria and practice scenarios.
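To make "weighted criteria" concrete, here is a minimal sketch of what a generated rubric could look like and how a weighted score might be combined from it. The criterion names, weights, and schema below are illustrative assumptions, not Kampé's actual export format.

```python
# Hypothetical rubric shape: weighted criteria, each scored 0-10,
# combined into a single 0-100 score. Illustrative only.
rubric = {
    "domain": "technical writing",
    "criteria": [
        {"name": "accuracy",  "weight": 0.40, "description": "Claims are correct and sourced"},
        {"name": "clarity",   "weight": 0.35, "description": "Prose is unambiguous"},
        {"name": "structure", "weight": 0.25, "description": "Sections flow logically"},
    ],
}

def weighted_score(rubric, scores):
    """Combine per-criterion scores (0-10) into one weighted 0-100 total."""
    total = sum(c["weight"] * scores[c["name"]] for c in rubric["criteria"])
    return round(total * 10, 1)

print(weighted_score(rubric, {"accuracy": 8, "clarity": 9, "structure": 7}))  # 81.0
```

Because the weights sum to 1.0, the combined score stays on the same 0-100 scale used to report improvement.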

Export anywhere

Download your coach for Claude Code, ChatGPT, OpenClaw, or train right here.

78 → 88.5

In testing, agents coached with Kampé rubrics improved from 78 to 88.5 on domain-specific tasks. Not because they got smarter — because they got coached.

Research confirms this — “Rubric Is All You Need” (ACM 2025) and rubric-based RL rewards (ICLR 2025) show structured evaluation produces measurably better results than freeform feedback.

How is this different from just prompting Claude?

Prompting an LLM cold

  • One-shot response, no evaluation structure
  • No memory of what worked or failed
  • Generic output shaped only by the model's training data
  • No way to measure improvement

Kampé-coached agent

  • Weighted rubric with domain-specific criteria
  • Iterative feedback loop with trajectory tracking
  • Coaching notes that accumulate across attempts
  • Human domain knowledge injected via Power Boost
  • Measurable score improvement over time
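The trajectory-tracking and measurable-improvement points above can be sketched in a few lines. This is an illustrative toy, not Kampé's implementation; the scores mirror the 78 → 88.5 figure quoted earlier.

```python
# Illustrative sketch of trajectory tracking: keep every attempt's
# score and coaching note so improvement is measurable over time.
class Trajectory:
    def __init__(self):
        self.attempts = []  # (score, coaching_note) per attempt, in order

    def record(self, score, note):
        self.attempts.append((score, note))

    def improvement(self):
        """Score delta from first to latest attempt (0.0 if fewer than 2)."""
        if len(self.attempts) < 2:
            return 0.0
        return self.attempts[-1][0] - self.attempts[0][0]

t = Trajectory()
t.record(78.0, "Weak on edge cases")
t.record(84.0, "Better coverage; tighten the prose")
t.record(88.5, "Meets the rubric on all criteria")
print(t.improvement())  # 10.5
```

Accumulating the coaching notes alongside the scores is what lets feedback from one attempt carry into the next, instead of starting cold each time.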