Documentation Index

Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt

Use this file to discover all available pages before exploring further.
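To sketch how an agent might use that index, here is a minimal Python example. It assumes the llms.txt file lists pages as standard markdown-style `[title](url)` links, which is an assumption about the file's format; the sample content below is hypothetical and shown only to demonstrate the parsing step.

```python
import re

# The real index lives at https://docs.dacard.ai/llms.txt; this sample
# content is hypothetical, used only to demonstrate link extraction.
SAMPLE_INDEX = """\
# Dacard Docs
[DAC Copilot](https://docs.dacard.ai/dac-copilot)
[Scoring Frameworks](https://docs.dacard.ai/scoring)
"""

def extract_links(index_text):
    """Return (title, url) pairs for every markdown-style link in the index."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", index_text)

# Each entry pairs a page title with its URL, ready for further fetching.
pages = extract_links(SAMPLE_INDEX)
```

In practice you would fetch the index over HTTP first, then feed the response body to the same extraction step.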

DAC Copilot

DAC (Dacard Agentic Coach) is your always-on AI product operations copilot. DAC lives in a resizable panel on the right side of every page and has deep knowledge of modern product practices, your scoring data, and actionable recommendations.

What DAC knows

DAC is not a generic chatbot. It has specialized knowledge across three domains:

Scoring frameworks

All three Dacard frameworks (Maturity, Operations, Lifecycle) with dimension-level detail, including what each maturity stage means and how to progress between them.

Your data

When viewing a scored product, DAC has access to the full result: dimension scores, cluster analysis, stage classification, and prescriptive “Do This Next” actions.

Thought leadership

Practices and insights from 13 leading product thinkers across discovery, PLG, DevOps, pricing, and operations.

Knowledge domains

DAC draws on established practices and frameworks across:
  • Continuous discovery practices, opportunity framing, assumption testing, empowered team models, and outcome-driven development
  • Team health signals, making work visible, operational excellence patterns, and high-leverage prioritization frameworks
  • Product-led growth strategy, self-serve funnels, usage-based pricing, growth loops, and PLG-to-sales handoff patterns
  • DevOps metrics (deployment frequency, lead time, change failure rate), engineering velocity, and AI infrastructure economics

How to use DAC

Context-aware conversations

DAC adjusts its responses based on where you are in the app:
  • Score results: Full scoring data, dimension analysis, improvement recommendations
  • Dashboard: Cross-product patterns, portfolio trends
  • Operations report: Team workflow maturity, function-level coaching
  • Lifecycle report: Build process assessment, stage progression
  • Sources: Integration health, signal coverage
  • Any other page: General product ops guidance, framework knowledge

Conversation starters

  • “What should I improve first?”
  • “Explain my biggest gap”
  • “How do I reach the next stage?”
  • “Build me a 90-day roadmap”
  • “How do I improve my Data Strategy score from 2 to 3?”

Resizable panel

DAC’s panel can be resized by dragging the left edge. Drag left to expand (up to 600px) for longer conversations, or right to collapse (down to 280px) to focus on the main content. Close DAC with the X button and reopen it with the floating Ask ✦ DAC button.
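The resize behavior above amounts to clamping the panel width between the documented 280px minimum and 600px maximum. A minimal sketch of that constraint, with function and constant names that are ours rather than Dacard's:

```python
# Documented bounds for the DAC panel width (from the description above).
PANEL_MIN_PX = 280
PANEL_MAX_PX = 600

def clamp_panel_width(requested_px):
    """Keep a drag-resized panel width within the documented bounds."""
    return max(PANEL_MIN_PX, min(PANEL_MAX_PX, requested_px))
```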

Financial attribution

Every coaching recommendation includes a dollar estimate of the potential impact. When DAC suggests improving a dimension, it shows the estimated annual capacity recovery based on your team size and the dimension’s recovery factor. For example: “Improving Process Iteration from 2 to 3 could recover ~$180K/year in developer capacity (eliminated process bottlenecks).” These estimates use a heuristic model based on team size, average fully-loaded cost, and industry benchmarks for how each dimension improvement translates to recovered developer-weeks.
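The exact model is Dacard-internal, but the inputs named above (team size, fully-loaded cost, a per-dimension recovery factor) suggest a simple multiplicative shape. The sketch below is a hypothetical illustration under that assumption; the 9% recovery factor and $200K fully-loaded cost are example numbers, not published values.

```python
# Hypothetical sketch of the capacity-recovery heuristic described above.
# The real model, factor values, and benchmarks are Dacard-internal;
# every number here is an illustrative assumption.
def estimated_annual_recovery(team_size, fully_loaded_cost, recovery_factor):
    """Dollar estimate of developer capacity recovered per year."""
    return team_size * fully_loaded_cost * recovery_factor

# e.g. a 10-person team at $200K fully loaded, with an assumed 9%
# recovery factor for moving Process Iteration from 2 to 3:
estimate = estimated_annual_recovery(10, 200_000, 0.09)  # ~$180K/year
```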

Evidence citations

Coaching observations are backed by traceable evidence. When your account has connected integrations, observations include citations linking directly to the source artifacts:
  • GitHub PR URLs for delivery velocity observations
  • Linear issue links for process and cycle time observations
  • Deployment URLs for shipping frequency signals
Each citation includes the source, a browsable link, and a timestamp. This means product leaders can trace any recommendation back to the specific commits, issues, or deployments that generated it.
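The citation shape described above (a source, a browsable link, and a timestamp) can be sketched as a small record type. Field names here are illustrative, not Dacard's actual schema, and the PR URL is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EvidenceCitation:
    source: str      # e.g. "github", "linear", "deployment"
    url: str         # browsable link to the source artifact
    timestamp: str   # when the signal was observed (ISO 8601)

# Hypothetical citation for a delivery-velocity observation:
citation = EvidenceCitation(
    source="github",
    url="https://github.com/example/repo/pull/42",
    timestamp="2025-01-15T09:30:00Z",
)
```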

Providing feedback

DAC supports structured feedback beyond simple thumbs up/down. When you react to a recommendation, you can specify why:
  • Helpful: The recommendation was accurate and actionable
  • Already doing this: Correct diagnosis, but the team has already addressed it
  • Wrong diagnosis: The underlying problem identified is incorrect
  • Wrong priority: Correct diagnosis, but wrong urgency level
  • Not actionable: Too vague or generic to act on
  • Implemented differently: Took a different approach to the same problem
Your structured feedback improves DAC’s recommendations over time by teaching it which types of suggestions work for your team.
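The six structured reactions above can be sketched as an enum, which is how a client might represent them when submitting feedback. Names and values here are ours, mapped from the descriptions above, not an official Dacard API.

```python
from enum import Enum

# Illustrative mapping of the structured feedback reactions to their
# meanings; not Dacard's actual identifiers.
class FeedbackReaction(Enum):
    HELPFUL = "The recommendation was accurate and actionable"
    ALREADY_DOING_THIS = "Correct diagnosis, but the team has already addressed it"
    WRONG_DIAGNOSIS = "The underlying problem identified is incorrect"
    WRONG_PRIORITY = "Correct diagnosis, but wrong urgency level"
    NOT_ACTIONABLE = "Too vague or generic to act on"
    IMPLEMENTED_DIFFERENTLY = "Took a different approach to the same problem"
```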

Tips for better responses

  • Instead of “How do I improve?”, try “How do I improve my Data Strategy score from 2 to 3?” DAC gives sharper answers with specific context.
  • DAC can apply specific frameworks: “What does good continuous discovery look like for our stage?” or “How does our score map to the Product Operating Model?”
  • DAC can build structured plans: “Give me a 90-day roadmap to move from Building to Scaling” or “What are the top 3 things my team should do this sprint?”
  • Ask DAC to contextualize your scores: “Is a score of 27 good for a Series B company?” or “How does my Operations maturity compare to my Product maturity?”
DAC is AI-generated. Always verify recommendations before making critical decisions.