Documentation Index
Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt
Use this file to discover all available pages before exploring further.
DAC Copilot
DAC (Dacard Agentic Coach) is your always-on AI product operations copilot. DAC lives in a resizable panel on the right side of every page and has deep knowledge of modern product practices, your scoring data, and actionable recommendations.
What DAC knows
DAC is not a generic chatbot. It has specialized knowledge across three domains:
Scoring frameworks
All three Dacard frameworks (Maturity, Operations, Lifecycle) with dimension-level detail. What each maturity stage means and how to progress between them.
Your data
When viewing a scored product, DAC has access to the full result: dimension scores, cluster analysis, stage classification, and prescriptive “Do This Next” actions.
Thought leadership
Practices and insights from 13 leading product thinkers across discovery, PLG, DevOps, pricing, and operations.
Knowledge domains
DAC draws on established practices and frameworks across:
Product strategy & discovery
Continuous discovery practices, opportunity framing, assumption testing, empowered team models, and outcome-driven development.
Product operations & craft
Team health signals, making work visible, operational excellence patterns, and high-leverage prioritization frameworks.
Growth & monetization
Product-led growth strategy, self-serve funnels, usage-based pricing, growth loops, and PLG-to-sales handoff patterns.
Engineering & platform
DevOps metrics (deployment frequency, lead time, change failure rate), engineering velocity, and AI infrastructure economics.
How to use DAC
Context-aware conversations
DAC adjusts its responses based on where you are in the app:
| Page | What DAC knows |
|---|---|
| Score results | Full scoring data, dimension analysis, improvement recommendations |
| Dashboard | Cross-product patterns, portfolio trends |
| Operations report | Team workflow maturity, function-level coaching |
| Lifecycle report | Build process assessment, stage progression |
| Sources | Integration health, signal coverage |
| Any other page | General product ops guidance, framework knowledge |
Conversation starters
Whether you're on a scored product or a general page, try starters like:
- “What should I improve first?”
- “Explain my biggest gap”
- “How do I reach the next stage?”
- “Build me a 90-day roadmap”
- “How do I improve my Data Strategy score from 2 to 3?”
Resizable panel
DAC’s panel can be resized by dragging the left edge. Drag left to expand (up to 600px) for longer conversations, or right to collapse (down to 280px) to focus on the main content. Close DAC with the X button and reopen it with the floating Ask ✦ DAC button.
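The 280–600px bounds described above amount to a simple clamp on the drag-requested width. A minimal sketch (the function and constant names are illustrative, not Dacard's implementation):

```python
PANEL_MIN_PX = 280  # collapsed width documented above
PANEL_MAX_PX = 600  # expanded width documented above

def clamp_panel_width(requested_px: int) -> int:
    """Clamp a drag-requested width to DAC's documented panel bounds."""
    return max(PANEL_MIN_PX, min(PANEL_MAX_PX, requested_px))

# Dragging far left or far right never exceeds the bounds:
print(clamp_panel_width(1000))  # 600
print(clamp_panel_width(100))   # 280
print(clamp_panel_width(400))   # 400
```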
Financial attribution
Every coaching recommendation includes a dollar estimate of the potential impact. When DAC suggests improving a dimension, it shows the estimated annual capacity recovery based on your team size and the dimension’s recovery factor. For example: “Improving Process Iteration from 2 to 3 could recover ~$180K/year in developer capacity (eliminated process bottlenecks).” These estimates use a heuristic model based on team size, average fully-loaded cost, and industry benchmarks for how each dimension improvement translates to recovered developer-weeks.
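The heuristic can be sketched as a back-of-the-envelope calculation. This is illustrative only: the function name, team size, cost, and recovery factor below are assumptions, not Dacard's actual model parameters.

```python
# Hypothetical sketch of the capacity-recovery heuristic described above.
# All constants are illustrative, not Dacard's benchmarks.

def estimated_annual_recovery(team_size: int,
                              avg_loaded_cost: float,
                              recovery_factor: float) -> float:
    """Estimate annual capacity recovered by a one-stage dimension improvement.

    recovery_factor: the assumed fraction of each developer's capacity the
    improvement frees up (benchmark-derived in the real model).
    """
    return team_size * avg_loaded_cost * recovery_factor

# Example: 8 engineers at $180K fully loaded, with an assumed 12.5%
# recovery for moving Process Iteration from 2 to 3:
estimate = estimated_annual_recovery(8, 180_000, 0.125)
print(f"~${estimate / 1000:.0f}K/year")  # ~$180K/year
```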
Evidence citations
Coaching observations are backed by traceable evidence. When your account has connected integrations, observations include citations that link directly to the source artifacts:
- GitHub PR URLs for delivery velocity observations
- Linear issue links for process and cycle time observations
- Deployment URLs for shipping frequency signals
Providing feedback
DAC supports structured feedback beyond simple thumbs up/down. When you react to a recommendation, you can specify why:
| Reaction | What it tells DAC |
|---|---|
| Helpful | The recommendation was accurate and actionable |
| Already doing this | Correct diagnosis, but the team has already addressed it |
| Wrong diagnosis | The underlying problem identified is incorrect |
| Wrong priority | Correct diagnosis, but wrong urgency level |
| Not actionable | Too vague or generic to act on |
| Implemented differently | Took a different approach to the same problem |
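If you are wiring these reactions into a client integration, the taxonomy above maps naturally onto an enum. A sketch under stated assumptions: the enum names and the payload shape are hypothetical, not Dacard's API.

```python
from enum import Enum

class Reaction(Enum):
    """Structured feedback reactions, mirroring the table above."""
    HELPFUL = "helpful"                     # accurate and actionable
    ALREADY_DOING = "already_doing_this"    # right diagnosis, already addressed
    WRONG_DIAGNOSIS = "wrong_diagnosis"     # underlying problem is incorrect
    WRONG_PRIORITY = "wrong_priority"       # right diagnosis, wrong urgency
    NOT_ACTIONABLE = "not_actionable"       # too vague or generic to act on
    IMPLEMENTED_DIFFERENTLY = "implemented_differently"  # solved another way

def feedback_payload(recommendation_id: str, reaction: Reaction) -> dict:
    # Payload shape is illustrative only.
    return {"recommendation_id": recommendation_id, "reaction": reaction.value}

print(feedback_payload("rec-123", Reaction.WRONG_PRIORITY))
```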
Tips for better responses
Be specific about what you want to improve
Instead of “How do I improve?”, try “How do I improve my Data Strategy score from 2 to 3?” DAC gives sharper answers when you provide specific context.
Ask for frameworks and models
DAC can apply specific frameworks: “What does good continuous discovery look like for our stage?” or “How does our score map to the Product Operating Model?”
Request action plans
DAC can build structured plans: “Give me a 90-day roadmap to move from Building to Scaling” or “What are the top 3 things my team should do this sprint?”
Compare and benchmark
Ask DAC to contextualize your scores: “Is a score of 27 good for a Series B company?” or “How does my Operations maturity compare to my Product maturity?”
DAC is AI-generated. Always verify recommendations before making critical decisions.