Dacard.ai goes beyond measuring activity. It measures whether your product team is getting smarter over time by tracking decision quality, recommendation accuracy, and the compound intelligence flywheel. Navigate to Intelligence > ROI to access this view.
The decision intelligence score (0-100) is a longitudinal metric that correlates your team’s launch decisions with actual outcomes. Unlike dimension scores that measure current capability, the DI score measures whether acting on recommendations actually improves your product operations.
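To make the idea concrete, here is a minimal sketch of how a 0-100 score correlating decisions with outcomes could be computed. The record type, field names, and scoring rule are illustrative assumptions, not the Dacard.ai implementation.

```python
from dataclasses import dataclass

# Hypothetical record type: one launch decision paired with its
# measured outcome. Not part of the Dacard.ai API.
@dataclass
class DecisionOutcome:
    followed_recommendation: bool  # did the team act on the recommendation?
    score_delta: float             # measured change in the relevant score

def decision_intelligence_score(history: list[DecisionOutcome]) -> float:
    """Toy 0-100 score: share of acted-on recommendations whose
    outcome actually improved."""
    acted = [d for d in history if d.followed_recommendation]
    if not acted:
        return 0.0
    wins = sum(1 for d in acted if d.score_delta > 0)
    return round(100 * wins / len(acted), 1)
```

A real longitudinal metric would weight recency and outcome magnitude, but the core signal is the same: does acting on recommendations pay off?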
Behind the scenes, Dacard.ai builds a decision graph connecting signals, scores, decisions, and outcomes across cycles. This graph powers compound intelligence queries:
- **What caused this score change?** Trace from an outcome back through the recommendation, the agent that produced it, and the signals that informed it.
- **Which recommendations worked?** Follow paths from recommendations through to measured outcomes.
- **How did human decisions affect outcomes?** Every approval, rejection, and feedback reaction is recorded as a node in the graph.
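The queries above are all graph traversals. A minimal sketch, assuming a plain adjacency-list representation with edges pointing from effect back to cause; the node ids and graph shape are illustrative, not the Dacard.ai schema.

```python
# Hypothetical decision graph: each node maps to the nodes that caused it
# (outcome -> decision -> recommendation -> agent / signals).
graph = {
    "outcome:score_drop":       ["decision:approved_launch"],
    "decision:approved_launch": ["rec:ship_now"],
    "rec:ship_now":             ["agent:launch_readiness", "signal:test_coverage"],
    "agent:launch_readiness":   [],
    "signal:test_coverage":     [],
}

def trace_causes(node: str) -> list[str]:
    """Walk from an outcome back to its root signals (iterative DFS)."""
    seen, stack, order = set(), [node], []
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        order.append(current)
        stack.extend(graph.get(current, []))
    return order
```

Calling `trace_causes("outcome:score_drop")` answers the first query: it returns the outcome, the decision, the recommendation, and finally the agent and signals behind it.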
Once enough teams in a segment have completed scoring, Dacard.ai computes anonymized peer benchmarks by segment (company stage, team size, industry). Your coaching observations include benchmark comparisons:
“Your score of 2.1 is below the peer median of 3.2 (25th percentile across 150 teams). Closing this gap could improve your overall maturity.”
Benchmarks are computed nightly from the aggregate scoring database. Your individual scores are never shared or identifiable.
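A minimal sketch of how a benchmark comparison like the one quoted above could be derived from a segment's score bucket. The peer scores and function are illustrative assumptions; the real nightly job and its schema are internal to Dacard.ai.

```python
import statistics

# Hypothetical anonymized peer scores for one segment bucket.
peer_scores = [1.2, 1.8, 2.1, 2.4, 3.0, 3.2, 3.5, 3.9, 4.0]

def benchmark(your_score: float, peers: list[float]) -> dict:
    """Compare one team's score against its segment:
    peer median plus the team's percentile rank."""
    below = sum(1 for s in peers if s < your_score)
    return {
        "peer_median": statistics.median(peers),
        "percentile": round(100 * below / len(peers)),
    }
```

Because only the aggregate bucket feeds the computation, individual teams never appear in anyone else's comparison.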
- **Score velocity chart**: weekly composite score deltas per product. An upward trend means your team is improving faster than the platform is raising the bar.
- **Actions dispatched by type**: breakdown of Linear issues, Slack messages, re-scores, and coaching recommendations. Skew toward re-scores suggests agents are catching regressions; skew toward Linear issues suggests proactive improvement.
- **Dimension heat map**: which of the 27 dimensions are improving most consistently across all products. Strong dimensions (4/4 across multiple products) indicate systemic organizational strength.
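The score velocity chart reduces to a simple week-over-week delta per product. A sketch with made-up data; the product names and score series are hypothetical, not the Dacard.ai feed.

```python
# Toy weekly composite scores per product (illustrative data only).
weekly_scores = {
    "checkout":   [62, 64, 63, 68],
    "onboarding": [50, 55, 57, 60],
}

def score_velocity(series: list[float]) -> list[float]:
    """Week-over-week deltas: positive values mean the score is climbing."""
    return [later - earlier for earlier, later in zip(series, series[1:])]
```

For the toy `checkout` series this yields `[2, -1, 5]`: a regression in week three, recovered in week four.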
- Act — recommendations become actions (Linear tickets, coaching)
- Learn — outcomes are measured and attributed
Teams with 10+ completed cycles see measurably better recommendation accuracy than teams in their first cycle. This is the compound intelligence moat: a competitor can copy the framework, but they cannot copy 12 months of your team’s decision intelligence data.
The ROI view is designed to be screenshot-ready for investor and board reporting. The headline metrics (score delta, loop closure rate, velocity) quantify your team's continuous improvement cadence in a format that goes beyond subjective self-assessment.