Documentation Index
Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt
Use this file to discover all available pages before exploring further.
How Scoring Works
Dacard.ai evaluates products using AI to assess observable signals across 27 dimensions organized into 6 functions. Each dimension is scored 1-5, producing a composite score of 27-135 that maps to one of five maturity stages.
The scoring process
Crawl
The platform visits the URL and crawls up to 6 pages, extracting signals from the product’s public-facing presence, documentation, pricing, and UX patterns.
Analyze
Each of the 27 dimensions is evaluated against clear criteria for each maturity level. Signals include technical architecture, business model, UX patterns, and team operations indicators.
Score
Every dimension receives a score from 1 (Foundation) to 5 (Compounding). The 27 scores sum to a composite total of 27-135.
Classify
The composite score determines the product’s maturity stage (Foundation through Compounding).
Five maturity stages
| Stage | Score | What it means |
|---|---|---|
| Foundation | 27-48 | Basic or absent capabilities. AI is not part of the product’s core value, architecture, or strategy. |
| Building | 49-70 | Emerging practices, inconsistently applied. Experimenting with AI features but no proprietary advantage yet. |
| Scaling | 71-91 | Systematic processes with measurable outcomes. AI is a real differentiator. |
| Leading | 92-113 | Deeply integrated AI across the product and team. Remove it and nothing works. |
| Compounding | 114-135 | AI compounds across every layer: product, data, operations, and business model. Self-improving flywheel. |
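The composite calculation and stage thresholds above can be sketched in a few lines of Python (function names are illustrative, not part of any Dacard.ai API; the boundaries are copied from the table):

```python
# Stage boundaries taken from the maturity table above: each tuple is
# (upper bound of the composite range, stage name).
STAGES = [
    (48, "Foundation"),    # 27-48
    (70, "Building"),      # 49-70
    (91, "Scaling"),       # 71-91
    (113, "Leading"),      # 92-113
    (135, "Compounding"),  # 114-135
]

def composite_score(dimension_scores: list[int]) -> int:
    """Sum the 27 per-dimension scores (each 1-5) into a 27-135 composite."""
    assert len(dimension_scores) == 27, "expected exactly 27 dimension scores"
    assert all(1 <= s <= 5 for s in dimension_scores), "scores must be 1-5"
    return sum(dimension_scores)

def classify(composite: int) -> str:
    """Map a composite score to its maturity stage."""
    for upper_bound, stage in STAGES:
        if composite <= upper_bound:
            return stage
    raise ValueError(f"composite {composite} outside the 27-135 range")
```

For example, a product scoring 3 on every dimension has a composite of 81, which falls in the Scaling band.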
27 dimensions across 6 functions
Every score evaluates your product team across 6 functions, each containing a set of dimensions:
Strategy (4 dimensions)
- Market Intelligence - How well the team collects and synthesizes market signals
- Decision Quality - Evidence quality behind product and strategic decisions
- Roadmap Discipline - How well the roadmap reflects strategic priorities with outcome measurement
- Competitive Positioning - Clarity and defensibility of market positioning
Design (4 dimensions)
- Research & Discovery - Depth and consistency of user research practices
- Prototyping Speed - How fast the team goes from idea to testable artifact
- Experience Design - Quality of AI interaction patterns and UX craft
- Design-Dev Handoff - Efficiency and fidelity of design-to-development translation
Development (4 dimensions)
- Architecture & Systems - Depth of AI integration in the technical architecture
- Spec & Context Quality - Quality of PRDs, tickets, and context provided to builders
- Build vs Buy - Strategic decision-making on model and infrastructure choices
- Delivery Velocity - Speed and consistency of shipping AI improvements
Intelligence (5 dimensions)
- Customer Signal Synthesis - Quality of customer feedback collection and synthesis
- Product Analytics - Depth and use of product usage data
- Data Strategy & Flywheel - Whether data creates a defensible compounding advantage
- Feedback Loop Quality - Whether usage data flows back to improve the product
- Knowledge Management - How well institutional knowledge is captured, organized, and surfaced
GTM (4 dimensions)
- Positioning & Messaging - Clarity and resonance of market messaging
- Launch Execution - Consistency and quality of product launch processes
- Adoption & Expansion - How effectively the product drives usage growth and expansion
- Pricing & Packaging - Whether pricing reflects AI value and supports growth
Operations (6 dimensions)
- Quality & Experimentation - Rigor of testing, evaluation, and quality processes
- Team Orchestration - How well the team coordinates work across people and systems
- Process Iteration - How fast the team improves its own operating model
- Cost & Token Economics - How well the team manages AI inference and infrastructure costs
- Security & Compliance - How well the team manages AI-specific security and compliance risks
- Reliability & Resilience - How well AI features handle failures and maintain availability
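The framework above can be represented as a simple mapping from function to dimensions (an illustrative data structure, not a Dacard.ai API; names are copied from the lists above):

```python
# The 27 dimensions grouped by function, as listed above.
FRAMEWORK = {
    "Strategy": [
        "Market Intelligence", "Decision Quality",
        "Roadmap Discipline", "Competitive Positioning",
    ],
    "Design": [
        "Research & Discovery", "Prototyping Speed",
        "Experience Design", "Design-Dev Handoff",
    ],
    "Development": [
        "Architecture & Systems", "Spec & Context Quality",
        "Build vs Buy", "Delivery Velocity",
    ],
    "Intelligence": [
        "Customer Signal Synthesis", "Product Analytics",
        "Data Strategy & Flywheel", "Feedback Loop Quality",
        "Knowledge Management",
    ],
    "GTM": [
        "Positioning & Messaging", "Launch Execution",
        "Adoption & Expansion", "Pricing & Packaging",
    ],
    "Operations": [
        "Quality & Experimentation", "Team Orchestration",
        "Process Iteration", "Cost & Token Economics",
        "Security & Compliance", "Reliability & Resilience",
    ],
}

# Sanity check: 6 functions, 27 dimensions in total.
assert len(FRAMEWORK) == 6
assert sum(len(dims) for dims in FRAMEWORK.values()) == 27
```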
Signal bars
Scores are visualized as signal bars using a traffic-light color system. Signal bars appear throughout the dashboard, reports, and portfolio views for quick visual scanning.
| Dimension score | Color | Meaning |
|---|---|---|
| 1 (Foundation) | Red | Gap. This dimension is limiting your overall maturity. |
| 2 (Building) | Amber | Developing. Targeted improvement will pay off quickly. |
| 3 (Scaling) | Green | Strong. Maintain and extend. |
| 4-5 (Leading, Compounding) | Green | Exceptional. Industry-leading capability in this dimension. |
Signal-score blending
Each dimension score blends two inputs:
- AI assessment — the LLM’s evaluation of your product’s public-facing signals
- Integration signals — ground-truth operational data from connected tools (deploy frequency, PR cycle time, sprint velocity, etc.)
How blending works
Each connected integration produces metrics mapped to specific dimensions. These metrics are converted to 1-5 signal scores using calibrated thresholds, then blended with the AI assessment score:
| Factor | Effect |
|---|---|
| Signal count | More signals = higher blending weight (5 signals minimum to activate, up to 0.4 weight at 20+) |
| Confidence level | High-confidence signals (from well-established integrations) carry more weight |
| Weight cap | Signal scores never exceed 60% weight, preserving the AI assessment as the primary input |
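The exact blending formula is not published here; a minimal sketch consistent with the table (weight inactive below 5 signals, ramping linearly to 0.4 at 20+ signals, scaled by confidence, and hard-capped at 0.6) might look like this. The linear ramp and function names are assumptions, not the platform's actual implementation:

```python
def signal_weight(signal_count: int, confidence: float = 1.0) -> float:
    """Illustrative blending weight for integration signals.

    Assumptions (hedged, not the real calibration):
    - Below 5 signals, blending is inactive (weight 0).
    - Weight ramps linearly with signal count, reaching 0.4 at 20+ signals.
    - Confidence (0-1) scales the weight down for less-established integrations.
    - A hard cap keeps signal weight at or below 0.6, so the AI assessment
      always remains the primary input.
    """
    if signal_count < 5:
        return 0.0
    ramp = 0.4 * min(signal_count, 20) / 20  # 0.1 at 5 signals, 0.4 at 20+
    return min(ramp * confidence, 0.6)

def blend(ai_score: float, signal_score: float, signal_count: int,
          confidence: float = 1.0) -> float:
    """Weighted average of the AI assessment and the integration signal score."""
    w = signal_weight(signal_count, confidence)
    return (1 - w) * ai_score + w * signal_score
```

For instance, with 20 high-confidence signals, an AI assessment of 4.0 and a signal score of 2.0 would blend to 3.2 under this sketch (0.6 × 4.0 + 0.4 × 2.0).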
Scoring best practices
Score your own product first
Start with your own product URL to calibrate. Then score 2-3 competitors to see how you compare. Each URL score costs 10 credits.
Add context for better results
Click Add context before scoring to include a brief description of what your product does and who it serves. This helps the engine understand products with minimal public content.
Re-score after shipping improvements
Scores are point-in-time snapshots. Re-score after major releases or quarterly to track your trajectory. Score history is preserved automatically.
Connect integrations for higher accuracy
URL-only scoring relies on public signals. Connecting GitHub, Linear, and PostHog adds ground-truth operational data that significantly improves accuracy and confidence levels.
Share results with stakeholders
Anonymous scoring
Anyone can try scoring at app.dacard.ai/try without creating an account. Anonymous scores are rate-limited (1 per hour per IP) and do not include full report access.
When you sign up after an anonymous score, you can link your anonymous score to your new account. Your scoring history is preserved.
Related pages
Understand your report
Guide to reading your maturity report section by section.
F1 Framework deep dive
Full 27-dimension definitions and scoring criteria.
Connect integrations
Pull real operational signals into scoring.
Score your first product
Step-by-step walkthrough for first-time users.