
Documentation Index

Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt

Use this file to discover all available pages before exploring further.

How Scoring Works

Dacard.ai evaluates products using AI to assess observable signals across 27 dimensions organized into 6 functions. Each dimension is scored 1-5, producing a composite score of 27-135 that maps to one of five maturity stages. The five-step scoring process: Crawl, Analyze, Score, Classify, Recommend.

The scoring process

1. Crawl: The platform visits the URL and crawls up to 6 pages, extracting signals from the product’s public-facing presence, documentation, pricing, and UX patterns.
2. Analyze: Each of the 27 dimensions is evaluated against clear criteria for each maturity level. Signals include technical architecture, business model, UX patterns, and team operations indicators.
3. Score: Every dimension receives a score from 1 (Foundation) to 5 (Compounding). The 27 scores sum to a composite total of 27-135.
4. Classify: The composite score determines the product’s maturity stage (Foundation through Compounding).
5. Recommend: Dimension-level insights, strengths, gaps, and “Do This Next” improvement actions are generated and stored at /r/{id}.

Five maturity stages

Maturity spectrum from Foundation (27-48) through Compounding (114-135)
| Stage | Score | What it means |
| --- | --- | --- |
| Foundation | 27-48 | Basic or absent capabilities. AI is not part of the product’s core value, architecture, or strategy. |
| Building | 49-70 | Emerging practices, inconsistently applied. Experimenting with AI features but no proprietary advantage yet. |
| Scaling | 71-91 | Systematic processes with measurable outcomes. AI is a real differentiator. |
| Leading | 92-113 | Deeply integrated AI across the product and team. Remove it and nothing works. |
| Compounding | 114-135 | AI compounds across every layer: product, data, operations, and business model. Self-improving flywheel. |
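The composite-to-stage mapping can be sketched in a few lines of Python. This is an illustrative helper, not Dacard.ai’s actual API; `classify` and `STAGES` are assumed names, with thresholds taken from the stage table above:

```python
# Maturity stages and their composite-score ranges (inclusive), per the table.
STAGES = [
    ("Foundation", 27, 48),
    ("Building", 49, 70),
    ("Scaling", 71, 91),
    ("Leading", 92, 113),
    ("Compounding", 114, 135),
]

def classify(dimension_scores: list[int]) -> tuple[int, str]:
    """Sum 27 dimension scores (each 1-5) and map the composite to a stage."""
    assert len(dimension_scores) == 27, "the framework defines 27 dimensions"
    assert all(1 <= s <= 5 for s in dimension_scores)
    composite = sum(dimension_scores)
    for name, low, high in STAGES:
        if low <= composite <= high:
            return composite, name
    raise ValueError(f"composite {composite} outside 27-135")

# A product scoring 3 on every dimension lands mid-spectrum:
# classify([3] * 27) -> (81, "Scaling")
```

Because each dimension contributes 1-5, the 27-135 range is covered exactly by the five stage bands with no gaps or overlaps.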

27 dimensions across 6 functions

Every score evaluates your product team across 6 functions, each containing a set of dimensions:

Market Intelligence - How well the team collects and synthesizes market signals
Decision Quality - Evidence quality behind product and strategic decisions
Roadmap Discipline - How well the roadmap reflects strategic priorities with outcome measurement
Competitive Positioning - Clarity and defensibility of market positioning

Research & Discovery - Depth and consistency of user research practices
Prototyping Speed - How fast the team goes from idea to testable artifact
Experience Design - Quality of AI interaction patterns and UX craft
Design-Dev Handoff - Efficiency and fidelity of design-to-development translation

Architecture & Systems - Depth of AI integration in the technical architecture
Spec & Context Quality - Quality of PRDs, tickets, and context provided to builders
Build vs Buy - Strategic decision-making on model and infrastructure choices
Delivery Velocity - Speed and consistency of shipping AI improvements

Customer Signal Synthesis - Quality of customer feedback collection and synthesis
Product Analytics - Depth and use of product usage data
Data Strategy & Flywheel - Whether data creates a defensible compounding advantage
Feedback Loop Quality - Whether usage data flows back to improve the product
Knowledge Management - How well institutional knowledge is captured, organized, and surfaced

Positioning & Messaging - Clarity and resonance of market messaging
Launch Execution - Consistency and quality of product launch processes
Adoption & Expansion - How effectively the product drives usage growth and expansion
Pricing & Packaging - Whether pricing reflects AI value and supports growth

Quality & Experimentation - Rigor of testing, evaluation, and quality processes
Team Orchestration - How well the team coordinates work across people and systems
Process Iteration - How fast the team improves its own operating model
Cost & Token Economics - How well the team manages AI inference and infrastructure costs
Security & Compliance - How well the team manages AI-specific security and compliance risks
Reliability & Resilience - How well AI features handle failures and maintain availability

Signal bars

Scores are visualized as signal bars using a traffic-light color system. Signal bars appear throughout the dashboard, reports, and portfolio views for quick visual scanning.
| Dimension score | Color | Meaning |
| --- | --- | --- |
| 1 (Foundation) | Red | Gap. This dimension is limiting your overall maturity. |
| 2 (Building) | Amber | Developing. Targeted improvement will pay off quickly. |
| 3 (Scaling) | Green | Strong. Maintain and extend. |
| 4-5 (Leading, Compounding) | Green | Exceptional. Industry-leading capability in this dimension. |
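The traffic-light mapping can be expressed as a tiny helper. This is illustrative only; `signal_color` is a hypothetical name, and scores of 3 and above are assumed to render green per the table:

```python
def signal_color(score: int) -> str:
    """Map a 1-5 dimension score to its signal-bar color (traffic-light system)."""
    if score <= 1:
        return "red"    # gap: limiting overall maturity
    if score == 2:
        return "amber"  # developing: targeted improvement pays off
    return "green"      # strong to exceptional
```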

Signal-score blending

AI assessment and integration signals blend into composite dimension scores.
When you connect integrations (GitHub, Linear, PostHog, etc.), your scores become signal-enhanced. The platform blends two scoring sources:
  1. AI assessment — the LLM’s evaluation of your product’s public-facing signals
  2. Integration signals — ground-truth operational data from connected tools (deploy frequency, PR cycle time, sprint velocity, etc.)

How blending works

Each connected integration produces metrics mapped to specific dimensions. These metrics are converted to 1-5 signal scores using calibrated thresholds, then blended with the AI assessment score:
| Factor | Effect |
| --- | --- |
| Signal count | More signals mean a higher blending weight (5 signals minimum to activate, up to 0.4 weight at 20+). |
| Confidence level | High-confidence signals (from well-established integrations) carry more weight. |
| Weight cap | Signal scores never exceed 60% weight, preserving the AI assessment as the primary input. |
The result: connected teams get measurably more accurate scores because dimension scores reflect both observable product quality and actual operational data.
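A minimal sketch of how these factors might combine, under stated assumptions: the weight is taken to ramp linearly from 5 signals up to 0.4 at 20+, and confidence is modeled as a multiplier. Only the 5-signal activation minimum, the 0.4 ramp ceiling, and the 60% cap come from the table; the exact curve Dacard.ai uses is not published:

```python
def blend(ai_score: float, signal_score: float,
          signal_count: int, confidence: float = 1.0) -> float:
    """Blend an AI assessment score with an integration signal score (both 1-5)."""
    if signal_count < 5:               # below activation threshold: AI-only
        return ai_score
    # Assumed shape: linear ramp from 5 signals to 0.4 weight at 20+ signals.
    base_weight = 0.4 * min(signal_count - 5, 15) / 15
    # Confidence scales the weight, but the hard 60% cap always applies.
    weight = min(base_weight * confidence, 0.6)
    return (1 - weight) * ai_score + weight * signal_score
```

For example, with an AI assessment of 3.0 and a signal score of 5.0 from 20+ signals, the blended score rises to 3.8 at default confidence; the cap ensures the AI assessment always contributes at least 40%.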
Connect GitHub and Linear first. These two integrations cover the most dimensions and produce the highest-confidence signal scores.

Scoring best practices

- Start with your own product URL to calibrate, then score 2-3 competitors to see how you compare. Each URL score costs 10 credits.
- Click Add context before scoring to include a brief description of what your product does and who it serves. This helps the engine understand products with minimal public content.
- Scores are point-in-time snapshots. Re-score after major releases or quarterly to track your trajectory. Score history is preserved automatically.
- URL-only scoring relies on public signals. Connecting GitHub, Linear, and PostHog adds ground-truth operational data that significantly improves accuracy and confidence levels.
- Every score has a shareable URL at /r/{id}. Send it to leadership, investors, or team members without requiring them to sign in.

Anonymous scoring

Anyone can try scoring at app.dacard.ai/try without creating an account. Anonymous scores are rate-limited (1 per hour per IP) and do not include full report access.
If you sign up after scoring anonymously, you can link the anonymous score to your new account, so your scoring history is preserved.

Understand your report

Guide to reading your maturity report section by section.

F1 Framework deep dive

Full 27-dimension definitions and scoring criteria.

Connect integrations

Pull real operational signals into scoring.

Score your first product

Step-by-step walkthrough for first-time users.