
Three Frameworks Explained

Dacard.ai uses three complementary frameworks to give you a complete picture of product maturity. Each framework answers a different question, and together they form the compound readiness model.
[Diagram: the three frameworks (F1 ProdOps Intelligence, F2 Dev Lifecycle, F3 AI Product Assessment) forming compound readiness]

F1: Product Operations Maturity

Question: How capable is your team at building and operating products? F1 is the primary scoring framework. It evaluates 27 dimensions organized into 6 functions, measuring your team’s operational maturity across the full product lifecycle.

The 6 Functions

Each function groups the dimensions it measures:
  • Strategy (Market Intelligence, Decision Quality, Roadmap Discipline, Competitive Positioning): How well you understand your market and make strategic decisions
  • Design (Research & Discovery, Prototyping Speed, Experience Design, Design-Dev Handoff): How effectively you design and validate product experiences
  • Development (Architecture & Systems, Spec & Context Quality, Build vs Buy, Delivery Velocity): How efficiently you build and ship software
  • Operations (Customer Signal Synthesis, Product Analytics, Data Strategy, Feedback Loop Quality): How well you listen to customers and use data
  • GTM (Positioning & Messaging, Launch Execution, Adoption & Expansion, Pricing & Packaging): How effectively you bring products to market
  • Intelligence (Quality & Experimentation, Team Orchestration, Process Iteration, Cost & Token Economics): How well you learn, adapt, and optimize

Scoring

Each dimension is scored 1-5:
  • 1 (Foundation): Basic or absent capability
  • 2 (Building): Emerging practices, inconsistently applied
  • 3 (Scaling): Systematic processes, measurable outcomes
  • 4 (Leading): Industry-leading practices, deeply integrated
  • 5 (Compounding): Self-improving systems that compound over time
Total scores range from 27-135, mapped to five maturity stages: Foundation, Building, Scaling, Leading, and Compounding.
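The arithmetic above can be sketched in a few lines. The 1-5 dimension scores and the 27-135 total range come straight from the framework; the stage boundaries in the sketch are hypothetical even splits, since the actual thresholds aren't documented here.

```python
# Sketch of F1 scoring: 27 dimensions, each scored 1-5, summed to 27-135.
# The stage bands below are illustrative equal-width splits; Dacard.ai's
# actual boundaries are not stated in this doc.
STAGES = ["Foundation", "Building", "Scaling", "Leading", "Compounding"]

def f1_total(dimension_scores):
    """Sum the 27 dimension scores into a 27-135 total."""
    assert len(dimension_scores) == 27
    assert all(1 <= s <= 5 for s in dimension_scores)
    return sum(dimension_scores)

def maturity_stage(total):
    """Map a 27-135 total onto one of five (hypothetical) equal bands."""
    fraction = (total - 27) / (135 - 27)
    return STAGES[min(4, int(fraction * 5))]
```

For example, a team scoring 3 ("Scaling") on every dimension totals 81, which lands in the middle band.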

F2: AI-Native Lifecycle

Question: Where are you in the AI-native product build process? F2 tracks your product through 6 sequential stages of the AI-native build lifecycle. Unlike F1 (which measures capability), F2 measures process maturity and progress.

The 6 Stages

1. Discover: Problem definition, user research, competitive analysis, and opportunity sizing. The foundation of product-market understanding.
2. Define: Solution architecture, technical feasibility, AI model selection, and success metrics. Translating insights into a buildable plan.
3. Design: UX design, prompt engineering, interaction patterns, and prototype validation. Making the AI experience tangible and testable.
4. Develop: Implementation, integration, testing, and infrastructure. Building the product with AI-native architecture patterns.
5. Deploy: Launch execution, monitoring, rollout strategy, and incident readiness. Getting your AI product safely into production.
6. Optimize: Performance tuning, cost optimization, model improvement, and feedback loops. Making your AI product better over time.

How it works

F2 uses a self-assessment model. For each stage, you report the status of specific tasks (not started, in progress, completed). The system computes a completion percentage per stage and identifies your current lifecycle position.
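A rough sketch of this roll-up is below. The three task statuses match the ones listed above; treating the first stage that isn't fully complete as your current lifecycle position is an assumption about how the system identifies it.

```python
# Sketch of the F2 self-assessment roll-up. Task statuses come from the
# doc; the "current position" rule (first incomplete stage) is assumed.
F2_STAGES = ["Discover", "Define", "Design", "Develop", "Deploy", "Optimize"]

def completion_pct(task_statuses):
    """Percent of tasks marked completed within one stage."""
    if not task_statuses:
        return 0.0
    done = sum(1 for s in task_statuses if s == "completed")
    return 100.0 * done / len(task_statuses)

def current_stage(stage_tasks):
    """Return the first stage that is not 100% complete.

    stage_tasks maps stage name -> list of
    "not_started" / "in_progress" / "completed" statuses.
    """
    for stage in F2_STAGES:
        if completion_pct(stage_tasks.get(stage, [])) < 100.0:
            return stage
    return F2_STAGES[-1]
```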
F2 is available to all users. Navigate to any product and select the Lifecycle tab to start your assessment.

F3: AI Product Assessment

Question: How AI-native is your product itself? F3 evaluates the product (not the team) across 27 dimensions that measure how deeply AI is integrated into the user experience, architecture, and business model.

Key areas

  • AI Integration Depth: How central AI is to core product functionality
  • Personalization: Adaptive experiences that learn from user behavior
  • Automation: Intelligent workflows that reduce manual effort
  • Data Flywheel: Whether usage data improves the product over time
  • AI-Native UX: Interface patterns designed for AI interactions (prompts, suggestions, explanations)
  • Cost Architecture: Token economics and inference cost management

Scoring

F3 uses the same 1-5 scoring scale as F1. It runs as a separate assessment via the /api/score/product endpoint or through the platform UI.
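A minimal sketch of calling that endpoint, assuming a JSON POST; only the /api/score/product path is documented here, so the base URL, payload shape, and any auth headers are assumptions.

```python
import json
import urllib.request

def build_score_request(product_description, base_url="https://dacard.ai"):
    """Build a POST request to the F3 endpoint.

    Only the /api/score/product path is documented; the base URL and
    the {"product": ...} payload shape are hypothetical.
    """
    payload = json.dumps({"product": product_description}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/score/product",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In practice you would send the request with `urllib.request.urlopen` (or any HTTP client) and include whatever authentication the platform requires.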

How They Relate

The three frameworks form a tension map. High capability (F1) without process maturity (F2) means your team can build well but lacks discipline. Strong AI integration (F3) without operational maturity (F1) creates fragile products.
  • Strong team, weak process (F1 high, F2 low, F3 varies): Talented team shipping inconsistently
  • Process-driven, low capability (F1 low, F2 high, F3 low): Following a playbook without the skills to execute
  • AI-native product, weak ops (F1 low, F2 varies, F3 high): Impressive demo, unsustainable in production
  • Compound readiness (F1 high, F2 high, F3 high): Team, process, and product all aligned
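One way to read the tension map as code; this simply encodes the four scenarios above, and where a framework is listed as "varies" it is ignored. The cutoff for calling a score "high" is left to the reader.

```python
# Encoding of the tension-map scenarios. Each argument is True when that
# framework scores "high" (the threshold itself is not defined here).
def interpret(f1_high, f2_high, f3_high):
    if f1_high and f2_high and f3_high:
        return "Compound readiness"
    if f1_high and not f2_high:          # F3 varies in this scenario
        return "Strong team, weak process"
    if not f1_high and f2_high and not f3_high:
        return "Process-driven, low capability"
    if not f1_high and f3_high:          # F2 varies in this scenario
        return "AI-native product, weak ops"
    return "Mixed profile"
```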

Compound Readiness

When all three frameworks score well, you achieve compound readiness: the state where team capability, process maturity, and product AI-nativeness reinforce each other. This is the target state for AI-native product teams.

Score your product

Run an F1 assessment to see your 27-dimension breakdown.

Start lifecycle assessment

Complete your F2 self-assessment to track build progress.

API Reference

Score programmatically via the API.