Three Frameworks Explained
Dacard.ai uses three complementary frameworks to give you a complete picture of product maturity. Each framework answers a different question, and together they form the compound readiness model.
F1: Product Operations Maturity
Question: How capable is your team at building and operating products?
F1 is the primary scoring framework. It evaluates 27 dimensions organized into 6 functions, measuring your team’s operational maturity across the full product lifecycle.
The 6 Functions
| Function | Dimensions | What it measures |
|---|---|---|
| Strategy | Market Intelligence, Decision Quality, Roadmap Discipline, Competitive Positioning | How well you understand your market and make strategic decisions |
| Design | Research & Discovery, Prototyping Speed, Experience Design, Design-Dev Handoff | How effectively you design and validate product experiences |
| Development | Architecture & Systems, Spec & Context Quality, Build vs Buy, Delivery Velocity | How efficiently you build and ship software |
| Operations | Customer Signal Synthesis, Product Analytics, Data Strategy, Feedback Loop Quality | How well you listen to customers and use data |
| GTM | Positioning & Messaging, Launch Execution, Adoption & Expansion, Pricing & Packaging | How effectively you bring products to market |
| Intelligence | Quality & Experimentation, Team Orchestration, Process Iteration, Cost & Token Economics | How well you learn, adapt, and optimize |
Scoring
Each dimension is scored 1-5:
- 1 (Foundation): Basic or absent capability
- 2 (Building): Emerging practices, inconsistently applied
- 3 (Scaling): Systematic processes, measurable outcomes
- 4 (Leading): Industry-leading practices, deeply integrated
- 5 (Compounding): Self-improving systems that compound over time
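The docs don't specify how dimension scores roll up into function and overall F1 scores, but a simple average-based sketch (hypothetical dimension scores, assumed unweighted averaging) illustrates the idea:

```python
# Sketch: rolling up 1-5 dimension scores into function and overall F1 scores.
# The actual aggregation used by Dacard.ai is not specified here; a plain
# average per function, then across functions, is assumed for illustration.

from statistics import mean

# Hypothetical scores for two of the six functions.
scores = {
    "Strategy": {
        "Market Intelligence": 3,
        "Decision Quality": 4,
        "Roadmap Discipline": 2,
        "Competitive Positioning": 3,
    },
    "Design": {
        "Research & Discovery": 4,
        "Prototyping Speed": 3,
        "Experience Design": 3,
        "Design-Dev Handoff": 2,
    },
}

def function_score(dimensions: dict[str, int]) -> float:
    """Average the 1-5 dimension scores within one function."""
    return mean(dimensions.values())

def overall_score(all_scores: dict[str, dict[str, int]]) -> float:
    """Average the per-function scores into a single F1 score."""
    return mean(function_score(dims) for dims in all_scores.values())

print(function_score(scores["Strategy"]))  # 3.0
print(overall_score(scores))               # 3.0
```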
F2: AI-Native Lifecycle
Question: Where are you in the AI-native product build process?
F2 tracks your product through 6 sequential stages of the AI-native build lifecycle. Unlike F1 (which measures capability), F2 measures process maturity and progress.
The 6 Stages
Discover
Problem definition, user research, competitive analysis, and opportunity sizing. The foundation of product-market understanding.
Define
Solution architecture, technical feasibility, AI model selection, and success metrics. Translating insights into a buildable plan.
Design
UX design, prompt engineering, interaction patterns, and prototype validation. Making the AI experience tangible and testable.
Develop
Implementation, integration, testing, and infrastructure. Building the product with AI-native architecture patterns.
Deploy
Launch execution, monitoring, rollout strategy, and incident readiness. Getting your AI product safely into production.
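Progress through these stages is reported per task and rolled up per stage; a minimal sketch of that roll-up (the task names are hypothetical — only the three statuses come from the self-assessment model described below) might look like:

```python
# Sketch: computing per-stage completion from task statuses, as in the F2
# self-assessment model. Task names are hypothetical; only the statuses
# (not started / in progress / completed) come from the docs.

def stage_completion(tasks: dict[str, str]) -> float:
    """Percentage of a stage's tasks reported as completed."""
    done = sum(1 for status in tasks.values() if status == "completed")
    return 100.0 * done / len(tasks)

# Hypothetical task list for the Discover stage.
discover = {
    "problem_definition": "completed",
    "user_research": "completed",
    "competitive_analysis": "in_progress",
    "opportunity_sizing": "not_started",
}

print(stage_completion(discover))  # 50.0
```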
How it works
F2 uses a self-assessment model. For each stage, you report the status of specific tasks (not started, in progress, completed). The system computes a completion percentage per stage and identifies your current lifecycle position.
F3: AI Product Assessment
Question: How AI-native is your product itself?
F3 evaluates the product (not the team) across 27 dimensions that measure how deeply AI is integrated into the user experience, architecture, and business model.
Key areas
- AI Integration Depth: How central AI is to core product functionality
- Personalization: Adaptive experiences that learn from user behavior
- Automation: Intelligent workflows that reduce manual effort
- Data Flywheel: Whether usage data improves the product over time
- AI-Native UX: Interface patterns designed for AI interactions (prompts, suggestions, explanations)
- Cost Architecture: Token economics and inference cost management
Scoring
F3 uses the same 1-5 scoring scale as F1. It runs as a separate assessment via the /api/score/product endpoint or through the platform UI.
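A request to that endpoint might be shaped like the following sketch. Only the /api/score/product path comes from this page; the base URL, bearer-token auth, and payload fields are assumptions for illustration:

```python
# Sketch: POSTing to the F3 scoring endpoint. The endpoint path is from the
# docs; the base URL, auth header, and payload fields are assumptions.

import json
from urllib import request

def build_request(base_url: str, api_key: str, product: dict) -> request.Request:
    """Assemble a JSON POST to the product-scoring endpoint."""
    return request.Request(
        url=f"{base_url}/api/score/product",
        data=json.dumps(product).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_request("https://dacard.ai", "YOUR_API_KEY",
                    {"product_name": "Example", "description": "An AI-native app"})
# response = request.urlopen(req)  # not executed in this sketch
print(req.full_url)  # https://dacard.ai/api/score/product
```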
How They Relate
The three frameworks form a tension map. High capability (F1) without process maturity (F2) means your team can build well but lacks discipline. Strong AI integration (F3) without operational maturity (F1) creates fragile products.
| Scenario | F1 | F2 | F3 | Interpretation |
|---|---|---|---|---|
| Strong team, weak process | High | Low | Varies | Talented team shipping inconsistently |
| Process-driven, low capability | Low | High | Low | Following a playbook without the skills to execute |
| AI-native product, weak ops | Low | Varies | High | Impressive demo, unsustainable in production |
| Compound readiness | High | High | High | Team, process, and product all aligned |
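The tension map above can be read mechanically. As an illustration (the "score >= 4 counts as High" threshold on the 1-5 scale is an assumption, not a Dacard.ai rule), a sketch that maps three scores to those scenarios:

```python
# Sketch: interpreting F1/F2/F3 scores against the tension map.
# The >= 4 "High" cutoff on the 1-5 scale is assumed for illustration.

def band(score: float) -> str:
    return "High" if score >= 4 else "Low"

def interpret(f1: float, f2: float, f3: float) -> str:
    bands = (band(f1), band(f2), band(f3))
    if bands == ("High", "High", "High"):
        return "Compound readiness: team, process, and product all aligned"
    if bands[0] == "High" and bands[1] == "Low":
        return "Strong team, weak process: shipping inconsistently"
    if bands == ("Low", "High", "Low"):
        return "Process-driven, low capability: playbook without the skills"
    if bands[0] == "Low" and bands[2] == "High":
        return "AI-native product, weak ops: fragile in production"
    return "Mixed profile"

print(interpret(4.5, 4.2, 4.1))
```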
Compound Readiness
When all three frameworks score well, you achieve compound readiness: the state where team capability, process maturity, and product AI-nativeness reinforce each other. This is the target state for AI-native product teams.
Score your product
Run an F1 assessment to see your 27-dimension breakdown.
Start lifecycle assessment
Complete your F2 self-assessment to track build progress.
API Reference
Score programmatically via the API.