F1 Product Operations Maturity Framework
F1 is the primary scoring framework. It evaluates a product team’s operational capability across 27 dimensions organized into six functions. Each dimension is scored 1 to 5, for a composite score of 27 to 135. It answers the question: “How capable is your team at building and operating AI-native products?”
Five maturity stages
Foundation (27-48)
Basic or absent capabilities across most dimensions. The team has not yet established consistent AI-native practices. Most work is manual, ad hoc, or driven by individual contributors rather than shared systems.
Building (49-70)
Emerging practices exist but are inconsistently applied. Some functions are stronger than others. The team is experimenting with AI-native workflows but lacks repeatability and measurement.
Scaling (71-91)
Systematic processes are in place with measurable outcomes. The team ships AI-native features repeatedly and reliably. Data loops are beginning to compound.
Leading (92-113)
Industry-leading practices, deeply integrated across all functions. AI is embedded in the team’s operating model, not just the product. The gap between this team and Building-stage peers is significant and widening.
Six functions, 27 dimensions
Strategy
How well the team understands its market and makes evidence-based strategic decisions.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Market Intelligence | Quality of market signal collection and synthesis | No systematic market research | Continuous AI-powered competitive intelligence with real-time synthesis |
| Decision Quality | Evidence quality behind product and strategic decisions | Gut-feel decisions with no documentation | Structured decision frameworks with measurable outcome tracking |
| Roadmap Discipline | How well the roadmap reflects strategic priorities | Roadmap driven by stakeholder requests | Outcome-based roadmap with clear OKR linkage and regular pruning |
| Competitive Positioning | Clarity and defensibility of market positioning | No differentiated positioning | Compound positioning that deepens with scale and is hard to replicate |
Design
How effectively the team translates user insight into shipped product experiences.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Research & Discovery | Depth and consistency of user research practices | No user research practice | Continuous discovery with AI-powered synthesis and opportunity trees |
| Prototyping Speed | How fast the team goes from idea to testable artifact | Weeks to produce a prototype | Same-day AI-generated prototypes with real user feedback loops |
| Experience Design | Quality of AI interaction patterns and UX craft | No AI in UX | AI interactions feel native, adaptive, and delightful |
| Design-Dev Handoff | Efficiency and fidelity of design-to-development translation | Manual specs with high loss-in-translation | Automated handoff with design system coverage and zero spec debt |
Development
How efficiently and consistently the team builds and ships software.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Architecture & Systems | Depth of AI integration in the technical architecture | No AI in the stack | Models, pipelines, and inference are core to the architecture |
| Spec & Context Quality | Quality of PRDs, tickets, and context provided to builders | Vague specs with high ambiguity | AI-generated specs with rich context, acceptance criteria, and examples |
| Build vs Buy | Strategic decision-making on model and infrastructure choices | No framework for build vs buy | Principled model with clear criteria, regular review, and measured outcomes |
| Delivery Velocity | Speed and consistency of shipping AI improvements | Quarterly releases | Continuous deployment with AI-powered review, testing, and rollout |
Intelligence
How well the team captures, organizes, and uses customer and product signals.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Customer Signal Synthesis | Quality of customer feedback collection and synthesis | No systematic feedback collection | AI-powered synthesis of all customer signals into actionable intelligence |
| Product Analytics | Depth and use of product usage data | No analytics instrumentation | Real-time AI anomaly detection with automated insight generation |
| Data Strategy & Flywheel | Whether data creates a defensible compounding advantage | No data strategy | Proprietary data flywheel: usage generates data that improves the product |
| Feedback Loop Quality | Whether usage data flows back to improve the product | No feedback mechanism | Real-time signal loop from user to model to product improvement |
| Knowledge Management | How well institutional knowledge is captured, organized, and surfaced | No systematic knowledge capture | AI agents capture, organize, and distribute institutional knowledge autonomously |
GTM
How effectively the team brings products to market and drives adoption.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Positioning & Messaging | Clarity and resonance of market messaging | Generic or feature-based positioning | Outcome-focused AI positioning with clear differentiation |
| Launch Execution | Consistency and quality of product launch processes | Ad hoc launches with no playbook | Repeatable launch system with pre/post analytics and clear success metrics |
| Adoption & Expansion | How effectively the product drives usage growth and expansion | No adoption strategy | AI-powered onboarding, expansion loops, and retention flywheel |
| Pricing & Packaging | Whether pricing reflects AI value and supports growth | Traditional seat-based pricing | Usage or outcome-based pricing that scales with AI value delivered |
Operations
How well the team learns, adapts, and optimizes its own processes.

| Dimension | What it measures | Score 1 | Score 5 |
|---|---|---|---|
| Quality & Experimentation | Rigor of testing, evaluation, and quality processes | Manual QA with no AI evals | Automated eval pipelines with continuous quality monitoring |
| Team Orchestration | How well the team coordinates work across people and systems | Manual planning with poor visibility | AI-assisted coordination with real-time capacity and dependency tracking |
| Process Iteration | How fast the team improves its own operating model | Processes are fixed until broken | Continuous process improvement with data-driven retrospectives |
| Cost & Token Economics | How well the team manages AI inference and infrastructure costs | No awareness of AI costs | Active token budget management with cost-per-feature analysis and optimization |
| Security & Compliance | How well the team manages AI-specific security and compliance risks | No AI security practices | AI agents detect threats, patch vulnerabilities, and maintain compliance autonomously |
| Reliability & Resilience | How well AI features handle failures and maintain availability | No AI-specific monitoring | AI predicts failures before they occur and preemptively adjusts infrastructure |
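To make the arithmetic concrete, the composite can be computed as a plain sum over the 27 dimensions. This is a hedged sketch, not an official API: the function name and input shape are assumptions, while the dimension counts per function (four each for Strategy, Design, Development, and GTM; five for Intelligence; six for Operations) come from the tables above.

```python
# Dimension counts per function, as listed in the tables above (totals 27).
DIMENSIONS_PER_FUNCTION = {
    "Strategy": 4,
    "Design": 4,
    "Development": 4,
    "Intelligence": 5,
    "GTM": 4,
    "Operations": 6,
}

def composite_score(scores: dict[str, list[int]]) -> int:
    """Sum 1-5 dimension scores into a 27-135 composite."""
    total = 0
    for function, expected in DIMENSIONS_PER_FUNCTION.items():
        dims = scores[function]
        if len(dims) != expected:
            raise ValueError(
                f"{function}: expected {expected} scores, got {len(dims)}"
            )
        if any(not 1 <= s <= 5 for s in dims):
            raise ValueError(f"{function}: scores must be 1-5")
        total += sum(dims)
    return total
```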
How dimensions interact
Dimensions compound within and across functions:

- Data Strategy + Feedback Loop Quality = The compounding engine. Strong data feeds strong models, which generate more useful data.
- Architecture & Systems + Delivery Velocity = The delivery engine. Deep integration enables fast iteration.
- Market Intelligence + Competitive Positioning = The positioning engine. Clear market insight produces defensible differentiation.
- Customer Signal Synthesis + Product Analytics = The intelligence layer. Great signal collection with great analysis produces actionable insight.
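One way to operationalize these pairings is to treat each engine as only as strong as its weaker dimension. The pairings come from the list above; scoring an engine as the minimum of its two dimension scores is an illustrative assumption, not part of the framework.

```python
# Dimension pairs named above; min() models the weaker side as the bottleneck.
ENGINES = {
    "compounding engine": ("Data Strategy & Flywheel", "Feedback Loop Quality"),
    "delivery engine": ("Architecture & Systems", "Delivery Velocity"),
    "positioning engine": ("Market Intelligence", "Competitive Positioning"),
    "intelligence layer": ("Customer Signal Synthesis", "Product Analytics"),
}

def engine_strengths(dimension_scores: dict[str, int]) -> dict[str, int]:
    """Score each engine as the minimum of its two dimensions (assumed heuristic)."""
    return {
        name: min(dimension_scores[a], dimension_scores[b])
        for name, (a, b) in ENGINES.items()
    }
```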
Scoring criteria
Each dimension is scored 1-5 based on observable signals. Assessors evaluate:

- Public evidence - What the product shows, says, and does externally
- Technical signals - Architecture patterns, API design, infrastructure choices
- Business model signals - Pricing structure, packaging, monetization approach
- Team signals - Job postings, engineering blog content, conference talks
- Integration signals - Connected tools provide ground-truth operational data (GitHub, Linear, PostHog)
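The five signal categories above could be rolled into a single 1-5 dimension score; a rounded mean is one plausible aggregation, sketched below. The category keys, function name, and the averaging rule itself are all assumptions — the framework does not publish its aggregation method in this section.

```python
from statistics import mean

# Hypothetical keys for the five signal categories listed above.
SIGNAL_CATEGORIES = [
    "public_evidence", "technical", "business_model", "team", "integration",
]

def dimension_score(signal_scores: dict[str, int]) -> int:
    """Roll five 1-5 signal-category scores into one 1-5 dimension score.

    A plain rounded mean is an illustrative assumption, not the
    framework's published rubric.
    """
    missing = [c for c in SIGNAL_CATEGORIES if c not in signal_scores]
    if missing:
        raise ValueError(f"missing signal categories: {missing}")
    return round(mean(signal_scores[c] for c in SIGNAL_CATEGORIES))
```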
Using F1 scores strategically
For individual contributors
Identify the function most relevant to your role and focus on its dimensions. A PM should prioritize Strategy and Operations. An engineer should prioritize Development and Intelligence. Your function score is your primary growth lever.
For product leaders
Use the dimension heatmap across your portfolio to identify systemic weaknesses. A dimension that scores low across multiple products is an org-level capability gap, not a team problem. Invest in org-wide training and tooling, not individual coaching.
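The heatmap logic above can be sketched as a scan over per-product dimension scores. The low-score threshold (≤ 2) and the "multiple products" cutoff are illustrative assumptions, as is the function name.

```python
def org_capability_gaps(
    portfolio: dict[str, dict[str, int]],
    low_threshold: int = 2,
    min_products: int = 2,
) -> list[str]:
    """Flag dimensions scoring at or below `low_threshold` in at least
    `min_products` products: likely org-level capability gaps rather
    than single-team problems."""
    low_counts: dict[str, int] = {}
    for scores in portfolio.values():
        for dim, score in scores.items():
            if score <= low_threshold:
                low_counts[dim] = low_counts.get(dim, 0) + 1
    return sorted(d for d, n in low_counts.items() if n >= min_products)
```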
For founders and executives
Compare your F1 score against your F2 (lifecycle) and F3 (AI product) scores to identify the biggest tension. A high F3 with a low F1 means your product has more AI than your team can sustain. That is a scaling risk, not a strength.
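The F1-vs-F3 comparison can be sketched numerically. The F2 and F3 scales are not given in this section, so the sketch assumes all scores are normalized to 0-100; the gap threshold and both function names are likewise hypothetical.

```python
def normalize_f1(composite: int) -> float:
    """Rescale an F1 composite (27-135) to 0-100."""
    return (composite - 27) / (135 - 27) * 100

def sustainability_gap(f1_pct: float, f3_pct: float,
                       threshold: float = 20.0) -> bool:
    """Flag a scaling risk when the AI-product score (F3) outruns team
    maturity (F1). Inputs are assumed normalized to 0-100; the 20-point
    threshold is illustrative, not from the framework.
    """
    return (f3_pct - f1_pct) > threshold
```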
Related pages
Three Frameworks
How F1, F2, and F3 work together for compound readiness.
Score your product
Run an F1 assessment to see your 27-dimension breakdown.
Understand your report
Guide to reading your F1 maturity report.