
F1 Product Operations Maturity Framework

F1 is the primary scoring framework. It evaluates a product team’s operational capability across 27 dimensions organized into 6 functions, each scored 1 to 5, for a composite score of 27 to 135. It answers the question: “How capable is your team at building and operating AI-native products?”

Five maturity stages

Maturity spectrum from Foundation (27-48) through Compounding (114-135)
1. Foundation (27-48)

Basic or absent capabilities across most dimensions. The team has not yet established consistent AI-native practices. Most work is manual, ad hoc, or driven by individual contributors rather than shared systems.
2. Building (49-70)

Emerging practices exist but are inconsistently applied. Some functions are stronger than others. The team is experimenting with AI-native workflows but lacks repeatability and measurement.
3. Scaling (71-91)

Systematic processes are in place with measurable outcomes. The team ships AI-native features repeatedly and reliably. Data loops are beginning to compound.
4. Leading (92-113)

Industry-leading practices, deeply integrated across all functions. AI is embedded in the team’s operating model, not just the product. The gap between this team and Building-stage peers is significant and widening.
5. Compounding (114-135)

Self-improving systems. Every function reinforces every other. The team gets better at building AI products with every cycle, creating a compounding advantage that is difficult to replicate.
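The composite-to-stage mapping above is mechanical: sum the 27 dimension scores (each 1-5) and look up which stage band contains the total. A minimal sketch in Python, assuming the bands listed above; the function and variable names are illustrative, not part of the framework:

```python
# Stage bands as defined by the F1 framework above.
STAGES = [
    ("Foundation", 27, 48),
    ("Building", 49, 70),
    ("Scaling", 71, 91),
    ("Leading", 92, 113),
    ("Compounding", 114, 135),
]

def composite_score(dimension_scores):
    """Sum 27 per-dimension scores (each 1-5) into a 27-135 composite."""
    if len(dimension_scores) != 27:
        raise ValueError("F1 expects exactly 27 dimension scores")
    if any(not 1 <= s <= 5 for s in dimension_scores):
        raise ValueError("each dimension is scored 1 to 5")
    return sum(dimension_scores)

def maturity_stage(score):
    """Return the stage whose band contains the composite score."""
    for name, low, high in STAGES:
        if low <= score <= high:
            return name
    raise ValueError("composite score must be between 27 and 135")
```

A team scoring 3 on every dimension has a composite of 81, which lands in the Scaling band.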

Six functions, 27 dimensions

The 27 dimensions are organized into six functions that map to the full product lifecycle: Strategy, Design, Development, Intelligence, GTM, and Operations.

Strategy

How well the team understands its market and makes evidence-based strategic decisions.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Market Intelligence | Quality of market signal collection and synthesis | No systematic market research | Continuous AI-powered competitive intelligence with real-time synthesis |
| Decision Quality | Evidence quality behind product and strategic decisions | Gut-feel decisions with no documentation | Structured decision frameworks with measurable outcome tracking |
| Roadmap Discipline | How well the roadmap reflects strategic priorities | Roadmap driven by stakeholder requests | Outcome-based roadmap with clear OKR linkage and regular pruning |
| Competitive Positioning | Clarity and defensibility of market positioning | No differentiated positioning | Compound positioning that deepens with scale and is hard to replicate |

Design

How effectively the team translates user insight into shipped product experiences.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Research & Discovery | Depth and consistency of user research practices | No user research practice | Continuous discovery with AI-powered synthesis and opportunity trees |
| Prototyping Speed | How fast the team goes from idea to testable artifact | Weeks to produce a prototype | Same-day AI-generated prototypes with real user feedback loops |
| Experience Design | Quality of AI interaction patterns and UX craft | No AI in UX | AI interactions feel native, adaptive, and delightful |
| Design-Dev Handoff | Efficiency and fidelity of design-to-development translation | Manual specs with high loss-in-translation | Automated handoff with design system coverage and zero spec debt |

Development

How efficiently and consistently the team builds and ships software.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Architecture & Systems | Depth of AI integration in the technical architecture | No AI in the stack | Models, pipelines, and inference are core to the architecture |
| Spec & Context Quality | Quality of PRDs, tickets, and context provided to builders | Vague specs with high ambiguity | AI-generated specs with rich context, acceptance criteria, and examples |
| Build vs Buy | Strategic decision-making on model and infrastructure choices | No framework for build vs buy | Principled model with clear criteria, regular review, and measured outcomes |
| Delivery Velocity | Speed and consistency of shipping AI improvements | Quarterly releases | Continuous deployment with AI-powered review, testing, and rollout |

Intelligence

How well the team captures, organizes, and uses customer and product signals.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Customer Signal Synthesis | Quality of customer feedback collection and synthesis | No systematic feedback collection | AI-powered synthesis of all customer signals into actionable intelligence |
| Product Analytics | Depth and use of product usage data | No analytics instrumentation | Real-time AI anomaly detection with automated insight generation |
| Data Strategy & Flywheel | Whether data creates a defensible compounding advantage | No data strategy | Proprietary data flywheel: usage generates data that improves the product |
| Feedback Loop Quality | Whether usage data flows back to improve the product | No feedback mechanism | Real-time signal loop from user to model to product improvement |
| Knowledge Management | How well institutional knowledge is captured, organized, and surfaced | No systematic knowledge capture | AI agents capture, organize, and distribute institutional knowledge autonomously |

GTM

How effectively the team brings products to market and drives adoption.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Positioning & Messaging | Clarity and resonance of market messaging | Generic or feature-based positioning | Outcome-focused AI positioning with clear differentiation |
| Launch Execution | Consistency and quality of product launch processes | Ad hoc launches with no playbook | Repeatable launch system with pre/post analytics and clear success metrics |
| Adoption & Expansion | How effectively the product drives usage growth and expansion | No adoption strategy | AI-powered onboarding, expansion loops, and retention flywheel |
| Pricing & Packaging | Whether pricing reflects AI value and supports growth | Traditional seat-based pricing | Usage or outcome-based pricing that scales with AI value delivered |

Operations

How well the team learns, adapts, and optimizes its own processes.
| Dimension | What it measures | Score 1 | Score 5 |
| --- | --- | --- | --- |
| Quality & Experimentation | Rigor of testing, evaluation, and quality processes | Manual QA with no AI evals | Automated eval pipelines with continuous quality monitoring |
| Team Orchestration | How well the team coordinates work across people and systems | Manual planning with poor visibility | AI-assisted coordination with real-time capacity and dependency tracking |
| Process Iteration | How fast the team improves its own operating model | Processes are fixed until broken | Continuous process improvement with data-driven retrospectives |
| Cost & Token Economics | How well the team manages AI inference and infrastructure costs | No awareness of AI costs | Active token budget management with cost-per-feature analysis and optimization |
| Security & Compliance | How well the team manages AI-specific security and compliance risks | No AI security practices | AI agents detect threats, patch vulnerabilities, and maintain compliance autonomously |
| Reliability & Resilience | How well AI features handle failures and maintain availability | No AI-specific monitoring | AI predicts failures before they occur and preemptively adjusts infrastructure |

How dimensions interact

Dimensions compound within and across functions:
  • Data Strategy + Feedback Loop Quality = The compounding engine. Strong data feeds strong models, which generate more useful data.
  • Architecture & Systems + Delivery Velocity = The delivery engine. Deep integration enables fast iteration.
  • Market Intelligence + Competitive Positioning = The positioning engine. Clear market insight produces defensible differentiation.
  • Customer Signal Synthesis + Product Analytics = The intelligence layer. Great signal collection with great analysis produces actionable insight.

Scoring criteria

Each dimension is scored 1-5 based on observable signals. Assessors evaluate:
  1. Public evidence - What the product shows, says, and does externally
  2. Technical signals - Architecture patterns, API design, infrastructure choices
  3. Business model signals - Pricing structure, packaging, monetization approach
  4. Team signals - Job postings, engineering blog content, conference talks
  5. Integration signals - Connected tools provide ground-truth operational data (GitHub, Linear, PostHog)

Using F1 scores strategically

Identify the function most relevant to your role and focus on its dimensions. A PM should prioritize Strategy and Operations. An engineer should prioritize Development and Intelligence. Your function score is your primary growth lever.
Use the dimension heatmap across your portfolio to identify systemic weaknesses. A dimension that scores low across multiple products is an org-level capability gap, not a team problem. Invest in org-wide training and tooling, not individual coaching.
Compare your F1 score against your F2 (lifecycle) and F3 (AI product) scores to identify the biggest tension. A high F3 with a low F1 means your product has more AI than your team can sustain. That is a scaling risk, not a strength.
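The portfolio heatmap check described above can be sketched as a small aggregation: average each dimension across products and flag the ones that stay low everywhere. The function name, threshold, and sample data below are illustrative assumptions, not part of the framework:

```python
def org_level_gaps(portfolio, threshold=2.0):
    """Flag dimensions whose average score across a portfolio of
    F1 assessments falls below threshold (an illustrative cutoff).

    portfolio: {product_name: {dimension_name: score}}
    """
    totals = {}
    for scores in portfolio.values():
        for dim, score in scores.items():
            totals.setdefault(dim, []).append(score)
    return sorted(
        dim for dim, vals in totals.items()
        if sum(vals) / len(vals) < threshold
    )

# Hypothetical two-product portfolio with two dimensions scored.
portfolio = {
    "product_a": {"Delivery Velocity": 1, "Product Analytics": 4},
    "product_b": {"Delivery Velocity": 2, "Product Analytics": 3},
}
# Delivery Velocity averages 1.5 across the portfolio, so it would be
# flagged as an org-level capability gap rather than a team problem.
```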

Three Frameworks

How F1, F2, and F3 work together for compound readiness.

Score your product

Run an F1 assessment to see your 27-dimension breakdown.

Understand your report

Guide to reading your F1 maturity report.