Documentation Index

Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt

Use this file to discover all available pages before exploring further.

F1 Product Operations Framework

The F1 framework measures your team’s operational capability across six functions and 27 dimensions. This page explores the operational lens: how AI-native are your team’s actual workflows, not just the product you ship? A product cannot be more AI-native than the team building it. Organizations with high product AI-nativeness but low operational maturity will eventually stall, as the team, not the technology, becomes the bottleneck.

Why operational maturity matters

The most common pattern is F3 (product AI-nativeness) running ahead of F1 (team operational maturity). The product has impressive AI features, but the team is still building, managing, and iterating using traditional processes. This creates fragility: the product looks great but cannot sustain its lead. F1 reveals whether your team is actually working in AI-native ways, or just building AI features using conventional processes.

Six operational functions

Each function maps to a team discipline and groups several of the 27 F1 dimensions.

| Function | Owned by | What it measures |
| --- | --- | --- |
| Strategy | Product Management | AI-native market analysis, decision quality, roadmap discipline, competitive positioning |
| Design | Design | AI-native research, prototyping speed, experience design, design-to-dev handoff |
| Development | Engineering | AI-native architecture, spec quality, build vs buy decisions, delivery velocity |
| Operations | Ops / Data | Customer signal synthesis, product analytics, data strategy, feedback loop quality |
| GTM | Product GTM | AI-native positioning, launch execution, adoption, pricing |
| Intelligence | Cross-functional | Quality and experimentation, team orchestration, process iteration, cost management |

Maturity stages for operations

The same five F1 stages apply, interpreted through an operational lens:
| Stage | Score range | What it looks like for operations |
| --- | --- | --- |
| Foundation | 27-48 | No AI in team workflows. Everything is manual: spreadsheets, status meetings, gut-feel decisions. |
| Building | 49-70 | Individual team members use AI tools (Claude, Copilot) but there is no organizational adoption. Each person does their own thing. |
| Scaling | 71-91 | Standardized AI tools adopted across functions. Shared playbooks, measurable productivity gains, consistent practices. |
| Leading | 92-113 | AI is embedded in core workflows. The team cannot operate efficiently without it. Cross-function coordination is AI-assisted. |
| Compounding | 114-135 | AI orchestrates cross-function work. Agents handle routine operations. Humans focus on judgment, not execution. |
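As an illustration, the score-to-stage mapping above can be sketched as a small lookup. This is a hypothetical helper for working with F1 scores, not part of any official tooling; the function name is our own.

```python
def f1_stage(score: int) -> str:
    """Map an aggregate F1 score (27-135) to its maturity stage.

    Upper bounds are taken from the stage table: each stage covers
    scores up to and including its bound.
    """
    if not 27 <= score <= 135:
        raise ValueError("F1 scores range from 27 to 135")
    stages = [
        (48, "Foundation"),    # 27-48
        (70, "Building"),      # 49-70
        (91, "Scaling"),       # 71-91
        (113, "Leading"),      # 92-113
        (135, "Compounding"),  # 114-135
    ]
    for upper_bound, name in stages:
        if score <= upper_bound:
            return name
```

For example, a team scoring 75 lands in Scaling, while 114 crosses into Compounding.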

Function-level scoring

The Strategy function evaluates whether product management is using AI to make better, faster decisions. It covers four dimensions: Market Intelligence, Decision Quality, Roadmap Discipline, and Competitive Positioning.

Foundation: Roadmap is driven by stakeholder requests. Market research is occasional and manual. Competitive tracking is informal.

Leading: Continuous market intelligence pipelines. AI-generated decision briefs before major calls. Roadmap tied to measurable outcomes with automated tracking.

Product vs. operations maturity gap

The most valuable insight from F1 comes from comparing your product maturity (captured in the URL score) against your operational maturity (captured in integration signals):
| Pattern | What it means | Action |
| --- | --- | --- |
| Product ahead of operations | Building AI features with traditional processes. Sustainability risk as the product scales. | Invest in team AI adoption and workflow modernization before adding more product AI. |
| Operations ahead of product | AI-savvy team that has not yet channeled capability into the product. Opportunity. | Use the strong operational foundation to ship AI features aggressively. |
| Both high | Fully aligned. Compounding advantage that widens over time. | Maintain and extend. Focus on the highest-leverage dimensions to reinforce the flywheel. |
| Both low | Starting point. Do not try to fix everything at once. | Start with Operations (faster ROI through tool adoption) to build muscle for product AI. |
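The four patterns above amount to a two-by-two classification on the two scores. A minimal sketch, assuming a "high" threshold at the top of the Scaling range (91); the threshold and function name are illustrative assumptions, not part of the F1 framework:

```python
def maturity_gap(product_score: int, ops_score: int, threshold: int = 91) -> str:
    """Classify the product-vs-operations maturity gap.

    Scores above the assumed threshold (top of the Scaling range)
    count as 'high'; the returned string names the pattern.
    """
    product_high = product_score > threshold
    ops_high = ops_score > threshold
    if product_high and ops_high:
        return "Both high"
    if product_high:
        return "Product ahead of operations"
    if ops_high:
        return "Operations ahead of product"
    return "Both low"
```

A team with a product score of 120 but an operations score of 60 would surface as "Product ahead of operations", the fragility pattern described earlier.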

Using function scores

If Design scores a 3 but GTM scores a 1, your team builds good AI products but markets them with legacy methods. Fix the bottleneck function before trying to raise already-strong functions.
Re-score operations every quarter. Unlike product maturity (which depends on shipped features), operations maturity can improve in weeks through tool adoption and process changes.
Use the portfolio view to compare function scores across product teams. High-scoring teams in a particular function are your internal best-practice source.

F1 Framework (Dimension View)

Full 27-dimension breakdown with scoring criteria.

Three Frameworks

How F1, F2, and F3 work together.

Connect integrations

Integration data improves F1 scoring accuracy significantly.