Documentation Index
Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt
Use this file to discover all available pages before exploring further.
F1 Product Operations Framework
The F1 framework measures your team’s operational capability across 6 functions and 27 dimensions. This page explores the operational lens: how AI-native are your team’s actual workflows, not just the product you ship? A product cannot be more AI-native than the team building it. Organizations with high product AI-nativeness but low operational maturity will eventually stall, as the team, not the technology, becomes the bottleneck.

Why operational maturity matters
The most common pattern is F3 (product AI-nativeness) running ahead of F1 (team operational maturity). The product has impressive AI features, but the team is still building, managing, and iterating using traditional processes. This creates fragility: the product looks great but cannot sustain its lead. F1 reveals whether your team is actually working in AI-native ways, or just building AI features using conventional processes.

Six operational functions
Each function maps to a team discipline and groups a subset of the 27 F1 dimensions.

| Function | Owned by | What it measures |
|---|---|---|
| Strategy | Product Management | AI-native market analysis, decision quality, roadmap discipline, competitive positioning |
| Design | Design | AI-native research, prototyping speed, experience design, design-to-dev handoff |
| Development | Engineering | AI-native architecture, spec quality, build vs buy decisions, delivery velocity |
| Operations | Ops / Data | Customer signal synthesis, product analytics, data strategy, feedback loop quality |
| GTM | Product GTM | AI-native positioning, launch execution, adoption, pricing |
| Intelligence | Cross-functional | Quality and experimentation, team orchestration, process iteration, cost management |
Maturity stages for operations
The same five F1 stages apply, interpreted through an operational lens:

| Stage | Score Range | What it looks like for operations |
|---|---|---|
| Foundation | 27-48 | No AI in team workflows. Everything is manual: spreadsheets, status meetings, gut-feel decisions. |
| Building | 49-70 | Individual team members use AI tools (Claude, Copilot) but no organizational adoption. Each person does their own thing. |
| Scaling | 71-91 | Standardized AI tools adopted across functions. Shared playbooks, measurable productivity gains, consistent practices. |
| Leading | 92-113 | AI is embedded in core workflows. The team cannot operate efficiently without it. Cross-function coordination is AI-assisted. |
| Compounding | 114-135 | AI orchestrates cross-function work. Agents handle routine operations. Humans focus on judgment, not execution. |
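The stage boundaries above can be expressed as a simple lookup. A minimal sketch, assuming the total score is the sum of 27 dimension ratings of 1-5 each (so 27-135); the function and constant names are illustrative, not part of any DACARD API:

```python
# Map a total F1 operations score to its maturity stage.
# Boundaries come from the table above; names are illustrative.

STAGES = [
    (27, 48, "Foundation"),
    (49, 70, "Building"),
    (71, 91, "Scaling"),
    (92, 113, "Leading"),
    (114, 135, "Compounding"),
]

def stage_for_score(total: int) -> str:
    """Return the stage name for a total score (27 dimensions, rated 1-5)."""
    if not 27 <= total <= 135:
        raise ValueError("total must be between 27 and 135")
    for low, high, name in STAGES:
        if low <= total <= high:
            return name

print(stage_for_score(75))  # Scaling
```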
Function-level scoring
- Strategy
- Design
- Development
- Operations
- GTM
- Intelligence
The Strategy function evaluates whether product management is using AI to make better, faster decisions.

Four dimensions: Market Intelligence, Decision Quality, Roadmap Discipline, Competitive Positioning.

Foundation: Roadmap is driven by stakeholder requests. Market research is occasional and manual. Competitive tracking is informal.

Leading: Continuous market intelligence pipelines. AI-generated decision briefs before major calls. Roadmap tied to measurable outcomes with automated tracking.
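A function-level score can be derived from its dimension ratings. A minimal sketch, assuming each dimension is rated 1-5 and the function score is their mean; this page does not specify the exact aggregation rule, so treat the averaging as an assumption:

```python
# Derive a function score from its dimension ratings.
# The mean-based roll-up is an assumption; the page does not
# define how dimension scores aggregate into a function score.

STRATEGY_DIMENSIONS = [
    "Market Intelligence",
    "Decision Quality",
    "Roadmap Discipline",
    "Competitive Positioning",
]

def function_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 ratings for a function's dimensions."""
    for dim, score in ratings.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{dim} must be rated 1-5, got {score}")
    return round(sum(ratings.values()) / len(ratings), 1)

ratings = dict(zip(STRATEGY_DIMENSIONS, [4, 3, 2, 3]))
print(function_score(ratings))  # 3.0
```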
Product vs. operations maturity gap
The most valuable insight from F1 comes from comparing your product maturity (captured in the URL score) against your operational maturity (captured in integration signals):

| Pattern | What it means | Action |
|---|---|---|
| Product ahead of operations | Building AI features with traditional processes. Sustainability risk as the product scales. | Invest in team AI adoption and workflow modernization before adding more product AI. |
| Operations ahead of product | AI-savvy team that has not yet channeled capability into the product. Opportunity. | Use the strong operational foundation to ship AI features aggressively. |
| Both high | Fully aligned. Compounding advantage that widens over time. | Maintain and extend. Focus on the highest-leverage dimensions to reinforce the flywheel. |
| Both low | Starting point. Do not try to fix everything at once. | Start with Operations (faster ROI through tool adoption) to build muscle for product AI. |
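The four patterns above reduce to a two-axis check. A minimal sketch, assuming both scores sit on the same 27-135 scale and treating "high" as the Scaling stage or above (a score of 71+); that cutoff is an illustrative assumption, not a documented DACARD rule:

```python
# Classify the product-vs-operations maturity gap into one of the
# four patterns above. The HIGH cutoff is an assumed threshold.

HIGH = 71  # assumed: Scaling stage or above counts as "high"

def gap_pattern(product_score: int, ops_score: int) -> str:
    product_high = product_score >= HIGH
    ops_high = ops_score >= HIGH
    if product_high and ops_high:
        return "Both high"
    if not product_high and not ops_high:
        return "Both low"
    if product_high:
        return "Product ahead of operations"
    return "Operations ahead of product"

print(gap_pattern(95, 52))  # Product ahead of operations
```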
Using function scores
Identify your weakest function
If Design scores a 3 but GTM scores a 1, your team builds good AI products but markets them with legacy methods. Fix the bottleneck function before trying to raise already-strong functions.
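Finding the bottleneck function is a simple minimum over per-function scores. A minimal sketch with illustrative 1-5 ratings:

```python
# Find the bottleneck function from per-function scores.
# The ratings below are illustrative, matching the example in the text.

scores = {
    "Strategy": 3, "Design": 3, "Development": 4,
    "Operations": 2, "GTM": 1, "Intelligence": 2,
}

weakest = min(scores, key=scores.get)
print(weakest)  # GTM
```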
Track quarterly progress
Re-score operations every quarter. Unlike product maturity (which depends on shipped features), operations maturity can improve in weeks through tool adoption and process changes.
Benchmark across teams
Use portfolio view to compare function scores across product teams. High-scoring teams in a particular function are your internal best practice source.
Related pages
F1 Framework (Dimension View)
Full 27-dimension breakdown with scoring criteria.
Three Frameworks
How F1, F2, and F3 work together.
Connect integrations
Integration data improves F1 scoring accuracy significantly.