Documentation Index
Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt
Use this file to discover all available pages before exploring further.
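For tooling that consumes this index, a minimal fetch-and-parse sketch may help. The link-extraction pattern assumes the llms.txt convention of markdown-style links, one per entry; only `INDEX_URL` comes from the line above.

```python
# Minimal sketch: fetch the documentation index and list its pages.
# Assumes the llms.txt convention of markdown-style links, one per entry.
import re
import urllib.request

INDEX_URL = "https://docs.dacard.ai/llms.txt"

def fetch_index(url: str = INDEX_URL) -> str:
    """Download the raw llms.txt index."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def extract_links(text: str) -> list[tuple[str, str]]:
    """Pull (title, url) pairs out of markdown-style links."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)
```

Calling `extract_links` on a line such as `- [Scoring](https://docs.dacard.ai/scoring)` yields its (title, url) pair.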
New app shell
A redesigned three-column app experience is now available at /app. Bookmark /app and you’ll land directly in the shell on your most recently scored product — no more pasting product IDs.
The new layout brings together everything you need in one view:
- Prompt library on the left with 20 curated prompts grouped by moment (This Sprint, Deep Dive, Strategic, Generate). Tap any prompt to send it straight to DAC.
- DAC chat in the middle as the primary surface, with a warm first-time greeting, starter decks, and an empty-state hero tailored to whether you’ve scored yet.
- 3-tier rail on the right showing your composite score and triad (Team / Operation / Product), moves in flight, and an ambient “What’s cooking” feed.
- Slash commands (/diagnose, /coach, /trace, /benchmark, /forecast, /generate), live scope chips (@adapters, @slack, @public-web, @tribal, @competitors, @team-history), and a model pill so you can see which model and effort answered each response.
Saved views and aperture scoping
Apertures (the lens that controls what DAC sees) now actually filter your data — not just the prose. Save your favorite aperture presets server-side so they follow you across devices and sessions, and switch between them with a click. Set your function in Settings > Profile so a “function” aperture narrows DAC to the dimensions most relevant to your role across Strategy, Design, Development, Intelligence, Operations, or GTM.
The “just mine” aperture now correctly narrows moves to ones you created or are assigned to, not just tribal notes you authored.
Ambient signals on the rail
The Tier 3 “What’s cooking” feed now surfaces three kinds of ambient events without you having to ask:
- Slack thread spikes — channels with unusual activity in the last seven days.
- Competitor moves — clustered intel from your connected competitive sources.
- Peer-market shifts — when your cohort’s benchmarks move week over week.
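As a rough illustration, the thread-spike idea reduces to comparing a channel's recent volume against its own baseline. This is a hypothetical heuristic, not the product's actual detector; the 2x threshold and weekly buckets are assumptions.

```python
# Hypothetical sketch of the "Slack thread spikes" heuristic: flag channels
# whose latest weekly message volume is well above their usual level.
# The 2.0 factor is an assumed threshold, not the product's real value.

def spiking_channels(weekly_counts: dict[str, list[int]], factor: float = 2.0) -> list[str]:
    """weekly_counts maps channel -> per-week message counts, most recent last.
    A channel spikes when its latest week exceeds `factor` x its prior average."""
    spikes = []
    for channel, counts in weekly_counts.items():
        if len(counts) < 2:
            continue
        baseline = sum(counts[:-1]) / len(counts[:-1])
        if baseline > 0 and counts[-1] > factor * baseline:
            spikes.append(channel)
    return spikes
```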
Teach DAC
A new Teach DAC capture endpoint lets you record tribal knowledge — context, decisions, and constraints DAC wouldn’t otherwise know. Captured notes are pulled into every chat as a Tribal Knowledge section and cited back in responses, so you can see when DAC is leaning on your team’s context. Notes you actively use stay fresh; unused notes decay over time.
Moves vocabulary
Backlog items are now called Moves across the app, the API, and DAC’s responses — matching how teams actually talk about the actions they’re taking. Existing data is unchanged; only the wording is updated. Learn more about working with moves on the reports page.
DAC responses everywhere
DAC’s structured responses now render natively in three new destinations:
- Slack as Block Kit messages with epistemological color badges.
- Linear as markdown comments under a “DAC · COACHING COMMENT” header.
- MCP as typed JSON for any tool that speaks Model Context Protocol.
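To make the MCP destination concrete, here is one plausible shape for a typed JSON response. Every field name below is illustrative, not the documented schema.

```python
# Illustrative only: a plausible typed-JSON payload for a DAC response over MCP.
# Field names are assumptions for this sketch, not the documented schema.
import json

dac_response = {
    "type": "coaching_response",
    "model": "claude-sonnet",  # hypothetical "model pill" value
    "primitives": [  # structured cards, echoing the 15-primitive kit
        {"kind": "gap_indicator", "dimension": "Feedback Loop", "score": 2, "target": 3},
    ],
    "citations": [{"source": "tribal", "note_id": "hypothetical-id"}],
}

payload = json.dumps(dac_response)  # what an MCP client would receive
```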
Ambient FTUE
New users hitting the shell for the first time get a warm greeting hero, peer-style copy, and starter decks contextual to their state — connect-adapter prompts pre-first-score, sprint-focus and walk-dimension prompts once you have a snapshot. No more landing on a blank thread.
Dacard framework Claude Code skill
The Dacard product-operations framework — 27 dimensions and the five-stage maturity ladder (React, Augment, Orchestrate, Lead, Compound) — is now available as a Claude Code skill. Install it with one command and Claude Code will answer maturity questions grounded in the framework, no Dacard account required.
Improvements
- Structured output pipeline: DAC chat responses are now validated against the 15-primitive kit before render, with a one-shot retry on schema misses — fewer malformed cards, more consistent layouts.
- Full primitive kit live: Evidence Ledger, Trajectory Chart, Gap Indicator, Function Card, and Shipping/Landing bars round out the 15-primitive set powering the Deep Dive and Board Slide views.
- Observation-window cadence: All 54 dimensions now track measurement cadence so you know how recent the underlying signals are.
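The validate-then-retry behavior in the structured output pipeline above can be sketched generically; `generate` and `validate` here are stand-ins, not Dacard's actual functions.

```python
# Generic sketch of validate-then-retry: check a model response against a
# schema validator, retry once on a miss, then fail loudly.

def render_with_retry(generate, validate, prompt):
    """generate(prompt) -> response; validate(response) -> bool."""
    response = generate(prompt)
    if validate(response):
        return response
    response = generate(prompt)  # one-shot retry on schema miss
    if validate(response):
        return response
    raise ValueError("response failed schema validation after retry")
```

A single bounded retry keeps latency predictable while recovering most transient schema misses.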
Bug fixes
- Fixed the rail snapshot lookup using the wrong scope on multi-product accounts.
- Fixed /app routes inheriting the legacy app shell wrapper.
- Fixed several layout issues in the new shell: brand mark anchoring, library toggle position, header background separation, and content max-width on wide screens.
- Fixed an internal navigation link in the error display that triggered a full page reload.
AI coding tool integrations
You can now connect AI coding tools as first-class data sources. Claude Code, Cursor, GitHub Copilot, v0, Windsurf, Lovable, and Devin are available in the new AI Workflow integration category. When connected, these tools emit telemetry signals that feed directly into your scoring dimensions — giving you visibility into how AI is embedded in your development workflow. Browse all available providers on the integrations page.
Command palette
Press ⌘K (or Ctrl+K) to open a global command palette. Use slash verbs to quickly navigate, score a product, open settings, or jump to any page in the app without touching the sidebar.
Signals catalog
A new signals catalog shows every signal the platform collects, organized by dimension. Click into any signal to see its evidence drawer — the specific data points from your connected integrations that contributed to that score. This makes it easier to understand exactly what’s behind your results.
Coverage dashboard
A new coverage dashboard shows how well your connected integrations cover each of the 27 scoring dimensions. Each dimension displays a confidence indicator based on the number and variety of sources feeding it, so you can see where connecting another tool would improve accuracy.
Cross-source intelligence
When you have multiple integrations connected, the platform now synthesizes signals across providers to produce richer, more accurate scoring. For example, combining GitHub deploy frequency with Linear sprint velocity gives a more complete picture of your delivery cadence than either source alone.
Credential request flow
Team members can now request access to integration credentials they need. Requests go to account admins for approval, with scope previews showing exactly what permissions will be granted. Admins receive email reminders for stale requests. Manage requests from Settings > Integrations.
Per-dimension trajectory sparklines
Every dimension card on your score report now includes a sparkline showing how that dimension has trended over recent scores. Spot improvements and regressions at a glance without navigating to a separate history view.
Companion metrics
All 27 dimension cards now display companion metrics — real operational data points from your connected integrations that give additional context alongside the score. These metrics help you understand what’s driving each dimension.
Live scorecard on share pages
When you share a score via its public link, the share page now displays a live scorecard badge with key metrics, making shared results more informative for recipients.
Cohort benchmarking from integrations
Integration data now powers cohort benchmarks. When you connect tools, your operational metrics are compared against anonymized peers in similar stages and team sizes, giving you more grounded benchmark comparisons.
Updated score reveal
The score reveal sequence now leads with your most important cross-framework tension and highlights your weakest function. The new motion sequence draws attention to the insights that matter most before showing the full report.
Simplified onboarding
Onboarding has been condensed from multiple steps to a two-step wizard. Select your tier and function in one step, then start scoring immediately. Time to first score is significantly shorter.
Refreshed navigation
The app navigation has been restructured around three modes: Read (reports and intelligence), Act (scoring and coaching), and Connect (integrations and settings). The Baseline page is now the single entry point for URL scoring, reducing confusion about where to start.
Refreshed voice and tone
Copy across the entire app has been rewritten in a warmer, more direct voice. Headlines, labels, coaching prompts, and onboarding flows now read like a knowledgeable teammate rather than a clinical assessment tool.
Tension patterns
Two named tension patterns — Translation Gap and Fragility Signal — are now detected and surfaced automatically. Translation Gap flags when your strategy scores high but execution scores low. Fragility Signal flags when a single dimension is propping up an otherwise weak function.
Integration reliability improvements
All 55 integration adapters have been rebuilt on a hardened foundation with better error handling and more consistent sync behavior. Syncs are more reliable, and failures are caught and reported more gracefully.
Data sources catalog
A new public data sources catalog at dacard.ai lists every integration provider, organized by category, so you can see what’s available before signing up.
Bug fixes
- Fixed role sync not updating correctly during authentication.
- Fixed Business tier checkout not mapping to the correct pricing.
- Fixed chat and gap analysis panels appearing on settings pages.
- Fixed integration sync timestamps not updating after a successful sync.
- Fixed inconsistent score naming across surfaces (now consistently “DAC-score”).
Signal Card now available for all 6 job functions
The IC Signal Card personal assessment is now open to every product function. Previously only available to Product Managers, the assessment now supports Design, Engineering, Data and Analytics, Product Operations, and Product Marketing and Growth in addition to PM.
Each function has its own question bank, practice selection set, and confidence sliders mapped to the dimensions most relevant to that role. Narrative generation has been expanded to cover 16 additional dimensions across the new functions, so every function receives specific, opinionated feedback rather than generic fallback copy.
Access the assessment from dacard.ai/product-score. No account required.
Compound intelligence: pattern detection in production
Layer 3 of the intelligence stack is now running in production. A daily cron job runs pattern detection across all accounts with active integrations, identifying trends (improving, declining, stagnant, volatile) across integration signals and generating intelligent backlog items automatically.
Detected patterns feed directly into the coaching engine and the intelligent backlog. Completed recommendations are tracked against score outcomes, building a decision graph that improves recommendation quality over time.
Data enrichment: 27 dimensions and 378 signals
The scoring engine now evaluates 27 dimensions (up from 24), with three new dimensions: Security & Compliance, Reliability & Resilience, and Knowledge Management. Integration adapters across all 54 providers have been expanded to emit 120+ new signal types and 378 total signals, giving you deeper and more accurate scoring—especially when connected sources are active. Learn more on the scoring page.
Bring Your Own Model
Business and Enterprise plans can now configure a custom Anthropic API key. When active, all AI-powered features—scoring, coaching, agents, and recommendation generation—use your own key. Plan credits are bypassed entirely. Configure it from Settings > LLM Provider.
Coaching redesign
The Coach page has been rebuilt around a full-width chat experience. DAC now greets you with personalized starter prompts based on your actual score gaps—your weakest dimension, biggest cross-framework tension, 90-day improvement plan, and benchmark comparison. The 90-Day Plan is now accessible directly from the starters instead of a separate tab. Learn more on the coaching page.
Score history and competitor comparison
A new score history chart shows your recent scores with a trend indicator, so you can track progress at a glance. A “Score a competitor to compare” prompt lets you benchmark against another product directly from your report.
Methodology page
A new public methodology page explains exactly how scoring works: the four-step process (crawl, assess, compute, normalize), all 27 dimensions across 6 functions, the five maturity stages, and the three confidence levels. Linked from the score hero on every report.
Updated pricing
Plans have been restructured into four tiers: Free, Starter, Pro ($299/mo, 100 scores), and Business ($2,500+/mo, unlimited). The new Starter tier at $49/mo bridges Free and Pro with 20 scores, all three reports, and basic coaching. Score packs and expansion add-ons are available on all paid plans. See full details on the plans page.
Refreshed branding
The Dacard.ai brandmark has been updated to a three-bar RAG (red, amber, green) logo representing the maturity stage spectrum. The new mark appears across the app navigation, marketing site, share cards, onboarding, and landing pages.
Marketing site rebuild
The marketing site at dacard.ai has been rebuilt with updated positioning around product operations intelligence. New pages include How It Works, About, a blog, and the methodology deep-dive, all aligned with the current three-framework, 27-dimension scoring model.
Bug fixes
- Fixed sign-in button contrast in disabled and loading states.
- Fixed settings sidebar missing navigation links.
- Fixed sign-out button not appearing in settings.
Signal-score blending
Connected integrations now produce measurably different scores. When you connect GitHub, Linear, or other tools, their operational metrics (deploy frequency, PR cycle time, sprint velocity) are numerically blended with AI assessment scores. The blending algorithm weights signals by count and confidence, ensuring ground-truth data improves accuracy without overriding the AI assessment.
Financial attribution
Every coaching recommendation now carries a dollar estimate. “Improving Process Iteration from 2 to 3 could recover ~$180K/year in developer capacity.” Estimates are based on team size, dimension recovery factors, and average fully-loaded cost. Product ops stops being a cost center conversation and becomes an ROI conversation.
Evidence citations
Coaching observations link back to the specific commits, issues, and deployments that generated them. When your account has connected integrations, observations include browsable URLs to GitHub PRs, Linear issues, and other source artifacts. No more “trust the AI” — trace the evidence chain yourself.
Decision intelligence score
A new longitudinal metric (0-100) correlates launch decisions with outcomes. Three components: outcome quality (40%), decision velocity (30%), and learning acceleration (30%). The score compounds over time as more cycles complete. Available on the Intelligence ROI page.
Knowledge graph
A persistent decision graph now connects signals, scores, recommendations, and outcomes across cycles. Every scoring event, agent run, and human decision is recorded as a node. Path traversal queries power compound intelligence: “what caused this score change?” and “which recommendations worked?”
Approval queue
Agents at Suggest autonomy now queue actions for human approval instead of auto-executing. Each pending action shows the proposed Linear issue with an Approve/Reject button and optional reason field. Human decisions are recorded in the knowledge graph for learning.
Structured feedback
Coaching feedback expands from a simple thumbs up/down to structured reactions: helpful, already doing, wrong diagnosis, wrong priority, not actionable, and implemented differently. Each reaction type teaches the coaching engine what works for your team.
LLM-powered agent narratives
Strategic Intelligence and Anomaly Detection agents now use Claude to generate human-quality narratives grounded in your data. The agent engine runs structured analysis first, then enriches with an LLM call for readable strategic briefs with cited evidence.
Doc refresh actions
The Spec Quality agent now emits documentation refresh actions when dimensions score 2 or below. These create Linear tickets flagging documentation that has drifted from implementation, with the dimension context and current score.
Outcome attribution
Close the loop on why scores changed. When a recommendation produces a measurable score improvement, annotate whether it was because the recommendation was followed, an external factor, coincidence, or partial contribution. Attributions feed into the decision intelligence score.
Review before dispatch
Agents with “Review required” enabled hold their artifacts for human review before dispatching to Slack or email. This ensures a human sees every insight before it reaches the team.
Peer benchmark coaching
Coaching observations now reference anonymized peer benchmarks. “Teams at your stage score 3.2 (p50). You’re at 2.1.” Benchmarks are computed nightly from the aggregate scoring database.
PLG activation bridge
Anonymous scores are now first-class citizens in the sign-up funnel. When a visitor scores a product before creating an account, that result is automatically claimed after sign-up via POST /api/score/link. No data is lost, no re-score required — the full coaching report is immediately available in the new user’s dashboard.
Autonomous triggers
Agents now support event-driven execution in addition to schedules. Triggers fire when a specific condition is met — a score drops below a threshold, an integration sync detects a regression, or a new product is added. Each trigger has configurable conditions and a cooldown period to prevent noise.
Action executor
Agents can now take real actions, not just produce reports. When an agent detects an anomaly or hits a trigger condition, it can create a Linear issue, post to a Slack channel, queue a re-score of the affected product, or surface a coaching recommendation in the next DAC session. All actions are logged with a full audit trail.
Autonomy levels
Every agent now has a configurable autonomy level: Notify (dashboard only), Suggest (draft actions for review), or Auto (immediate execution). New agents default to Notify. Promote to Auto once you have validated the agent’s output quality over several runs.
Compound flywheel
A new compound flywheel view tracks whether agent actions produce measurable score improvements over time. For each dispatched action, the flywheel shows the targeted dimension, before and after scores, and whether the loop has been closed. Accessible from the Agent Studio dashboard and individual agent detail pages.
Product vitals
A new product vitals panel surfaces real-time operational health signals: agent run count, insights generated, actions dispatched, loops closed, and score velocity (change per week). Vitals appear on the Intelligence dashboard and update on every agent sync.
Agent Studio on real data
All eight Agent Studio UI components are now backed by live tRPC data. Compound ROI, connected sources, and several widget panels previously used placeholder values. Every metric you see in Agent Studio now reflects your actual account data.
Unified diagnostic report
A new five-tab diagnostic report at /products/[id]/diagnostic brings F1, F2, and F3 together in one view: Summary, People, Process, Product, and Tensions. The Tensions tab visualizes cross-framework conflicts — where your maturity score, lifecycle stage, and AI-native assessment pull in different directions — and ranks them by severity.
Bug fixes
- Fixed anonymous score not persisting through the Clerk sign-up redirect.
- Fixed agent triggers not firing on threshold events after an integration sync.
- Fixed compound flywheel loop-closed count double-counting completed actions.
- Fixed product vitals panel showing stale data after a manual agent run.
Team scoring
You can now invite teammates to score your product together. Each member scores the dimensions relevant to their function, and results are combined into a composite report. Disagreements between team members are flagged as alignment opportunities. Team scoring is available on the Team plan.
Quick score
A new 10-question self-assessment lets you get your first score in under seven minutes—no URL required. Quick score is designed for individual contributors who want fast, actionable results. Your quick score automatically upgrades when you add a URL or connect integrations.
Persona-adaptive experience
The app now adapts to your role. During onboarding, you select your function and seniority, and the platform adjusts headlines, coaching tone, dimension ordering, and suggested actions accordingly. Five persona archetypes cover CPTOs, product ops leads, IC product managers, VPs of Engineering, and investors.
Score reveal animation
When your score completes, a five-phase reveal walks you through the result: score count-up, signal bars, stage badge, People/Process/Product breakdown, and a tension narrative. The sequence highlights the most important insight before the full report loads.
90-day coaching plan
DAC now generates a structured 90-day improvement plan from your top recommendations. The plan is broken into three phases (days 1–30, 31–60, 61–90) with actions, owners, effort levels, and target scores. You can copy the plan as Markdown for use in Notion or Linear.
Integration platform expanded to 55 providers
The integration catalog has grown from 25 to 55 providers across 24 categories. New additions include Asana, Monday.com, Mixpanel, ClickUp, Jira Product Discovery, ProdPad, Userpilot, Enterpret, Canny, Pendo, Heap, Sentry, Datadog, PagerDuty, Intercom, Zendesk, Statsig, Dovetail, Notion, Confluence, and more. See the full list on the integrations page.
Benchmark comparisons
After scoring, you can now see how your product compares to peers in the same stage, team size, and industry vertical. Benchmarks display as percentile bars on your report and update nightly. A minimum sample size ensures statistical validity.
Score freshness and decay
Scores now show freshness indicators. As time passes without a re-score, dimension cards fade to signal aging data, and a banner prompts you to re-score. You can opt out of decay reminders in your privacy settings.
Confidence badges
Every score now shows a confidence tier—Preliminary (self-assessment only), Standard (URL analysis), or High (URL plus integrations)—so you always know how much data is behind your results.
Framework 3: AI-native product assessment
A third framework joins the diagnostic: the AI-Native Product Assessment covers 27 dimensions across six attributes, measuring how deeply AI is embedded in your product and workflows. Together with the Maturity and Operations frameworks, Dacard now scores 54 dimensions. Learn more on the frameworks page.
Solo tier
A new Solo plan ($49/month) bridges the gap between Free and Pro. Solo includes 10 scores per month, 2 products, the operations report, and basic coaching.
Navigation redesign
The app navigation has been simplified to five items: Score, Intelligence, Coach, Progress, and Team. The sidebar defaults to an icon-only rail to give reports more screen space.
Roadmap view
A new roadmap page pulls issues from your connected Linear workspace and groups them by product function (Strategy, Design, Development, Intelligence, Operations, GTM), giving you a function-level view of what your team is building.
Website refresh
The marketing site has been rebuilt with updated messaging around three frameworks and 54 dimensions. New pages include the AI-Native Product Assessment framework deep-dive, three explainer blog posts, and a redesigned blog index.
Bug fixes
- Fixed duplicate scoring when submitting the form rapidly.
- Fixed products page showing a duplicate empty state.
- Fixed metric pill cards not aligning to equal height.
- Fixed report navigation overlay appearing mid-page instead of at the top.
Integration platform expanded to 25 providers
New integrations join the platform: Klue, Figma, Jellyfish, Orb, AWS Cost Explorer, and Vercel. Existing providers (Linear, GitHub, HubSpot, Salesforce, Stripe) now emit more signals than before—Linear alone went from 4 to 16 signal types. All 27 scoring dimensions are now covered by at least one integration source. See the full list on the integrations page.
Smart integration recommendations
The Sources page now recommends which integration to connect next based on the confidence gaps in your current scoring dimensions. If a dimension has low signal coverage, you’ll see a banner suggesting the provider that would improve it most.
Glossary tooltips
Product operations terminology—like dimension, confidence, coherence, enrichment, and stage—now shows inline definitions on hover. Tooltips appear on first occurrence across the score hero, sources, and dashboard pages so you can learn the vocabulary without leaving context.
Contextual DAC chat suggestions
DAC’s chat now shows suggestion chips tailored to your actual scores. Based on your weakest dimension, the chips prompt you to ask about a 90-day improvement plan, how to leverage your strengths, or how you compare to benchmarks.
Personalized score narratives
The score hero subtitle is no longer generic. It now picks from five distinct narrative paths based on your dimensional spread, balance, maturity stage, and gap severity, giving you a more meaningful summary the moment your report loads.
Richer recommendations
Every coaching recommendation now includes estimated timeline, effort level, and suggested owner mapped to the relevant team function across all 27 dimensions. DAC narrative sections also tell cause-and-effect stories showing how dimensions influence each other.
Anonymous score linking
If you scored a product before creating an account, that result is now automatically linked to your profile after sign-up. No manual steps needed.
Consistent color system
Performance colors across the entire app now follow a strict three-tier traffic-light system (green, amber, red). This applies to dashboard scores, framework lens cards, dimension cards, integration source accents, and portfolio heatmaps.
Website messaging refresh
The marketing site now emphasizes continuous product operations intelligence rather than one-time assessments. Headline, value propositions, and tier descriptions have been rewritten to focus on outcomes—what you learn and what you can fix.
Subprocessors page
A new subprocessors page lists all third-party services that process data on behalf of the platform, linked from the footer under Trust & Legal.
Bug fixes
- Fixed a scoring race condition where rapid form submissions could trigger duplicate scores.
- Fixed framework lens card colors that failed to display the correct amber tier.
- Restored the Sources link in the sidebar navigation after it went missing.
Function deep dive pages
Each of the six product functions (Strategy, Design, Development, Intelligence, Operations, GTM) now has its own dedicated page. Click into any function from the maturity report to see dimension-level detail, stage progression timelines, per-dimension backlog items, cross-dimension dependencies, and framework overlay references.
Contextual intelligence sidebar
A new right-side sidebar appears on report, function, and dashboard pages with page-aware context. On reports, you see product info, function health scores, and top priorities. On function pages, a dimension navigator and cross-function links. On the dashboard, quick actions and platform stats. It collapses automatically on smaller screens.
Recommendation feedback and auto-regeneration
You can now give thumbs up or thumbs down on every DAC coaching recommendation across dimension cards, backlog items, and framework assessments. When you submit feedback with additional context, DAC automatically regenerates the recommendation using your input and collective knowledge from your team. Updated recommendations replace the originals in place.
Slack integration
Connect Slack to receive alerts, weekly digest summaries, and agent artifacts directly in your channels. Configure notification channels from the integrations page.
Portfolio view
A new portfolio page gives executives and VCs a decision intelligence view across all portfolio companies. See cross-product maturity heatmaps, comparative scoring, and portfolio-level insights at a glance.
Agent intelligence system
A new Agents section lets you manage production intelligence agents that monitor your products. View agent status, configure alert rules, and access agent-generated artifacts from a single management interface.
Alerts page
A dedicated alerts page lets you view, filter, and manage all intelligence signals. Toggle between new and historical alerts, and provide feedback directly on each signal to improve future detection.
Dashboard redesign
The dashboard is now a single-screen view that fits everything in one viewport. A personalized greeting, role-adaptive quick actions, product pulse cards, coaching briefs, and framework health are all visible at a glance without scrolling. Widgets adapt based on your role: executives see portfolio metrics, leads see team insights, and members see their product scores.
Floating DAC chat
DAC chat has moved from a fixed third column to a floating overlay. Open it with the DAC button or Cmd+K from any page. Your report and function content now uses the full width of the screen.
Score normalization
All scores are now displayed on consistent scales: composite scores show as /100 and dimension scores as /5 across every surface in the app, including the dashboard, reports, sidebar, and backlog.
Product logomarks
Product logos now appear everywhere a product name is shown, including the sidebar, dashboard, reports, and portfolio views. Logos are pulled automatically from your product’s URL.
Enhanced framework assessments
Framework lens cards now show category-level scores with progress bars, structural risk signals, coherence scores, and transformation roadmaps with leverage points. Each of the six overlay reports (POM, DORA, OKRs, North Star, Shape Up, AI Adoption) includes risk analysis, cross-framework connections, and transformation guidance. Backlog items show which framework overlays they improve.
Backlog improvements
The prioritized backlog has been redesigned with function and lifecycle stage filters, real subtasks and success criteria on all items, and items expanded by default so you can scan everything at a glance.
AI-assisted development detection
The GitHub integration now automatically detects AI-assisted development patterns from pull request metadata.
What it detects:
- Co-Authored-By trailers (Claude, Copilot, Cursor, GPT, Gemini, Codeium, Tabnine)
- AI-generated and AI-assisted markers in PR bodies
- Tool mentions (Claude Code, v0, Bolt, Lovable)
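The heuristics above amount to pattern-matching over PR metadata. A sketch with illustrative regexes follows; the adapter's real matching rules may differ.

```python
# Sketch of the three detection heuristics applied to a PR body.
# Patterns are illustrative approximations, not the adapter's actual rules.
import re

AI_COAUTHORS = re.compile(
    r"Co-Authored-By:.*\b(claude|copilot|cursor|gpt|gemini|codeium|tabnine)\b",
    re.IGNORECASE,
)
AI_MARKERS = re.compile(r"\bAI[- ](generated|assisted)\b", re.IGNORECASE)
AI_TOOLS = re.compile(r"\b(claude code|v0|bolt|lovable)\b", re.IGNORECASE)

def is_ai_assisted(pr_body: str) -> bool:
    """True when the PR body matches any of the three signal types."""
    return any(p.search(pr_body) for p in (AI_COAUTHORS, AI_MARKERS, AI_TOOLS))
```

Counting matches over 14- and 30-day windows would then yield the two new signals described below.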
New signals: ai_assisted_pr_count_14d and ai_assisted_pr_count_30d, mapping to the Spec & Context Engineering, Build vs Buy, and Delivery Velocity dimensions.
New Intelligence card: an AI Adoption card on the Intelligence dashboard showing detection status and signal source.
No configuration required. Connect GitHub and signals are extracted automatically on every sync.
Reverse free trial
New free users now get full Pro features for 14 days from signup: all five reports, advanced coaching, suite intelligence, and 100 scores per month. After 14 days, accounts revert to the free tier. A trial banner shows the countdown across every page.
Pricing page overhaul
Tiers renamed (Starter to Solo, Business to Team). Annual billing defaults on at 20% discount. Feature checklists replaced with outcome-oriented value statements. “Most popular” badge moved to Pro. Enterprise CTA changed to Cal.com booking.
Navigation restructure
Site header simplified from 7 items to 5 (Platform, Solutions, Learn, Developers, CTA). Footer restructured from 7 columns to 5 for SEO/AEO. App sidebar now hides locked features entirely instead of showing them dimmed. Report tabs renamed to questions (“How mature are we?”, “What should we fix?”).
UX overhaul
Maturity report restructured into 3 zones: #1 priority action above the fold, top findings, and collapsible full details. Dimension cards collapsed by default and sorted weakest-first. Score reveal enriched with top gap insight and manual “View report” button. DAC chat proactively coaches on report pages. Narrative bridges connect reports as chapters.
Documentation component upgrade
The knowledge base has been redesigned with Mintlify components throughout for better UX, clarity, and findability.
What changed:
- Frameworks page now uses tabbed navigation to switch between the three frameworks (Maturity, Operations, Lifecycle)
- Scoring, getting started, and integration flows are now step-by-step guides using the Steps component
- Role & permissions matrix is organized into tabs by category (Scoring, Products, Reports, Account, Platform Admin)
- Plans are now summarized as visual cards before the full feature comparison table
- DAC knowledge domains are grouped into expandable sections by area
DAC Copilot expanded with deep knowledge base
DAC (Dacard Agentic Coach) has been significantly upgraded from a scoring assistant to a full product operations copilot.
New capabilities:
- Deep knowledge of product operations best practices across discovery, growth, DevOps, and AI economics
- Context-aware starter prompts that adjust based on which page and report you’re viewing
- Expert citations on coaching responses with thumbs up/down feedback on each citation card
- Resizable panel by dragging the left edge (280px to 600px)
POM Framework overlay
A new overlay report maps your scoring results to the Product Operating Model (POM).
What you get:
- 20 POM principles scored against your 27 dimension results
- Gap analysis comparing your current practices to empowered-team standards
- Transformation Tracker: a visual roadmap journey showing progress over time
- Standalone POM assessment (score your team’s operating model independently)
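The principle-vs-dimension gap analysis above can be pictured as averaging each principle's mapped dimension scores against an empowered-team target. The mapping, principle names, and the 3.0 target below are assumptions for illustration, not the product's real configuration.

```python
# Hypothetical sketch of the POM gap analysis: average the dimension scores
# mapped to each principle, then report the shortfall against a target bar.

def principle_gaps(dimension_scores: dict[str, float],
                   principle_map: dict[str, list[str]],
                   target: float = 3.0) -> dict[str, float]:
    """Return per-principle shortfall vs. target (0 when at or above it)."""
    gaps = {}
    for principle, dims in principle_map.items():
        mapped = [dimension_scores[d] for d in dims if d in dimension_scores]
        if not mapped:
            continue  # skip principles with no scored dimensions
        score = sum(mapped) / len(mapped)
        gaps[principle] = max(0.0, target - score)
    return gaps
```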
GitHub and Linear integrations (Phase 1 live)
Connect your tools to pull real operational signals into scoring.
GitHub signals: PR velocity, review cycles, deployment frequency, code review patterns
Linear signals: issue cycle time, backlog health, sprint velocity, project completion rates
Both integrate via OAuth. Daily automatic sync, with manual sync available on demand. Signals are normalized into Dacard’s universal metric format and mapped to scoring dimensions (Iteration Speed, Feedback Loop, Build vs Buy, and more).
Collapsible sidebar and fixed layout
The app layout was reorganized for focus:
- Sidebar collapses to an icon-only rail mode
- Only the middle content column scrolls; sidebar and DAC panel stay fixed
- DAC panel is resizable via a drag handle on the left edge
Stage-based color system
Visual consistency across all reports now uses a stage-dependent palette:

| Stage | Color |
|---|---|
| Foundation | Red |
| Building | Amber |
| Scaling | Yellow-green |
| Leading | Green |
| Compounding | Bright green |
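Expressed as a lookup, e.g. for theming report chips (the fallback color for unknown stages is an assumption):

```python
# The stage palette from the table above as a simple lookup.
STAGE_COLORS = {
    "Foundation": "red",
    "Building": "amber",
    "Scaling": "yellow-green",
    "Leading": "green",
    "Compounding": "bright green",
}

def stage_color(stage: str) -> str:
    """Return the palette color for a stage; gray fallback is assumed."""
    return STAGE_COLORS.get(stage, "gray")
```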
Scoring scale and display improvements
Scoring methodology was clarified and display updated across all reports:
- Dimension scores use a 1-4 scale (Foundation, Building, Scaling, Compounding) with normalized display
- All score displays standardized to a consistent format across dashboard, reports, and portfolio views
- Homepage hero redesigned with wifi-style signal bars, fixed R-A-A-G-G color progression, and segmented gauge fill
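One plausible linear normalization for displaying the 1-4 raw scale, assuming it maps onto a 0-100 range; the actual mapping is not documented here.

```python
# Illustrative linear rescale of a raw dimension score for display.
# The 1-4 input range comes from the changelog; the 0-100 output is assumed.

def normalize(raw: float, lo: float = 1.0, hi: float = 4.0, out_max: float = 100.0) -> float:
    """Clamp a raw score to [lo, hi], then rescale to [0, out_max]."""
    raw = min(max(raw, lo), hi)
    return (raw - lo) / (hi - lo) * out_max
```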
Pricing page redesign
- Feature comparison table now includes category headers with sparkle indicators on AI-driven features
- Cross-function insight cards replace the previous assessment graphic
- Insight card logos standardized using Simple Icons SVGs
- Responsive breakpoints added for all screen sizes