
Agent Studio

Agent Studio is Dacard.ai’s interface for managing autonomous AI agents. Agents run on schedules or event triggers, analyze your product data, and produce structured artifacts or dispatch real actions to connected tools (Linear, Slack, re-score queue). Agent Studio is available on Pro plans and above.

What agents do

Agents are purpose-built AI workflows that monitor specific aspects of your product maturity:

Strategic Intelligence

Analyzes scoring trends, identifies cross-product patterns, and generates strategic recommendations for your portfolio.

Anomaly Detection

Monitors score changes and integration signals for unusual patterns, alerting you to regressions or unexpected improvements.

Competitive Monitor

Tracks competitor product changes and scoring shifts, surfacing competitive intelligence in your dashboard.

Coaching Digest

Generates periodic coaching summaries based on your scores, highlighting the highest-impact actions for your current stage.

Lifecycle Tracker

Monitors your AI-Native Lifecycle stage and flags when team behavior diverges from expected patterns for your current stage.

Outcome Tracker

Closes the feedback loop by linking agent-dispatched actions to measurable score improvements over time.

Getting started

1. Navigate to Agents: open Agents from the main navigation. Your account will be provisioned with default agent definitions on first visit.
2. Review default agents: each agent shows its type, description, current status (active or paused), and last run time. Default agents start at Notify autonomy.
3. Activate an agent: toggle an agent from Paused to Active to start its scheduled runs.
4. Promote autonomy when ready: after validating an agent's output quality over several runs, promote it to Suggest or Auto to enable action dispatch.

Autonomy levels

Every agent has a configurable autonomy level that controls how much it can do without human approval:
| Level | Behavior | Best for |
| --- | --- | --- |
| Notify | Surfaces findings in the dashboard only. No automated actions. | New agents you are still evaluating |
| Suggest | Creates draft actions that go into your approval queue for review before execution. | Agents you trust but want to oversee |
| Auto | Executes configured actions immediately when conditions are met. | High-confidence, well-validated agents |
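The three levels act as a gate between an agent's finding and any real-world action. A minimal sketch of that routing, where the function and enum names are illustrative, not Dacard.ai's actual implementation:

```python
from enum import Enum

class Autonomy(Enum):
    NOTIFY = "notify"
    SUGGEST = "suggest"
    AUTO = "auto"

def route_action(level: Autonomy, action: dict) -> str:
    """Decide what happens to an action an agent wants to take.

    Returns where the action ends up: the dashboard, the approval
    queue, or immediate execution.
    """
    if level is Autonomy.NOTIFY:
        return "dashboard_only"   # surface the finding, take no action
    if level is Autonomy.SUGGEST:
        return "approval_queue"   # draft waits for human approve/reject
    return "execute_now"          # Auto: dispatch immediately

# A newly created agent starts at Notify:
print(route_action(Autonomy.NOTIFY, {"type": "create_linear_issue"}))
# → dashboard_only
```

Promoting an agent changes only this gate; the analysis it runs is identical at every level.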

Approval queue (Suggest mode)

When an agent is set to Suggest, its actions are queued for human review instead of executing immediately. Each pending action shows:
  • The agent that produced it and the run that triggered it
  • The proposed action (Linear issue title, description, priority)
  • An Approve or Reject button with an optional reason field
Approved actions execute immediately. Rejected actions are recorded in the decision history so the system learns from your judgment over time.
New agents default to Notify. Only promote to Auto after validating output quality over several runs.

Review before dispatch

Agents with Review required enabled hold their artifacts (strategic briefs, anomaly reports) for human review before dispatching to Slack or email. Enable this under Settings > Review on any agent to ensure a human sees every insight before it reaches the team. Set the autonomy level from the agent configuration panel under Settings > Autonomy.

Configuring triggers

Each agent can be triggered in multiple ways:
| Trigger type | Description |
| --- | --- |
| Schedule | Runs on a recurring schedule (daily, weekly, or custom cron expression) |
| Event | Fires when a specific condition is met (score threshold, integration sync, delta breach) |
| Manual | Run on demand from the Agent Studio UI |

Trigger conditions

Event-based triggers support conditions that refine when they fire:
  • Score threshold: fires when a dimension score drops below a specified value
  • Score delta: fires when a composite score changes by more than N points
  • Sync completed: fires after a specific integration syncs new data
  • Product added: fires when a new product is created in the account
Each trigger has a configurable cooldown (minimum time between firings) to prevent noise.
Start with the default weekly schedule. Once you are comfortable with the agent’s output, add event-based triggers for real-time alerts.
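The cooldown behavior can be sketched as follows; the function signature and datetime handling here are illustrative assumptions, not the actual trigger engine:

```python
from datetime import datetime, timedelta
from typing import Optional

def should_fire(condition_met: bool,
                last_fired: Optional[datetime],
                cooldown: timedelta,
                now: datetime) -> bool:
    """Fire an event trigger only when its condition holds AND the
    cooldown window since the last firing has elapsed."""
    if not condition_met:
        return False
    if last_fired is not None and now - last_fired < cooldown:
        return False  # still cooling down: suppress noisy re-fires
    return True

# A score-delta breach recurring 10 minutes after the last alert,
# with a 1-hour cooldown, is suppressed:
now = datetime(2025, 1, 6, 10, 0)
print(should_fire(True, now - timedelta(minutes=10),
                  timedelta(hours=1), now))
# → False
```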

Actions

Agents set to Suggest or Auto autonomy can dispatch real actions when triggered:
| Action type | What it does |
| --- | --- |
| Create Linear issue | Opens a Linear issue with the finding, suggested owner, and priority |
| Send Slack message | Posts a structured summary to a configured channel |
| Trigger re-score | Queues a fresh score of the affected product |
| Queue coaching recommendation | Surfaces a "Do This Next" item in the next DAC session |
| Refresh documentation | Creates a Linear ticket flagging documentation that has drifted from implementation. Auto-generated by the Spec Quality agent when dimensions score 2 or below. |
All dispatched actions are logged in the agent run history with a full audit trail. In Suggest mode, actions appear as drafts in your approval queue before being sent.
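Conceptually, each dispatched action is a small audit record whose status depends on the agent's autonomy mode. A sketch with hypothetical field names (the real payload shape is defined by the platform, not shown here):

```python
def build_action(agent_id: str, run_id: str, action_type: str,
                 payload: dict, mode: str) -> dict:
    """Assemble an action record. In Suggest mode it enters the
    approval queue as a draft; in Auto mode it is dispatched."""
    return {
        "agent_id": agent_id,
        "run_id": run_id,            # run that triggered the action
        "type": action_type,         # e.g. "create_linear_issue"
        "payload": payload,
        "status": "draft" if mode == "suggest" else "dispatched",
    }

action = build_action("agent_123", "run_456", "create_linear_issue",
                      {"title": "Spec drift in checkout flow",
                       "priority": "high"},
                      mode="suggest")
print(action["status"])  # → draft
```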

Compound flywheel

The compound flywheel tracks whether agent actions are producing measurable score improvements over time. For each dispatched action:
| Field | Description |
| --- | --- |
| Linked outcome | Which dimension the action targeted |
| Before score | Dimension score at the time the action was dispatched |
| After score | Dimension score on the next re-score |
| Loop closed | Whether the improvement has been verified |
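Closing a loop amounts to comparing the targeted dimension's score at dispatch time with its value on the next re-score. A minimal illustration (the exact verification rule is an assumption):

```python
def loop_closed(before: float, after: float) -> bool:
    """A loop counts as closed when the targeted dimension's score
    improved between dispatch and the next re-score."""
    return after > before

# An action dispatched at a dimension score of 2.1 whose next
# re-score comes back at 2.8 has closed its loop:
print(loop_closed(2.1, 2.8))  # → True
```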

Learning cycles

The flywheel counts completed learning cycles (Score, Connect, Correlate, Act, Learn). Each complete revolution makes the next revolution more accurate. The dashboard shows your cycle count, accuracy trend over time, and intelligence depth percentage.

Outcome attribution

When an action closes its loop (score improves), you can annotate why the score changed:
| Attribution | Meaning |
| --- | --- |
| Recommendation followed | The team implemented the recommendation as suggested |
| External factor | Score changed due to hiring, reorg, or market shift |
| Coincidence | No causal relationship between the action and outcome |
| Partial | The recommendation was one of several contributing factors |
These annotations feed back into the intelligence engine so future recommendations become more accurate. Access the flywheel from the Agent Studio dashboard or from any agent’s detail page.

Product vitals

Product vitals surfaces real-time operational health signals for your active products. The vitals panel appears on the Intelligence dashboard and updates on every agent sync:
| Metric | What it measures |
| --- | --- |
| Agent runs | Total agent executions in the current period |
| Insights generated | Distinct findings surfaced across all agent runs |
| Actions dispatched | Actions sent to Linear, Slack, the re-score queue, etc. |
| Loops closed | Actions verified to have improved a score |
| Score velocity | Composite score change per week over the last 30 days |
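Score velocity, for example, is a simple rate: the composite score change over the lookback window, normalized to a per-week figure. A sketch, with the normalization inferred from the description above:

```python
def score_velocity(start_score: float, end_score: float,
                   window_days: int = 30) -> float:
    """Composite score change per week over the lookback window."""
    weeks = window_days / 7
    return (end_score - start_score) / weeks

# A product moving from 62.0 to 68.0 over 30 days gains
# 6 points in ~4.29 weeks, i.e. 1.4 points/week:
print(round(score_velocity(62.0, 68.0), 2))  # → 1.4
```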

LLM-powered narratives

Strategic Intelligence and Anomaly Detection agents use Claude to generate human-quality narratives grounded in your scoring data. The agent engine runs a structured analysis first (identifying patterns, anomalies, and priorities), then enriches the output with an LLM call that produces readable strategic briefs with cited evidence. Other agents (Voice of Customer, Strategy Brief, Spec Quality, Code Quality) run as pure heuristic analyzers by default. You can enable LLM narratives for any agent by selecting a model under Settings > Model.
LLM-powered runs consume more credits than heuristic runs but produce significantly richer narratives. Start with Haiku for cost efficiency.
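The two-stage shape described above (structured analysis first, then optional LLM enrichment) can be sketched like this. The heuristics and function names are illustrative assumptions; only the heuristic stage runs here, and the enrichment call is left as a stub:

```python
def analyze(scores: dict) -> dict:
    """Stage 1: pure heuristic pass over per-dimension score history,
    flagging dimensions that dropped on the most recent re-score."""
    anomalies = [dim for dim, hist in scores.items()
                 if len(hist) >= 2 and hist[-1] < hist[-2]]
    return {"anomalies": anomalies, "dimensions": len(scores)}

def narrate(findings: dict, use_llm: bool = False) -> str:
    """Stage 2: turn findings into a readable brief. With use_llm
    enabled, a real agent would call the configured model here;
    this stub just renders a plain-text summary."""
    if findings["anomalies"]:
        return "Regressions detected in: " + ", ".join(findings["anomalies"])
    return "No regressions across monitored dimensions."

findings = analyze({"spec_quality": [3.0, 2.4],
                    "code_quality": [3.1, 3.3]})
print(narrate(findings))  # → Regressions detected in: spec_quality
```

The heuristic stage is cheap and deterministic; the LLM call is what consumes the extra credits mentioned above.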

Browsing artifacts

Every agent run produces one or more artifacts, which are structured outputs stored in your account:
  • Reports: Markdown-formatted analysis with data and recommendations
  • Alerts: short notifications about anomalies or threshold breaches
  • Recommendations: specific "Do This Next" actions ranked by impact
Navigate to the agent detail page and select the Runs tab. Click any artifact to view its full content. Artifacts are stored permanently and can be referenced in DAC coaching conversations.

Performance monitoring

| Metric | Description |
| --- | --- |
| Run history | Timeline of all agent executions with status |
| Success rate | Percentage of runs that completed without errors |
| Average duration | Typical run time for planning capacity |
| Artifacts generated | Count of outputs produced per run |
| Token usage | LLM tokens consumed per run for cost tracking |

Troubleshooting failed runs

  1. Check the run detail page for error messages
  2. Verify that connected integrations are still authorized
  3. Confirm the product being analyzed still exists
  4. Review your credit balance (agent runs consume credits)
Agent runs consume credits. A typical strategic intelligence run uses 1-3 credits. Monitor usage under Settings > Limits.

Managing agents

| Action | Description |
| --- | --- |
| Pause | Stop scheduled runs without losing configuration |
| Resume | Re-enable a paused agent |
| Edit | Change the agent's name, description, or configuration |
| Delete | Permanently remove the agent and its trigger configurations (artifacts are retained) |

Next steps

Connect integrations

Agents work best with rich signal data from connected tools.

DAC Copilot

Reference agent artifacts in coaching conversations for deeper analysis.

Leader: using agents

A guide to configuring agents as a team lead or VP.

API reference

Manage agents programmatically via the REST API.