Agent Studio
Agent Studio is Dacard.ai’s interface for managing autonomous AI agents. Agents run on schedules or event triggers, analyze your product data, and produce structured artifacts or dispatch real actions to connected tools (Linear, Slack, the re-score queue). Agent Studio is available on Pro plans and above.

What agents do
Agents are purpose-built AI workflows that monitor specific aspects of your product maturity:

Strategic Intelligence
Analyzes scoring trends, identifies cross-product patterns, and generates strategic recommendations for your portfolio.
Anomaly Detection
Monitors score changes and integration signals for unusual patterns, alerting you to regressions or unexpected improvements.
Competitive Monitor
Tracks competitor product changes and scoring shifts, surfacing competitive intelligence in your dashboard.
Coaching Digest
Generates periodic coaching summaries based on your scores, highlighting the highest-impact actions for your current stage.
Lifecycle Tracker
Monitors your AI-Native Lifecycle stage and flags when team behavior diverges from expected patterns for your current stage.
Outcome Tracker
Closes the feedback loop by linking agent-dispatched actions to measurable score improvements over time.
Getting started
Navigate to Agents
Open Agents from the main navigation. Your account will be provisioned with default agent definitions on first visit.
Review default agents
Each agent shows its type, description, current status (active or paused), and last run time. Default agents start at Notify autonomy.
Autonomy levels
Every agent has a configurable autonomy level that controls how much it can do without human approval:

| Level | Behavior | Best for |
|---|---|---|
| Notify | Surfaces findings in the dashboard only. No automated actions. | New agents you are still evaluating |
| Suggest | Creates draft actions that go into your approval queue for review before execution. | Agents you trust but want to oversee |
| Auto | Executes configured actions immediately when conditions are met. | High-confidence, well-validated agents |
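The three levels above can be treated as an ordered enum when configuring agents. The following sketch is illustrative only: the `autonomy` field name and config shape are assumptions, not the documented Dacard.ai schema.

```python
# Hypothetical agent-config helper. The "autonomy" key and the config dict
# shape are assumptions for illustration, not the documented API.
VALID_AUTONOMY = {"notify", "suggest", "auto"}

def set_autonomy(agent_config: dict, level: str) -> dict:
    """Return a copy of an agent config with the autonomy level updated."""
    if level not in VALID_AUTONOMY:
        raise ValueError(f"unknown autonomy level: {level!r}")
    return {**agent_config, "autonomy": level}

agent = {"name": "Anomaly Detection", "autonomy": "notify"}
updated = set_autonomy(agent, "suggest")  # promote after evaluation
```

Starting new agents at `notify` and promoting them one level at a time mirrors the "best for" column above.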
Approval queue (Suggest mode)
When an agent is set to Suggest, its actions are queued for human review instead of executing immediately. Each pending action shows:

- The agent that produced it and the run that triggered it
- The proposed action (Linear issue title, description, priority)
- An Approve or Reject button with an optional reason field
Review before dispatch
Agents with Review required enabled hold their artifacts (strategic briefs, anomaly reports) for human review before dispatching to Slack or email. Enable this under Settings > Review on any agent to ensure a human sees every insight before it reaches the team. Set the autonomy level from the agent configuration panel under Settings > Autonomy.

Configuring triggers
Each agent can be triggered in multiple ways:

| Trigger type | Description |
|---|---|
| Schedule | Runs on a recurring schedule (daily, weekly, or custom cron expression) |
| Event | Fires when a specific condition is met (score threshold, integration sync, delta breach) |
| Manual | Run on demand from the Agent Studio UI |
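A custom cron expression for a Schedule trigger uses the standard five fields (minute, hour, day-of-month, month, day-of-week). A minimal sketch of validating one before saving, assuming a simple `{"type", "cron"}` trigger shape that is not the documented schema:

```python
# Illustrative only: checks that a custom cron expression has the standard
# five fields. The trigger dict shape is an assumption, not Dacard.ai's API.
def make_schedule_trigger(cron: str) -> dict:
    fields = cron.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(fields)}: {cron!r}")
    return {"type": "schedule", "cron": cron}

weekly = make_schedule_trigger("0 9 * * 1")  # every Monday at 09:00
```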
Trigger conditions
Event-based triggers support conditions that refine when they fire:

- Score threshold: triggers when a dimension score drops below a specified value
- Score delta: triggers when a composite score changes by more than N points
- Sync completed: triggers after a specific integration syncs new data
- Product added: triggers when a new product is created in the account
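The four conditions above can be thought of as predicates over incoming events. This evaluator is a sketch under assumed field names (`kind`, `below`, `points`, and so on); it mirrors the list, not Dacard.ai's actual condition schema.

```python
# Hypothetical condition evaluator mirroring the four event conditions.
# All keys here are illustrative assumptions, not the documented schema.
def should_fire(condition: dict, event: dict) -> bool:
    kind = condition["kind"]
    if kind == "score_threshold":
        return event["dimension_score"] < condition["below"]
    if kind == "score_delta":
        return abs(event["composite_delta"]) > condition["points"]
    if kind == "sync_completed":
        return event.get("integration") == condition["integration"]
    if kind == "product_added":
        return event.get("type") == "product_added"
    raise ValueError(f"unknown condition kind: {kind!r}")

should_fire({"kind": "score_threshold", "below": 3.0},
            {"dimension_score": 2.5})  # fires: 2.5 < 3.0
```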
Actions
Agents set to Suggest or Auto autonomy can dispatch real actions when triggered:

| Action type | What it does |
|---|---|
| Create Linear issue | Opens a Linear issue with the finding, suggested owner, and priority |
| Send Slack message | Posts a structured summary to a configured channel |
| Trigger re-score | Queues a fresh score of the affected product |
| Queue coaching recommendation | Surfaces a “Do This Next” item in the next DAC session |
| Refresh documentation | Creates a Linear ticket flagging documentation that has drifted from implementation. Auto-generated by the Spec Quality agent when a dimension scores 2 or below. |
Compound flywheel
The compound flywheel tracks whether agent actions are producing measurable score improvements over time. For each dispatched action, it records:

| Field | Description |
|---|---|
| Linked outcome | Which dimension the action targeted |
| Before score | Dimension score at the time the action was dispatched |
| After score | Dimension score on the next re-score |
| Loop closed | Whether the improvement has been verified |
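From the fields above, a loop plausibly closes when the re-scored dimension improved and that improvement has been verified. The rule below is an assumption inferred from the table, not a documented formula:

```python
# Sketch: derive the "loop closed" flag from the flywheel fields above.
# The rule (verified AND after > before) is an assumption from the table.
def loop_closed(before_score: float, after_score: float, verified: bool) -> bool:
    return verified and after_score > before_score

loop_closed(before_score=2.0, after_score=3.5, verified=True)   # closed
loop_closed(before_score=2.0, after_score=2.0, verified=True)   # not closed
```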
Learning cycles
The flywheel counts completed learning cycles (Score, Connect, Correlate, Act, Learn). Each complete revolution makes the next revolution more accurate. The dashboard shows your cycle count, accuracy trend over time, and intelligence depth percentage.

Outcome attribution
When an action closes its loop (score improves), you can annotate why the score changed:

| Attribution | Meaning |
|---|---|
| Recommendation followed | The team implemented the recommendation as suggested |
| External factor | Score changed due to hiring, reorg, or market shift |
| Coincidence | No causal relationship between the action and outcome |
| Partial | The recommendation was one of several contributing factors |
Product vitals
Product vitals surfaces real-time operational health signals for your active products. The vitals panel appears on the Intelligence dashboard and updates on every agent sync:

| Metric | What it measures |
|---|---|
| Agent runs | Total agent executions in the current period |
| Insights generated | Distinct findings surfaced across all agent runs |
| Actions dispatched | Actions sent to Linear, Slack, re-score queue, etc. |
| Loops closed | Actions verified to have improved a score |
| Score velocity | Composite score change per week over the last 30 days |
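Score velocity is described as "composite score change per week over the last 30 days". One plausible reading of that definition, sketched as an assumption:

```python
# Assumed interpretation of the score-velocity metric: total composite
# change over the 30-day window, normalized to points per week.
def score_velocity(score_30_days_ago: float, score_now: float) -> float:
    weeks = 30 / 7
    return (score_now - score_30_days_ago) / weeks

round(score_velocity(62.0, 70.0), 2)  # 1.87 points per week
```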
LLM-powered narratives
Strategic Intelligence and Anomaly Detection agents use Claude to generate human-quality narratives grounded in your scoring data. The agent engine runs a structured analysis first (identifying patterns, anomalies, and priorities), then enriches the output with an LLM call that produces readable strategic briefs with cited evidence. Other agents (Voice of Customer, Strategy Brief, Spec Quality, Code Quality) run as pure heuristic analyzers by default. You can enable LLM narratives for any agent under Settings > Model by selecting a model.

Browsing artifacts
Every agent run produces one or more artifacts, structured outputs stored in your account:

- Reports: Markdown-formatted analysis with data and recommendations
- Alerts: short notifications about anomalies or threshold breaches
- Recommendations: specific “Do This Next” actions ranked by impact
Performance monitoring
| Metric | Description |
|---|---|
| Run history | Timeline of all agent executions with status |
| Success rate | Percentage of runs that completed without errors |
| Average duration | Typical run time for planning capacity |
| Artifacts generated | Count of outputs produced per run |
| Token usage | LLM tokens consumed per run for cost tracking |
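The success-rate metric above is straightforward to compute from a run history. The run-record shape (`status` field, `"completed"` value) is assumed for illustration, not the API's actual response format:

```python
# Illustrative computation of the success-rate metric from run history.
# The {"status": ...} record shape is an assumption, not the real API format.
def success_rate(runs: list[dict]) -> float:
    """Percentage of runs that completed without errors."""
    if not runs:
        return 0.0
    ok = sum(1 for r in runs if r["status"] == "completed")
    return 100.0 * ok / len(runs)

success_rate([{"status": "completed"}, {"status": "completed"},
              {"status": "failed"}])  # about 66.7
```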
Troubleshooting failed runs
- Check the run detail page for error messages
- Verify that connected integrations are still authorized
- Confirm the product being analyzed still exists
- Review your credit balance (agent runs consume credits)
Managing agents
| Action | Description |
|---|---|
| Pause | Stop scheduled runs without losing configuration |
| Resume | Re-enable a paused agent |
| Edit | Change the agent’s name, description, or configuration |
| Delete | Permanently remove the agent and its trigger configurations (artifacts are retained) |
Next steps
Connect integrations
Agents work best with rich signal data from connected tools.
DAC Copilot
Reference agent artifacts in coaching conversations for deeper analysis.
Leader: using agents
A guide to configuring agents as a team lead or VP.
API reference
Manage agents programmatically via the REST API.
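As a hedged illustration of programmatic management, the sketch below only builds a request description for pausing an agent; the endpoint path, version prefix, and header shape are placeholders, not the documented REST API (consult the actual API reference for real routes).

```python
import json

# Hypothetical request shape for pausing an agent over REST. The URL,
# headers, and body here are illustrative placeholders, not documented
# Dacard.ai endpoints. No network call is made.
def build_pause_request(agent_id: str, api_key: str) -> dict:
    return {
        "method": "POST",
        "url": f"https://api.dacard.ai/v1/agents/{agent_id}/pause",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({}),
    }

req = build_pause_request("agent_123", "sk-example")
```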