Insights

The Insights view shows the results of a completed project run. What it displays depends on the type of tests that were generated:

  • UI Tests — shows discovered workflows, execution steps, agent reasoning, and generated artifacts.
  • Unit Tests — shows test generation metrics: compile status, runtime results, and a file/method tree.

Access it by selecting a project run from the project list.


Workflows List

The left panel lists all workflows discovered in the run, with a summary bar at the top showing Total, Passed, and Pending counts.

[Screenshot: Workflows list showing Total, Passed and Pending counts with workflow cards and the WF001 Summary panel open]

Each workflow card displays:

  • Priority badge: HIGH or MEDIUM — indicates the business criticality of the workflow
  • Workflow name: Human-readable name discovered or assigned by the agent
  • Status badge: PASS (green) or PENDING (orange) — outcome of the most recent run
  • Step count: Number of steps executed in the workflow
  • Workflow ID: Unique identifier (e.g. WF001)
  • Discovery method: How the workflow was found (e.g. roost_discovered)
  • Goal: Short description of what the workflow validates

Click any workflow card to open the detail panel on the right.
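The card fields and the summary bar are related: the Total, Passed, and Pending counts are simply tallies over the workflow cards. A minimal sketch of that relationship — the interface and function names below are hypothetical, not Roost's actual API:

```typescript
// Hypothetical model of a workflow card; field names mirror the card
// fields described above.
interface WorkflowCard {
  id: string;                 // e.g. "WF001"
  name: string;
  priority: "HIGH" | "MEDIUM";
  status: "PASS" | "PENDING";
  stepCount: number;
  discoveryMethod: string;    // e.g. "roost_discovered"
  goal: string;
}

// Derive the summary-bar counts from the list of cards.
function summarize(cards: WorkflowCard[]) {
  return {
    total: cards.length,
    passed: cards.filter((c) => c.status === "PASS").length,
    pending: cards.filter((c) => c.status === "PENDING").length,
  };
}
```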


Workflow Detail Panel

The detail panel has four tabs: Summary, Test Steps, Agent Insights, and Artifacts.

Summary

The Summary tab gives a high-level description of the workflow.

  • Goal: The objective the agent was asked to verify
  • Feature Area: The product or application area the workflow belongs to
  • Steps Description: A plain-language walkthrough of the steps the agent took

Test Steps

The Test Steps tab shows the exact actions the agent performed, in order.

[Screenshot: Test Steps tab for WF001 showing 8 numbered steps with action types and descriptions]

Each step entry shows the action type followed by a brief description:

  • navigate_to: The agent loaded a URL
  • click: The agent clicked a UI element
  • select_option: The agent selected a value from a dropdown
  • fill: The agent typed text into an input field
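These action-type names correspond naturally to Playwright page methods (`goto`, `click`, `selectOption`, `fill`). The mapping below is an illustrative assumption, not Roost's internal representation — it sketches how a recorded step could be rendered as the equivalent Playwright call:

```typescript
// Hypothetical step records, one variant per action type from the table above.
type Step =
  | { action: "navigate_to"; url: string }
  | { action: "click"; selector: string }
  | { action: "select_option"; selector: string; value: string }
  | { action: "fill"; selector: string; text: string };

// Render a step as the equivalent Playwright call (as source text).
function toPlaywrightCall(step: Step): string {
  switch (step.action) {
    case "navigate_to":
      return `await page.goto(${JSON.stringify(step.url)});`;
    case "click":
      return `await page.click(${JSON.stringify(step.selector)});`;
    case "select_option":
      return `await page.selectOption(${JSON.stringify(step.selector)}, ${JSON.stringify(step.value)});`;
    case "fill":
      return `await page.fill(${JSON.stringify(step.selector)}, ${JSON.stringify(step.text)});`;
  }
}
```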

Agent Insights

The Agent Insights tab shows the agent's own assessment of the run — what it intended to do, what it observed, and how it handled any issues.

[Screenshot: Agent Insights tab showing Agent Summary paragraph, green Outcome box, and red Errors Encountered box]

  • Agent Summary: A narrative of the agent's actions and decisions during the run
  • Outcome: Green box — the final result if the workflow completed successfully
  • Errors Encountered: Red box — any failures or unexpected conditions the agent hit, including how it recovered
Note: An error in the Errors Encountered section does not always mean the workflow failed. The agent may have self-corrected and still completed the workflow, as shown by a PASS status on the card.

Artifacts

The Artifacts tab lists the files generated from the workflow run.

[Screenshot: Artifacts tab showing Status badge, Scenario Summary File, Report PDF, and POM Test File with PASS badge]

  • Status: Generated (green) confirms that artifact files were successfully produced
  • Scenario Summary File: A .md file summarising the workflow scenario and outcomes
  • Report PDF: A step-by-step PDF capturing each action the agent took to complete the workflow, with screenshots
  • POM Test File: A Playwright Page Object Model spec file ready to run in CI — the badge next to it reflects its own test result (PASS / FAIL)

The file paths shown are relative to the generated test output directory for the project.
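For readers unfamiliar with the pattern, a Page Object Model spec wraps page interactions in a class so tests read at the level of user intent. The sketch below illustrates the pattern only — the class name, selectors, and URL are hypothetical, and `PageLike` is a stand-in for Playwright's `Page` type so the example is self-contained:

```typescript
// Minimal stand-in for the subset of Playwright's Page used here.
interface PageLike {
  goto(url: string): Promise<void>;
  fill(selector: string, text: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// Hypothetical page object: interactions with the login page are grouped
// behind intent-level methods instead of raw selector calls in the test.
class LoginPage {
  constructor(private page: PageLike) {}

  async open() {
    await this.page.goto("https://example.com/login");
  }

  async login(user: string, password: string) {
    await this.page.fill("#username", user);
    await this.page.fill("#password", password);
    await this.page.click("#submit");
  }
}
```

In a real generated spec, `PageLike` would be Playwright's `Page` and the class would be driven from a `test()` block; keeping selectors in one place is what makes the file maintainable as the UI changes.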


Unit Test Insights

When the project run is for unit tests, the Insights view shows a metrics dashboard instead of the workflow list. It summarises how many tests were generated, whether they compiled, and how they performed at runtime.

[Screenshot: Unit test Insights view showing Generated Tests and Skipped metric cards, donut charts for SUCCESS/ERRORS/SKIPPED percentages, and the file/method tree on the right]

Metric Cards

The top row shows two summary tiles:

  • Generated Tests: Total number of test cases Roost generated for the project
  • Skipped: Number of methods that were skipped during generation

The Generated Tests tile breaks down results further:

  • Compile Success: Tests that compiled without errors
  • Compilation Errors: Tests that failed to compile
  • Run Time Success: Tests that compiled and passed at runtime
  • Run Time Issues: Tests that compiled but encountered failures or errors at runtime

Donut Charts

The bottom row shows three percentage rings for a quick visual summary:

  • SUCCESS: Percentage of generated tests that compiled and passed
  • ERRORS: Percentage of generated tests that had compilation or runtime errors
  • SKIPPED: Percentage of methods that were skipped
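The three rings are straightforward ratios over the metric-card counts. The sketch below shows one plausible derivation; whether Roost includes skipped methods in the denominator, and how it rounds, are assumptions made for illustration:

```typescript
// Hypothetical counts matching the metric-card breakdown above.
interface UnitTestCounts {
  runtimeSuccess: number;  // compiled and passed
  compileErrors: number;   // failed to compile
  runtimeIssues: number;   // compiled but failed at runtime
  skipped: number;         // methods skipped during generation
}

// Derive the three donut-chart percentages (assumed rounding and
// denominator; not Roost's documented formula).
function ringPercentages(c: UnitTestCounts) {
  const total =
    c.runtimeSuccess + c.compileErrors + c.runtimeIssues + c.skipped;
  const pct = (n: number) => Math.round((n / total) * 100);
  return {
    success: pct(c.runtimeSuccess),
    errors: pct(c.compileErrors + c.runtimeIssues),
    skipped: pct(c.skipped),
  };
}
```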

File and Method Tree

The right panel shows a collapsible tree of source files, methods, and the generated test files:

  • Source file (e.g. calc/calc.go) — top-level node
    • Method (e.g. Add, Divide) — each discovered method
      • Test file (e.g. calc/calc_test.go) — the generated test file for that method

Use the checkboxes in the tree to select specific files or methods. Use the GitHub Dev button in the top-right to open the generated tests directly in GitHub's web editor.
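The tree above can be modeled as nested nodes: source files hold methods, and each method points at its generated test file. The shapes and the selection helper below are hypothetical, using the calc/calc.go sample for data:

```typescript
// Hypothetical node shapes for the file/method tree.
interface MethodNode {
  name: string;       // e.g. "Add"
  testFile: string;   // e.g. "calc/calc_test.go"
  selected: boolean;  // checkbox state
}

interface SourceFileNode {
  path: string;       // e.g. "calc/calc.go"
  methods: MethodNode[];
}

// Collect the distinct test files for every checked method in the tree.
function selectedTestFiles(tree: SourceFileNode[]): string[] {
  return [...new Set(
    tree.flatMap((f) =>
      f.methods.filter((m) => m.selected).map((m) => m.testFile),
    ),
  )];
}
```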