Unit Test Generation

RoostGPT generates unit tests from your existing source code. It analyzes your classes and methods, understands their logic, and produces test files that compile and run against your project.

How It Works

  1. Source analysis — RoostGPT parses your source file and extracts function/method signatures, dependencies, and logic
  2. Context building — Optional: provide a Jira story or additional context to improve test relevance
  3. Test generation — The AI model produces test cases covering happy paths, edge cases, and error conditions
  4. Compilation verification — Generated tests are compiled; compilation failures trigger automatic analysis and regeneration until the tests build
  5. Output — Test files are placed in your standard test directory, preserving your package structure
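To illustrate step 3, suppose the source under test contained a small pricing function like the one below. Generated tests typically cover a happy path, an edge case, and an error condition. The function and test names here are hypothetical examples, not actual RoostGPT output:

```python
# Hypothetical function under test
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; percent must be in [0, 100]."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The kinds of pytest-style cases the generator aims to produce:
def test_apply_discount_happy_path():
    assert apply_discount(200.0, 25.0) == 150.0

def test_apply_discount_edge_case_zero_percent():
    assert apply_discount(99.99, 0.0) == 99.99

def test_apply_discount_error_condition():
    try:
        apply_discount(100.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```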

Supported Languages & Frameworks

| Language | Test Frameworks        |
| -------- | ---------------------- |
| Java     | JUnit 4, JUnit 5       |
| Python   | pytest, unittest       |
| Go       | testing package        |
| C#       | NUnit, xUnit, MSTest   |

Using the CLI

# Set up your test configuration in unit-test.env
TEST_NAME=my-unit-tests
TEST_TYPE=unit
AI_TYPE=bedrock_ai # or openai, claude_ai, azure_open_ai, gemini

GIT_TYPE=local
LOCAL_PROJECT_PATH=/path/to/your/project

# Run RoostGPT
roostgpt test create -c unit-test.env

Using the VS Code Plugin

  1. Right-click on any source file or function in VS Code
  2. Select "Generate Test using RoostGPT"
  3. Choose your dependency management tool (Maven, Gradle, npm, etc.)
  4. Choose your test framework
  5. Click Generate

Alternatively, use the code lens buttons that appear above each Package, Class, and Function declaration for targeted test generation.

Output Structure

Generated tests are placed in your project's test directory, mirroring the source structure:

src/
├── main/java/com/example/
│   └── PaymentService.java
└── test/java/com/example/
    └── PaymentServiceTest.java   ← generated by RoostGPT

For Java projects, RoostGPT also adds required testing dependencies to pom.xml or build.gradle automatically.
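For example, for a Maven project using JUnit 5, the dependency entry added to pom.xml would look roughly like the following (the version number here is illustrative; RoostGPT selects an appropriate version for your project):

```xml
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter</artifactId>
  <version>5.10.2</version>
  <scope>test</scope>
</dependency>
```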

Error Handling

If a generated test fails to compile, RoostGPT automatically:

  1. Analyzes the compilation error
  2. Regenerates the problematic test with a fix
  3. Repeats up to a configured max iteration count

If a test still fails after max iterations, the file is renamed to *.java.invalid to prevent project-wide build failures while preserving it for manual review.

AI Provider Recommendations

For unit tests, the best-performing models are:

  • OpenAI: gpt-4o, gpt-5
  • Claude: claude-sonnet-4-5 (77.2% on SWE-bench), claude-opus-4-1
  • Azure OpenAI: gpt-4o
  • Google Gemini: gemini-2.5-pro, gemini-2.5-flash
  • AWS Bedrock: global.anthropic.claude-sonnet-4-20250514-v1:0

See AI Provider Support for full model tables.

Tips

  • For best results with complex classes, provide a Jira story as additional context via ROOST_USER_INPUT_FILE
  • Use the improvement sidebar in VS Code to refine generated tests with natural language instructions
  • Run roostgpt test result after generation to view execution reports and coverage metrics