# Unit Test Generation
RoostGPT generates unit tests from your existing source code. It analyzes your classes and methods, understands their logic, and produces test files that compile and run against your project.
## How It Works

1. Source analysis — RoostGPT parses your source file and extracts function/method signatures, dependencies, and logic
2. Context building (optional) — provide a Jira story or other additional context to improve test relevance
3. Test generation — the AI model produces test cases covering happy paths, edge cases, and error conditions
4. Compilation verification — generated tests are compiled; any compilation failures are analyzed and fixed in automatic retry iterations
5. Output — test files are placed in your standard test directory, preserving your package structure
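At its core this is a generate-and-verify loop. A minimal sketch in Python, assuming hypothetical helper names (`parse_source`, `generate_tests`, and `compiles` are illustrative stand-ins, not RoostGPT's actual API):

```python
# Illustrative sketch of the generation pipeline.
# All helpers below are hypothetical stubs, not RoostGPT internals.

def parse_source(path):
    # Step 1: source analysis — extract signatures and dependencies (stubbed).
    return {"path": path, "functions": ["apply_discount"]}

def generate_tests(analysis, context=None):
    # Steps 2-3: optional context plus AI test generation (stubbed).
    suffix = " with context" if context else ""
    return f"tests for {', '.join(analysis['functions'])}{suffix}"

def compiles(test_code):
    # Step 4: compilation verification (stubbed to succeed).
    return True

def run_pipeline(path, context=None, max_iterations=3):
    analysis = parse_source(path)
    for _ in range(max_iterations):
        test_code = generate_tests(analysis, context)
        if compiles(test_code):
            return test_code  # Step 5: written to the test directory
    return None  # exhausted retries; caller quarantines the file

result = run_pipeline("payment.py", context="Jira story text")
```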
## Supported Languages & Frameworks
| Language | Test Frameworks |
|---|---|
| Java | JUnit 4, JUnit 5 |
| Python | pytest, unittest |
| Go | testing package |
| C# | NUnit, xUnit, MSTest |
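To make the table concrete, here is the shape of output to expect for Python: a hypothetical source function `apply_discount` and the kind of pytest file a run might produce. Plain `assert` statements keep this sketch self-contained; real generated tests may use `pytest.raises` for the error case.

```python
# payment.py -- hypothetical source under test
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# test_payment.py -- the kind of file a generation run might emit,
# covering happy path, an edge case, and an error condition
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_zero_percent_edge():
    assert apply_discount(100.0, 0) == 100.0

def test_apply_discount_invalid_percent_raises():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```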
## Using the CLI

```shell
# Set up your test configuration (unit-test.env)
TEST_NAME=my-unit-tests
TEST_TYPE=unit
AI_TYPE=bedrock_ai   # or openai, claude_ai, azure_open_ai, gemini
GIT_TYPE=local
LOCAL_PROJECT_PATH=/path/to/your/project
```

```shell
# Run RoostGPT
roostgpt test create -c unit-test.env
```
## Using the VS Code Plugin

1. Right-click on any source file or function in VS Code
2. Select "Generate Test using RoostGPT"
3. Choose your dependency management tool (Maven, Gradle, npm, etc.)
4. Choose your test framework
5. Click Generate
Alternatively, use the CodeLens buttons that appear above each Package, Class, and Function declaration for targeted test generation.
## Output Structure

Generated tests are placed in your project's test directory, mirroring the source structure:

```
src/
├── main/java/com/example/
│   └── PaymentService.java
└── test/java/com/example/
    └── PaymentServiceTest.java   ← generated by RoostGPT
```
For Java projects, RoostGPT also adds required testing dependencies to `pom.xml` or `build.gradle` automatically.
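For example, a JUnit 5 setup typically needs a dependency along these lines in `pom.xml` (the version shown is illustrative; the exact dependencies RoostGPT adds may differ for your project):

```xml
<!-- Illustrative JUnit 5 test dependency -->
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.10.2</version>
    <scope>test</scope>
</dependency>
```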
## Error Handling
If a generated test fails to compile, RoostGPT automatically:
- Analyzes the compilation error
- Regenerates the problematic test with a fix
- Repeats up to a configured max iteration count
If a test still fails to compile after the maximum number of iterations, the file is renamed to `*.java.invalid` to prevent project-wide build failures while preserving it for manual review.
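The retry-and-quarantine behavior can be sketched as follows; the helpers here are hypothetical stand-ins (not RoostGPT internals), with the compiler stubbed to always fail so the rename path is exercised:

```python
import os

def compile_java(path):
    # Hypothetical compiler wrapper: returns an error message or None.
    # Stubbed to always report a failure for illustration.
    return "cannot find symbol"

def regenerate_with_fix(path, error):
    # Hypothetical call back to the AI model, passing the compiler error.
    pass

def verify_or_quarantine(path, max_iterations=3):
    for _ in range(max_iterations):
        error = compile_java(path)
        if error is None:
            return path                   # test compiles: keep it as-is
        regenerate_with_fix(path, error)  # retry with the error as context
    # Exhausted retries: quarantine, e.g. PaymentServiceTest.java.invalid
    quarantined = path + ".invalid"
    os.rename(path, quarantined)          # preserved for manual review
    return quarantined
```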
## AI Provider Recommendations

For unit tests, the best-performing models are:

- OpenAI: `gpt-4o`, `gpt-5`
- Claude: `claude-sonnet-4-5` (77.2% on SWE-bench), `claude-opus-4-1`
- Azure OpenAI: `gpt-4o`
- Google Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`
- AWS Bedrock: `global.anthropic.claude-sonnet-4-20250514-v1:0`
See AI Provider Support for full model tables.
## Tips

- For best results with complex classes, provide a Jira story as additional context via `ROOST_USER_INPUT_FILE`
- Use the improvement sidebar in VS Code to refine generated tests with natural-language instructions
- Run `roostgpt test result` after generation to view execution reports and coverage metrics