RoostGPT AI Provider Support for Unit, API and Other Tests

Overview

RoostGPT supports multiple AI providers for comprehensive test generation across unit tests, API tests, integration tests, functional tests, and more. This document outlines the supported providers and their available models.

Supported Providers:

  • OpenAI
  • Claude AI (Anthropic)
  • Azure OpenAI
  • Vertex AI (Google Cloud)
  • AWS Bedrock

OpenAI Models

| Model Name | Context Window (Tokens) | Best For | Key Features |
|---|---|---|---|
| GPT-5 | Extended (Input: 272K / Output: 128K) | Complex test scenarios, enterprise | Expert-level intelligence, built-in reasoning |
| GPT-5 Thinking | Extended (Input: 272K / Output: 128K) | Advanced test logic, edge cases | Extended reasoning mode |
| GPT-4.5 | Large | Standard test generation | Balanced performance, cost-effective |
| GPT-4o | Large | Fast test generation | Multimodal, real-time processing |
| GPT-4o mini | Standard | High-volume generation | Cost-efficient, fast |
| GPT-4 Turbo | 128K | Production environments | Proven reliability |
| o3 | Extended | Complex reasoning tasks | Advanced chain-of-thought |
| o4-mini | Standard | Cost-efficient reasoning | Faster reasoning model |
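
To sanity-check your OpenAI credentials and model access before configuring them in RoostGPT, a minimal sketch using the official `openai` Python SDK looks roughly like this; the model name and prompt are illustrative, and the API key is assumed to be set in your environment:

```python
# Minimal connectivity check with the official `openai` Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model string is
# illustrative -- substitute any model from the table above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a unit test for a string-reversal function."}],
)
print(response.choices[0].message.content)
```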

Claude AI (Anthropic) Models

| Model Name | Context Window (Tokens) | Best For | Key Features |
|---|---|---|---|
| Claude Opus 4.1 | Input: 200K / Output: 32K | Complex coding tasks, unit tests | Best coding model, 74.5% on SWE-bench |
| Claude Sonnet 4 | 200K-1M | General test generation | Superior coding and reasoning, instruction following |
| Claude Sonnet 3.7 | 200K | Hybrid reasoning tasks | Fast and extended thinking modes |
| Claude Sonnet 3.5 | 200K | Legacy support | Strong coding capabilities |
| Claude Haiku 3.5 | 200K | Quick, simple tests | Fast, cost-effective |
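
A similar credential and model-access check is possible with Anthropic's official `anthropic` Python SDK; this is a minimal sketch, and the model ID and prompt are illustrative, so substitute whichever Claude model you plan to use:

```python
# Minimal check with the official `anthropic` Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model ID is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,  # required by the Messages API
    messages=[{"role": "user", "content": "Write a pytest test for a factorial function."}],
)
print(message.content[0].text)
```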


Azure OpenAI Models

Azure OpenAI provides the same OpenAI models with enterprise features.

| Model Name | Context Window | Additional Features |
|---|---|---|
| GPT-5 Series | Extended (Input: 272K / Output: 128K) | VNET integration, Private endpoints |
| GPT-4.5 | Large | Managed Identity, Regional deployment |
| GPT-4o Series | Large | SLA guarantees, Azure Monitor |
| GPT-4 Turbo | 128K tokens | Standard and Provisioned deployments |
| o3, o4-mini | Extended / Standard | Azure AD authentication |
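
Because Azure OpenAI routes requests through your own resource and deployment, a quick way to confirm a deployment is reachable is the `openai` SDK's `AzureOpenAI` client. This is a sketch only: the endpoint, API version, and deployment name below are placeholders for values from your own Azure OpenAI resource.

```python
# Minimal check with the `openai` SDK's Azure client.
# Endpoint, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # use a version supported by your resource
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # your *deployment name*, not the raw model ID
    messages=[{"role": "user", "content": "Generate a JUnit test for a date parser."}],
)
print(response.choices[0].message.content)
```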


Vertex AI (Google Cloud) Models

| Model Name | Context Window (Tokens) | Best For | Key Features |
|---|---|---|---|
| Gemini 2.5 Pro | Input: 1M / Output: 65K | Complex API testing, large codebases | Deep Think mode, long context |
| Gemini 2.5 Pro Deep Think | 2M | Highly complex test logic | Enhanced reasoning, problem-solving |
| Gemini 2.5 Flash | 1M | Fast test generation, CI/CD | Best price-performance, agentic |
| Gemini 2.5 Flash-Lite | Standard | High-volume testing | Most cost-effective |
| Gemini 2.0 Flash | 1M | Modern applications | Native tool use, improved speed |
| Gemini 2.0 Flash-Lite | Standard | Budget projects | Cost-efficient, low latency |
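
For Vertex AI, a minimal sketch with the `google-cloud-aiplatform` (`vertexai`) SDK can confirm that your project and region have access to a given Gemini model; the project ID, location, model name, and prompt below are placeholders:

```python
# Minimal check with the `vertexai` SDK (google-cloud-aiplatform package).
# Project ID, region, and model name are placeholders; authentication uses
# your Application Default Credentials.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = GenerativeModel("gemini-2.5-flash")
response = model.generate_content("Write a table-driven Go test for an email validator.")
print(response.text)
```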


AWS Bedrock Models

AWS Bedrock provides access to multiple foundation models from various providers in a fully managed service.

Available Model Families

| Provider | Models | Context Window | Best For |
|---|---|---|---|
| Anthropic | Claude Opus 4.1, Sonnet 4, Sonnet 3.7 | 200K-1M tokens | Code generation, unit tests |
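
To verify that your AWS account has been granted access to a given Bedrock model, a minimal sketch with `boto3`'s Bedrock Runtime Converse API looks like this; the region and model ID are placeholders and depend on what is enabled in your account:

```python
# Minimal check with boto3's Bedrock Runtime client (Converse API).
# Region and model ID are placeholders; use a model enabled in your account.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",
    messages=[{"role": "user", "content": [{"text": "Write a unit test for a stack implementation."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```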



Support Resources