# RoostGPT Test Generation

# RoostGPT AI Provider Support for UI Test Generation

## Overview

RoostGPT supports three major AI providers for UI test generation:

- **OpenAI**
- **Google Gemini**
- **Azure OpenAI**

All models support vision capabilities for analyzing UI elements and generating comprehensive test cases.

**Note** - RoostGPT only supports vision models for UI Test Generation.

---

## OpenAI Models

<table id="bkmrk-model-name-context-w" style="width: 100%; height: 352.157px;"><thead><tr style="height: 46.5938px;"><th style="width: 16.3746%; height: 46.5938px;">Model Name</th><th style="width: 12.0852%; height: 46.5938px;">Context Window</th><th style="width: 28.9156%; height: 46.5938px;">Best For</th><th style="width: 42.5056%; height: 46.5938px;">Capabilities</th></tr></thead><tbody><tr style="height: 46.5938px;"><td style="width: 16.3746%; height: 46.5938px;">GPT-5</td><td style="width: 12.0852%; height: 46.5938px;">Extended</td><td style="width: 28.9156%; height: 46.5938px;">Complex UI workflows, enterprise applications</td><td style="width: 42.5056%; height: 46.5938px;">Expert-level intelligence, built-in reasoning, superior visual perception</td></tr><tr style="height: 29.7969px;"><td style="width: 16.3746%; height: 29.7969px;">GPT-4.5</td><td style="width: 12.0852%; height: 29.7969px;">Large</td><td style="width: 28.9156%; height: 29.7969px;">Standard UI testing</td><td style="width: 42.5056%; height: 29.7969px;">Vision-enabled, balanced performance</td></tr><tr style="height: 29.7969px;"><td style="width: 16.3746%; height: 29.7969px;">GPT-4o (Omni)</td><td style="width: 12.0852%; height: 29.7969px;">Large</td><td style="width: 28.9156%; height: 29.7969px;">Multimodal test generation</td><td style="width: 42.5056%; height: 29.7969px;">Real-time visual analysis, fast response</td></tr><tr style="height: 29.7969px;"><td style="width: 16.3746%; height: 29.7969px;">GPT-4o mini</td><td style="width: 12.0852%; height: 29.7969px;">Standard</td><td style="width: 28.9156%; height: 29.7969px;">High-volume test generation</td><td style="width: 42.5056%; height: 29.7969px;">Cost-efficient, fast, good vision</td></tr><tr style="height: 46.5938px;"><td style="width: 16.3746%; height: 46.5938px;">o3</td><td style="width: 12.0852%; height: 46.5938px;">Extended</td><td style="width: 28.9156%; height: 46.5938px;">Complex test logic, deep reasoning</td><td style="width: 
42.5056%; height: 46.5938px;">Advanced chain-of-thought with vision</td></tr><tr style="height: 29.7969px;"><td style="width: 16.3746%; height: 29.7969px;">o4-mini</td><td style="width: 12.0852%; height: 29.7969px;">Standard</td><td style="width: 28.9156%; height: 29.7969px;">Cost-efficient reasoning</td><td style="width: 42.5056%; height: 29.7969px;">Faster reasoning with vision support</td></tr></tbody></table>

---

## Google Gemini Models

<table id="bkmrk-model-name-context-w-1"><thead><tr><th>Model Name</th><th>Context Window</th><th>Best For</th><th>Capabilities</th></tr></thead><tbody><tr><td>Gemini 2.5 Pro</td><td>Input: 1024k

Output: 64k

</td><td>Complex UI testing, large-scale apps</td><td>State-of-the-art thinking, long context</td></tr><tr><td>Gemini 2.5 Flash</td><td>Input: 1024k

Output: 64k

</td><td>Fast, cost-effective test generation</td><td>Best price-performance, agentic use cases</td></tr><tr><td>Gemini 2.5 Flash-Lite</td><td>Input: 1024k

Output: 64k

</td><td>High-volume, low-latency generation</td><td>Most cost-effective, optimized for throughput</td></tr><tr><td>Gemini 2.0 Flash</td><td>Input: 1024k

Output: 8k

</td><td>Next-gen features, modern apps</td><td>Native tool use, improved speed</td></tr><tr><td>Gemini 2.0 Flash-Lite</td><td>Input: 1024k

Output: 8k

</td><td>Budget-conscious projects</td><td>Cost-efficient, low latency</td></tr></tbody></table>

---

## Azure OpenAI Models

Azure OpenAI provides the same OpenAI models with enterprise features.

<table id="bkmrk-model-name-context-w-2"><thead><tr><th>Model Name</th><th>Context Window</th><th>Additional Features</th></tr></thead><tbody><tr><td>GPT-5 Series</td><td>Extended</td><td>Azure security, compliance, VNET integration</td></tr><tr><td>GPT-4.5</td><td>Large</td><td>Private endpoints, managed identity</td></tr><tr><td>GPT-4.1 Series</td><td>Large</td><td>Regional deployment, data residency</td></tr><tr><td>GPT-4o Series</td><td>Large</td><td>SLA guarantees, Azure Monitor integration</td></tr><tr><td>o3, o4-mini</td><td>Extended/Standard</td><td>Azure AD authentication</td></tr></tbody></table>

---

## Recommended Models

### OpenAI

- **gpt-4o**
- **gpt-5**

### Azure OpenAI

- **gpt-4o**
- **gpt-5**

### Google Gemini

- **gemini-2.5-pro**
- **gemini-2.5-flash**

---

## Support Resources

- **OpenAI**: [platform.openai.com/docs](https://platform.openai.com/docs)
- **Google Gemini**: [ai.google.dev/gemini-api/docs](https://ai.google.dev/gemini-api/docs)
- **Azure OpenAI**: [learn.microsoft.com/azure/ai-services/openai](https://learn.microsoft.com/azure/ai-services/openai)

# RoostGPT AI Provider Support for Unit, API and Other Tests

## Overview

RoostGPT supports multiple AI providers for comprehensive test generation across unit tests, API tests, integration tests, functional tests, and more. This document outlines the supported providers and their available models.

**Supported Providers:**

- OpenAI
- Claude AI (Anthropic)
- Azure OpenAI
- Vertex AI (Google Cloud)
- AWS Bedrock

---

## OpenAI Models

<table id="bkmrk-model-name-context-w"><thead><tr><th>Model Name</th><th>Context Window

(Tokens)

</th><th>Best For</th><th>Key Features</th></tr></thead><tbody><tr><td>GPT-5</td><td>Total: 400k  
Input: 272k  
Output: 128k</td><td>Complex test scenarios, enterprise</td><td>Expert-level intelligence, built-in reasoning</td></tr><tr><td>GPT-4.5</td><td>Large</td><td>Standard test generation</td><td>Balanced performance, cost-effective</td></tr><tr><td>GPT-4o</td><td>Large</td><td>Fast test generation</td><td>Multimodal, real-time processing</td></tr><tr><td>GPT-4o mini</td><td>Standard</td><td>High-volume generation</td><td>Cost-efficient, fast</td></tr><tr><td>GPT-4 Turbo</td><td>128K tokens</td><td>Production environments</td><td>Proven reliability</td></tr><tr><td>o3</td><td>Extended</td><td>Complex reasoning tasks</td><td>Advanced chain-of-thought</td></tr><tr><td>o4-mini</td><td>Standard</td><td>Cost-efficient reasoning</td><td>Faster reasoning model</td></tr></tbody></table>

---

## Claude AI (Anthropic) Models

<table id="bkmrk-model-name-context-w-1" style="width: 100%;"><thead><tr><th style="width: 18.5906%;">Model Name</th><th style="width: 14.3057%;">Context Window

(Tokens)

</th><th style="width: 30.0358%;">Best For</th><th style="width: 36.9487%;">Key Features</th></tr></thead><tbody><tr><td style="width: 18.5906%;">Claude Opus 4.1</td><td style="width: 14.3057%;">Input: 200K

Output: 32k

</td><td style="width: 30.0358%;">Complex coding tasks, unit tests</td><td style="width: 36.9487%;">Best coding model, 74.5% on SWE-bench</td></tr><tr><td style="width: 18.5906%;">Claude Sonnet 4.5</td><td style="width: 14.3057%;">Input: 200K

Output: 64k

</td><td style="width: 30.0358%;">Coding, Finance and CyberSecurity</td><td style="width: 36.9487%;">Long running tasks, enhanced domain knowledge and 77.2% on SWE-bench</td></tr><tr><td style="width: 18.5906%;">Claude Sonnet 4</td><td style="width: 14.3057%;">200K-1M

Output: 64k

</td><td style="width: 30.0358%;">General test generation</td><td style="width: 36.9487%;">Superior coding and reasoning, instruction following</td></tr><tr><td style="width: 18.5906%;">Claude Sonnet 3.7</td><td style="width: 14.3057%;">200K tokens

Output: 64k

</td><td style="width: 30.0358%;">Hybrid reasoning tasks</td><td style="width: 36.9487%;">Fast and extended thinking modes</td></tr><tr><td style="width: 18.5906%;">Claude Sonnet 3.5</td><td style="width: 14.3057%;">200K tokens

Output: 8k

</td><td style="width: 30.0358%;">Legacy support</td><td style="width: 36.9487%;">Strong coding capabilities</td></tr><tr><td style="width: 18.5906%;">Claude Haiku 3.5</td><td style="width: 14.3057%;">200K tokens

Output: 8k

</td><td style="width: 30.0358%;">Quick, simple tests</td><td style="width: 36.9487%;">Fast, cost-effective</td></tr></tbody></table>

---

## Azure OpenAI Models

Azure OpenAI provides the same OpenAI models with enterprise features.

<table id="bkmrk-model-name-context-w-2"><thead><tr><th>Model Name</th><th>Context Window</th><th>Additional Features</th></tr></thead><tbody><tr><td>GPT-5 Series</td><td>Total: 400k  
Input: 272k  
Output: 128k</td><td>VNET integration, Private endpoints</td></tr><tr><td>GPT-4.5</td><td>Large</td><td>Managed Identity, Regional deployment</td></tr><tr><td>GPT-4o Series</td><td>Large</td><td>SLA guarantees, Azure Monitor</td></tr><tr><td>GPT-4 Turbo</td><td>128K tokens</td><td>Standard and Provisioned deployments</td></tr><tr><td>o3, o4-mini</td><td>Extended/Standard</td><td>Azure AD authentication</td></tr></tbody></table>

---

## Vertex AI (Google Cloud) Models

<table id="bkmrk-model-name-context-w-3" style="width: 96.3095%;"><thead><tr><th style="width: 20.0521%;">Model Name</th><th style="width: 16.4062%;">Context Window

(Tokens)

</th><th style="width: 33.9844%;">Best For</th><th style="width: 29.5573%;">Key Features</th></tr></thead><tbody><tr><td style="width: 20.0521%;">Gemini 2.5 Pro</td><td style="width: 16.4062%;">Input: 1024k

Output: 64k

</td><td style="width: 33.9844%;">Complex API testing, large codebases</td><td style="width: 29.5573%;">Deep Think mode, long context</td></tr></tbody></table>

---

## AWS Bedrock Models

AWS Bedrock provides access to multiple foundation models from various providers in a fully managed service.

### Available Model Families

<table id="bkmrk-provider-models-cont" style="width: 91.6667%;"><thead><tr><th style="width: 13.8131%;">Provider</th><th style="width: 38.1299%;">Models</th><th style="width: 21.583%;">Context Window</th><th style="width: 26.4751%;">Best For</th></tr></thead><tbody><tr><td style="width: 13.8131%;"><strong>Anthropic</strong></td><td style="width: 38.1299%;">Claude Opus 4.1, Sonnet 4, Sonnet 3.7</td><td style="width: 21.583%;">Output Tokens: 4K</td><td style="width: 26.4751%;">Code generation, unit tests</td></tr></tbody></table>

---

## Recommended Models

### OpenAI

- GPT-4o, GPT-5

### Azure OpenAI

- GPT-4o, GPT-5

### Claude

- Claude Sonnet 4.5


---

## Support Resources

- **OpenAI**: [platform.openai.com/docs](https://platform.openai.com/docs)
- **Anthropic (Claude)**: [docs.claude.com](https://docs.claude.com/)
- **Azure OpenAI**: [learn.microsoft.com/azure/ai-services/openai](https://learn.microsoft.com/azure/ai-services/openai)
- **Vertex AI**: [cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
- **AWS Bedrock**: [docs.aws.amazon.com/bedrock](https://docs.aws.amazon.com/bedrock)

# RoostGPT Test Generation - Prerequisites

## Overview

Before using RoostGPT for test generation, ensure you have the following prerequisites based on your test type.

---

## 1. Git Integration (For PR Output)

**When needed:** If test output should be created as a Git Pull Request

**Required:**

- Git Personal Access Token with Read, Write, and Create PR permissions
- Repository URL and target branch name

**Support Link:** [https://docs.roost.ai/books/git-configuration-and-tokens](https://docs.roost.ai/books/git-configuration-and-tokens)

---

## 2. AI Provider Access Token

**When needed:** For all test generation

**Required:**

- API Key/Access Token from one of the supported providers: 
    - OpenAI
    - Claude AI (Anthropic)
    - Azure OpenAI (API Key + Endpoint)
    - Vertex AI (Google Cloud)
    - AWS Bedrock
    - Deepseek

**Support Link:** [https://docs.roost.ai/books/ai-configuration-and-tokens](https://docs.roost.ai/books/ai-configuration-and-tokens)

---

## 3. Jira Integration (For Functional Tests)

**When needed:** Generating functional tests from Jira tickets

**Required:**

- Jira Access Token
- Jira Domain URL (e.g., `https://company.atlassian.net`)

---

## 4. UI Test Generation - Login Credentials

**When needed:** Generating UI/E2E tests requiring authentication

**Required:**

- Application Login Username
- Application Login Password

---

## 5. UI Test Generation - Scenario Document

**When needed:** Testing specific user workflows or scenarios

**Required:**

- Document containing scenario details (PDF, Markdown, or Text format)
- Document should include: 
    - Scenario name
    - Test steps
    - Expected results
    - Test data
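
For illustration, a minimal scenario document in Markdown might look like the following; the scenario, steps, and data shown are hypothetical:

```markdown
## Scenario: Add item to cart

### Test Steps
1. Navigate to the product catalog page
2. Click "Add to Cart" on the first product
3. Open the cart page

### Expected Results
- The cart shows one item with the correct product name and price

### Test Data
- Product name: "Sample Widget"
- Expected price: "$9.99"
```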

---

## 6. UI Test Generation - Custom Login Steps Document

**When needed:** Application has non-standard login flows (SSO, OAuth, custom authentication)

**Note:** MFA (Multi-Factor Authentication) is not supported

**Required:**

- Document describing custom login steps
- Document should include: 
    - Login method type
    - General authentication process

# RoostGPT Input and Output Table

## Overview

RoostGPT is an intelligent test automation platform that leverages AI to transform business requirements and technical specifications into comprehensive test suites across multiple testing frameworks.

## Test Types

### Quick Reference Table

<table id="bkmrk-test-type-input-outp"><thead><tr><th>Test Type</th><th>Input</th><th>Output</th></tr></thead><tbody><tr><td><strong>Unit Test</strong></td><td><ul><li>Source Code (Java, Python, Golang, CSharp)</li></ul></td><td><ul><li>Test Code (Java, Python, Golang, CSharp)</li></ul></td></tr><tr><td><strong>API Test</strong></td><td><ul><li>Git Repo (for output)</li><li>Swagger (OpenAPI spec)</li></ul></td><td><ul><li>Postman, Rest-Assured, Artillery, Karate, or Pytest (one of them)</li><li>Test Data (JSON)</li></ul></td></tr><tr><td><strong>Functional Test</strong></td><td><ul><li>Jira User Story (ID or file)</li><li>User Input (file or text)</li></ul></td><td><ul><li>JSON output</li><li>Gherkin Feature File</li><li>Functional Test Excel output</li><li>OpenAPI Spec</li></ul></td></tr><tr><td><strong>UI Test</strong></td><td><ul><li>Domain (URL for which tests need to be generated)</li><li>User Scenario Document</li><li>Login Credentials (if applicable)</li><li>Login Scenario Document</li></ul></td><td><ul><li>JS Playwright Test Script</li></ul></td></tr></tbody></table>

### Unit Test

**Purpose:** Unit tests validate individual components, functions, or methods in isolation, ensuring they behave correctly under various conditions.

#### Input Requirements:

- ✅ Source code files in supported languages (Java, Python, Golang, or C#)
- ✅ Code should be well-structured with clear function/method definitions
- ✅ Dependencies and imports should be properly declared

#### Output Generated:

- 📝 Comprehensive test code in the same language as the source
- 📝 Test cases covering normal operations, edge cases, and error conditions

- 📝 Appropriate assertions and test data
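
As a concrete (hypothetical, framework-agnostic) illustration, a generated test suite for a simple Python function typically pairs the source under test with cases for normal operation, an edge case, and an error condition:

```python
# Hypothetical source function under test
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Normal operation
def test_divide_normal():
    assert divide(10, 2) == 5

# Edge case: negative operand
def test_divide_negative():
    assert divide(-9, 3) == -3

# Error condition: verify the expected exception is raised
def test_divide_by_zero():
    try:
        divide(1, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass

for test in (test_divide_normal, test_divide_negative, test_divide_by_zero):
    test()
    print(test.__name__, "passed")
```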

### API Test

**Purpose:** API tests verify the functionality, reliability, and performance of application programming interfaces, ensuring they meet specifications and handle requests correctly.

#### Input Requirements:

- ✅ Git repository URL for storing generated test artifacts
- ✅ OpenAPI/Swagger specification document defining: 
    - API endpoints
    - Request/response schemas
    - Authentication requirements

#### Output Generated:

Test collections and scripts in your choice of framework:

<table id="bkmrk-framework-descriptio"><thead><tr><th>Framework</th><th>Description</th><th>Output Format</th></tr></thead><tbody><tr><td><strong>Postman</strong></td><td>Collection JSON files ready to import</td><td><code>.json</code></td></tr><tr><td><strong>Rest-Assured</strong></td><td>Java-based test classes with fluent API syntax</td><td><code>.java</code></td></tr><tr><td><strong>Artillery</strong></td><td>YAML configuration for load and performance testing</td><td><code>.yml</code></td></tr><tr><td><strong>Karate</strong></td><td>Feature files with BDD-style API tests</td><td><code>.feature</code></td></tr><tr><td><strong>Pytest</strong></td><td>Python test functions with request fixtures</td><td><code>.py</code></td></tr></tbody></table>
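
Conceptually, API test generation starts by walking the spec's `paths` object to enumerate the operations to cover. A toy Python sketch (the spec fragment below is hypothetical, not RoostGPT internals):

```python
# Hypothetical, minimal OpenAPI 3 fragment; a real run consumes your Swagger file.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {
            "get": {"summary": "List users"},
            "post": {"summary": "Create user"},
        },
        "/users/{id}": {
            "get": {"summary": "Get a single user"},
        },
    },
}

def list_operations(spec):
    """Enumerate (HTTP method, path) pairs that a test generator would target."""
    return [
        (method.upper(), path)
        for path, methods in spec.get("paths", {}).items()
        for method in methods
    ]

for method, path in list_operations(spec):
    print(method, path)
```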

### Functional Test

**Purpose:** Functional tests validate complete business workflows and user scenarios, ensuring the system behaves according to specified requirements and user expectations.

#### Input Requirements:

- ✅ **Jira User Story:** Can be provided as a Jira ticket ID or exported file
- ✅ **User Input:** Business requirements as text or document files describing expected system behavior

#### Output Generated:

- **JSON Output:** Structured test data and results in JSON format
- **Gherkin Feature Files:** BDD-style scenarios in Given-When-Then format
- **Excel Output:** Test case documentation with steps and expected results

- **OpenAPI Spec:** Generated API specifications for tested endpoints
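
For example, a Gherkin feature file follows the Given-When-Then structure; the scenario below is illustrative, not actual output:

```gherkin
Feature: User login
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user submits a valid username and password
    Then the dashboard page is displayed
```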

### UI Test

**Purpose:** UI tests automate user interactions with web applications, verifying that user interface elements function correctly and the application responds appropriately to user actions.

#### Input Requirements:

- ✅ **Domain URL:** The base URL of the application to be tested
- ✅ **User Scenario Document:** Detailed description of user workflows and expected interactions
- ✅ **Login Credentials:** Authentication details if the application requires login (optional)
- ✅ **Login Scenario Document:** Step-by-step login process if authentication is required

#### Output Generated:

- 🎭 **Playwright Test Scripts:** JavaScript test files using Playwright framework
- 🎭 Automated browser interactions including clicks, form fills, and navigation
- 🎭 Assertions for verifying page elements, content, and behavior

# RoostGPT AI Keys Requirements

This document provides the required environment variables for configuring different AI providers.

---

## 1. OpenAI Configuration

### Required Environment Variables

<table id="bkmrk-variable-name-requir" style="width: 73.6905%;"><thead><tr><th style="width: 22.3301%;">Variable Name</th><th style="width: 12.2977%;">Required</th><th style="width: 65.3722%;">Description</th></tr></thead><tbody><tr><td style="width: 22.3301%;">`OPENAI_API_KEY`</td><td style="width: 12.2977%;">Yes</td><td style="width: 65.3722%;">Your OpenAI API key</td></tr><tr><td style="width: 22.3301%;">`OPENAI_API_MODEL`</td><td style="width: 12.2977%;">Yes</td><td style="width: 65.3722%;">Model to use </td></tr></tbody></table>

### Configuration Example

```bash
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxx
OPENAI_API_MODEL=gpt-4o
```

---

## 2. Google Gemini Configuration

### Required Environment Variables

<table id="bkmrk-variable-name-requir-1"><thead><tr><th>Variable Name</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>`GEMINI_API_KEY`</td><td>Yes</td><td>Your Google AI Studio API key for accessing Gemini models</td></tr><tr><td>`GEMINI_MODEL`</td><td>Yes</td><td>Gemini model to use. Options: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, etc.</td></tr></tbody></table>

### Configuration Example

```bash
GEMINI_API_KEY=AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxxx
GEMINI_MODEL=gemini-2.5-pro
```

---

## 3. AWS Bedrock Configuration

### Required Environment Variables

<table id="bkmrk-variable-name-requir-2"><thead><tr><th>Variable Name</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>`AWS_BEDROCK_MODEL`</td><td>Yes</td><td>Model ID to use. Examples: `anthropic.claude-v2`, `anthropic.claude-3-sonnet-20240229-v1:0`, `amazon.titan-text-express-v1`</td></tr><tr><td>`AWS_DEFAULT_REGION`</td><td>Yes</td><td>AWS region where Bedrock is available. Examples: `us-east-1`, `us-west-2`, `eu-west-1`</td></tr><tr><td>`AWS_ACCESS_KEY_ID`</td><td>Yes</td><td>AWS access key ID for authentication</td></tr><tr><td>`AWS_SECRET_ACCESS_KEY`</td><td>Yes</td><td>AWS secret access key for authentication</td></tr></tbody></table>

### Configuration Example

```bash
AWS_BEDROCK_MODEL=anthropic.claude-3-sonnet-20240229-v1:0
AWS_DEFAULT_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

### Important Notes

- Model availability varies by region. Check AWS Bedrock documentation for supported models in your region.
- You may need to request model access through the AWS Bedrock console before using certain models.
- Ensure your IAM user has necessary permissions for `bedrock:InvokeModel` action.
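
A minimal IAM policy sketch granting that action is shown below; in production, scope `Resource` to the specific model ARNs you use rather than `*`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "*"
    }
  ]
}
```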

---

## 4. Claude AI Configuration

### Required Environment Variables

<table id="bkmrk-variable-name-requir-3"><thead><tr><th>Variable Name</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>`CLAUDE_AI_API_KEY`</td><td>Yes</td><td>Your Anthropic API key for accessing Claude models</td></tr><tr><td>`CLAUDE_AI_MODEL`</td><td>Yes</td><td>Claude model to use. Examples: `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`, `claude-2.1`, `claude-2.0`</td></tr></tbody></table>

### Configuration Example

```bash
CLAUDE_AI_API_KEY=sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxx
CLAUDE_AI_MODEL=claude-3-sonnet-20240229
```

---

## 5. Azure OpenAI Configuration

### Required Environment Variables

<table id="bkmrk-variable-name-requir-4"><thead><tr><th>Variable Name</th><th>Required</th><th>Description</th></tr></thead><tbody><tr><td>`AZURE_OPENAI_ENDPOINT`</td><td>Yes</td><td>Your Azure OpenAI resource endpoint URL (e.g., `https://your-resource.openai.azure.com/`)</td></tr><tr><td>`AZURE_DEPLOYMENT_NAME`</td><td>Yes</td><td>Name of your deployed model in Azure OpenAI Studio (e.g., `gpt-4-deployment`, `gpt-35-turbo-deployment`)</td></tr><tr><td>`AZURE_OPENAI_KEY`</td><td>Yes</td><td>API key for your Azure OpenAI resource (Key 1 or Key 2 from Azure portal)</td></tr></tbody></table>

### Configuration Example

```bash
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_DEPLOYMENT_NAME=gpt-4-deployment
AZURE_OPENAI_KEY=1234567890abcdef1234567890abcdef
```

### Important Notes

- Azure OpenAI requires approval. Apply for access if you haven't already.
- Model availability varies by region. Choose your region based on available models.
- Deployment names are custom - you choose them when deploying models in Azure OpenAI Studio.
- The endpoint URL should end with a trailing slash (`/`).
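
To illustrate how these three values fit together, here is a small Python sketch (not part of RoostGPT) that assembles the per-deployment request URL Azure OpenAI expects; the endpoint and deployment values are hypothetical, and the `api-version` shown is one of the published GA versions:

```python
import os

# Hypothetical values mirroring the example above; replace with your own.
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://your-resource-name.openai.azure.com/")
os.environ.setdefault("AZURE_DEPLOYMENT_NAME", "gpt-4-deployment")

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]      # should end with "/"
deployment = os.environ["AZURE_DEPLOYMENT_NAME"]    # your custom deployment name

# Azure OpenAI routes requests by deployment name, not by model name.
url = f"{endpoint}openai/deployments/{deployment}/chat/completions?api-version=2024-02-01"
print(url)
```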

# Manual Playwright Chromium Browser Installation Guide

This guide provides step-by-step instructions for manually downloading and setting up the Playwright Chromium browser on Windows.

## Prerequisites

- Windows operating system
- File extraction tool (e.g., Windows Explorer, 7-Zip, WinRAR)
- PowerShell (used to run the commands below)

## Installation Steps

### Step 1: Download the Browser

Download the Chromium browser archive from the following URL:

```
https://drive.google.com/drive/u/0/folders/1xcmG-rfTcWzf7DEuNt3DUTpwe7FROhTq

OR

https://playwright-verizon.azureedge.net/builds/chromium/1187/chromium-win64.zip
```

### Step 2: Create the Playwright Directory

Navigate to your user profile's AppData folder and create the ms-playwright directory:

```
%USERPROFILE%\AppData\Local\ms-playwright
```

You can create it with the following command:

```powershell
mkdir "$env:USERPROFILE\AppData\Local\ms-playwright"
```

### Step 3: Create the Browser Version Folder

Inside the `ms-playwright` folder, create a new folder named `chromium-1187`:

```powershell
mkdir "$env:USERPROFILE\AppData\Local\ms-playwright\chromium-1187"
```

### Step 4: Extract the Browser

Unzip the downloaded `chromium-win64.zip` file into the `chromium-1187` folder.

After extraction, your folder structure should look like:

```
%USERPROFILE%\AppData\Local\ms-playwright\
└── chromium-1187\
    └── chrome-win\
        ├── chrome.exe
        └── ...
```

### Step 5: Create Validation Files

Create two empty marker files inside the `chromium-1187` folder to indicate successful installation:

**File 1:** `DEPENDENCIES_VALIDATED`

```powershell
ni "$env:USERPROFILE\AppData\Local\ms-playwright\chromium-1187\DEPENDENCIES_VALIDATED"
```

**File 2:** `INSTALLATION_COMPLETE`

```powershell
ni "$env:USERPROFILE\AppData\Local\ms-playwright\chromium-1187\INSTALLATION_COMPLETE"
```

## Final Directory Structure

After completing all steps, your directory structure should be:

```
%USERPROFILE%\AppData\Local\ms-playwright\
└── chromium-1187\
    ├── chrome-win\
    │   ├── chrome.exe
    │   └── ...
    ├── DEPENDENCIES_VALIDATED
    └── INSTALLATION_COMPLETE
```

## Verification

### Check Files and Folders

To verify the installation, check that all files and folders exist:

```powershell
dir "$env:USERPROFILE\AppData\Local\ms-playwright\chromium-1187"
```

You should see the `chrome-win` folder along with the two marker files.
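
Alternatively, a small Python sketch (any recent Python 3) can check the same layout; on non-Windows systems `USERPROFILE` is absent, so every path will simply report MISSING:

```python
import os

# Expected Playwright Chromium layout on Windows.
base = os.path.join(
    os.environ.get("USERPROFILE", ""),
    "AppData", "Local", "ms-playwright", "chromium-1187",
)
expected = [
    os.path.join(base, "chrome-win", "chrome.exe"),
    os.path.join(base, "DEPENDENCIES_VALIDATED"),
    os.path.join(base, "INSTALLATION_COMPLETE"),
]
for path in expected:
    print(path, "->", "OK" if os.path.exists(path) else "MISSING")
```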

### Test Browser Launch

To verify that Playwright is correctly picking up the browser path, run the following command to open a URL:

```powershell
npx playwright open https://www.google.com
```

This should launch a new Chromium browser window and navigate to the specified URL. If the browser opens successfully, your manual installation is working correctly.

## Troubleshooting

- **Browser not detected:** Ensure the folder name is exactly `chromium-1187` and both marker files exist.
- **Extraction issues:** Make sure the `chrome-win` folder is directly inside `chromium-1187`, not nested in an additional folder.
- **Permission errors:** Run PowerShell as Administrator if you encounter access issues.