Log in at to access the GPT feature.

Offerings:

RoostGPT UI:
  - Triggers from local workspace: No
  - GitOps (GitHub, GitLab, Bitbucket): Yes
  - Notifications (Slack/Teams): Yes

RoostGPT CLI:
  - Triggers from local workspace: Yes
  - GitOps (GitHub, GitLab, Bitbucket): Yes
  - Notifications (Slack/Teams): No

RoostGPT VS Code Extension:
  - Triggers from local workspace: Yes
  - GitOps (GitHub, GitLab, Bitbucket): No
  - Notifications (Slack/Teams): No


Installation Command

RoostGPT binary is available at

The RoostGPT binary offers unmatched flexibility during test generation, letting you trigger test generation both locally and through Git using one simple .env file.

Add the "roostgpt-<linux|macos|win.exe>" binary to your PATH environment variable.

For Linux/macOS, you can run the following command to install roostgpt:

curl | bash -
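As a sketch, putting the binary on your PATH on Linux/macOS might look like the following (the `~/bin` directory and the binary's download location are assumptions; adjust the paths to your setup):

```shell
# Create a personal bin directory and put it on PATH (hypothetical layout).
mkdir -p "$HOME/bin"
# cp ./roostgpt-macos "$HOME/bin/roostgpt" && chmod +x "$HOME/bin/roostgpt"
export PATH="$HOME/bin:$PATH"   # add this line to ~/.bashrc or ~/.zshrc to persist

# Verify the directory is now on PATH:
echo "$PATH" | tr ':' '\n' | grep -qx "$HOME/bin" && echo "PATH updated"
```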

CLI Commands

./roostGPT-macos   (for macOS)
./roostGPT-win.exe (for Windows)

These commands run the RoostGPT CLI on your operating system.

Instructions for creating .env file

An easy way to download the .env file is to use

The .env file should be in the same folder path as RoostGPT binary.
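As an illustration of the expected layout (the file names below are placeholders, not real downloads):

```shell
# Illustration only: the .env file must sit alongside the RoostGPT binary.
mkdir -p roostgpt-demo && cd roostgpt-demo
touch roostgpt-macos .env          # placeholders standing in for the real files
test -f .env && test -f roostgpt-macos && echo "layout ok"
```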

Here is what the content of your .env file will look like:

# GIT env vars
GIT_TYPE=                    # Default is github; Supports [github, gitlab, azure, bitbucket, local]
HOSTED_TYPE=                 # Default is cloud; Supports [cloud, hosted]
GIT_HOSTED_URL=              # Required if HOSTED_TYPE=hosted
USE_SSH=                     # Optional; Default is false; Supports [true, false]; SSH based auth for Git
LOCAL_PROJECT_PATH=          # Required if GIT_TYPE is local, Path to your workspace.
SOURCE_GIT_CLONE_URL=        # Optional; the source repo clone URL
SOURCE_GIT_TOKEN=            # Required
SOURCE_OWNER_NAME=           # Required
SOURCE_REPO_NAME=            # Required
SOURCE_REPO_ID=              # Required for GIT_TYPE=gitlab
SOURCE_PROJECT_NAME=         # Required for GIT_TYPE=azure
SOURCE_PROJECT_ID=           # Required for GIT_TYPE=bitbucket
SOURCE_BASE_BRANCH=          # Required
SOURCE_RELATIVE_DIRECTORY=   # Optional; the relative path of the repository
SAME_TARGET_DETAIL=          # Optional; Default true, Supports [true, false]
DESTINATION_GIT_CLONE_URL=   # Optional; the destination repo clone URL
DESTINATION_REPO_ID=         # Required if SAME_TARGET_DETAIL=false and GIT_TYPE=gitlab
DESTINATION_PROJECT_ID=      # Required if SAME_TARGET_DETAIL=false and GIT_TYPE=bitbucket

# Open AI env vars
OPENAI_API_MODEL=            # Optional; Default is gpt-4
OPENAI_API_KEY=              # Required if AI_TYPE=openai

# Vertex AI env vars
VERTEX_FINE_TUNE=            # Optional; Default is false, Supports [true, false]
VERTEX_PROJECT_ID=           # Required if AI_TYPE=vertexai
VERTEX_REGION=               # Required if AI_TYPE=vertexai and VERTEX_FINE_TUNE=true
VERTEX_BEARER_TOKEN=         # Required if AI_TYPE=vertexai
VERTEX_MODEL=                # Required if AI_TYPE=vertexai; Supports [text-bison, code-bison, codechat-bison]

# Azure Open AI env vars
AZURE_OPENAI_ENDPOINT=       # Required if AI_TYPE=azure_open_ai
AZURE_DEPLOYMENT_NAME=       # Required if AI_TYPE=azure_open_ai
AZURE_OPENAI_KEY=            # Required if AI_TYPE=azure_open_ai
AZURE_OPENAI_VERSION=        # Optional; Default is 2023-12-01-preview

# Open source env vars
OPEN_SOURCE_MODEL_ENDPOINT=  # Required if AI_TYPE=open_source_ai
OPEN_SOURCE_MODEL=           # Optional; Supports [meta-llama/Llama-2-13b-chat, HuggingFaceH4/starchat-beta]

# Sagemaker model env vars
SAGEMAKER_MODEL_ENDPOINT=    # Required if AI_TYPE=sagemaker_model

# Claude AI env vars 
CLAUDE_AI_MODEL=             # Required if AI_TYPE=claude_ai
CLAUDE_AI_API_KEY=           # Required if AI_TYPE=claude_ai

# Advanced AI env vars
AI_TEMPERATURE=              # Optional; Default value is 0.6

# Jira board env vars
JIRA_HOST_NAME=              # Required if TEST_TYPE=functional and BOARD_TYPE=jira
JIRA_EMAIL=                  # Required if TEST_TYPE=functional and BOARD_TYPE=jira
JIRA_TOKEN=                  # Required if TEST_TYPE=functional and BOARD_TYPE=jira
JIRA_ID=                     # Optional if TEST_TYPE=functional and BOARD_TYPE=jira

# Azure board env vars
AZURE_ORG=                   # Required if TEST_TYPE=functional and BOARD_TYPE=azure
AZURE_PROJECT=               # Required if TEST_TYPE=functional and BOARD_TYPE=azure
AZURE_TOKEN=                 # Required if TEST_TYPE=functional and BOARD_TYPE=azure
AZURE_WORK_ITEM_ID=          # Optional if TEST_TYPE=functional and BOARD_TYPE=azure

# Log env vars
LOG_SOURCE=                  # Optional; Default is elks
LOG_SOURCE_PATH=             # Optional; the log file path
LOG_ELASTICSEARCH_URL=       # Optional; the Elastic search URL
LOG_ELASTICSEARCH_USER=      # Optional; the Elastic search user name
LOG_ELASTICSEARCH_TOKEN=     # Optional; the Elastic search token
LOG_ELASTICSEARCH_API_KEY=   # Optional; the Elastic search api key

# Behavioural test cases env vars
BEHAVIORAL_TEST_TYPE=        # Optional; Supports [gherkin]
BEHAVIORAL_TEST_SOURCE=      # Optional; Supports [file, gitpath, url]
BEHAVIORAL_TEST_FILE_PATH=   # Optional; path of the source file if source is file/gitpath. Relative path in case of gitpath.
BEHAVIORAL_TEST_URL=         # Optional; URL of the source file if source is url

# API Spec env vars
API_SPEC_TYPE=               # Optional; Supports [swagger, postman]
API_SPEC_SOURCE=             # Optional; Supports [file, gitpath, url]
API_SPEC_FILE_PATH=          # Optional; path of the source file if source is file/gitpath. Relative path in case of gitpath.
API_SPEC_URL=                # Optional; URL of the source file if source is url

# API test env vars
HTTP_VERBS_FOR_TESTING=      # Optional; Default is "get,post,put,patch,delete"; the HTTP methods to test for all APIs, as comma-separated values
REGEX_HTTP_ENDPOINTS_FOR_TESTING= # Optional; regex string matching the specific endpoints to test. If empty, all endpoints will be tested.

# License env vars
ROOST_DOMAIN=                # Optional; Default is
ROOST_TOKEN=                 # Required, the Roost token
TELEMETRY=                   # Optional; Default is true; Supports [true, false]; sends telemetry data to Roost (no private information is shared)

# Additional vars
TEST_NAME=                   # Optional; Default is roost-test
ROOST_DIR=                   # Optional; Default is /var/tmp/Roost/RoostGPT
LANGUAGE=                    # Optional; Default is java; Supports [java, go, python, csharp, nodejs]
AI_TYPE=                     # Optional; Default is openai; Supports [openai, vertexai, azure_open_ai, open_source_ai, sagemaker_model, claude_ai]
PACKAGES_TO_SCAN=            # Required for LANGUAGE=java; the package to scan and generate tests for, e.g. com.demo.sample
ITERATION=                   # Optional; Default is 1
TIMEOUT=                     # Optional; Default is 1 hour
TEST_TYPE=                   # Optional; Default is unit; Supports [unit, functional, api-spec-test, integration]
RETRIGGER=                   # Optional; Default is false, Supports [true, false]
TRIGGER_ID=                  # Unique id to identify multiple triggers; Default is epoch
BOARD_TYPE=                  # Optional; Default is jira; Supports [jira, azure, none]
GIT_PR_URL=                  # The Git PR URL  of the generated test
MAX_DEPTH=                   # Optional; Default is -1; if MAX_DEPTH is -1, all subdirectories are traversed; otherwise, the maximum directory depth to search
TEST_FRAMEWORK=              # Optional; Default is "pytest" for Python, "gotest" for Golang, "JUnit4" for Java, "nunit" for CSharp, "jest" for Node.js, "postman" for Postman test-script. Supports [pytest, unittest] for Python, [gotest] for Golang, [JUnit4, JUnit5] for Java, [nunit] for Csharp, [jest, mocha] for Node.js, [postman, artillery, rest-assured, karate] for [test_type: api-spec-test] or [test_type: integration]
FUNCTIONS_TO_TEST=           # Optional;  list of function names to be tested, separated by comma. Name to be specified as module.[class.]method OR module.function
USE_ASSISTANT=               # Optional: Use Assistant feature in openai. Default:false

# Improve test env vars
IMPROVE_TEST=                # Optional; Default is false, Supports [true, false]
FILE_PATH=                   # Required if IMPROVE_TEST=true
USER_CONTENT=                # Required if IMPROVE_TEST=true
TESTSCRIPT_ENDPOINT=         # Required if IMPROVE_TEST=true and TEST_FRAMEWORK=postman
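For orientation, a minimal .env for generating unit tests from a local workspace with OpenAI might look like the sketch below. All values are placeholders, and the exact set of required fields depends on your GIT_TYPE, AI_TYPE, and test type:

```shell
# Sketch only -- placeholder values, not a working configuration.
GIT_TYPE=local
LOCAL_PROJECT_PATH=/home/user/my-project   # hypothetical workspace path
AI_TYPE=openai
OPENAI_API_KEY=your-openai-key             # placeholder
ROOST_TOKEN=your-roost-token               # placeholder
LANGUAGE=python
TEST_TYPE=unit
TEST_FRAMEWORK=pytest
```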

VS Code Extension

The RoostGPT VS Code extension allows you to generate tests for your code using RoostGPT with just a click, straight from your VS Code workspace.



To use the RoostGPT VS Code extension, you must have VS Code installed on your system, along with any dependencies required to run your code, since RoostGPT will often run its generated test code in order to improve it.

You can download and install VS Code for your operating system here.

Once VS Code is successfully installed, download the Roost GPT VS Code extension from the VS Code Marketplace: simply search for Roost GPT in the extension store. Alternatively, you can download and install the extension from here.

Once the extension is installed, you are ready to generate tests for your code.


Once the extension has been successfully installed, you can configure it to start generating tests. This involves providing the information required for test generation.
To configure the extension, open the extension settings for Roost GPT; you can search for it in the extension store or find it in your list of installed extensions.

You can then set the values according to your workspace and needs. The following fields are required in every case:

Required Fields

Test Generation:

Once your extension configuration is complete, you can start using the VS Code extension to generate tests for your workspace. To generate tests, simply right-click a file in the Explorer and select the type of test you want from the context menu that appears. Note that each test type has its own requirements for starting test generation.
The test types currently supported are as follows:

Test Requirements

Following are the required fields and other instructions for test generation for each supported test type:

Improve and Analyze Generated Tests

After the test generation process completes, a side panel opens showing all the generated test files. Select the file you want to view from the provided dropdown; you can also edit files in the panel itself and save your changes with the provided save button.
To run the generated tests, use the run button in the side panel; this runs the selected test file. NOTE that all dependencies required for running the tests must be installed on your local system. For Artillery tests, after you click the run button you will be prompted to enter a target URL for the tests; if you want to provide one, enter it in the input box. You will then be asked whether you want to upload a .env file to supply environment variables; if so, upload it there.
If you are not satisfied with the generated tests and want improvements or changes, use the feedback prompt at the bottom of the side panel: enter the feedback you want to give the AI model and click the improve button to trigger test improvement.

Public SaaS

RoostGPT UI:
Access the Public SaaS at


RoostGPT allows you to automate test generation against your code repository.

Below is the UI structure of RoostGPT, which has two header tabs: Test and Events.

View of the Test Tab


Add Test

Clicking Add Test opens the page below, which has 5 sections:

  1. Provide a Test Name.
  2. Select the Test Type: Unit Tests, Functional and Non-Functional Tests, API (using Swagger), API (using source code), and Integration Tests.
  3. Choose the GenAI Model (OpenAI or VertexAI) and enter its token. After token verification, model-specific details become available for selection:
    • For OpenAI, you can choose GPT-3.5 or GPT-4.
    • For Vertex AI, you need to specify the Vertex project ID, region, and Vertex model.
  4. Select the SCCS (Source Code Repository) from those available:
    • GitHub (cloud and self-hosted),
    • GitLab (cloud and self-managed),
    • Azure DevOps, and
    • Bitbucket (cloud and self-hosted).


  After the SCCS token verification, you can provide the code repo, branch, language, and version.
    • Languages supported are Java, Python, C#, and Go.
    • Optional integration with ticketing tools like Jira and Azure DevOps is available.
    • Jira requires the email address associated with your Jira account, the Jira hostname, and an access token for your Jira account.
    • Azure DevOps requires the Organization Name associated with your Azure DevOps account, an access token for authenticating with Azure DevOps, and the Azure DevOps Project Name.


  5. In the Advanced section, a Timeout (in hours) can be specified: the duration for which the test remains in triggered mode.

Clicking the Save button saves your test, which then appears in the Test header section. Clicking the download icon downloads the configuration for the test.

Workflow View

This shows the workflow of your test, with the details you filled in while adding it.


Events View

The Events view contains the status of all triggered tests. The Event Status filter shows whether each event is In Progress, Timed Out, Aborted, Completed, or Failed. The table fields show the test name with the event status and status icons, along with repo information, test type, creation and completion times, user information, and action icons for re-triggering the test, viewing logs, and viewing insights.