VS Code Extension
The RoostGPT VS Code extension lets you generate tests for your code with RoostGPT in a single click, straight from your VS Code workspace.
Download:
https://marketplace.visualstudio.com/items?itemName=RoostGPT.roostgpt
Installation:
To use the RoostGPT VS Code extension, you must have VS Code installed on your system, along with any dependencies required to run your code, since RoostGPT will often run its generated test code in order to improve it.
You can download and install VS Code for your operating system from https://code.visualstudio.com.
After VS Code is successfully installed on your system, you can download the RoostGPT extension from the VS Code Marketplace: simply search for RoostGPT in the Extensions view. Alternatively, you can download and install the extension from the marketplace link above.
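If you prefer the command line, you can also install the extension with VS Code's built-in code CLI, using the extension ID from the marketplace URL above:

```bash
# Install the RoostGPT extension from the VS Code Marketplace
code --install-extension RoostGPT.roostgpt
```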
Once the extension is installed, you are ready to generate tests for your code.
Configuration:
Once the extension has been successfully installed, you can configure it to start generating tests. This involves providing the information required for test generation.
To configure the extension, open the extension settings for RoostGPT: you can search for it in the Extensions view, or find it in your list of installed extensions.
You can then set the values according to your workspace and needs. The following fields are required in all cases:
Required Fields
- Roost Token: You can get your Roost token from the My Profile page on app.roost.ai. If you don't have a Roost token, you can sign up for a free trial with your organization email and try out RoostGPT for free.
- Roost Domain: Enter the domain in which the provided Roost token is active. The default value is app.roost.ai.
- Timeout: Set the timeout for test generation (in hours). The default value is 1.
- Language: Select the language your workspace/source code is written in. Currently supported languages are Java, Python, Go, NodeJS, and C#.
- Board Type: Select the type of scrum/kanban board. This is required for functional tests; set it to none for other test types. Supported values are none, jira, and azure boards. The default value is none.
- Iterations: Set the number of iterations for test generation. If you provide a value greater than 0, RoostGPT runs the generated test cases, passes any errors that occur to the AI model, updates the test cases in the same file, and runs them again, repeating up to the given number of iterations and stopping early if a run succeeds. The default value is 2.
- Telemetry: Set to false if you do not want to send telemetry data to Roost. The default value is true.
- Generative AI model: Select which model to use for test code generation. Supported options are OpenAI, Google Vertex, Azure OpenAI, and hosted open-source models (Llama 2 and StarChat).
- ProvideInput: Set to true if you want to provide your own input before test generation; you will be prompted for it before generation begins. The default value is false.
- MaxDepth: Specifies how deep into the workspace the extension traverses when scanning for files to generate tests for. By default, it traverses all subdirectories.
- EnvFile: The path to an env file containing user environment variables, which will be taken into account during test generation.
AI model Details:
- If the Generative AI model is OpenAI:
- OpenAI API Key: Provide your OpenAI API key if you plan on using OpenAI as the generative AI model to generate your test cases.
- OpenAI API model: Provide the AI model the provided API key has access to. Supported models are gpt-4, gpt-3.5-turbo, and gpt-3.5-turbo-16k.
- If the Generative AI model is Google Vertex:
- Vertex Bearer Token: Provide your Vertex bearer token if you plan to use Google Vertex as the generative AI model to generate your test cases.
- Vertex Project ID: Provide the ID of your Google Vertex project.
- Vertex Region: Enter the region where your Vertex project is located.
- Vertex Model: Select the Vertex model to be used for code generation. Supported models are text-bison, code-bison, and codechat-bison. The default value is text-bison.
- If the Generative AI model is Azure OpenAI:
- API Key: Provide the API key for your Azure OpenAI model.
- API Endpoint: Provide the API endpoint where your Azure OpenAI model is hosted.
- Deployment Name: Enter the deployment name for your Azure OpenAI model.
- If the Generative AI model is Open Source:
- Open Source Model Endpoint: Provide the endpoint for the open-source model if you plan on using one of the Roost-provided open-source models. It must be in the format 'http://MODEL_IP:5000/generate', where MODEL_IP is the IP address of the instance where the model's container is running.
- Open Source AI model: Select the AI model to be used for test generation. Supported models are meta-llama/Llama-2-13b-chat and HuggingFaceH4/starchat-beta. The default value is meta-llama/Llama-2-13b-chat.
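Taken together, these options correspond to entries in VS Code's settings.json. The setting IDs below are illustrative only (check the RoostGPT settings UI for the authoritative names); a sketch of a configuration using OpenAI might look like this:

```jsonc
{
  // Illustrative setting IDs; confirm the exact names in the RoostGPT settings UI.
  "roostgpt.roostToken": "YOUR_ROOST_TOKEN",
  "roostgpt.roostDomain": "app.roost.ai",
  "roostgpt.timeout": 1,
  "roostgpt.language": "Java",
  "roostgpt.boardType": "none",
  "roostgpt.iterations": 2,
  "roostgpt.telemetry": true,
  "roostgpt.generativeAIModel": "OpenAI",
  "roostgpt.openAIKey": "YOUR_OPENAI_API_KEY",
  "roostgpt.openAIModel": "gpt-4"
}
```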
Test Generation:
Once your extension configuration is complete, you can start using the VS Code extension to generate tests for your workspace. To generate tests, simply right-click a file in your Explorer view and select the type of test you want to generate from the context menu that appears. Note that each test type has some requirements for test generation to start.
The test types currently supported are as follows:
- Unit Tests.
- API Tests using Source Code.
- API Tests using Swagger.
- API Tests using Postman.
- Functional Tests.
- Integration Tests.
Test Requirements
Following are the required fields and other instructions for test generation for each supported test type:
- Unit Tests: To generate unit tests, simply select the directory your file is in; if you right-click a file and select unit test generation, unit tests will be generated for all files in that file's parent directory. No fields beyond the required fields above are needed for unit test generation. Make sure the language set in the extension settings matches the language of your source code. If your language is Python, you can also choose the test framework used for generation from the Advanced section of the extension settings; pytest and unittest are currently supported, and the default is pytest.
- API Tests Using Source Code: To generate API tests using source code, simply right-click the directory or file for which you want to generate the Artillery tests and click the API Tests Using Source Code option. No fields beyond the required fields above are needed for Artillery test generation from source code. Make sure the language set in the extension settings matches the language of your source code.
- API Tests Using Swagger: To generate API tests using Swagger, right-click your swagger.json or swagger.yaml API spec file and select the Generate API Tests Using Swagger option (a sample spec is shown after this list). If you choose any file other than your API spec file, test generation will fail. No fields beyond the required fields above are needed. For API tests, you can also filter which HTTP verbs (such as POST, GET, etc.) are tested by changing the HttpFilters setting in the Advanced section.
- API Tests Using Postman: To generate API tests using Postman, right-click your Postman JSON API spec file and select the Generate API Tests Using Postman option (a sample collection is shown after this list). If you choose any file other than your API spec file, test generation will fail. No fields beyond the required fields above are needed. For API tests, you can also filter which HTTP verbs (such as POST, GET, etc.) are tested by changing the HttpFilters setting in the Advanced section.
- Integration Tests Using Swagger: When generating integration tests, right-click your swagger.json or swagger.yaml API spec file and select the Integration Tests Using Swagger option. If you choose any file other than your API spec file, test generation will fail. After selecting the option, a popup will ask for the type of your Gherkin template: either select File and browse to your Gherkin template file, or choose URL and provide the URL of your Gherkin template (a sample template is shown after this list). No fields beyond the required fields above are needed.
- Integration Tests Using Postman: When generating integration tests, right-click your Postman JSON API spec file and select the Integration Tests Using Postman option. If you choose any file other than your API spec file, test generation will fail. After selecting the option, a popup will ask for the type of your Gherkin template: either select File and browse to your Gherkin template file, or choose URL and provide the URL of your Gherkin template. No fields beyond the required fields above are needed.
- Functional Tests: To generate functional tests, set your board type to either jira or azure; the following details are then also required:
- If the Board Type is Jira:
- Jira Email.
- Jira Token.
- Jira Hostname.
- If the Board Type is Azure:
- Azure Org.
- Azure Token.
- Azure Project.
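For reference, the Swagger-based test types expect a standard Swagger/OpenAPI spec file. A minimal generic swagger.json (not specific to RoostGPT) looks like this:

```json
{
  "swagger": "2.0",
  "info": { "title": "Sample API", "version": "1.0.0" },
  "paths": {
    "/pets": {
      "get": {
        "summary": "List all pets",
        "responses": { "200": { "description": "A list of pets" } }
      }
    }
  }
}
```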
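Likewise, the Postman-based test types expect an exported Postman collection. A minimal generic collection in the standard v2.1 format:

```json
{
  "info": {
    "name": "Sample API",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "List all pets",
      "request": { "method": "GET", "url": "http://localhost:8080/pets" }
    }
  ]
}
```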
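The Gherkin template requested for integration tests is a standard .feature file. A minimal sketch (the feature and steps here are placeholders; use scenarios that match your API):

```gherkin
Feature: Pet listing
  Scenario: List all pets
    Given the API is reachable
    When I send a GET request to "/pets"
    Then the response status code should be 200
```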
Improve and Analyze Generated Tests
After the test generation process is complete, a side panel will open showing all the generated test files. Select the file you want to view using the provided dropdown; if you want, you can also edit the files in the panel itself and save your changes using the provided Save button.
If you want to run the generated tests, you can do so with the Run button in the side panel, which runs the selected test file. Note that all the dependencies required to run the tests must be installed on your local system. For Artillery tests, after you click the Run button you will be prompted to enter the target URL for the tests; if you want to provide one, enter it in the input box. You will then be asked whether you want to upload a .env file to provide environment variables; if so, upload the env file (a sample is shown below).
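A .env file is a plain list of KEY=VALUE pairs, one per line. The variable names below are purely illustrative; use whatever variables your tests actually read:

```
# Hypothetical example values; replace with the variables your tests expect.
TARGET_URL=http://localhost:8080
API_KEY=replace-me
```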
If you are not satisfied with the generated tests and want improvements or changes, you will find a feedback prompt at the bottom of the side panel. Enter the feedback you want to give to the AI model and click the Improve button; this will trigger the test improvement.