This guide aims to get you up to speed quickly with the PractiTest platform, ensuring you are aware of all the options that would make your PractiTest experience great, and save you loads of time in the process!
If you have any questions please make sure you let us know. You can use the widget at the bottom right side of the screen to reach out. We’d be happy to hear from you!
What is PractiTest?
PractiTest is an end-to-end test management platform that lets you organize, run, and visualize all your QA efforts across every testing type: manual, exploratory, and automated.
Using our built-in integrations and robust API library, you can connect all aspects of your testing for a unified process and improved team communication.
Our data organization structure and advanced customizable reports and dashboards allow you to extract valuable insights from your real-time test results. Our efficiency enhancement capabilities eliminate the need to redo tedious work, so you can focus on what matters most: making smarter decisions and releasing better software, faster.
Table of Contents:
- Accessing PractiTest Projects
- PractiTest Modules and Navigation
- Test Library - Creating Tests
- Test Sets & Runs - Running Tests
- Tracking Bugs and Issues
- Requirements Module
- Filters
- Dashboards, Reports and Task Board
- Test Automation Integration
- Personal Notification Settings
Accessing PractiTest Projects
The first screen you will see when logging into PractiTest is your personal account landing page. From this page, you can access the PractiTest projects you are associated with.
To access a PractiTest project, click on its name, and select the module you want to access.
Or, you can use the project drop-down list to access a project.
Once you start actively working with PractiTest, your landing page will also display the items you are working on in the ‘My Stuff’ section.
PractiTest Modules and Navigation
PractiTest has four main modules:
- Requirements - where system requirements and user stories are created and managed
- Test Library - where test cases are created and managed
- Test Sets & Runs - where tests are run and results are recorded
- Issues - where issues, bugs, and defects are managed
You can access each one of the modules from the main navigation bar at the top of the screen.
In addition to the different modules, you can also access the Info center from the main navigation bar. This gives you access to the reporting modules of PractiTest:
- Dashboards
- Reports
- Task Board
From the right side of the main navigation bar, you can access your personal settings, and PractiTest tutorials.
Test Library - Creating Tests
The Test Library is, as its name suggests, a library for your project’s test cases. This is where all the master versions of your test cases will be created, organized, and managed.
Managing and running tests is the core of your testing process. It is important that your tests are properly organized and defined, in order to perform your job in the most effective and efficient way.
Test Library Overview
- Creating a new test - from here you can create a new test in your test library
- Filters section - with filters you can present tests in the main grid based on their attributes/fields. This is where tests are structured into categories, using filtered views. From here, you can access an existing filtered view, or create a new one
- Batch Edit - this option allows you to edit the information of more than one test at a time. Select the tests you want to edit, and click batch edit to edit their information
- Create a TestSet - you can use this option to create test sets directly from the Test Library. Please note that test sets will be discussed in depth later on in this guide
- Add test to TestSet - using this option allows you to add tests to existing test sets in the Test Sets & Runs module
Tests Creation and Management
There are a few different types of test cases that can be created in the Test Library:
- Scripted - manual tests with predefined steps. When choosing a scripted test, you can add the description in the General tab, and a “Steps” tab will appear where you can define the steps. Each step includes a description and an expected result
- Exploratory - manual tests that don’t include predefined steps where users add annotations "on the fly". Exploratory tests include a Test Charter (mission) and Guide points instead of test description
- Automated - there are a few types of automation tests you can create based on different methods you can use to integrate your automation tests results
Creating Scripted Tests
- Test Name - fill in a name for your test. The name field is mandatory, so you’ll need to fill this in before saving
- Test Description & Test Fields - describe your test in the description field, and fill in the fields to categorize it properly. You will later be able to filter your tests based on these fields
- Test Preconditions - if needed, add preconditions for your test by clicking the ‘Test Preconditions’ link
- Attachments - if you want to add attachments, drag and drop them in the designated area at the bottom of the page
- Steps Tab - access the Steps tab to add step information. Each step consists of three fields: name, description, and expected result
- New Step - to add a new step, click here
- Call to Test - this option allows you to call steps that were already defined in another test, into the test you are now creating. To call steps, click here and type in the ID of the existing test
- Traceability Tab - this is where you link your test to Requirements and Stories from your Requirements module to create test coverage for them. From here you can also see issues linked to this test based on previous runs
Comments and Mentions
For every test you can also add comments and mention other teammates using an @ sign. Teammates you mention will be notified by the system via email.
Exploratory Tests
Exploratory Tests can be created from the Test Library and from the Test Sets & Runs section. Exploratory Tests have a Title, a Charter (the mission for the session), and Guide Points. In addition, Exploratory test cases can have linked requirements and issues, and therefore provide users with full traceability. When running exploratory tests, annotations of multiple types can be added on the fly, recording the tester's findings during the session. From an Exploratory test run, you can then also create a scripted test and reuse it in test sets. Learn more about it here.
Importing Tests from Excel
To access the import page, click on Settings - Import & Export. Then, click on ‘Import tests’.
In the import page, you need to map the columns of your spreadsheet to the fields shown on the page.
For example, if test names in your spreadsheet are listed in column ‘C’, delete the default letter ‘A’ and replace it with ‘C’.
Please note: if your spreadsheet contains a row of headers, you will need to tick the ‘ignore first row’ box.
You can find the full import guide here.
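To make the column-mapping idea concrete, here is a minimal sketch of what the import page does behind the scenes. The field names, column letters, and sample rows are all hypothetical; the real import supports many more fields and formats.

```python
import csv
from io import StringIO

# Hypothetical mapping, as configured on the import page:
# PractiTest field -> spreadsheet column letter
column_mapping = {"name": "C", "description": "A", "status": "B"}

def col_index(letter: str) -> int:
    """Convert a spreadsheet column letter ('A', 'B', ... 'AA') to a 0-based index."""
    index = 0
    for ch in letter.upper():
        index = index * 26 + (ord(ch) - ord("A") + 1)
    return index - 1

# A tiny stand-in for a spreadsheet with a header row (hence 'ignore first row').
sheet = StringIO("Desc,Status,Name\nLogin works,Ready,Verify login\n")
rows = list(csv.reader(sheet))[1:]  # skip the header row

# Build one test record per data row, using the configured mapping
tests = [
    {field: row[col_index(letter)] for field, letter in column_mapping.items()}
    for row in rows
]
print(tests)
```

The same logic explains why a header row must be skipped: otherwise the column titles themselves would be imported as a test.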
Test Sets & Runs - Running Tests
Test Sets & Runs is where test execution takes place, and where you record test results. Test Sets, also known as Test Suites, are groups of tests planned to run together for a certain purpose.
Test Sets allow you to re-use your tests as many times as you need, in order to make your process more efficient.
Accessing a Test Set
- Test Sets Filter Views - on the left hand side of the module, you can find all filter views that currently exist in your project. To narrow down the list of ‘Test Sets’ you see in the main view, access a filter with the scope of tests relevant to you by clicking on it. For example, if you were assigned with running regression for Sprint 23, find the ‘Regression’ filter, under ‘Sprint 23’, and click on it
- Accessing a Test Set - after accessing the filtered view, find the Test Set that is assigned to you, and click on its name to access it.
Note: If you don’t have existing filters in your project, learn how to create them here.
Test Set Options
- Fast Filters - click on ‘Filters’ to enable fast filters
- Filter by grid fields - for example, filter by the 'assigned to' field to show only test instances assigned to you
- Grid Columns - click here to customize the columns in the grid
- Add Test Instances - here you can add tests to the test set as test instances. Use the Test Library filters, shown on the left hand side, to narrow down the view
- Running a Test Instance - to start executing a test, click on ‘Run’ next to it
- Fast Runs - to quickly assign test instances with a status, select them from the grid and click 'Fast run'
When clicking the 'Run' button next to a new instance, you will be redirected to a new test run directly. If a test instance has been run previously, you will first get redirected to the instance page, showing a summary of all its previous runs.
- Runs History - here you can see a summary of all the previous runs of this test instance
- Save & Run Now - starts a new run for this test instance
Test Runs
- Steps Information - on the left hand side of the run screen, you will see the steps information that you will need to verify when running the test, including step name, description, and expected result
- Actual Results - the right hand side is where you record the results of the steps. Type in the actual result of each step, select a status for it from the status bar, and add attachments if needed
- Fail & Issue - use the Fail & Issue option to report a bug directly from the step if needed. Using this option, the step will be set to Fail, and the description of the bug will be prefilled with the information of the steps that led you to find the bug, saving you time filling this out again
- Link Existing Issue - if the bug you found was already reported previously, use this option to link it to a step
- Pass All Steps - assign all steps in the test run with the status 'PASSED'
- Steps Actions - to modify steps information, add additional steps, or delete a step, click 'Actions'
- Update Original Test - when making changes to your test, you can use this option to update the original test in the test library. Using it will ensure your changes apply for future runs of your test. Otherwise, your changes will only apply for this specific test run.
- Run Next Test - to proceed to running the next test in the queue, click ‘Run next test’
- Go Back to Test Set - to go back to the Test Set, click on the Test Set name in the breadcrumbs
Tracking Bugs and Issues
The Issues module allows you to see the bugs and issues you have recorded in PractiTest during your testing. When using PractiTest as a complete ALM, you can create issues directly in PractiTest, with all their information stored in the Issues module.
When you create an issue, you can check if you already have a similar issue by clicking the display similar issues link after you start typing the issue name. You can also check linked tests and requirements through the Traceability tab.
For every new issue you report, make sure you fill in the relevant fields with the issue's information, and assign it to the relevant member of the team for a fix using the 'Assigned to' field.
Comments and Mentions
Just like with tests, you can add comments to your issues and mention other teammates using an @ sign. Teammates you mention will be notified by the system via email.
Working With External Issues Trackers
When PractiTest is integrated with a third-party issue tracker such as Jira or Azure DevOps, issues you report from test runs will be created directly in the external system. A copy of the same issue will also be created in PractiTest's Issues module, and the information will be synced between the two systems. Changes made to the Name, Description, and Status fields of the issue ticket in the external system will be reflected in the PractiTest copy.
Requirements Module
The requirements module is where you create and manage your system requirements and user stories. To create coverage for them, use the traceability tab to link them to tests. You can either create and define your requirements/user stories directly in PractiTest, or import them from an Excel spreadsheet.
Importing Requirements From Excel
The import process for requirements is similar to the test import process. Please note that when you import requirements, you can import the linkage to tests as well, using the traceability field. To access requirements import, navigate to Settings - Import & Export, then select ‘Import Requirements’.
Creating Requirements in PractiTest
When creating a new requirement in the system, you will first need to fill in the information of the requirement in the Name and Description fields. After you fill in the requirements information, you can proceed by linking it with tests to cover the requirement, from the traceability tab.
Syncing Requirements From a Third Party System
If you are working with an integrated third-party system like Jira, you can sync tickets from the other system into PractiTest, and link them to tests from the traceability tab to ensure coverage.
You can find additional instructions for setting up and using the Jira integration here.
If you are working with another system, you can find the details in the dedicated integration help guide page. You can find all integration guides here.
Requirements Status Field
The requirements status field will be populated automatically based on the status of the tests you link to the requirement.
Below you can find the criteria for each status:
NOT COVERED – There are no tests linked to this requirement.
NO RUN – There is at least one test linked to the requirement, but none of the linked tests have run.
STARTED – At least one linked test has started running, and none of the linked tests failed or was set to Blocked.
PASSED – All linked tests ran and passed.
BLOCKED – At least one of the linked tests was set to Blocked, and none of the linked tests failed.
FAILED – At least one of the linked tests failed.
N/A – All tests linked to this requirement are marked as N/A.
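The criteria above can be read as a simple precedence rule. This sketch expresses that logic in Python; it is an illustration of the criteria as listed, not PractiTest's actual implementation, and mixed edge cases (e.g. some tests PASSED while others are N/A) are simplified to STARTED.

```python
def requirement_status(test_statuses):
    """Derive a requirement's status from the statuses of its linked tests.

    'NO RUN' marks a linked test that has not been executed yet.
    Illustrative sketch of the documented criteria, not PractiTest code.
    """
    if not test_statuses:
        return "NOT COVERED"          # no tests linked at all
    s = set(test_statuses)
    if s == {"NO RUN"}:
        return "NO RUN"               # linked tests exist, none ran
    if "FAILED" in s:
        return "FAILED"               # any failure wins
    if "BLOCKED" in s:
        return "BLOCKED"              # something blocked, nothing failed
    if s == {"PASSED"}:
        return "PASSED"               # everything ran and passed
    if s == {"N/A"}:
        return "N/A"                  # everything marked N/A
    return "STARTED"                  # execution started, mixed results so far
```

For example, a requirement with one passed test and one test still marked NO RUN would show as STARTED, while adding a single failed test would flip it to FAILED.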
Filters
Filters are a powerful tool that allows you to create customized views in each of PractiTest’s main modules, based on dynamic queries and field information. By creating customized views, you can narrow the list of items you see in each module to a specific category of entities, instead of scrolling through an endless list.
In addition to convenience, filters are also used for reporting purposes, allowing you to create reporting items based on a filter’s content.
For each filter you create, you can decide on the columns that will show in the filtered view, which you can also sort the view by.
To create a new filter in one of the modules, click on ‘+New Filter’ from the left side of the screen.
After adding a name for your filter, add an ‘AND’ Query from the filter criteria section, and select the field you want to filter by from the dropdown menu.
At the bottom of the screen, drag the fields you want to show on the grid from ‘Available’ to ‘Displayed’.
Auto Filters
In most cases, when you want to filter by list type fields (single select fields), you should use the auto filter option. This option will allow you to create sub-filters automatically based on all possible field values (entries), saving you a lot of time creating them all manually. When creating a new filter, tick the ‘Auto filter’ box and select a field from the filter criteria section to create automatic filters based on the field.
You can also decide to create a hierarchical automatic filter tree by adding an additional list field as a child auto filter.
Cross Filters
Cross filters allow you to create a filter in one module based on a filter in another module. For example, if you want to see all issues related to a certain sprint, and you have a relevant 'Sprint' filter in your Test Sets & Runs module, you can create a cross filter in the Issues module and base it on the sprint filter from the Test Sets & Runs module.
For more information and examples about filters visit this page.
Dashboards, Reports and Task Board
All reporting modules can be accessed from the 'Info Center' section of the main navigation.
Dashboards
The Dashboards module is where you can view and create your project dashboard graphs. Each project can have an unlimited number of dashboard tabs, and each tab can contain up to 8 items. Dashboards display a high-level picture of your project's progress.
Dashboard Items
When creating a new dashboard item, first select the entity you want to focus on: Tests, Test Sets, Instances, Runs, Issues, or Requirements. To focus on test execution data, make sure you select the Instances entity; to show test automation data, select the Runs entity. After you select the entity, select the item type you would like to create, and click ‘Continue’. Next, select the filter you want to focus on from the selected entity.
For example, if I want to create an item that will display test execution data from my last cycle (cycle 3) - I will create an instance based item and the ‘Cycle 3’ filter.
By ticking the box next to ‘After saving stay on this page to preview the Dashboard Item’, you can preview your item in this same page, and make changes to it until you are satisfied with the result.
Test, Test Set, and Last Run Fields in Instance-Based Items
When creating items based on instances, you can choose to show data from fields linked to tests, test sets, and runs in your dashboard item. In the field drop-down list, fields prefixed with an entity name followed by a colon come from that entity's level; fields without a prefix belong to the instance level.
For example, if I want to show information from a ‘Feature’ field linked to my tests in the test library - I will select the field ‘Tests: Feature’ as my X field.
See the full guide for dashboards here.
Cloning Dashboard Tabs
You can reuse dashboard items you already created by cloning a dashboard tab. For example, to use a dashboard tab you created for your previous cycle, for your current cycle, clone the previous cycle’s tab, and simply edit the filters that the dashboard items are focused on to the new cycle’s filters.
External Dashboards
The external dashboard feature allows you to share your dashboard tabs as an external URL, with your stakeholders and non-PractiTest users.
See more information about external dashboards here.
Reports
While dashboards provide you with a high level view of your project, you can utilize reports to get a deep level look into your data. There are two main types of reports available for all modules, and a few additional types depending on the module you want to create the report for.
Like with dashboards, test execution reports should be created based on the instance entity, and test automation execution reports should be created based on the run entity.
When creating a new report, make sure you select a filter from the relevant module to display information for a specific slice of your information. For example, to create a report based on information from ‘Cycle 3’, I will create an instance report with the ‘Cycle 3’ filter selected.
- Tabular summary reports - Excel-based reports. Using tabular summary reports, you can fully customize the data displayed in the report, deciding which fields and which charts to display. Tabular with steps reports are identical to tabular summary reports, but also include steps data and issues data
- Detailed reports - PDF-based reports that provide detailed information about the selected entity and scope
Scheduling Reports
When creating a report, you can decide to schedule it to run on a daily, weekly or monthly basis. The report will be regenerated with the settings you define for it based on the selected interval, and will be sent by email to users you select from the reports settings.
See more information about the reports module here, and about the report samples here.
Task Board
PractiTest's Task Board allows you to create task lane views that help you prioritize, share progress, and get a cross-module view of your entities. To access the Task Board, click on the Info Center from the main navigation, then click 'Task Board'.
Adding items to Task Board
Adding individual entities to a lane
To insert real data and entities from your project into the Task Board, go to any entity in your project (issue, test, requirement, instance, or test set) and, on the right side under Task Management, use the “Add to Task Board” drop-down to insert the entity you are viewing into the desired lane of your task board.
Batch add entities to a lane
To add several entities to a lane in one operation, select the entities from the relevant grid, then click 'Batch Edit'. Then, tick the box next to 'Add to task board' and select the relevant lane from the drop-down list.


Test Automation Integration
With PractiTest, you can run, control, view, and manage your automated testing alongside your manual testing, giving you complete visibility into your entire process in one place. Below you can find the details of the different methods available for integrating your automated testing into PractiTest.
API
We have a fully featured JSON-based REST API that covers all modules of the platform and can integrate PractiTest with any automated testing framework. Our API allows you to fetch details of test sets from the system and then push back the results of your automated runs into PractiTest. Our API documentation is here.
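As a rough illustration of the "push results back" half of that workflow, the sketch below builds the URL and JSON body for reporting one automated run. The endpoint path, project ID, and payload field names here are assumptions for illustration only; always check the official API documentation for the exact request shapes and authentication requirements.

```python
import json

# Illustrative values -- replace with your own, per the API docs.
BASE_URL = "https://api.practitest.com/api/v2"
PROJECT_ID = 4566  # hypothetical project id

def build_run_request(instance_id: int, exit_code: int):
    """Build the URL and JSON body for pushing one automated run result.

    The payload shape below is an assumed example, not a verified schema.
    """
    url = f"{BASE_URL}/projects/{PROJECT_ID}/runs.json"
    body = {
        "data": {
            "type": "instances",
            "attributes": {
                "instance-id": instance_id,
                "exit-code": exit_code,  # 0 = passed, non-zero = failed
            },
        }
    }
    return url, json.dumps(body)

url, payload = build_run_request(instance_id=123, exit_code=0)
print(url)
print(payload)
```

In practice you would POST this payload with your HTTP client of choice, using the authentication method described in the API documentation.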
FireCracker
FireCracker is a PractiTest-developed tool that allows you to integrate any CI/CD framework and any XML test result file with your PractiTest project. FireCracker allows you to parse and modify XML report files and upload them into PractiTest easily and automatically. See the step-by-step guide for using FireCracker here.
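To give a feel for the kind of file involved, here is a minimal JUnit-style XML report, one common format produced by CI frameworks, parsed with Python's standard library. This only illustrates what "parsing an XML test result file" means; the exact schemas FireCracker accepts and how it maps fields are covered in its guide.

```python
import xml.etree.ElementTree as ET

# A minimal JUnit-style report (hypothetical test names and results).
report = """
<testsuite name="smoke" tests="2" failures="1">
  <testcase name="test_login" time="0.4"/>
  <testcase name="test_checkout" time="1.2">
    <failure message="expected 200, got 500"/>
  </testcase>
</testsuite>
"""

root = ET.fromstring(report)
# A test case passes unless it contains a <failure> child element.
results = [
    (case.get("name"), "FAILED" if case.find("failure") is not None else "PASSED")
    for case in root.iter("testcase")
]
print(results)
```

A tool in this role walks the report, extracts each test's name and outcome, and maps them onto run results in the target project.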
xBot
xBot is a PractiTest internal automation framework that allows users to run (or initiate a run by scheduling) automated test scripts on a remote machine from the PractiTest UI. See the step-by-step guide for using xBot here.
Personal Notification Settings
You can control which email notifications you would like to receive from the system in your personal settings. To access your personal settings, click on your avatar on the right hand side of the main navigation, then click 'Personal Settings'.
In the 'Personal Email Notifications' section at the bottom of the page, tick the boxes next to entities’ events you want to get notified about. For example, if you want to be notified when a test set is assigned to you, tick the 'When a TestSet or Test Instance is assigned to me' box.