To ensure the correctness of HPX, we ship a large variety of unit and regression tests. The tests are driven by the CTest tool and are executed automatically on each commit to the HPX GitHub repository. In addition, we encourage you to run the test suite manually to ensure proper operation on your target system. If a test fails on your platform, we highly recommend submitting an issue on our HPX Issues tracker with detailed information about the target system.
Running tests manually
Running the tests manually is as easy as typing
make tests && make test.
This will build all tests and run them once they have been built successfully.
After the tests have been built, you can invoke separate tests with the help of
the ctest command. You can list all available test targets using
make help | grep tests. Please see the CTest documentation for further details.
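For example, a typical session might look like the following (the test-name pattern is illustrative; use the target names reported by your own build tree):

```shell
# Build all tests, then run the full suite through CTest.
make tests && make test

# List the available test targets.
make help | grep tests

# Run only the tests whose names match a regular expression,
# printing their output when they fail.
ctest --output-on-failure -R tests.unit
```

ctest's `-R` option filters tests by a regular expression, which is convenient for re-running a single failing test without the rest of the suite.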
Running performance tests
We run performance tests on Piz Daint for each pull request using Jenkins. To
run those performance tests locally or on Piz Daint, a script is provided under
tools/perftests_ci/local_run.sh (to be run in the build directory, specifying
the HPX source directory as the argument to the script; if omitted, a default
is used).
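Assuming a configured HPX build tree, the invocation might look like the following (all paths are illustrative placeholders for your own checkout and build directories):

```shell
# From the HPX build directory: run the performance-test driver,
# passing the path to the HPX source directory as its argument.
cd /path/to/hpx-build
/path/to/hpx/tools/perftests_ci/local_run.sh /path/to/hpx
```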
Adding new performance tests
To add a new performance test, you need to wrap the portion of code to benchmark
in a call to hpx::util::perftests_report, passing the test name, the executor
name, and the function to time (which can be a lambda). This facility outputs the
timing results in JSON format, which is needed to compare the results and plot
them. To print them at the end of your test, call
hpx::util::perftests_print_times. For an example of use, see the existing
performance tests in the HPX repository.
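A minimal sketch of such a test might look as follows. The workload, test name, and executor label are illustrative, and the exact header and any additional parameters (such as a repetition count) should be checked against the existing performance tests:

```cpp
// Sketch only: header path and exact signature of perftests_report
// should be verified against the current HPX sources.
#include <hpx/modules/testing.hpp>

#include <cstdint>

// Hypothetical workload to benchmark.
static std::uint64_t fib(std::uint64_t n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main(int, char**)
{
    // Wrap the code to benchmark: test name, executor name,
    // and the function to time (here a lambda).
    hpx::util::perftests_report("fib_benchmark", "no-executor",
        [&] { fib(25); });

    // Print the collected timings in JSON format at the end of the test.
    hpx::util::perftests_print_times();
    return 0;
}
```

The JSON output produced by perftests_print_times is what the CI tooling consumes to compare results across runs and generate plots.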
Finally, you can add the test to the CI report by editing the corresponding CI
configuration: specify the executable name and, in hpx_test_options, the
corresponding options to use for the run.
If you stumble over a bug or missing feature in HPX, please submit an issue to our HPX Issues page. For more information on how to submit support requests or other means of getting in contact with the developers, please see the Support Website page.