Quick start#

The following steps will help you get started with HPX. Before you begin, make sure you have all the necessary prerequisites listed in Prerequisites. After Installing HPX, you can run the simple Hello, World! example. Writing task-based applications explains how to get started writing task-based code with HPX. If you already use other parallelism APIs (such as OpenMP, MPI or Intel Threading Building Blocks (TBB)) and would like to convert your code to HPX, refer to our Migration guide.

Installing HPX#

The easiest way to install HPX on your system is to choose one of the options below:

  • vcpkg

You can download and install HPX using the vcpkg dependency manager:

$ vcpkg install hpx

  • Spack

Another way to install HPX is using Spack:

$ spack install hpx

  • Fedora

On Fedora, HPX can be installed with dnf:

$ dnf install hpx*

  • Arch Linux

HPX is available in the Arch User Repository (AUR) as hpx too.
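
For example, assuming an AUR helper such as yay is installed:

$ yay -S hpx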

More information and alternative installation methods can be found in Building HPX, a detailed guide that thoroughly explains the ways to build and use HPX.

Hello, World!#

To build an executable with CMake and HPX for this minimal example, create a new project directory containing a CMakeLists.txt file with the following contents:

cmake_minimum_required(VERSION 3.19)
project(my_hpx_project CXX)
find_package(HPX REQUIRED)
add_executable(my_hpx_program main.cpp)
target_link_libraries(my_hpx_program HPX::hpx HPX::wrap_main HPX::iostreams_component)

The next step is to create a main.cpp with the contents below:

// Including 'hpx/hpx_main.hpp' instead of the usual 'hpx/hpx_init.hpp' allows
// the plain C main below to be used directly as the HPX entry point.
#include <hpx/hpx_main.hpp>
#include <hpx/iostream.hpp>

int main()
{
    // Say hello to the world!
    hpx::cout << "Hello World!\n" << std::flush;
    return 0;
}

Then, in your project directory run the following:

$ mkdir build && cd build
$ cmake -DHPX_DIR=</path/to/hpx/installation> ..
$ make all
$ ./my_hpx_program
Hello World!

The program looks almost like a regular C++ hello world with the exception of the two includes and hpx::cout.

  • When you include hpx_main.hpp, HPX makes sure that main actually gets launched on the HPX runtime. So while it looks almost the same, you can now use futures, async, parallel algorithms and more, all of which make use of the HPX runtime with lightweight threads, as illustrated in the sketch after this list.

  • hpx::cout is a replacement for std::cout to make sure printing never blocks a lightweight thread. You can read more about hpx::cout in The HPX I/O-streams component.
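
As a quick illustration of the first point, here is a minimal sketch, reusing the CMakeLists.txt from above, that calls hpx::async directly from the plain main (the 6 * 7 computation is just a placeholder workload):

#include <hpx/hpx_main.hpp>
#include <hpx/future.hpp>
#include <hpx/iostream.hpp>

int main()
{
    // Because hpx_main.hpp redirects main onto the HPX runtime, hpx::async can
    // be used here directly; the returned future is fulfilled by a lightweight
    // HPX thread.
    hpx::future<int> f = hpx::async([]() { return 6 * 7; });
    hpx::cout << "The answer is " << f.get() << "\n" << std::flush;
    return 0;
}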

Caution

Ensure that HPX is installed with HPX_WITH_DISTRIBUTED_RUNTIME=ON to prevent encountering an error indicating that the HPX::iostreams_component target is not found.
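
This option is passed when configuring the HPX build itself (not your project), for example:

$ cmake -DHPX_WITH_DISTRIBUTED_RUNTIME=ON <other options> </path/to/hpx/source>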

Note

When including hpx_main.hpp, the user-defined main gets renamed and the real main function is defined by HPX. This means that the user-defined main must include a return statement, unlike the real main. If you do not include the return statement, you may end up with confusing compile-time errors mentioning user_main or even runtime errors.

Writing task-based applications#

So far we haven’t done anything that can’t be done using the C++ standard library. In this section we will give a short overview of what you can do with HPX on a single node. The essence is to avoid global synchronization and break up your application into small, composable tasks whose dependencies control the flow of your application. Remember, however, that HPX allows you to write distributed applications similarly to how you would write applications for a single node (see Why HPX? and Writing distributed applications).

If you are already familiar with async and future from the C++ standard library, you will find the same functionality available in HPX.

The following terminology is essential when talking about task-based C++ programs:

  • lightweight thread: Essential for good performance with task-based programs. Lightweight refers to smaller stacks and faster context switching compared to OS threads. Smaller overheads allow the program to be broken up into smaller tasks, which in turn helps the runtime fully utilize all processing units.

  • async: The most basic way of launching tasks asynchronously. Returns a future<T>.

  • future<T>: Represents a value of type T that will be ready in the future. The value can be retrieved with get (blocking) and one can check if the value is ready with is_ready (non-blocking).

  • shared_future<T>: Same as future<T> but can be copied (similar to std::unique_ptr vs std::shared_ptr).

  • continuation: A function that is to be run after a previous task has run (represented by a future). then is a method of future<T> that takes a function to run next. Used to build up dataflow DAGs (directed acyclic graphs). shared_futures help you split up nodes in the DAG and functions like when_all help you join nodes in the DAG.

The following example is a collection of the most commonly used functionality in HPX:

#include <hpx/algorithm.hpp>
#include <hpx/future.hpp>
#include <hpx/init.hpp>

#include <iostream>
#include <random>
#include <vector>

void final_task(hpx::future<hpx::tuple<hpx::future<double>, hpx::future<void>>>)
{
    std::cout << "in final_task" << std::endl;
}

int hpx_main()
{
    // A function can be launched asynchronously. The program will not block
    // here until the result is available.
    hpx::future<int> f = hpx::async([]() { return 42; });
    std::cout << "Just launched a task!" << std::endl;

    // Use get to retrieve the value from the future. This will block this task
    // until the future is ready, but the HPX runtime will schedule other tasks
    // if there are tasks available.
    std::cout << "f contains " << f.get() << std::endl;

    // Let's launch another task.
    hpx::future<double> g = hpx::async([]() { return 3.14; });

    // Tasks can be chained using the then method. The continuation takes the
    // future as an argument.
    hpx::future<double> result = g.then([](hpx::future<double>&& gg) {
        // This function will be called once g is ready. gg is g moved
        // into the continuation.
        return gg.get() * 42.0 * 42.0;
    });

    // You can check if a future is ready with the is_ready method.
    std::cout << "Result is ready? " << result.is_ready() << std::endl;

    // You can launch other work in the meantime. Let's sort a vector.
    std::vector<int> v(1000000);

    // We fill the vector synchronously and sequentially.
    hpx::generate(hpx::execution::seq, std::begin(v), std::end(v), &std::rand);

    // We can launch the sort in parallel and asynchronously.
    hpx::future<void> done_sorting =
        hpx::sort(hpx::execution::par(          // In parallel.
                      hpx::execution::task),    // Asynchronously.
            std::begin(v), std::end(v));

    // We launch the final task when the vector has been sorted and result is
    // ready using when_all.
    auto all = hpx::when_all(result, done_sorting).then(&final_task);

    // We can wait for all to be ready.
    all.wait();

    // all must be ready at this point because we waited for it to be ready.
    std::cout << (all.is_ready() ? "all is ready!" : "all is not ready...")
              << std::endl;

    return hpx::local::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::local::init(hpx_main, argc, argv);
}

Try copying the contents to your main.cpp file and look at the output. It can be a good idea to go through the program step by step with a debugger. You can also try changing the types or adding new arguments to functions to make sure you can get the types to match. The signature expected by the then method can be especially tricky to get right (the continuation needs to take the future as an argument).

Note

HPX programs accept command line arguments. The most important one is --hpx:threads=N to set the number of OS threads used by HPX. HPX uses one thread per core by default. Play around with the example above and see what difference the number of threads makes on the sort function. See Launching and configuring HPX applications for more details on how and what options you can pass to HPX.
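
For example, to run the program above on four OS threads (the count of four is purely illustrative):

$ ./my_hpx_program --hpx:threads=4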

Tip

The example above used the construction hpx::when_all(...).then(...). For convenience and performance, it is a good idea to replace uses of hpx::when_all(...).then(...) with dataflow, as in the sketch below. See Dataflow for more details on dataflow.
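
For instance, the join at the end of the example above can be expressed with hpx::dataflow. The following is a minimal sketch (assuming hpx::dataflow is reachable through the same hpx/future.hpp header used above); note that the callable now takes the individual futures rather than a future to a tuple of futures:

#include <hpx/future.hpp>
#include <hpx/init.hpp>

#include <iostream>

// With dataflow the callable receives the individual (ready) futures directly,
// not a future to a tuple of futures as in final_task above.
void joined(hpx::future<int> a, hpx::future<double> b)
{
    std::cout << a.get() << " and " << b.get() << std::endl;
}

int hpx_main()
{
    hpx::future<int> f = hpx::async([]() { return 42; });
    hpx::future<double> g = hpx::async([]() { return 3.14; });

    // dataflow waits until both futures are ready and then calls 'joined',
    // returning a future that represents that call.
    hpx::future<void> done = hpx::dataflow(&joined, std::move(f), std::move(g));
    done.wait();

    return hpx::local::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::local::init(hpx_main, argc, argv);
}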

Tip

If possible, try to use the provided parallel algorithms instead of writing your own implementation. This can save you time and the resulting program is often faster.
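
For example, the following minimal sketch doubles the elements of a vector with hpx::for_each and the parallel execution policy instead of a hand-written loop (the vector size and the doubling operation are arbitrary placeholders):

#include <hpx/algorithm.hpp>
#include <hpx/init.hpp>

#include <iostream>
#include <numeric>
#include <vector>

int hpx_main()
{
    std::vector<int> v(1000000);
    std::iota(v.begin(), v.end(), 0);

    // The parallel execution policy lets the runtime split the loop into tasks
    // and run them on all available cores.
    hpx::for_each(
        hpx::execution::par, v.begin(), v.end(), [](int& x) { x *= 2; });

    std::cout << "v[10] = " << v[10] << std::endl;

    return hpx::local::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::local::init(hpx_main, argc, argv);
}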

Next steps#

If you haven’t done so already, reading the Terminology section will help you get familiar with the terms used in HPX.

The Examples section contains small, self-contained walkthroughs of example HPX programs. The Local to remote example is a thorough, realistic example starting from a single node implementation and going stepwise to a distributed implementation.

The Manual contains detailed information on writing, building and running HPX applications.