Design

Interface Basics

All parallel algorithms are intended to have signatures equivalent to those of the ISO C++ algorithms they replace. For instance, the std::adjacent_find function is declared as:

namespace std
{
  template<typename _FIter>
    _FIter
    adjacent_find(_FIter, _FIter);
}

This means there should be an equivalent signature for the parallel version, and indeed there is:

namespace std
{
  namespace __parallel
  {
    template<typename _FIter>
      _FIter
      adjacent_find(_FIter, _FIter);

    ...
  }
}

But... why the ellipses?

The ellipses in the example above represent additional overloads required for the parallel version of the function. These additional overloads dispatch calls from the ISO C++ function signature to the appropriate parallel function (or to the sequential function, if no parallel function is deemed worthwhile), based on either compile-time or run-time conditions.

Compile-time conditions are referred to as "embarrassingly parallel," and are denoted with the appropriate dispatch object, i.e., one of __gnu_parallel::sequential_tag, __gnu_parallel::parallel_tag, __gnu_parallel::balanced_tag, __gnu_parallel::unbalanced_tag, __gnu_parallel::omp_loop_tag, or __gnu_parallel::omp_loop_static_tag.

Run-time conditions depend on the hardware being used, the number of threads available, etc., and are denoted by the use of the enum __gnu_parallel::parallelism. Values of this enum include __gnu_parallel::sequential, __gnu_parallel::parallel_unbalanced, __gnu_parallel::parallel_balanced, __gnu_parallel::parallel_omp_loop, __gnu_parallel::parallel_omp_loop_static, and __gnu_parallel::parallel_taskqueue.

Putting all this together, the general view of overloads for the parallel algorithms looks like this:

  • ISO C++ signature

  • ISO C++ signature + sequential_tag argument

  • ISO C++ signature + parallelism argument

Please note that the implementation may use additional functions (designated with the _switch suffix) to dispatch from the ISO C++ signature to the correct parallel version. Also, some of the algorithms do not have support for run-time conditions, so the last overload is missing for them.

Configuration and Tuning

Some algorithm variants can be enabled/disabled/selected at compile-time. See <compiletime_settings.h> and <features.h> for details.

To specify the number of threads to be used for an algorithm, use omp_set_num_threads. To force a function to execute sequentially, even though parallelism is switched on in general, add __gnu_parallel::sequential_tag() to the end of the argument list.

Parallelism always incurs some overhead. Thus, it is not helpful to parallelize operations on very small sets of data, and there are measures to avoid parallelizing work that is not worth it. For each algorithm, a minimum problem size can be stated, usually using the variable __gnu_parallel::Settings::[algorithm]_minimal_n. Please see <settings.h> for details.

Implementation Namespaces

One namespace contains versions of code that are explicitly sequential: __gnu_serial.

Two namespaces contain the parallel mode: std::__parallel and __gnu_parallel.

Parallel implementations of standard components, including template helpers to select parallelism, are defined in namespace std::__parallel. For instance, std::transform from <algorithm> has a parallel counterpart in std::__parallel::transform from <parallel/algorithm>. In addition, these parallel implementations are injected into namespace __gnu_parallel with using declarations.

Support and general infrastructure is in namespace __gnu_parallel.

More information, and an organized index of types and functions related to the parallel mode on a per-namespace basis, can be found in the generated source documentation.