HPX V0.9.10 Available!

The STE||AR Group is proud to announce the release of HPX v0.9.10! In this release our team has focused on making large-scale runs simple and reliable. With these changes we have demonstrated the ability to run HPX applications on up to 24,000 cores! Other major features include new parcel-port (network-layer) implementations, variadic template support, more parallel algorithms, and the first distributed data structure, hpx::vector.

HPX’s parcel-port has been refactored to be more efficient and modular. Third-party parcel-ports can, with little effort, be plugged into HPX and used in place of the default parcel-ports. This change allows users to integrate their own networking libraries, as well as proprietary networking software or hardware, without running into licensing issues.

The days of specifying argument limits for actions are over! HPX now uses variadic templates, allowing users to pass as many arguments to their actions as they desire. These changes have significantly reduced the code base and will help keep HPX lean going forward. This benefit does come at a small cost: breaking API changes affect the following functions: hpx::async_continue, hpx::apply_continue, hpx::when_each, and hpx::wait_each.

We have continued our work since the last release to implement the C++ standardization proposal called ‘Parallelism TS’ (document number N4352). To date we have completed about 75% of the algorithms. In addition, we have extended the proposal to allow the algorithms to be called asynchronously.

Finally, we are excited to release the first implementation of hpx::vector! This is HPX’s first distributed data structure. hpx::vector is closely aligned with the functionality of std::vector. The difference is that hpx::vector stores its data in partitions, where the partitions can be distributed over different localities. It should also be mentioned that we are working to allow the parallel algorithms to work with hpx::vector. Currently, not all of the parallel algorithms support this feature, but we are making headway!

We would like to thank everyone who has made this release possible. And what better way to thank them than by downloading HPX here! For more details about the recent changes, please check out our release notes. If you have any questions, comments, or exploits to report you can comment below, reach us on IRC (#stellar on Freenode), or email us at hpx-users@stellar.cct.lsu.edu.

