GSoC 2017 Participants Announced!

We can now announce the participants in the STE||AR Group’s 2017 Google Summer of Code! We are very proud to present the six students who will be funded by Google this year to work on projects for our group.

These students’ proposals were chosen from among the many excellent submissions we received. For those unfamiliar with the program, the Google Summer of Code brings together ambitious students from around the world with open source developers by giving each mentoring organization funds to support a set number of participants. Students write proposals, which they submit to a mentoring organization, in hopes of having their work funded.

Below are the students who will be working with the STE||AR Group this summer listed with their mentors and their proposal abstracts.


Praveen Velliengiri; PSG College of Technology, Coimbatore, India


Parsa Amini; Louisiana State University, LA, USA

Patricia Grubel; Los Alamos National Laboratory, NM, USA

Project: Distributed Component Placement
Implementation of an EDSL for data placement policies in HPX using domain maps, and addition of allocator support on top of domain maps.


Abhimanyu Rawat; Birla Institute of Technology and Science, Pilani, India


Bryce Adelstein Lelbach aka wash; Lawrence Berkeley National Laboratory, CA, USA

Project: HPX – Stack Overflow Detection in Linux

Stack overflows and segmentation faults share some common properties, such as accessing memory outside an allotted address range. A stack overflow occurs when a program’s stack usage grows beyond the space provided for it. This is an existing problem in HPX, since at present a stack overflow is reported as a generic segmentation fault.


Madhavan Seshadri; Nanyang Technological University, Singapore


Patrick Diehl; Polytechnique Montreal, QC, Canada

Project: HPXCL – Asynchronous Integration of CUDA and OpenCL to HPX

Massively parallel applications are at the dawn of our future. The scale of problems and the amount of data available have made it infeasible to solve meaningful problems on a single computer. At such scales it is imperative to combine CPUs and accelerator cards such as GPUs into one unified computing resource. HPXCL aims to unify kernel launches and data transfers into HPX’s asynchronous execution graph through seamless integration. However, the current CUDA implementation fails to achieve the desired performance metrics when running on a cluster of computers. This proposal aims to improve the performance of CUDA-based HPXCL applications through an event-triggered mechanism. Appropriate tests will also be written to ensure the necessary functionality performs as required. Standard benchmark algorithms (Floyd–Warshall, FFT, etc.) will be implemented in HPXCL, OpenCL, and CUDA to validate, measure, and improve the performance of the existing system, while also exercising the newly added functionality.


Denis Blank; Technische Universität München, Bavaria, Germany


Hartmut Kaiser; Louisiana State University, LA, USA

Project: Re-implementing hpx::util::unwrapped and unifying the API of hpx::wait and hpx::when

In order to fully support the requirements for accepting argument types described in #2456, #1404, #1400 and #1126 this proposal intends to extend the capabilities of the functions listed above to access nested hpx::future types as well as unwrapping such types. When the proposal is implemented, all the functions listed above should accept the same set of arguments (unification).


Ajai George; Technische Universität München, Bavaria, Germany


Marcin Copik; RWTH Aachen University, NRW, Germany

Project: Work on Parallel Algorithms I

My proposal is to complete the implementation of the N4409 C++ STL parallel algorithms in HPX. Since #1141 has mostly been resolved by implementing many of the algorithms, my main focus will be on resolving #1338. I will first implement the few algorithms remaining in #1141 (is_heap, partial_sort, partition, remove, merge, and unique) and then move on to #1338, which is about ensuring that these algorithms work seamlessly with distributed data structures such as hpx::vector. I will extend as many of the remaining algorithms as possible within the timeframe, together with unit tests and performance tests. I will also keep the documentation and tutorials up to date with the new features, and update or add relevant examples in the HPX code base.


Taeguk Kwon; Sogang University, South Korea


John Biddiscombe; Swiss National Supercomputing Centre, Switzerland

Project: Work on Parallel Algorithms II

1. Implement the remaining algorithms in #1141 (except the sorting algorithms) for C++17 parallelism (N4409).

2. Improve hpx::parallel::util::scan_partitioner

