PXFS: Rethinking I/O

The core hurdle of contemporary I/O on large HPC machines is latency, caused in large part by the deficiencies of the historical I/O model. File systems are based on a model that was relevant when computers were exclusively large, centralized, single-processor systems shared by many time-sharing programs. Under this model, data is divided into logical files, each a sequence of bytes. Files are organized by directories, which contain lists of files and other directories. Files and directories are accessed through an open, read, write, seek, and close protocol whose object is to transfer sequences of bytes between the file and memory controlled by the program. The main semantic model is sequential consistency: all operations by all programs on a file or files must produce a result consistent with some sequential ordering of those operations.
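
As a concrete illustration of that protocol (not part of PXFS itself; the file name and buffer size are hypothetical), here is the classic POSIX byte-stream sequence in C++:

```cpp
// Illustrative only: the historical open/read/seek/write/close protocol.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("data.bin", O_RDWR);      // open: obtain a handle to the byte sequence
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);  // read: copy bytes from the file into memory
    if (n < 0) { perror("read"); close(fd); return 1; }

    lseek(fd, 0, SEEK_END);                 // seek: reposition the implicit file offset
    write(fd, buf, n);                      // write: copy bytes from memory back to the file

    close(fd);                              // close: release the handle
    return 0;
}
```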

This model of I/O was designed for single-processor systems. Today we use it on systems with millions of cores spread across tens of thousands of localities. Storage devices are separated from computation by networks, and each locality has its own memory domain shared by the cores on that node. There are significant latencies across the networks, contention at the storage systems, and significant software overheads, all of which can lead to starvation, bottlenecks, and stalled computation. The latencies at the I/O devices themselves are even larger (device capacity and throughput have grown over the years, but latency has improved only modestly). Absent certain guarantees from programs, maintaining sequential consistency requires heavy-handed locking schemes that not only introduce their own synchronization latencies but typically add further network round trips. As machines grow larger, these latencies and overheads compound, as the locking sketch below suggests.
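
To make the locking cost concrete, the following sketch shows POSIX advisory record locking of the kind a file system client must perform to preserve sequential consistency; on a networked file system, each lock acquisition and release is an extra round trip to a lock service. The helper function is hypothetical:

```cpp
// Illustrative only: lock a byte range exclusively before updating it, the
// pattern that preserves sequential consistency across concurrent writers.
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Hypothetical helper: exclusively lock [offset, offset+len) and write into it.
bool locked_update(int fd, off_t offset, const char* bytes, size_t len) {
    struct flock lk;
    std::memset(&lk, 0, sizeof lk);
    lk.l_type   = F_WRLCK;                  // exclusive write lock
    lk.l_whence = SEEK_SET;
    lk.l_start  = offset;
    lk.l_len    = static_cast<off_t>(len);

    if (fcntl(fd, F_SETLKW, &lk) < 0)       // blocks until the range is free
        return false;

    bool ok = pwrite(fd, bytes, len, offset) == static_cast<ssize_t>(len);

    lk.l_type = F_UNLCK;                    // release the range
    fcntl(fd, F_SETLK, &lk);
    return ok;
}
```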

Our work on PXFS aims to extend the ParalleX execution model with a new I/O system, producing a parallel execution model that allows researchers to develop highly efficient data management, discovery, and analysis codes for Big Data applications across a wide range of fields. We intend to demonstrate this model by implementing a framework of computational, analytic, and Big Data tools, including Map/Reduce, built on HPX, a C++ implementation of ParalleX, together with a high-performance parallel file system based on OrangeFS that is being co-designed with HPX. This work is guided by the development of a workflow from the genomics application area.
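
A minimal sketch of the futures-based style this model encourages, in which I/O overlaps with computation instead of stalling it. Standard C++ std::async is used here so the example is self-contained; HPX provides the analogous hpx::async and hpx::future, extended to distributed operation. The file name, chunking, and analysis step are all hypothetical:

```cpp
// Sketch: map/reduce over file chunks with futures, so each chunk is analyzed
// as soon as its bytes arrive rather than waiting behind one sequential stream.
#include <future>
#include <vector>
#include <numeric>
#include <fstream>
#include <string>
#include <cstdio>

// Hypothetical analysis step applied to one chunk of input.
static long analyze(const std::vector<char>& chunk) {
    return std::accumulate(chunk.begin(), chunk.end(), 0L);
}

// Read one fixed-size chunk of a file at a given offset.
static std::vector<char> read_chunk(const std::string& path, std::streamoff off,
                                    std::size_t len) {
    std::ifstream in(path, std::ios::binary);
    in.seekg(off);
    std::vector<char> buf(len);
    in.read(buf.data(), static_cast<std::streamsize>(len));
    buf.resize(static_cast<std::size_t>(in.gcount()));
    return buf;
}

int main() {
    const std::string path = "genome.dat";  // hypothetical input file
    const std::size_t chunk = 1 << 20;      // 1 MiB chunks

    // Map: launch chunk reads and analyses asynchronously.
    std::vector<std::future<long>> results;
    for (int i = 0; i < 4; ++i) {
        results.push_back(std::async(std::launch::async, [=] {
            return analyze(read_chunk(path,
                                      static_cast<std::streamoff>(i) * chunk,
                                      chunk));
        }));
    }

    // Reduce: combine per-chunk results as each future becomes ready.
    long total = 0;
    for (auto& f : results) total += f.get();
    std::printf("total = %ld\n", total);
    return 0;
}
```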

This work is supported by NSF Award Number 1447831.
