Output Performance of Petascale File Systems
HPC applications generate periodic output bursts for intermediate results, checkpointing, and visualization. A typical HPC application stalls its entire execution while writing a burst, until all of the data reaches disk. Since cores idle during a burst, a supercomputer and its I/O system must absorb output bursts quickly to use compute cores efficiently.
This thesis studies the performance of file writes in supercomputer file systems under production load, including quantitative behavior analysis, component performance profiling, and performance prediction modeling. Our target environment is Titan, the 4th-fastest supercomputer in the world, and its Lustre parallel file stores. The results of the behavior analysis and performance profiling can inform file system configuration choices and the design of I/O software in applications, operating systems, and adaptive I/O middleware. Moreover, the predictive model we build is useful for predicting the output performance of supercomputer file systems in live use.
To quantify the performance behavior of production supercomputer file systems, we introduce a statistical benchmarking methodology that measures the impact of parameter choices on burst absorption rates. Our approach combines many samples of each parameter's impact over time to filter out interference caused by transient congestion from competing workloads in the production setting. These samples also characterize the performance of the individual stages and components in the multi-stage write pipeline, and their variation over time.
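The sampling idea can be sketched as follows. This is a minimal illustration, not the thesis's actual harness: the `timed_write` function stands in for one benchmarked burst write, and its simulated cost model and parameter names are assumptions for the example.

```python
import random
import statistics

def timed_write(stripe_count, size_mb):
    """Stand-in for one timed burst write. Here we simulate a baseline
    cost plus a transient congestion term from competing workloads;
    the real methodology measures actual writes on the machine."""
    base = size_mb / (100.0 * stripe_count)      # idealized bandwidth model
    congestion = random.expovariate(1.0) * 0.5   # transient interference
    return base + congestion

def sample_config(stripe_count, size_mb, trials=50):
    """Repeat the measurement many times, spread over time, and report
    robust statistics that filter out transient congestion spikes."""
    times = [timed_write(stripe_count, size_mb) for _ in range(trials)]
    return {
        "median": statistics.median(times),
        "p90": sorted(times)[int(0.9 * len(times))],
        "mean": statistics.fmean(times),
    }

stats = sample_config(stripe_count=4, size_mb=1024)
```

The median and high percentiles separate the stable component of a configuration's performance from the heavy-tailed interference, which is why many samples per parameter choice are needed rather than a single run.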
We find that Titan's I/O system is highly variable across the machine at fine time scales. This variability has two major implications. First, stragglers lessen the benefit of coupled I/O parallelism (striping). Peak median output bandwidths are obtained with parallel writes to many independent files, with no striping or write-sharing of files across clients (compute nodes). I/O parallelism is most effective when the application, or its I/O middleware system, distributes the I/O load so that each target stores files for multiple clients and each client writes files on multiple targets, in a balanced way with minimal contention. Second, our results suggest that the potential benefit of dynamic adaptation is limited. In particular, it is not fruitful to attempt to identify "good locations" in the machine or in the file system: component performance is driven by transient load conditions, and past performance is not a useful predictor of future performance. For example, we observe no predictable diurnal load patterns.
Beyond observing Titan's performance variability, we also find that: (1) mean performance is stable and consistent over typical application run times; and (2) output performance is non-linearly related to its correlated parameters, due to interference and saturation at individual stages of the write path. These observations enable us to build a predictive model of the expected write times of output patterns and I/O configurations, using feature transformations to capture the non-linear relationships. We identify candidate features from the structure of the Lustre/Titan write path, and use feature transformation functions to produce a linear model space with 135,000 candidate models. By searching this space for the minimum mean squared error we identify a good model and show that it is effective.
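The model-search procedure above can be sketched in miniature. This is a hedged illustration of the general technique (transform features, fit a linear model, keep the minimum-MSE candidate); the transform set, feature names, and synthetic data below are assumptions for the example, not the thesis's actual 135,000-model space.

```python
import itertools
import numpy as np

# Candidate per-feature transformations meant to capture non-linear
# effects such as saturation (the choices here are illustrative).
TRANSFORMS = {
    "identity": lambda x: x,
    "log": lambda x: np.log1p(x),
    "inverse": lambda x: 1.0 / (1.0 + x),
}

def fit_mse(X, y):
    """Least-squares fit of a linear model with intercept; return MSE."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ coef - y) ** 2))

def search_models(features, y):
    """Try every combination of transforms across the features and keep
    the candidate model with the minimum mean squared error."""
    names = list(features)
    best = None
    for combo in itertools.product(TRANSFORMS, repeat=len(names)):
        X = np.column_stack([TRANSFORMS[t](features[n])
                             for t, n in zip(combo, names)])
        mse = fit_mse(X, y)
        if best is None or mse < best[0]:
            best = (mse, dict(zip(names, combo)))
    return best

# Synthetic example: write time grows non-linearly with burst size.
rng = np.random.default_rng(0)
size = rng.uniform(1, 100, 200)
clients = rng.uniform(1, 64, 200)
y = 2.0 * np.log1p(size) + 0.5 * clients + rng.normal(0, 0.1, 200)
best_mse, best_combo = search_models({"size": size, "clients": clients}, y)
```

Because each candidate is linear after transformation, every model can be fit cheaply in closed form, which is what makes exhaustively scoring a space of this size practical.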
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Rights for Collection: Duke Dissertations