Statistical Learning of Particle Dispersion in Turbulence and Modeling Turbulence via Deep Learning Techniques

Date

2021

Abstract

Turbulence is a complex dynamical system: high-dimensional, strongly non-linear, non-local and chaotic, with a broad range of interacting scales that vary over space and time. It is a common characteristic of fluid flows and appears in a wide range of applications, both in nature and in industry. Moreover, many of these flows contain suspended particles. Motivated by this, the research presented here aims at (i) studying particle motion in turbulence and (ii) modeling turbulent flows using modern machine learning techniques.

In the first research objective, we conduct a parametric study using numerical experiments (direct numerical simulations) to examine the accelerations, velocities and clustering of small inertial settling particles in statistically stationary isotropic turbulence under different values of the system control parameters (Taylor Reynolds number $Re_\lambda$, particle Stokes number $St$ and settling velocity $Sv$). To accomplish these goals, we leverage a wide variety of tools from applied mathematics, statistical physics and computer science, such as the probability density function (PDF) of quantities of interest, the radial distribution function (RDF), and three-dimensional Voronoï analysis. The findings of this study have been published in two journal papers (PhysRevFluids.4.054301 and PhysRevFluids.5.034306), both of which were selected as Editors' Suggestions. Some of the important results are highlighted in the following paragraphs.
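
As an illustration of how one of these tools is assembled in practice, the following is a minimal sketch of an RDF estimate from a single snapshot of particle positions. The periodic box size, the shell edges, and the use of `scipy.spatial.cKDTree` are assumptions made for this example, not details taken from the dissertation.

```python
import numpy as np
from scipy.spatial import cKDTree

def radial_distribution_function(positions, box_size, r_edges):
    """Estimate g(r) for particles in a periodic cubic box of side box_size.

    positions : (N, 3) array of particle positions in [0, box_size)
    r_edges   : increasing array of shell edges (all < box_size / 2)
    """
    n = len(positions)
    tree = cKDTree(positions, boxsize=box_size)

    # Cumulative ordered-pair counts within each radius (self-pairs cancel in the diff).
    cumulative = tree.count_neighbors(tree, r_edges)
    shell_pairs = np.diff(cumulative) / 2.0            # unordered pairs per shell

    # Expected unordered pairs per shell for uniformly distributed particles.
    shell_volumes = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)
    expected = 0.5 * n * (n - 1) * shell_volumes / box_size**3

    return shell_pairs / expected                      # g(r) > 1 indicates clustering

# Example usage with synthetic uniform particles, for which g(r) is close to 1.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 2 * np.pi, size=(5000, 3))
r_edges = np.linspace(0.05, 1.0, 20)
print(radial_distribution_function(pos, 2 * np.pi, r_edges))
```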

The results for the probability density function (PDF) of the particle relative velocities show that even when the particles are settling very fast, turbulence continues to play a key role in their vertical relative velocities, and increasingly so as $Re_\lambda$ is increased. This occurs because, although the settling velocity may be much larger than typical velocities of the turbulence, intermittency ensures that there are significant regions of the flow where the contribution to the particle motion from turbulence is of the same order as that from gravitational settling.
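
As a concrete illustration of how such a pair statistic is formed, the sketch below histograms the vertical relative velocity of particle pairs at small separations. The pair-search radius, the sign convention (each pair ordered so that the first particle lies above the second), and the use of `scipy.spatial.cKDTree` are assumptions made for this example, not the dissertation's definitions.

```python
import numpy as np
from scipy.spatial import cKDTree

def vertical_relative_velocity_pdf(positions, velocities, box_size, r_max, bins=100):
    """PDF of the vertical relative velocity of particle pairs separated by less
    than r_max in a periodic cubic box of side box_size."""
    tree = cKDTree(positions, boxsize=box_size)
    pairs = np.array(sorted(tree.query_pairs(r_max)))   # (M, 2) indices of close pairs

    # Minimum-image vertical separation, used only to fix a sign convention:
    # order each pair so that the first particle sits above the second.
    dz = positions[pairs[:, 0], 2] - positions[pairs[:, 1], 2]
    dz -= box_size * np.round(dz / box_size)
    w_z = velocities[pairs[:, 0], 2] - velocities[pairs[:, 1], 2]
    w_z = np.where(dz >= 0.0, w_z, -w_z)

    pdf, edges = np.histogram(w_z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf
```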

In agreement with previous results using global measures of particle clustering, such as the RDF, we find that for small Voronoï volumes (corresponding to the most clustered particles), the behavior is strongly dependent upon $St$ and $Sv$, but only weakly dependent upon $Re_\lambda$, unless $St>1$. However, larger Voronoï volumes (void regions) exhibit a much stronger dependence on $Re_\lambda$, even when $St\leq 1$, and we show that this, rather than the behavior at small volumes, is the cause of the sensitivity of the standard deviation of the Voronoï volumes that has been previously reported. We also show that the largest contribution to the particle settling velocities is associated with increasingly larger Voronoï volumes as $Sv$ is increased.
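
The Voronoï analysis underlying these observations can be sketched as follows. This minimal example tessellates a single snapshot with `scipy.spatial.Voronoi`, discards unbounded boundary cells rather than using a periodic tessellation, and normalizes by the mean cell volume; these are simplifications made for illustration.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def normalized_voronoi_volumes(positions):
    """Return Voronoi cell volumes normalized by their mean, skipping unbounded cells."""
    vor = Voronoi(positions)
    volumes = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:        # unbounded cell at the domain boundary
            continue
        volumes.append(ConvexHull(vor.vertices[region]).volume)
    volumes = np.asarray(volumes)
    return volumes / volumes.mean()

# The standard deviation of the normalized volumes is a common clustering indicator:
# it rises above the value obtained for randomly placed particles when clustering is present.
rng = np.random.default_rng(1)
v = normalized_voronoi_volumes(rng.uniform(size=(4000, 3)))
print(v.std())
```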

Our local analysis of the acceleration statistics of settling inertial particles shows that clustered particles experience a net acceleration in the direction of gravity, while particles in void regions experience the opposite. The particle acceleration variance, however, is a convex function of the Voronoï volumes, with or without gravity, which seems to indicate a non-trivial relationship between the Voronoï volumes and the sizes of the turbulent flow scales. The variance of the fluid acceleration at the inertial particle positions is of the order of the square of the Kolmogorov acceleration and depends only weakly on the Voronoï volumes. These results call into question the "sweep-stick" mechanism for particle clustering in turbulence, which would lead one to expect that clustered particles reside in regions where the fluid acceleration is zero (or at least very small).
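
The local (Voronoï-conditioned) statistics described above amount to binning per-particle quantities by the normalized cell volume. A minimal sketch of that conditioning step is given below; the logarithmic bin layout and the input arrays are placeholders for illustration, not the dissertation's exact post-processing.

```python
import numpy as np

def condition_on_voronoi_volume(norm_volumes, accel_z, n_bins=20):
    """Mean and variance of the vertical particle acceleration conditioned on the
    normalized Voronoi volume, using logarithmically spaced volume bins."""
    edges = np.logspace(np.log10(norm_volumes.min()),
                        np.log10(norm_volumes.max()), n_bins + 1)
    which_bin = np.digitize(norm_volumes, edges) - 1
    means = np.full(n_bins, np.nan)
    variances = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = accel_z[which_bin == b]
        if in_bin.size > 1:
            means[b] = in_bin.mean()
            variances[b] = in_bin.var()
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
    return centers, means, variances
```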

In the second research objective, we propose two cutting-edge, data-driven, deep learning simulation frameworks, with the capability of embedding physical constraints corresponding to properties of three-dimensional turbulence. The first framework aims to reduce the dimensionality of data resulting from large-scale turbulent flow simulations (static mapping), while the second framework is designed to emulate the spatio-temporal dynamics of a three-dimensional turbulent flow (dynamic mapping).

In the static framework, we apply a physics-informed deep learning technique based on vector quantization to generate a discrete, low-dimensional representation of data from simulations of three-dimensional turbulent flows. The deep learning framework is composed of convolutional layers and incorporates physical constraints on the flow, such as preserving incompressibility and the global statistical characteristics of the velocity gradients. A detailed analysis of the performance of this lossy data compression scheme, with evaluations based on multiple sets of data having characteristics different from those of the training data, shows that the framework can faithfully reproduce the statistics of the flow, except at the very smallest scales, while offering a compression ratio of 85. Compared to the recent study of Glaws et al. (Physical Review Fluids, 5(11):114602, 2020), which was based on a conventional autoencoder (where compression is performed in a continuous space), our model improves the compression ratio by more than 30 percent and reduces the mean-squared error by an order of magnitude. Our compression model is an attractive solution for situations where fast, high-quality and low-overhead encoding and decoding of large data are required.
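
To make two ingredients of the static framework concrete, the sketch below shows (i) nearest-codebook vector quantization of latent vectors and (ii) a simple incompressibility penalty computed from finite-difference velocity gradients. The codebook size, latent dimension and finite-difference divergence are illustrative assumptions, not the dissertation's exact architecture or loss.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector (rows of latents, shape (M, d)) to the index of its
    nearest codebook entry (codebook shape (K, d)); return indices and quantized vectors."""
    distances = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = distances.argmin(axis=1)
    return indices, codebook[indices]

def divergence_penalty(u, dx):
    """Mean-square divergence of a velocity field u with shape (3, N, N, N) on a uniform
    grid of spacing dx; usable as a soft incompressibility term in a training loss."""
    div = (np.gradient(u[0], dx, axis=0)
           + np.gradient(u[1], dx, axis=1)
           + np.gradient(u[2], dx, axis=2))
    return np.mean(div ** 2)

# Example: a 512-entry codebook of 8-dimensional latents, and a random (non-solenoidal) field.
rng = np.random.default_rng(2)
codes, quantized = vector_quantize(rng.normal(size=(1000, 8)), rng.normal(size=(512, 8)))
print(divergence_penalty(rng.normal(size=(3, 32, 32, 32)), dx=2 * np.pi / 32))
```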

Our proposed framework for dynamic mapping consists of two deep learning models, one for dimension reduction and the other for sequence learning. We first generate a low-dimensional representation of the velocity data and then pass it to a sequence prediction network that learns the spatio-temporal correlations of the underlying data. For the sequence forecasting, we adopt the Transformer architecture and compare its performance against a standard recurrent network, the Convolutional LSTM. Both architectures are designed to perform a sequence-to-sequence multi-class classification task, which is attractive for modeling turbulence. Diagnostic tests show that our Transformer-based framework can perform short-term predictions that retain important characteristics of the large and inertial scales of the flow across all the predicted snapshots.
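
A minimal sketch of the Transformer side of this sequence-to-sequence classification setup is given below, written with PyTorch's stock `nn.TransformerEncoder`. The token layout (one discrete code index per latent site), the model sizes, the single-step prediction target, and the omission of positional encodings are illustrative assumptions rather than the dissertation's exact configuration; the Convolutional LSTM baseline is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodeSequencePredictor(nn.Module):
    """Predict the discrete latent codes of the next snapshot from those of the current one.

    Each snapshot is represented as a sequence of integer codebook indices (multi-class
    tokens), so forecasting becomes a sequence-to-sequence classification problem."""

    def __init__(self, codebook_size=512, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, codebook_size)   # per-token class logits

    def forward(self, codes):                           # codes: (batch, seq_len) int64
        h = self.encoder(self.embed(codes))             # positional encodings omitted for brevity
        return self.head(h)                             # (batch, seq_len, codebook_size)

# One training step: cross-entropy between predicted logits and the next snapshot's codes.
model = CodeSequencePredictor()
current = torch.randint(0, 512, (4, 64))                # 4 samples, 64 latent sites each
target = torch.randint(0, 512, (4, 64))
logits = model(current)
loss = F.cross_entropy(logits.reshape(-1, 512), target.reshape(-1))
loss.backward()
```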

Citation

Momenifar, Reza (2021). Statistical Learning of Particle Dispersion in Turbulence and Modeling Turbulence via Deep Learning Techniques. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/23722.

Duke's student scholarship is made available to the public using a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.