
A Probabilistic, Data-Driven Closure Model for RANS Simulations with Aleatoric Model Uncertainty


Core Concepts
A probabilistic, data-driven framework is proposed to learn a closure model for Reynolds-Averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty.
Abstract
The proposed framework consists of two main components:

1. A parametric closure model based on a non-linear eddy viscosity model (NLEVM) that uses a neural network to capture the anisotropic part of the Reynolds stress tensor. The parametric model is expressed in terms of invariants of the mean strain-rate and rotation tensors.

2. A stochastic discrepancy tensor field that is added to the parametric closure model to account for model errors and insufficiencies in the parametric part. The discrepancy is represented in a dimension-reduced form using a sparsity-inducing prior.

The framework employs a fully Bayesian formulation that enables the quantification of epistemic uncertainty in the model parameters and the propagation of aleatoric uncertainty from the stochastic discrepancy tensor to the predictive estimates. Training is performed in a model-consistent manner by involving the RANS solver in the learning process. This allows the use of indirect observations of mean velocities and pressures, in contrast to the majority of existing data-driven RANS closure models, which require direct Reynolds stress data. The framework is demonstrated on the backward-facing step benchmark problem, where it produces accurate, probabilistic predictions of all flow quantities, even in regions where model errors are present.
Stats
The RANS equations are discretized using a finite element scheme, resulting in a residual equation R(z; τ) = 0, where z = [u, p]^T collects the discretized velocity and pressure fields and τ is the discretized Reynolds stress tensor. The parametric closure model for the Reynolds stress tensor is

τ_θ = 2k b_θ + (2/3) k I,

where b_θ is expressed as a sum of tensor basis functions with coefficients learned by a neural network. The stochastic discrepancy tensor is represented as

ε_τ = W E_τ,

where W is a matrix that maps the subdomain-level discrepancy tensor E_τ to the full grid.
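As a concrete illustration, the following is a minimal sketch of how such a closure could be assembled: it assumes a 2D flow, a small illustrative subset of the tensor basis functions, a tiny fully connected network for the basis coefficients, and a simple 0/1 subdomain membership matrix for W. None of these specific choices (shapes, layer sizes, number of basis tensors) are taken from the paper.

```python
import numpy as np

def mlp(x, weights, biases):
    """Tiny fully connected network with tanh hidden activations."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)
    return x @ weights[-1] + biases[-1]          # linear output layer

def closure_tau(S, R, k, weights, biases):
    """Parametric Reynolds stress: tau_theta = 2 k b_theta + (2/3) k I.

    S, R : (n, 2, 2) mean strain-rate and rotation tensors per grid point
    k    : (n,)      turbulent kinetic energy per grid point
    """
    n = S.shape[0]
    I = np.broadcast_to(np.eye(2), (n, 2, 2))
    # Scalar invariants of S and R used as network inputs (2D case)
    trS2 = np.trace(S @ S, axis1=1, axis2=2)
    trR2 = np.trace(R @ R, axis1=1, axis2=2)
    g = mlp(np.stack([trS2, trR2], axis=1), weights, biases)   # (n, 3) coefficients
    # Illustrative subset of the tensor basis functions
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - 0.5 * trS2[:, None, None] * I
    b_theta = (g[:, 0, None, None] * T1 + g[:, 1, None, None] * T2
               + g[:, 2, None, None] * T3)
    return 2.0 * k[:, None, None] * b_theta + (2.0 / 3.0) * k[:, None, None] * I

# Stochastic discrepancy eps_tau = W @ E_tau: a 0/1 membership matrix maps the
# subdomain-level tensor E_tau (one row per subdomain) to the full grid.
n, n_sub = 100, 4
W_map = np.zeros((n, n_sub))
W_map[np.arange(n), np.arange(n) * n_sub // n] = 1.0   # each point in one subdomain
rng = np.random.default_rng(0)
E_tau = rng.normal(size=(n_sub, 3))
eps_tau = W_map @ E_tau                                 # (n, 3) grid-level discrepancy

# Quick demo of the parametric part with random weights and fields
weights = [0.1 * rng.normal(size=(2, 16)), 0.1 * rng.normal(size=(16, 3))]
biases = [np.zeros(16), np.zeros(3)]
A = rng.normal(size=(n, 2, 2))
S = 0.5 * (A + A.transpose(0, 2, 1))        # symmetric strain-rate part
R = 0.5 * (A - A.transpose(0, 2, 1))        # antisymmetric rotation part
k = np.abs(rng.normal(size=n)) + 1e-3
tau_theta = closure_tau(S, R, k, weights, biases)       # (n, 2, 2)
```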
Quotes
"We argue that even in model-consistent training, a discrepancy in the learnt RS closure model can arise due to the fact that a) the parametric, functional form employed may be insufficient to represent the underlying model, and b) the flow features which are used as input in the closure relation and which are generally restricted to each point in the problem domain (locality/Markovianity assumption [41]), might not contain enough information to predict the optimal RS tensor leading to irrecoverable loss of information." "We note however that stochastic RS discrepancy terms ϵτ or Eτ and the associated probabilistic model, are limited to the flow geometry used for the training. While it can be used for unseen flow scenarios (e.g. different Re number, inlet conditions, boundary conditions), it cannot be employed for a different flow geometry."

Deeper Inquiries

How can the proposed framework be extended to handle different flow geometries beyond the training conditions?

Several extensions could allow the framework to handle flow geometries beyond the training conditions. One approach is a more flexible representation of the stochastic discrepancy tensor that adapts to varying geometries, for instance by conditioning the model on features that encode the geometric characteristics of the flow and training on a diverse set of geometries. A second strategy is transfer learning: the model is first trained on a set of flow geometries and then fine-tuned on new ones, so that the representations learned during initial training carry over to unseen conditions. Finally, the framework could include a domain-adaptation mechanism that adjusts the model parameters to the specific characteristics of a new domain, for example via domain-specific regularization or data augmentation. Together, these strategies would improve the robustness and generalization of the framework to unseen geometries; a minimal fine-tuning sketch follows.
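As a loose illustration of the transfer-learning idea, the sketch below assumes a PyTorch implementation in which a pretrained coefficient network is frozen while a fresh subdomain discrepancy field is re-learned for the new geometry. The network shape, the names `closure_net` and `E_tau_new`, and the dummy objective standing in for the model-consistent RANS loss are all hypothetical, not from the paper.

```python
import torch

# Stand-in for the pretrained tensor-basis coefficient network; in practice
# this would be loaded from the original training run.
closure_net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 3))
for p in closure_net.parameters():
    p.requires_grad_(False)                 # freeze the geometry-agnostic closure

n_sub_new = 24                              # subdomains in the new geometry
E_tau_new = torch.zeros(n_sub_new, 3, requires_grad=True)
opt = torch.optim.Adam([E_tau_new], lr=1e-2)

def data_misfit(E):
    # Placeholder for the model-consistent loss: run the RANS solver with
    # tau = tau_theta + W @ E and compare predicted mean velocities and
    # pressures against observations on the new geometry.
    return (E**2).sum()                     # dummy objective so the sketch runs

for _ in range(100):
    opt.zero_grad()
    loss = data_misfit(E_tau_new)
    loss.backward()
    opt.step()
```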

What are the potential limitations of the sparsity-inducing prior used for the stochastic discrepancy tensor, and how could it be improved?

The sparsity-inducing prior over the stochastic discrepancy tensor has at least one notable limitation: it assumes independence between subdomains, which may not hold in practice. If the discrepancy is spatially correlated across subdomains, the current prior (a hierarchical construction of the kind sketched below) cannot capture those dependencies, which can degrade the model's predictions. One improvement is to encode spatial correlations explicitly in the prior, e.g., through a structured covariance matrix that reflects the spatial relationships between regions of the flow domain. Alternatively, prior distributions that admit spatial dependencies, such as Gaussian processes with spatial kernels, could capture more complex spatial structure in the discrepancy tensor and thereby improve predictive accuracy and robustness across flow conditions.
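To make the construction concrete, here is a minimal sketch of one common sparsity-inducing choice: a hierarchical Gaussian prior with Gamma-distributed per-subdomain precisions (ARD-type), whose marginal is a heavy-tailed Student-t that concentrates most subdomain discrepancies near zero. This is a standard construction, not necessarily the paper's exact prior, and the hyperparameters a0 and b0 are illustrative.

```python
import numpy as np

def log_prior_ard(E, a0=1e-3, b0=1e-3):
    """Unnormalized log-density of an ARD-type sparsity prior.

    Each subdomain's components E[i] are Gaussian with their own precision
    lambda_i ~ Gamma(a0, b0); marginalizing lambda_i gives a Student-t:
        p(E[i]) proportional to (b0 + ||E[i]||^2 / 2)^(-(a0 + d/2))
    Small a0, b0 make the marginal heavy-tailed, so most E[i] shrink to ~0.
    """
    d = E.shape[1]
    sq_norm = np.sum(E**2, axis=1)               # per-subdomain squared norm
    return float(np.sum(-(a0 + d / 2) * np.log(b0 + 0.5 * sq_norm)))

# Example: evaluate the prior on a random subdomain-level discrepancy field.
E_tau = np.random.default_rng(1).normal(size=(16, 3))
print(log_prior_ard(E_tau))
```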

Can the proposed framework be further enhanced by incorporating spatial correlations in the stochastic discrepancy tensor, rather than assuming independence between subdomains?

Yes. Replacing the independence assumption with spatially correlated subdomain discrepancies would let the model capture more of the flow field's underlying structure and improve predictive accuracy. A structured covariance matrix over the subdomains, designed to reflect their spatial proximity and interactions, would allow the prior to express spatial patterns in the stochastic discrepancy tensor. Gaussian processes with spatial kernels, or other spatial modeling techniques, offer a natural way to construct such covariances. With spatial correlations in place, the framework could better adapt to the spatial characteristics of the flow field and produce more accurate predictions across the domain; a minimal construction is sketched below.
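The sketch below shows one way to build such a spatially correlated prior, assuming each subdomain is summarized by a centroid and using a squared-exponential kernel. The centroids, lengthscale, and variance are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

def se_covariance(centroids, lengthscale=0.1, variance=1.0, jitter=1e-8):
    """Squared-exponential covariance between subdomain centroids."""
    d2 = np.sum((centroids[:, None, :] - centroids[None, :, :])**2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(len(centroids))

rng = np.random.default_rng(0)
centroids = rng.uniform(size=(16, 2))           # 16 subdomains in a unit square
K = se_covariance(centroids)                    # correlation decays with distance
L = np.linalg.cholesky(K)
# Spatially correlated prior draws: nearby subdomains get similar values,
# one column per tensor component of the discrepancy.
E_tau = L @ rng.standard_normal((16, 3))
```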