
Improving Turbulence Modeling for Unsteady Cavitating Flows Using a Data-Driven Approach


Core Concepts
This paper proposes a novel data-driven method that integrates Gene-Expression Programming (GEP) with a traditional RANS approach to augment turbulence modeling for unsteady cavitating flows. The resulting model predicts Reynolds shear stress and turbulent kinetic energy more accurately than standard RANS models.
Abstract
  • Bibliographic Information: Apte, D., Razaaly, N., Fang, Y., Ge, M., Sandberg, R., & Coutier-Delgosha, O. (2024). A novel data-driven method for augmenting turbulence modelling for unsteady cavitating flows. Elsevier.

  • Research Objective: This study aims to develop a more accurate and computationally efficient method for modeling turbulence in unsteady cavitating flows, addressing the limitations of traditional RANS and hybrid RANS-LES models.

  • Methodology: The researchers propose a data-driven approach that integrates Gene-Expression Programming (GEP) with a traditional RANS method. GEP is employed to generate an additional corrective term for the Boussinesq approximation, enhancing its accuracy in predicting Reynolds shear stress and turbulent kinetic energy. The model is trained using high-fidelity experimental data from a converging-diverging nozzle (venturi) case study. A Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used to optimize the GEP-derived expressions, minimizing the error between the model predictions and experimental data. The performance of the proposed GEP-CFD approach is compared against both baseline RANS simulations and a linear regression model.
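For orientation, a minimal sketch of the modeling idea: under the Boussinesq approximation the Reynolds stresses are expressed through an eddy viscosity, and the GEP-generated term enters as an additive correction. The placeholder $a_{ij}^{\mathrm{GEP}}$ below stands for whatever expression GEP evolves from the input features (void fraction $\alpha$, time-averaged velocities, and their standard deviations); the paper's exact functional form is not reproduced here.

```latex
-\overline{u_i' u_j'}
  = \nu_t \left( \frac{\partial \bar{u}_i}{\partial x_j}
               + \frac{\partial \bar{u}_j}{\partial x_i} \right)
  - \frac{2}{3}\, k\, \delta_{ij}
  + a_{ij}^{\mathrm{GEP}}\!\left(\alpha,\ \bar{u}_i,\ \sigma_{u_i},\ \ldots\right)
```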

  • Key Findings: The GEP-CFD approach demonstrates superior accuracy compared to standard RANS simulations in predicting both Reynolds shear stress and turbulent kinetic energy fields. Incorporating the standard deviation of velocity profiles significantly improves the model's ability to capture the turbulent kinetic energy dynamics. Sensitivity analysis reveals the significant influence of void fraction, time-averaged velocities, and their standard deviations on the model's predictive performance.

  • Main Conclusions: The study concludes that integrating GEP with traditional RANS methods offers a promising approach for augmenting turbulence modeling in unsteady cavitating flows. The proposed method demonstrates improved accuracy and computational efficiency compared to traditional approaches, paving the way for more reliable simulations of complex multi-phase flow phenomena.

  • Significance: This research contributes to the growing field of data-driven turbulence modeling, offering a novel approach to address the limitations of traditional methods in simulating complex flows. The findings have significant implications for various engineering applications involving cavitation, including hydraulic machinery design and optimization.

  • Limitations and Future Research: The study acknowledges the limitations of using a single case study for model training and validation. Future research should focus on testing the GEP-CFD approach on a wider range of cavitating flow scenarios and exploring the potential of incorporating additional flow features and physics-informed constraints into the GEP algorithm to further enhance its accuracy and generalizability.


Stats
  • The fitness value of the baseline k-ω SST model calculation is 1.93887.

  • The Mean Squared Error (MSE) of the best expression derived from the GEP is 1.00851.

  • The MSE of the optimal expression is 0.682.
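As a rough illustration of how such an MSE-based fitness could be minimized, here is a minimal sketch using SciPy's BFGS optimizer to tune the free constants of a candidate GEP-style expression against reference data. The expression form, feature names, and data arrays are hypothetical placeholders, not the paper's actual model or dataset.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder "experimental" data: n sample points with 3 candidate input
# features (e.g. void fraction, mean velocity gradients) and a target field
# standing in for measured Reynolds shear stress.
rng = np.random.default_rng(1)
features = rng.normal(size=(200, 3))
target = 0.5 * features[:, 0] * features[:, 1]

def candidate_expression(theta, x):
    """A stand-in for a GEP-generated expression whose constants theta
    are left free for gradient-based refinement."""
    return theta[0] * x[:, 0] * x[:, 1] + theta[1] * x[:, 2] + theta[2]

def mse(theta):
    """Mean squared error between the expression and the reference data."""
    residual = candidate_expression(theta, features) - target
    return np.mean(residual ** 2)

# BFGS refinement of the expression constants, mirroring the GEP + BFGS
# pipeline described above (the objective here is only a toy stand-in).
result = minimize(mse, x0=np.zeros(3), method="BFGS")
print(result.x, result.fun)  # optimized constants and final MSE
```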

Deeper Inquiries

How might this data-driven approach be adapted for other multiphase flow phenomena beyond cavitation, and what challenges might arise in those contexts?

This data-driven approach, centered on augmenting turbulence modeling with Gene-Expression Programming (GEP) and optimization techniques, holds considerable promise for a variety of multiphase flow phenomena beyond cavitation. Some potential adaptations and challenges:

Potential Applications:

  • Boiling and Condensation: Similar to cavitation, these phenomena involve phase change and complex turbulence interactions. The GEP-CFD framework could be adapted by incorporating relevant physics-informed features, such as temperature gradients, heat fluxes, and surface tension, into the input variables.

  • Sediment Transport: Predicting sediment erosion, transport, and deposition in rivers, coastal areas, or pipelines involves complex interactions between the fluid and solid particles. Features such as particle size distribution, bed shear stress, and settling velocity could be incorporated into the GEP model.

  • Emulsions and Foams: Understanding the stability and flow behavior of mixtures such as oil-water emulsions or foams is crucial in various industries. The GEP approach could be tailored by including features related to interfacial tension, droplet/bubble size, and rheological properties.

Challenges:

  • Data Availability and Quality: High-fidelity experimental or numerical data for training is crucial. Obtaining such data for complex multiphase flows can be expensive, time-consuming, and challenging due to measurement limitations.

  • Feature Selection and Engineering: Identifying the physics-informed features that most strongly influence the flow behavior is crucial for GEP model accuracy. This often requires domain expertise and iterative model refinement (a feature-substitution sketch follows this list).

  • Model Interpretability and Generalizability: While GEP offers more interpretability than black-box models such as neural networks, ensuring the physical meaningfulness of the generated expressions and their generalizability to different flow conditions remains a challenge.
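To make the adaptation concrete, here is a minimal, hypothetical sketch of swapping the input features per phenomenon before handing them to a GEP trainer. All field names, the feature registry, and the normalization choice are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Hypothetical feature registry: the paper's venturi case uses void fraction,
# time-averaged velocities, and their standard deviations; other multiphase
# problems would swap in their own physics-informed inputs.
FEATURES = {
    "cavitation":         ["void_fraction", "u_mean", "v_mean", "u_std", "v_std"],
    "boiling":            ["temperature_gradient", "heat_flux", "surface_tension"],
    "sediment_transport": ["particle_diameter", "bed_shear_stress", "settling_velocity"],
}

def assemble_inputs(fields: dict, phenomenon: str) -> np.ndarray:
    """Stack the selected raw fields into an (n_points, n_features) matrix,
    standardizing each column so GEP operates on comparable magnitudes."""
    cols = [np.asarray(fields[name], dtype=float) for name in FEATURES[phenomenon]]
    X = np.column_stack(cols)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # guard divide-by-zero
```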

Could the reliance on high-fidelity experimental data for training limit the applicability of this method in scenarios where such data is scarce or expensive to obtain?

Yes, the reliance on high-fidelity experimental data for training can indeed limit the applicability of this data-driven method, particularly where such data is scarce or expensive to obtain. A breakdown of the challenges and potential mitigation strategies:

Challenges:

  • Data Scarcity: In many engineering applications, especially those involving novel designs or extreme conditions, high-fidelity experimental data may be limited or non-existent.

  • Experimental Cost: Conducting high-fidelity experiments, such as those involving advanced measurement techniques like Particle Image Velocimetry (PIV) or Laser Doppler Velocimetry (LDV), can be prohibitively expensive.

  • Data-Specific Limitations: The GEP model's accuracy is inherently tied to the quality, quantity, and representativeness of the training data. If the training data covers only a narrow range of flow conditions, the model's ability to generalize to other scenarios may be compromised.

Mitigation Strategies:

  • Hybrid Approaches: Combining experimental data with high-fidelity numerical simulations (e.g., Large Eddy Simulation, LES) can augment the training dataset.

  • Transfer Learning: Leveraging pre-trained GEP models developed for similar flow phenomena or geometries, then fine-tuning them with limited experimental data for the specific application, can be effective.

  • Physics-Informed Machine Learning: Incorporating fundamental physical principles and constraints into the GEP framework can reduce the reliance on extensive data and improve model generalizability.

  • Data Augmentation Techniques: Applying data augmentation, such as adding noise or perturbing existing data points, can artificially increase the size and diversity of the training dataset (a noise-injection sketch follows this list).
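As a small illustration of the last strategy, here is a minimal noise-injection sketch, assuming measured velocity profiles stored as a NumPy array; the noise level and its scaling are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def augment_profiles(profiles: np.ndarray, n_copies: int = 5,
                     noise_level: float = 0.02, seed: int = 0) -> np.ndarray:
    """Create perturbed copies of measured velocity profiles by adding
    zero-mean Gaussian noise scaled to each column's standard deviation.
    `noise_level` is an assumed tunable, not a value from the paper."""
    rng = np.random.default_rng(seed)
    scale = noise_level * profiles.std(axis=0, keepdims=True)
    copies = [profiles + rng.normal(0.0, 1.0, profiles.shape) * scale
              for _ in range(n_copies)]
    return np.vstack([profiles, *copies])  # original data plus noisy copies
```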

What are the ethical implications of using AI-driven models in engineering design, particularly concerning potential biases in the training data and the interpretability of the model's predictions?

The increasing use of AI-driven models in engineering design, while promising, raises important ethical considerations.

Bias in Training Data:

  • Source of Bias: Training data often reflects real-world systems, which can embed historical biases related to design choices, operating conditions, or even societal prejudices. If not addressed, these biases can be perpetuated and amplified by the AI model.

  • Impact of Bias: Biased AI models can lead to unfair or unsafe outcomes. For instance, a model trained on data biased towards certain materials or manufacturing processes might overlook innovative solutions or perpetuate existing inequalities.

  • Mitigation: Carefully curating and pre-processing training data to identify and mitigate biases is essential. Techniques such as data balancing, re-sampling, and de-biasing algorithms can help create a more equitable representation.

Interpretability of Predictions:

  • Black-Box Problem: Many AI models, especially deep learning networks, are considered "black boxes" because their complex internal workings make it difficult to understand the reasoning behind their predictions.

  • Accountability and Trust: Lack of interpretability can hinder accountability if a design failure occurs. It also makes it challenging to build trust in the AI system's recommendations, especially in safety-critical applications.

  • Solutions: Employing more interpretable models such as GEP, using explainable AI (XAI) techniques to provide insight into model decisions, and establishing clear lines of responsibility for AI-driven design choices are crucial steps.

Additional Ethical Considerations:

  • Job Displacement: The automation potential of AI in engineering design raises concerns about job displacement and the need for workforce retraining.

  • Environmental Impact: The computational resources required to train and deploy some AI models can have a significant environmental footprint, particularly for energy-intensive deep learning algorithms.

  • Over-Reliance and Deskilling: Over-reliance on AI models without a proper understanding of their limitations could erode critical engineering judgment and problem-solving skills.

Addressing Ethical Implications:

  • Interdisciplinary Collaboration: Fostering collaboration between engineers, computer scientists, ethicists, and social scientists is crucial to developing guidelines and best practices for ethical AI development and deployment.

  • Transparency and Explainability: Prioritizing transparency in AI model development, providing clear explanations of model predictions, and enabling human oversight are essential.

  • Continuous Monitoring and Evaluation: Regularly monitoring AI system performance, evaluating for bias and fairness, and implementing mechanisms for feedback and improvement are crucial for responsible AI integration.