
Transformer-Based Deep Learning Model for Predicting Bored Pile Load-Deformation Behavior in Bangkok Subsoil


Core Concepts
A novel transformer-based deep learning model can accurately predict the load-deformation behavior of large bored piles in Bangkok's complex subsoil conditions.
Abstract
The study proposes a conditional transformer-based model (CTM) to simulate the load-deformation behavior of bored piles in Bangkok's subsoil. The key highlights are:
- The model encodes the soil profile and pile features as tokenized input and generates the load-deformation curve as output.
- It also incorporates the previous sequential data of the load-deformation curve into the decoder to improve prediction accuracy.
- The model shows satisfactory accuracy and generalization ability, with a mean absolute error of 5.72% for the test data.
- It can be used for parametric analysis and design optimization of piles under different soil, pile, and loading conditions.
- The trained CTM is publicly available on GitHub, allowing geotechnical engineers to simulate pile load-deformation behavior in subsoil models across different regions.
The study contributes to the field of geotechnical engineering by providing a valuable tool for accurately predicting pile behavior, which is essential for the optimal design and construction of deep foundation systems.
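The paper's implementation is not reproduced here, but a minimal PyTorch sketch of the general idea described in the abstract — an encoder over tokenized soil-profile and pile-feature inputs, and a decoder that predicts the next points on the load-deformation curve conditioned on the previous ones — might look like the following. All layer sizes, vocabulary sizes, and names (e.g. `ConditionalTransformer`, `soil_pile_tokens`) are illustrative assumptions, not the published CTM architecture.

```python
# Hypothetical sketch of a conditional transformer for load-deformation prediction.
# Dimensions, layer counts, and names are assumptions, not the published CTM.
import torch
import torch.nn as nn

class ConditionalTransformer(nn.Module):
    def __init__(self, n_feature_tokens=32, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Embed tokenized soil-profile / pile-feature IDs into the model dimension.
        self.feature_embed = nn.Embedding(n_feature_tokens, d_model)
        # Project a previous (load, deformation) point into the model dimension.
        self.point_embed = nn.Linear(2, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        # Regress the next (load, deformation) point at each decoder position.
        self.head = nn.Linear(d_model, 2)

    def forward(self, soil_pile_tokens, prev_points):
        # soil_pile_tokens: (batch, n_tokens) integer IDs describing soil layers and pile geometry
        # prev_points:      (batch, seq_len, 2) previously observed points on the curve
        memory_in = self.feature_embed(soil_pile_tokens)
        tgt = self.point_embed(prev_points)
        # Causal mask so each step only attends to earlier points on the curve.
        mask = self.transformer.generate_square_subsequent_mask(prev_points.size(1))
        out = self.transformer(memory_in, tgt, tgt_mask=mask)
        return self.head(out)  # predicted next points, (batch, seq_len, 2)

model = ConditionalTransformer()
tokens = torch.randint(0, 32, (1, 10))   # dummy soil/pile feature tokens
points = torch.zeros(1, 5, 2)            # dummy previous load-deformation points
print(model(tokens, points).shape)       # torch.Size([1, 5, 2])
```

Used generatively, such a model would be fed the measured or assumed initial points of the curve and rolled forward one point at a time, analogous to how a language model extends a prompt.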
Stats
The subsoil of Bangkok presents significant geotechnical challenges due to the presence of alternating clay and sand layers with high spatial variability and compressibility.
The cost per pile loading test in Bangkok ranges from 17,000 USD to 28,000 USD for large rectangular piles.
The circular piles had diameters ranging from 0.8 to 1.8 m, while the barrettes had widths of 1 or 1.2 m and cross-section lengths of 3 or 3.8 m.
The pile lengths varied from 39 to 60 m, depending on whether the pile tip was to be placed in a sand or stiff clay layer.
Quotes
"The novel approach should be able to generate the load-deformation curve at different loading levels, different pile lengths and dimensions (diameter or width of pile) and soil profile." "To the best of our knowledge, there is scarce research on using Transformer-based generative models as predictive models in geotechnical engineering." "The trained deep learning model can be used as a generative model similar to large language models. By inputting the initial soil profile and feature, the generative model can predict desired geotechnical outcomes."

Deeper Inquiries

How can the proposed transformer-based model be extended to predict the behavior of other geotechnical systems, such as retaining walls or foundations, in different soil conditions?

The proposed transformer-based model can be extended to other geotechnical systems by adapting the input data and model architecture to the specific characteristics of retaining walls or foundations. For retaining walls, the input tokenization can include parameters such as wall height, soil properties, and loading conditions, and the model can be trained to predict wall deflection or stability under varying soil conditions and external loads. Similarly, for foundations, input features such as foundation type, depth, soil properties, and applied loads can be encoded to predict settlement, bearing capacity, or overall structural response.

To extend the model to a different geotechnical system, the training data should be expanded to include a diverse range of scenarios and conditions relevant to that system. This would involve collecting comprehensive datasets of soil profiles, structural properties, and performance data for retaining walls or foundations. By incorporating this varied data into the training process, the model can learn to generalize and make accurate predictions across different geotechnical systems.
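As an illustration of what such an adapted input encoding could look like, the sketch below discretizes hypothetical retaining-wall parameters (wall height, per-layer soil properties, surcharge load) into a flat token sequence that an encoder could consume. The field names, value ranges, and binning scheme are assumptions made up for this example, not part of the published model.

```python
# Hypothetical tokenization of retaining-wall inputs; fields and bins are illustrative only.
import numpy as np

def tokenize_retaining_wall(wall_height_m, soil_layers, surcharge_kpa, n_bins=16):
    """Discretize wall geometry, layered soil properties, and loading into integer tokens.

    soil_layers: list of (thickness_m, unit_weight_kn_m3, friction_angle_deg) tuples.
    Each quantity is clipped to an assumed range and binned into n_bins levels.
    """
    def bin_value(x, lo, hi):
        return int(np.clip((x - lo) / (hi - lo), 0.0, 1.0) * (n_bins - 1))

    tokens = [bin_value(wall_height_m, 2.0, 20.0),   # wall height token
              bin_value(surcharge_kpa, 0.0, 100.0)]  # surcharge load token
    for thickness, gamma, phi in soil_layers:
        tokens += [bin_value(thickness, 0.5, 15.0),  # layer thickness
                   bin_value(gamma, 14.0, 22.0),     # unit weight
                   bin_value(phi, 0.0, 45.0)]        # friction angle
    return np.array(tokens, dtype=np.int64)

# Example: a 6 m wall retaining two soil layers under a 20 kPa surcharge.
print(tokenize_retaining_wall(6.0, [(3.0, 18.0, 30.0), (5.0, 20.0, 35.0)], 20.0))
```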

What are the potential limitations of the transformer architecture in handling highly complex and nonlinear soil-structure interaction problems, and how can these be addressed?

One potential limitation of the transformer architecture in handling highly complex and nonlinear soil-structure interaction problems is the computational cost of training and inference. The self-attention mechanism processes all input tokens simultaneously, leading to high memory and computational demands, especially for large datasets. This can pose challenges when dealing with extensive geotechnical datasets or when analyzing intricate soil-structure interaction phenomena. To address this limitation, several strategies can be employed:
- Reducing model complexity: Simplifying the transformer architecture by adjusting the number of layers, attention heads, or parameters can mitigate computational requirements while maintaining model performance.
- Data preprocessing: Techniques such as feature selection, dimensionality reduction, or data augmentation can streamline the input data and make it more manageable for the model.
- Transfer learning: Leveraging pre-trained transformer models, or fine-tuning existing models on geotechnical data, can reduce the computational burden of training from scratch (a minimal fine-tuning sketch follows this answer).
- Parallel processing: Distributed computing or parallel processing can accelerate model training and inference by spreading computations across multiple processors or GPUs.
By implementing these strategies, the transformer architecture can be optimized to handle complex soil-structure interaction problems efficiently and effectively.
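The following is a minimal sketch of the transfer-learning idea mentioned above: reuse a previously trained transformer encoder, freeze its weights, and fine-tune only a small task-specific head on new geotechnical data. The checkpoint name, dimensions, and dummy data are assumptions for illustration; no pre-trained geotechnical transformer is implied by the source.

```python
# Illustrative transfer-learning setup: freeze a pre-trained encoder, fine-tune a new head.
# "pretrained_encoder.pt" is a hypothetical checkpoint name.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # weights from a prior task

for p in encoder.parameters():        # freeze the pre-trained encoder
    p.requires_grad = False

head = nn.Linear(64, 1)               # new task-specific regression head
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, 10, 64)            # dummy batch: 8 samples, 10 tokens, 64 features
y = torch.randn(8, 1)                 # dummy regression targets
pred = head(encoder(x).mean(dim=1))   # pool token embeddings, then regress
loss = nn.functional.mse_loss(pred, y)
loss.backward()                       # gradients flow only into the small head
optimizer.step()
print(loss.item())
```

Because only the head's parameters are updated, the training cost per step is a fraction of training the full transformer from scratch.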

Given the high computational requirements of transformer models, what strategies can be explored to develop more efficient and scalable deep learning approaches for geotechnical applications?

To develop more efficient and scalable deep learning approaches for geotechnical applications, the following strategies can be explored:
- Model compression: Techniques such as quantization, pruning, and knowledge distillation can reduce the size and computational complexity of deep learning models, including transformers, without significantly compromising performance (a minimal quantization sketch follows this list).
- Architectural optimization: Customizing the transformer architecture by modifying attention mechanisms, introducing sparsity, or incorporating domain-specific knowledge can improve efficiency and reduce computational overhead.
- Hybrid models: Combining transformers with simpler architectures such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs) can enhance performance while reducing computational demands.
- Hardware acceleration: Specialized hardware such as GPUs, TPUs, or dedicated AI accelerators can speed up model training and inference.
- Incremental learning: Updating models gradually with new data reduces the need to retrain the entire model from scratch, saving computational resources.
By combining these strategies, researchers and practitioners can develop deep learning approaches that are efficient, scalable, and well suited to geotechnical applications despite the high computational requirements of transformer models.
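As one concrete example of model compression, PyTorch's post-training dynamic quantization converts the weights of linear layers to 8-bit integers. The network below is a placeholder standing in for a trained model — the paper does not report using quantization — but the same call applies to the nn.Linear layers inside a trained transformer.

```python
# Minimal demonstration of post-training dynamic quantization as model compression.
# The placeholder network stands in for a trained model; this is not from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
model.eval()

# Convert Linear weights to int8; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(4, 64)
print(model(x).shape, quantized(x).shape)  # same output shape, smaller weights on CPU
```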