Key concepts
Developing robust and explainable models in computer vision is crucial for trustworthy AI applications.
Summary
The thesis discusses the importance of robust and explainable computer vision models for trustworthy AI applications. It covers the challenges of deploying machine learning (ML) and deep learning (DL) models in practice, focusing on robustness, explainability, and reliability. It develops interpretable classifiers for convolutional neural networks (CNNs) based on radial basis function (RBF) networks, detects adversarial attacks using feature response maps, and reduces motion artifacts in medical imaging. It also covers automated machine learning and deep learning, with successful applications in affective computing, medical imaging, and fairness in face recognition systems.
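To make the RBF-based interpretable classifier mentioned above concrete, the following is a minimal PyTorch sketch, assuming a frozen ResNet-18 feature extractor and hypothetical names such as RBFHead and num_prototypes; it illustrates the general idea of classifying by distance to learned prototypes, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): an RBF classification head on
# top of frozen CNN features. Class scores come from distances to learned
# prototypes, so each prediction can be traced to its nearest prototypes.
import torch
import torch.nn as nn
import torchvision.models as models


class RBFHead(nn.Module):
    def __init__(self, feature_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learnable prototype vectors living in the CNN feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feature_dim))
        # Per-prototype width (inverse length scale) of the Gaussian kernel.
        self.log_gamma = nn.Parameter(torch.zeros(num_prototypes))
        # Linear readout from prototype activations to class logits.
        self.readout = nn.Linear(num_prototypes, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from each feature vector to each prototype.
        dists = torch.cdist(features, self.prototypes).pow(2)
        # Gaussian RBF activation: near 1 close to a prototype, near 0 far away.
        activations = torch.exp(-torch.exp(self.log_gamma) * dists)
        return self.readout(activations)


# Frozen backbone as a fixed feature extractor; pretrained weights would
# normally be loaded, weights=None just keeps the sketch self-contained.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()
backbone.eval()

head = RBFHead(feature_dim=512, num_prototypes=32, num_classes=10)

with torch.no_grad():
    features = backbone(torch.randn(4, 3, 224, 224))  # dummy image batch
logits = head(features)
print(logits.shape)  # torch.Size([4, 10])
```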
Statistics
Recent breakthroughs in machine and deep learning research have provided tools for leveraging data and optimizing models.
Deep convolutional neural networks are vulnerable to image transformations and intentional manipulations (a sketch of one such manipulation follows this list).
Model robustness must be established before models can be certified and deployed in practice.
The thesis presents advances in the robustness and explainability of computer vision models.
Several applications of ML and DL in medical imaging and affective computing are demonstrated.
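As an illustration of the vulnerability noted above, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known intentional manipulation; it is an assumed example and not necessarily the attack family studied in the thesis.

```python
# Illustrative FGSM sketch: a single gradient step on the input pixels can
# change a CNN's prediction while leaving the image visually unchanged.
import torch
import torch.nn as nn
import torchvision.models as models

# Pretrained weights would normally be loaded; weights=None keeps the sketch offline.
model = models.resnet18(weights=None).eval()
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy input image
label = model(image).argmax(dim=1)                       # the model's own clean prediction

# Nudge every pixel by +/- epsilon in the direction that increases the loss.
loss = loss_fn(model(image), label)
loss.backward()
epsilon = 4 / 255
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print(label.item(), model(adversarial).argmax(dim=1).item())  # predictions often disagree
```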
Quotes
"Developing interpretable and explainable models has become a research focus in its own right."
"Trustworthy AI literature summarizes the research effort as developing models aligned with positive societal effects."
"Understanding the behavior of computer vision models is crucial for monitoring the reasons for failures."