Fujii, S., & Yamagishi, R. (2024). Feasibility Study for Supporting Static Malware Analysis Using LLM. In Workshop on Security and Artificial Intelligence (SECAI 2024).
This research investigates the feasibility of using large language models (LLMs), specifically ChatGPT (GPT-4), to assist security analysts with static malware analysis. The study asks whether LLMs can generate accurate and helpful explanations of malware functionality, and thereby improve the efficiency and effectiveness of static analysis.
The researchers selected a ransomware sample (Babuk) for which a public analysis article was available to serve as reference text. They decompiled and disassembled the malware with Ghidra and fed the results to ChatGPT under various prompts to generate explanatory text for each function. Accuracy was assessed by function coverage and by BLEU and ROUGE scores computed against the analysis article. In addition, a user study with six security analysts was conducted: the analysts performed a simulated static analysis of the malware using the ChatGPT-generated explanations alongside the decompiled/disassembled output, and their feedback was collected through questionnaires and interviews to evaluate the practicality and usefulness of LLM assistance in a realistic setting.
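The paper does not reproduce its exact prompts or scoring code, so the following is only a minimal sketch of that kind of pipeline, assuming the OpenAI chat API and the nltk and rouge-score packages. The prompt wording and the helper names (explain_function, score_explanation) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: prompts, model parameters, and scoring setup
# are assumptions, not the paper's exact configuration.
from openai import OpenAI                                               # pip install openai
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction  # pip install nltk
from rouge_score import rouge_scorer                                    # pip install rouge-score

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def explain_function(decompiled_code: str) -> str:
    """Ask GPT-4 for a plain-language explanation of one decompiled function."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a malware analyst. Explain concisely what "
                        "this decompiled function does."},
            {"role": "user", "content": decompiled_code},
        ],
    )
    return response.choices[0].message.content


def score_explanation(generated: str, reference: str) -> dict:
    """Compare a generated explanation against reference analysis text."""
    # BLEU over whitespace tokens, smoothed for short sentences.
    bleu = sentence_bleu(
        [reference.split()], generated.split(),
        smoothing_function=SmoothingFunction().method1,
    )
    # ROUGE-L F-measure from Google's rouge-score package.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge_l = scorer.score(reference, generated)["rougeL"].fmeasure
    return {"bleu": bleu, "rougeL": rouge_l}
```

In a setup like this, each Ghidra-decompiled function would be passed through explain_function and the result scored against the corresponding passage of the reference article; function coverage would then simply be the fraction of functions for which the model produced a usable explanation.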
This study demonstrates the potential of LLMs as valuable assistants in static malware analysis. The findings suggest that LLMs can generate accurate and helpful explanations of malware functionality, making the analysis process more efficient. However, challenges remain before wider practical adoption, notably concerns about the confidentiality of sensitive information and the need for seamless integration with existing analysis tools.
This research contributes to the growing body of work exploring the applications of LLMs in cybersecurity. It provides valuable insights into the potential benefits and challenges of using LLMs for static malware analysis, paving the way for the development of more sophisticated and user-friendly LLM-based security tools.
The study was limited to a single malware sample and a small number of participants. Future research should involve a wider range of malware families and a larger, more diverse group of analysts to validate the generalizability of the findings. Further investigation is needed to address the identified challenges, such as developing methods for handling sensitive information and improving the integration of LLM outputs with existing analysis workflows.