This paper presents a study on integrating domain-specific knowledge into prompt engineering to improve the performance of large language models (LLMs) in scientific domains. The authors curated a benchmark dataset spanning physical-chemical properties of small molecules, druggability for pharmacology, functional attributes of enzymes, and crystal material properties, underscoring the method's relevance across biological and chemical domains.
The proposed domain-knowledge embedded prompt engineering method outperforms traditional prompt engineering strategies on several metrics, including capability, accuracy, F1 score, and reduction of hallucination. Its effectiveness is demonstrated through case studies on complex materials, including the MacMillan catalyst, paclitaxel, and lithium cobalt oxide.
The results suggest that domain-knowledge prompts can guide LLMs toward more accurate and relevant responses, highlighting the potential of LLMs as tools for scientific discovery and innovation when equipped with domain-specific prompts. The study also discusses limitations and future directions for domain-specific prompt engineering.
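The paper does not reproduce its prompt templates here, but the core idea of domain-knowledge embedded prompting can be illustrated with a minimal sketch. The function name, template wording, and example facts below are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of domain-knowledge embedded prompting.
# The template and helper name are assumptions for illustration only;
# they do not reproduce the paper's actual prompts.

def build_domain_prompt(question: str, domain_facts: list[str]) -> str:
    """Prepend curated domain knowledge to the question so the model
    grounds its answer in stated facts instead of free association."""
    knowledge = "\n".join(f"- {fact}" for fact in domain_facts)
    return (
        "You are a chemistry assistant. Ground your answer in the "
        "domain knowledge below.\n\n"
        f"Domain knowledge:\n{knowledge}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example usage with hypothetical facts about one of the paper's
# case-study molecules (paclitaxel):
prompt = build_domain_prompt(
    "Is paclitaxel likely to be highly water-soluble?",
    [
        "Paclitaxel is a large, highly lipophilic diterpenoid.",
        "High molecular weight and high lipophilicity generally "
        "correlate with poor aqueous solubility.",
    ],
)
print(prompt)
```

A plain prompt would contain only the question; embedding vetted domain facts narrows the model's answer space, which is the mechanism the paper credits for the accuracy gains and hallucination reduction.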
Key ideas extracted from the source by Hongxuan Liu... at arxiv.org, 04-24-2024.
https://arxiv.org/pdf/2404.14467.pdf