Core Concepts
The author explores the effectiveness of integrating translation concepts into ChatGPT prompts, finding that assigning the persona of a translator leads to the best performance among the prompts tested. However, providing a translation brief did not improve ChatGPT's translation quality as expected.
Abstract
This research delves into the impact of incorporating translation concepts in ChatGPT prompts. Findings suggest that while assigning the persona of a translator enhances performance, using a translation brief does not significantly improve translation quality. Human evaluation highlights issues with fluency, naturalness, reader-friendliness, and accuracy in machine-generated translations compared to human translations.
The study evaluates different prompts in ChatGPT for translation tasks. Results indicate that assigning the role of a translator yields better outcomes than other prompts tested. The research emphasizes the need to reconsider traditional translation tools in light of evolving technology and industry demands.
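To make the comparison concrete, here is a minimal sketch of how two of the prompt styles described above might be constructed. The exact wording the study used is not reproduced here; the function names, prompt texts, and the `brief` parameter are illustrative assumptions only.

```python
# Hypothetical reconstructions of two prompt styles compared in the study.
# The precise prompts used by the authors are not given here; these strings
# are illustrative placeholders, not the study's actual wording.

def persona_prompt(source_text: str, target_lang: str) -> str:
    """Prompt that assigns ChatGPT the persona of a translator."""
    return (
        f"You are a professional translator. "
        f"Translate the following text into {target_lang}:\n{source_text}"
    )

def brief_prompt(source_text: str, target_lang: str, brief: str) -> str:
    """Prompt that supplies a translation brief (audience, purpose, register)."""
    return (
        f"Translation brief: {brief}\n"
        f"Translate the following text into {target_lang}:\n{source_text}"
    )

# Example usage:
p1 = persona_prompt("Guten Morgen.", "English")
p2 = brief_prompt("Guten Morgen.", "English",
                  brief="Informal greeting for a general web audience.")
```

On the study's findings, a persona-style prompt like the first function outperformed the other prompt variants, while adding a brief as in the second did not yield the expected quality gains.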
Stats
"Findings show that assigning the persona as a translator allowed ChatGPT to achieve the best performance among the four prompts."
"For human evaluation comments, it is shown that while the main issues with ChatGPT-generated translations rest on the issues of fluency and naturalness."
"Results from automatic evaluation metrics and human grading forms provide complementary insights into the quality of the generated TTs."