Can Large Language Models Effectively Generate Parallel Code?
Large language models struggle to generate correct and efficient parallel code, with closed-source models such as GPT-3.5 and GPT-4 outperforming open-source models. They perform best on simple, structured parallel patterns such as transform and reduction, but falter on more complex parallel algorithms and on sparse, unstructured problems.
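For context, the sketch below illustrates what the two "easy" pattern classes, transform and reduction, typically look like in OpenMP C++. The function names and kernel bodies are illustrative placeholders, not taken from the study's benchmark.

```cpp
#include <cstddef>
#include <vector>

// Transform: apply an independent operation to each element.
// Every iteration is independent, so a single pragma parallelizes the loop.
void scale(std::vector<double>& x, double a) {
    #pragma omp parallel for
    for (std::size_t i = 0; i < x.size(); ++i) {
        x[i] *= a;
    }
}

// Reduction: combine all elements into a single value.
// The reduction clause gives each thread a private partial sum
// and combines them at the end of the loop.
double sum(const std::vector<double>& x) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (std::size_t i = 0; i < x.size(); ++i) {
        total += x[i];
    }
    return total;
}
```

Both kernels have regular, predictable memory access and no cross-iteration dependences, which is part of why they are easier targets than sparse or irregular computations, where correctness depends on reasoning about data layout and potential race conditions.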