Core Concepts
HCI researchers are increasingly integrating large language models (LLMs) across their research workflows, from ideation to system development and paper writing. While researchers acknowledge a range of ethical concerns, they often struggle to identify and address those issues in their own projects.
Abstract
The study examines how HCI researchers use large language models (LLMs) across the stages of their research process and the ethical considerations they encounter along the way.
Key highlights:
HCI researchers use LLMs for ideation, literature review, study design, data analysis, system building and evaluation, and paper writing. They perceive LLMs as opening up new possibilities for their research.
Researchers express a range of ethical concerns, including:
- Harms of engaging with LLM outputs (e.g., biased, discriminatory, or fabricated information)
- Threats to the privacy of participant data
- Violations of intellectual integrity (e.g., ambiguity around ownership of LLM-generated content)
- Overtrust in and overreliance on LLMs
- Environmental and societal impacts
However, researchers often struggle to identify and address these ethical concerns in their own projects. Instead, they fall back on strategies such as conditional and reactive engagement with ethics, limited disclosure practices, restricting their LLM use, and deferring responsibility to others.
The findings highlight the need for the HCI community to engage proactively with research ethics around LLM use, through mechanisms such as IRB involvement, the development of ethical frameworks, and shifts in academic incentives.
Stats
"LLMs are useful but not creative" (P15)
"we have all these models that are competing against each other, that's like millions of dollars of electricity and computer components, [which is] really bad for the environment" (P2)
"I think many people are very hasty to say yes or no. And I think that's not the answer. The answer is always in a gray area." (P2)
"To me, it is not gonna be helpful in researchers assessing the credibility or validity of the work. It is just like a meta issue about how the actual document was formed and refined." (P10)
Quotes
"the main problem is that I don't know what bias it has, and I don't know how to figure it out." (P16)
"the challenge is the regulation needs to come much later rather than early. If you come up with regulations too early, you may kill innovation and on the other hand, you don't know what you want to regulate because the representation of the product hasn't been stabilized yet." (P12)
"LLMs are still an unknown territory, so people don't know how to react, I assume." (P9)