The use of Large Language Models (LLMs) for writing has sparked controversy among both writers and readers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, writers who genuinely want to use LLMs must conform to publisher policies for AI-assisted writing, and readers need assurance that a text has been verified by a human. We argue that a system that captures the provenance of a writer's interactions with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI to publishers and readers transparently. We therefore propose HaLLMark, a tool for facilitating and visualizing writers' interactions with LLMs. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the written text.