Sound Research WIKINDX

WIKINDX Resources

Jiang, L., Chai, Y., Li, M., Liu, M., Fok, R., Dziri, N., et al. (2025, December 2–7). Artificial hivemind: The open-ended homogeneity of language models (and beyond). Paper presented at the 39th Conference on Neural Information Processing Systems. 
Added by: Mark Grimshaw-Aagaard (12/15/25, 12:38 AM)   Last edited by: Mark Grimshaw-Aagaard (12/15/25, 12:46 AM)
Resource type: Proceedings Article
Language: en: English
Peer reviewed
BibTeX citation key: Jiang2025
Categories: General
Creators: Albalak, Chai, Choi, Dziri, Fok, Jiang, Li, Liu, Sap, Tsvetkov
Collection: 39th Conference on Neural Information Processing Systems
Abstract
"Language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. We introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. We introduce the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising 6 top-level categories (e.g., brainstorm & ideation) that further break down into 17 subcategories. Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and, even more so, (2) inter-model homogeneity, where different models produce strikingly similar outputs. Infinity-Chat also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables studying collective and individual-specific human preferences in response to open-ended queries. Our findings show that LMs, reward models, and LM judges are less well calibrated to human ratings on model generations that elicit differing idiosyncratic annotator preferences, despite maintaining comparable overall quality. Overall, Infinity-Chat presents the first large-scale resource for systematically studying real-world open-ended queries to LMs, revealing critical insights to guide future research for mitigating long-term AI safety risks posed by the Artificial Hivemind."
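The abstract distinguishes intra-model repetition (similar responses from repeated sampling of one model) from inter-model homogeneity (similar responses across different models). The paper's record here does not specify the similarity metric used, so the following is only an illustrative sketch of how such homogeneity could be quantified, using mean pairwise lexical similarity over toy, hypothetical response sets (all strings and model names below are invented for illustration):

```python
from difflib import SequenceMatcher
from itertools import combinations

def mean_pairwise_similarity(texts):
    """Mean lexical similarity (0..1) over all unordered pairs of responses.

    SequenceMatcher.ratio() is a simple stand-in; real diversity studies
    typically use embedding- or n-gram-based metrics instead.
    """
    pairs = list(combinations(texts, 2))
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical intra-model case: repeated samples from one model, one prompt.
samples_model_a = [
    "The moon hung low, a silver coin over the harbor.",
    "The moon hung low like a silver coin above the harbor.",
    "A silver coin of a moon hung low over the harbor.",
]

# Hypothetical inter-model case: one sample each from several different models.
samples_across_models = [
    "The moon hung low, a silver coin over the harbor.",
    "The moon hung low, a pale coin over the bay.",
    "Low hung the moon, a silver coin above the water.",
]

print(f"intra-model similarity: {mean_pairwise_similarity(samples_model_a):.2f}")
print(f"inter-model similarity: {mean_pairwise_similarity(samples_across_models):.2f}")
```

High scores in both settings would correspond to the two facets of the Artificial Hivemind effect the abstract describes; a single averaged number like this is, of course, far coarser than the dataset-scale analysis the paper reports.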