| Description |
Large language models encode patterns of human behavior from their training data: opinions, social norms, economic reasoning, online interaction styles. This opens the possibility of using AI to simulate social processes. Researchers have started to use LLMs to replicate classic behavioral experiments, simulate survey respondents whose opinion distributions match real populations, generate community members who post and argue in online forums, and build agents with memory and planning that produce emergent social dynamics over time. In this project, we engage with the emerging field of LLM-based social simulation. We read the recent literature and develop our own simulation system: designing agent personas, writing prompt chains that generate behavior conditioned on those personas, and implementing a pipeline that produces synthetic interactions at scale. Depending on the group's focus, the simulation could take the form of a populated online community, a replicated behavioral experiment, an economic game, or an agent-based model of opinion dynamics. We evaluate when synthetic populations behave like real ones and what these simulations can and cannot tell us about human behavior. |
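The pipeline described above — personas, persona-conditioned prompt chains, and a loop that accumulates synthetic interactions — can be sketched in a few lines. This is a minimal illustration, not part of any of the cited systems: the `Persona` fields, the prompt format, and the round-robin turn order are all assumptions, and the `llm` callable stands in for whatever model client a group ends up using.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Persona:
    # Hypothetical persona schema; real projects may add demographics,
    # posting style, or survey-derived attributes.
    name: str
    age: int
    occupation: str
    stance: str  # e.g. "skeptical of remote work"

def build_prompt(persona: Persona, topic: str, history: List[str]) -> str:
    """Condition the model on a persona and the thread so far (assumed format)."""
    lines = [
        f"You are {persona.name}, a {persona.age}-year-old {persona.occupation}.",
        f"Your view: {persona.stance}.",
        f"Forum topic: {topic}",
        "Thread so far:",
        *[f"- {msg}" for msg in history],
        f"Write {persona.name}'s next reply in one or two sentences.",
    ]
    return "\n".join(lines)

def simulate_thread(personas: List[Persona], topic: str,
                    llm: Callable[[str], str], turns: int = 6) -> List[str]:
    """Round-robin over personas, appending each generated post to a shared history."""
    history: List[str] = []
    for t in range(turns):
        p = personas[t % len(personas)]
        reply = llm(build_prompt(p, topic, history))
        history.append(f"{p.name}: {reply}")
    return history
```

In practice `llm` would wrap an API client; because every post is appended to the shared history, later agents react to earlier ones, which is where emergent dynamics (agreement cascades, polarization) would show up if they occur.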
| Literature |
Park, Joon Sung, et al. "Social simulacra: Creating populated prototypes for social computing systems." UIST ’22.
Park, Joon Sung, et al. "Generative agents: Interactive simulacra of human behavior." UIST ’23.
Argyle, Lisa P., et al. "Out of one, many: Using language models to simulate human samples." Political Analysis 31.3 (2023).
Horton, John J. "Large language models as simulated economic agents: What can we learn from homo silicus?" NBER, 2023.
Aher, Gati V., et al. "Using large language models to simulate multiple humans and replicate human subject studies." PMLR, 2023.