Description |
Our communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, and produce entire articles. While the development and deployment of large language models is progressing rapidly, the social consequences remain largely unknown.
In this project we will discuss potential social risks posed by large language models, drawing on multidisciplinary literature from computer science, linguistics, and social sciences. We will look at approaches that critically probe machine learning systems and examine the impact technology may have on users and society.
After initial engagement with the relevant literature and tools, participants will, in small groups, design and conduct an audit and experiment probing a social risk of a large language model. The project concludes with writing sessions; the expected output is an initial draft of an investigative report or scientific paper. |
Prerequisites |
Basic programming knowledge is required. Prior exposure to data science tools, machine learning, and experimental methods is useful, but not a requirement.
Above all, participants should have a keen interest in interdisciplinary investigative work. |
Target group |
M.Sc. Computer Science and Media, M.Sc. Computer Science for Digital Media, M.Sc. Human Computer Interaction, B.Sc. Medieninformatik, B.Sc. Informatik, M.Sc. Digital Engineering |