Emergent Properties in Large Language Models
Prof. Michal Kosinski
Associate Professor of Organizational Behavior
Graduate School of Business
Stanford University
Large Language Models (LLMs) trained to predict the next word in a sentence have surprised their creators by displaying emergent properties, ranging from susceptibility to human-like biases to the ability to write computer code and solve mathematical problems. This talk discusses the results of several studies evaluating LLMs’ performance on tasks typically used to study human psychological processes. The findings indicate that as LLMs increase in size and linguistic ability, they can navigate false-belief scenarios, sidestep semantic illusions, and tackle cognitive reflection tasks. The talk explores what these emergent properties reveal about the nature of intelligence—human and artificial—and what they might mean for the future of technology, society, and our understanding of the mind itself.
