“The Mirror and the Mentor”: Using LLMs to Explain Human Decision-Making at Work
This is a joint seminar organized by HKU Business School’s IIM Area and Institute of Digital Economy & Innovation (IDEI).
Professor Alok Gupta
Senior Associate Dean of Faculty, Research and Administration
Curtis L. Carlson Schoolwide Chair in Information Management
Carlson School of Management, University of Minnesota
As AI systems become increasingly embedded in decision-making, a significant amount of research attention has focused on explainable AI (XAI): making machine decisions understandable to humans. But what if a more transformative opportunity lies in reversing this lens? In this talk, I argue that AI systems, including large language models (LLMs), when designed and used appropriately, can serve as powerful tools to explain human decisions, often better than humans can explain themselves.
Drawing on a series of published and unpublished studies, I will first discuss how human limitations, such as poor metaknowledge, impair effective collaboration with AI, especially in task delegation. These cognitive blind spots are not mere user-interface problems but fundamental constraints on human-AI synergy. Next, I will explore how AI systems can mitigate these challenges, not only by advising humans better but also by selectively choosing when to advise. Building on these ideas, I introduce new empirical findings showing that LLMs can externalize tacit human knowledge from observed behavior more effectively than humans can articulate it themselves. This has profound implications for training, team design, and knowledge transfer. Rather than relying solely on human introspection, organizations can use LLMs to model the “how” behind expert decisions, enabling workflows that are more scalable, accurate, and interpretable for machines and humans alike.
Taken together, these results invite a reimagination of AI’s role: not just a tool to augment decisions, but a cognitive mirror and mentor that helps us better understand and organize human judgment in the future of work.