2025 Oxford Seminar: AI x Philosophy
Open-source syllabus and a chance to participate in an Oxford seminar with leading AI builders and philosophers
Co-delivered by Professor Philipp Koralus, Director of the Laboratory for Human-Centered AI (HAI Lab) at the University of Oxford, and Brendan McCord, Chair of the Cosmos Institute, this seminar brings together leading thinkers from philosophy and the cutting edge of AI development.
Each week, distinguished visitors from Midjourney, Google DeepMind, Prime Intellect, Apple, and academia will engage with seminar participants in substantive dialogue about how we might embed human flourishing into the global technology development pipeline from first principles.
From truth-seeking AI systems to collective intelligence, from privacy concerns to democratic deliberation, we'll explore how philosophical frameworks can guide the development of technologies that enhance rather than diminish human autonomy.
The seminar is primarily designed for philosophy and computer science graduate students, though participants from other disciplines with relevant interests are welcome. Space is intentionally limited to foster meaningful discussion and intellectual community.
Independent project opportunity: We will be awarding grants to select students for independent summer building projects on related themes via the fast grants arm of Cosmos Institute.
How to join virtually: We have a limited number of virtual attendee slots available—if you're interested, you can apply here. [Update: applications are now closed]
Overview
Week 1: Truth-seeking AI
Instructors: Philipp Koralus (HAI Lab) and Brendan McCord (Cosmos)
How should AI systems be designed to support truth-seeking?
Readings:
Week 2: The Inquiry Complex
Visitor: Jules Desai (HAI Lab)
How do humans and AI systems engage in inquiry, and what structures support effective knowledge-seeking?
Readings:
Plato, Meno (excerpt on Meno’s paradox)
Koralus, “The Philosophic Turn for AI Agents: Replacing Centralized Digital Rhetoric with Decentralized Truth-Seeking” (link)
Week 3: Privacy and the Future of AI
Visitors: Helen Nissenbaum (Cornell) and Carina Peng (Apple)
What does privacy mean in an age of AI?
Readings:
Week 4: Decentralized AI & Scientific Discovery
Visitor: Vincent Weisser (Prime Intellect)
How might decentralized superintelligence transform scientific discovery?
Readings:
Polanyi, “The Republic of Science” (link)
“INTELLECT-1: The First Decentralized Training of a 10B Parameter Model” (link)
“Accelerating Scientific Breakthroughs with an AI Co-Scientist” (link)
“The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery” (link)
“DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning” (link)
Week 5: Collective Intelligence
Visitor: Ivan Vendrov (Midjourney)
How can AI systems enhance collective human intelligence?
Readings:
Hayek, “The Creative Powers of a Free Civilization”
Stray, Vendrov, Nixon, Adler, Hadfield-Menell, “What are You Optimizing For? Aligning Recommender Systems with Human Values.” (link)
Optional:
Week 6: The Habermas Machine
Visitors: Chris Summerfield (Oxford and AI Security Institute), MH Tessler (Google DeepMind)
Can AI support democratic deliberation and public discourse?
Readings:
Habermas, The Structural Transformation of the Public Sphere (short excerpt)
Summerfield et al., “AI Can Help Humans Find Common Ground in Democratic Deliberation.” (link)
Optional:
Summerfield et al., “How Will Advanced AI Systems Impact Democracy?” (link)
Week 7: AI and Human Autonomy
Visitor: Bethanie Drake-Maples (Stanford HAI)
What happens to human autonomy as AI systems become more capable?
Readings:
Humboldt, The Sphere and Duties of Government, Ch. 2, “Of the Individual Man and the Highest Ends of his Existence”
Tocqueville, Democracy in America, Volume 2, Part 4, Ch. 6, “What Kind of Despotism Democratic Nations Have to Fear”
Maples, “Designing for Human Autonomy in an Age of AI” (presentation of research and framework for design)
Week 8: Project Clinic
Instructors: Brendan McCord (Cosmos), Philipp Koralus (HAI Lab), and the HAI Lab team
How can philosophical insights about AI be translated into concrete projects?
The final session will provide structured group discussion for those who plan to submit an application for an independent summer building project on related themes, in collaboration with the fast grants arm of Cosmos Institute.
Preparation: Draft a Cosmos Ventures application for feedback (optional)
Biographies of Instructors
Philipp Koralus is the McCord Professor of Philosophy and AI at the University of Oxford and Director of the Oxford Human-Centered AI Lab (HAI Lab). Previously, he was the Fulford Clarendon Professor of Philosophy and Cognitive Science at the University of Oxford and a Fellow of St. Catherine's College. His research, including his recent book Reason and Inquiry, focuses on the nature of reason. Koralus has developed a new mathematical framework for understanding human-like reasoning in both its successes and its failures, one that sheds new light on standards of rationality for AI systems. He holds a Ph.D. in Philosophy and Neuroscience from Princeton University and has collaborated extensively with computer scientists, linguists, and psychologists.
Brendan McCord is the founder and Chair of Cosmos Institute and Cosmos Holdings, and a key thinker at the intersection of AI and philosophy. In the private sector, Brendan was the founding CEO of two AI startups that were acquired for $400 million. In the public sector, Brendan was the principal founder of the first applied AI organization for the US Department of Defense and author of its first AI strategy. Brendan is a graduate of MIT and Harvard Business School and was a Visiting Fellow at St Catherine's College at the University of Oxford. After MIT, he spent 610 days underwater on a submarine. He lives in Austin, TX with his wife and two children.
Biographies of Visitors
Jules Desai is a philosopher, computer scientist, and electronic musician. His research spans advanced reasoning capabilities in LLMs, the development of neurally inspired, energy- and compute-efficient machine learning models, and the intersection of logic, AI, and the philosophies of Kant, Heidegger, and Wittgenstein. A former researcher at the Oxford Internet Institute, Jules recently concluded an academic hiatus, during which he composed and produced four albums of electro-acoustic music blending techno, classical, and jazz. He holds a BPhil in Philosophy, focusing on cognitive science, Wittgenstein, and Heidegger, and an MPhysPhil in Physics and Philosophy, both from the University of Oxford, where he received two Gibbs Prizes for outstanding achievement.
Helen Nissenbaum is the Andrew H. and Ann R. Tisch Professor at Cornell Tech and in the Information Science Department at Cornell University. She is also Director of the Digital Life Initiative, which was launched in 2017 at Cornell Tech to explore societal perspectives surrounding the development and application of digital technology, focusing on ethics, policy, politics, and quality of life. Her own research takes an ethical perspective on policy, law, science, and engineering relating to information technology, computing, digital media and data science. Topics have included privacy, trust, accountability, security, and values in technology design. Her books include Obfuscation: A User's Guide for Privacy and Protest, with Finn Brunton (MIT Press, 2015) and Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford, 2010). Grants from the NSF, AFOSR, and the U.S. DHHS-ONC have supported her work. Nissenbaum holds a Ph.D. in philosophy from Stanford University and a B.A. (Hons) in philosophy and mathematics from the University of the Witwatersrand, South Africa.
Carina Peng is a machine learning engineer at Apple with a diverse academic and project background. While completing her B.A. Honors degree in Computer Science and Philosophy at Harvard College, she helped kickstart Harvard's Undergraduate Data Analytics Group and led programming and outreach for the Harvard College China Forum. Carina has helped build the homegrown software system that runs all Tesla Gigafactories, statistical applications used across epidemic intelligence at the World Health Organization, and an algorithmic pricing engine at QuantCo. She was a John Harvard Scholar and a Mahindra Fellow, and studied abroad at Peking University and the University of Oxford.
Vincent Weisser is the Founder and CEO of Prime Intellect, a decentralized AI platform working to commoditize compute and intelligence. He also serves as an advisor at Molecule AG, focusing on AI, crypto-economics, and ecosystem development, and as Chief Ecosystem + AI at Molecule, a platform advancing drug development and therapeutics in the pharmaceutical and biotech industry. As a Founding Steward at bio.xyz, Vincent helps accelerate science projects and biotech collectives. He is also a Founding Steward at VitaDAO, a decentralized collective funding longevity research, and was a founding member of dex.blue, a decentralized exchange providing a professional trading experience. Vincent has a background in AI and has pursued coursework in AI Safety Fundamentals.
Ivan Vendrov leads the collective intelligence team at Midjourney, which is building AI tools to help people better understand and coordinate with each other. Previously, he was a member of the technical staff at Anthropic, working on the safe deployment of advanced AI systems. Prior to Anthropic, he was the founder and CTO of Omni and a researcher at Google Research and the University of Toronto. Ivan received his Bachelor's (double honours) in mathematics and computer science from the University of Saskatchewan and a Master's in computer science from the University of Toronto.
Christopher Summerfield has one foot in cognitive neuroscience, studying the human brain as a professor of cognitive neuroscience at the University of Oxford, and the other in AI research, helping to build intelligent systems as a staff research scientist at Google DeepMind. He has won several awards, including the prestigious Cognitive Neuroscience Society Young Investigator Award in 2015, and is regularly invited to give keynote talks across the world. Christopher has published over 100 peer-reviewed articles, reviews, and book chapters, and his academic book, Natural General Intelligence: How Understanding the Brain Can Help Us Build AI, was widely acclaimed. His first book for a general readership is These Strange New Minds: How AI Learned to Talk and What It Means.
Michael Henry (MH) Tessler is a senior research scientist at Google DeepMind based in London, working on the safety and alignment of large language models (LLMs). He is deeply interested in the potential for LLMs to support and scale human deliberation in the service of strengthening democracy. Before becoming an AI researcher, Tessler studied human language use and language understanding as a postdoctoral researcher at MIT and as a Ph.D. student in the Department of Psychology at Stanford University. His research has been featured in journals such as Science, Nature Human Behaviour, and Psychological Review.
Bethanie Drake-Maples is a PhD student at the Stanford Institute for Human-Centered Artificial Intelligence, where she conducts research as part of the Stanford Autonomous Agents, AI+Ed, Human-Computer Interface, and Generative AI+Education Labs. Her doctoral research focuses on designing embodied artificial intelligence and human-machine interface systems for cognitive development and education. Previously, she managed technical teams at Google AI and Google X and co-founded an NLP startup. Bethanie is currently the Founder and CEO of Atypical AI and has taught in low-income communities across Uganda, India, and Mexico. A New Zealander, she enjoys sailing and reading science fiction when not working on AI and education.