I co-lead the Meaning Alignment Institute, where I work to align markets, democracies, and AIs with what's important to people. There, my focus is on new, values-explicit democratic and post-market structures. I also curate a database of what's good about life, and field-build towards a paradigm shift in social choice, AI alignment, and mechanism design.
I am also helping to start the Turtle Mafia, a support group for researchers.
My philosophy work descends pretty clearly from that of Charles Taylor, Ruth Chang, Amartya Sen, and David Velleman.
It concerns the nature of values and norms, how they shape the choices we make, and how they figure in our retrospective assessments. That is, I work mainly in the theories of choice, action, and practical reason.
My biggest contribution is a pair of definitions, for "human values" and "meaningful choices", that are precise enough to support surveys, metrics, aligned ML models, new democratic mechanisms, etc. Perhaps this will also lead to explainable moral learning in AI, and offer a path past mechanisms that optimize for engagement and revealed preference rather than underlying values.
My deepest motivation is not just to contribute to philosophy, but to answer pressing questions like:
- Why are some human needs sensed/addressed by markets and bureaucracies, but not others?
- Is there a metric it's safe to maximize?
- What drives the modern trend towards atomization and social isolation?
I believe these are ultimately questions about what in human life is worth honoring, and that the answers are found in the details of how people make choices, and how they assess them. E.g.: What do people mean when they say an experience was meaningful (as opposed to pleasurable, important, etc) or a choice was wise (as opposed to effective, clever, etc)?
Read more at MAI's Related Academic Work page.
My origins are in HCI and in game design.
In HCI, I was lucky to learn from people like Alan Kay, Terry Winograd, and Bill Verplank at Interval Research, and from Howie Shrobe and Marvin Minsky at MIT — and more recently through conversations with Bret Victor and Rob Ochshorn.
My tactic of running social experiments through games and performance emerged from study with Christian Wolff (participatory music) and Peter Parnell (playwriting) at Dartmouth, and then through various improvisational scores with Nancy Stark Smith, Mike Vargas, Ruth Zaporah, and others. I had the great fortune to work alongside Albert Kong and Catherine Herdlick on the real-world games festival Come Out and Play.
My concern with meaning and metrics has its origins in working with Casey Fenton at CouchSurfing, where I developed the meaning-based organizational metrics which guided the company. I then co-founded the Center for Humane Technology with Tristan Harris, and coined the term "Time Well Spent" for a family of metrics adopted by teams at Facebook, Google, and Apple.
I then started an online school and wrote a textbook on Values-Based Design, and finally launched a nonprofit to bring about a future where wise AIs and humans collaborate to help people live well.
I continue to benefit from working alongside Ellie Hain, Oliver Klingefjord, and Ryan Lowe, and from many conversations with Anne Selke.