I co-lead the Meaning Alignment Institute, where I work to align markets, democracies, and AIs with what's important to people. I'm a PI on the AGI Institutions and Full-Stack Alignment research programs, which include a network of 50+ researchers at universities and industry labs. My focus is on new, values-explicit democratic and post-market structures.
My philosophy work descends pretty clearly from that of Charles Taylor, Ruth Chang, Amartya Sen, and David Velleman.
It concerns the nature of values and norms, how they figure in the choices we make, and how they shape our retrospective assessments. That is, I work mainly in the theories of choice, action, and practical reason.
One contribution is an operationalization of "human values" and meaningful choice precise enough that mechanisms, metrics, and ML models can be built on top of it. Perhaps this will also lead to explainable moral reasoning in AI, and offer a path past mechanisms that optimize against us.
My deepest motivation is not just to contribute to philosophy, but to answer pressing questions like:
- Why are some human needs sensed/addressed by markets and bureaucracies, but not others?
- Is there a metric it's safe to maximize?
- What drives the modern trend towards atomization and social isolation?
I believe these are ultimately questions about what in human life is worth honoring, and that the answers are found in the details of how people make choices, and how they assess them. E.g.: What do people mean when they say an experience was meaningful (as opposed to pleasurable, important, etc) or a choice was wise (as opposed to effective, clever, etc)?
My origins are in HCI and in game design.
In HCI, I was lucky to learn from people like Alan Kay, Terry Winograd, and Bill Verplank at Interval Research, and from Howie Shrobe and Marvin Minsky at MIT, and more recently through conversations with Bret Victor and Rob Ochshorn.
My tactic of running social experiments through games and performance emerged from study with Christian Wolff (participatory music) and Peter Parnell (playwriting) at Dartmouth, and then from various improvisational scores with Nancy Stark Smith, Mike Vargas, Ruth Zaporah, and others. I had the great fortune to work alongside Albert Kong and Catherine Herdlick on the real-world games festival Come Out and Play.
My concern with meaning and metrics has its origins in working with Casey Fenton at CouchSurfing, where I developed the meaning-based organizational metrics that guided the company. I then co-founded the Center for Humane Technology with Tristan Harris, and coined the term “Time Well Spent” for a family of metrics adopted by teams at Facebook, Google, and Apple.
I then started an online school and wrote a textbook on Values-Based Design, and finally launched a nonprofit to bring about a future where wise AIs and humans collaborate to help people live well.
I continue to benefit from working alongside Ellie Hain, Oliver Klingefjord, and Ryan Lowe, and from many conversations with Anne Selke.