Kickstarting the Humane Tech Playbook
Metrics, frameworks, and interventions for an ethical tomorrow
On May 14, our roundtable of founders, technologists, and passionate builders gathered for a hands-on, honest conversation: What does it mean to build technology that is truly humane—serving well-being, connection, and integrity—in a world that races ever faster into the future?
We shared stories, surfaced challenges, and began constructing an open-source, human tech playbook to guide our products and daily building practices.
Real Stories, Real Impact
To ground our conversation, we shared personal transformative moments with tech. A few that stayed with me:
Empowerment through AI: Several people described how tools like ChatGPT helped non-native English speakers refine writing, making technology an accessible partner in expressing complex thoughts.
Storytelling for clarity: A founder in well-being shared how using AI to narrativize a complicated project enabled holistic understanding, turning tangled tasks into meaningful narratives.
Simple tech, human enrichment: From the tactile wonder of the first iPhone touch to the personal assist of a health tracker keeping goals in focus, these human moments rooted our ambitions for what tech can be.
Unpacking the Problem(s)
Discussion quickly turned to what’s not working: why do so many feel disconnection, overwhelm, or outright harm from the tools around us?
Systems, Incentives & Ethics: Citing Donella Meadows’ “leverage points”, we mapped the structure of current “big tech”—incentives to maximize engagement (often at odds with well-being), business models that reward addictive design, and the challenges even insiders face trying to shift these forces.
Toward an Actionable Humane Tech Playbook
Our most generative discussions centered around creating an open-source playbook for building humane technology—one that addresses every function and role:
How UX designers can craft interfaces that respect attention
How engineers can explicitly build values into codebases
How product managers can prioritize features based on human flourishing
How content writers can craft language that empowers rather than manipulates
Key strategies, frameworks, and prompts from the evening:
1. End-User Metrics for Humane Design
We began piloting a basic “humane tech” scoring rubric, which we applied to companion bots, asking:
Do I feel cared for?
Am I present (mentally, emotionally)?
Do I feel more fulfilled?
Do I feel connected (to myself, others, or the world)?
Participants used these questions to rate their experience, uncovering gaps and sparking ideas for more nuanced, user-centered metrics. Weaknesses surfaced—bots were often “too shallow” or sycophantic, highlighting concrete areas for improvement. For a deep dive on companion bots, read my post, “How a world of companion bots could erode our conscience.”
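To make the rubric concrete, here is a minimal sketch of how the four questions could be turned into a per-session score. The question wording comes from the list above; everything else (the 1-to-5 scale, equal weighting, and the function name `humane_score`) is an illustrative assumption, not an agreed standard from the evening.

```python
# Illustrative sketch: averaging 1-5 self-ratings on the four
# humane-tech questions into one session score. The scale and
# equal weighting are assumptions for demonstration only.

QUESTIONS = [
    "Do I feel cared for?",
    "Am I present (mentally, emotionally)?",
    "Do I feel more fulfilled?",
    "Do I feel connected (to myself, others, or the world)?",
]

def humane_score(ratings: dict) -> float:
    """Average the 1-5 ratings across the four questions."""
    for q in QUESTIONS:
        if not 1 <= ratings.get(q, 0) <= 5:
            raise ValueError(f"Missing or out-of-range rating for: {q}")
    return sum(ratings[q] for q in QUESTIONS) / len(QUESTIONS)

# Example session: a participant's ratings after chatting with a bot.
session = dict(zip(QUESTIONS, [4, 3, 2, 3]))
print(humane_score(session))  # 3.0
```

Even a crude average like this makes gaps visible: a bot can score high on “cared for” while scoring low on “fulfilled,” which is exactly the shallow-or-sycophantic pattern participants noticed.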
2. Externalities Framework
Borrowing a framework from the Center for Humane Technology, we discussed how companies might proactively mitigate harms (like the well-publicized tragic outcomes of companion bots for teens). Questions include:
Who is most at risk from this product? (Teens, the lonely, the marginalized—how do we protect them in design?)
What age restrictions and intervention mechanisms are actually in place?
How do products signal their artificiality and provide help when users are in distress (e.g., escalating to real human/emergency support)?
3. Ethics of Training Data and System Design
An actionable takeaway: Examine and curate training data for bias, harmful patterns, and dual-use risks. Don’t just accept “garbage in, garbage out.”
Technical strategies included:
Bucketing outputs for review, not just trusting model output at face value.
Engineering for objectivity (but with awareness of “context window creep”—the longer the interaction, the more a model can drift into simply confirming the user’s worldview).
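As one way to picture the “bucketing” strategy, here is a hedged sketch of routing model outputs into review queues rather than trusting them at face value. The bucket names, keyword heuristics, and the 20-turn threshold for drift audits are all assumptions for illustration; a real system would use classifiers and tuned thresholds.

```python
# Illustrative sketch: route each model output into a review bucket
# instead of accepting it at face value. Heuristics and thresholds
# below are placeholders, not a production policy.

SYCOPHANCY_PHRASES = (
    "you're absolutely right",
    "great question",
    "what a brilliant",
)

def bucket_output(text: str, turn_count: int) -> str:
    lowered = text.lower()
    # Flattery-heavy replies go to a sycophancy review queue.
    if any(p in lowered for p in SYCOPHANCY_PHRASES):
        return "review:sycophancy"
    # Long sessions are sampled for "context window creep" audits,
    # since models can drift toward confirming the user's worldview.
    if turn_count > 20:
        return "review:long-session-drift"
    return "ok"

print(bucket_output("You're absolutely right, as always!", 3))
# review:sycophancy
```

The point is the shape of the pipeline, not the heuristics: outputs carry a label, and labeled buckets get human or automated review before anyone treats the model’s answer as trustworthy.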
4. Digital Wellbeing Interventions
Inspired by Julia Cameron’s “media fasting” and Jonathan Haidt’s research (“The Anxious Generation”), we brainstormed digital detox features and collective agreements for product builders: rather than relying solely on users to self-police, build breaks, transparency, and reflective moments into the software itself.
5. First Principles & Problem Validation
Before building, participants resolved to always ask: What real (human) problem am I solving? Who is actually helped—or harmed—by this solution? Drawing on Neil Postman’s six-question framework, this pushed us to validate assumptions and priorities at the outset.
A Call for Coalition
We move further and faster together. With many aligned communities doing parallel work—Conscious Tech Collective, Effective Altruism, Center for Humane Technology, All Tech is Human—unifying efforts and sharing resources is critical to creating systemic change.
If you have expertise or passion for value-sensitive design, humane UX, engineering with ethics, or community health metrics, please contribute as we grow this open-source playbook.
With Gratitude
A huge thank you to UpHonest Capital for sponsoring our gathering, enabling authentic discussion and action. Your support makes coalition, community, and new paradigms possible.
And to all who shared their stories, knowledge, and hopes: you model the humility, curiosity, and constructive spirit required for long-term change.
What’s Next
Contribute to the Humane Tech Playbook: Fill out this form to participate.
Join our community: Connect with other humane tech enthusiasts.
Upcoming events: Attend our Humane Tech Hackathon, an in-person Meetup, or an online workshop.