The responsibility renaissance for AI builders
Why the quiet voices in AI research are getting louder about empathy and keeping humans as architects of their own systems

On Monday in San Francisco, we gathered for our TechCrunch Disrupt side event — a screening of our documentary short, presented by Building Humane Technology in collaboration with Amazon Science.
It wasn’t your typical Silicon Valley networking event. Instead, it represented something rare: a space where frontier AI researchers, academics, founders, and builders could explore the quieter undercurrents of technological development—those voices asking not just “can we build this?” but “should we, and how?”
The evening unfolded “more like a living room conversation with like-minded people,” according to moderator Danielle Perszyk, a cognitive scientist at Amazon’s AGI lab. The gathering was designed to surface the interdisciplinary thinking that challenges Silicon Valley’s dominant narratives around AI development.
From Isolation to Collective Agency
The Recognition of Community
The evening opened with a powerful acknowledgment from several panelists about discovering they weren’t alone in their concerns. Personally, I often hear builders saying, “I feel like an island. I thought I was the only one,” when in fact I see that we are many.
This sentiment echoed throughout the conversation, with Danielle noting that “over the past month, I’ve been talking with people actually all over the world at conferences. And this is an undercurrent. People are talking about these things. It’s just not making it out into the broader narratives.”
Empowerment vs. Replacement: The Central Tension
The Philosophy of Human-Centered Design
Deniz Birlikci, a researcher at Amazon’s AGI Lab, emphasized that “everything starts with empathy.” Drawing on his startup experience, he cautioned against a common trap: “You have to first not look at something and be like, ‘Hey, we can automate this by AI.’ You should talk to the people that’s in the industry and, like, really understand what their inputs and outputs are. Otherwise, you cannot really design something to help that human.”

Avoiding Human Mechanization
Phoebe Yao, founder and CEO of Pareto.ai, articulated a crucial distinction in AI system design: “When we’re designing these systems, where we put humans in these larger systems of intelligence really matters... I think it’s really important that we design these systems where humans remain the architects of the systems, rather than just a small part of the system. And I think the latter would be mechanizing people. It would replace people’s jobs. In the former, we would be finding ways that can multiply the impact of an individual.”

The Measurement Challenge
Yao identified a critical gap in the current discourse: “The piece around human empowerment, rather than replacement, is so important, but it’s so hard to measure. If we can’t find tools to measure, to really concretize this work, then we will continue to push this narrative of replacement, which is fundamentally disempowering and removes agency from humans.”
Amish Foundations for Humane AI
Joel Lehman, a research associate at the University of Oxford and an author, drew inspiration from Kevin Kelly’s work on the Amish approach to technology. “What would it mean if, as individuals or society, we were like 1% more Amish?” he asked. The Amish, he explained, “are deliberately choosing to adopt technologies into their community in a really kind of trial and error kind of way to make sure that technology actually is helping strengthen their community. So it’s a value-based way of adopting technology.”
Join Us: Tend the Fire
If any of this resonates with you, know that you’re not alone. The movement toward humane technology isn’t waiting for permission from the top—it’s emerging from builders like you who refuse to accept that “this is just how tech works.”
How you can get involved:
For Builders:
Contribute to our open-source frameworks on GitHub
Join our November 8th hackathon to help build the evaluation infrastructure for humane AI, either online or in-person in SF.
Test our humane system prompts and let us know how they affect the output
For Everyone:
Join our Slack community to connect with others building more humanely
Visit buildinghumanetech.com for resources and tools
Share your experiments, your successes, your challenges—because this movement grows stronger every time someone chooses to build differently
The future isn’t predetermined. It’s a choice we make, in every line of code, every design decision, every business model we create or challenge. The fire is already lit. The question is: will you help tend it?
What would you build if you could start from care instead of extraction? The tools are here, the community is waiting, and the moment is now.
Ready to hack with us? Join our November 8th event and help build the standard for ethical AI evaluation. Because when we build the measurement tools together, we build the future together.
Huge thanks to our gold sponsor, Amazon Science, for supporting our work.