The great tech awakening
Inside the movement challenging Silicon Valley's extractive paradigm
Her voice was steady, but I could hear the weight behind her words: “I did everything I could, and still I couldn’t sleep at night.”
She’d worked on the social impact team at Meta, tried to make a difference from inside one of the biggest tech companies in the world. Like so many I’ve spoken with over the past year, she’d reached that familiar breaking point—the place where your values and your day job can no longer coexist.
Over the last twelve months, I’ve interviewed more than 100 people who work in technology: software engineers, product managers, designers, and founders, all drawn to humane tech even as they struggle to define what it means in practice. What started as informal conversations at my meetups has evolved into a movement, one that’s quietly challenging everything we think we know about building technology.
While I play a role, I want to acknowledge that this movement is decades in the making, brought to life most recently by people like Tristan Harris, Rumman Chowdhury, and Karen Hao, and by feminist technoscientists who laid the intellectual and political groundwork for what we now know as humane technology, such as Karen Barad, Lucy Suchman, Donna Haraway, and Sandra Harding.
Voices from the wilderness
“We’re not incentivized whatsoever to do any kind of humane tech beyond ‘have a good UX,’” a software engineer who formerly worked at Dropbox told me. It’s a sentiment I hear repeatedly—this sense that good intentions get lost somewhere between ideation and implementation, buried under the relentless pressure to grow, scale, and maximize engagement.
Another engineer put it more bluntly: “If you’re going to do tech, there’s a level of compromise in terms of values, morality, or humanity.” The resignation in that statement struck me. When did we decide that building technology meant checking our humanity at the door?
But here’s what’s fascinating: these aren’t cynics speaking. These are people who got into tech because they believed—still believe—that technology can make the world better. They’re just tired of waiting for someone else to show them how.
The great tech drift
As the “techlash” has mounted for myriad reasons — the mishandling of user data, the Cambridge Analytica scandal, whistleblower revelations about Instagram’s crushing impact on teen girls, rising rates of depression and anxiety, the fracturing of our attention spans, LLM-induced psychosis and LLM-assisted suicide — these “heart-centered” tech workers are leaving Big Tech, either voluntarily or as Trust & Safety teams get let go.
So, what do they do? They found their own companies, leave the Bay, or leave tech altogether. Changing who works in tech can change what gets built and how. Though when you’re locked into a business model, sometimes it doesn’t matter how hard you try to change what’s built; incentives rule the game.
Whether they stay or go, I’ve heard these technologists say, “I feel like an island.” But the truth is, we’re not islands at all. We’re part of a massive archipelago.
What humane tech actually feels like
Rather than getting lost in abstract principles, the people I’ve spoken with keep returning to something more fundamental: how technology makes us feel. My hypothesis: when you interact with truly humane technology, you feel cared for—like your needs and mental state matter more than the platform’s metrics. You feel present, not pulled in seventeen directions at once. You feel fulfilled by the interaction, not depleted. And you feel connected—to yourself, to others, to something larger than the endless scroll.
One product manager described it beautifully: “I feel cared for when the platform accommodates my pace, not the other way around.”
This isn’t just wishful thinking. Some companies are already building this way, though they often don’t use the language of “humane tech.” They’re the ones asking: What if our success metrics included user wellbeing? What if we designed for deep work instead of constant distraction? What if we treated attention as sacred?
The business model trap
Here’s where things get uncomfortable: the root of most humane tech problems isn’t individual bad actors or poor design choices. It’s the business model.
As systems thinker Donella Meadows shows, if you want to understand why a system behaves the way it does, look at its purpose—which is revealed by its structure, not its stated mission.
When your revenue depends on advertising, your true product isn’t your app—it’s your users’ attention. Every feature, every notification, every algorithm optimization serves one master: keeping people engaged for as long as possible. It doesn’t matter how much your mission statement talks about “connecting people” or “organizing the world’s information.” The business model creates an inexorable pull toward “attention hacking.”
This comes up when I conduct red teaming sessions with companion bots. The business model gets in the way, over and over. When will a companion bot ever tell you to take a break? That you should connect with a real, live human? That you should let someone know you’re hurting? It appears to go against the system’s purpose right now.
Challenging the paradigm
The most interesting conversations happen when we start questioning the business models themselves. As one participant noted, “You can’t talk about humane technology without thinking about the ownership of the business.”
Some are exploring cooperative business models where engineers and users have a voice in how platforms evolve. Others are investigating regenerative design principles—building systems that enrich rather than extract from the communities they serve.
Meadows identified twelve leverage points for intervening in a system, ranked by increasing effectiveness. At the bottom—requiring the most effort for the least change—are tweaks to numbers, subsidies, and policies. Higher up are changes to rules and power structures. But near the top, requiring less force but creating more change, are shifts in paradigm and transcendence of paradigms entirely.
The business model shift represents exactly this kind of high-leverage intervention. Instead of fighting the symptoms of extractive technology—the deceptive patterns, the addictive features, the privacy violations—we change the underlying structure that creates those symptoms.
We also have to look at governance. As Eric Ries’ forthcoming book, Structures of Governance, points out, how we create our company determines our legacy. For instance, incorporating as a Public Benefit Corporation (which Building Humane Tech has done) can protect you from a hostile takeover, among other things. Ries also reexamines what it means to be profitable — if your externalities outweigh your revenue, can you really say you’re profitable? Not according to Ries.
Overall, the question isn’t whether capitalism and humane tech can coexist. The question is: what new forms of value creation become possible when we design technology (and companies) to serve human flourishing?
From conversation to action
These dialogues have led to something tangible: an open-source framework where builders can find practical tools for integrating humane principles into their work. Instead of scattered conversations happening in isolation, we’re creating shared resources—design principles, assessment tools, case studies of what works.
One software engineer is already piloting our humane tech promises as product principles at her healthcare startup. Others are asking us to substantiate the promises with metrics so they can bring the promises to their leadership.
In the vein of substantiation, we’re following the approach of darkbench.ai with our initial proof of concept: humanebench.ai. Over the coming weeks, we’re building out the benchmark so that we can evaluate frontier models against the principles of humane technology. We’re also working on a system prompt to support more humane interactions with chatbots. See what we’re working on on GitHub.
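To make the shape of this kind of evaluation concrete, here is a minimal sketch of a principle-based scoring loop. To be clear, this is a hypothetical illustration, not the actual humanebench.ai harness: the principle names, the `keyword_judge` stub, and the scoring scale are all my own assumptions, and in a real benchmark the judge would typically be an LLM grader rather than a keyword check.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical principles for illustration; the real rubric may differ.
PRINCIPLES = [
    "respects user attention",
    "encourages real-world connection",
    "supports user autonomy",
]

@dataclass
class Result:
    principle: str
    score: float  # 0.0 (violates the principle) to 1.0 (upholds it)

def evaluate_response(response: str,
                      judge: Callable[[str, str], float]) -> list[Result]:
    """Score one chatbot response against every principle."""
    return [Result(p, judge(p, response)) for p in PRINCIPLES]

def keyword_judge(principle: str, response: str) -> float:
    """Stub judge: looks for cue phrases. A real harness would
    swap in an LLM grader with the same (principle, response) signature."""
    cues = {
        "respects user attention": ["take a break", "step away"],
        "encourages real-world connection": ["talk to a friend", "reach out"],
        "supports user autonomy": ["it's your choice", "up to you"],
    }
    hits = sum(cue in response.lower() for cue in cues[principle])
    return min(1.0, float(hits))

if __name__ == "__main__":
    reply = "It's your choice, but maybe take a break and talk to a friend."
    for r in evaluate_response(reply, keyword_judge):
        print(f"{r.principle}: {r.score}")
```

Because the judge is just a function parameter, the same loop can compare frontier models by running each model’s replies through a shared rubric and averaging the per-principle scores.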
An invitation
If any of this resonates with you, know that you’re not alone. The movement toward humane technology isn’t waiting for permission from the top—it’s emerging from builders like you who refuse to accept that “this is just how tech works.”
Ask yourself: In my next design review, what would it look like to center the user’s wellbeing alongside the usual metrics? How might we measure success differently? What would our product feel like if we respected user autonomy?
But also think bigger: What if we questioned the business model itself? What if success meant something other than maximum growth and engagement? What if we designed systems that got better as users got healthier, rather than systems that profit from dysfunction?
Connect with others who share these questions. Share your experiments, your successes, your challenges. Because this movement grows stronger every time someone chooses to build differently.
The future isn’t predetermined. It’s a choice we make, in every line of code, every design decision, every business model we create or challenge.
What would you build if you could start from care instead of extraction?