How to minimize the costs of growth at all costs
Use this externality framework to minimize harm from whatever you're building
How would you say we’re doing as a society? Take a breath. Hold this litmus test in your mind: how are our children? Our teens? Are they finding their way? Becoming healthy, happy, and well-adjusted?
As Soren Gordhamer put it at Wisdom 2.0 & AI in San Francisco this October,
“[To understand] whether we’re moving in the right direction, creating a healthy, wise, compassionate society, we can look to the well-being of our children and our teenagers to determine, are we doing a good job? And the data would suggest we're doing a horrible job.”
Let’s acknowledge the state that we’re in, vis-a-vis technology:
phones as a wall between us, inhibiting connection
As our lives have become increasingly digital, humanity is at risk. Social media algorithms have already shown us how easy it is to lose ourselves when we interact online. With the pace of change speeding up, we will face a reckoning if we don’t wake up to this pain.
So, we have to ask ourselves: who are we as a people? What facets of humanity—kindness, connection, wisdom, and compassion—do we want to retain? Answering this question intersects with how I define humane technology: retaining our humanity with technology, not in spite of it.
I wish I could tell you the solution was well-defined. It’s not. Yet that is our opportunity: to determine the many paths to minimizing harm together.
There is too much bad news to justify complacency.
There is too much good news to justify despair.
Let’s be clear: minimizing harm is just one piece of the puzzle. For a broader view of how we can enact change, see Donella Meadows’ leverage points to intervene in a system.
We have big tech on the left side of the fulcrum, with leverage points to intervene on the right. If you’re a product designer, you could start by exploring design changes, using the humane design guide to improve UI/UX as we did at our meetup this past August. While this won’t change the world, it will change your users’ experience, and, perhaps, their behavior. If you’re company leadership, you could change the business model. Everyone can attempt a paradigm shift — it has the most impact and tends to emerge from a series of concerted actions.
At our September meetup, we focused on the next two leverage points: internal governance and external regulation. To do so, we worked through this externality framework to protect users and society from short-term and long-term consequences.
Insistence on growth
At early-stage startups, the pressure to survive is louder than the beat of any drum. As Y Combinator founder Paul Graham has written, in the beginning startups are default dead. Assuming you don’t become part of Silicon Valley’s cemetery of failed dreams, as your company stabilizes, you need to plan for impact, as do “default-alive” companies (see MANGA and stalwarts like IBM).
Why? Because tech’s ability to scale is the alchemy that turns ideas into impact—just look at the rapid pace of generative AI. While our products have positive, intended consequences, they also have negative, unintended consequences, otherwise known as externalities. As builders, we have the responsibility to plan for and minimize the impact of externalities.
See the externality rubric in action
Use the following externality rubric, from the Foundations in Humane Technology course, to identify the impact of externalities from a feature or product:
What’s the scale of the externality?
How widespread is it?
What are the stakes of the impact?
Is it reversible or irreversible?
How vulnerable is the group or system impacted?
How exposed or resilient is it?
What are the long-term costs?
If left unaddressed, who will bear the costs?
How costly will it be?
Paths to harm reduction
How might less of this externality be created?
For instance, car accidents are an externality for automakers. While many factors lead to car accidents (distracted drivers, poor weather conditions, etc.), companies must plan for harm and build ways to minimize it. As such, automakers have rolled out safety features over the years, from seat belts (1968) to anti-lock brakes (2012)—improvements that save lives. Side note: while we take seat belts as a given, it took substantial pressure on automakers to provide them and for state law to require them. What’s more, lawmakers faced a major backlash during the rollout for “limiting personal freedom.” Paradigm shifts don’t tend to go down smoothly.
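To make the rubric concrete, here is one way a product team might encode it as a lightweight checklist, filled in with the car-accident example above. This is a hypothetical sketch for illustration; the class and field names are mine, not part of the Foundations in Humane Technology course:

```python
from dataclasses import dataclass, field

@dataclass
class ExternalityAssessment:
    """One filled-in copy of the externality rubric for a feature or product."""
    externality: str
    scale: str              # How widespread is it?
    reversibility: str      # Reversible, irreversible, or a mix?
    vulnerability: str      # How exposed or resilient is the impacted group/system?
    long_term_costs: list[str] = field(default_factory=list)
    cost_bearers: list[str] = field(default_factory=list)      # Who bears the costs?
    harm_reduction_paths: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True once every rubric question has at least one answer."""
        return all([self.scale, self.reversibility, self.vulnerability,
                    self.long_term_costs, self.cost_bearers,
                    self.harm_reduction_paths])

# The car-accident example, expressed in the rubric's terms
assessment = ExternalityAssessment(
    externality="Car accidents",
    scale="Anyone on or near roads",
    reversibility="Largely irreversible (injury, loss of life)",
    vulnerability="Passengers and pedestrians are highly exposed",
    long_term_costs=["Lives lost", "Healthcare costs"],
    cost_bearers=["Drivers", "Society", "Insurers"],
    harm_reduction_paths=["Seat belts", "Anti-lock brakes"],
)
```

The point of `is_complete` is simply that a team can’t ship the assessment with a rubric question left blank — a forcing function, not a scoring system.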
In our meetup, we explored externalities for:
TikTok
Screen time for kids & teens
ChatGPT
Here’s how we focused our attention on TikTok:
Externality: Misinformation
What’s the scale of the externality?
How widespread is it? Global; anyone who uses the app or gets their information from someone who is using the app
What are the stakes of the impact?
Is it reversible or irreversible?
Irreversible:
Regarding the outcome of elections
Potentially Reversible:
Public opinion, communication skills, or physical well-being
How vulnerable is the group or system impacted?
How exposed or resilient is it?
The health of democracy is at risk; democracy itself is proving less resilient
Our brains tend toward cognitive and confirmation bias, making us more likely to accept misinformation as fact
What are the long-term costs?
Inability to disagree peaceably
Stunted critical thinking
Fragile democratic societies
Decreased health and happiness in poorly governed nations
If left unaddressed, who will bear the costs?
Society at large; we’re becoming so fractious that we can’t have basic conversations
Democracy
How costly will it be?
The US has dropped out of the top 20 countries in the World Happiness Report; perhaps we’ll fall even more
How do you measure the loss of a resilient democracy?
Paths to harm reduction
How might less of this externality be created?
External regulation: Require age-appropriate digital citizenship courses in K-12 education
Internal governance (which may require external regulation to compel):
Within TikTok, institute “failure KPIs” for misinformation
A “failure KPI” measures externalities, as opposed to only tracking engagement
Modify the algorithms to remove misinformation from the platform
Present different perspectives and viewpoints on an issue; prevent rabbit-holes based on a single belief system
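One way to picture a “failure KPI”: a metric tracked alongside engagement metrics, with its own threshold that triggers intervention when breached. The sketch below is illustrative only — the metric names and thresholds are hypothetical, not anything TikTok actually tracks:

```python
from dataclasses import dataclass

@dataclass
class FailureKPI:
    """A metric that tracks an externality rather than engagement."""
    name: str
    value: float      # current measured level
    threshold: float  # level above which the team must intervene

    def breached(self) -> bool:
        return self.value > self.threshold

# Hypothetical misinformation dashboard, reviewed alongside engagement numbers
kpis = [
    FailureKPI("misinfo_views_per_1k_impressions", value=4.2, threshold=1.0),
    FailureKPI("misinfo_reports_unresolved_after_24h_pct", value=0.8, threshold=2.0),
]
breaches = [k.name for k in kpis if k.breached()]
```

The design choice that matters here is symmetry: failure KPIs sit on the same dashboard as growth KPIs, so a breach is as visible to leadership as a dip in engagement.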

Let’s dive into screen time for kids and teens:
Externality: Screen addiction
What’s the scale of the externality?
How widespread is it? Anyone under 18 who uses screens/smartphones
What are the stakes of the impact?
Is it reversible or irreversible?
Irreversible:
Macular degeneration
Potentially Reversible:
Self-harm
Bullying
Disruption of circadian rhythms
Behavioral issues
Sedentary lifestyle
Reduced capacity to connect with others in person
Low self-esteem
Limited attention span
Unrealistic views about sex
Porn and gaming addictions
Increased selfishness
How vulnerable is the group or system impacted?
How exposed or resilient is it?
Child and teen brains are developing, making them far more vulnerable than adults
What are the long-term costs?
A more fragmented society
Lifetime prescriptions for anti-anxiety meds
Less social resiliency
Poorer eyesight
Decreased physical and emotional health: obesity, depression, etc
Decreased sexual intimacy
Insomnia
If left unaddressed, who will bear the costs?
Kids, teens, teachers and their families
Society
Healthcare system
How costly will it be?
It could negatively impact the social determinants of health
Paths to harm reduction
How might less of this externality be created?
External regulation
Build blue-light filters into every screen
Enable a child-safe firewall option on every router by default
Improve sex education in schools
Address what kids are seeing on their phones
Require age-appropriate digital citizenship courses in K-12 education
Internal governance
Automatic screen time cut off per age of user, programmed into devices
Institute “failure KPIs” for each measurable long-term cost
Batch notifications, as Apple has recently done
While each harm reduction proposal may not be feasible in the immediate future, some of them are already in play. As humane technologists, we adopt the mindset of “start by starting,” because the world needs us. And we need us, too.
We use Big Tech as the backdrop because it’s easier to see the issues in products that have already scaled. But the real question is this: how will you bring these principles alive in your company?
As famed systems thinker Donella Meadows asserts, the way to set a paradigm shift in motion is to show all the ways that the current paradigm isn’t working — and to chart an alternate path, one that we can act on in our own lives. That’s what we do, every month, at my meetup in the Bay Area. Come join us!
Humane technology, as defined by the Center for Humane Technology:
Respects human nature: How can technology work in harmony with the vulnerabilities and biases with which all humans have evolved?
Minimizes harmful consequences: What economic forces affect products, and how can product teams help address and reduce harmful externalities?
Centers on values: How do the conditions of our lives shape our values? How might product development be informed by metrics but centered on values?
Creates shared understanding: How can technology engender the trust and understanding we need to solve complex problems together?
Supports fairness and justice: How can technology enable a more just world, and practically integrate voices of people who experience harm?
Helps people thrive: How can products help people act in alignment with their deeper intentions, rather than optimizing for engagement?