1. Respect Human Nature: Design products that work in harmony with human cognitive biases and limitations rather than exploiting them for engagement or profit.
Examples:
- Removing "like" counts to counteract the tendency toward social comparison and insecurity.
- Implementing features like "focus mode" or "batch notifications" that limit alerts during certain hours, helping users concentrate without distraction.
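The "batch notifications" idea above can be sketched in a few lines: instead of pushing every alert immediately, queue alerts and release them only at scheduled delivery times. This is a minimal illustrative sketch, not any platform's real API; the class name and delivery windows are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, time

@dataclass
class NotificationBatcher:
    """Holds notifications and releases them only at set times of day."""
    # Hypothetical delivery windows (9:00 and 17:00) - purely illustrative.
    delivery_windows: list = field(default_factory=lambda: [time(9, 0), time(17, 0)])
    queue: list = field(default_factory=list)

    def notify(self, message: str, now: datetime) -> list:
        """Queue the message; flush the batch only inside a delivery window."""
        self.queue.append(message)
        if any(now.hour == w.hour and now.minute == w.minute
               for w in self.delivery_windows):
            delivered, self.queue = self.queue, []
            return delivered
        return []  # held until the next scheduled batch
```

The point of the design is that interruptions arrive on the user's schedule, not the sender's.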
2. Prioritize Values Over Metrics: Be metrics-informed but values-driven. Center product decisions on core values and user well-being rather than solely optimizing for engagement or growth metrics.
Example: Choosing not to implement addictive dark patterns, such as flashing notifications for unsolicited content, in favor of promoting user autonomy and well-being.
3. Minimize Harmful Externalities: Proactively identify and mitigate potential negative consequences of your product on individuals, communities, and the environment. Use tools like externality rubrics to assess and address potential harms.
Examples: Introducing features that limit screen time, such as:
- replacing infinite scroll with friction that introduces lag time, so users can make a conscious choice about whether to continue scrolling;
- reminders to take breaks; or
- digital detox periods to promote better mental and physical health.
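The scroll-friction idea above can be sketched concretely: after a set number of continuously loaded pages, introduce a deliberate lag and ask the user to confirm before loading more. This is a hypothetical sketch; the class name, page threshold, and delay are illustrative assumptions, not a real feed implementation.

```python
import time

class FrictionFeed:
    """Feed that adds deliberate friction after continuous scrolling."""

    def __init__(self, pages_before_pause: int = 3, lag_seconds: float = 2.0):
        self.pages_before_pause = pages_before_pause  # assumed threshold
        self.lag_seconds = lag_seconds                # assumed lag duration
        self.pages_loaded = 0

    def load_next_page(self, user_confirms=lambda: True) -> bool:
        """Load a page; past the threshold, add lag and require confirmation."""
        if self.pages_loaded >= self.pages_before_pause:
            time.sleep(self.lag_seconds)  # deliberate lag before the prompt
            if not user_confirms():       # user consciously chose to stop
                return False
            self.pages_loaded = 0         # reset after a conscious choice
        self.pages_loaded += 1
        return True
```

The lag is the feature, not a bug: it converts an automatic gesture into a moment of choice.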
4. Foster Intentionality: Create features that help users reflect on their intentions and make conscious choices, rather than promoting compulsive or addictive behaviors.
Example: Providing a customizable dashboard that allows users to set goals and track their usage patterns, encouraging them to reflect on their tech habits and adjust for better productivity.
5. Protect and Empower Attention: Design interfaces and interactions that respect users' attention as a finite, valuable resource and enhance their ability to focus on what matters to them.
Example: Designing minimalist interfaces that remove unnecessary elements and focus on core tasks, helping users to concentrate on primary objectives without distractions.
6. Promote Shared Understanding: Build features that bridge divides, facilitate constructive dialogue, and help users gain accurate perspectives on complex issues.
Example: Incorporating collaborative tools that allow users from diverse backgrounds to engage in joint projects, share perspectives, and solve problems together.
7. Support Fairness and Justice: Actively work to identify and address potential biases in your product that could exacerbate societal inequities or unfairly impact vulnerable groups. Include diverse perspectives in the design process.
Example: Including accessibility features and ensuring content is equally available and adaptable for users of all abilities.
8. Enable Human Thriving: Go beyond surface-level engagement to design experiences that contribute to users' long-term well-being, personal growth, and life satisfaction.
Example: Offering learning modules or features that encourage skill development, enhance knowledge, or support emotional well-being, thereby contributing to users' personal growth.
9. Build Trust Through Transparency: Implement clear accountability measures and be willing to openly discuss product decisions, especially with stakeholders most impacted by them.
Example: Implementing clear privacy settings that users can easily understand and control, and providing transparent reports about how their data is used and protected.
10. Design for Complexity and Change: Recognize that technology exists within complex systems. Be open to adapting products as societal values and user needs evolve. Embrace uncertainty and practice epistemic humility in your approach.
Example: Rolling out adaptive algorithms that adjust recommendations based on evolving user preferences and societal changes, while allowing users to provide feedback on their effectiveness and relevance.
Who am I and why did I create this?
When I saw the GenAI wave coming in early 2022, I realized I wanted — and needed — to be a part of it. AI is shaping our present and will certainly define our future. How can we get it right? I co-founded an AI startup to find out.
To learn more, I took the Foundations in Humane Technology course and created a humane tech meetup in the Bay Area so we can all learn the principles and endeavor to bring them into our orgs.
I consider myself a humane technologist and I am seeking to co-define the frameworks for this nascent movement. To create these draft product principles, I uploaded the entire Foundations course into Storytell.ai and queried it.
Which product principle can you take action on? Let me know.