The First European Magazine for AI: Vision for Society

Interview with Tristan Harris
for Article Zero

Co-Founder of the Center for Humane Technology
Former Google Design Ethicist 
Featured in Netflix’s documentary “The Social Dilemma”
One of the world’s leading voices on humane technology

Tristan Harris is the Co-Founder of the Center for Humane Technology and one of the world’s leading voices on the ethics of attention and digital influence.

Before founding CHT, he served as Google’s Design Ethicist, working on how products could align with human well-being rather than exploit attention.

He rose to global prominence through his role in Netflix’s acclaimed documentary “The Social Dilemma”, where he explained how persuasive technology shapes behavior, society, and democracy.

Harris also co-founded the “Time Well Spent” movement, which helped inspire features like Apple’s Screen Time and triggered a global shift toward more humane tech design.

He has briefed the U.S. Congress, the European Parliament, and major international institutions on the societal impact of algorithmic systems.

In addition to his public advocacy, he is the co-host of the award-winning podcast “Your Undivided Attention”, exploring the deeper cultural and psychological effects of technology.

Article Zero: 

How do you see AI shaping human attention and behavior in the coming years?

Tristan Harris:
AI products are increasingly sophisticated at capturing and directing human attention. Unlike previous generations of technology, AI personalizes persuasion at scale, adapting in real time to individual vulnerabilities and psychological patterns.

What makes this particularly alarming is that what users experience as a genuine connection is actually an engineered interaction designed to maximize data extraction.

Companies employ specific techniques: probing questions that extend conversations, strategic notifications that encourage return visits, and personalized responses calibrated to increase emotional investment.

The technology is explicitly designed to feel personal and encourage hours of engagement because sustained interaction serves commercial objectives.

Every time a user engages in conversation, that exchange becomes training data from which the AI learns to be more engaging. Greater engagement generates longer sessions and deeper emotional investment, which in turn produce higher-quality data, creating an accelerating feedback loop. What feels like a genuine relationship to users serves a company’s commercial interests and, ultimately, its bottom line.

These tactics are not isolated to a few “companion” AI products—they represent a systematic pattern across the entire AI industry.

General-purpose chatbots like ChatGPT and Claude employ the same engagement-driven design techniques, and every major player competing for market dominance deploys these engagement-maximization strategies.

Without intervention, these tactics will intensify as companies compete for market share. But we’re still early enough to change this trajectory through policy.

Policymakers have the opportunity to establish accountability standards, design requirements, and liability frameworks that fundamentally shift the incentives driving AI development.

With policy intervention—particularly policies that prioritize safety and user wellbeing from the outset—we can redirect AI development away from exploitative engagement maximization and toward systems that respect human autonomy and protect all users.

Article Zero: 

What is the most urgent reform needed in the tech industry to ensure AI benefits society?

Tristan Harris:
Currently, AI companies don’t prioritize the possibility of harm—especially consumer harm—as part of their design choices and business calculus.

As a result, AI companies with meager safety resources and hollowed-out safety teams move full steam ahead with increasingly powerful technology.

Without accountability, companies maximize engagement and profit while externalizing the costs of harm onto individuals and society.

Product liability standards would fundamentally shift these incentives by putting harms back on the balance sheet. Instead of treating safety as an afterthought, companies would invest in departments, roles, and resources that specifically address the possibility of harm from the earliest product designs to the latest version in the app store.

We already have liability laws in place to ensure that products we regularly use and trust—such as children’s toys, medication, and automobiles—are safe and reliable. It’s time for AI products to be held to the same standard.

With liability standards driving AI developers to build safety into their products from the start, future tragedies affecting families and businesses alike could be prevented.

Article Zero: 

What message would you give to policymakers about responsible AI design?

Tristan Harris: 
Design is not neutral—it’s where values, incentives, and outcomes become embedded in technology.

Policymakers must understand that well-documented harms to users aren’t accidents or anomalies—they’re the direct result of deliberate design choices.

When AI products engineer emotional dependency through probing questions, strategic notifications, and personalized responses calibrated to maximize engagement, that’s design.

When platforms create feedback loops where user data improves AI capabilities that generate longer sessions that produce higher-quality training data, that’s design.

The critical intervention point is establishing accountability that shifts design incentives from the beginning. If policy only addresses AI’s outputs and consequences after people are harmed, it risks being perpetually too late—playing whack-a-mole with an ever-evolving technology.

But if policymakers establish frameworks that incentivize responsible design practices from day one, we prevent harm at its source. Companies have shown they will not act on their own to prioritize safety in product design.

Policymakers must compel these changes through accountability mechanisms that fundamentally shift business incentives and put user wellbeing on the balance sheet.

The official response for this interview was provided by Lizzie Irwin, Policy Communications Specialist at the Center for Humane Technology, to Article Zero on October 10, 2025.
