Food for Thought by ES: A Pro-AI Manifesto for Responsible UX Research Leadership

  • Emmanuelle Savarit
  • Feb 6

I completed the AI & Ethics programme at the London School of Economics at the end of last year.

Let me be very clear from the outset.
I am pro-AI.

I use AI a lot. Across my work and my life. Writing, synthesis, planning, learning, analysis, and decision support. If a tool reduces friction and makes my life easier, I will try it. I do not rely on a single model or platform. I experiment constantly. What matters to me is usefulness, not brand loyalty.

The course I took in autumn 2025 did not make me step back from AI.
It made me step back and think.

It changed how consciously I use AI, how vigilant I am about its effects, and how seriously I take my responsibility as a leader in UX research.

Because AI is not just a productivity layer.
It is a system of power.

AI systems shape decisions, priorities, visibility, and outcomes. Often invisibly. Often at scale. And the more smoothly they work, the less likely they are to be questioned.

One of the most important ideas from the course was legitimacy. Who gets to exercise power, on what basis, and with what right to challenge or contest decisions?

This question matters deeply for UX research.

We are no longer just designing interfaces or improving usability. We are contributing to systems that classify people, predict behaviour, prioritise some users over others, and automate decisions that used to involve human judgment.

If UX research focuses only on efficiency, usability, and optimisation, it is no longer neutral. It is taking part in how power is exercised.

The course's treatment of bias reinforced this point even further.

The course made something very clear to me. Bias in AI is rarely an accident. It is rarely just a data problem. It is usually the result of structural choices. Choices about training data, optimisation goals, thresholds, incentives, and trade-offs.

AI does not distribute harm evenly. It concentrates it. Error rates are not random. Exclusions are not incidental. And focusing on the “average user” can hide who actually bears the cost of automation.

UX research can unintentionally legitimise these systems. By validating them. By improving them. By optimising them for scale.

That does not make research unethical. But it makes vigilance non-negotiable.

Fairness is not just about representation in samples. It is about understanding who benefits, who bears the risk, and who can challenge outcomes.

The course also sharpened my thinking about human judgment.

I use AI extensively in research and I will continue to do so. But I no longer see human judgment as something to minimise or automate away. I see it as a safeguard.

AI outputs are not truths. They are interpretations. They reflect assumptions, values, thresholds, and priorities set by someone, often far from the people affected by the decisions.

Misalignment rarely looks dramatic. It shows up quietly. When efficiency overrides dignity. When prediction replaces understanding. When automation removes the friction that once protected people.

Leadership in AI-driven environments is not about blocking innovation. It is about knowing when to slow down, when to intervene, and when to insist on human oversight.

As UX researchers, we are well placed to do this. We understand context. We see second-order effects. We sit close to the human consequences of technical decisions.

That is why I am sharing this.

Not because I have all the answers.
But because my thinking has shifted.

Being pro-AI today means being deliberate. Conscious. Vigilant.

It means asking harder questions, not fewer.

Questions I continue to ask myself

  • When AI influences decisions about people, where does accountability really sit?

  • Are users able to understand, question, or contest those decisions?

  • When optimisation and fairness come into tension, which one wins?

  • Where should human judgment remain non-negotiable?

  • Which decisions should never be fully automated, regardless of accuracy?

For me, responsible AI leadership in UX research is not about restraint for its own sake. It is about using AI well, with clarity about power, consequences, and responsibility.

That is the mindset shift the LSE course gave me.
And I believe it is a shift our field urgently needs.


About the Author

Emmanuelle Savarit

Emmanuelle Savarit is a recognised leader in UX research and product strategy, with more than two decades of experience shaping research at scale. With a PhD in psychology and a background spanning technology, platforms, and digital transformation, she brings a rare combination of depth, clarity, and strong business focus.

She is the author of Practical User Research and The UX Research Powerhouse, creator of The UX Research Club, and an advisor to senior leaders on embedding research into strategic decision-making. Emmanuelle is a regular speaker and runs executive masterclasses for research and product leaders seeking greater influence and impact.