
Lesson learned: integrating AI into user research without losing value

The introduction of AI into our research projects sparked a mix of curiosity and caution. On the one hand, there was excitement about new possibilities. On the other hand, there was concern about relying too heavily on a powerful technology that still needs guidance.

Written by Claudio Guerra
[Image: stylized illustrations depicting digital tools, emotions, and circular processes.]

In recent months, we have publicly shared some of Tangible’s most meaningful experiments in two talks: Valentina Marzola’s at UXday and Caterina Amato’s at URCA!.
This article compiles what we’ve learned and offers an honest look at the potential, limits, and practical implications of using AI in research with people.

Why did we decide to start experimenting?

Anyone conducting research in the real world is familiar with the daily challenges: tight timelines, limited budgets, difficulty synthesizing information, and the need to share clear insights quickly. In this scenario, we wondered whether AI could support us at specific moments, lightening the operational workload without compromising quality.

From the start, however, we set a shared rule: use AI to create space, not to replace people.
The goal isn’t to automate thinking but to make room for better thinking.

Two field experiments

For the projects presented by Caterina and Valentina, we chose to focus on two phases that exemplify how AI can provide concrete, measurable support: the discovery phase and usability testing.

Though we’ve run several experiments in both areas, the two talks highlighted two specific activities:

  • Preparing the research
    During the discovery phase, we used ChatGPT to explore a domain unfamiliar to both the team and the client, as well as to simulate realistic dialogues, explore hypothetical scenarios, and identify variables and potential issues. We also tested whether ChatGPT could help co-create a research script (see the first sketch after this list).
    This collaboration enabled us to develop stronger questions and scripts.

  • Synthesizing data and preparing reports
    In both of the presented case studies, the number of interviewees exceeded 100. However, the real challenge that led us to explore AI support was time: ten full days for the usability test and about twelve hours of recordings for the discovery phase. In both cases, the time available for synthesis and reporting was extremely limited. What did we do? We asked NotebookLM and ChatGPT to act as assistants.

    For the discovery activity we preferred NotebookLM; for usability testing, ChatGPT. After rigorously structuring the note-taking file, we asked the tool to identify patterns, extract quotes, and uncover insights (see the second sketch after this list).
    It was an intense phase, but AI support made it more manageable.
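
To make the first activity concrete, here is a minimal sketch of what co-creating a script can look like, assuming the OpenAI Python SDK; the model name, prompts, and draft questions are illustrative placeholders, not the exact ones from our projects.

```python
# Sketch: co-creating a discovery interview script with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable. The model name, prompts, and
# draft questions are placeholders, not the ones used in our projects.
from openai import OpenAI

client = OpenAI()

draft_script = """\
1. Can you walk me through a typical day using the service?
2. What was the last problem you ran into, and what did you do?
"""

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a research partner helping a UX team prepare "
                "discovery interviews in a domain that is new to them. "
                "Flag leading questions, suggest missing topics, and "
                "list variables and potential issues worth probing."
            ),
        },
        {
            "role": "user",
            "content": f"Critique and extend this draft script:\n{draft_script}",
        },
    ],
)

print(response.choices[0].message.content)
```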
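
For the second activity, here is a similarly hedged sketch of the synthesis step. NotebookLM has no public API, so this version assumes the OpenAI SDK throughout; the file name, note layout, and model are hypothetical.

```python
# Sketch: asking an LLM to synthesize a structured note-taking file.
# NotebookLM has no public API, so this version assumes the OpenAI SDK;
# the file name, note layout, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

# One shared, rigorously structured file: one row per observation,
# e.g. participant | task | observation | verbatim quote.
with open("usability_notes.csv", encoding="utf-8") as f:
    notes = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a research assistant. Work only from the notes "
                "provided and never invent content. For every pattern you "
                "report, cite the supporting rows and verbatim quotes."
            ),
        },
        {
            "role": "user",
            "content": f"Identify recurring patterns and candidate insights:\n{notes}",
        },
    ],
)

print(response.choices[0].message.content)
```

The system prompt deliberately ties every claim back to the notes; as we discuss below, unconstrained prompts are where invented content tends to creep in.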
[Image: hand-drawn illustrations showing user insight analysis and a superhero designer interviewing a diverse group of people.]

What worked well

We discovered that AI can be a valuable ally, especially when the volume of data is too much for the team to handle. Automating repetitive tasks such as transcription, initial synthesis of interviews, identifying recurring themes, and extracting quotes freed us to focus on critical thinking tasks such as interpretation, connection, and storytelling.
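
As one concrete possibility for the transcription step, a sketch along these lines, assuming OpenAI's speech-to-text endpoint; the file name is a placeholder, and the output still needs human review.

```python
# Sketch: automating interview transcription ahead of synthesis.
# Assumes OpenAI's speech-to-text endpoint as one possible tool; the
# file name is a placeholder, and the output still needs human review.
from openai import OpenAI

client = OpenAI()

with open("interview_01.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

print(transcript.text)
```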

During the exploration phase in unfamiliar contexts, AI also proved helpful in addressing initial doubts, building plausible scenarios, and generating hypotheses for validation. Starting with a few footholds can make a big difference.

When working alone, in small teams, or under tight timelines, having a digital "copilot" to bounce ideas off of can be a real advantage, even just to cross-check emerging perceptions and verify the consistency of collected insights.

What worked less well

Not everything went smoothly. In some cases, using AI revealed more limitations than benefits. For example, we realized that, when supporting synthesis during a usability test, AI cannot "see". It struggles to link what’s happening on the screen with the behaviors described in the notes, unless it is given extremely detailed context. This makes AI less useful for analyzing complex interfaces unless it is paired with meticulous mapping work.

Moreover, the quality of the output is closely tied to the quality of the input. If the notes are disorganized or multiple researchers use different structures, the AI struggles. In some cases, it even invents content or emphasizes minor details, which skews the actual priorities.
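
To make "well-structured" concrete: a shared convention can be as simple as one row per observation with agreed fields that every researcher fills in the same way. The fields below are hypothetical, just to show the shape.

```text
participant | task     | observation                   | verbatim quote                       | severity (1-3)
P07         | Checkout | Missed the coupon field twice | "I didn't see where to put the code" | 2
```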

Finally, we realized that AI has no awareness of project context. It doesn't distinguish relevant information from noise unless you spell out what matters. In a delicate phase like synthesis, this can pose more risk than benefit.

What we’ve learned so far

If these experiments have taught us one thing, it’s that AI can unlock value, but it can't create it on its own. Making it work requires method, care, and critical thinking. AI is not an expert system; it’s a tool that can amplify or confuse our abilities.

The key lies in integration. It's not about replacing people with algorithms but rather creating space for collaboration between the two types of intelligence. In this sense, we prefer to think of AI as an assistant that helps you do your job better rather than as a replacement.

We’re also coming away with a greater awareness of how important it is to structure information well, agree on conventions within the team, and schedule critical, shared reviews.

AI is neither neutral nor magical. However, it’s a powerful tool when used methodically.

Based on this awareness, we’ve gathered a few principles that we’re keeping in mind as we continue to experiment, and that we think are worth sharing with anyone who has the same questions:

  • Well-structured data matters more than the tool: If the notes are disorganized, the AI will be too. If they’re well-organized, AI can really help.
  • AI understands text better than interfaces: For now, it can’t "see" a screen or read a sketch.
  • Speed isn't the goal, but a side effect: AI saves us time so we can focus on what requires care, listening, and empathy.
  • Never trust blindly: AI can invent, distort, or overemphasize irrelevant themes. Human, critical, and contextual validation is always needed.

Not faster, but more strategic

This all reminds us that the real value of research lies in our ability to deeply interpret what emerges, not in how quickly we conduct it.

We’re integrating AI not to conduct research faster, but to improve our work. This allows us to free up room for reflection, dive deeper into an insight, and choose the right words to tell the story. In an increasingly pressured context, this isn’t a shortcut; it’s a design choice.

Our exploration is far from over. We’re still testing tools, adapting processes, and learning from other teams. But if there’s one thing we can say with confidence, it’s this: AI doesn’t change how we do research. It changes the conditions that allow us to do it well.

[Image: Caterina and Valentina on stage during the two talks, with audience and projected slides.]

We’re living in a time of great uncertainty. Business models are shifting, new technologies are advancing rapidly, and expectations are evolving. In this context, testing and validating ideas with people isn't a luxury, it's a necessity.

That’s why we believe in the value of user research more than ever: to understand before building, to make better choices, and to avoid solutions that only work on paper. At Tangible, we support product teams and organizations by guiding them through focused, agile, and sustainable user research journeys.

If you’re looking for a concrete way to assess your progress, we can help.
