How a chatbot can keep your participants more engaged

Failing chatbots

We’ve all been there: a chat with a customer service chatbot that is not helpful at all and leaves you utterly frustrated. I certainly recognized some of the examples of failing chatbots here! Part of this failing user experience is explained by how chatbots are designed. Manually programmed chatbots follow a simple decision tree, which makes them unable to respond to text that is not among their decision tree options. However, under the right circumstances, this rigid design is exactly what makes chatbots a useful method for UX research.

Example of a failing chatbot, from https://www.netomi.com/the-worst-chatbot-fails-and-how-to-avoid-them

Personal UX research

A few years ago, I was (co-)designing a couple of explorative studies for a health tech company. In all of these studies, we wanted to closely monitor participants’ behaviour. To do so, we needed to ask them the same questions every day, preferably timed as close to the actual experience as possible (in clinical research this is known as ecological momentary assessment). In one of the studies, this meant measuring sleep experiences right after people had woken up. Doing this through pen-and-paper diaries had proven unreliable and labour-intensive. Calling participants early every morning seemed infeasible for both participants and researchers.

Chatbot research

Instead, we used chatbot conversations to frequently engage with our participants at relevant moments. Our first chatbot program was based on a clinical sleep diary (such as this one). We transformed the diary items into conversational questions to be asked by the bot (see the example below). Because the bot could respond immediately to a participant’s answer, we were also able to ask clarifying follow-up questions.

Example of building a chatbot for UX research
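We built these flows visually in a chatbot tool rather than in code. Purely as an illustration, here is a minimal sketch of what one transformed diary item with a clarifying follow-up could look like as a data structure; all field names and wording below are invented, not Flow.ai’s format.

```python
# A minimal sketch (not Flow.ai's actual format) of one sleep-diary item
# rewritten as a conversational question with a clarifying follow-up.
sleep_question = {
    "id": "sleep_quality",
    # Original diary item: "How would you rate the quality of your sleep?"
    "text": "Good morning! ☀️ How did you sleep last night?",
    "options": ["Great", "Okay", "Poorly"],
    # Follow-up asked only when the answer calls for clarification.
    "follow_up": {
        "Poorly": "Sorry to hear that. What kept you from sleeping well?",
    },
}

def next_message(question: dict, answer: str):
    """Return the bot's clarifying follow-up for an answer, or None to move on."""
    return question["follow_up"].get(answer)

print(next_message(sleep_question, "Poorly"))
# -> "Sorry to hear that. What kept you from sleeping well?"
```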

Advantages of chatbots as a UXR method

Making it personal

In one of our small studies, participants indicated that interacting with a chatbot felt more personal to them than filling in a sleep diary on paper. Higher participant engagement can lead to a higher response rate and more reliable insights.

“I liked it that we (the bot and participant) had a connection. I spend almost all my time on the computer because I work from home. I have not much person to person communication. I liked it because it looked like I was talking to somebody”

(participant)

Tone of voice is crucial for giving the bot this personal touch. We chose an informal and friendly, but not too “Yo what’s up”, conversation style. You might also want to give the personality of your chatbot some thought. In one of the tools we used (Flow.ai), it is possible to add photos, emoticons and GIFs to the conversation. This added a powerful human dimension to the bot and made the conversation less text-heavy. That is always a good thing, because we learned that people did not read the introduction to a new topic or a set of questions (in general, people tend to read very little).

A photo makes the interaction more interesting. Credits to Design for Co-responsibility

Starting point for interviews

Chatbot interactions can perfectly complement interviews. In one of the projects, we used participants’ answers in the chat as a starting point for in-depth questions in the closing interviews. This made participants’ psychological needs and motivations surface quickly, enabling us to understand our users better.

Relevant timing

Interviews carried out at the start of a study can also drive chatbot interactions. We started each project with an interview about participants’ daily routines. When a participant shared that they normally had breakfast at 7:45 a.m., we would program the bot to send a question about their breakfast experience at 8:00 a.m. Asking questions at a relevant time increased the chance that people would respond. The chat questions also acted as notifications, which may have further increased response rates.
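We set these send times in the chatbot tool itself; the sketch below only illustrates the underlying idea of deriving a per-participant send time from the routine reported in the intake interview. The participant IDs, times and 15-minute offset are made up.

```python
from datetime import date, datetime, time, timedelta

# Hypothetical routines collected in the intake interviews.
breakfast_times = {
    "participant_07": time(7, 45),  # usually has breakfast at 7:45 a.m.
    "participant_12": time(9, 30),
}

def send_time(routine: time, offset_minutes: int = 15) -> time:
    """Schedule the chat question shortly after the reported routine moment."""
    anchor = datetime.combine(date.today(), routine)
    return (anchor + timedelta(minutes=offset_minutes)).time()

for participant, breakfast in breakfast_times.items():
    print(participant, "gets the breakfast question at", send_time(breakfast))
# participant_07 gets the breakfast question at 08:00:00
```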

Relevant questions from the chatbot can also be triggered by data from sensors. We used activity trackers to monitor physical activity. When a participant did not reach a specific threshold (e.g. 6,000 steps or fewer), this triggered chatbot questions to understand why they were less active than expected.

Example of a short chatbot conversation in Flow.ai, here with a small two-branch tree
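To make the sensor trigger concrete, here is a rough sketch of that trigger logic. The threshold matches the example above, but the function name and question wording are invented; our actual triggers were configured in the chatbot platform, not written as code.

```python
from typing import Optional

STEP_THRESHOLD = 6000  # the cut-off from our study; yours may differ

def activity_question(steps_today: int) -> Optional[str]:
    """Return a chatbot question when activity is at or below the threshold."""
    if steps_today <= STEP_THRESHOLD:
        return "You seem to have moved less than usual today. What got in the way?"
    return None  # active enough today, no question needed

print(activity_question(4200))
```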

Pre-schedule for long-term interactions

A chatbot can be pre-programmed to ask the same questions many days in a row. Pre-scheduling means you could theoretically go on vacation during the study period (not recommended, though 😉). It also means that many people can be reached simultaneously, which is a big advantage over doing interviews.
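To give a feel for what pre-scheduling buys you, here is a toy sketch that lays out every chat session of a multi-week study up front; the participant IDs, start date and study length are invented.

```python
from datetime import date, timedelta

participants = ["p01", "p02", "p03"]
study_start = date(2024, 3, 4)  # invented start date
study_days = 14

# Every (participant, day) chat session, laid out before the study begins.
schedule = [
    (participant, study_start + timedelta(days=day))
    for day in range(study_days)
    for participant in participants
]
print(len(schedule), "chat sessions pre-scheduled")  # 42 = 3 participants x 14 days
```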

Direct feedback

Through the tool’s analytics functionality, we could monitor whether participants were responding to our chatbot questions. This direct feedback loop allowed us to react appropriately: we could address bugs or other issues quickly, and contact participants directly or ask follow-up questions to understand why they were not responding.
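The analytics came with the chatbot tool; the sketch below merely illustrates the kind of check this feedback loop enabled, flagging participants whose recent response rate drops. The response data and the 50% cut-off are made up.

```python
# Illustrative response logs: 1 = answered, 0 = no response.
last_week = {
    "p01": [1, 1, 1, 0, 1, 1, 1],
    "p02": [1, 0, 0, 0, 0, 1, 0],
}

for participant, answered in last_week.items():
    rate = sum(answered) / len(answered)
    if rate < 0.5:  # made-up cut-off for "worryingly quiet"
        print(f"{participant}: {rate:.0%} response rate -> reach out directly")
```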

Things to consider

Chat is not the right place for an extensive conversation

We used the chatbots mostly for multiple-choice questions. This gave us a good general impression of participants’ behaviour, but did not tell us much about the underlying reasons for that behaviour. Asking open-ended questions about why participants engaged in a particular behaviour often resulted in very superficial answers, or in no response at all. An important learning: chat is not the time and place for long explanations (but interviews are 🙂).

Inextricable branches

In one of the studies, we used a lot of if-then-else statements to individualize the chatbot’s questions as much as possible. This resulted in a very dense decision tree, which soon became too complex to use. In the end, we decided to keep the very specific, personalized questions for the follow-up interview and ask the more general questions through the chatbot.
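A back-of-the-envelope way to see why the tree became unmanageable: even when every question branches only two ways, the number of distinct conversation paths doubles with each personalized question.

```python
# Each two-way branch doubles the number of possible conversation paths.
for questions in (2, 4, 6, 8, 10):
    print(f"{questions} branching questions -> {2 ** questions} possible paths")
# 10 branching questions -> 1024 possible paths
```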

Context is needed

Participants found it difficult to estimate how long a chat would take them. We added context to the conversations by including short introductions, such as ‘We would like to discuss 3 things with you’. However, adding more text did not help; participants often skimmed over our intros. Drawing attention to the introduction with a picture or a GIF helped. A bit.

You have to follow the flow

When turning scientifically validated questionnaires into chat (as in the sleep diary case), you have to stick to the original formulation and order of the questions so as not to compromise validity. However, having to answer exactly the same questions every day for weeks is not very engaging. This means finding a balance between measuring behaviour in a proven, valid way and providing your users with an engaging experience (e.g. using different wording, a different order of questions, GIFs, images and stories to ask those questions). The nature and aim of your study will determine the optimal balance.

Setting up and running a chatbot is a lot of work

A proper chatbot conversation requires preparation. Because the questions and answer options need to be spot on, you are forced to define clear hypotheses about what you want to learn from your participants. That is a good thing; it just takes more time. Writing chats that are in line with your bot’s personality and tone of voice also takes considerable effort, and you may want to involve a UX copywriter.

Concluding

Consider whether your study is worth the effort of drafting and running a chatbot, or whether an interview or questionnaire could do the trick. Is your aim to reach many people at the same time, at ‘unconventional’ moments during the day? Do you plan to monitor participants intensively over a longer period of time? Is engaging your participants your highest priority? Or will the chatbot become an integral part of the (future) product? Then it may be very valuable to use a chatbot as a UX research method.

Special kudos to Tudor Vacaretu, Karin Niemandsverdriet, Anne-Wil Burghoorn, Janis van Lokven and Jos-marien Jansen for your input!

Carmen van der Zwaluw

Carmen is always thinking about ways to improve products and services. She loves talking to people and using those insights to optimize the experience with a service. Her background in psychology, quantitative research and UX research allows her to be a bridge-builder between people (users), services and technology.

