“What we know is a drop, what we don’t know is an ocean” – Isaac Newton
Since the Industrial Revolution, many workers have had to look over their shoulders, worrying that technology and automation might make their jobs obsolete. Historically, manual or repetitive jobs such as switchboard operators, travel agents, and factory workers have been first on technology’s butcher’s block.
Recent developments in generative artificial intelligence (AI) and Large Language Models (LLMs) – like ChatGPT – have raised unprecedented questions and uncertainty about the future of knowledge work, traditionally thought to be protected from past forms of automation. Even human creativity, inborn talent, and artistic expression may not be safe from this new AI revolution.
The field of User Experience Research (UXR) has not escaped the precariousness caused by this wave of AI advancement, which raises questions about how best to integrate AI into our workflows and whether we should prepare for a career change.
In this article, I will argue that this panic and fear are mainly overblown. My central argument is that AI can’t do the type of work that we do. By comparing AI to UXR, it will become evident that our profession is safe and durable. This analysis might also reveal a new way to frame UXR’s unique value-add, which sometimes gets overlooked by our business partners because (a) we wear many hats to get the job done and (b) persistent misunderstandings remain about the type of value we bring to product teams.
Narrow and General AI
Before we explain why you should stop updating your resume, let’s define a few terms and pin down what we mean by “narrow” and “general” AI. An easy way to make the distinction is that artificial general intelligence (AGI) is still the stuff of fiction. Think of HAL from “2001: A Space Odyssey,” Data from “Star Trek,” or Samantha from “Her.” Obviously, these systems don’t yet exist, and some technologists predict they never will, but this is controversial¹.
The important thing to remember is that today’s AI systems, including LLMs, are narrow and domain-specific; they can do some tasks better than humans (e.g., playing chess), but not others (e.g., changing a baby).
So, what are LLMs? A concise way to describe these systems is that they are remarkably good at retrieving recorded human knowledge unimaginably fast. ChatGPT, the most famous LLM as of this writing, metaphorically compresses 2,500 years of human writing plus the internet and makes most known answers accessible with a few simple prompts.
¹ Smith, B., & Landgrebe, J. (2022). Why Machines Will Never Rule the World.
Primary and Secondary User Research
Readers will likely have a good grasp of who UXRs are and the types of things we do, but let’s outline a few basic concepts and functions. The job requires many different hard and soft skills that change based on industry, team goals, and user groups, but for our purposes, let’s home in on UXR deliverables. A deliverable is the product our stakeholders, business partners, and teammates consume and act upon. The first main distinction to make is between a primary and a secondary research deliverable.
Primary research asks questions that don’t yet have answers and seeks to answer them by generating new knowledge about the world. Its aim is to make discoveries. To obtain that knowledge, UXRs have to creatively devise activities (called studies) that involve making observations and collecting data that shed light on the initial questions under investigation.
Secondary research also starts with a problem or question but, instead of making new empirical observations about the world, relies on analyzing existing knowledge: data, publications, reports, and other sources.
LLMs are already quite good at producing secondary research. These systems are adept at finding and synthesizing preexisting sources, facts, and information, and at coming up with the best possible answers to questions based on what is already known. LLMs may already be replacing employees who do these types of tasks. There is little doubt that LLMs are fantastic tools for secondary research and are already helping UXRs create better, more robust deliverables. I can imagine LLMs becoming a standard tool in every UXR’s toolkit in the near future.
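To make this concrete, the secondary-research workflow above can be sketched in a few lines of code. This is a minimal, hypothetical helper (all names, sources, and figures are illustrative, not from any real study): it bundles pre-existing material into a single prompt that could then be sent to any LLM for synthesis.

```python
# Hypothetical helper: assemble a desk-research (secondary research) prompt
# from pre-existing sources. The sources and question below are invented
# purely for illustration.

def build_synthesis_prompt(question: str, sources: list[dict]) -> str:
    """Bundle existing sources into one synthesis prompt for an LLM."""
    lines = [
        "You are assisting with secondary (desk) research.",
        f"Research question: {question}",
        "Synthesize ONLY the sources below; cite them by number.",
        "",
    ]
    for i, src in enumerate(sources, start=1):
        # Each source is reduced to a title and a short summary.
        lines.append(f"[{i}] {src['title']}: {src['summary']}")
    return "\n".join(lines)

prompt = build_synthesis_prompt(
    "What do we already know about checkout drop-off?",
    [
        {"title": "2023 funnel report", "summary": "Large drop at the payment step."},
        {"title": "Support ticket review", "summary": "Users distrust the card form."},
    ],
)
```

The point of the sketch is the shape of the task: everything the LLM needs is already written down, and the work is retrieval and synthesis, not discovery.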
Where LLMs begin to show weakness is in producing end-to-end primary research. I want to make a very strong claim: LLMs like ChatGPT will never be able to generate primary research. I see two core reasons why LLMs fail at this.
(1) The background of primary research
Before primary research can begin, UXRs have to take stock of the situation and identify why any research is needed. That might sound simple, but it is far from it. This understanding is the foundation of the research’s scope and is critical to producing impactful primary research.
While LLMs do seem able to formulate questions based on prompts and other inputs, their capability to fully understand all aspects of a project’s background (its overarching goals, the value the new knowledge will bring, why a certain participant profile was chosen, the generalizability of the data, and the key constraints to manage) remains highly doubtful. All these and other background elements are often implicitly understood by UXRs when they set the primary research’s objectives, scope, and design.
These background elements, both salient and veiled, are also immeasurably important when moderating interviews, asking probing questions, visualizing survey data, and presenting the newly discovered knowledge in an accessible and engaging manner.
(2) LLMs don’t collect empirical observations
Primary research first requires asking great questions and knowing the project’s background. But then, a researcher must act. He or she must collect data. Systematic empirical observations typically lead to discoveries – new knowledge that couldn’t have been predicted before the data was collected.
In many cases, this is exactly why primary research is carried out: to give us new information about the world. But making these discoveries falls outside the purview of LLMs. Feeding an LLM a qualitative interview transcript or a survey’s .csv export for data interpretation is feasible, and its proficiency in this task will likely improve. But actually being able to generate new data for a given purpose is where these systems stumble.
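The feasible half of this, handing an LLM existing data for interpretation, usually starts with mundane preprocessing. A minimal sketch, using an invented transcript and an assumed “Speaker: utterance” format, of splitting a raw interview transcript into speaker turns before any LLM sees it:

```python
import re

def split_turns(transcript: str) -> list[tuple[str, str]]:
    """Split a 'Speaker: utterance' transcript into (speaker, text) turns."""
    turns = []
    for line in transcript.strip().splitlines():
        match = re.match(r"^(\w+):\s*(.+)$", line)
        if match:
            turns.append((match.group(1), match.group(2)))
    return turns

# Invented example transcript, purely for illustration.
raw = """
Moderator: What did you expect this button to do?
P1: Honestly, I thought it would save my draft.
Moderator: And what happened instead?
P1: It submitted the whole form, which surprised me.
"""

turns = split_turns(raw)
participant_quotes = [text for speaker, text in turns if speaker == "P1"]
```

Note what the code takes for granted: the interview already happened. Everything an LLM can do here operates downstream of observations a researcher had to go out and collect.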
LLMs think quickly, and their knowledge sources are vast, but these powers are meaningless when it comes to making observations, realizing what they mean, and generating new knowledge. Collecting data and deriving meaning and new understandings from them is categorically different from spitting out answers based on everything that has already been written, which is all that LLMs are really doing.
AGI and Product Teams
My main claim is that AI does not have the capacity to conduct good primary research, but this only applies to narrow AI. It should be noted that AGI, by virtue of its definition, will certainly be able to understand the background of primary research projects, collect empirical observations, and produce new knowledge. This applies both to applied UXR work and to the research that scientists undertake.
There is no telling how far away this type of technology is, but it is safe to say that once it arrives, it will be a game changer akin to fire, the wheel, and electricity, and may even be the single most important development in human history. – Christopher Kovel
Comparing AI with UXR in this way may also help clarify the unique contributions UXRs make to businesses. Recognizing that our strength lies in producing primary research deliverables, and clearly articulating their nature and aim, may better define our roles on cross-functional product teams.
Emphasizing primary research could also give stakeholders a more straightforward description of the kind of work we specialize in. For instance, viewing UXRs as experts in answering open questions and generating new knowledge can significantly impact product discovery. When product teams recognize they have specialists who can resolve current unknowns and test assumptions about what is being built, they may see more value in using our skills to build a better understanding of the current landscape and problem space before a single feature is defined or pixel is created.
After understanding what we mean by AI, its basics, and the two main kinds of UXR deliverables, it becomes clear that UXR is not in jeopardy of being automated away. Until the advent of AGI, our profession will not be rendered obsolete.
The main reasons why LLMs can’t threaten the type of work we do (and the work of other highly trained professions, for that matter) are twofold: firstly, we possess extensive and intangible background understanding of the context of our work, its aims and goals, its scope and purpose, and the acceptance criteria for success or failure. These background elements can be amorphous and dynamic, including an all-too-human set of skills, intuitions, and gut feelings that an LLM can’t be prompted to know.
Secondly, the process of structuring a data collection methodology, arranging the parts based on the background, and empirically observing the world with an eye toward creating new knowledge is not what LLMs have been trained for. The work we do can’t be done by programs or algorithms; we play by a different set of rules.
💬 For those interested in continuing the conversation with Chris, you can connect with him on his LinkedIn page.
UXinsight Festival 2024 – Stay Curious, Be Bold (April 15-17)
Do you want to continue the discussion about the influence of AI on the role of the UX Researcher?
Join us at the next edition of Europe’s largest UXR conference for everyone passionate about UX research ❤️