Basic Methods of UX Research
There are three basic methods used in user experience (UX) research: asking people what they do and think through interviews and surveys; observing what people do through ethnographic observations and user testing; and inspecting prototypes and artifacts through guideline-based inspections and walkthroughs. These methods are used together and iteratively over the course of a design project to understand users and guide the product design towards a great user experience.
So we use a lot of different methods in UX research, but it's useful to think of three basic methods, or approaches, that encompass pretty much everything we do. We can ask people what they do and what they think; we can observe what they do; and we can inspect prototypes and artifacts to determine whether they're likely to deliver a good user experience.

When we talk about gaining insight through asking, there are a number of specific methods we use. By far the most common are interviews and surveys. Interviews consist of conversations with stakeholders to understand aspects of their experience. Surveys consist of questions distributed to lots of people to elicit information about their attitudes, behaviors, and characteristics. There are other methods we won't go into right now, like focus groups, diary studies, and experience sampling.

When we talk about gaining insight through observation, there are a number of specific ways to do that. One is ethnographic observation, which basically consists of hanging around in particular environments while people perform activities, and watching them engage in those activities to understand how they go about them. We can also observe how people interact with prototypes and systems we've developed by asking them to perform scripted tasks, to see whether a system supports them; this is user testing. And after a system is built and deployed in the wild, we can employ usage analytics, often called web analytics or mobile analytics depending on the platform, to analyze large-scale traces of system usage and understand patterns of use (there's a short sketch of this idea at the end of this overview). Again, there are other, more specialized techniques, like video analysis and social media mining, that we won't go into here.

Finally, when we talk about using inspection methods to gain insight, there are a couple of ways to do that. We can perform guideline-based inspections, where we compare a system design against known best practices to find places where it probably breaks down because it violates some principle of what we know is likely to work for people in a particular context. We can perform walkthrough-based inspections, where we step through an interaction sequence using specific techniques to take a user's-eye view and find probable breakdowns. And we can perform comparative analysis, where we systematically compare a design with similar designs to identify strengths and weaknesses.

We often combine these approaches in specific methods. In user testing, we not only observe people performing tasks; we usually accompany those observations with interviews, to get more information about people's reactions to the product and learn more about what works and why. We also combine watching and asking through contextual interviews, often in the early stages of a design project, where we ask questions of users while observing their natural activities. We might watch over their shoulders as they perform tasks using their current system, asking why they're doing things and what works and what doesn't. There are other techniques, like artifact-based interviewing, that employ a combination of watching and asking, which we won't say much about here.
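To make the usage-analytics idea above a bit more concrete, here is a minimal sketch of the kind of aggregation an analytics process performs. The event format (user, feature, timestamp) and the data are hypothetical examples, not the schema of any particular analytics product.

```python
# Minimal usage-analytics sketch: given a trace of usage events,
# summarize how often each feature is used and by how many distinct users.
# The event format and sample data are hypothetical.
from collections import Counter, defaultdict

events = [  # in practice these would come from your logging pipeline
    {"user": "u1", "feature": "search", "ts": "2024-01-01T10:00:00"},
    {"user": "u1", "feature": "export", "ts": "2024-01-01T10:05:00"},
    {"user": "u2", "feature": "search", "ts": "2024-01-01T11:00:00"},
    {"user": "u3", "feature": "search", "ts": "2024-01-02T09:30:00"},
]

uses = Counter(e["feature"] for e in events)   # total uses per feature
users = defaultdict(set)                       # distinct users per feature
for e in events:
    users[e["feature"]].add(e["user"])

for feature, n in uses.most_common():
    print(f"{feature}: {n} use(s) by {len(users[feature])} user(s)")
# search: 3 use(s) by 3 user(s)
# export: 1 use(s) by 1 user(s)
```

Real analytics platforms do this at much larger scale, but the underlying question is the same: which parts of the system are people actually using, and how often?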
So a question might be: when do you use which of these techniques? Well, at a very high level, we probably want to use asking techniques, where we're having conversations with or posing questions to members of our target population, when observation is infeasible: maybe the activities are infrequent, take a very long time to unfold, or are private and not things we can easily observe. We also want to ask questions when values and motivations are key, because those are hard to get out of an observation, where you can't find out why people are doing things. And we specifically employ surveys when we need large numbers of responses and a high degree of certainty, where we can employ statistical methods to make strong claims that particular characteristics are present in our user population (a small worked example follows this discussion).

We want to lean more towards observation when self-report will miss information, perhaps because of the frailty of human memory, or because there's tacit knowledge involved that people won't be able to tell you about in an interview or survey. We want to conduct observations when process and communication are important: when it's essential to see how people go about doing things, and how communication with the other participants in an activity shapes the way people conduct it. And we specifically employ techniques like analytics, again, where large numbers and high certainty are needed.

We want to perform inspections when, first of all, you have a product to inspect. Inspection methods don't work early in a design project, when you're still trying to understand users, their current practices, and the possible options for a solution. But once you have a prototype, you can perform things like guideline-based inspections and walkthroughs. Inspection methods are also useful when interacting with users is too expensive or cumbersome, perhaps because users are difficult to recruit, or because you're simply not at a point in the design process where it makes sense to invest in those kinds of methods.
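As a small worked example of the statistical side of surveys, here is a sketch that computes a 95% confidence interval for a proportion observed in survey responses, using the normal (Wald) approximation. The numbers are made up for illustration.

```python
# Sketch: 95% confidence interval for a proportion from survey data,
# using the normal (Wald) approximation. All numbers are hypothetical.
import math

n = 400        # survey respondents
k = 272        # respondents who reported using the feature weekly
p = k / n      # observed proportion: 0.68

z = 1.96       # z-score for 95% confidence
margin = z * math.sqrt(p * (1 - p) / n)

print(f"{p:.0%} +/- {margin:.1%}")   # 68% +/- 4.6%
# With 400 responses, we can claim the true proportion lies roughly
# between 63% and 73%: the kind of strong claim that a handful of
# interviews cannot support.
```

In practice you would also worry about sampling bias and response rates, but the arithmetic shows why large samples support stronger claims than small ones.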
The critical thing to realize is that you will use all of these methods in a typical design project. Here is one example; every project is different, and they all unfold differently. You might start at the beginning of a project, in the assessment phase, by conducting interviews, where you try to understand what users are doing and why they do things the way they do. You might follow that up with observations, where you try to understand the things they're not telling you, and the process by which they conduct the activities you want to support with your design. After going through one cycle of the iterative design process, through the assess, design, and build phases, you might come back around and ask: now that we have some specific ideas, how do they compare with what else is out there? You might perform a comparative analysis, going out to look at other products that try to solve the same kind of problem, to understand which ones work better and why, and which ones don't work as well.

After another turn of the design cycle, you might be ready to perform some low-fidelity user testing. You might take a paper prototype, for example, put it in front of users, ask them to perform some tasks, and try to understand what works and what doesn't about your design, so that you can fix it in the next cycle. After the next turn of the iterative design cycle, you might be ready to perform a heuristic evaluation, a specific form of guideline-based inspection, where you take your design and ask: which aspects of this design are likely to deliver a positive user experience, and which are probably going to lead to a negative one? Then, after another design and build phase, you might be ready for high-fidelity user testing. Now you have something pretty close to the final product, something you think you might be ready to ship, and you want to iron out the last few details by putting it in front of users, getting their feedback, and observing where it works and where it doesn't.

Even after the design process has concluded, the user experience research doesn't necessarily end. After a product has been released, it's common to set up an analytics process so you can see how people are actually using the product, and use that information to improve it in the next cycle. You might also conduct surveys to ask people about their experience with the product: what their satisfaction level is, what works for them, and what doesn't.

In this lecture, I've introduced the three basic methods we use in UX research: asking, observing, and inspecting. In future lectures, we will go into much more detail about the specific ways we do each of these, through specific UX research methods.