
UNIT 2

Human Computer Interaction

Evaluation Techniques

Evaluation Techniques
 Evaluation
 tests the usability and functionality of the system
 occurs in the laboratory, in the field, and/or in collaboration with users
 evaluates both design and implementation
 should be considered at all stages of the design life cycle

 evaluation occurs throughout the software development lifecycle
• note that the focus here is on user interface evaluation
 there are two main types of evaluation:
– evaluation by system designers or experts: typically conducted early in the development lifecycle, though it may also give later feedback on system performance
– evaluation by end users: typically conducted late in the development lifecycle, though it may give early feedback on system design

Goals of Evaluation

 assess extent of system functionality

 assess effect of interface on user

 identify specific problems

Evaluating Designs
evaluation by experts
• there are several types of evaluation conducted by experts or system designers:
– Cognitive Walkthrough
– Heuristic Evaluation
– Review-based evaluation

Cognitive Walkthrough
 proposed by Polson et al.
 a “walkthrough” is a step-by-step review of a sequence of actions
 evaluates how well the design supports the user in learning the task
 usually performed by an expert in cognitive psychology
 the expert “walks through” the design to identify potential problems, applying psychological principles
 forms are used to guide the analysis

Cognitive Walkthrough (ctd)
 For each task, the walkthrough considers:
 what impact will the interaction have on the user?
 what cognitive processes are required?
 what learning problems may occur?

 Analysis focuses on goals and knowledge: does the design lead the user to generate the correct goals?
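
In practice the form can be as simple as one record per task step. Below is a minimal sketch of such a form in Python; the field names simply mirror the questions above and are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    """One row of a cognitive-walkthrough form (illustrative structure)."""
    action: str                    # the action the user must perform
    intended_goal: str             # goal the design should lead the user to form
    impact_on_user: str = ""       # what impact will the interaction have?
    cognitive_processes: str = ""  # what cognitive processes are required?
    learning_problems: str = ""    # what learning problems may occur?

# The evaluator fills in one record per step while walking through the design.
step = WalkthroughStep(
    action="select 'Print' from the File menu",
    intended_goal="produce a paper copy of the document",
)
step.learning_problems = "menu label may not match the user's vocabulary"
print(step)
```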

Heuristic Evaluation
 proposed by Nielsen and Molich
 usability criteria (heuristics) are identified
 the design is examined by experts to see whether these criteria are violated

 Example heuristics
 system behaviour is predictable
 system behaviour is consistent
 feedback is provided

 Heuristic evaluation “debugs” the design.


• the 10 heuristics are:
 1. visibility of system status (i.e., inform users with feedback)
 2. match between system and real world (i.e., “speak” the user’s language; is the system understandable by the user?)
 3. user control and freedom (i.e., can the user “undo” and “redo” actions?)
 4. consistency and standards (does the system follow standard conventions for symbols, menu items, etc.?)
 5. error prevention (does the system make it hard for the user to make mistakes?)

Cont…
 6. recognition rather than recall (does the system let the user recognize options rather than having to recall them?)
 7. flexibility and efficiency of use (does the system let users tailor the interface to accommodate frequent actions or sequences of actions, e.g., macros?)
 8. aesthetic and minimalist design (e.g., dialogues shouldn’t contain extraneous information; the interface shouldn’t be cluttered)
 9. help users recognize, diagnose and recover from errors (are error messages in clear language?)
 10. help and documentation (does the system provide documentation and/or a help facility?)
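
A minimal sketch of how an expert’s findings might be recorded against these ten heuristics. The severity scale (0 = not a problem, 4 = usability catastrophe) follows Nielsen’s convention; the example violations are invented.

```python
NIELSEN_HEURISTICS = [
    "visibility of system status",
    "match between system and real world",
    "user control and freedom",
    "consistency and standards",
    "error prevention",
    "recognition rather than recall",
    "flexibility and efficiency of use",
    "aesthetic and minimalist design",
    "help users recognize, diagnose and recover from errors",
    "help and documentation",
]

violations = []  # (heuristic number, description, severity 0-4)

def report(heuristic: int, description: str, severity: int) -> None:
    """Record one violation; severity follows Nielsen's 0-4 rating scale."""
    assert 1 <= heuristic <= 10 and 0 <= severity <= 4
    violations.append((heuristic, description, severity))

report(4, "OK/Cancel button order differs across dialogs", 3)
report(1, "no progress indicator during file upload", 2)

# Prioritize fixes by severity, worst first.
for h, desc, sev in sorted(violations, key=lambda v: -v[2]):
    print(f"[severity {sev}] #{h} {NIELSEN_HEURISTICS[h - 1]}: {desc}")
```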

Review-based evaluation
 results from the literature used to support or refute parts of the design

 care needed to ensure results are transferable to the new design:
– population of users (novices, experts)
– assumptions made
– analyses performed

 Model-based evaluation
 design rationale can also provide useful evaluation information

Evaluating through User Participation
 can be done at different stages of development
 system developers can simulate missing (undeveloped) pieces of the interface using techniques like “Wizard of Oz”, where a human plays the part of the pieces that will later be automated (a sketch follows this list)
 elements of user evaluation:
– styles of evaluation
– experimental evaluation design
– observational techniques
– query techniques
– evaluation through physiological responses
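
A minimal sketch of the Wizard-of-Oz idea, assuming a text-based dialogue system whose reply component has not yet been built: the user’s input is routed to a hidden human operator (the “wizard”) who types the reply the automated component would later produce.

```python
def automated_reply(user_input: str) -> str:
    # The real component (e.g. a language-understanding module) is not built yet.
    raise NotImplementedError

def wizard_reply(user_input: str) -> str:
    # A hidden human operator sees the user's input and types the response.
    return input(f"[wizard] user said {user_input!r} -> reply: ")

def respond(user_input: str) -> str:
    try:
        return automated_reply(user_input)
    except NotImplementedError:
        return wizard_reply(user_input)  # human stands in for the missing piece

if __name__ == "__main__":
    # The user believes they are talking to a working system.
    while (text := input("user> ")) != "quit":
        print("system>", respond(text))
```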

users: styles of evaluation
 laboratory studies
– conducted in controlled settings
– advantage: no distractions
– disadvantage: no context
 field studies
– conducted “in situ”, i.e., in real-use settings
– advantage: context
– disadvantage: can add distractions

Laboratory studies
 Advantages:
 specialist equipment available
 uninterrupted environment

 Disadvantages:
 lack of context
 difficult to observe several users cooperating

 Appropriate:
 if the system location is dangerous or impractical
 for constrained single-user systems
 to allow controlled manipulation of use

Field Studies
 Advantages:
 natural environment
 context retained (though observation may alter it)
 longitudinal studies possible

 Disadvantages:
 distractions
 noise

 Appropriate:
 where context is crucial
 for longitudinal studies

Evaluating Implementations
 Requires an artefact:
• simulation
• prototype
• full implementation
 Types of evaluation
• Empirical or experimental methods
• Observational methods
• Query techniques

Experimental evaluation
 controlled evaluation of specific aspects of
interactive behaviour
 evaluator chooses hypothesis to be tested
 a number of experimental conditions are considered
which differ only in the value of some controlled
variable.
 changes in behavioural measure are attributed to
different conditions

Experimental factors
 Subjects
 who – representative, sufficient sample
 Variables
 things to modify and measure
 Hypothesis
 what you'd like to show
 Experimental design
 how you are going to do it

Variables

 independent variable (IV)
 characteristic changed to produce different conditions
 e.g. interface style, number of menu items

 dependent variable (DV)
 characteristic measured in the experiment
 e.g. time taken, number of errors

Hypothesis
 prediction of outcome
 framed in terms of IV and DV

 e.g. “error rate will increase as font size decreases”

 null hypothesis:
 states no difference between conditions
 aim is to disprove this

 e.g. null hyp. = “no change with font size”
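
A minimal sketch of testing such a hypothesis, assuming error counts collected under two font-size conditions (the data are invented), using an independent-samples t-test from SciPy:

```python
from scipy import stats

# DV: number of errors per participant, under two values of the IV (font size).
errors_large_font = [2, 1, 3, 2, 2, 1, 4, 2]
errors_small_font = [4, 5, 3, 6, 4, 5, 4, 7]

# Independent-samples t-test: can we reject the null hypothesis
# "font size makes no difference to error rate"?
t, p = stats.ttest_ind(errors_large_font, errors_small_font)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("reject null hypothesis: error rate differs with font size")
else:
    print("cannot reject null hypothesis")
```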

Observational Methods

Think Aloud, Cooperative Evaluation, Protocol Analysis, Automated Analysis, Post-task Walkthroughs

Think Aloud
 user observed performing a task
 user asked to describe what he or she is doing and why, what he or she thinks is happening, etc.

 Advantages
 simplicity – requires little expertise
 can provide useful insight
 can show how the system is actually used
 Disadvantages
 subjective
 selective
 act of describing may alter task performance

Cooperative evaluation
 variation on think aloud
 user collaborates in evaluation
 both user and evaluator can ask each other questions
throughout

 Additional advantages
 less constrained and easier to use
 user is encouraged to criticize system
 clarification possible

Protocol analysis
 paper and pencil – cheap, limited to writing speed
 audio – good for think aloud, difficult to match with
other protocols
 video – accurate and realistic, needs special equipment,
obtrusive
 computer logging – automatic and unobtrusive, large
amounts of data difficult to analyze
 user notebooks – coarse and subjective, useful insights,
good for longitudinal studies

 Mixed use in practice.
 audio/video transcription is difficult and requires skill
 some automatic support tools are available
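
A minimal computer-logging sketch (the event names and file path are illustrative): timestamped interface events are appended to a log during the session and summarized afterwards.

```python
import json
import time
from collections import Counter

LOG_PATH = "session.log"  # hypothetical log file

def log_event(event: str, **detail) -> None:
    """Append one timestamped event record as a line of JSON."""
    record = {"t": time.time(), "event": event, **detail}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Instrumented UI code would call log_event at each interaction.
log_event("menu_open", menu="File")
log_event("menu_select", menu="File", item="Print")
log_event("error", message="printer not found")

# Later analysis: count event types to find, e.g., error hot spots.
with open(LOG_PATH) as f:
    counts = Counter(json.loads(line)["event"] for line in f)
print(counts)
```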

Automated analysis – EVA
 EVA (Experimental Video Annotator) – a workplace project
 Post-task walkthrough
 user reflects on actions after the event
 used to fill in intention
 Advantages
 analyst has time to focus on relevant incidents
 avoids excessive interruption of the task
 Disadvantages
 lack of freshness
 may be post-hoc interpretation of events
 Examples
 Noldus Pocket Observer XT (http://www.noldus.com)

Post-task Walkthroughs
 transcript played back to the participant for comment
 immediately → fresh in mind
 delayed → evaluator has time to identify questions
 useful to identify reasons for actions and alternatives considered
 necessary in cases where think aloud is not possible

Query Techniques

Interviews
Questionnaires

Interviews
 analyst questions the user on a one-to-one basis, usually with prepared questions
 informal, subjective and relatively cheap

 Advantages
 can be varied to suit context
 issues can be explored more fully
 can elicit user views and identify unanticipated problems
 Disadvantages
 very subjective
 time consuming

Questionnaires
 Set of fixed questions given to users

 Advantages
 quick and reaches large user group
 can be analyzed more rigorously
 Disadvantages
 less flexible
 less probing

Questionnaires (ctd)
 Need careful design
 what information is required?
 how are answers to be analyzed?

 Styles of question
 general
 open-ended
 scalar
 multi-choice
 ranked
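
For scalar items, analysis can be as simple as summarizing each question’s ratings. A minimal sketch, assuming responses on a 1–5 agreement scale (the data are invented):

```python
from statistics import mean, median

# Likert-style responses: one list of ratings per question.
responses = {
    "The system was easy to learn": [4, 5, 3, 4, 4, 2, 5],
    "Error messages were clear":    [2, 3, 2, 1, 3, 2, 2],
}

for question, scores in responses.items():
    print(f"{question}: mean={mean(scores):.1f}, "
          f"median={median(scores)}, n={len(scores)}")
```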

Choosing an Evaluation Method
 when in process: design vs. implementation
 style of evaluation: laboratory vs. field
 how objective: subjective vs. objective
 type of measures: qualitative vs. quantitative
 level of information: high level vs. low level
 level of interference: obtrusive vs. unobtrusive
 resources available: time, subjects, equipment, expertise
