
UNIT 3 - EVALUATION OF INTERACTION

USABILITY TESTING
Usability testing is an evaluation method in which one or more representative users at a
time perform tasks or describe their intentions under observation. To get information
about the users’ cognitive processes, the test participants are usually asked to think
aloud while performing the test tasks.

Through usability testing, you can find design flaws you might otherwise overlook. When
you watch how test users behave while they try to execute tasks, you’ll get vital insights into
how well your design/product works. Then, you can leverage these insights to make
improvements. Whenever you run a usability test, your chief objectives are to:

1) Determine whether testers can complete tasks successfully and independently.


2) Assess their performance and mental state as they try to complete tasks, to see how well
your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity.

5) Find solutions.

While usability tests can help you create the right products, they shouldn't be the only tool
in your UX research toolbox. If you focus only on the evaluation activity, you won't improve
overall usability.

There are different methods for usability testing. Which one you choose depends on your
product and where you are in your design process.

Usability Testing is an Iterative Process

To make usability testing work best, you should:

1) Plan –

a. Define what you want to test. Ask yourself questions about your design/product. What
aspect/s of it do you want to test? You can make a hypothesis from each answer. With a
clear hypothesis, you’ll have the exact aspect you want to test.

b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g.,
navigation) and stick to it throughout the test. When you test aspects individually, you’ll
eventually build a broader view of how well your design works overall.
2) Set user tasks –

a. Prioritize the most important tasks to meet objectives (e.g., complete checkout), no
more than 5 per participant. Allow a 60-minute timeframe.

b. Clearly define tasks with realistic goals.

c. Create scenarios where users can try to use the design naturally. That means you let
them get to grips with it on their own rather than direct them with instructions.

3) Recruit testers – Know who your users are as a target group. Use screening
questionnaires (e.g., Google Forms) to find suitable candidates. You can advertise and offer
incentives. You can also find contacts through community groups, etc. If you test with only
5 users, you can still reveal 85% of core issues.
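The "5 users reveal ~85% of core issues" figure comes from Nielsen's problem-discovery curve, which a few lines of Python can sketch. The per-user detection rate of 0.31 is Nielsen's published average; your product's rate may differ.

```python
# Nielsen's problem-discovery formula: if a single test user exposes a
# given problem with probability p (~0.31 averaged across Nielsen's
# studies), then n users together reveal 1 - (1 - p)^n of the problems.
def problems_found(n_users: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n} users: {problems_found(n):.0%}")
```

With p = 0.31, five users land at roughly 84–85%, which is where the rule of thumb comes from; diminishing returns set in quickly after that.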

4) Facilitate/Moderate testing – Set up testing in a suitable environment. Observe and
interview users. Notice issues. See if users fail to see things, go in the wrong direction or
misinterpret rules. When you record usability sessions, you can more easily count the
number of times users become confused. Ask users to think aloud and tell you how they
feel as they go through the test. From this, you can check whether your designer’s mental
model is accurate: Does what you think users can do with your design match what
these test users show?

If you choose remote testing, you can moderate via Google Hangouts, etc., or use
unmoderated testing. Remote-testing software lets you run both moderated and
unmoderated sessions and offers tools such as heatmaps.

Keep usability tests smooth by following these guidelines.

1) Assess user behavior – Use these metrics:

Quantitative – time users take on a task, success and failure rates, effort (how many clicks
users take, instances of confusion, etc.)
Qualitative – users’ stress responses (facial reactions, body-language changes, squinting,
etc.), subjective satisfaction (which they give through a post-test questionnaire)
and perceived level of effort/difficulty
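The quantitative metrics above can be computed directly from per-session records. A minimal sketch; the record fields and numbers are illustrative, not a standard schema:

```python
# Hypothetical usability-session records for one task.
sessions = [
    {"user": "P1", "task": "checkout", "seconds": 95, "clicks": 14, "success": True},
    {"user": "P2", "task": "checkout", "seconds": 150, "clicks": 22, "success": False},
    {"user": "P3", "task": "checkout", "seconds": 80, "clicks": 11, "success": True},
]

def success_rate(records):
    # Fraction of sessions where the user completed the task.
    return sum(r["success"] for r in records) / len(records)

def mean_time_on_task(records):
    # Average only over successful attempts, a common convention.
    done = [r["seconds"] for r in records if r["success"]]
    return sum(done) / len(done)

print(f"success rate: {success_rate(sessions):.0%}")
print(f"mean time on task (successes): {mean_time_on_task(sessions):.1f}s")
```

Averaging time only over successes is one convention; some teams report failed attempts separately so slow failures don't hide in the mean.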

2) Create a test report – Review video footage and the analyzed data. Clearly define design
issues and best practices. Involve the entire team.

Overall, you should test not your design’s functionality, but users’ experience of it. Some
users may be too polite to be entirely honest about problems. So, always examine all data
carefully.

Heuristic Evaluation And Walkthrough

▪ A method developed by Jakob Nielsen (1994)
o Structured design critique
o Uses a set of simple and general heuristics
o Executed by a small group of experts (3–5)
o Suitable for any stage of the design (sketches, UI, …)
o Goal: find usability problems in a design
▪ Also popularized as “Discount Usability”

▪ Define a set of heuristics (or principles)
▪ Give those heuristics to a group of experts
o Each expert will use heuristics to look for problems in the design
▪ Experts work independently
o Each expert will find different problems
▪ At the end, experts communicate and share their findings
o Findings are analyzed, aggregated and ranked
▪ The discovered violations of the heuristics are used to fix problems or to re-design
Heuristic evaluation is a process where experts use rules of thumb to measure
the usability of user interfaces in independent walkthroughs and report issues. Evaluators
use established heuristics (e.g., Nielsen-Molich’s) and reveal insights that can help design
teams enhance product usability from early in development.
A heuristic is a fast and practical way to solve problems or make decisions. In user
experience (UX) design, professional evaluators use heuristic evaluation to systematically
determine a design’s/product’s usability. As experts, they go through a checklist of criteria
to find flaws which design teams overlooked. The Nielsen-Molich heuristics state that a
system should:
1. Keep users informed about its status appropriately and promptly.
2. Show information in ways users understand from how the real world operates, and
in the users’ language.
3. Offer users control and let them undo errors easily.
4. Be consistent so users aren’t confused over what different words, icons, etc. mean.
5. Prevent errors – a system should either avoid conditions where errors arise or warn
users before they take risky actions (e.g., “Are you sure you want to do this?”
messages).
6. Have visible information, instructions, etc. to let users recognize options, actions,
etc. instead of forcing them to rely on memory.
7. Be flexible so experienced users find faster ways to attain goals.
8. Have no clutter, containing only relevant information for current tasks.
9. Provide plain-language help regarding errors and solutions.
10. List concise steps in lean, searchable documentation for overcoming problems

To conduct a heuristic evaluation, you can follow these steps:

1. Know what to test and how – Whether it’s the entire product or one procedure,
clearly define the parameters of what to test and the objective.
2. Know your users and have clear definitions of the target audience’s goals,
contexts, etc. User personas can help evaluators see things from the users’
perspectives.
3. Select 3–5 evaluators, ensuring their expertise in usability and the relevant industry.
4. Define the heuristics (around 5–10) – This will depend on the nature of the
system/product/design. Consider adopting/adapting the Nielsen-Molich heuristics
and/or using/defining others.
5. Brief evaluators on what to cover in a selection of tasks, suggesting a scale of
severity codes (e.g., critical) to flag issues.
6. 1st Walkthrough – Have evaluators use the product freely so they
can identify elements to analyze.
7. 2nd Walkthrough – Evaluators scrutinize individual elements according to the
heuristics. They also examine how these fit into the overall design,
clearly recording all issues encountered.
8. Debrief evaluators in a session so they can collate results for analysis and suggest
fixes.
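The debrief step mostly consists of merging each evaluator's findings, removing duplicate reports, and ranking by severity. A minimal sketch, assuming each finding is a (heuristic, description, severity) tuple; the findings themselves are made up, and the 0–4 severity scale follows Nielsen's convention:

```python
# Hypothetical findings from two evaluators; severity uses a 0-4 scale
# (0 = not a problem, 4 = usability catastrophe).
findings = [
    ("Visibility of system status", "no upload progress bar", 3),
    ("Error prevention", "delete has no confirmation step", 4),
    ("Visibility of system status", "no upload progress bar", 3),  # duplicate report
    ("Consistency and standards", "two labels for the same action", 2),
]

# Deduplicate identical reports, then rank by severity (highest first).
ranked = sorted(set(findings), key=lambda f: -f[2])
for heuristic, description, severity in ranked:
    print(f"[{severity}] {heuristic}: {description}")
```

In practice duplicates are rarely textually identical, so the merge is usually done by discussion in the debrief session; the point of the sketch is the aggregate-and-rank shape of the output.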
ANALYTICS PREDICTIVE MODELS
Predictive analytics models are designed to assess historical data, discover patterns,
observe trends and use that information to draw up predictions about future trends. While
the economic value of predictive analytics is often discussed, little attention is given to how
the models are developed. A variety of predictive data models have been developed to meet
specific requirements and applications.

Forecast models
A forecast model is one of the most common predictive analytics models. It handles metric value
prediction by estimating the values of new data based on learnings from historical data, and it is
often used to fill in numerical values that are missing from historical data. One of the greatest
strengths of forecast models is their ability to take in multiple input parameters. For this reason,
they are among the most widely used predictive analytics models, applied across different
industries and business purposes. For example, a call centre can predict how many support calls
it will get in a day, or a shoe store can calculate the inventory it needs for the upcoming sales
period. Forecast models are popular because they are incredibly versatile.
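A minimal forecast-model sketch in the spirit of the call-centre example: fit a least-squares trend line to daily call counts and extrapolate one day ahead. Pure Python, and the data are made up; real forecast models would use far richer inputs.

```python
# Illustrative daily support-call counts.
calls = [120, 132, 125, 140, 138, 151]

def linear_forecast(series, steps_ahead=1):
    # Ordinary least-squares fit of y = intercept + slope * x,
    # where x is the day index, then extrapolation beyond the data.
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

print(f"forecast for tomorrow: {linear_forecast(calls):.0f} calls")
```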
Classification models
Among the most common predictive analytics models is the classification model. These models
work by categorising information based on historical data. Classification models are used across
industries because they can be easily retrained with new data and can provide a broad analysis
for answering questions. They are applied in industries like finance and retail, which explains
why they are so common compared to other models.
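A toy classification sketch: a 1-nearest-neighbour rule that assigns a new transaction the label of the most similar historical one. The amounts and "low"/"high" risk labels are illustrative; production classifiers would use many features and a trained model.

```python
# Labelled history: (transaction amount, risk label).
history = [(20, "low"), (35, "low"), (900, "high"), (1200, "high")]

def classify(amount):
    # Find the historical transaction closest in amount and reuse its label.
    nearest = min(history, key=lambda pair: abs(pair[0] - amount))
    return nearest[1]

print(classify(50))   # nearest example is 35 -> "low"
print(classify(700))  # nearest example is 900 -> "high"
```

Retraining here is trivial, which mirrors the point in the text: appending new labelled rows to `history` immediately changes future classifications.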
Outlier models
While classification and forecast models work with historical data, the outlier model works with
anomalous data entries within a dataset. As the name implies, anomalous data is data that
deviates from the norm. The model works by identifying unusual data, either in isolation or in
relation to different categories and numbers. Outlier models are useful in industries where
identifying anomalies can save organisations millions of dollars, namely retail and finance. One
reason predictive analytics models are so effective at detecting fraud is that outlier models can
find anomalies: since an incidence of fraud is a deviation from the norm, an outlier model is
more likely to predict it before it occurs. For example, when identifying a fraudulent transaction,
the outlier model can assess the amount of money involved, the location, purchase history, time
and the nature of the purchase. Outlier models are highly valued because of their ability to
work with anomalous data.
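A minimal outlier-model sketch: flag any transaction amount more than two standard deviations from the mean. The amounts are illustrative, and real fraud systems combine many signals (location, history, timing) rather than a single z-score.

```python
import statistics

# Illustrative transaction amounts with one obvious anomaly.
amounts = [42, 39, 45, 41, 40, 44, 43, 950]

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Flag values whose z-score exceeds 2.
outliers = [a for a in amounts if abs(a - mean) / stdev > 2]
print(outliers)
```

Note that a large outlier inflates the standard deviation itself, which is why robust variants (median and MAD instead of mean and stdev) are common in practice.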
Time series model
While classification and forecast models focus on historical data, outliers focus on anomaly data.
The time series model focuses on data where time is the input parameter. The time series model
works by using different data points (taken from the previous year’s data) to develop a numerical
metric that will predict trends within a specified period.
If an organisation wants to see how a particular variable changes over time, it needs a time
series predictive analytics model. For example, if a small business owner wants to measure sales
for the past four quarters, a time series model is needed. A time series model is superior to
conventional methods of calculating the progress of a variable because it can forecast for
multiple regions or projects simultaneously, or focus on a single region or project, depending on
the organisation's needs. Furthermore, it can take into account extraneous factors that could
affect the variables, such as seasonality.
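The simplest time series model is a trailing moving average: predict the next period as the mean of the last few observations. A sketch with made-up quarterly sales; real time series models (ARIMA, exponential smoothing) also model trend and seasonality explicitly.

```python
# Illustrative sales for the past four quarters.
quarterly_sales = [210, 225, 240, 255]

def moving_average_forecast(series, window=3):
    # Predict the next value as the mean of the last `window` observations.
    recent = series[-window:]
    return sum(recent) / len(recent)

print(moving_average_forecast(quarterly_sales))  # (225 + 240 + 255) / 3 = 240.0
```

A moving average deliberately lags a trend (here it predicts 240 even though sales are rising), which is exactly the weakness trend-aware time series models address.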
Clustering Model
The clustering model takes data and sorts it into different groups based on common attributes.
The ability to divide data into different datasets based on specific attributes is particularly useful
in certain applications, like marketing. For example, marketers can divide a potential customer
base based on common attributes. Clustering comes in two forms – hard and soft. Hard
clustering assigns each data point wholly to one cluster or another, while soft clustering assigns
each data point a probability of belonging to a cluster.

How do predictive analytics models work?


Predictive analytics models have their strengths and weaknesses and are best suited to specific
uses. One of the biggest benefits applicable to all models is that they are reusable and can be
adjusted to reflect common business rules. A model can be reused and trained using algorithms.
But how do these predictive analytics models actually work?

The analytical models run one or more algorithms on the data set on which the prediction is
going to be carried out. It is a repetitive process because it involves training the model.
Sometimes, multiple models are used on the same data set before one that suits business
objectives is found. It is important to note that predictive analytics models work through an
iterative process. It starts with pre-processing, then data is mined to understand business
objectives, followed by data preparation. Once preparation is complete, data is modelled,
evaluated and finally deployed. Once the process is completed, it is iterated on again.
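The try-several-models-and-keep-the-best loop described above can be sketched concretely. Everything here is illustrative: two trivial candidate "models" and a held-out validation set, with the winner chosen by lowest validation error.

```python
# Two toy candidate models; each factory returns a prediction function.
def mean_model(train):
    avg = sum(train) / len(train)          # always predicts the training mean
    return lambda _x: avg

def last_value_model(train):
    last = train[-1]                       # always predicts the last observation
    return lambda _x: last

train, validation = [10, 12, 14, 16], [18, 20]

def validation_error(model):
    # Sum of absolute errors on the held-out data.
    return sum(abs(model(i) - y) for i, y in enumerate(validation))

candidates = {"mean": mean_model(train), "last": last_value_model(train)}
best = min(candidates, key=lambda name: validation_error(candidates[name]))
print(f"selected model: {best}")
```

On this rising series the last-value model wins, illustrating why multiple models are often tried on the same dataset before one suits the business objective.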

Data algorithms play a huge role in this analysis because they are used in data mining and
statistical analysis to help determine trends and patterns in data. There are several types of
algorithms built into the analytics model incorporated to perform specific functions. Examples of
these algorithms include time-series algorithms, association algorithms, regression algorithms,
clustering algorithms, decision trees, outlier detection algorithms and neural network
algorithms. Each algorithm performs a specific function. For example, outlier detection
algorithms detect the anomalies in a dataset, while regression algorithms predict continuous
variables based on other variables present in the dataset.

Limitations of predictive analytics models

Despite their immense economic benefits, predictive analytics models are not fool-proof or fail-
safe. There are some disadvantages to predictive analytics. Predictive models need a specific set
of conditions to work; if these conditions are not met, they are of little value to the
organisation.

The need for massive training datasets


For predictive analytics models to be successful at predicting outcomes, there needs to be a
huge sample size representative of the population. Ideally, the sample size should be in the high
thousands to a few million. If datasets are smaller, the predictive analytics models will be
unduly influenced by anomalies in the data, which will distort findings. The need for massive
datasets inevitably locks out many small to medium-sized organisations that may not have this
much data to work with.

Properly categorising data


Predictive analytics models rely on machine learning algorithms, and these algorithms can only
assess data properly if it is labelled correctly. Data labelling is a particularly demanding and
meticulous process because it needs to be accurate. Incorrect classification and labelling cause
several problems, such as poor performance and inaccurate findings.

Applying learnings to different cases


Data models have a problem with generalisability, which is the ability to transfer findings from
one case to another. While predictive models are effective in their findings for one case, they
often struggle to transfer their findings to a different situation. Hence, there are some
applicability issues when it comes to the findings derived from a predictive analytics model.
However, there is a solution in certain methods, like transfer learning that could help mitigate
some of these shortcomings.

Predictive models in the future


The future will see predictive analytics models play an integral role in business processes
because of the immense economic value they generate. While not perfect, the value they offer
organisations, both public and private, is immense. With predictive analytics, organisations have
the opportunity to act proactively across a variety of functions. Fraud prevention in banks,
disaster prevention for governments and more effective marketing campaigns are just some of
the possibilities predictive analytics models make tangible, which is why they will be an
invaluable asset for the future.
Predictive data analysis can help your organisation identify trends and patterns that will allow
you to improve your organisation's performance.

COGNITIVE MODELS
Cognitive modeling is a computational approach, grounded in psychological principles, that
demonstrates how people go about problem-solving and performing tasks.

Cognitive modeling can be outlined simply on paper or developed in a more complicated
system such as a computer program. However, the purpose remains the same: to predict
users' behavior with regard to the tasks. The behaviors of particular concern include the
amount of time it takes to complete certain tasks, the menu items and buttons the users
may click, and the corresponding errors that are bound to occur.
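One widely used cognitive model of exactly this kind is the Keystroke-Level Model (Card, Moran and Newell), which predicts expert task completion time by summing standard operator times. A sketch; the operator values are the commonly cited KLM estimates, and the login sequence is illustrative.

```python
# Keystroke-Level Model operator times in seconds (commonly cited values).
OPERATORS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def klm_estimate(sequence):
    # Predicted task time is simply the sum of the operator times.
    return sum(OPERATORS[op] for op in sequence)

# Illustrative task: think, point at a field, click, type 8 characters,
# then think, point at the submit button, click.
login = ["M", "P", "B"] + ["K"] * 8 + ["M", "P", "B"]
print(f"predicted time: {klm_estimate(login):.2f}s")
```

Comparing such estimates across two candidate designs is the model's typical use: the design with the shorter predicted operator sequence wins before any user testing happens.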
SOCIO-ORGANIZATIONAL ISSUES AND STAKEHOLDER REQUIREMENTS
COMMUNICATION AND COLLABORATION MODELS
1. Face-to-Face Communication
a. It involves speech, hearing, body language and eye-gaze.

b. A person has to be familiar with existing norms in order to learn a new norm.

c. Another factor is personal space; this varies based on the context, environment, diversity and
culture.
d. The above factor comes into the picture when there is a video conference between two
individuals from different backgrounds.

e. The factor of eye gaze is important during a video conference, as cameras are usually mounted
away from the monitor, yet eye contact matters during a conversation.

f. Back channels give participants clues or additional information about how the conversation is being received.

g. Interruptions such as 'um's and 'ah's are very important, as participants in a conversation can
use them to claim a turn.

2. Conversation
a. Transcripts can serve as a heavily annotated record of conversation structure, but they still
lack the back-channel information.

b. Another structure is turn-taking, which can be interpreted as adjacency pairs, e.g. A-x, B-x,
A-y, B-y.

c. Context varies according to the conversation.

d. The focus of the context can also vary, which means it is difficult to keep track of context
using adjacency pairs.

e. Breakdowns during conversations are common and can be noticed by analyzing the transcripts.
f. Reaching a common ground or grounding is very essential to understand the shared context.

g. Speech act theory is based on statements and their propositional meaning.

h. A state diagram can be constructed by treating these acts as illocutionary points in
the diagram. This is called Conversation for Action.
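A Conversation for Action diagram can itself be sketched as a small state machine. The states, act names, and transitions below are simplified and illustrative (loosely after Winograd and Flores), not the full published diagram.

```python
# Simplified conversation-for-action transitions:
# (current state, illocutionary act) -> next state.
TRANSITIONS = {
    ("request", "promise"): "promised",
    ("request", "reject"): "declined",
    ("promised", "assert-done"): "asserted",
    ("asserted", "declare-satisfied"): "completed",
}

def run(acts, state="request"):
    # Replay a sequence of acts from the initial request state.
    for act in acts:
        state = TRANSITIONS[(state, act)]
    return state

print(run(["promise", "assert-done", "declare-satisfied"]))
```

Encoding the diagram this way makes the structural claim concrete: only certain acts are legal in each state, so an out-of-order act (a `KeyError` here) corresponds to a conversational breakdown.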

3. Text-Based Communication
a. 4 types of communication

i. discrete e.g. email

ii. linear e.g. single transcript

iii. non-linear e.g. linked through hypertext fashion

iv. spatial e.g. messages arranged in 2D surface

b. The difference between this and face-to-face communication is the lack of back channels and
states.

c. Turn-taking is the fundamental structure used here.

4. Group working
a. The roles of and relationships between individuals in the group differ and may change during
the conversation.

b. Physical layout is important to consider here to maintain the factors of face-to-face
communication.
5. Summary
a. Face-to-face communication is complex. Maintaining personal space is disrupted by using
video links, but we can still use back channels.

b. Context is usually the most important during the conversation.

c. Text-based conversation can be enhanced by multiplexing messages.
