Unit 3
USABILITY TESTING
Usability testing is an evaluation method in which one or more representative users at a
time perform tasks or describe their intentions under observation. To get information
about the users’ cognitive processes, the test participants are usually asked to think
aloud while performing the test tasks.
Through usability testing, you can find design flaws you might otherwise overlook. When
you watch how test users behave while they try to execute tasks, you’ll get vital insights into
how well your design/product works. Then, you can leverage these insights to make
improvements. Whenever you run a usability test, your chief objectives are to:
1) Identify problems and their severity.
2) Find solutions.
While usability tests can help you create the right products, they shouldn’t be the only tool
in your UX research toolbox. If you just focus on the evaluation activity, you won’t improve
the usability overall.
There are different methods for usability testing. Which one you choose depends on your
product and where you are in your design process.
1) Plan –
a. Define what you want to test. Ask yourself questions about your design/product. What
aspect/s of it do you want to test? You can make a hypothesis from each answer. With a
clear hypothesis, you’ll have the exact aspect you want to test.
b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g.,
navigation) and stick to it throughout the test. When you test aspects individually, you’ll
eventually build a broader view of how well your design works overall.
2) Set user tasks –
a. Create scenarios in which users can try to use the design naturally. That means letting
them get to grips with it on their own rather than directing them with instructions.
3) Recruit testers – Know who your users are as a target group. Use screening
questionnaires (e.g., Google Forms) to find suitable candidates. You can advertise and offer
incentives. You can also find contacts through community groups, etc. If you test with only
5 users, you can still reveal around 85% of core issues.
If you choose remote testing, you can moderate sessions via Google Hangouts, etc., or run
unmoderated tests. Dedicated remote-testing software supports both moderated and
unmoderated testing and offers the benefit of tools such as heatmaps.
4) Collect data –
Quantitative – time users take on a task, success and failure rates, effort (how many clicks
users make, instances of confusion, etc.); a small analysis sketch follows this section.
Qualitative – users’ stress responses (facial reactions, body-language changes, squinting,
etc.), subjective satisfaction (usually gathered through a post-test questionnaire)
and perceived level of effort/difficulty.
5) Create a test report – Review video footage and the analysed data. Clearly define design
issues and best practices. Involve the entire team.
Overall, you should test not your design’s functionality, but users’ experience of it. Some
users may be too polite to be entirely honest about problems. So, always examine all data
carefully.
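The following is a minimal sketch, not a standard tool, of how the quantitative measures above
(success rate, average time on task, average clicks) could be summarised from session records;
all participant data and field names are hypothetical.

# A minimal sketch for summarising quantitative usability-test data.
# Each record is one participant's attempt at a task; the data is hypothetical.
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 74,  "clicks": 12},
    {"participant": "P2", "completed": False, "seconds": 180, "clicks": 31},
    {"participant": "P3", "completed": True,  "seconds": 65,  "clicks": 10},
    {"participant": "P4", "completed": True,  "seconds": 92,  "clicks": 15},
    {"participant": "P5", "completed": False, "seconds": 210, "clicks": 27},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = sum(s["seconds"] for s in sessions) / len(sessions)
avg_clicks = sum(s["clicks"] for s in sessions) / len(sessions)

print(f"Success rate: {success_rate:.0%}")
print(f"Average time on task: {avg_time:.0f}s")
print(f"Average clicks: {avg_clicks:.1f}")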
HEURISTIC EVALUATION
1. Know what to test and how – Whether it’s the entire product or one procedure,
clearly define the parameters of what to test and the objective.
2. Know your users and have clear definitions of the target audience’s goals,
contexts, etc. User personas can help evaluators see things from the users’
perspectives.
3. Select 3–5 evaluators, ensuring their expertise in usability and the relevant industry.
4. Define the heuristics (around 5–10) – This will depend on the nature of the
system/product/design. Consider adopting/adapting the Nielsen-Molich heuristics
and/or using/defining others.
5. Brief evaluators on what to cover in a selection of tasks, suggesting a scale of
severity codes (e.g., critical) to flag issues.
6. 1st Walkthrough – Have evaluators use the product freely so they
can identify elements to analyze.
7. 2nd Walkthrough – Evaluators scrutinize individual elements according to the
heuristics. They also examine how these fit into the overall design,
clearly recording all issues encountered.
8. Debrief evaluators in a session so they can collate results for analysis and suggest
fixes.
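To make the debrief step concrete, here is a small hypothetical sketch of how issues logged by
several evaluators could be collated by heuristic and severity; the severity scale, heuristic names
and issue texts are assumptions for illustration only.

# A minimal sketch for collating heuristic-evaluation findings.
# Severity scale (assumed): 1 = cosmetic ... 4 = critical.
from collections import defaultdict

findings = [
    {"evaluator": "E1", "heuristic": "Visibility of system status", "severity": 3,
     "issue": "No progress indicator during upload"},
    {"evaluator": "E2", "heuristic": "Visibility of system status", "severity": 4,
     "issue": "No feedback after pressing Save"},
    {"evaluator": "E3", "heuristic": "Error prevention", "severity": 2,
     "issue": "Delete has no confirmation step"},
]

by_heuristic = defaultdict(list)
for f in findings:
    by_heuristic[f["heuristic"]].append(f)

# Report the worst severity per heuristic so the team can prioritise fixes
for heuristic, items in by_heuristic.items():
    worst = max(item["severity"] for item in items)
    print(f"{heuristic}: {len(items)} issue(s), worst severity {worst}")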
PREDICTIVE ANALYTICS MODELS
Predictive analytics models are designed to assess historical data, discover patterns,
observe trends and use that information to make predictions about future trends. While
the economic value of predictive analytics is often discussed, little attention is given to how
these models are developed.
A variety of predictive models have been developed to meet specific requirements and
applications.
Forecast models
A forecast model is one of the most common predictive analytics models. It handles metric value
prediction by estimating values for new data based on what it has learned from historical data. It
is often used to fill in numerical values that are missing from historical data. One of the greatest
strengths of forecast models is their ability to take multiple input parameters. For this reason,
they are among the most widely used predictive analytics models, applied across different
industries and business purposes. For example, a call centre can predict how many support calls
it will get in a day, or a shoe store can calculate the inventory it needs for the upcoming sales
period using forecast analytics. Forecast models are popular because they are incredibly
versatile.
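As a rough illustration of the idea, and not a production forecasting method, the sketch below
fits a simple linear-regression forecast to a hypothetical series of daily support-call counts; the
figures are made up.

# A minimal forecast-model sketch: predict tomorrow's support-call volume
# from a short history of daily counts. Data here is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

calls = np.array([120, 132, 129, 141, 150, 148, 160], dtype=float)  # historical daily counts
days = np.arange(len(calls)).reshape(-1, 1)   # day index as the only input parameter

model = LinearRegression()
model.fit(days, calls)                        # learn the trend from history

next_day = np.array([[len(calls)]])           # the day we want to forecast
forecast = model.predict(next_day)[0]
print(f"Forecast for day {len(calls)}: {forecast:.0f} calls")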
Classification models
Classification models are among the most common predictive analytics models. They work by
categorising information based on historical data. Classification models are used across
industries because they can easily be retrained with new data and can provide a broad analysis
for answering questions. They are applied in fields such as finance and retail, which explains why
they are so common compared with other models.
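A minimal sketch of a classification model, assuming a made-up churn-prediction scenario with
invented features and labels:

# A minimal classification-model sketch: label customers as likely to churn
# or not from two hypothetical features. All data and feature names are made up.
from sklearn.tree import DecisionTreeClassifier

# Features: [monthly_spend, support_tickets]; labels: 1 = churned, 0 = stayed
X = [[20, 5], [85, 0], [15, 7], [90, 1], [30, 4], [75, 0]]
y = [1, 0, 1, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)                                  # learn categories from historical data

new_customer = [[25, 6]]
print("Predicted class:", clf.predict(new_customer)[0])  # e.g. 1 = likely to churn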
Outlier models
While classification and forecast models work with historical data, the outlier model works with
anomalous data entries within a dataset. As the name implies, anomalous data refers to data
that deviates from the norm. It works by identifying unusual data, either in isolation or in relation
to different categories and numbers. Outlier models are useful in industries where identifying
anomalies can save organisations millions of dollars, namely retail and finance. One reason why
predictive analytics models are so effective at detecting fraud is that outlier models can be used
to find anomalies. Since an incident of fraud is a deviation from the norm, an outlier model is
more likely to predict it before it occurs. For example, when identifying a fraudulent transaction,
the outlier model can assess the amount of money lost, the location, purchase history, time and
the nature of the purchase. Outlier models are highly valued because of their close connection
to anomalous data.
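A minimal sketch of an outlier model, using scikit-learn's IsolationForest on hypothetical
transaction amounts to flag possible anomalies:

# A minimal outlier-model sketch: flag unusually large transactions.
# Amounts are hypothetical example data.
from sklearn.ensemble import IsolationForest

amounts = [[25], [40], [31], [29], [35], [27], [3200], [33], [38]]  # one feature per transaction

detector = IsolationForest(contamination=0.1, random_state=0)
labels = detector.fit_predict(amounts)          # -1 = anomaly, 1 = normal

for amount, label in zip(amounts, labels):
    if label == -1:
        print("Possible anomaly:", amount[0])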
Time series model
While classification and forecast models focus on historical data, outlier models focus on
anomalous data. The time series model focuses on data where time is the input parameter. It
works by using data points (taken, for example, from the previous year’s data) to develop a
numerical metric that predicts trends within a specified period.
If organisations want to see how a particular variable changes over time, then they need a Time
Series predictive analytics model. For example, if a small business owner wants to measure sales
for the past four quarters, then a Time Series model is needed. A Time Series model is superior
to conventional methods of calculating the progress of a variable because it can forecast for
multiple regions or projects simultaneously or focus on a single region or project, depending on
the organisation’s needs. Furthermore, it can take into account extraneous factors that could
affect the variables, like seasons.
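A minimal time-series sketch, assuming eight quarters of made-up sales figures, that forecasts
the next quarter from the same quarter last year adjusted by the average year-over-year growth:

# A minimal time-series sketch: forecast next quarter's sales as last year's
# same-quarter value scaled by the recent year-over-year trend.
# Sales figures are hypothetical example data.
import pandas as pd

sales = pd.Series(
    [100, 120, 90, 150, 110, 130, 95, 165],    # eight quarters of sales
    index=pd.period_range("2022Q1", periods=8, freq="Q"),
)

yoy_growth = (sales.iloc[4:].values / sales.iloc[:4].values).mean()  # seasonal growth factor
next_quarter = sales.iloc[-4] * yoy_growth      # same quarter last year, scaled by the trend
print(f"Forecast for 2024Q1: {next_quarter:.0f}")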
Clustering model
The clustering model takes data and sorts it into different groups based on common attributes.
The ability to divide data into different datasets based on specific attributes is particularly useful
in certain applications, such as marketing. For example, marketers can segment a potential
customer base by common attributes. Clustering comes in two forms – hard and soft clustering.
Hard clustering categorises each data point as either belonging to a cluster or not, while soft
clustering assigns each data point a probability of belonging to a cluster.
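A minimal sketch contrasting hard and soft clustering on made-up customer data, using k-means
for hard assignments and a Gaussian mixture for soft (probabilistic) assignments:

# A minimal clustering sketch on hypothetical customers ([age, monthly_spend]).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X = np.array([[22, 30], [25, 35], [23, 28], [45, 90], [50, 85], [48, 95]])

hard = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Hard labels:", hard.labels_)             # each point belongs to exactly one cluster

soft = GaussianMixture(n_components=2, random_state=0).fit(X)
print("Soft probabilities:\n", soft.predict_proba(X).round(2))  # probability per cluster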
The analytical models run one or more algorithms on the data set on which the prediction is
going to be carried out. Modelling is an iterative process because it involves training the model;
sometimes, multiple models are tried on the same data set before one that suits the business
objectives is found. The process starts with pre-processing; the data is then mined to understand
the business objectives, followed by data preparation. Once preparation is complete, the data is
modelled, evaluated and finally deployed. Once the process is completed, it is iterated on again.
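A minimal sketch of the iterative model-selection step, trying two candidate models on a
synthetic dataset and keeping the one with the best cross-validated score; the candidates and
data are illustrative only:

# A minimal sketch of iterative model selection: train several candidates on
# the same dataset and keep the best cross-validated one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("Selected model:", best)    # the winner would then be evaluated and deployed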
Data algorithms play a huge role in this analysis because they are used in data mining and
statistical analysis to help determine trends and patterns in data. Several types of algorithm are
built into the analytics model to perform specific functions. Examples of
these algorithms include time-series algorithms, association algorithms, regression algorithms,
clustering algorithms, decision trees, outlier detection algorithms and neural network
algorithms. Each algorithm performs a specific function. For example, outlier detection
algorithms detect the anomalies in a dataset, while regression algorithms predict continuous
variables based on other variables present in the dataset.
Despite the immense economic benefits of predictive analytics models, they are not fool-proof or
fail-safe. There are some disadvantages to predictive analytics. Predictive models need a specific
set of conditions to work; if these conditions are not met, they are of little value to the
organisation.
COGNITIVE MODELS
Cognitive modeling is a computational modeling approach grounded in psychological notions; it
demonstrates how people go about problem-solving and performing tasks.
1. Face-to-Face Communication
c. Another factor is personal space; this varies with context, environment, diversity and
culture.
d. The above factor comes into the picture when there is a video conference between two
individuals from different backgrounds.
e. Eye gaze is important during a video conference, as cameras are usually mounted away
from the monitor and it is important to have eye contact during a conversation.
f. Back channels are the listener's responses (nods, 'uh-huh', etc.) that give the speaker clues
or more information about how the conversation is being received.
g. Interruptions such as 'um's and 'ah's are important, as participants in a conversation can
use them to claim the turn.
2. Conversation
a. Transcripts can serve as heavily annotated records of conversation structure, but they still lack
back-channel information.
b. Another structure is turn-taking; this can be interpreted as adjacency pairs, e.g. A-x, B-x, A-y,
B-y.
d. The focus of the context can also vary, which makes it difficult to keep track of context using
adjacency pairs.
e. Breakdowns during conversations are common and can be noticed by analysing the transcripts.
f. Reaching common ground, or grounding, is essential to understanding the shared context.
g. Speech act theory is based on statements and their propositional meaning.
h. A state diagram can be constructed with these acts as illocutionary points in the diagram. This
is called Conversation for Action.
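As a rough, simplified sketch (not the full Conversation for Action diagram), the transition table
below models only a successful request-promise-report-declare path; the state names and act
labels are invented for illustration:

# A simplified Conversation-for-Action style state machine. Only the
# "happy path" is modelled; the full diagram also covers counter-offers,
# rejections and withdrawals. Names here are illustrative only.
TRANSITIONS = {
    ("start",     "A:request"): "requested",
    ("requested", "B:promise"): "promised",
    ("promised",  "B:assert_done"): "reported",
    ("reported",  "A:declare_satisfied"): "completed",
}

def run_conversation(acts):
    """Replay a sequence of speech acts and return the resulting state."""
    state = "start"
    for act in acts:
        state = TRANSITIONS.get((state, act), state)  # ignore acts that do not apply
    return state

print(run_conversation(["A:request", "B:promise", "B:assert_done", "A:declare_satisfied"]))
# -> completed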
3. Text-Based Communication
a. There are four types of text-based communication: discrete (e.g., email), linear, non-linear
(e.g., hypertext) and spatial.
b. The difference between this and face-to-face communication is the lack of back channels and
cues about the other person's state.
4. Group working
a. The roles of, and relationships between, the individuals in a group are different and may
change during the conversation.