How To Avoid Machine Learning Pitfalls
Michael A. Lones
School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, Scotland, UK
Email: [email protected], Web: www.macs.hw.ac.uk/~ml355, Substack: Fetch Decode Execute
Abstract
Mistakes in machine learning practice are commonplace, and can result in a loss of
confidence in the findings and products of machine learning. This guide outlines
common mistakes that occur when using machine learning, and what can be done
to avoid them. Whilst it should be accessible to anyone with a basic understanding
of machine learning techniques, it focuses on issues that are of particular concern
within academic research, such as the need to do rigorous comparisons and reach
valid conclusions. It covers five stages of the machine learning process: what to
do before model building, how to reliably build models, how to robustly evaluate
models, how to compare models fairly, and how to report results.
1 Introduction
It’s easy to make mistakes when applying machine learning (ML), and these mistakes
can result in ML models that fail to work as expected when applied to data not seen
during training and testing [Liao et al., 2021]. This is a problem for practitioners, since
it leads to the failure of ML projects. However, it is also a problem for society, since it
erodes trust in the findings and products of ML [Gibney, 2022].
This guide aims to help newcomers avoid some of these mistakes. It’s written by an
academic, and focuses on lessons learnt whilst doing ML research in academia. Whilst
primarily aimed at students and scientific researchers, it should be accessible to anyone
getting started in ML, and only assumes a basic knowledge of ML techniques. However,
unlike similar guides aimed at a more general audience, it includes topics that are of
a particular concern to academia, such as the need to rigorously evaluate and compare
models in order to get work published.
To make it more readable, the guidance is written informally, in a Dos and Don’ts style.
It’s not intended to be exhaustive, and references (with publicly-accessible URLs where
available) are provided for further reading. Since it doesn’t cover issues specific to partic-
ular academic subjects, it’s recommended that readers also consult subject-specific guid-
ance where available, e.g. in clinical medicine [Stevens et al., 2020], genomics [Whalen
et al., 2022], environmental research [Zhu et al., 2023], materials science [Karande et al.,
2022], business and marketing [Van Giffen et al., 2022], computer security [Arp et al.,
2022] and social science [Malik, 2020].
The review is divided into five sections. Before you start to build models covers issues
that can occur early in the ML process, and focuses on the correct use of data and
adequate consideration of the context in which ML is being applied. How to reliably
build models then covers pitfalls that occur during the selection and training of models
and their components. How to robustly evaluate models presents pitfalls that can lead
to an incorrect understanding of model performance. How to compare models fairly
then extends this to the situation where models are being compared, discussing how
common pitfalls can lead to misleading findings. How to report your results focuses on
reproducibility and factors that can lead to incomplete or deceptive reporting.
Changes
ML pitfalls are not static, and continue to evolve as ML develops. To reflect this, the
guide has been updated annually since it was first released in 2021, and it will continue
to be updated in the future. Feedback is welcome. If you cite it, please include the arXiv
version number (currently v5). This version is published as “Avoiding machine learning
pitfalls” in Patterns (Cell Press) [Lones, 2024].
Changes from v4 Added Do use meaningful baselines, Do clean your data and Do
consider model fairness. Extended Do look at your models, Do make sure you have
enough data, Do think about how your model will be deployed, Don’t allow test data to
leak into the training process and Do evaluate a model multiple times.
Changes from v3 Added Do use a machine learning checklist and Do think about how
and where you will use data. Rewrote Do evaluate a model multiple times. Revised Do
keep up with progress in deep learning (and its pitfalls), Do be careful where and how
you do feature selection, Do avoid sequential overfitting, Do choose metrics carefully and
Do combine models (carefully). Extended Do use an appropriate test set.
Changes from v1 Added Don’t do data augmentation before splitting your data and
Don’t assume deep learning will be the best approach. Rewrote Don’t use inappropriate
models. Expanded Don’t allow test data to leak into the training process, Do be careful
when reporting statistical significance and Do be transparent.
Contents
1 Introduction
2 Before you start to build models
2.1 Do think about how and where you will use data
2.2 Do take the time to understand your data
2.3 Don’t look at all your data
2.4 Do clean your data
2.5 Do make sure you have enough data
2.6 Do talk to domain experts
2.7 Do survey the literature
2.8 Do think about how your model will be deployed
3 How to reliably build models
3.1 Don’t allow test data to leak into the training process
3.2 Do try out a range of different models
3.3 Don’t use inappropriate models
3.4 Do keep up with progress in deep learning (and its pitfalls)
3.5 Don’t assume deep learning will be the best approach
3.6 Do be careful where and how you do feature selection
3.7 Do optimise your model’s hyperparameters
3.8 Do avoid learning spurious correlations
4 How to robustly evaluate models
4.1 Do use an appropriate test set
4.2 Don’t do data augmentation before splitting your data
4.3 Do avoid sequential overfitting
4.4 Do evaluate a model multiple times
4.5 Do save some data to evaluate your final model instance
4.6 Do choose metrics carefully
4.7 Do consider model fairness
4.8 Don’t ignore temporal dependencies in time series data
5 How to compare models fairly
5.1 Don’t assume a bigger number means a better model
5.2 Do use meaningful baselines
5.3 Do use statistical tests when comparing models
5.4 Do correct for multiple comparisons
5.5 Don’t always believe results from community benchmarks
5.6 Do combine models (carefully)
6 How to report your results
6.1 Do be transparent
6.2 Do report performance in multiple ways
6.3 Don’t generalise beyond the data
6.4 Do be careful when reporting statistical significance
6.5 Do look at your models
6.6 Do use a machine learning checklist
7 Final thoughts
2 Before you start to build models
It’s normal to want to rush into training and evaluating models, but it’s important to
take the time to think about the goals of a project, to fully understand the data that
will be used to support these goals, to consider any limitations of the data that need to
be addressed, and to understand what’s already been done in your field. If you don’t do
these things, then you may end up with results that are hard to publish, or models that
are not appropriate for their intended purpose.
2.1 Do think about how and where you will use data
Data is central to most ML projects, but is often in short supply. Therefore it’s impor-
tant to think carefully about what data you need and how and where you will use it.
Abstractly, you need data for two things, training models and testing models. However,
for various reasons, this does not necessarily translate into using a single dataset divided
into two parts. To begin with, model development often involves a period of experimen-
tation: trying out different models with different hyperparameters, and preprocessing
the data in different ways. To avoid overfitting (see Do avoid sequential overfitting), this
process requires a separate validation set, i.e. an additional set of training data that’s not
used directly in training or testing models. If you have no prior idea of what modelling
approach you’re going to use, then this experimentation phase could potentially involve
a lot of comparisons. Due to the multiplicity effect (see Do correct for multiple compar-
isons), the more comparisons you do, the more likely you are to overfit the validation
data, and so the less useful the validation set will become in guiding your modelling
decisions. So, in practice you might want to set aside multiple validation sets for this.
Then there’s the question of how you adequately test your selected model. Because it
has the same biases as the training data, a test set taken from the same dataset as the
training data may not be sufficient to measure the model’s generality — see Do use an
appropriate test set and Do report performance in multiple ways for more on this —
meaning that, in practice, you may need more than one test dataset to robustly evaluate
your model. Also be aware that you will often need additional test data when using
cross-validation; see Do save some data to evaluate your final model instance.
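For instance, a minimal sketch of this kind of split using scikit-learn (the data here is synthetic and the proportions are just illustrative) might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generic stand-in data; replace with your own feature matrix X and labels y.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# First split off an independent test set and put it aside until the very end.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Then split the remainder into training and validation sets; the validation
# set guides model and hyperparameter choices during experimentation.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=42)
```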
2.2 Do take the time to understand your data
If you train your model using bad data, then you will most likely generate a bad model: a
process known as garbage in garbage out. One way to avoid bad data sets is to build
a direct relationship with people who generate data, since this increases the likelihood
of obtaining a good-quality dataset that meets your needs. It also avoids problems
of overfitting community benchmarks; see Don’t always believe results from community
benchmarks. Yet regardless of where your data comes from, always begin by making sure
that your data makes sense. Do some exploratory data analysis (see Cox [2017] for
suggestions). Look for missing or inconsistent records. It is much easier to do this now,
before you train a model, rather than later, when you’re trying to explain to reviewers
why you used bad data.
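As a rough illustration (assuming pandas and a hypothetical data.csv file), a few lines of exploratory checks can surface missing values, duplicates and implausible records before any modelling starts:

```python
import pandas as pd

df = pd.read_csv("data.csv")           # hypothetical dataset

df.info()                              # column types and non-null counts
print(df.describe(include="all"))      # summary statistics per column
print(df.isna().sum())                 # missing values per column
print(df.duplicated().sum())           # number of duplicated rows

# Simple sanity check for implausible values, e.g. a column that
# should never be negative (column name is illustrative).
if "age" in df.columns:
    print((df["age"] < 0).sum(), "rows with negative age")
```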
2.4 Do clean your data
Data cleaning can be a time-consuming process, and becomes more challenging as the complexity of data
increases. For this reason, many people have explored automating data cleaning using
ML approaches; see Côté et al. [2024] for a review.
2.7 Do survey the literature
You’re probably not the first person to throw ML at a particular problem domain, so
it’s important to understand what has and hasn’t been done previously. Other people
having worked on the same problem isn’t a bad thing; academic progress is typically an
iterative process, with each study providing information that can guide the next. It may
be discouraging to find that someone has already explored your great idea, but they most
likely left plenty of avenues of investigation still open, and their previous work can be
used as justification for your work. To ignore previous studies is to potentially miss out
on valuable information. For example, someone may have tried your proposed approach
before and found fundamental reasons why it won’t work (and therefore saved you a few
years of frustration), or they may have partially solved the problem in a way that you
can build on. So, it’s important to do a literature review before you start work; leaving
it too late may mean that you are left scrambling to explain why you are covering the
same ground or not building on existing knowledge when you come to write a paper.
Figure 1: See Don’t allow test data to leak into the training process. [left] How things
should be, with the training set used to train the model, and the test set used to measure
its generality. [right] When there’s a data leak, the test set can implicitly become part of
the training process, meaning that it no longer provides a reliable measure of generality.
3.1 Don’t allow test data to leak into the training process
It’s essential to have data that you can use to measure how well your model generalises.
A common problem is allowing information about this data to leak into the configuration,
training or selection of models (see Figure 1). When this happens, the data no longer
provides a reliable measure of generality, and this is a common reason why published
ML models often fail to generalise to real world data. There are a number of ways that
information can leak from a test set. Some of these seem quite innocuous. For instance,
during data preparation, using information about the means and ranges of variables
within the whole data set to carry out variable scaling or imputation — in order to
prevent information leakage, these statistics should be calculated using only the training
data. Other common examples of information leakage are carrying out feature selection
before partitioning the data (see Do be careful where and how you do feature selection),
using the same test data to evaluate the generality of multiple models (see Do avoid
sequential overfitting and Don’t always believe results from community benchmarks),
and applying data augmentation before splitting off the test data (see Don’t do data
augmentation before splitting your data). The best thing you can do to prevent these
issues is to partition off a subset of your data right at the start of your project, and only
use this independent test set once to measure the generality of a single model at the end
of the project (see Do save some data to evaluate your final model instance). There are
also forms of data leakage which are specific to certain types of data. Time series data is
particularly problematic, since the order of samples is significant, and random splits can
easily cause leakage and overfitting — see Don’t ignore temporal dependencies in time
series data for more on this. Even for non-time series data, the experimental conditions
used to generate data sets may lead to temporal dependencies, or other problematic
conditions such as duplicated or similar samples — see Do use an appropriate test set
for an example. In order to prevent leakage, these kinds of issues need to be identified
and taken into account when splitting data. For a broader discussion of data leakage,
see Kapoor and Narayanan [2023].
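To make the scaling example concrete, here's a minimal sketch in scikit-learn (synthetic data, illustrative settings) showing the scaling statistics being computed from the training data only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrong: statistics computed over the whole dataset leak test-set
# information into training.
# X_scaled = StandardScaler().fit_transform(X)

# Right: fit the scaler on the training data only, then apply the same
# transformation to the test data.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```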
3.2 Do try out a range of different models
Generally speaking, there’s no such thing as a single best ML model. In fact, there’s
a proof of this, in the form of the No Free Lunch theorem, which shows that no ML
approach is any better than any other when considered over every possible problem
[Wolpert, 2002]. So, your job is to find the ML model that works well for your particular
problem. There is some guidance on this. For example, you can consider the inductive
biases of ML models; that is, the kind of relationships they are capable of modelling. For
instance, linear models, such as linear regression and logistic regression, are a good choice
if you know there are no important non-linear relationships between the features in your
data, but a bad choice otherwise. Good quality research on closely related problems
may also be able to point you towards models that work particularly well. However,
a lot of the time you’re still left with quite a few choices, and the only way to work
out which model is best is to try them all. Fortunately, modern ML libraries, such as
scikit-learn [Varoquaux et al., 2015] in Python, tidymodels [Kuhn and Wickham, 2020]
in R, and MLJ [Blaom et al., 2020] in Julia, allow you to try out multiple models with
only small changes to your code, so there’s no reason not to try them all out and find
out for yourself which one works best. However, Don’t use inappropriate models, and
use a validation set, rather than the test set, to evaluate them (see Do avoid sequential
overfitting). When comparing models, Do optimise your model’s hyperparameters and
Do evaluate a model multiple times to make sure you’re giving them all a fair chance,
and Do correct for multiple comparisons when you publish your results.
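As a rough sketch of what this looks like in scikit-learn (the candidate models and settings are just examples, and scoring is done on a validation set rather than the test set):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=1),
    "SVM": SVC(),
    "k-nearest neighbours": KNeighborsClassifier(),
}

# Score each candidate on the validation set, not the test set.
for name, model in models.items():
    score = model.fit(X_train, y_train).score(X_val, y_val)
    print(f"{name}: validation accuracy = {score:.3f}")
```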
[Figure 2: timeline graphic; milestones shown include the McCulloch & Pitts neuron, the perceptron, Hebbian learning, backpropagation, Hopfield and Jordan RNNs, Boltzmann machines, multi-layer perceptrons, convolution, pooling, CNNs, LSTMs, SOMs, Q-learning, spiking neurons, autoencoders, DBNs, ReLU, dropout, batch normalisation, GANs, residual blocks, Inception modules, capsule nets, transfer learning, attention, transformers, Adam, NAS, GRUs, diffusion models and ChatGPT.]
Figure 2: See Do keep up with progress in deep learning (and its pitfalls). A rough
history of neural networks and deep learning, showing what I consider to be the mile-
stones in their development. For a far more thorough account of the field’s historical
development, take a look at Schmidhuber [2015].
3.4 Do keep up with progress in deep learning (and its pitfalls)
Figure 2 gives a rough overview of some of the important developments over time. Multilayer perceptrons (MLP) and
recurrent neural networks (particularly LSTM) have been around for some time, but
have largely been subsumed by newer models such as convolutional neural networks
(CNN) [Li et al., 2021] and transformers [Lin et al., 2022]. For example, transformers
have become the go-to model for processing sequential data (e.g. natural language), and
are increasingly being applied to other data types too, such as images [Khan et al.,
2022]. A prominent downside of both transformers and deep CNNs is that they have
many parameters and therefore require a lot of data to train them. However, an option
for small data sets is to use transfer learning, where a model is pre-trained on a large
generic data set and then fine-tuned on the data set of interest [Han et al., 2021]. Larger
pre-trained models, many of which are freely shared on websites such as Hugging Face,
are known as foundation models; see Zhou et al. [2023] for a survey. Whilst powerful,
these come with their own set of pitfalls. For example, their ability to fully memorise
input data is the cause of data security and privacy concerns [Li et al., 2023]. The use
of opaque, often poorly documented, training datasets also leads to pitfalls when fitting
them into broader ML pipelines (see Do combine models (carefully) for more info) and
comparing them fairly with other ML models (see Don’t assume a bigger number means
a better model and Don’t always believe results from community benchmarks). For an
extensive, yet accessible, guide to deep learning, see Zhang et al. [2023].
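As a hedged sketch of transfer learning (this assumes PyTorch and a recent version of torchvision are available; the number of classes and the data loader are placeholders), a pre-trained CNN can be fine-tuned by freezing its backbone and retraining only a new output layer:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: number of classes in your small dataset

# Load a model pre-trained on a large generic dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new task; its weights
# are trainable by default.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# train_loader is a placeholder for a DataLoader over your own data:
# for images, labels in train_loader:
#     optimiser.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimiser.step()
```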
3.5 Don’t assume deep learning will be the best approach
Whilst deep learning is great for certain tasks, it is not good at everything; there are
plenty of examples of it being out-performed by “old fashioned” machine learning models
such as random forests and SVMs. See, for instance, Grinsztajn et al. [2022], who show
that tree-based models often outperform deep learners on tabular data. Certain kinds
of deep neural network architecture may also be ill-suited to certain kinds of data: see,
for example, Zeng et al. [2023], who argue that transformers are not well-suited to time
series forecasting. There are also theoretical reasons why any one kind of model won’t
always be the best choice (see Do try out a range of different models). In particular, a
deep neural network is unlikely to be a good choice if you have limited data, if domain
knowledge suggests that the underlying pattern is quite simple, or if the model needs to
be interpretable. This last point is particularly worth considering: a deep neural network
is essentially a very complex piece of decision making that emerges from interactions
between a large number of non-linear functions. Non-linear functions are hard to follow
at the best of times, but when you start joining them together, their behaviour gets very
complicated very fast. Whilst explainable AI methods (see Do look at your models) can
shine some light on the workings of deep neural networks, they can also mislead you by
ironing out the true complexities of the decision space (see Molnar et al. [2020]). For
this reason, you should take care when using either deep learning or explainable AI for
models that are going to make high stakes or safety critical decisions; see Rudin [2019]
for more on this.
Figure 3: See Do be careful where and how you do feature selection. [top] Data leakage
due to carrying out feature selection before splitting off the test data (outlined in red),
causing the test set to become an implicit part of model training. [centre] How it should
be done. [bottom] When using cross-validation, it’s important to carry out feature
selection independently for each iteration, based only on the subset of data (shown in
blue) used for training during that iteration.
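One way to achieve what the bottom panel of Figure 3 illustrates in scikit-learn is to make feature selection part of a pipeline, so that it is refitted using only the training folds in each cross-validation iteration; the selector and classifier below are just illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)

# Because selection is part of the pipeline, it is refitted using only the
# training folds in each cross-validation iteration, avoiding leakage.
pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean(), scores.std())
```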
3.7 Do optimise your model’s hyperparameters
Many ML models have hyperparameters that control their configuration and training. Some of these hyperparameters significantly affect the performance of the model, and there
is generally no one-size-fits-all. That is, they need to be fitted to your particular data
set in order to get the most out of the model. Whilst it may be tempting to fiddle
around with hyperparameters until you find something that works, this is not likely
to be an optimal approach. It’s much better to use some kind of hyperparameter
optimisation strategy, and this is much easier to justify when you write it up. Basic
strategies include random search and grid search, but these don’t scale well to large
numbers of hyperparameters or to models that are expensive to train, so it’s worth
using tools that search for optimal configurations in a more intelligent manner. See
Bischl et al. [2023] for further guidance. It is also possible to use AutoML techniques
to optimise both the choice of model and its hyperparameters, in addition to other parts
of the machine learning pipeline — see Barbudo et al. [2023] for a review.
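As a small sketch of one of the basic strategies (a cross-validated grid search in scikit-learn; the model and parameter grid are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.01, 0.001],
}

# Each candidate configuration is assessed by cross-validation on the
# training data; the test set plays no part in the search.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
print("cross-validated score:", search.best_score_)
```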
3.8 Do avoid learning spurious correlations
Spurious correlations occur when a model latches onto features that happen to be correlated with the target in the training data but have no meaningful connection to it (the well-known tank recognition story is a classic illustration, though there is some debate about whether it actually happened: see https://www.gwern.net/Tanks). An ML model that uses such spurious correlations to perform classifica-
tion would appear to be very good, in terms of its metric scores, but would not work in
practice. More complex data tends to contain more of these spurious correlations, and
more complex models have more capacity to overfit spurious correlations. This means
that spurious correlations are a particular issue for deep learning, where approaches such
as regularisation (see Do keep up with progress in deep learning (and its pitfalls)) and
data augmentation (see Do make sure you have enough data) can help mitigate against
this. However, spurious correlations can occur in all data sets and models, so it is al-
ways worth looking at your trained model to see whether it’s responding to appropriate
features within your data — see Do look at your models.
4.2 Don’t do data augmentation before splitting your data
Data augmentation (see Do make sure you have enough data) can be a useful technique
for balancing datasets and boosting the generality and robustness of ML models. How-
ever, it’s important to do data augmentation only on the training set, and not on data
that’s going to be used for testing. Including augmented data in the test set can lead to
a number of problems. One problem is that the model may overfit the characteristics of
the augmented data, rather than the original samples, and you won’t be able to detect
this if your test set also contains augmented data. A more critical problem occurs when
data augmentation is applied to the entire data set before it is split into training and test
sets. In this scenario, augmented versions of training samples may end up in the test set,
which in the worst case can lead to a particularly nefarious form of data leakage in which
the test samples are mostly variants of the training samples. For an interesting study of
how this problem affected an entire field of research, see Vandewiele et al. [2021].
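As a hedged illustration (using SMOTE oversampling from the imbalanced-learn package to stand in for data augmentation), the test set is split off first and only the training data is augmented:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # assumes imbalanced-learn is installed

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

# Split first, so that no augmented (synthetic) samples can reach the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Augment/balance only the training data.
X_train_aug, y_train_aug = SMOTE(random_state=0).fit_resample(X_train, y_train)
```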
(Note: the sequential overfitting discussed in Do avoid sequential overfitting is sometimes called “over-hyping”, a term Hosseini et al. [2020] derived from the overfitting of hyperparameters.)
Figure 5: See Do avoid sequential overfitting. [top] Using the test set repeatedly during
model selection results in the test set becoming an implicit part of the training process.
[bottom] A validation set should be used instead during model selection, and the test
set should only be used once to measure the generality of the final model.
4.4 Do evaluate a model multiple times
A single evaluation can give an unreliable picture of a model’s performance, so it is standard practice to carry out multiple evaluations, using different subsets of data for each model trained. Cross-validation (CV) is a particularly popular way of
doing this, and comes in numerous flavours [Arlot et al., 2010], most of which involve
splitting the data into a number of folds. When doing CV, it is important to be aware
of any dependencies within the data and take these into account. Failure to do so can
result in data leakage. For instance, in medical datasets, it is commonplace to have
multiple data points for a single subject; to avoid data leakage, these should be kept
together within the same fold. Time series data is particularly problematic for CV;
see Don’t ignore temporal dependencies in time series data for a discussion of how to
handle this. If you’re carrying out hyperparameter optimisation, then you should use
nested cross-validation (also known as double cross-validation), which uses an extra
loop inside the main cross-validation loop to avoid overfitting the test folds. If some of
your data classes are small, then you may need to do stratification, which ensures each
class is adequately represented in each fold. In addition to looking at average perfor-
mance across multiple evaluations, it is also standard practice to provide some measure
of spread or confidence, such as the standard deviation or the 95% confidence interval.
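Here's a brief sketch of these ideas in scikit-learn (stratified cross-validation on the outside, with a hyperparameter search nested inside it; all of the settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

# Inner loop: hyperparameter optimisation via cross-validated grid search.
inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop: stratified cross-validation to estimate the generality of the
# whole tuning-plus-training procedure.
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(search, X, y, cv=outer_cv)

print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```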
Figure 6: See Do choose metrics carefully. The problem with using accuracy as a
performance metric on imbalanced data. Here, a dummy model which always predicts
the same class label has an accuracy of 50% or 90% depending on the distribution of
class labels within the data.
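The effect shown in Figure 6 is easy to reproduce with a dummy model in scikit-learn, and contrasting accuracy with a metric that accounts for class imbalance makes the problem visible (a hedged sketch with synthetic labels):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced labels: 90% class A (0), 10% class B (1).
y = np.array([0] * 90 + [1] * 10)
X = np.zeros((100, 1))  # features are irrelevant for a dummy model

dummy = DummyClassifier(strategy="most_frequent").fit(X, y)
y_pred = dummy.predict(X)

print("accuracy:", accuracy_score(y, y_pred))                    # 0.9
print("balanced accuracy:", balanced_accuracy_score(y, y_pred))  # 0.5
```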
4.7 Do consider model fairness
Unfairness often originates in the data: for instance, a dataset may over-represent certain ethnicities, and the model may not operate fairly when exposed to users from other ethnicities.
However, unfairness can also come from other sources, including subconscious bias during
data preparation and the inductive biases of the model. Regardless of the source, it is
important to understand any resulting biases, and ideally take steps to mitigate against
them (e.g. applying data augmentation to minority samples — see Do make sure you
have enough data). There are many different fairness metrics, so part of the puzzle is
working out which are most relevant to your modelling context; see Caton and Haas
[2024] for a review.
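As a simple hedged sketch of one such metric (demographic parity difference, computed by hand for a hypothetical binary sensitive attribute), group-level selection rates can be compared directly:

```python
import numpy as np

# Hypothetical model predictions (1 = positive outcome) and a binary
# sensitive attribute (e.g. two demographic groups).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_group0 = y_pred[group == 0].mean()
rate_group1 = y_pred[group == 1].mean()

# Demographic parity difference: gap in positive-prediction rates
# between the two groups (0 means parity on this metric).
print("selection rates:", rate_group0, rate_group1)
print("demographic parity difference:", abs(rate_group0 - rate_group1))
```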
Figure 7: See Don’t ignore temporal dependencies in time series data. [top] A time
series is scaled to the interval [0, 1] before splitting off the test data (shown in red). This
could allow the model to infer that values will increase in the future, causing a potential
look ahead bias. [bottom] Instead, the data should be split before doing scaling, so that
information about the range of the test data can’t leak into the training data.
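A hedged sketch of how to respect temporal ordering during evaluation (using scikit-learn's TimeSeriesSplit, with scaling fitted only on each training portion; the data and model are placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # placeholder time-ordered features
y = X[:, 0].cumsum() + rng.normal(size=200)   # placeholder target with a trend

# TimeSeriesSplit ensures every training fold precedes its test fold, and
# putting the scaler in a pipeline means it is fitted on training data only.
model = Pipeline([("scale", StandardScaler()), ("reg", Ridge())])
cv = TimeSeriesSplit(n_splits=5)

scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores)
```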
5.1 Don’t assume a bigger number means a better model
Published results are often obtained under different experimental conditions, and apparent differences in performance may be due to this. If the datasets had different degrees of
class imbalance, then the difference in accuracy could merely reflect this (see Do choose
metrics carefully). If they used different data sets entirely, then this may account for
even large differences in performance. Another reason for unfair comparisons is the
failure to carry out the same amount of hyperparameter optimisation (see Do optimise
your model’s hyperparameters) when comparing models; for instance, if one model has
default settings and the other has been optimised, then the comparison won’t be fair.
For these reasons, and others, comparisons based on published figures should always
be treated with caution. To be sure of a fair comparison between two approaches, you
should freshly implement all the models you’re comparing, optimise each one to the
same degree, carry out multiple evaluations (see Do evaluate a model multiple times),
and then use statistical tests (see Do use statistical tests when comparing models) to
determine whether the differences in performance are significant. A further complication
when comparing foundation models (see Do keep up with progress in deep learning (and
its pitfalls)) is that the original training data is often unknown; consequently it may be
impossible to ensure that the test set is independent of the training data, and therefore
a fair basis for comparison.
5.3 Do use statistical tests when comparing models
To compare models statistically, you generally need a distribution of performance scores for each model, which you can get by using cross-validation or repeated resampling (or,
if your training algorithm is stochastic, multiple repeats using the same data). The test
then compares the two resulting distributions. Student’s t-test is a common choice
for this kind of comparison, but it’s only reliable when the score distributions are
normal, which is often not the case. A safer bet is the Mann-Whitney U test, since this
does not assume that the distributions are normal. For more information, see Raschka
[2020] and Carrasco et al. [2020]. Also see Do correct for multiple comparisons and Do
be careful when reporting statistical significance.
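As a small sketch of such a comparison (two illustrative models, repeated cross-validated scores, and SciPy's Mann-Whitney U test):

```python
from scipy.stats import mannwhitneyu
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

# Distributions of performance scores for each model.
scores_a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

# Non-parametric test: no assumption that the score distributions are normal.
stat, p_value = mannwhitneyu(scores_a, scores_b)
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```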
5.5 Don’t always believe results from community benchmarks
When many models are repeatedly evaluated against the same benchmark test set, the test set becomes overused, and it is hard to be sure that a small difference
in performance is significant. This is particularly the case where foundation models
(see Do keep up with progress in deep learning (and its pitfalls)) are used, since it is
possible that their training data included the test sets from community benchmarks.
See Paullada et al. [2021] for a wider discussion of issues surrounding the use of shared
datasets. Also see Do report performance in multiple ways.
6.1 Do be transparent
First of all, always try to be transparent about what you’ve done, and what you’ve
discovered, since this will make it easier for other people to build upon your work. In
particular, it’s good practice to share your models in an accessible way. For instance,
if you used a script to implement all your experiments, then share the script when you
publish the results. This means that other people can easily repeat your experiments,
which adds confidence to your work. It also makes it a lot easier for people to compare
models, since they no longer have to reimplement everything from scratch in order to
ensure a fair comparison. Knowing that you will be sharing your work also encourages
you to be more careful, document your experiments well, and write clean code, which
benefits you as much as anyone else. It’s also worth noting that issues surrounding
reproducibility are gaining prominence in the ML community, so in the future you may
not be able to publish work unless your workflow is adequately documented and shared —
for example, see Pineau et al. [2021]. Checklists (Do use a machine learning checklist) are
useful for knowing what to include in your workflow. You might also find experiment
tracking frameworks, such as MLflow [Chen et al., 2020], useful for recording your
workflow.
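As a hedged sketch of experiment tracking with MLflow (this assumes the mlflow package is installed; the parameter and metric names are just illustrative):

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    n_estimators = 200
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_train, y_train)

    # Record the configuration and results of this experiment.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("val_accuracy", model.score(X_val, y_val))
```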
6.3 Don’t generalise beyond the data
Be careful not to claim that your findings hold more widely than your experiments support: even results on multiple data sets may not be independent, since the data sets may have similar biases. There’s also the issue of data quality, which is a particular concern for deep learning datasets, where the need for large quantities of data limits the amount of quality checking that can be done. So, in short, don’t overplay your findings, and be aware of their limitations.
Figure 8: See Do look at your models. Using saliency maps to analyse vision-based
deep learning models. Imagine these two maps (in red) were generated for the image
shown in the centre, for two different deep learning models trained on the kind of tank
recognition data mentioned in Do avoid learning spurious correlations. Darker colours
indicate features that are of greater importance to the model, so the model on the left
(which predominantly focuses on the components of the tank) is likely to generalise much
better than the one on the right (which predominantly focuses on the background of the
image).
6.5 Do look at your models
For vision models, saliency maps can be used to visualise the importance of different parts of an input image — see Figure 8 for an illustrative
example. Grad-CAM is a popular technique for generating these, but there are plenty
of other methods too. For non-vision transformers, a common approach is to visualise
attention weights. See Dwivedi et al. [2023] for a survey of XAI techniques, and Ali et al.
[2023] for a discussion of the limitations of current approaches. Whilst XAI techniques
can give you useful insights into a model’s behaviour, it’s important to bear in mind
that they are unlikely to tell you exactly what a model is doing. This is particularly
the case for deep learning models (see Don’t assume deep learning will be the best
approach), whose complexity makes their behaviour inherently difficult to analyse. For
complex models, ablation studies [Meyes et al., 2019] can also be useful. This involves
successively removing parts of the model to see what is important, and can result in a
simpler model which is more amenable to analysis.
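For simpler, non-vision models, one hedged starting point for looking at what a model is responding to is permutation importance, as implemented in scikit-learn (the model and data below are illustrative, and this is a much simpler technique than the saliency-based methods discussed above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in performance on
# held-out data: large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```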
7 Final thoughts
ML is becoming an important part of people’s lives, yet the practice of ML is arguably
in its infancy. There are many easy-to-make mistakes that can cause an ML model to
appear to perform well, when in reality it does not. In turn, this has the potential to
misinform when these models are published, and the potential to cause harm if these
models are ever deployed. This guide describes the most common of these mistakes,
and also touches upon more general issues of good practice in ML, such as fairness,
transparency and the avoidance of bias. It also offers advice on avoiding these pitfalls.
However, new threats continue to emerge as new approaches to ML are developed, and
it is therefore important for users of ML to remain vigilant. This is the nature of a
fast-moving research area — the theory of how to do ML almost always lags behind
the practice, practitioners will always disagree about the best ways of doing things, and
what we think is correct today may not be correct tomorrow.
Acknowledgements
Many thanks to everyone who gave me feedback on the draft manuscript, to everyone
who has since sent me suggestions for revisions and new content, and to the editor and
peer reviewers of the version published in Patterns.
References
Where available, preprint URLs are also included for papers that are not open access.
R. Barbudo, S. Ventura, and J. R. Romero. Eight years of automl: categorisation,
review and trends. Knowledge and Information Systems, 65(12):5097–5149, 2023.
https://doi.org/10.1007/s10115-023-01935-1.
R. A. Betensky. The p-value requires context, not a threshold. The American Statisti-
cian, 73(sup1):115–117, 2019. https://doi.org/10.1080/00031305.2018.1529624.
J. Carrasco, S. García, M. Rueda, S. Das, and F. Herrera. Recent trends in the use of
statistical tests for comparing swarm and evolutionary computing algorithms: Practi-
cal guidelines and a critical review. Swarm and Evolutionary Computation, 54:100665,
2020. https://doi.org/10.1016/j.swevo.2020.100665.
S. Caton and C. Haas. Fairness in machine learning: A survey. ACM Computing Surveys,
56(7):1–38, 2024. https://doi.org/10.1145/3616865.
A. Chen et al. Developments in MLflow: A system to accelerate the machine learning
lifecycle. In Proceedings of the Fourth International Workshop on Data Management
for End-to-End Machine Learning, pages 1–4, 2020.
https://doi.org/10.1145/3399579.3399867.
X. Dong, Z. Yu, W. Cao, Y. Shi, and Q. Ma. A survey on ensemble learning. Frontiers of
Computer Science, 14(2):241–258, 2020. https://doi.org/10.1007/s11704-019-8208-z.
X. Han, Z. Zhang, N. Ding, Y. Gu, X. Liu, Y. Huo, J. Qiu, Y. Yao, A. Zhang, L. Zhang,
et al. Pre-trained models: Past, present and future. AI Open, 2:225–250, 2021.
https://doi.org/10.1016/j.aiopen.2021.08.002.
G. Iglesias, E. Talavera, Á. González-Prieto, A. Mozo, and S. Gómez-Canaval. Data
augmentation techniques in time series domain: a survey and taxonomy. Neural Com-
puting and Applications, 35(14):10123–10145, 2023. https://doi.org/10.1007/s00521-
023-08459-3.
S. Kapoor and A. Narayanan. Leakage and the reproducibility crisis in
machine-learning-based science. Patterns, 4(9):100804, 2023. ISSN 2666-3899.
https://doi.org/10.1016/j.patter.2023.100804.
S. Kapoor, E. M. Cantrell, K. Peng, T. H. Pham, C. A. Bail, O. E. Gundersen,
J. M. Hofman, J. Hullman, M. A. Lones, M. M. Malik, P. Nanayakkara, R. A.
Poldrack, I. D. Raji, M. Roberts, M. J. Salganik, M. Serra-Garcia, B. M. Stew-
art, G. Vandewiele, and A. Narayanan. REFORMS: Consensus-based recommen-
dations for machine-learning-based science. Science Advances, 10(18):eadk3452, 2024.
https://doi.org/10.1126/sciadv.adk3452.
P. Karande, B. Gallagher, and T. Y.-J. Han. A strategic approach to machine learning for
material science: How to tackle real-world challenges and avoid pitfalls. Chemistry of
Materials, 34(17):7650–7665, 2022. https://doi.org/10.1021/acs.chemmater.2c01333.
S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and
M. Shah. Transformers in vision: A survey. ACM computing surveys
(CSUR), 54(10s):1–41, 2022. https://doi.org/10.1145/3505244 (preprint:
https://doi.org/10.48550/arXiv.2101.01169).
D. Kreuzberger, N. Kühl, and S. Hirschl. Machine learning operations
(MLOps): Overview, definition, and architecture. IEEE access, 2023.
https://doi.org/10.1109/ACCESS.2023.3262138.
M. Kuhn and H. Wickham. Tidymodels: a collection of packages for modeling and
machine learning using tidyverse principles, 2020. https://www.tidymodels.org.
H. Li, Y. Chen, J. Luo, Y. Kang, X. Zhang, Q. Hu, C. Chan, and Y. Song. Privacy
in large language models: Attacks, defenses and future directions. Preprint at arXiv,
2023. https://doi.org/10.48550/arXiv.2310.10383.
Z. Li, F. Liu, W. Yang, S. Peng, and J. Zhou. A survey of convolutional neural net-
works: analysis, applications, and prospects. IEEE transactions on neural networks
and learning systems, 2021. https://doi.org/10.1109/TNNLS.2021.3084827 (preprint:
https://doi.org/10.48550/arXiv.2004.02806).
T. Liao, R. Taori, I. D. Raji, and L. Schmidt. Are we learning yet? a meta review
of evaluation failures across machine learning. In Thirty-fifth Conference on Neural
Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
https://openreview.net/forum?id=mPducS1MsEK.
T. Lin, Y. Wang, X. Liu, and X. Qiu. A survey of transformers. AI Open, 2022.
https://doi.org/10.1016/j.aiopen.2022.10.001.
M. A. Lones. Avoiding machine learning pitfalls. Patterns, 2024.
https://doi.org/10.1016/j.patter.2024.101046.
C. Rudin. Stop explaining black box machine learning models for high stakes
decisions and use interpretable models instead. Nature Machine Intelli-
gence, 1(5):206–215, 2019. https://doi.org/10.1038/s42256-019-0048-x (preprint:
https://doi.org/10.48550/arXiv.1811.10154).
D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary,
M. Young, J.-F. Crespo, and D. Dennison. Hidden technical debt in machine
learning systems. Advances in neural information processing systems, 28:2503–2511,
2015. https://papers.nips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-
Paper.pdf.
B. Van Giffen, D. Herhausen, and T. Fahse. Overcoming the pitfalls and perils of algo-
rithms: A classification of machine learning biases and mitigation methods. Journal of
Business Research, 144:93–106, 2022. https://doi.org/10.1016/j.jbusres.2022.01.076.
Z. Wang, P. Wang, K. Liu, P. Wang, Y. Fu, C.-T. Lu, C. C. Aggarwal, J. Pei, and
Y. Zhou. A comprehensive survey on data augmentation. Preprint at arXiv, 2024.
https://doi.org/10.48550/arXiv.2405.09591.
S. Whalen, J. Schreiber, W. S. Noble, and K. S. Pollard. Navigating the pitfalls of
applying machine learning in genomics. Nature Reviews Genetics, 23(3):169–181, 2022.
https://doi.org/10.1038/s41576-021-00434-9.
A. Zhang, Z. C. Lipton, M. Li, and A. J. Smola. Dive into deep learning. Cambridge
University Press, 2023. https://d2l.ai.
C. Zhou, Q. Li, C. Li, J. Yu, Y. Liu, G. Wang, K. Zhang, C. Ji, Q. Yan, L. He, et al.
A comprehensive survey on pretrained foundation models: A history from bert to
chatgpt. Preprint at arXiv, 2023. https://doi.org/10.48550/arXiv.2302.09419.
J.-J. Zhu, M. Yang, and Z. J. Ren. Machine learning in environmental research: common
pitfalls and best practices. Environmental Science & Technology, 57(46):17671–17689,
2023. https://doi.org/10.1021/acs.est.3c00026.