Gradient Boosted Trees Explained

The document discusses Gradient Boosted Trees, a machine learning technique that builds predictive models through an ensemble of weak learners, typically decision trees. It outlines the importance of data mining, various data mining techniques, and the principles of boosting, particularly focusing on how Gradient Boosting minimizes loss functions using gradient descent. The pros and cons of Gradient Boosted Trees are also highlighted, emphasizing their effectiveness and sensitivity to overfitting.


Gradient boosted trees

Dr. Geetha Kuntoji


Assistant Professor

Department of Civil Engineering

BMS College of Engineering


Bengaluru-19

28 Feb 2025 (10.30am to 5.00pm)


Department of Civil Engineering NIE College of Engineering Mysore
Data Mining: the process of extracting patterns from data. The patterns should be:
Valid: they hold on new data with some certainty.
Novel: they are non-obvious to the system.
Useful: it should be possible to act on them.
Understandable: humans should be able to interpret the pattern.

Also known as Knowledge Discovery in Databases (KDD).


Data Mining might mean:

Statistics
Visualization
Artificial Intelligence
Machine Learning
Information Retrieval
Knowledge-based systems
Knowledge acquisition
Pattern Recognition
And so on....
What's needed?

Suitable data
Computing power
Data mining software
Someone who knows both the nature of the data and the software tools
A reason, theory or hunch
Typical applications of Data Mining and KDD

Data Mining and KDD have widespread applications. Some examples include:
Marketing
Financial services
Health care
And so on....

Some basic techniques

Predictive model: describes, or rather predicts, what will happen in the future by analyzing the given current data. It uses statistical analysis, machine learning algorithms and other forecasting techniques. It is not fully accurate, as it is essentially a prediction into the future made from the data and the given statistical/machine learning techniques. Eg - Performance Analysis.

Descriptive model: gives a vision into the past and tells what exactly happened. It involves Data Aggregation and Data Mining. It is accurate, as it describes exactly what happened in the past. Eg - Sentiment Analysis.

Prescriptive model: a relatively new field in Data Mining, a step above the predictive and descriptive models. It provides a viable solution to the problem in hand and the impact of considering that solution on future outcomes. It is still an evolving technique. Eg - Google's self-driving car.
Some basic techniques

Predictive: Regression, Classification, Collaborative Filtering
Descriptive: Clustering, Association rules and variants, Deviation detection
Key data mining tasks

Classification: mapping data into predefined groups or classes.
Regression: mapping a data item to a real-valued prediction variable.
Clustering: grouping similar data together into clusters.


Key learning tasks in Machine Learning
Supervised learning: a set of well-labelled data is given, with defined input and output variables (training data), and the algorithms learn to predict the output from the input data.

Unsupervised learning: the data given is not labelled, i.e. only input variables are given, with no corresponding output variables. The algorithms find patterns and draw inferences from the given data. This is "pure Data Mining".

Semi-supervised: some data is labelled, but most of it is unlabelled, and a mixture of supervised and unsupervised techniques can be used.


Some basic Data Mining Methods

Decision Trees
Neural Networks
Cluster/Nearest Neighbour
Genetic Algorithms/Evolutionary Computing
Bayesian Networks
Statistics
Hybrids


We are interested in Gradient boosted trees.

Gradient
boosted trees
We would use Rapidminer (possibly Python?)
Gradient boosted trees

 Decision Trees
We will discuss a bit about decision trees first.
A decision tree is a tree where each node represents a feature (attribute), each
link (branch) represents a decision (rule) and each leaf represents an
outcome (a categorical or continuous value).
A decision tree takes a set of input features and splits the input data recursively based
on those features.
The process is repeated until some stop condition is met, e.g. the depth of the tree, no
more information gain possible, etc.
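The splitting step above can be sketched in plain Python. This is an illustrative toy (the function name and data are made up, not from the slides): it finds the single threshold on one numeric feature that minimises the squared error of the two resulting leaves.

```python
# Hypothetical sketch: the best single split on one numeric feature,
# chosen by minimising the sum of squared errors of the two leaves.
def best_split(xs, ys):
    """Return (threshold, sse) of the best split on a 1-D feature."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave data on both sides
        sse = sum((y - sum(left) / len(left)) ** 2 for y in left) \
            + sum((y - sum(right) / len(right)) ** 2 for y in right)
        if sse < best[1]:
            best = (t, sse)
    return best

# two well-separated groups: the split should land between them
xs = [1, 2, 3, 10, 11, 12]
ys = [5.0, 5.0, 5.0, 20.0, 20.0, 20.0]
t, sse = best_split(xs, ys)
```

A full tree would apply this recursively to each side until a stop condition (depth, minimum leaf size, no gain) is met.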
Gradient boosted trees

 Decision Trees have been around for a long time and are known to suffer from bias and variance:

We have large bias with simple trees and large variance with complex trees.
Ensemble methods combine several decision trees to produce better predictive
performance than a single decision tree.
The main principle behind an ensemble model is that a group of weak learners
come together to form a strong learner.
A few ensemble methods: Bagging, Boosting
 We will see each of them.
Gradient boosted trees

Bagging

It is used when our goal is to reduce the variance of a decision tree.
The idea is to take subsets of data from the training sample, chosen randomly
with replacement.
Each subset of data is then used to train its own decision tree.
We thus end up with an ensemble of different models, and their average is much more
robust than a single decision tree in predictive analysis.
Random Forest is an extension of Bagging.
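The bagging procedure above can be sketched as follows. This is a hedged toy in plain Python: the "weak model" fitted to each bootstrap sample is just the sample mean, standing in for a decision tree, and all names are illustrative.

```python
# Minimal bagging sketch: draw bootstrap samples with replacement, fit a
# trivial "model" to each (here, the sample mean stands in for a decision
# tree), and average the individual predictions.
import random

def bagged_predict(ys, n_models=50, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(ys) for _ in ys]    # bootstrap sample
        preds.append(sum(sample) / len(sample))  # weak model: mean predictor
    return sum(preds) / len(preds)               # ensemble average

data = [2.0, 4.0, 6.0, 8.0]
estimate = bagged_predict(data)
```

The averaging step is what reduces variance: individual bootstrap models fluctuate, but their mean is far more stable.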
Gradient boosted trees

Random Forest

It is basically a collection, or ensemble, of numerous decision trees; a collection of
trees is generally called a forest.
It is also a bagging technique, with a key difference: it takes a random subset of features
at each split, and prunes the trees with a stopping criterion for node splits.
Each tree is grown to the largest extent possible.
The above steps are repeated, and the prediction is given by aggregating the
predictions from n trees.
It is used for both classification and regression.
It handles higher-dimensional data and missing values well and maintains accuracy, but it
does not give precise values for regression, as the final prediction is based on the
mean of the predictions from the subset trees.
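The key random-forest twist, considering only a random subset of features at each split, can be sketched like this. The function name and the sqrt(p) default are illustrative assumptions, not something prescribed by the slides.

```python
# Sketch of per-split feature subsampling: each split only considers a random
# subset of the available features, which decorrelates the trees in the forest.
import math
import random

def feature_subset(n_features, seed=0):
    rng = random.Random(seed)
    k = max(1, int(math.sqrt(n_features)))  # a common default: sqrt(p) features
    return rng.sample(range(n_features), k)

# with 9 features, each split would consider 3 randomly chosen ones
subset = feature_subset(9)
```

Because different trees see different feature subsets at each split, their errors are less correlated, which makes the aggregated prediction stronger.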
Gradient boosted trees

 Boosting

Boosting refers to a family of methods which convert weak learners into strong learners.

 It learns sequentially from the errors of a prior random sample (in our case, a tree).

The weak learners are trained sequentially, each trying to correct its predecessor.

The early learners fit simple models to the data, and the data is then analyzed for errors.

All the weak learners, each with accuracy only slightly better than random guessing (0.5), are combined in some way to get a strong
classifier with much higher accuracy.

 When an input is misclassified by a hypothesis, its weight is increased so that the next hypothesis is more likely to classify it correctly.

By combining the whole set at the end, the weak learners are converted into a better-performing model.
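The final combination step can be sketched as a weighted vote of weak learners. The learner weights (alphas) here are fixed by hand purely for illustration; boosting would derive them from each learner's error.

```python
# Minimal sketch of combining weak learners into a strong classifier via a
# weighted vote; labels are +1/-1, and the alphas are hypothetical.
def strong_classify(x, learners, alphas):
    """Weighted vote of weak learners on input x."""
    score = sum(a * h(x) for h, a in zip(learners, alphas))
    return 1 if score >= 0 else -1

# three weak learners, two of which agree for large x
learners = [lambda x: 1 if x >= 5 else -1,
            lambda x: 1 if x >= 3 else -1,
            lambda x: -1]  # a poor learner that always votes -1
label = strong_classify(6, learners, alphas=[0.8, 0.6, 0.2])
```

Even though one learner always votes wrongly here, its small weight means the better learners dominate the vote.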
Gradient boosted trees

Types of boosting

AdaBoost: short for Adaptive Boosting.
We start from a weak classifier and learn to linearly combine such classifiers so that the error is reduced; the result is a strong classifier built by boosting weak classifiers.
We train an algorithm, say a decision tree, on a model whose features have all been given equal weights.
A model is built on a subset of the data, predictions are made on the whole dataset, and errors are calculated from the predictions and actual values.
Gradient boosted trees

Adaboost

While creating the next model, higher weights are given to the data points which were
predicted incorrectly, i.e. misclassified.
Weights can be determined using the error value: the higher the error, the larger the
weight associated with the observation.
This process is repeated until the error function does not change, or the maximum number of
estimators is reached.
 It is used for both classification and regression problems. Mostly decision stumps are used with
AdaBoost, but any machine learning algorithm that accepts weights on the training data set can
be used as a base learner.
 One of the applications of AdaBoost is face recognition systems.
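One AdaBoost round can be sketched as follows, assuming the standard update in which each weight is multiplied by exp(±alpha), with alpha = ½·ln((1−err)/err). The data and the stump's predictions are made up for illustration.

```python
# Hedged sketch of one AdaBoost round on a decision stump: compute the
# stump's weighted error, turn it into a learner weight alpha, then
# reweight the data so misclassified points count more next round.
import math

def adaboost_round(weights, predictions, labels):
    # weighted error of this weak learner (assumed to lie in (0, 0.5))
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1 - err) / err)  # learner weight
    new = [w * math.exp(-alpha if p == y else alpha)
           for w, p, y in zip(weights, predictions, labels)]
    total = sum(new)
    return alpha, [w / total for w in new]   # renormalise to sum to 1

# one stump that misclassifies the last of four equally weighted points
alpha, w = adaboost_round([0.25] * 4, [1, 1, -1, -1], [1, 1, -1, 1])
```

After the update, the misclassified point carries a much larger share of the total weight, so the next stump is pushed to get it right.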
Gradient boosted trees

Types of Boosting

Gradient Boosting

We will cover this in detail now.


There are other implementations of Gradient Boosting, like XGBoost and
LightGBM.
Gradient boosted trees

Gradient Boost

It is also a machine learning technique which produces a
prediction model in the form of an ensemble of weak prediction models, typically
decision trees.
Thus, the models may be referred to as Gradient boosted trees.
Like other boosting methods, it builds the model in a sequential, or stage-wise,
fashion.
Gradient boosted trees

We shall now see some maths behind it.

The objective of any supervised learning algorithm is to define a loss function and minimize it.
We have the mean squared error (MSE) defined as:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²

where yᵢ are the actual values and ŷᵢ the predictions. We want our loss function (MSE) over our predictions to be
minimized using gradient descent, updating our predictions based on a learning rate.
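For the MSE loss, the gradient with respect to each prediction points away from the target, so a gradient-descent step moves each prediction a fraction of the residual (yᵢ − ŷᵢ) toward it. A toy sketch (all values and the learning rate of 0.1 are illustrative):

```python
# One gradient-descent step on the predictions themselves: for MSE, the
# negative gradient at each point is simply the residual (target - prediction).
def descent_step(preds, targets, lr=0.1):
    return [p + lr * (y - p) for p, y in zip(preds, targets)]

preds = [0.0, 0.0]
for _ in range(100):
    preds = descent_step(preds, [3.0, 7.0])
# after many steps the predictions approach the targets
```

Each step shrinks every residual by the factor (1 − lr), which is exactly the "update predictions based on a learning rate" idea above.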
Gradient boosted trees

We will see what the learning rate is.

The learning rate is a hyperparameter which controls how much we adjust the weights of our network with
respect to the loss gradient. The learning rate affects how quickly our model can converge to a local minimum (i.e.
arrive at the best accuracy).
The relationship is given by the formula: new weight = existing weight − learning rate × gradient.
In gradient boosted trees, the negative gradient of the MSE loss with respect to a prediction is simply the
residual (actual − predicted), so the update becomes: new prediction = existing prediction + learning rate × residual.

We basically update the predictions so that the sum of our residuals is close to zero (or minimal) and the
predicted values are sufficiently close to the actual values.
The learning rate is tuned so as to prevent the overfitting which gradient boosted trees are prone to.
Gradient boosted trees

In Gradient boosted trees, models are trained sequentially, and each model minimizes
the loss function of the whole system using the gradient descent method, as explained
earlier (for a simple model y = ax + b + e, the error term e is what the successive models
try to explain).
The learning procedure consecutively fits new models to provide a more accurate
estimate of the response variable.
The principal idea behind this algorithm is to construct new base learners which are
maximally correlated with the negative gradient of the loss function of the whole
ensemble.
Pros of Gradient boosted trees: fast, easy to tune, not sensitive to scale (features can be a
mix of continuous and categorical data), good performance, lots of software available (well
supported and tested).
Cons: sensitive to overfitting and noise (one should always cross-validate).
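Putting the pieces together, the stage-wise procedure can be sketched as a toy gradient-boosted-trees loop in pure Python, with depth-1 stumps as the weak learners. All names, data and the learning rate are illustrative assumptions, not a production implementation.

```python
# Toy gradient boosting for regression: each stage fits a depth-1 stump to
# the current residuals (the negative gradient of MSE) and adds a scaled
# copy of its predictions to the ensemble.
import statistics

def fit_stump(xs, rs):
    """Fit a depth-1 regression tree (stump) to residuals rs on feature xs."""
    best = None
    for t in sorted(set(xs))[:-1]:
        lv = statistics.mean(r for x, r in zip(xs, rs) if x <= t)
        rv = statistics.mean(r for x, r in zip(xs, rs) if x > t)
        sse = sum((r - (lv if x <= t else rv)) ** 2 for x, r in zip(xs, rs))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda x, t=t, lv=lv, rv=rv: lv if x <= t else rv

def gradient_boost(xs, ys, n_stages=50, lr=0.3):
    pred = [0.0] * len(ys)
    for _ in range(n_stages):
        resid = [y - p for y, p in zip(ys, pred)]        # negative MSE gradient
        h = fit_stump(xs, resid)                         # new base learner
        pred = [p + lr * h(x) for p, x in zip(pred, xs)] # stage-wise update
    return pred

xs = [1, 2, 3, 10, 11, 12]
ys = [5.0, 5.0, 5.0, 20.0, 20.0, 20.0]
pred = gradient_boost(xs, ys)
```

Note how the learning rate shrinks each stage's contribution: smaller values need more stages but, as noted above, help guard against the overfitting these models are prone to.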
