
MODULE-3

Chapter-1
Learning and types of learning

1.1 What is learning?

Learning can be defined as any relatively permanent change in behaviour or behavioural potential produced by experience. Changes that occur due to practice and experience, and are relatively permanent, are a component of learning.
Its distinguishing features are:
(i) Learning always involves some kind of experience. For instance, a child who gets lost after letting go of a parent's hand learns not to let go of an elder's hand the next time.
(ii) Behavioural changes that occur due to learning are relatively permanent, and are different from temporary behavioural changes caused by habituation, drugs or fatigue. For example, feeling tired after studying is a temporary change and does not involve learning.
(iii) Learning is an inferred process involving a series of psychological events; it is therefore different from performance, which is the observable behaviour itself.

Behaviorism

Founded by John B. Watson (but widely associated with Ivan Pavlov and B.F. Skinner), Behaviorism is the idea that, much as in Bandura's Social Learning Theory, children learn by observing the behavior of others, whether adults and authority figures or friends and peers of their own age. In Behaviorism, the learner's mind is a "blank slate" ready to absorb knowledge, and repetition and reinforcement play a key role in communicating with students.

For instance, the teacher will use either negative or positive reinforcement: negative reinforcement removes ("subtracts") an unpleasant stimulus, while positive reinforcement adds a pleasant one; both serve to strengthen the desired behavior (discouraging an undesired behavior is, strictly, punishment rather than reinforcement). An example of positive reinforcement might be rewarding or praising a behavior such as volunteering during group discussions.

Cognitivism

Introduced during the middle of the 20th century, Cognitivism shifts away from Behaviorism to place a heavier emphasis on the internal thoughts of the learner, as opposed to merely observing others' behavior and responding to stimuli. In contrast to Behaviorism, Cognitivism holds that learning chiefly takes place while the student is working to break down and organize new information in their mind.

Journaling is frequently suggested as a helpful classroom exercise that applies the principles of Cognitivism, since it asks students to break down, organize and reflect on new information.

Constructivism

According to Constructivist Learning Theory, or CLT, students learn new information by building upon (in other words, constructing from) knowledge they have already gained. This represents a more active approach to learning, as opposed to an approach like Behaviorism, where students arguably take a more passive role in learning.

Cognitive Constructivism is associated with Jean Piaget, while Social Constructivism is linked to the pioneer Lev Vygotsky.

Humanism
Founded by pioneers like Carl Rogers, James F. T. Bugental, and
Abraham Maslow (whose famous “Hierarchy of Needs” you’re likely
already familiar with), Humanist Learning Theory (HLT) is a learner-
centric approach to education. Humanist Learning Theory places a
heavier emphasis on the learner themselves — and their untapped
potential — rather than the methods of learning or the materials being
taught. Built on the premise that humans are fundamentally good and
will act appropriately if their basic needs are met, HLT prioritizes
meeting the unique emotional and academic needs of each learner so
that they are empowered to take greater control over their own
education.

Connectivism
Connectivism has been called a learning theory for the 21st century. But, other than its relatively recent introduction as a theory of learning, what makes Connectivism so useful and relevant to the modern student and educator?

Critically, Connectivist Learning Theory makes effective use of technology, which is an essential tool for learning, particularly among Generation Z students and future generations. Connectivism also places a strong emphasis on the ability to find and sift through information in order to conduct reliable research. Some examples of a Connectivist approach to teaching might be to have your students write a blog or launch a podcast together: activities that merge technology with group and community interaction.

Instance-Based Learning


Machine learning systems categorized as instance-based learning are systems that learn the training examples by heart and then generalize to new instances based on some similarity measure. The approach is called instance-based because it builds its hypotheses from the training instances themselves. It is also known as memory-based learning or lazy learning, because processing is delayed until a new instance must be classified. The time complexity of this algorithm depends on the size of the training data: whenever a new query is encountered, the previously stored data is examined and a target function value is assigned to the new instance. The worst-case time complexity is O(n), where n is the number of training instances.

For example, if we were to create a spam filter with an instance-based learning algorithm, instead of only flagging emails that are already marked as spam, our spam filter would also flag emails that are very similar to them. This requires a measure of resemblance between two emails; a similarity measure could be having the same sender, repetitive use of the same keywords, or something else.
Advantages:
1. Instead of estimating the target function over the entire instance space, local approximations can be made.
2. The algorithm adapts easily to new data, such as data collected incrementally as we go.
Disadvantages:
1. Classification costs are high.
2. A large amount of memory is required to store the data, and each query involves building a local model from scratch.
Some instance-based learning algorithms (a minimal sketch of the nearest-neighbour idea follows the list) are:
1. K Nearest Neighbor (KNN)
2. Self-Organizing Map (SOM)
3. Learning Vector Quantization (LVQ)
4. Locally Weighted Learning (LWL)
5. Case-Based Reasoning
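
As a minimal sketch of the lazy-learning idea (in Python, with a hypothetical toy dataset and the common choice of Euclidean distance, neither of which comes from the text above), the following 1-nearest-neighbour classifier memorises its training instances and defers all work to an O(n) scan at query time:

import math

def euclidean(a, b):
    # Distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class OneNearestNeighbour:
    # Lazy learner: "training" just memorises the instances.
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, query):
        # O(n) scan over all stored instances at query time.
        distances = [euclidean(query, x) for x in self.X]
        return self.y[distances.index(min(distances))]

# Hypothetical toy data: two features, two classes.
X_train = [(1.0, 2.0), (1.5, 1.8), (5.0, 8.0), (6.0, 9.0)]
y_train = ["A", "A", "B", "B"]

clf = OneNearestNeighbour().fit(X_train, y_train)
print(clf.predict((1.2, 1.9)))   # -> A
print(clf.predict((5.5, 8.5)))   # -> B

A k-NN classifier generalises this sketch by voting among the k closest stored instances instead of taking only the single nearest one.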
What is Model-Based Machine Learning?
Hundreds of learning algorithms have been developed in the field of machine learning. Scientists typically select from among these algorithms to solve specific problems, and their options are frequently restricted by their familiarity with those algorithms. In this classical/traditional machine learning framework, scientists are therefore forced to make assumptions in order to employ an existing algorithm.

 Model-based learning in machine learning is a technique that tries to generate a custom solution for each new problem.
MBML's purpose is to offer a single development framework that facilitates the building of a diverse variety of custom models. This paradigm evolved as a result of a significant confluence of three main ideas:

 Factor graphs
 Bayesian perspective
 Probabilistic programming

The essential principle is that all assumptions about the problem domain are made explicit in the form of a model; a model is just a collection of assumptions stated in a graphical manner.

Factor Graphs

The use of Probabilistic Graphical Models (PGMs), particularly factor graphs, is the pillar of MBML. A PGM is a graph-based diagrammatic representation of the joint probability distribution across all random variables in a model. A factor graph is a form of PGM in which round nodes represent the random variables, square nodes represent the factors (local probability distributions), and edges express the conditional relationships between nodes. Factor graphs offer a broad framework for modeling the joint distribution of a set of random variables.

In factor graphs, we treat latent parameters as random variables and discover their probability distributions throughout the network using Bayesian inference techniques. Inference/learning is just the product of factors over a subset of the graph's variables. This makes it simple to develop local message-passing algorithms.
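
To illustrate the "product of factors" idea, here is a small self-contained Python sketch; the variables, factor tables, and numbers are hypothetical, chosen only for illustration. It defines a joint distribution over three binary variables as a product of three factors and computes a marginal by brute-force summation. Message-passing algorithms compute exactly these sums, but locally and efficiently:

import itertools

# Hypothetical factor graph over binary variables A, B, C:
#   p(A, B, C) is proportional to f1(A) * f2(A, B) * f3(B, C)
f1 = {0: 0.6, 1: 0.4}
f2 = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
f3 = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}

def joint(a, b, c):
    # The (unnormalised) joint is just the product of the factors.
    return f1[a] * f2[(a, b)] * f3[(b, c)]

# Marginal p(C): sum out A and B, then normalise.
unnorm = {c: sum(joint(a, b, c) for a, b in itertools.product((0, 1), repeat=2))
          for c in (0, 1)}
Z = sum(unnorm.values())
print({c: round(v / Z, 3) for c, v in unnorm.items()})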

Bayesian Methods

The first essential concept enabling this new machine learning architecture is Bayesian inference/learning. Latent (hidden) parameters are represented in MBML as random variables with probability distributions. This provides a consistent and principled approach to quantifying uncertainty in model parameters. When the observed variables in the model are fixed to their known values, Bayes' theorem is used to update the previously assumed (prior) probability distributions.

In contrast, the classical ML framework assigns model parameters single values, derived by maximizing an objective function. Bayesian inference on big models with millions of variables employs Bayes' theorem in the same way, but is considerably harder, because exact Bayesian inference is intractable when applied to huge datasets. The rise in the processing capacity of computers over the last decade has enabled the research and innovation of approximate algorithms that can scale to enormous data sets.
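
As a concrete, deliberately simple illustration of the Bayesian update, consider estimating a coin's bias. The following Python sketch uses the conjugate Beta-Bernoulli pair, for which Bayes' theorem has a closed form; the prior choice and the observations are hypothetical:

# Latent parameter theta (coin bias) with a Beta(alpha, beta) prior.
# For Bernoulli data the posterior is again a Beta distribution:
# each head adds 1 to alpha, each tail adds 1 to beta.
alpha, beta = 1.0, 1.0             # uniform prior over theta
observations = [1, 1, 0, 1, 1, 0]  # hypothetical flips (1 = head)

for x in observations:
    alpha += x
    beta += 1 - x

print(f"posterior = Beta({alpha:.0f}, {beta:.0f}), "
      f"mean = {alpha / (alpha + beta):.3f}")

A classical maximum-likelihood estimate would report only the point value 4/6; the Bayesian posterior instead carries a full distribution, and hence a measure of uncertainty, over theta.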

Probabilistic Programming

Probabilistic programming (PP) is a breakthrough in computer science in which programming languages are designed to compute with uncertainty in addition to logic. Such languages can handle random variables, constraints on variables, and inference packages. You can now express a model of your problem concisely, in a few lines of code, using a PP language; an inference engine is then invoked to produce inference procedures that solve the problem automatically.
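
As a hedged sketch of what this looks like in practice, the coin-bias model above (a Beta prior with Bernoulli observations) can be expressed in a few lines using the PyMC probabilistic programming library; the data are hypothetical and exact API details vary between PyMC versions:

import pymc as pm

data = [1, 1, 0, 1, 1, 0]                        # hypothetical observations

with pm.Model():
    theta = pm.Beta("theta", alpha=1, beta=1)    # latent random variable
    pm.Bernoulli("obs", p=theta, observed=data)  # tie the model to the data
    trace = pm.sample(1000)                      # the inference engine does the rest

print(float(trace.posterior["theta"].mean()))

The model description is just the two distribution lines; everything about how to infer theta is delegated to the engine.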
Model-Based ML Developmental Stages
Model-based ML development consists of three stages:

 Describe the model: using factor graphs, describe the process that generated the data.
 Condition on observed data: make the observed variables equal to their known values.
 Perform backward reasoning: update the prior distributions over the latent constructs or parameters, i.e. estimate the Bayesian (posterior) probability distributions of the latent constructs given the observed variables.

A minimal end-to-end sketch of these three stages follows.
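
The Python sketch below walks through the three stages on a discrete toy problem; the variable names, prior, and likelihood numbers are hypothetical. A latent skill level generates an observed pass/fail outcome, we condition on the observation, and backward reasoning via Bayes' theorem yields the posterior:

# Stage 1 - Describe the model: a latent skill in {low, high} generates
# an observed outcome through a likelihood factor.
prior = {"low": 0.5, "high": 0.5}
likelihood = {                    # p(outcome | skill), hypothetical numbers
    "low":  {"fail": 0.8, "pass": 0.2},
    "high": {"fail": 0.1, "pass": 0.9},
}

# Stage 2 - Condition on observed data: fix the observed variable.
observed = "pass"

# Stage 3 - Backward reasoning: Bayes' theorem updates the prior.
unnorm = {s: prior[s] * likelihood[s][observed] for s in prior}
Z = sum(unnorm.values())
posterior = {s: round(v / Z, 3) for s, v in unnorm.items()}
print(posterior)                  # {'low': 0.182, 'high': 0.818}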
Similarity-Based Machine Learning
Similarity is a machine learning method that uses a nearest-neighbor approach to identify the similarity of two or more objects to each other, based on algorithmic distance functions. For similarity to operate at the speed and scale demanded of machine learning, two critical capabilities are required: high-speed indexing, and metric and non-metric distance functions. As a method, similarity is different from:
 Neural networks, which create vector nodes to predict an outcome
 Decision trees, which create multiple trees that branch down to predictions
 Deep learning, which uncovers hidden layers within artificial neural networks
 Natural language processing (NLP), which mimics human language in conversation
All of these methods fall under the machine learning umbrella of Artificial Intelligence. Similarity, along with neural networks and decision trees, predicts future outcomes; deep learning and NLP do not. However, only one of these methods can provide an explanation behind a prediction at the local prediction level, meaning that the factors driving one prediction are revealed by the algorithm and differ from those driving the next. That one method is Similarity.

Once you have these factors, prediction by prediction, you can execute real-time interactions with tremendous insight into the context of each one, because the prediction factors reveal all of the key characteristics associated with the prediction, data element by data element. Why one customer is calling to cancel service versus another, or to make a purchase versus another, can be driven by completely different factors. It is therefore possible to treat each individual interaction driven by a similarity-based machine learning prediction differently, by leveraging the weighted factors to drive the selection of one offer and treatment versus another.
Similarity creates uniquely valuable outputs and handles certain applications that alternative machine learning methods can't. It reveals what is behind the mind of the machine so that humans can decide what actions to take, and it provides powerful insights and capabilities that, through transparency, engender trust and understanding.
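
To show how a similarity prediction can carry its own local explanation, here is a hedged Python sketch. The customer features, records, and the use of squared per-feature differences as the explanatory "factors" are hypothetical illustrations; in practice the features would also be normalised so that no single scale dominates the distance:

# Hypothetical records: (monthly_spend, support_calls, tenure_months) -> outcome
features = ["monthly_spend", "support_calls", "tenure_months"]
customers = [
    ((20.0, 5.0, 3.0),  "cancel"),
    ((80.0, 0.0, 36.0), "stay"),
    ((25.0, 4.0, 6.0),  "cancel"),
    ((75.0, 1.0, 30.0), "stay"),
]

def predict_with_explanation(query):
    # Nearest stored customer under squared Euclidean distance.
    vector, label = min(customers,
                        key=lambda c: sum((q - v) ** 2
                                          for q, v in zip(query, c[0])))
    # Per-feature squared differences: the factors behind this one
    # prediction, which differ from one query to the next.
    factors = {f: (q - v) ** 2 for f, q, v in zip(features, query, vector)}
    return label, factors

label, factors = predict_with_explanation((22.0, 4.0, 4.0))
print(label)    # cancel
print(factors)  # {'monthly_spend': 4.0, 'support_calls': 1.0, 'tenure_months': 1.0}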
Regression Analysis in Machine Learning
Regression analysis is a statistical method for modeling the relationship between a dependent (target) variable and one or more independent (predictor) variables. More specifically, regression analysis helps us understand how the value of the dependent variable changes with respect to one independent variable when the other independent variables are held fixed. It predicts continuous/real values such as temperature, age, salary, price, etc.
We can understand the concept of regression analysis using the following example.
Example: Suppose a marketing company A runs various advertisements every year and earns sales from them. The list below shows the advertisements made by the company in the last 5 years and the corresponding sales. Now the company wants to spend $200 on advertising in the year 2019 and wants to know the predicted sales for that year. To solve such prediction problems in machine learning, we need regression analysis.
Regression is a supervised learning technique that helps find correlations between variables and enables us to predict the continuous output variable from one or more predictor variables. It is mainly used for prediction, forecasting, time-series modeling, and determining cause-and-effect relationships between variables.

In regression, we fit a line or curve between the variables that best fits the given datapoints; using this fit, the machine learning model can make predictions about the data. In simple words, "regression finds a line or curve passing through the datapoints on the target-predictor graph in such a way that the vertical distance between the datapoints and the regression line is minimized." How small that distance is tells us whether the model has captured a strong relationship or not.
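
As a minimal sketch of the advertising example, the following Python snippet fits an ordinary least-squares line, sales = a + b * spend, and predicts the sales for a $200 spend. Since the original table of advertisements is not reproduced above, the five (spend, sales) pairs here are hypothetical numbers used purely for illustration:

# Hypothetical yearly (advertising spend, sales) pairs.
data = [(90, 1000), (120, 1300), (150, 1800), (100, 1200), (130, 1380)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares: the slope b minimises the (squared) vertical
# distances between the datapoints and the fitted line.
b = (sum((x - mean_x) * (y - mean_y) for x, y in data)
     / sum((x - mean_x) ** 2 for x, _ in data))
a = mean_y - b * mean_x

print(f"sales = {a:.1f} + {b:.2f} * spend")
print(f"predicted sales for a $200 spend: {a + b * 200:.0f}")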

IMPORTANT QUESTIONS
1. Explain learning and the types of learning.

2. What is similarity-based learning? Give some examples, and write the differences between instance-based learning and model-based learning.

3. Solve the following dataset and predict the class using the Find-S algorithm.

4. Write the A* search algorithm and solve the following graph.

5. Draw a neat diagram of the machine learning process and explain it.

6. Explain similarity-based learning and regression-based learning.

7. Explain learning and its types.


8. Write the nearest centroid classifier algorithm. Solve the following dataset and predict the class. The given test instance is (9, 4).

9. Consider the following dataset. The given test instance is (7.6, 60, 8) and k = 3. Use the weighted k-NN algorithm and determine the class.
S.No  CGPA  Assessment  Project  Result
1     9.2   85          8        Pass
2     8     80          7        Pass
3     8.5   81          8        Pass
4     6     45          5        Fail
5     6.5   50          4        Fail
6     8.2   72          7        Pass

10. Explain the need for machine learning.

11. Mention the H function categories and solve the following with the H value.

Initial state:      Goal state:
1 2 3               1 2 3
5 8 6               5 _ 6
7 4 _               7 8 4
12. Solve the following table using the nearest centroid classifier. The given test instance is (6, 5).
X Y Class
3 1 B
5 2 B
4 3 B
7 6 A
6 7 A
8 5 A

13. Write the Find-S algorithm. Solve the following training dataset using the algorithm.

14. Consider the following dataset. The given test instance is (7.6, 60, 8) and k = 3. Use weighted k-NN and determine the class.
S.No  CGPA  Assessment  Project  Result
1     9.2   85          8        Pass
2     8     80          7        Pass
3     8.5   81          8        Pass
4     6     45          5        Fail
5     6.5   50          4        Fail
6     8.2   72          7        Pass
