Machine Learning Unit1

Uploaded by Vinoth Kumar M

© All Rights Reserved
UNIT I : Introduction to Machine Learning

Syllabus
Review of Linear Algebra for machine learning; Introduction and motivation for machine learning; Examples of machine learning applications, Vapnik-Chervonenkis (VC) dimension, Probably Approximately Correct (PAC) learning, Hypothesis spaces, Inductive bias, Generalization, Bias variance trade-off.

Contents
1.1 Review of Linear Algebra for Machine Learning
1.2 Introduction and Motivation for Machine Learning
1.3 Types of Machine Learning
1.4 Examples of Machine Learning Applications
1.5 Vapnik-Chervonenkis (VC) Dimension
1.6 Probably Approximately Correct (PAC) Learning
1.7 Hypothesis Spaces
1.8 Inductive Bias
1.9 Bias Variance Trade-Off
1.10 Two Marks Questions with Answers

1.1 Review of Linear Algebra for Machine Learning

• Linear algebra is the study of linear combinations. It is the study of vector spaces, lines and planes, and of the mappings that are required to perform linear transformations. It includes vectors, matrices and linear functions, and is the study of linear sets of equations and their transformation properties.
• Linear algebra is about vectors and linear functions; that is, using arithmetic on columns of numbers called vectors and arrays of numbers called matrices to create new columns and arrays of numbers.
• The general linear equation is represented as
    a1x1 + a2x2 + ... + anxn = b
  where
    a = represents the coefficients
    x = represents the unknowns
    b = represents the constant
• Formally, a vector space is a set of vectors which is closed under addition and multiplication by real numbers. A subspace is a subset of a vector space which is a vector space itself, e.g. the plane z = 0 is a subspace of R³.
• If all vectors in a vector space may be expressed as linear combinations of v1, ..., vn, then v1, ..., vn span the space.
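As a small illustration of linear combinations and spanning, the following pure-Python sketch (the vectors and the function name are illustrative, not from the text) expresses a target vector in R² as a linear combination of two spanning vectors by solving the 2 × 2 system with Cramer's rule:

```python
# Express target t as c1*v1 + c2*v2 in R^2 using Cramer's rule.
# The vectors are illustrative; any two non-parallel vectors span R^2.
def linear_combination_coeffs(v1, v2, t):
    # Determinant of the matrix whose columns are v1 and v2.
    det = v1[0] * v2[1] - v2[0] * v1[1]
    if det == 0:
        raise ValueError("v1 and v2 do not span R^2")
    c1 = (t[0] * v2[1] - v2[0] * t[1]) / det
    c2 = (v1[0] * t[1] - t[0] * v1[1]) / det
    return c1, c2

c1, c2 = linear_combination_coeffs((1, 0), (1, 1), (3, 2))
# (3, 2) = 1*(1, 0) + 2*(1, 1)
```

With v1 = (1, 0) and v2 = (1, 1), the target (3, 2) comes out as 1·v1 + 2·v2, confirming that the two vectors span the space.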
• A basis is a set of linearly independent vectors which span the space. The dimension of a space is the number of "degrees of freedom" of the space; it is the number of vectors in any basis for the space.
• A basis is a maximal set of linearly independent vectors and a minimal set of spanning vectors.
• Two vectors are orthogonal if their dot product is 0. An orthogonal basis consists of orthogonal vectors; an orthonormal basis consists of orthogonal vectors of unit length.
• Functions of several variables are often presented in one line, such as
    f(x, y) = 3x + 5y

Vector Addition

• Numbers : Both 3 and 5 are numbers and so is 3 + 5.
• Vectors : [1, 1, 0]^T + [0, 1, 1]^T = [1, 2, 1]^T
• Polynomials : If p(x) = 1 + x − 2x² + 3x³ and q(x) = −x + 3x² − 3x³ + x⁴, then their sum p(x) + q(x) = 1 + x² + x⁴ is a new polynomial.
• Power series : If f(x) = 1 + x + x² + x³ + ... and g(x) = 1 − x + x² − x³ + ..., then f(x) + g(x) = 2 + 2x² + 2x⁴ + ... is also a power series.
• Functions : If f(x) = e^x and g(x) = e^(−x), then their sum f(x) + g(x) is the new function 2 cosh x.

Transposes and Inner Products

• A collection of variables may be treated as a single entity by writing them as a vector. For example, the three variables x1, x2 and x3 may be written as the vector x = [x1, x2, x3]^T.
• Vectors can be written as column vectors, where the variables go down the page, or as row vectors, where the variables go across the page. To turn a column vector into a row vector we use the transpose operator,
    x^T = [x1, x2, x3]
• The transpose operator also turns row vectors into column vectors. We now define the inner product of two vectors,
    x^T y = [x1, x2, x3] [y1, y2, y3]^T = x1y1 + x2y2 + x3y3 = Σ (i = 1 to 3) xi yi
  which is seen to be a scalar. The outer product of two vectors produces a matrix,
    x y^T = [x1y1  x1y2  x1y3]
            [x2y1  x2y2  x2y3]
            [x3y1  x3y2  x3y3]

TECHNICAL PUBLICATIONS® - an up-thrust for knowledge
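The inner and outer products defined above can be sketched in a few lines of pure Python (the vector values are illustrative):

```python
# Inner and outer products of two 3-vectors (pure Python).
def inner(x, y):
    # x^T y : a scalar
    return sum(xi * yi for xi, yi in zip(x, y))

def outer(x, y):
    # x y^T : a matrix whose (i, j) entry is x[i] * y[j]
    return [[xi * yj for yj in y] for xi in x]

x = [1, 2, 3]
y = [4, 5, 6]
s = inner(x, y)   # 1*4 + 2*5 + 3*6 = 32
M = outer(x, y)   # 3 x 3 matrix; M[0][2] = 1*6 = 6
```

Note that `inner` returns a scalar while `outer` returns a full matrix, matching the dimensional contrast described in the text.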
• An N × M matrix has N rows and M columns. The ij-th entry of a matrix is the entry on the i-th row and j-th column. Given a matrix A, the ij-th entry is written as A_ij. When the transpose operator is applied to a matrix, the i-th row becomes the i-th column. That is, if
    A = [a11  a12  a13]
        [a21  a22  a23]
        [a31  a32  a33]
  then
    A^T = [a11  a21  a31]
          [a12  a22  a32]
          [a13  a23  a33]
• A matrix is symmetric if A_ij = A_ji. Another way to say this is that, for symmetric matrices, A = A^T.
• Two matrices can be multiplied if the number of columns in the first matrix equals the number of rows in the second. Multiplying A, an N × M matrix, by B, an M × K matrix, results in C, an N × K matrix. The ij-th entry in C is the inner product between the i-th row in A and the j-th column in B. For example (the entries here are illustrative, as the original figures are unreadable),
    [1  3  7] [1  0]   [8  10]
    [2  3  4] [0  1] = [6   7]
              [1  1]
• Given two matrices A and B we note that (AB)^T = B^T A^T.

Example : Given the matrices A = [1 2; 3 4] and B = [5 6; 7 8] (illustrative values), find C = AB by the inner product method.
Solution : Here the matrix C will also be 2 × 2, with
    c11 = [1, 2] · [5, 7] = 1×5 + 2×7 = 19
    c12 = [1, 2] · [6, 8] = 1×6 + 2×8 = 22
    c21 = [3, 4] · [5, 7] = 3×5 + 4×7 = 43
    c22 = [3, 4] · [6, 8] = 3×6 + 4×8 = 50
so
    C = [19  22]
        [43  50]

Outer Product

• In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix.
• More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product and can be used to define the tensor algebra.
• The outer product contrasts with :
  a) The dot product, which takes a pair of coordinate vectors as input and produces a scalar.
  b) The Kronecker product, which takes a pair of matrices as input and produces a block matrix.
• Let A and B be m × n and n × p matrices respectively. The product C = AB can then be written as a sum of outer products.
That is,
    C = a*1 b1* + a*2 b2* + a*3 b3* + ... + a*n bn*
where a*k is the k-th column of A and bk* is the k-th row of B : C is the m × p matrix given by the sum of the n outer product matrices obtained by multiplying each column of A times the corresponding row of B.
• Properties of an outer product :
  1. The result of the outer product of an m-vector and an n-vector is an m × n rectangular matrix.
  2. The outer product is not commutative. That is, u ⊗ v ≠ v ⊗ u.
  3. Multiplying the resultant product w = u ⊗ v with the second vector v gives the first factor u scaled by the square norm of the second factor v. That is,
       w v = (u v^T) v = u ||v||²

Example : Given the matrices A = [1 2; 3 4] and B = [5 6; 7 8] (illustrative values), find C = AB by the outer product method.
Solution :
    C = a*1 b1* + a*2 b2*
      = [1] [5  6] + [2] [7  8]
        [3]          [4]
      = [ 5   6] + [14  16] = [19  22]
        [15  18]   [28  32]   [43  50]

Inverse

• Given a matrix X, its inverse X⁻¹ is defined by the properties
    X⁻¹X = I
    XX⁻¹ = I
  where I is the identity matrix. The inverse of a diagonal matrix with entries d_ii is another diagonal matrix with entries 1/d_ii. This satisfies the definition of an inverse, e.g.
    [4  0  0] [1/4  0   0 ]   [1  0  0]
    [0  1  0] [ 0   1   0 ] = [0  1  0]
    [0  0  6] [ 0   0  1/6]   [0  0  1]
• If X has no inverse, we say X is singular or non-invertible.
• Properties of the inverse :
  1. If A is a square matrix and B is the inverse of A, then A is the inverse of B, since AB = I = BA. So we have the identity (A⁻¹)⁻¹ = A.
  2. Notice that B⁻¹A⁻¹AB = B⁻¹B = I = ABB⁻¹A⁻¹, so (AB)⁻¹ = B⁻¹A⁻¹.

Eigen Values and Eigen Vectors

• The eigen vectors X and eigen values λ of a matrix A satisfy
    AX = λX
  where X is an n × 1 vector and λ is a constant. If A is an n × n matrix, the equation can be rewritten as
    (A − λI) X = 0
  where I is the n × n identity matrix.
• Since X is required to be nonzero, the eigen values must satisfy
    det (A − λI) = 0
  which is called the characteristic equation. Solving it for values of λ gives the eigen values of the matrix A.
• When AX = λX for some X ≠ 0, we call such an X an eigen vector of the matrix A.
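For the 2 × 2 case, the characteristic-equation recipe can be sketched in pure Python using det(A − λI) = λ² − trace(A)·λ + det(A) (the matrices used here are illustrative):

```python
import math

# Eigen values of a 2x2 matrix from its characteristic equation:
# lambda^2 - trace(A)*lambda + det(A) = 0  (pure Python sketch).
def eig2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det          # discriminant of the quadratic
    if disc < 0:
        raise ValueError("complex eigen values")
    r = math.sqrt(disc)
    return sorted([(tr - r) / 2, (tr + r) / 2])

vals = eig2x2([[3, -2], [1, 0]])      # trace 3, det 2 -> roots 1 and 2
```

For [[3, -2], [1, 0]] the quadratic is λ² − 3λ + 2 = (λ − 1)(λ − 2), so the routine returns the eigen values 1 and 2.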
• The eigen vectors of A are associated to an eigen value. Hence, if λ1 is an eigen value of A and AX = λ1X, we can label this eigen vector as X1. Note again that, in order to be an eigen vector, X must be nonzero.
• There is also a geometric significance to eigen vectors. When you have a nonzero vector which, when multiplied by a matrix, results in another vector which is parallel to the first or equal to 0, this vector is called an eigen vector of the matrix. This is the meaning when the vectors are in Rⁿ.

Example : Find all eigen values and their eigen spaces for
    A = [3  −2]
        [1   0]
Solution :
    A − λI = [3  −2] − λ [1  0] = [3−λ  −2]
             [1   0]     [0  1]   [1    −λ]
The characteristic equation is
    det (A − λI) = (3 − λ)(−λ) − (−2)(1) = 0
    λ² − 3λ + 2 = 0
    (λ − 1)(λ − 2) = 0
We find the eigen values λ1 = 1, λ2 = 2. We next find the eigen vectors associated with each eigen value. For λ1 = 1, (A − I)X = 0 gives x1 = x2, so the eigen space is spanned by [1, 1]^T; for λ2 = 2, (A − 2I)X = 0 gives x1 = 2x2, so the eigen space is spanned by [2, 1]^T.

Example : Given that 2 is an eigen value for
    A = [4  −1  6]
        [2   1  6]
        [2  −1  8]
find a basis of its eigen space.
Solution :
    A − 2I = [2  −1  6]
             [2  −1  6]
             [2  −1  6]
Therefore (A − 2I)X = 0 becomes 2x1 − x2 + 6x3 = 0, or x2 = 2x1 + 6x3, where we select x1 and x3 as free variables only to avoid fractions. The solution set in parametric form is
    X = [x1, 2x1 + 6x3, x3]^T = x1 [1, 2, 0]^T + x3 [0, 6, 1]^T
A basis for the eigen space :
    u1 = [1, 2, 0]^T and u2 = [0, 6, 1]^T

Example : Find all eigen values for
    A = [5  −2   6  −1]
        [0   3  −8   0]
        [0   0   5   4]
        [0   0   1   1]
Solution :
    A − λI = [5−λ  −2    6   −1]
             [0    3−λ  −8    0]
             [0    0    5−λ   4]
             [0    0    1    1−λ]
Expanding along the first column,
    det (A − λI) = (5 − λ) det [3−λ  −8    0]
                               [0    5−λ   4]
                               [0    1    1−λ]
                 = (5 − λ)(3 − λ) det [5−λ   4]
                                      [1    1−λ]
                 = (5 − λ)(3 − λ)[(5 − λ)(1 − λ) − 4] = 0
There are 4 roots :
    (5 − λ) = 0 ⇒ λ = 5
    (3 − λ) = 0 ⇒ λ = 3
    (5 − λ)(1 − λ) − 4 = 0 ⇒ λ² − 6λ + 1 = 0 ⇒ λ = (6 ± √(36 − 4)) / 2 = 3 ± 2√2

Example : For the given matrix A, find out the eigen values.
    A = [−6  3]
        [ 5  5]
Solution :
    A − λI = [−6−λ   3 ]
             [ 5    5−λ]
    det (A − λI) = (−6 − λ)(5 − λ) − (3)(5) = λ² + λ − 45 ≈ (λ − 6.22)(λ + 7.22)
so the eigen values are λ ≈ 6.22 and λ ≈ −7.22.

Singular Values and Singular Vectors

• A singular value and pair of singular vectors of a square or rectangular matrix A are a nonnegative scalar σ and two nonzero vectors u and v so that
    Av = σu
    A^H u = σv
• The term "eigen value" is a partial translation of the German "Eigenwert." A complete translation would be something like "own value" or "characteristic value," but these are rarely used. The term "singular value" relates to the distance between a matrix and the set of singular matrices.
• Eigen values play an important role in situations where the matrix is a transformation from one vector space onto itself. Systems of linear ordinary differential equations are the primary examples. The values of λ can correspond to frequencies of vibration, or critical values of stability parameters, or energy levels of atoms.
• Singular values play an important role where the matrix is a transformation from one vector space to a different vector space, possibly with a different dimension. Systems of over- or underdetermined algebraic equations are the primary examples.
• The definitions of eigen vectors and singular vectors do not specify their normalization. An eigen vector x, or a pair of singular vectors u and v, can be scaled by any nonzero factor without changing any other important properties.
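One way to compute singular values in the real 2 × 2 case, sketched in pure Python, is via the eigen values of AᵀA: the singular values of A are their square roots (the matrix used below is an illustrative choice, not from the text):

```python
import math

# Singular values of a real 2x2 matrix A: square roots of the eigen
# values of A^T A (which is symmetric, so its eigen values are real
# and non-negative). A pure-Python sketch for the 2x2 case only.
def singular_values_2x2(A):
    (a, b), (c, d) = A
    # Entries of S = A^T A
    s11, s12, s22 = a * a + c * c, a * b + c * d, b * b + d * d
    tr, det = s11 + s22, s11 * s22 - s12 * s12
    r = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam1, lam2 = (tr + r) / 2, (tr - r) / 2
    return math.sqrt(lam1), math.sqrt(max(lam2, 0.0))

# A diagonal matrix's singular values are the absolute diagonal entries.
s = singular_values_2x2([[3, 0], [0, -2]])   # (3.0, 2.0)
```

Note that the eigen values of [[3, 0], [0, −2]] are 3 and −2, while its singular values are 3 and 2 — singular values are always non-negative.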
• Eigen vectors of symmetric matrices are usually normalized to have Euclidean length equal to one, ||x||₂ = 1. On the other hand, the eigen vectors of nonsymmetric matrices often have different normalizations in different contexts. Singular vectors are almost always normalized to have Euclidean length equal to one, ||u||₂ = ||v||₂ = 1. You can still multiply eigen vectors, or pairs of singular vectors, by −1 without changing their lengths.
• When the singular values of a matrix are collected together, D is a diagonal matrix of singular values.

1.2 Introduction and Motivation for Machine Learning

• Machine Learning (ML) is a sub-field of Artificial Intelligence (AI) which is concerned with developing computational theories of learning and building learning machines.
• Learning is the acquisition of knowledge or skills through study, instruction and practice. It is also the discovery of new facts and theories through observation and experiment.
• Machine Learning Definition : A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
• Machine learning is programming computers to optimize a performance criterion using example data or past experience. Application of machine learning methods to large databases is called data mining.
• It is very hard to write programs that solve problems like recognizing a human face. We do not know what program to write because we don't know how our brain does it. Instead of writing a program by hand, it is possible to collect lots of examples that specify the correct output for a given input.
• A machine learning algorithm then takes these examples and produces a program that does the job. The program produced by the learning algorithm may look very different from a typical hand-written program; it may contain millions of numbers. If we do it right, the program works for new cases as well as the ones we trained it on.
• The main goal of machine learning is to devise learning algorithms that do the learning automatically, without human intervention or assistance. The machine learning paradigm can be viewed as "programming by example." Another goal is to develop computational models of the human learning process and perform computer simulations.
• The goal of machine learning is to build computer systems that can adapt and learn from their experience.
• An algorithm is used to solve a problem on a computer. An algorithm is a sequence of instructions that transforms the input to output. For example, the addition of four numbers is carried out by giving the four numbers as input to the algorithm; the output is the sum of all four numbers. For the same task there may be various algorithms, and it is of interest to find the most efficient one, requiring the least number of instructions, or memory, or both.
• For some tasks, however, we do not have an algorithm.

1.2.1 How Machines Learn ?

• Machine learning typically follows three phases :
  1. Training : A training set of examples of correct behavior is analyzed and some representation of the newly learnt knowledge is stored. This is often some form of rules.
  2. Validation : The rules are checked and, if necessary, additional training is given. Sometimes additional test data are used; alternatively, a human expert or some other automatic knowledge-based component may validate the rules. The role of this tester is often called the opponent.
  3. Application : The rules are used in responding to some new situation.
• Fig. 1.2.1 shows the phases of ML. [Figure 1.2.1 : Phases of ML - training adds new knowledge to existing knowledge, which is validated against test data and then applied.]

1.2.2 Why Machine Learning is Important ?

• Machine learning algorithms can, by
generalizing from examples, figure out how to perform important tasks.
• Machine learning provides business insight and intelligence. Decision makers are provided with insights into their organizations. This adaptive technology is being used by global enterprises to gain a competitive edge.
• Machine learning algorithms discover the relationships between the variables of a system (input, output and hidden) from direct samples of the system.
• Following are some of the reasons why machine learning is important :
  1. Some tasks cannot be defined well, except by examples.
  2. Relationships and correlations can be hidden within large amounts of data. To solve these problems, machine learning and data mining may be able to find these relationships.
  3. Human designers often produce machines that do not work as well as desired in the environments in which they are used.
  4. The amount of knowledge available about certain tasks might be too large for explicit encoding by humans.
  5. Environments change from time to time.
  6. New knowledge about tasks is constantly being discovered by humans.
• Machine learning also helps us find solutions to many problems in computer vision, speech recognition and robotics. Machine learning uses the theory of statistics in building mathematical models, because the core task is making inference from a sample.
• Learning is used when :
  1. Human expertise does not exist (navigating on Mars).
  2. Humans are unable to explain their expertise (speech recognition).
  3. The solution changes in time (routing on a computer network).
  4. The solution needs to be adapted to particular cases (user biometrics).

1.2.3 Ingredients of Machine Learning

• The ingredients of machine learning are as follows :
  1. Tasks : The problems that can be solved with machine learning. A task is an abstract representation of a problem. The standard methodology in machine learning is to learn one task at a time; large problems are broken into small, reasonably independent sub-problems that are learned separately and then recombined. Predictive tasks perform inference on the current data in order to make predictions; descriptive tasks characterize the general properties of the data in the database.
  2. Models : The output of machine learning. Different models are geometric models, probabilistic models, logical models, grouping and grading. The model-based approach seeks to create a solution tailored to each new application : instead of having to transform your problem to fit some standard algorithm, in model-based machine learning you design the algorithm precisely to fit your problem. A model is just made up of a set of assumptions, expressed in a precise mathematical form. These assumptions include the number and types of variables in the problem
The standard methodology in machine learning is to learn one task at a time. Large problems are broken into small, reasonably independent sub-problems that are learned separately and then recombined. Predictive tasks perform inference on the current data in order to make Predictions. Descriptive tasks characterize the general properties of the data in the database. 2 Models : The output of machine learning. Different models are geometric models, probabilistic models, logical models, grouping and grading. * The model-based approach seeks to create modified solution tailored to each new application, Instead of having to transform Your problem to fit some standard algorithm, in model-based machine learning you design the algorithm precisely to fit your problem. * Model is just made up of set of assumptions, expressed in a precise mathematical form. These assumptions include the number and types of variables in the problem TECHNICAL PUBLICATIONS® = an up-thrust for knowledge Machine Learning y 1-14 Intrductin (0 Machine Leeming vhat the effect of changin, iables affect each other, and what the Bing one domain, which variables variable. variable is on another varial / eke itd veo models are classified as : Geometric P a Machine learning sa and logical model. horses of machine learning. A good feature representation jg Features : The worl s hh performance in any machine learning task. Be ory an initial set of measured data and builds derived ative, non redundant, facilitating the subsequent central to achieving hi Feature extraction starts from values intended to be inform jamming and generalization steps. Feature selection is a process that chooses a subset of features fro i ature selection is a pl m the’ orig fee ‘0 that the feature space is optimally reduced according to a certain atures: s criterion. EE] Types of Machine + Learning is essential for unknown environments, ie. when designer lacks the omniscience. 
• Learning simply means incorporating information from the training examples into the system.
• Learning is any change in a system that allows it to perform better the second time on repetition of the same task, or on another task drawn from the same population. One part of learning is acquiring knowledge and new information; the other part is problem-solving.
• Supervised and unsupervised learning are the main types of machine learning methods.
• A computational learning model should be clear about the following aspects :
  1. Learner : Who or what is doing the learning, for example a program or an algorithm.
  2. Domain : What is being learned ?
  3. Goal : Why the learning is done ?
  4. Representation : The way the objects to be learned are represented.
  5. Algorithmic technology : The algorithmic framework to be used.
  6. Information source : The information (training data) the program uses for learning.
  7. Training scenario : The description of the learning process.
• To learn means to get knowledge of something by study, experience or being taught.
• Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as sensor data or databases. Machine learning is usually divided into two main types : supervised learning and unsupervised learning.
• Why do machine learning ?
  1. To understand and improve the efficiency of human learning.
  2. To discover new things or structure that is unknown to humans (Example : Data mining).
  3. To fill in skeletal or incomplete specifications about a domain.

1.3.1 Supervised Learning

• Supervised learning is the machine learning task of inferring a function from supervised training data. The training data consist of a set of training examples. The task of the supervised learner is to predict the output behavior of a system for any set of input values, after an initial training phase.
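As a concrete sketch of learning a function from training pairs, the following minimal perceptron-style learner (pure Python; the AND data set, unit learning rate and epoch count are illustrative choices, not from the text) adjusts its weights by error correction:

```python
# A minimal perceptron-style supervised learner (pure Python).
def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def train(samples, epochs=10):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, b, x)   # error signal
            # Error-correction update of weights and bias
            w = [wi + error * xi for wi, xi in zip(w, x)]
            b += error
    return w, b

# (input vector, target) training pairs for the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
outputs = [predict(w, b, x) for x, _ in data]   # [0, 0, 0, 1]
```

After training, the learned weights reproduce the desired targets for every training pair; because the model generalizes a linear decision rule, it also classifies any new point on the correct side of the learned boundary.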
• Supervised learning is learning in which the network is trained by providing it with input and matching output patterns. These input-output pairs are usually provided by an external teacher.
• Human learning is based on past experiences. A computer does not have experiences. A computer system learns from data, which represent some "past experiences" of an application domain.
• The goal is to learn a target function that can be used to predict the values of a discrete class attribute, e.g. approved or not-approved, and high-risk or low-risk. The task is commonly called supervised learning, classification or inductive learning.
• Training data includes both the input and the desired results. For some examples the correct results (targets) are known and are given in input to the model during the learning process. The construction of a proper training, validation and test set is crucial. These methods are usually fast and accurate.
• The model has to be able to generalize : give the correct results when new data are given in input, without knowing the target a priori.
• In supervised learning, each example is a pair consisting of an input object and a desired output value.
• A supervised learning algorithm analyzes the training data and produces an inferred function, which is called a classifier or a regression function. Fig. 1.3.1 shows the supervised learning process. [Figure 1.3.1 : Supervised learning process - training data is fed to a training algorithm, and the resulting model is then tested.]
• The learned model helps the system to perform the task better as compared to no learning.
• Each input vector requires a corresponding target vector.
    Training Pair = (Input Vector, Target Vector)
• Fig. 1.3.2 shows the input vector.
[Figure 1.3.2 : Input vector - the input is presented to a neural network, the actual output is compared with the desired output, and an error signal is generated.]
• Supervised learning denotes a method in which some input vectors are collected and presented to the network. The output computed by the network is observed, and the deviation from the expected answer is measured. The weights are adjusted according to the magnitude of the error, in the way defined by the learning algorithm.
• This is learning with error correction; the perceptron learning rule is of this kind. Other variants use reinforcement, or learning with reinforcement.
• In order to solve a given problem of supervised learning, the following steps are performed :
  1. Find out the type of training examples.
  2. Collect a training set.
  3. Determine the input feature representation of the learned function.
  4. Determine the structure of the learned function and the corresponding learning algorithm.
  5. Complete the design and then run the learning algorithm on the collected training set.
  6. Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set.

1.3.2 Unsupervised Learning

• The model is not provided with the correct results during the training. It can be used to cluster the input data in classes on the basis of their statistical properties only (cluster significance and labeling).
• The labeling can be carried out even if the labels are only available for a small number of objects representative of the desired classes.
• All similar input patterns are grouped together as clusters. If a matching pattern is not found, a new cluster is formed.
• There is no error feedback. An external teacher is not used; learning is based only upon local information. It is also referred to as self-organization.
• These methods are called unsupervised because they do not need a teacher or supervisor to label a set of training examples.
• Only the original data is required to start the analysis.
• In contrast to supervised learning, unsupervised or self-organized learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns and learns how to classify input data into appropriate categories.
• Unsupervised learning algorithms aim to learn rapidly and can be used in real-time. Unsupervised learning is frequently employed for data clustering, feature extraction etc.
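The clustering behavior described above can be sketched as a tiny 1-D k-means loop (pure Python; the data points and the initial centers are illustrative):

```python
# Unsupervised clustering sketch: 1-D k-means with fixed initial
# centers. No labels are used; groups emerge from the data alone.
def kmeans_1d(points, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centers, clusters = kmeans_1d(data, centers=[0.0, 10.0])
# Two clusters emerge, centered near 1.0 and 8.0, with no labels given.
```

The two groups in the data are recovered purely from the statistical structure of the inputs, which is exactly the self-organizing behavior the text describes.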
Supervised learning is also called classification, EEX] semi-supervised Learning en ‘ A eee ee both labeled and unlabeled data to improve toon ig. The goal is to learn a predictor that predicts future test data i 'e predictor learned from the labeled training data alone. * Semi-supervised learnin, is moti it F better and cheaper. SB ‘tivated by its practical value in learning faster, In many real world appli applications, it i unlabeled data « ‘plications, it is relatively easy to acquire a large amount of * For example, do * cuments can om sarecleee ee be crawled from the Web, images can be obtained : ras, ai their corresponding nee - speech can be collected from broadcast. However, ¥ for the prediction task, such as sentiment orientation, TECHN ICAL PUBLICATIONS® - an up-thrust for knowedk 190 , ming 1 galt oe Introduction to Machine Learning surasion detection and phonetic tronscy . ‘ipt, oft i i moxpensive ADOFAOLY experiments ipt, often requires slow human annotation ae : many practical learning domains, there is a large supply of unlabeled data but * ed labeled data, which : Fite aoe dees Mi Ce expensive to generate. For example : Text processing, ee xing, bioinformatics etc, ’ gemi-supervised Learning makes use of both labeled and unlabeled data for training, typically a small amount of labeled data with a large amount of unlabeled data. When unlabeled data is used in conjunction with a small amount of labeled data, it can produce considerable improvement in learning accuracy. , semi-supervised learning sometimes enables predictive model testing at reduced cost. , semi-supervised classification : Training on labeled data exploits additional unlabeled data, frequently resulting in a more accurate classifier. « Semi-supervised clustering : Uses small amount of labeled data to aid and bias the clustering of unlabeled data. [EG Reinforced Learnings « User will get immediate feedback in supervised learning and no feedback from unsupervised learning. 
But in the reinforced learning, you will get delayed scalar feedback. « Reinforcement learning is learning what to do and how to map situations to actions. The learner is not told which actions to take. Fig. 1.3.3 shows concept of reinforced learning. z | | ST Si: | ENVIRONMENT * Reinforced learning is 5 deals with agents that must sense and act upon Fig. 1.3.3 Reinforced learning their environment. It e 7 . combines classical Artificial Intelligence and machine learning techniques. It allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize its performance. Simple reward feedback is required for the agent to Team its behavior; this is known as the reinforcement signal. Situation |_| Reward Action Ss Tt a TECHNICAL PUBLICATIONS® - an up-thrust for knowledge dq PE TEEELE 1-20 Introduction to Machine Learning Machine Leaming ement learning j, t distinguishing features of reinforc re * Two most important di trial-and-error and delayed reve n agent can improve its’performance }; . ii gorithms al . . 7 + With reife pene we the environment. This environmental feedback ig using the feedback i called the reward signal. oo. ; ulated experience, the agent needs to learn which action ro take in * Based on accum in order to obtain a desired long term goal. Essential ly actions * a given situation in 0 ds need to reinforced. Reinforcement learning has d to long term rewar ei on — Sih emul theory, Markov decision processes and game theory, conni Example of reinforcement learning : A mobile robot decides whether it should . © a 7m ‘ a a new room in search of more trash to collect or start trying to find its way back to its battery recharging station. It makes its decision based on how quickly and easily it has been able to find the recharger in the past. EEESI Elements of Reinforcement Learning + Reinforcement learning elements are as follows : 1. Policy 2. Reward function 3. Value function 4. 
Model of the environment + Fig. 1.34 shows elements of RL. * Policy : Policy defines the learning agent behavior for given time period, It is a mapping from perceived states of the fee i ei environment to actions to be taken when in ‘Avironment those states. * Reward function : Reward function is used
