UNIT I

Syllabus
Review of Linear Algebra for machine learning; Introduction and motivation for machine learning; Examples of machine learning applications; Vapnik-Chervonenkis (VC) dimension; Probably Approximately Correct (PAC) learning; Hypothesis spaces; Inductive bias; Generalization; Bias variance trade-off.

Contents
1.1 Review of Linear Algebra for Machine Learning
1.2 Introduction and Motivation for Machine Learning
1.3 Types of Machine Learning
1.4 Examples of Machine Learning Applications
1.5 Vapnik-Chervonenkis (VC) Dimension
1.6 Probably Approximately Correct (PAC) Learning
1.7 Hypothesis Spaces
1.8 Inductive Bias
1.9 Bias Variance Trade-Off
1.10 Two Marks Questions with Answers

1.1 Review of Linear Algebra for Machine Learning

• Linear algebra is the study of linear combinations. It is the study of vector spaces, lines and planes, and of the mappings that are required to perform linear transformations. It includes vectors, matrices and linear functions: the study of linear sets of equations and their transformation properties.
• Linear algebra is about vectors and linear functions. That is, it uses arithmetic on columns of numbers called vectors and arrays of numbers called matrices to create new columns and arrays of numbers.
• The general linear equation is represented as
  a1 x1 + a2 x2 + ... + an xn = b
  where the ai represent the coefficients, the xi represent the unknowns and b represents the constant.
• Formally, a vector space is a set of vectors which is closed under addition and multiplication by real numbers. A subspace is a subset of a vector space which is a vector space itself, e.g. the plane z = 0 is a subspace of R^3.
• If all vectors in a vector space may be expressed as linear combinations of v1, ..., vn, then v1, ..., vn span the space.
• A basis is a set of linearly independent vectors which span the space. The dimension of a space is the number of "degrees of freedom" of the space; it is the number of vectors in any basis for the space. Equivalently, a basis is a maximal set of linearly independent vectors and a minimal set of spanning vectors.
• Two vectors are orthogonal if their dot product is 0. An orthogonal basis consists of orthogonal vectors; an orthonormal basis consists of orthogonal vectors of unit length.
• Functions of several variables are often presented in one line, such as f(x, y) = 3x + 5y.
• Vector addition :
  - Numbers : Both 3 and 5 are numbers and so is 3 + 5.
  - Vectors : Vectors of the same dimension add componentwise.
  - Polynomials : If p(x) = 1 + x − 2x^2 + 3x^3 and q(x) = x + 3x^2 − 3x^3 + x^4, then their sum p(x) + q(x) is the new polynomial 1 + 2x + x^2 + x^4.
  - Power series : If f(x) = 1 + x + x^2/2! + ... and g(x) = 1 − x + x^2/2! − ..., then f(x) + g(x) = 2 + x^2 + ... is also a power series.
  - Functions : If f(x) = e^x and g(x) = e^(−x), then their sum f(x) + g(x) is the new function 2 cosh x.

Transposes and Inner Products

• A collection of variables may be treated as a single entity by writing them as a vector. For example, the three variables x1, x2 and x3 may be written as the vector x = (x1, x2, x3)^T.
• Vectors can be written as column vectors, where the variables go down the page, or as row vectors, where the variables go across the page. To turn a column vector into a row vector we use the transpose operator: x^T = [x1 x2 x3].
• The transpose operator also turns row vectors into column vectors.
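These operations are easy to check numerically. A minimal NumPy sketch; the particular vectors are illustrative choices, not taken from the text:

```python
import numpy as np

# Vector addition is componentwise, mirroring the polynomial and
# power-series examples above.
u = np.array([1.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
print(u + v)            # [1. 2. 1.]

# A column vector and its transpose (a row vector).
x = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column vector
print(x.T)              # [[1. 2. 3.]] : a 1 x 3 row vector
print(x.T.T)            # transposing twice recovers the column vector
```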
• We now define the inner product of two vectors x and y :
  x^T y = [x1 x2 x3] (y1, y2, y3)^T = x1 y1 + x2 y2 + x3 y3 = Σ_{i=1..3} xi yi
  which is seen to be a scalar.
• The outer product of two vectors produces a matrix :
  x y^T = [ x1y1  x1y2  x1y3
            x2y1  x2y2  x2y3
            x3y1  x3y2  x3y3 ]
• An N × M matrix has N rows and M columns. The ij-th entry of a matrix is the entry on the i-th row and j-th column. Given a matrix A, the ij-th entry is written as A_ij. When the transpose operator is applied, the i-th row becomes the i-th column. That is, if
  A = [ a11 a12 a13
        a21 a22 a23
        a31 a32 a33 ]
  then
  A^T = [ a11 a21 a31
          a12 a22 a32
          a13 a23 a33 ]
• A matrix is symmetric if A_ij = A_ji. Another way to say this is that, for symmetric matrices, A = A^T.
• Two matrices can be multiplied if the number of columns in the first matrix equals the number of rows in the second. Multiplying A, an N × M matrix, by B, an M × K matrix, results in C, an N × K matrix. The ij-th entry in C is the inner product between the i-th row in A and the j-th column in B. For example, multiplying a 2 × 3 matrix by a 3 × 2 matrix gives a 2 × 2 matrix.
• Given two matrices A and B we note that (AB)^T = B^T A^T.

Example : Given two 2 × 2 matrices A and B, find C = AB by the inner product method.
Solution : The matrix C is also 2 × 2. Each entry of C is the inner product of a row of A with a column of B :
  c11 = (row 1 of A) · (column 1 of B), c12 = (row 1 of A) · (column 2 of B),
  c21 = (row 2 of A) · (column 1 of B), c22 = (row 2 of A) · (column 2 of B),
  so that C = [ c11 c12 ; c21 c22 ].
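A sketch of the inner product method with assumed 2 × 2 matrices (the entries are illustrative); it also checks the identity (AB)^T = B^T A^T:

```python
import numpy as np

A = np.array([[3., -1.],
              [2.,  0.]])      # assumed example matrix
B = np.array([[1.,  4.],
              [2., -1.]])      # assumed example matrix

# Inner product method: c_ij = (row i of A) . (column j of B)
C = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        C[i, j] = A[i, :] @ B[:, j]

assert np.allclose(C, A @ B)               # matches the built-in product
assert np.allclose((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T
print(C)
```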
Outer Product

• In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix.
• More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product and can be used to define the tensor algebra.
• The outer product contrasts with :
  a) The dot product, which takes a pair of coordinate vectors as input and produces a scalar.
  b) The Kronecker product, which takes a pair of matrices as input and produces a block matrix.
• Let A and B be m × n and n × p matrices respectively. The product C = AB is the matrix
  C = a*1 B1* + a*2 B2* + ... + a*n Bn*
  That is, C is the m × p matrix given by the sum of all the m × p outer product matrices obtained from multiplying each column of A times the corresponding row of B.
• Properties of an outer product :
  1. The result of an outer product u ⊗ v is a rectangular matrix.
  2. The outer product is not commutative. That is, u ⊗ v ≠ v ⊗ u.
  3. Multiplying the second vector v with the resultant product w = u ⊗ v gives the first factor u scaled by the squared norm of the second factor v :
     w v = (u v^T) v = u ||v||^2

Example : Given two 2 × 2 matrices A and B, find C = AB by the outer product method.
Solution : C = a*1 B1* + a*2 B2*, i.e. the outer product of the first column of A with the first row of B, plus the outer product of the second column of A with the second row of B. The result agrees with the inner product method of the previous example.

Inverse

• Given a matrix X, its inverse X^(−1) is defined by the properties
  X^(−1) X = I and X X^(−1) = I
  where I is the identity matrix. The inverse of a diagonal matrix with entries d_ii is another diagonal matrix with entries 1/d_ii. This satisfies the definition of an inverse, e.g.
  [ 4 0 0 ]   [ 1/4  0   0  ]   [ 1 0 0 ]
  [ 0 1 0 ] × [  0   1   0  ] = [ 0 1 0 ]
  [ 0 0 6 ]   [  0   0  1/6 ]   [ 0 0 1 ]
• If X has no inverse, we say X is singular or non-invertible.
• Properties of the inverse :
  1. If A is a square matrix and B is the inverse of A, then A is the inverse of B, since AB = I = BA. So we have the identity (A^(−1))^(−1) = A.
  2. Notice that B^(−1) A^(−1) A B = B^(−1) I B = I = A B B^(−1) A^(−1), so (AB)^(−1) = B^(−1) A^(−1).

Eigen Values and Eigen Vectors

• The eigen vectors x and eigen values λ of an n × n matrix A satisfy Ax = λx, where x is an n × 1 vector and λ is a constant. The equation can be rewritten as (A − λI) x = 0, where I is the n × n identity matrix.
• Since x is required to be nonzero, the eigen values must satisfy det(A − λI) = 0, which is called the characteristic equation. Solving it for values of λ gives the eigen values of matrix A.
• When Ax = λx for some x ≠ 0, we call such an x an eigen vector of the matrix A. The eigen vectors of A are associated to an eigen value. Hence, if λ1 is an eigen value of A and Ax = λ1 x, we can label this eigen vector as x1. Note again that in order to be an eigen vector, x must be nonzero.
• There is also a geometric significance to eigen vectors. When a nonzero vector, multiplied by a matrix, results in another vector which is parallel to the first or equal to 0, this vector is called an eigen vector of the matrix. This is the meaning when the vectors are in R^n.

Example : Find all eigen values and their eigen spaces for
  A = [ 3 −2
        1  0 ]
Solution :
  A − λI = [ 3 −2 ; 1 0 ] − λ [ 1 0 ; 0 1 ] = [ 3−λ  −2 ; 1  −λ ]
  The characteristic equation is det(A − λI) = (3 − λ)(−λ) − (−2) = 0
  λ^2 − 3λ + 2 = 0
  (λ − 1)(λ − 2) = 0
  We find eigen values λ1 = 1, λ2 = 2.
  We next find eigen vectors associated with each eigen value. For λ1 = 1, (A − I)x = 0 gives 2x1 − 2x2 = 0, i.e. x1 = x2, so (1, 1)^T spans the eigen space. For λ2 = 2, (A − 2I)x = 0 gives x1 − 2x2 = 0, so (2, 1)^T spans the eigen space.

Example : Given that 2 is an eigen value for
  A = [ 4 −1 6
        2  1 6
        2 −1 8 ]
find a basis of its eigen space.
Solution :
  A − 2I = [ 2 −1 6 ; 2 −1 6 ; 2 −1 6 ], which row reduces to [ 2 −1 6 ; 0 0 0 ; 0 0 0 ].
  Therefore (A − 2I)x = 0 becomes 2x1 − x2 + 6x3 = 0, or x2 = 2x1 + 6x3, where we select x1 and x3 as free variables only to avoid fractions. The solution set in parametric form is
  x = (x1, 2x1 + 6x3, x3)^T = x1 (1, 2, 0)^T + x3 (0, 6, 1)^T
  A basis for the eigen space : u1 = (1, 2, 0)^T and u2 = (0, 6, 1)^T.

Example : Find all eigen values for
  A = [ 5 −2  6 −1
        0  3 −8  0
        0  0  5  4
        0  0  1  1 ]
Solution :
  A − λI = [ 5−λ −2 6 −1 ; 0 3−λ −8 0 ; 0 0 5−λ 4 ; 0 0 1 1−λ ]
  Expanding along the first column,
  det(A − λI) = (5 − λ) det [ 3−λ −8 0 ; 0 5−λ 4 ; 0 1 1−λ ]
              = (5 − λ)(3 − λ)[(5 − λ)(1 − λ) − 4] = 0
  There are four roots :
  5 − λ = 0 gives λ = 5 ; 3 − λ = 0 gives λ = 3 ;
  (5 − λ)(1 − λ) − 4 = 0 gives λ^2 − 6λ + 1 = 0, so λ = (6 ± √(36 − 4)) / 2 = 3 ± 2√2.

Example : For the given matrix A, find the eigen values and eigen vectors :
  A = [ −6 3
         5 5 ]
Solution :
  A − λI = [ −6−λ  3 ; 5  5−λ ]
  det(A − λI) = (−6 − λ)(5 − λ) − (3)(5) = λ^2 + λ − 45 = (λ − 6.22)(λ + 7.22)
  so the eigen values are λ ≈ 6.22 and λ ≈ −7.22 (exactly, λ = (−1 ± √181)/2). The corresponding eigen vectors follow by solving (A − λI)x = 0 for each eigen value.
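The first eigen value example above (A = [3 −2 ; 1 0], eigen values 1 and 2) can be verified directly with NumPy:

```python
import numpy as np

A = np.array([[3., -2.],
              [1.,  0.]])

# numpy.linalg.eig returns the eigenvalues and unit-length eigenvectors
# (as columns of V) of a square matrix.
lam, V = np.linalg.eig(A)
print(lam)                     # eigenvalues 1 and 2 (order may vary)

# Verify A v = lambda v for each eigenpair.
for k in range(2):
    assert np.allclose(A @ V[:, k], lam[k] * V[:, k])
```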
Singular Values and Singular Vectors

• A singular value and pair of singular vectors of a square or rectangular matrix A are a nonnegative scalar σ and two nonzero vectors u and v so that
  A v = σ u,   A^H u = σ v.
• The term "eigen value" is a partial translation of the German "eigenwert." A complete translation would be something like "own value" or "characteristic value," but these are rarely used. The term "singular value" relates to the distance between a matrix and the set of singular matrices.
• Eigen values play an important role in situations where the matrix is a transformation from one vector space onto itself. Systems of linear ordinary differential equations are the primary examples. The values of λ can correspond to frequencies of vibration, or critical values of stability parameters, or energy levels of atoms.
• Singular values play an important role where the matrix is a transformation from one vector space to a different vector space, possibly with a different dimension. Systems of over- or underdetermined algebraic equations are the primary examples.
• The definitions of eigen vectors and singular vectors do not specify their normalization. An eigen vector x, or a pair of singular vectors u and v, can be scaled by any nonzero factor without changing any other important properties. Eigen vectors of symmetric matrices are usually normalized to have Euclidean length equal to one, ||x||_2 = 1. On the other hand, the eigen vectors of nonsymmetric matrices often have different normalizations in different contexts. Singular vectors are almost always normalized to have Euclidean length equal to one, ||u||_2 = ||v||_2 = 1. You can still multiply eigen vectors, or pairs of singular vectors, by −1 without changing their lengths.
• In the singular value decomposition A = U D V^T, D is a diagonal matrix of singular values.
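A short NumPy sketch of the singular value decomposition for an assumed rectangular matrix; each singular triple satisfies A v = σ u and A^T u = σ v:

```python
import numpy as np

# A rectangular matrix: a transformation from R^3 to R^2, so
# eigenvalues are not defined but singular values are.
A = np.array([[ 3., 1., 1.],
              [-1., 3., 1.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)                       # nonnegative singular values, descending

# Each singular triple satisfies A v = sigma u and A^T u = sigma v.
for k in range(len(s)):
    assert np.allclose(A @ Vt[k, :], s[k] * U[:, k])
    assert np.allclose(A.T @ U[:, k], s[k] * Vt[k, :])

# Reconstruct A = U D V^T with D = diag(s).
assert np.allclose(U @ np.diag(s) @ Vt, A)
```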
1.2 Introduction and Motivation for Machine Learning

• Machine Learning (ML) is a sub-field of Artificial Intelligence (AI) concerned with developing computational theories of learning and with building learning machines.
• Learning is a phenomenon and process which has manifestations of various aspects. The learning process includes gaining new symbolic knowledge and developing cognitive skills through instruction and practice. It is also the discovery of new facts and theories through observation and experiment.
• Machine Learning Definition : A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
• Machine learning is programming computers to optimize a performance criterion using example data or past experience. The application of machine learning methods to large databases is called data mining.
• It is very hard to write programs that solve problems like recognizing a human face. We do not know what program to write because we don't know how our brain does it. Instead of writing a program by hand, it is possible to collect lots of examples that specify the correct output for a given input.
• A machine learning algorithm then takes these examples and produces a program that does the job. The program produced by the learning algorithm may look very different from a typical hand-written program. It may contain millions of numbers. If we do it right, the program works for new cases as well as the ones we trained it on.
• The main goal of machine learning is to devise learning algorithms that do the learning automatically, without human intervention or assistance. The machine learning paradigm can be viewed as "programming by example." Another goal is to develop computational models of the human learning process and perform computer simulations.
• The goal of machine learning is to build computer systems that can adapt and learn from their experience.
• An algorithm is used to solve a problem on a computer. An algorithm is a sequence of instructions that transforms the input to output. For example, the addition of four numbers is carried out by giving the four numbers as input to the algorithm, whose output is the sum of all four numbers. For the same task there may be various algorithms. It is of interest to find the most efficient one, requiring the fewest instructions, or the least memory, or both. For some tasks, however, we do not have an algorithm.

How Machines Learn ?

• Machine learning typically follows three phases :
  1. Training : A training set of examples of correct behavior is analyzed and some representation of the newly learnt knowledge is stored. This is often some form of rules.
  2. Validation : The rules are checked and, if necessary, additional training is given. Sometimes additional test data are used; instead of a human expert, some other automatic knowledge-based component may validate the rules. The role of the tester is often called the opponent.
  3. Application : The rules are used in responding to some new situation.
• Fig. 1.2.1 : Phases of ML (training on existing knowledge, validation against test data, application producing new knowledge).

Why Machine Learning is Important ?

• Machine learning algorithms can figure out how to perform important tasks by generalizing from examples.
• Machine learning provides business insight and intelligence. Decision makers are provided with insights into their organizations. This adaptive technology is being used by global enterprises to gain a competitive edge.
• Machine learning algorithms discover the relationships between the variables of a system (input, output and hidden) from direct samples of the system.
• Following are some of the reasons machine learning is needed :
  1. Some tasks cannot be defined well, except by examples.
  2. Relationships and correlations can be hidden within large amounts of data. To solve these problems, machine learning and data mining may be able to find these relationships.
  3. Human designers often produce machines that do not work as well as desired in the environments in which they are used.
  4. The amount of knowledge available about certain tasks might be too large for explicit encoding by humans.
  5. Environments change from time to time.
  6. New knowledge about tasks is constantly being discovered by humans.
• Machine learning also helps us find solutions to many problems in computer vision, speech recognition and robotics. Machine learning uses the theory of statistics in building mathematical models, because the core task is making inference from a sample.
• Learning is used when :
  1. Human expertise does not exist (navigating on Mars),
  2. Humans are unable to explain their expertise (speech recognition),
  3. The solution changes in time (routing on a computer network),
  4. The solution needs to be adapted to particular cases (user biometrics).

Ingredients of Machine Learning

The ingredients of machine learning are as follows :
1. Tasks : The problems that can be solved with machine learning. A task is an abstract representation of a problem. The standard methodology in machine learning is to learn one task at a time. Large problems are broken into small, reasonably independent sub-problems that are learned separately and then recombined. Predictive tasks perform inference on the current data in order to make predictions. Descriptive tasks characterize the general properties of the data in the database.
2. Models : The output of machine learning. Different models are geometric models, probabilistic models, logical models, grouping and grading.
   • The model-based approach seeks to create a solution tailored to each new application. Instead of having to transform your problem to fit some standard algorithm, in model-based machine learning you design the algorithm precisely to fit your problem.
   • A model is just made up of a set of assumptions, expressed in a precise mathematical form. These assumptions include the number and types of variables in the problem domain, which variables affect each other, and what the effect of changing one variable is on another variable.
3. Features : The workhorses of machine learning. A good feature representation is central to achieving high performance in any machine learning task. Feature extraction starts from an initial set of measured data and builds derived values intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps. Feature selection is a process that chooses a subset of features from the original features so that the feature space is optimally reduced according to a certain criterion.
1.3 Types of Machine Learning

• Learning is essential for unknown environments, i.e. when the designer lacks omniscience. Learning simply means incorporating information from the training examples into the system.
• Learning is any change in a system that allows it to perform better the second time on repetition of the same task, or on another task drawn from the same population. One part of learning is acquiring knowledge and new information; the other part is problem-solving.
• Supervised and unsupervised learning are the main types of machine learning methods.
• A computational learning model should be clear about the following aspects :
  1. Learner : Who or what is doing the learning? For example, a program or an algorithm.
  2. Domain : What is being learned?
  3. Goal : Why the learning is done?
  4. Representation : The way the objects to be learned are represented.
  5. Algorithmic technology : The algorithmic framework to be used.
  6. Information source : The information (training data) the program uses for learning.
  7. Training scenario : The description of the learning process.
• To learn means to get knowledge of something by study, experience or being taught. Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as sensor data or databases. Machine learning is usually divided into two main types : supervised learning and unsupervised learning.
• Why do machine learning ?
  1. To understand and improve the efficiency of human learning.
  2. To discover new things or structure that is unknown to humans (example : data mining).
  3. To fill in skeletal or incomplete specifications about a domain.

1.3.1 Supervised Learning

• Supervised learning is the machine learning task of inferring a function from supervised training data. The training data consist of a set of training examples. The task of the supervised learner is to predict the output behavior of a system for any set of input values, after an initial training phase.
• Supervised learning is learning in which the network is trained by providing it with input and matching output patterns. These input-output pairs are usually provided by an external teacher.
• Human learning is based on past experiences. A computer does not have experiences. A computer system learns from data, which represent some "past experiences" of an application domain.
• The task is to learn a target function that can be used to predict the values of a discrete class attribute, e.g., approved or not-approved, and high-risk or low-risk. The task is commonly called supervised learning, classification or inductive learning.
• Training data includes both the input and the desired results. For some examples the correct results (targets) are known and are given as input to the model during the learning process. The construction of a proper training, validation and test set is crucial. These methods are usually fast and accurate.
• The model has to be able to generalize : give the correct results when new data are given as input, without knowing the target a priori.
• In supervised learning, each example is a pair consisting of an input object and a desired output value. A supervised learning algorithm analyzes the training data and produces an inferred function, which is called a classifier or a regression function. Fig. 1.3.1 : Supervised learning process (training data feed a learning algorithm, which produces a model that is then tested).
• The learned model helps the system to perform the task better as compared to no learning.
• Each input vector requires a corresponding target vector :
  Training pair = (input vector, target vector)
  Fig. 1.3.2 : Input vector; the network's actual output is compared with the target to generate an error signal.
• Supervised learning denotes a method in which some input vectors are collected and presented to the network. The output computed by the network is observed and the deviation from the expected answer is measured. The weights are adjusted according to the magnitude of the error in the way defined by the learning algorithm. The perceptron learning rule is an example of this error-correction approach.
• In order to solve a given problem of supervised learning, the following steps are performed (a sketch follows the list) :
  1. Find out the type of training examples.
  2. Collect a training set.
  3. Determine the input feature representation of the learned function.
  4. Determine the structure of the learned function and the corresponding learning algorithm.
  5. Complete the design and then run the learning algorithm on the collected training set.
  6. Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set.
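As a concrete illustration of these steps, a minimal scikit-learn sketch; the dataset and the choice of classifier are assumptions for the example, not prescribed above:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Step 2: collect a training set of (input vector, target) pairs.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Steps 4-5: choose a learned-function structure and run the learner.
clf = DecisionTreeClassifier().fit(X_train, y_train)

# Step 6: evaluate accuracy on a test set separate from the training set.
print(accuracy_score(y_test, clf.predict(X_test)))
```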
1.3.2 Unsupervised Learning

• The model is not provided with the correct results during the training. It can be used to cluster the input data into classes on the basis of their statistical properties only.
• Cluster significance and labeling : the labeling can be carried out even if the labels are only available for a small number of objects representative of the desired classes.
• All similar input patterns are grouped together as clusters. If a matching pattern is not found, a new cluster is formed. There is no error feedback.
• An external teacher is not used; learning is based only upon local information. It is also referred to as self-organization.
• These methods are called unsupervised because they do not need a teacher or supervisor to label a set of training examples. Only the original data is required to start the analysis.
• In contrast to supervised learning, unsupervised or self-organized learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns and learns how to classify input data into appropriate categories.
• Unsupervised learning algorithms aim to learn rapidly and can be used in real time. Unsupervised learning is frequently employed for data clustering, feature extraction etc.
• Another mode of learning, called recording learning by Zurada, is typically employed for associative memory networks. An associative memory network is designed by recording several idea patterns into the network's stable states.
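A small sketch of the clustering behavior described above; k-means is one assumed choice of unsupervised algorithm, and the two-blob data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two blobs in the plane. No targets are provided.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(3.0, 0.5, size=(50, 2))])

# The algorithm groups similar patterns into clusters on the basis
# of their statistical properties alone.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)     # should land near (0, 0) and (3, 3)
```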
1.3.3 Difference between Supervised and Unsupervised Learning

Sr. No. | Supervised learning | Unsupervised learning
1. | Desired output is given. | Desired output is not given.
2. | It is not possible to learn larger and more complex models than with unsupervised learning. | It is possible to learn larger and more complex models with unsupervised learning.
3. | Uses training data to infer the model. | No training data is used.
4. | Every input pattern that is used to train the network is associated with an output pattern. | The target output is not presented to the network.
5. | Tries to predict a function from labeled data. | Tries to detect interesting relations in data.
6. | Requires that the target variable is well defined and that a sufficient number of its values are given. | Typically either the target variable is unknown or has only been recorded for too small a number of cases.
7. | Example : Optical character recognition. | Example : Find a face in an image.
8. | We can test our model. | We cannot test our model.
9. | Supervised learning is also called classification. | Unsupervised learning is also called clustering.

1.3.4 Semi-Supervised Learning

• A semi-supervised learning algorithm uses both labeled and unlabeled data to improve learning. The goal is to learn a predictor that predicts future test data better than the predictor learned from the labeled training data alone.
• Semi-supervised learning is motivated by its practical value in learning faster, better and cheaper. In many real world applications, it is relatively easy to acquire a large amount of unlabeled data.
• For example, documents can be crawled from the Web, images can be obtained from cameras, and speech can be collected from broadcast. However, their corresponding labels for the prediction task, such as sentiment orientation, intrusion detection and phonetic transcript, often require slow human annotation and expensive laboratory experiments.
• In many practical learning domains, there is a large supply of unlabeled data but limited labeled data, which can be expensive to generate. For example : text processing, video indexing, bioinformatics etc.
• Semi-supervised learning makes use of both labeled and unlabeled data for training, typically a small amount of labeled data with a large amount of unlabeled data. When unlabeled data is used in conjunction with a small amount of labeled data, it can produce considerable improvement in learning accuracy.
• Semi-supervised learning sometimes enables predictive model testing at reduced cost.
• Semi-supervised classification : Training on labeled data exploits additional unlabeled data, frequently resulting in a more accurate classifier.
• Semi-supervised clustering : Uses a small amount of labeled data to aid and bias the clustering of unlabeled data.
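One simple scheme consistent with this description is self-training: fit on the few labeled examples, pseudo-label the unlabeled points the model is confident about, and refit. A sketch under those assumptions (the classifier choice and confidence threshold are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    """Illustrative self-training loop, one of many semi-supervised schemes."""
    X, y = X_lab.copy(), y_lab.copy()
    clf = LogisticRegression().fit(X, y)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Adopt confident predictions as pseudo-labels, then refit.
        X = np.vstack([X, X_unlab[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
        clf = LogisticRegression().fit(X, y)
    return clf
```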
1.3.5 Reinforced Learning

• The user gets immediate feedback in supervised learning and no feedback in unsupervised learning. In reinforced learning, the feedback is a delayed scalar reward.
• Reinforcement learning is learning what to do and how to map situations to actions. The learner is not told which actions to take. Fig. 1.3.3 : Concept of reinforced learning (the agent takes an action in the environment and receives a situation and a reward in return).
• Reinforced learning deals with agents that must sense and act upon their environment. It combines classical Artificial Intelligence and machine learning techniques. It allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize performance. Simple reward feedback is required for the agent to learn its behavior; this is known as the reinforcement signal.
• The two most important distinguishing features of reinforcement learning are trial-and-error search and delayed reward.
• With reinforcement learning algorithms, an agent can improve its performance by using the feedback it gets from the environment. This environmental feedback is called the reward signal.
• Based on accumulated experience, the agent needs to learn which action to take in a given situation in order to obtain a desired long term goal. Essentially, actions that lead to long term rewards need to be reinforced. Reinforcement learning has connections with control theory, Markov decision processes and game theory.
• Example of reinforcement learning : A mobile robot decides whether it should enter a new room in search of more trash to collect or start trying to find its way back to its battery recharging station. It makes its decision based on how quickly and easily it has been able to find the recharger in the past.

Elements of Reinforcement Learning

• Reinforcement learning elements are as follows (a small sketch follows below) :
  1. Policy  2. Reward function  3. Value function  4. Model of the environment
• Fig. 1.3.4 : Elements of RL.
• Policy : A policy defines the learning agent's behavior for a given time period. It is a mapping from perceived states of the environment to actions to be taken when in those states.
• Reward function : A reward function is used to define the goal of the learning problem; it maps each perceived state (or state-action pair) of the environment to a single number, the reward.
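The interaction loop of Fig. 1.3.3 (situation, action, reward) is often implemented with tabular Q-learning. A minimal sketch; the environment interface env_step is hypothetical, not from the text:

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, horizon=100):
    """Tabular Q-learning sketch. `env_step(s, a) -> (s_next, reward, done)`
    is an assumed environment interface."""
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = 0                                   # assumed start state
        for _ in range(horizon):
            # Epsilon-greedy policy: mostly exploit, sometimes explore.
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s_next, r, done = env_step(s, a)    # delayed scalar feedback
            # Move Q(s, a) toward the reward plus discounted future value.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if done:
                break
    return Q
```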
... for equation (5) to hold for above-average fitness schemata.
• Going one stage further, we can estimate the number of instances of a schema S present at t + 1 :
  n(S, t + 1) = n(S, t) · f(S) / f_ave
  where n(S, t) is the number of instances of schema S in the population of size n at time t, f(S) is the fitness of the schema, and f_ave is the average fitness of the population.
• If a particular schema stays a constant c above the average, we can say even more about the effects of reproduction :
  n(S, t + 1) = n(S, t)(1 + c)
  and, setting t = 0,
  n(S, t) = n(S, 0)(1 + c)^t
• Notice that the number of such schemata rises exponentially.

1.7.2 Sample Error and True Error

• The sample error err_S(h) of a hypothesis h with respect to target function f and data sample S is the proportion of examples h misclassifies. The sample test error is the mean error over the test sample :
  err_S(h) = (1/n) Σ_{x ∈ S} δ(f(x) ≠ h(x))
  where δ(f(x) ≠ h(x)) is 1 if f(x) ≠ h(x) and 0 otherwise.
• The true error of hypothesis h with respect to target function f and distribution D is the probability that h will misclassify an instance drawn at random according to D :
  err_D(h) = E_{x ∼ D} [ L(f(x), h(x)) ]
  which, for the 0-1 loss, equals Pr_{x ∼ D} [ f(x) ≠ h(x) ].
• err_D(h) is the true error of hypothesis h with respect to the target function f and data distribution D : the probability that h will misclassify an instance drawn at random according to D.
• err_S(h) is the sample error of hypothesis h with respect to the target function f and data sample S : the proportion of examples in S that h misclassifies.
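The sample error is straightforward to compute, while the true error can only be estimated, for example on a very large fresh sample drawn from D. A sketch with an assumed target f and hypothesis h:

```python
import numpy as np

def sample_error(f, h, X):
    """Proportion of examples in X that h misclassifies: err_S(h)."""
    return np.mean(f(X) != h(X))

# Assumed one-dimensional example: the target is a threshold at 0.5,
# the hypothesis uses a slightly wrong threshold at 0.6.
f = lambda x: x > 0.5
h = lambda x: x > 0.6

rng = np.random.default_rng(0)
S = rng.uniform(0, 1, size=50)          # small sample  -> err_S(h)
D = rng.uniform(0, 1, size=1_000_000)   # huge sample approximates err_D(h)
print(sample_error(f, h, S), sample_error(f, h, D))  # latter ~ 0.1
```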
1.8 Inductive Bias

• The Candidate-Elimination algorithm will converge toward the true target concept provided it is given accurate training examples and provided its initial hypothesis space contains the target concept.
  - What if the target concept is not contained in the hypothesis space ?
  - Can we avoid this difficulty by using a hypothesis space that includes every possible hypothesis ?
  - How does the size of this hypothesis space influence the ability of the algorithm to generalize to unobserved instances ?
  - How does the size of the hypothesis space influence the number of training examples that must be observed ?
• In the EnjoySport example, we restricted the hypothesis space to include only conjunctions of attribute values. Because of this restriction, the hypothesis space is unable to represent even simple disjunctive target concepts such as "Sky = Sunny or Sky = Cloudy."

Example | Sky | AirTemp | Humidity | Wind | Water | Forecast | EnjoySport
1 | Sunny | Warm | Normal | Strong | Cool | Change | Yes
2 | Cloudy | Warm | Normal | Strong | Cool | Change | Yes
3 | Rainy | Warm | Normal | Strong | Cool | Change | No

• From the first two examples : S2 : <?, Warm, Normal, Strong, Cool, Change>
• This is inconsistent with the third example, and there are no hypotheses consistent with these three examples.
• PROBLEM : We have biased the learner to consider only conjunctive hypotheses. We require a more expressive hypothesis space.
• The obvious solution to the problem of assuring that the target concept is in the hypothesis space H is to provide a hypothesis space capable of representing every teachable concept.
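The failure can be reproduced in a few lines: generalizing the two positive examples under the conjunctive restriction yields S2 = <?, Warm, Normal, Strong, Cool, Change>, which also covers the negative example:

```python
def generalize(h, x):
    """Minimal conjunctive generalization: keep agreeing values, else '?'."""
    return tuple(hv if hv == xv else '?' for hv, xv in zip(h, x))

def covers(h, x):
    """A conjunctive hypothesis covers x if every constraint matches."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

pos1 = ('Sunny',  'Warm', 'Normal', 'Strong', 'Cool', 'Change')
pos2 = ('Cloudy', 'Warm', 'Normal', 'Strong', 'Cool', 'Change')
neg  = ('Rainy',  'Warm', 'Normal', 'Strong', 'Cool', 'Change')

S2 = generalize(pos1, pos2)
print(S2)               # ('?', 'Warm', 'Normal', 'Strong', 'Cool', 'Change')
print(covers(S2, neg))  # True: the negative example is wrongly covered
```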
Inductive Bias - Fundamental Property of Inductive Inference :
• A learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances.
• Inductive leap : A learner should be able to generalize training data using prior assumptions in order to classify unseen instances. This generalization is known as the inductive leap, and our prior assumptions are the inductive bias of the learner.
• The inductive bias (prior assumptions) of the Candidate-Elimination algorithm is that the target concept can be represented by a conjunction of attribute values, that the target concept is contained in the hypothesis space, and that the training examples are correct.

Inductive Bias - Formal Definition :
• Consider a concept learning algorithm L for the set of instances X. Let c be an arbitrary concept defined over X, and let D_c = {<x, c(x)>} be an arbitrary set of training examples of c.
• Let L(x_i, D_c) denote the classification assigned to the instance x_i by L after training on the data D_c.
• The inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples D_c the following formula holds :
  (∀ x_i ∈ X) [ (B ∧ D_c ∧ x_i) ⊢ L(x_i, D_c) ]
  where y ⊢ z indicates that z follows deductively from y.

Three Learning Algorithms :
• ROTE-LEARNER : Learning corresponds simply to storing each observed training example in memory. Subsequent instances are classified by looking them up in memory. If the instance is found in memory, the stored classification is returned; otherwise, the system refuses to classify the new instance.
  Inductive bias : No inductive bias.
• CANDIDATE-ELIMINATION : New instances are classified only in the case where all members of the current version space agree on the classification. Otherwise, the system refuses to classify the new instance.
  Inductive bias : The target concept can be represented in its hypothesis space.
• FIND-S : This algorithm, described earlier, finds the most specific hypothesis consistent with the training examples. It then uses this hypothesis to classify all subsequent instances.
  Inductive bias : The target concept can be represented in its hypothesis space, and all instances are negative instances unless the opposite is entailed by its other knowledge.

1.9 Bias Variance Trade-Off

• In experimental practice we observe an important phenomenon called the bias-variance dilemma. In supervised learning, the class value assigned by the learning model built from the training data may differ from the actual class value. This error in learning can be of two types : errors due to "bias" and errors due to "variance". Fig. 1.9.1 : Bias-variance trade-off (low bias with high variance versus high bias with low variance).
• Given two classes of hypotheses (e.g. linear models and k-NNs) to fit to some training data set, we observe that the more flexible hypothesis class has a low bias term but a higher variance term. If we have a parametric family of hypotheses, we can increase the flexibility of the hypotheses, but we still observe the increase of variance.
• The bias-variance dilemma is the problem of simultaneously minimizing two sources of error that prevent supervised learning algorithms from generalizing beyond their training set :
  1. The bias is error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs.
  2. The variance is error from sensitivity to small fluctuations in the training set. High variance can cause overfitting : modeling the random noise in the training data, rather than the intended outputs.
• In order to reduce the model error, the designer can aim at reducing either the bias or the variance, as the noise component is irreducible.
• As the model grows in complexity, its bias is likely to diminish. However, if the number of examples is kept fixed, the parametric identification of the model varies from one training set to another. This will increase the variance. At one stage, the decrease in bias will be inferior to the increase in variance, warning that the model should not be too complex. Conversely, to decrease the variance term, the designer has to keep the model from adapting too closely to the specific training set.
Example : Explain Fig. 1.9.2 (a), (b) and (c), which show three fits of price against size.
Solution : Fig. 1.9.2 is related to overfitting and underfitting.

Underfitting (high bias and low variance) :
• A statistical model or a machine learning algorithm is said to have underfitting when it cannot capture the underlying trend of the data.
• It usually happens when we have too little data to build an accurate model, and also when we try to build a linear model with non-linear data.
• Fig. 1.9.3 : Three fits of price against size : θ0 + θ1x (high bias, underfit); θ0 + θ1x + θ2x^2 (a reasonable fit); θ0 + θ1x + θ2x^2 + θ3x^3 + θ4x^4 (high variance, overfit).
• In such cases the rules of the machine learning model are too simple and flexible to be applied to such minimal data, and therefore the model will probably make a lot of wrong predictions.
• Underfitting can be avoided by using more data and also by reducing the features through feature selection.

Overfitting (high variance and low bias) :
• A statistical model is said to be overfitted when we train it with a lot of data and it fits that data too closely.
• When a model gets trained with so much data, it starts learning from the noise and inaccurate data entries in our data set.
• Then the model does not categorize the data correctly, because of too many details and noise.
• The causes of overfitting are the non-parametric and non-linear methods, because these types of machine learning algorithms have more freedom in building the model based on the dataset and therefore they can really build unrealistic models.
• A solution to avoid overfitting is using a linear algorithm if we have linear data, or using parameters like the maximal depth if we are using decision trees.
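The behavior of Fig. 1.9.3 can be reproduced with polynomial fits of increasing degree on noisy data; the synthetic data and the particular degrees here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)  # noisy target

for degree in (1, 3, 9):           # underfit / reasonable / overfit
    coef = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coef, x) - y) ** 2)
    # Fresh data from the same source estimates the generalization error.
    x_new = rng.uniform(0, 1, 200)
    y_new = np.sin(2 * np.pi * x_new) + rng.normal(0, 0.2, size=200)
    test_err = np.mean((np.polyval(coef, x_new) - y_new) ** 2)
    # High degree: tiny training error, larger test error (high variance).
    print(degree, round(train_err, 3), round(test_err, 3))
```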
1.10 Two Marks Questions with Answers

Q.1 Define learning.
Ans. : Learning is a phenomenon and process which has manifestations of various aspects. The learning process includes gaining new symbolic knowledge and developing cognitive skills through instruction and practice. It is also the discovery of new facts and theories through observation and experiment.

Q.2 Define machine learning.
Ans. : A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

Q.3 What is the influence of information theory on machine learning ?
Ans. : Information theory provides measures of entropy and information content, minimum description length approaches to learning, and optimal codes and their relationship to optimal training sequences for encoding a hypothesis.

Q.4 What is meant by the target function of a learning program ?
Ans. : A target function is a method for solving a problem that an AI algorithm parses its training data to find. Once an algorithm finds its target function, that function can be used to predict results. The function can then be used to find output data related to inputs for real problems where, unlike training sets, outputs are not included.

Q.5 Define a useful perspective on machine learning.
Ans. : One useful perspective on machine learning is that it involves searching a very large space of possible hypotheses to determine one that best fits the observed data and any prior knowledge held by the learner.

Q.6 Describe the issues in machine learning.
Ans. : Issues of machine learning are as follows :
• What learning algorithms are to be used ?
• How much training data is sufficient ?
• When and how can prior knowledge guide the learning process ?
• What is the best strategy for choosing a next training experience ?
• What is the best way to reduce the learning task to one or more function approximation problems ?

Q.7 What is a decision tree ?
Ans. : A decision tree is a tree where each node represents a feature (attribute), each link (branch) represents a decision (rule) and each leaf represents an outcome (categorical or continuous value). A decision tree or a classification tree is a tree in which each internal node is labeled with an input feature. The arcs coming from a node labeled with a feature are labeled with each of the possible values of the feature.

Q.8 What are the nodes of a decision tree ?
Ans. : A decision tree has two kinds of nodes :
1. Each leaf node has a class label, determined by majority vote of the training examples reaching that leaf.
2. Each internal node is a question on features. It branches out according to the answers.
Decision tree learning is a method for approximating discrete-valued target functions. The learned function is represented by a decision tree.

Q.9 Why is tree pruning useful in decision tree induction ?
Ans. : When a decision tree is built, many of the branches will reflect anomalies in the training data due to noise or outliers. Tree pruning methods address this problem of overfitting the data. Such methods typically use statistical measures to remove the least reliable branches.

Q.10 What is tree pruning ?
Ans. : Tree pruning attempts to identify and remove such branches, with the goal of improving classification accuracy on unseen data.

Q.11 What is rule post-pruning ?
Ans. : It is a method for finding high accuracy hypotheses. Rule post-pruning involves the following steps :
1. Infer the decision tree from the training set.
2. Convert the tree to rules - one rule per branch.
3. Prune each rule by removing preconditions that result in improved estimated accuracy.
4. Sort the pruned rules by their estimated accuracy, and consider them in this sequence when classifying unseen instances.

Q.12 Why convert the decision tree to rules before pruning ?
Ans. :
• Converting to rules allows distinguishing among the different contexts in which a decision node is used.
• Converting to rules removes the distinction between attribute tests that occur near the root of the tree and those that occur near the leaves.
• Converting to rules improves readability. Rules are often easier for people to understand.

Q.13 Define probably approximately correct learning.
Ans. : A concept class C is said to be PAC learnable using a hypothesis class H if there exists a learning algorithm L such that for all concepts in C, for all distributions D on an instance space X, and for all ε, δ (0 < ε, δ < 1), L, when given access to the Example oracle, produces, with probability at least (1 − δ), a hypothesis h from H with error no more than ε.

Q.14 What is inductive learning ?
Ans. : In inductive learning, the learner is given a hypothesis space H from which it must select an output hypothesis, and a set of training examples D = {(x1, f(x1)), ..., (xn, f(xn))} where f(xi) is the target value for the instance xi. The desired output of the learner is a hypothesis h from H that is consistent with these training examples.