CIS 472/572 Machine Learning
Machine learning is about building predictive or descriptive models automatically from data. There are two central challenges in machine learning. The first is generalizing from data to future situations. A model that performs very well on past data may nonetheless perform very poorly on unseen examples, a phenomenon known as "overfitting". The second challenge is building models efficiently, especially in cases where the dataset is very large and the patterns are complex. We will cover standard methodologies, models, and algorithms for machine learning. Specific topics include decision trees, instance-based learning, linear classifiers, probabilistic classifiers, support vector machines, deep learning, model ensembles, and learning theory.
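To make overfitting concrete, here is a minimal Python sketch (the data and model choices are illustrative, not part of the course materials): a degree-9 polynomial fit to ten noisy points achieves near-zero training error but a larger error on fresh samples from the same distribution, while a simple degree-1 fit generalizes better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from an underlying linear trend y = 2x + noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.3, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(scale=0.3, size=x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a degree-d polynomial
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The flexible degree-9 model nearly interpolates the ten training points, yet its test error is worse: it has fit the noise rather than the trend.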
Thien Huu Nguyen, [email protected]
Hal Daumé III, A Course in Machine Learning, v0.99, 2015 (Online!)
Tom Mitchell, Machine Learning, 1997.
Kevin Murphy, Machine Learning: A Probabilistic Perspective, 2012.
Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, 2009 (Online!)
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, 2016 (Online!)
Pedro Domingos’ video lectures on YouTube
This course covers standard methodologies, models, and algorithms for machine learning. Specific topics include decision trees, instance-based learning, linear classifiers, probabilistic classifiers, support vector machines, model ensembles, and learning theory. In particular, we will devote substantial time to deep learning, a recent approach to machine learning that has achieved very strong performance on tasks across many application domains (e.g., computer vision, natural language processing).
Dates | Topics | Resources |
---|---|---|
Mar 30, Apr 1 | Introduction (slides), Decision Trees (slides) | CIML 1; Mitchell, Ch. 3, 8; ESL 9.2; Murphy 16.2, 1; Domingos, week 2 |
Apr 6, 8 | Inductive Learning (slides), Nearest Neighbor (slides) | CIML 2, 3 (skip k-means); Mitchell, Ch 8; Domingos, week 3 |
Apr 13, 15 | Perceptron (slides), Linear Regression (slides), Logistic Regression (slides) | CIML 4, 8, Averaged Perceptron Note |
Apr 20, 22 | Kernel Methods, SVMs (slides) | CIML 11, Murphy, chapter 14 (mainly 14.5) |
Apr 27, 29 | Neural Networks (slides) | DL 6; Notes: a brief probability review, plus linear algebra and matrix calculus reviews from Stanford University |
May 4, 6 | Deep Learning (slides) | DL 6 |
May 11, 13 | Review and Midterm (Midterm will be on Thursday, May 13) (slides) | |
May 18, 20 | Deep Learning (continued) (slides) | DL 6 |
May 25, 27 | Convolutional Neural Networks (slides) | DL 9 |
Jun 1, 3 | Optimization and Initialization, Recurrent Neural Networks (slides, slides) | DL 8, 10 |
Assignment 1 (written): Link (posted on April 7), due date: April 15 at 11:55pm.
Assignment 2 (programming): Link (posted on April 16), due date: April 29 at 11:55pm.
Assignment 3 (programming): Link (posted on April 30), due date: May 12 at 11:55pm.
Assignment 4 (programming): Link (posted on May 23), due date: June 3 at 11:55pm.
Final Project Proposal Due: May 7 (11:55 pm)
Final paper due: Finals week (June 10, 11:55pm)
Helpful links
Stanford ML projects (also look at the previous years)
You are encouraged to explore deep learning with PyTorch in this class.
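If you have not used PyTorch before, the following minimal sketch (a toy example of our own, not part of the course materials) shows its basic training-loop pattern: define a model, a loss, and an optimizer, then repeatedly take gradient steps.

```python
import torch
import torch.nn as nn

# Toy binary classification data: 100 points in 2-D,
# labeled 1 when x1 + x2 > 0 and 0 otherwise.
torch.manual_seed(0)
X = torch.randn(100, 2)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(X), y)  # forward pass and loss
    loss.backward()              # backpropagate
    optimizer.step()             # gradient descent update

print(f"final training loss: {loss.item():.4f}")
```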
This REMOTE course will be taught entirely using Zoom, Canvas, and Piazza.
Grading will be based on the following criteria:
Percentage | Component
---|---
50% | Written and programming assignments (evenly weighted)
20% | Midterm exam
30% | Final project
Grade | Percentage
---|---
A+ | 97.00 and above
A | 93.34-96.99
A- | 90.00-93.33
B+ | 86.67-89.99
B | 83.34-86.66
B- | 80.00-83.33
C+ | 76.67-79.99
C | 73.34-76.66
C- | 70.00-73.33
D+ | 66.67-69.99
D | 63.34-66.66
D- | 60.00-63.33
F | 0.00-59.99
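As a worked example of how the weights combine (the scores below are hypothetical), a student averaging 90 on assignments with an 85 midterm and an 88 project earns 0.5 * 90 + 0.2 * 85 + 0.3 * 88 = 88.40, a B+ on the scale above.

```python
# A minimal sketch of the weighted-grade computation described above;
# the input scores are hypothetical.
WEIGHTS = {"assignments": 0.50, "midterm": 0.20, "project": 0.30}
scores = {"assignments": 90.0, "midterm": 85.0, "project": 88.0}

final = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
print(f"final percentage: {final:.2f}")  # 88.40 -> B+ on the scale above
```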