Machine Learning: A Basic Toolkit
Lorenzo Rosasco, Università di Genova, Istituto Italiano di Tecnologia
August 2015, BMM Summer School
Intro
[Diagram: Machine Learning in relation to Intelligent Systems and Data Science.]
ML Desert Island Compilation
An introduction to essential Machine Learning:
• Concepts
• Algorithms
Morning
PART I
• Local methods
• Bias-Variance and Cross Validation
PART II
• Regularization I: Linear Least Squares
• Regularization II: Kernel Least Squares
Afternoon
PART III
• Variable Selection: OMP
• Dimensionality Reduction: PCA
PART IV
• Matlab practical session
PART I
• Local methods
• Bias-Variance and Cross Validation
GOAL: investigate the trade-off between stability and fitting, starting from simple machine learning approaches.
The goal of supervised learning is to find an underlying input-output relation $f(x_{\text{new}}) \sim y$, given data. The data, called the training set, is a set of $n$ input-output pairs,
$$S = \{(x_1, y_1), \dots, (x_n, y_n)\}.$$
Each pair is called an example.
The training set can be stacked into a vector of labels and a matrix of inputs:
$$Y_n = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \in \{+1, -1\}^n, \qquad X_n = \begin{pmatrix} x_1^1 & \cdots & x_1^p \\ \vdots & & \vdots \\ x_n^1 & \cdots & x_n^p \end{pmatrix} \in \mathbb{R}^{n \times p}.$$
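The stacked data matrices above can be sketched in NumPy; the sizes (n = 4, p = 3) and the random data here are hypothetical, chosen only to show the shapes:

```python
import numpy as np

# Hypothetical training set: n = 4 examples with p = 3 features each,
# stacked into the label vector Yn and the input matrix Xn.
n, p = 4, 3
rng = np.random.default_rng(0)
Xn = rng.standard_normal((n, p))      # row i holds (x_i^1, ..., x_i^p)
Yn = np.sign(rng.standard_normal(n))  # binary labels in {+1, -1}
print(Xn.shape, Yn.shape)             # (4, 3) and (4,)
```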
Local Methods: nearby points have similar labels.
Nearest Neighbor. Given an input $\bar{x}$, let
$$i_0 = \arg\min_{i=1,\dots,n} \|\bar{x} - x_i\|^2$$
and define the nearest neighbor (NN) estimator as
$$\hat{f}(\bar{x}) = y_{i_0}.$$
How does it work?
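The NN estimator above can be written in a few lines of NumPy; the toy 2-D points and labels below are hypothetical, just to exercise the rule:

```python
import numpy as np

def nearest_neighbor(X, Y, x_bar):
    """NN estimator: return the label of the training point
    closest to x_bar in squared Euclidean norm."""
    # i0 = argmin_{i=1..n} ||x_bar - x_i||^2
    i0 = np.argmin(np.sum((X - x_bar) ** 2, axis=1))
    return Y[i0]

# Toy training set with labels in {+1, -1}
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
Y = np.array([+1, +1, -1])
print(nearest_neighbor(X, Y, np.array([1.9, 2.1])))  # -> -1 (closest point is [2, 2])
```

Note that the estimator memorizes the whole training set and has no tunable parameters; this is the extreme "fitting" end of the stability-fitting trade-off that Part I investigates.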