CS 445 Introduction to Machine Learning: Logistic Regression



SLIDE 1

CS 445 Introduction to Machine Learning Logistic Regression

Instructor: Dr. Kevin Molloy

SLIDE 2

Review

Linear regression

Finding the weights to assign to a polynomial so that the resulting line minimizes the "loss":

h(x_1, x_2, …, x_d) = w_0 + w_1·x_1 + … + w_d·x_d, or in vector form h(x) = wᵀx

This function h(x) (the hypothesis function) makes a real-valued prediction (regression).

Linear regression loss:

L(w) = (1/2N) · Σ_{(x_i, y_i) ∈ D} (y_i − wᵀx_i)²
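The hypothesis and squared-error loss above can be sketched in NumPy. This is a minimal illustration; the function names and the convention of a leading all-ones column for the bias are my own, not from the slides.

```python
import numpy as np

def predict(w, X):
    """Linear hypothesis h(x) = w^T x.

    Assumes column 0 of X is all 1s, so w[0] acts as the bias w_0."""
    return X @ w

def mse_loss(w, X, y):
    """Squared-error loss from the slide: (1/2N) * sum((y_i - w^T x_i)^2)."""
    residuals = y - predict(w, X)
    return np.mean(residuals ** 2) / 2.0
```

With two samples and weights that fit them exactly, the loss is zero; with all-zero weights it equals half the mean squared target.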

SLIDE 3

Approach for Linear Regression

Linear regression loss:

L(w) = (1/2N) · Σ_{(x_i, y_i) ∈ D} (y_i − wᵀx_i)²

Optimize (find the minimum of) the loss function using the derivatives:

∂L(w)/∂w_j = −(1/N) · Σ_{i=1..N} x_i^(j) · (y_i − wᵀx_i)    (w_j weights the j-th feature, x_i^(j))

∂L(w)/∂w_0 = −(1/N) · Σ_{i=1..N} (y_i − wᵀx_i)    (bias term)
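These per-weight derivatives can all be computed at once with a single matrix product. The vectorized form below is my own sketch, again assuming a leading all-ones column in X so that the bias derivative falls out as entry 0.

```python
import numpy as np

def mse_gradient(w, X, y):
    """Gradient of the (1/2N) squared-error loss w.r.t. every weight.

    Column 0 of X is all 1s, so grad[0] is the bias derivative
    -(1/N) * sum(y_i - w^T x_i); the remaining entries carry the
    extra x_i^(j) factor automatically via X.T."""
    residuals = y - X @ w              # (y_i - w^T x_i) for each sample
    return -(X.T @ residuals) / len(y)
```

At the exact least-squares solution the gradient is the zero vector, which is what the optimizer in the next slide exploits as a stopping signal.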

SLIDE 4

Linear Regression Algorithm

1. Make predictions using the current w and compute the loss
2. Compute the derivatives and update the w's
3. When the change in loss is small, STOP; otherwise, go back to step 1.
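The three steps above can be sketched as a gradient-descent loop. The learning rate, tolerance, and iteration cap are illustrative assumptions, not values from the slides.

```python
import numpy as np

def linear_regression_gd(X, y, lr=0.5, tol=1e-10, max_iters=10000):
    """Gradient descent for linear regression, following the slide's 3 steps.

    lr, tol, and max_iters are assumed hyperparameters for this sketch."""
    w = np.zeros(X.shape[1])
    prev_loss = np.inf
    for _ in range(max_iters):
        residuals = y - X @ w                  # step 1: predictions ...
        loss = np.mean(residuals ** 2) / 2.0   #         ... and loss
        if prev_loss - loss < tol:             # step 3: loss barely changed -> STOP
            break
        grad = -(X.T @ residuals) / len(y)     # step 2: derivatives ...
        w = w - lr * grad                      #         ... and weight update
        prev_loss = loss
    return w
```

On noiseless data generated from y = 1 + 2x, the loop recovers the weights (1, 2) to within a small tolerance.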

SLIDE 5

Logistic Regression

World's WORST algorithm name

[Figure: training points of class X and class O arranged along a single feature axis]

Transform linear regression into a classification algorithm:
h(x) >= 0.5: predict y = 1 (X class)
h(x) < 0.5: predict y = 0 (O class)
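The thresholding rule is a one-liner; the function name is my own.

```python
def classify(h_value):
    """Slide's decision rule: h(x) >= 0.5 -> class 1 (X), else class 0 (O)."""
    return 1 if h_value >= 0.5 else 0
```

Note that the boundary value 0.5 itself maps to class 1, matching the >= in the rule.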

SLIDE 6

Map Function to Values Between 0 and 1

Sigmoid(z) = 1 / (1 + e^(−z))
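A direct NumPy translation of the sigmoid. This naive form can overflow inside exp for large negative z (NumPy emits a warning and still returns the correct limit); production implementations guard against that.

```python
import numpy as np

def sigmoid(z):
    """Squash any real z into (0, 1): sigma(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))
```

Useful sanity checks: sigmoid(0) is exactly 0.5, large inputs saturate toward 1, and sigmoid(z) + sigmoid(−z) = 1 by symmetry.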

SLIDE 7

Different Loss Function

Linear regression loss:

L(w) = (1/2N) · Σ_{(x_i, y_i) ∈ D} (y_i − wᵀx_i)²

For logistic regression, the linear score is passed through the sigmoid:

h(x) = 1 / (1 + e^(−wᵀx))

SLIDE 8

Cost Function for Logistic Regression

Loss(h(x), y) = −log(h(x))        if y = 1
Loss(h(x), y) = −log(1 − h(x))    if y = 0

SLIDE 9

Cost Function for Logistic Regression

Loss(h(x), y) = −log(h(x))        if y = 1
Loss(h(x), y) = −log(1 − h(x))    if y = 0

When y = 1:
If h(x) = 1, then Cost = 0 (since −log(1) = 0).
If h(x) → 0, then the loss (or penalty) becomes very large.

SLIDE 10

Cost Function for Logistic Regression

Loss(h(x), y) = −log(h(x))        if y = 1
Loss(h(x), y) = −log(1 − h(x))    if y = 0

When y = 0:
If h(x) = 0, then Cost = 0 (since −log(1 − h(x)) = −log(1) = 0).
If h(x) → 1, then the loss (or penalty) becomes very large.
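Both branches of the cost can be coded directly. The function name is my own, and there is no clipping of h, so a prediction of exactly 0 or 1 on the wrong branch would produce an infinite loss; real implementations clip h away from the endpoints.

```python
import numpy as np

def logistic_loss(h, y):
    """Piecewise cost from the slides:
    -log(h) when y = 1, and -log(1 - h) when y = 0."""
    return -np.log(h) if y == 1 else -np.log(1.0 - h)
```

A confident correct prediction costs nothing, while a confident wrong one is heavily penalized, exactly as the two slides describe.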

SLIDE 11

Logistic Regression Loss

Loss(h(x), y) = −log(h(x))        if y = 1
Loss(h(x), y) = −log(1 − h(x))    if y = 0

Equivalently, written as a likelihood over the N training samples (to be maximized):

L = ∏_{i=1..N} h(x_i)^(y_i) · (1 − h(x_i))^(1 − y_i)
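The piecewise loss and the product form are two views of the same quantity: taking −log of one sample's likelihood factor gives the single-expression cross-entropy, which reduces to each piecewise branch when y is 0 or 1. A sketch under that reading (function names are my own):

```python
import numpy as np

def cross_entropy(h, y):
    """One expression covering both cases: -[y*log(h) + (1-y)*log(1-h)]."""
    return -(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))

def likelihood(h, y):
    """Per-sample factor from the product form: h^y * (1-h)^(1-y)."""
    return h ** y * (1.0 - h) ** (1.0 - y)
```

For y = 1 the expression collapses to −log(h), for y = 0 to −log(1 − h), and in general cross_entropy(h, y) = −log(likelihood(h, y)).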