Logistic Regression
Machine Learning CS5824/ECE5424
Bert Huang, Virginia Tech
Outline
- Review: conditional probability and classification
- Linear parameterization and the logistic function
- Gradient descent
- Other optimization methods
- Regularization
Classification and Conditional Probability
- Discriminative learning: maximize p(y | x)
- Naive Bayes: learn p(x | y) and p(y)
- Today: parameterize p(y | x) and directly learn p(y | x)
Parameterizing p(y | x)
p(y | x) := f_w(x),  where f_w : R^d → [0, 1]
f_w(x) := 1 / (1 + exp(−w^T x))
Logistic Function
σ(x) = 1 / (1 + exp(−x))
Logistic Function
σ(x) = 1 / (1 + exp(−x))
lim_{x→∞} σ(x) = lim_{x→∞} 1 / (1 + exp(−x)) = 1 / 1 = 1.0
σ(0) = 1 / (1 + exp(−0)) = 1 / (1 + 1) = 0.5
lim_{x→−∞} σ(x) = lim_{x→−∞} 1 / (1 + exp(−x)) = 0.0
From Features to Probability
f_w(x) := 1 / (1 + exp(−w^T x))
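As a concrete illustration of this parameterization, here is a small NumPy sketch. The names `sigmoid` and `predict_proba` and the numerically stable evaluation are implementation choices for illustration, not something from the slides.

```python
import numpy as np

def sigmoid(z):
    # Logistic function sigma(z) = 1 / (1 + exp(-z)), evaluated stably:
    # both branches only exponentiate -|z|, so exp never overflows.
    z = np.asarray(z, dtype=float)
    e = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + e), e / (1.0 + e))

def predict_proba(w, X):
    # p_w(y = +1 | x) = sigma(w^T x) for each row x of the n x d matrix X.
    return sigmoid(X @ w)
```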
Likelihood Function
y ∈ {−1, +1}
p_w(y | x) := 1 / (1 + exp(−w^T x))  if y = +1
p_w(y | x) := 1 / (1 + exp(+w^T x))  if y = −1
The y = −1 case is just 1 minus the y = +1 probability:
1 − 1 / (1 + exp(−w^T x))
  = (1 + exp(−w^T x)) / (1 + exp(−w^T x)) − 1 / (1 + exp(−w^T x))
  = exp(−w^T x) / (1 + exp(−w^T x))
  = 1 / (exp(w^T x) + 1)        (divide numerator and denominator by exp(−w^T x))
  = 1 / (1 + exp(+w^T x))
Likelihood Function
y ∈ {−1, +1}
p_w(y | x) := 1 / (1 + exp(−w^T x))  if y = +1
p_w(y | x) := 1 / (1 + exp(+w^T x))  if y = −1
Both cases combine into p_w(y | x) = 1 / (1 + exp(−y w^T x))
L(w) = ∏_{i=1}^n 1 / (1 + exp(−y_i w^T x_i))
−log L(w) = Σ_{i=1}^n log(1 + exp(−y_i w^T x_i))
Likelihood Function
−log L(w) = Σ_{i=1}^n log(1 + exp(−y_i w^T x_i))
ŵ ← argmin_w Σ_{i=1}^n log(1 + exp(−y_i w^T x_i)) := nll(w)   (negative log-likelihood)
∇_w nll = −Σ_{i=1}^n y_i x_i / (1 + exp(y_i w^T x_i))
ŵ ← argmin_w Σ_{i=1}^n log(1 + exp(−y_i w^T x_i)) := nll(w)
∇_w nll = Σ_{i=1}^n [1 / (1 + exp(−y_i w^T x_i))] · ∇_w (1 + exp(−y_i w^T x_i))
        = Σ_{i=1}^n [1 / (1 + exp(−y_i w^T x_i))] · (−y_i x_i) exp(−y_i w^T x_i)
        = −Σ_{i=1}^n [exp(−y_i w^T x_i) / (1 + exp(−y_i w^T x_i))] y_i x_i
        = −Σ_{i=1}^n y_i x_i / (1 + exp(y_i w^T x_i))
∇_w nll = −Σ_{i=1}^n y_i x_i / (1 + exp(y_i w^T x_i)) = −Σ_{i=1}^n (1 − p_w(y_i | x_i)) y_i x_i := g
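The negative log-likelihood and its gradient translate directly into NumPy. This is a sketch assuming labels y_i ∈ {−1, +1} in a vector `y` and features in an n x d matrix `X`; the `logaddexp` trick for numerical stability is an implementation choice, not part of the derivation.

```python
import numpy as np

def nll(w, X, y):
    # nll(w) = sum_i log(1 + exp(-y_i w^T x_i)); np.logaddexp(0, z) computes
    # log(1 + exp(z)) without overflow.
    margins = y * (X @ w)
    return np.sum(np.logaddexp(0.0, -margins))

def nll_grad(w, X, y):
    # grad = -sum_i (1 - p_w(y_i | x_i)) y_i x_i, where
    # 1 - p_w(y_i | x_i) = 1 / (1 + exp(y_i w^T x_i)).
    margins = y * (X @ w)
    coeffs = np.exp(-np.logaddexp(0.0, margins))  # 1 / (1 + exp(margins)), computed stably
    return -(X.T @ (coeffs * y))
```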
Gradient Descent
w_{t+1} ← w_t − η_t g_t
e.g., a decaying step size such as η_t = 1/t or η_t = 1/√t
[Figure: gradient descent iterates on a 2D objective for different step sizes; image from Murphy Ch. 8]
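A minimal sketch of this update loop, reusing the hypothetical `nll_grad` from the earlier sketch; the 1/√t schedule and the step count are illustrative choices.

```python
import numpy as np

def gradient_descent(grad, w0, num_steps=1000, eta0=1.0):
    # Iterate w_{t+1} <- w_t - eta_t * g_t with decaying step size eta_t = eta0 / sqrt(t).
    w = np.array(w0, dtype=float)
    for t in range(1, num_steps + 1):
        g = grad(w)
        w = w - (eta0 / np.sqrt(t)) * g
    return w

# Usage (X, y as in the nll sketch):
# w_hat = gradient_descent(lambda w: nll_grad(w, X, y), np.zeros(X.shape[1]))
```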
Line Search
η_t = argmin_η nll(w − η g)
Exact line search: at each step, minimize nll along the current descent direction.
[Figure: exact line search on a 2D objective ("exact line searching"); image from Murphy Ch. 8]
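One way to sketch exact line search is to hand the one-dimensional minimization over η to `scipy.optimize.minimize_scalar`; the bracket on η and the iteration count are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gd_exact_line_search(f, grad, w0, num_steps=100):
    # At each step, choose eta_t = argmin_eta f(w - eta * g) with a bounded
    # 1-D search, then move to w - eta_t * g.
    w = np.array(w0, dtype=float)
    for _ in range(num_steps):
        g = grad(w)
        res = minimize_scalar(lambda eta: f(w - eta * g),
                              bounds=(0.0, 10.0), method="bounded")
        w = w - res.x * g
    return w
```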
Momentum
- Heavy ball method:
  s_t ← −g_t + β_t s_{t−1}
  w_{t+1} ← w_t + η_t s_t
- The momentum term retains velocity from previous steps
- Avoids sharp turns
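A sketch of the heavy-ball update as written above; the constant step size `eta` and momentum coefficient `beta` are illustrative values (the slide indexes them by t).

```python
import numpy as np

def heavy_ball(grad, w0, num_steps=1000, eta=0.1, beta=0.9):
    # s_t <- -g_t + beta * s_{t-1};  w_{t+1} <- w_t + eta * s_t.
    # The direction s accumulates (decaying) velocity from earlier steps.
    w = np.array(w0, dtype=float)
    s = np.zeros_like(w)
    for _ in range(num_steps):
        s = -grad(w) + beta * s
        w = w + eta * s
    return w
```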
Second-Order Optimization
- Newton's method: approximate the function with a quadratic and minimize that quadratic exactly
- Requires computing the Hessian (matrix of second derivatives)
- Various approximation methods exist (e.g., L-BFGS)
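In practice these updates are rarely hand-rolled; here is a sketch using SciPy's L-BFGS implementation, assuming the `nll` and `nll_grad` functions from the earlier sketch are in scope and data `X`, `y` with labels in {−1, +1}.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic_lbfgs(X, y):
    # Minimize nll(w) with a quasi-Newton method; jac supplies the exact gradient.
    d = X.shape[1]
    result = minimize(nll, np.zeros(d), args=(X, y), jac=nll_grad, method="L-BFGS-B")
    return result.x
```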
Regularization
L(w) = p(w) ∏_{i=1}^n 1 / (1 + exp(−y_i w^T x_i))   (posterior)
p(w) = N(0, I)   (prior)
−log L(w) = (1/2) w^T w + Σ_{i=1}^n log(1 + exp(−y_i w^T x_i))   (negative log posterior, i.e., regularized nll)
∇_w nll = w − Σ_{i=1}^n y_i x_i / (1 + exp(y_i w^T x_i))   (gradient)
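The regularized objective and gradient are a small change to the earlier sketch. The parameter `lam` is an illustrative generalization of the N(0, I) prior above, which corresponds to lam = 1.

```python
import numpy as np

def reg_nll(w, X, y, lam=1.0):
    # (lam / 2) * ||w||^2 + sum_i log(1 + exp(-y_i w^T x_i));
    # lam = 1 matches the N(0, I) prior on the slide.
    margins = y * (X @ w)
    return 0.5 * lam * (w @ w) + np.sum(np.logaddexp(0.0, -margins))

def reg_nll_grad(w, X, y, lam=1.0):
    # lam * w - sum_i y_i x_i / (1 + exp(y_i w^T x_i))
    margins = y * (X @ w)
    coeffs = np.exp(-np.logaddexp(0.0, margins))  # stable 1 / (1 + exp(margins))
    return lam * w - X.T @ (coeffs * y)
```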
Summary
- Review: conditional probability and classification
- Linear parameterization and the logistic function
- Gradient descent
- Other optimization methods
- Regularization