BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning Reading Group @ CUED
SUMMARY:Stochastic optimization and adaptive learning rates - Yingzhen
  Li (University of Cambridge)\; Mark Rowland
DTSTART;TZID=Europe/London:20151126T143000
DTEND;TZID=Europe/London:20151126T160000
UID:TALK62020AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/62020
DESCRIPTION:Stochastic optimization is prevalent in modern machine
  learning\, and the main purpose of this talk is to understand why it
  works. We will first briefly recap the history of stochastic
  approximation methods\, starting from the famous Robbins and Monro
  paper. Then we will introduce the cost function minimization problem
  in the machine learning context and show how to prove the convergence
  of stochastic gradient descent to a local optimum. We carry out the
  proof in three steps: continuous gradient descent\, discrete gradient
  descent\, and stochastic gradient descent. However\, the conditions on
  learning rates presented in the proof are not necessary. So in the
  second part of the talk we will discuss popular adaptive learning
  rates\, and in particular we will give a short tutorial on online
  learning to build intuition for the regret bounds. Finally\, we will
  have a live demo session comparing different learning rates.
LOCATION:Engineering Department\, CBL Room 438
CONTACT:Yingzhen Li
END:VEVENT
END:VCALENDAR