Escaping Saddle Points Faster with Stochastic Momentum

By Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy

SGD with stochastic momentum (see Figure 1 below) has been the de facto training algorithm in nonconvex optimization and deep learning, and it is widely used for training neural nets across applications. Modern techniques in computer vision (e.g. [1, 2]), speech recognition (e.g. [3]), natural language processing (e.g. …
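For readers unfamiliar with the update this post studies, here is a minimal sketch of a standard SGD-with-heavy-ball-momentum step. It is an illustrative assumption, not the exact procedure in the post's Figure 1: the function name `sgd_momentum_step`, the toy quadratic objective, and the hyperparameter values are all made up for the example.

```python
import numpy as np

def sgd_momentum_step(x, velocity, stochastic_grad, lr=0.01, beta=0.9):
    """One step of SGD with (heavy-ball) stochastic momentum.

    The velocity buffer accumulates past stochastic gradients; beta controls
    how much of the previous direction is kept, and lr scales the update.
    """
    velocity = beta * velocity - lr * stochastic_grad
    return x + velocity, velocity

# Illustrative usage on a toy quadratic f(x) = 0.5 * ||x||^2, with Gaussian
# noise standing in for a minibatch gradient estimate.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
velocity = np.zeros_like(x)
for _ in range(200):
    noisy_grad = x + 0.1 * rng.standard_normal(x.shape)  # grad of 0.5*||x||^2 is x, plus noise
    x, velocity = sgd_momentum_step(x, velocity, noisy_grad)
print(x)  # ends up near the minimizer at the origin
```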

Learning to Cooperate in Multi-Agent Environments

By Jiachen Yang

Over the years, human intelligence has evolved to work within a shared environment with other humans, doing more than playing Atari games or solving Rubik’s cubes alone in our rooms. The presence of other people demands the ability to handle a wide spectrum of complex interactions: we cooperate with colleagues …

ICLR 2018 accepted papers and ML@GT

The list of accepted papers at ICLR 2018 was released last week, and Machine Learning at Georgia Tech (ML@GT) had a strong presence. Out of 935 submissions, 23 oral and 314 conference papers were accepted (roughly 36%). We are pleased to announce that Georgia Tech had 10 conference papers accepted this year, with 1 of them …