Learning Machines: Neurocomputing Explained by Chris Rozell

Welcome to Learning Machines, where we chat with faculty members from the Machine Learning Center at Georgia Tech (ML@GT) about their main research area and what the future holds for their field. Today we talked with Chris Rozell, a professor in the School of Electrical and Computer Engineering and ML@GT faculty member, who explained what exactly neurocomputing…

Learning Machines: Natural Language Processing Explained with Diyi Yang

Welcome to Learning Machines, where we’ll talk with faculty members from the Machine Learning Center at Georgia Tech (ML@GT) about their main research area and the future of their work. Today we talked with Diyi Yang, an assistant professor in ML@GT and the School of Interactive Computing. Yang’s lab, Social and Language Technologies (SALT), combines…

New Algorithm Follows Human Intuition to Make Visual Captioning More Grounded

Annotating and labeling datasets for machine learning problems is an expensive and time-consuming process for computer vision and natural language scientists. However, a new deep learning approach is being used to decode, localize, and reconstruct image and video captions in seconds, making the machine-generated captions more reliable and trustworthy. To solve this problem, researchers at…

ML@GT to Present Nine Papers at Competitive Machine Learning Conference

The International Conference on Machine Learning (ICML) received nearly 5,000 submissions for its 2020 conference and accepted 1,088 papers. Machine Learning Center at Georgia Tech (ML@GT) researchers authored nine accepted papers. The papers explore topics like privacy, semantics in predictive agents, data science, and artificial intelligence. One paper, Boosting Frank-Wolfe by Chasing Gradients, proposes a new state-of-the-art algorithm for constrained…

New Study Explores Sentiment Around Electric Vehicles, Leading to Faster Government Response and More Infrastructure

Policymakers have long wanted to improve the infrastructure needed for the adoption of electric vehicles (EVs), but with massive amounts of unstructured data they have been unable to determine how charging stations are performing and where more need to be added, according to a recent study from Georgia Institute of Technology researchers. Researchers in…

Tech Designed to Help Assistive Robots Work More Closely with Patients

Artificially intelligent (AI) systems are continuously improving their understanding of physical human activities such as running, jumping, and biking. Yet, much of people’s lives are spent resting in bed. Researchers at the Georgia Institute of Technology and Stanford University have developed an AI-enabled smart bed and synthetic data set to study people at rest. The…

Georgia Tech Researchers Presenting Work Virtually at Top AI Conference Due to COVID-19

Due to the rapid spread of coronavirus (COVID-19) and resulting travel restrictions, Georgia Tech students and faculty will now be presenting their research virtually at the International Conference on Learning Representations (ICLR), one of the biggest artificial intelligence (AI) conferences in the world, April 25 through 30. With 17 papers to present, researchers will create a…

Working Towards Explainable and Data-efficient Machine Learning Models via Symbolic Reasoning

By Yuan Yang In recent years, we have witnessed the success of modern machine learning (ML) models. Many of them have led to unprecedented breakthroughs in a wide range of applications, such as AlphaGo beating a world champion human player or the introduction of autonomous vehicles. There has been a continuous effort, both from industry…

Explaining Machine Learning Models for Natural Language

By Sarah Wiegreffe and Yuval Pinter Natural language processing (NLP) is the study of how computers learn to represent and make decisions about human communication in the form of written text. This encompasses many tasks, including automatically classifying documents, using machines to translate between languages, or designing algorithms for writing creative stories. Many state-of-the-art systems…

Escaping Saddle Points Faster with Stochastic Momentum

By Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy SGD with stochastic momentum (see Figure 1 below) has been the de facto training algorithm in nonconvex optimization and deep learning. It has been widely adopted for training neural nets in various applications. Modern techniques in computer vision (e.g.[1,2]), speech recognition (e.g. [3]), natural language processing (e.g.…
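For readers unfamiliar with the algorithm named in the teaser, here is a minimal sketch of SGD with momentum in the standard heavy-ball formulation; the function name and hyperparameter values are illustrative, and the paper's exact variant and notation may differ:

```python
import numpy as np

def sgd_momentum_step(x, v, grad, lr=0.05, beta=0.9):
    """One heavy-ball momentum step:
    v <- beta * v + grad
    x <- x - lr * v
    where grad is a stochastic gradient estimate at x."""
    v = beta * v + grad
    x = x - lr * v
    return x, v

# Toy example: minimize f(x) = ||x||^2 / 2, whose gradient is x,
# using noisy gradient estimates.
rng = np.random.default_rng(0)
x = np.array([5.0, -3.0])
v = np.zeros_like(x)
for _ in range(200):
    stochastic_grad = x + 0.01 * rng.standard_normal(2)  # gradient + noise
    x, v = sgd_momentum_step(x, v, stochastic_grad)
print(np.linalg.norm(x))  # iterate is driven toward the minimizer at the origin
```

The momentum buffer `v` accumulates an exponentially weighted average of past gradients, which smooths the stochastic noise and, as the post discusses, can help the iterates escape saddle points faster than plain SGD.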