Learning to Cluster

“Can machines categorize new things by learning how to group similar things together?” The following describes work by Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira, which will be presented at the 2018 International Conference on Learning Representations (ICLR) in Vancouver. Read the paper here. Clustering is the task of partitioning data into groups, so that […]
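To make the clustering task concrete, here is a minimal k-means sketch in plain Python. This is only an illustration of clustering itself, not the transfer-learning method the paper proposes; the toy data and the naive "first k points" initialization are invented for the example.

```python
# Minimal k-means sketch: partition 2D points into k groups by
# alternating nearest-center assignment and center re-estimation.
def kmeans(points, k, iters=20):
    # Naive initialization: use the first k points as starting centers
    # (fine for a sketch; real implementations use smarter seeding).
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Two well-separated groups of points should be recovered as two clusters.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(pts, k=2)
```

The question the paper asks goes beyond this: k-means needs a fixed notion of similarity, whereas learning to cluster means transferring a learned notion of similarity to group previously unseen categories.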

Convergence of Value Aggregation for Imitation Learning

The following describes work by Ching-An Cheng and Byron Boots, which was awarded Best Paper at The 21st International Conference on Artificial Intelligence and Statistics (AISTATS). Further details and proofs are available in the paper: https://arxiv.org/abs/1801.07292. Learning to make sequential decisions is a fundamental topic in designing automatic agents with artificial intelligence. […]

From Object Interactions to Fine-grained Video Understanding

Video understanding tasks such as action recognition and caption generation are crucial for various real-world applications in surveillance, video retrieval, human behavior understanding, etc. In this work, we present a generic recurrent module to detect relationships and interactions between arbitrary object groups for fine-grained video understanding. Our work is applicable to various open domain video […]

The Minds of the New Machines | Research Horizons | Georgia Tech’s Research News

Georgia Tech’s Research Horizons Magazine has done a very nice write-up of the ML@GT center, featuring many of our research projects. Machine learning has been around for decades, but the advent of big data and more powerful computers has increased its impact significantly, moving machine learning beyond pattern recognition and natural language processing into a […]

Embodied Question Answering

Embodied Question Answering is a new AI task where an agent is spawned at a random location in a 3D environment and asked a question (“What color is the car?”). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question (“orange”).

Visualizing Deep Learning Models at Facebook

This post summarizes the latest joint research between researchers at Georgia Tech and Facebook on using visualization to make sense of deep learning models, published at IEEE VIS’17, a top visualization conference. While powerful deep learning models have significantly improved prediction accuracy, understanding these models remains a big challenge. Deep learning models are more difficult […]
