Seminar by Nathan Silberman on “TF-Slim: A Lightweight Library for Defining, Training and Evaluating Complex Models in TensorFlow” Thursday Sep 7 2017, 4:30 pm – 5:45 pm in Clough 144

ML@GT Seminar and Guest Speaker for CS 7643 Deep Learning


Abstract: TF-Slim is a TensorFlow-based library with several components: modules for defining neural network models in a few lines of code, routines for training and evaluating such models in a highly distributed fashion, and utilities for creating efficient data-loading pipelines. Additionally, the TF-Slim Image Models library provides many commonly used networks (ResNet, Inception, VGG, etc.), making it simple and straightforward to replicate results and build new networks from existing components. I will discuss some of the design choices and constraints that guided our development process, as well as several high-impact projects in the medical domain that use most or all components of the TF-Slim library.

Bio: Nathan Silberman is the Lead Deep Learning Scientist at 4Catalyzer, where he works on a variety of healthcare-related projects. His machine learning interests include semantic segmentation, detection, and reinforcement learning, and how best to apply these techniques to high-impact problems in the medical world. Prior to joining 4Catalyzer, Nathan was a researcher at Google, where, among various projects, he co-wrote TensorFlow-Slim, which is now a major component of the TensorFlow library. Nathan received his Ph.D. in 2015 from New York University under Rob Fergus and David Sontag.
