Visualizing Deep Learning Models at Facebook

This post summarizes recent joint research by researchers at Georgia Tech and Facebook on using visualization to make sense of deep learning models, published at IEEE VIS '17, a top visualization conference.

[Figure: ActiVis overview]

While powerful deep learning models have significantly improved prediction accuracy, understanding these models remains a major challenge. Deep learning models are harder to interpret than most other machine learning models because of their nonlinear structure and huge number of parameters. In practice, people often use them as "black boxes," which can be detrimental: when a model does not perform satisfactorily, users cannot understand the cause or know how to fix it.

Visualization has recently become a popular means of interpreting such complex deep learning models. Data visualization and visual analytics help people make sense of data and discover insights by transforming abstract data into meaningful, interactive visual representations. Deep learning models can be visualized by presenting intermediate data produced by the models (e.g., activations, weights) or by revealing relationships between datasets and model results. With such visualizations, users can better understand why and how the models produce results for their datasets. Several visualization tools have been developed and are available, including TensorBoard and Embedding Projector by Google's Big Picture group, and the Deep Visualization Toolbox.
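The first kind of intermediate data mentioned above, neuron activations, is straightforward to capture during a forward pass. Here is a minimal sketch using a toy two-layer NumPy network with random stand-in weights (none of this is ActiVis code; it only illustrates what "presenting intermediate data produced by the models" refers to):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network; the weights are random stand-ins.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, record):
    """Run the network and stash intermediate activations for visualization."""
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    record["hidden"] = h          # the kind of data activation views display
    return h @ W2

record = {}
x = rng.normal(size=(5, 8))       # a batch of 5 instances
_ = forward(x, record)

print(record["hidden"].shape)     # (5, 16): one activation vector per instance
```

A visualization tool then renders these per-instance activation vectors, e.g., as rows of a heatmap.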

For a broader overview of this area, you can check out our survey paper.

Despite the increasing interest in visualization for deep learning interpretation, the complexity of the large-scale models and datasets used in industry, such as at Facebook, poses unique design challenges. For example, tools intended for real-world deployment must above all be flexible and scalable, adapting to the wide variety of models and datasets in use. These observations motivated us to design and develop ActiVis, a visual analytics system for industry-scale deep neural network models.

Participatory Design Process

To learn users' actual needs, we conducted participatory design sessions with over 15 Facebook engineers, researchers, and data scientists across multiple teams. From these sessions, we identified six key design challenges, spanning data, models, and analytics, that existing deep learning visualization tools have not adequately addressed. The challenges include the need to support:

  1. Diverse input data sources
  2. High data volume
  3. Complex model architecture
  4. A great variety of models
  5. Diverse subset definitions for analytics
  6. Both instance- and subset-level analyses

These challenges shape the main design goals of ActiVis.

Introducing ActiVis

Based on the design challenges we identified, we designed and developed ActiVis, a visual analytics system for deep neural network models, now deployed on Facebook’s machine learning platform. ActiVis’s main contributions include:

  • A novel visual representation that unifies instance- and subset-level inspections of neuron activation, facilitating comparison of activation patterns for multiple instances.
  • An interface that tightly integrates an overview of graph-structured complex models and local inspection of neuron activations, allowing users to explore the model at different levels of abstraction.
  • A deployed system scaling to large datasets and models.

Here’s what the ActiVis interface looks like:

[Figure: the ActiVis user interface]
ActiVis consists of multiple coordinated views: (A) The computation graph summarizes the model architecture. (B) The neuron activation panel's matrix view displays activations for instances, subsets, and classes (at B1), and its projected view shows a 2-D t-SNE projection of the instance activations (at B2). (C) The instance selection panel displays instances and their classification results, with correctly classified instances on the left and misclassified ones on the right. Clicking an instance adds it to the neuron activation matrix view.
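The projected view (B2) maps each instance's activation vector to a 2-D point. A sketch of that step, using scikit-learn's t-SNE on a hypothetical random activation matrix (a stand-in for real model activations):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)

# Hypothetical activation matrix: 100 instances x 16 neurons.
acts = np.abs(rng.normal(size=(100, 16)))

# Project instance activations to 2-D for a scatterplot, as in view B2.
proj = TSNE(n_components=2, perplexity=30, random_state=0,
            init="random").fit_transform(acts)

print(proj.shape)  # (100, 2): one point per instance
```

Instances with similar activation patterns land near each other, so clusters in the scatterplot hint at groups the model treats alike.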

ActiVis consists of multiple coordinated views that give users a high-level overview of the model, from which they can drill down to localized inspection of activations. ActiVis visualizes how neurons are activated by user-specified instances or instance subsets, helping users understand how a model derives its predictions. Subsets can be flexibly defined using data attributes, features, or output results (e.g., the set of documents containing a particular word, or the set of instances whose value for feature A is greater than 0.5), enabling model inspection from multiple angles. While many existing deep learning visualization tools support instance-level exploration (i.e., how individual instances contribute to a model's accuracy), ActiVis is the first tool to simultaneously support instance- and subset-level exploration. This is especially beneficial for the huge datasets common in industry, which may contain millions or billions of data points. By exploring instance subsets and comparing them with individual instances, users can learn how the models respond to many different slices of the data.
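Subset-level exploration boils down to aggregating activations over a user-defined mask. A minimal sketch with hypothetical data (the arrays, feature name, and helper are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 1000 instances, 16 hidden-neuron activations each,
# plus one feature value and one predicted class per instance.
acts = np.abs(rng.normal(size=(1000, 16)))
feature_a = rng.uniform(size=1000)
pred = rng.integers(0, 3, size=1000)

def subset_mean_activation(acts, mask):
    """Average activation vector over a user-defined instance subset."""
    return acts[mask].mean(axis=0)

# Subsets defined by a feature threshold and by model output, as in the text.
high_a = subset_mean_activation(acts, feature_a > 0.5)
class_0 = subset_mean_activation(acts, pred == 0)

# Each such vector could fill one row of a subset-level matrix view.
print(high_a.shape, class_0.shape)  # (16,) (16,)
```

Comparing such subset rows against individual instance rows is what lets users see how a model responds to different slices of the data.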

Deployment on Facebook’s ML Platform

We have deployed ActiVis on FBLearner Flow, Facebook's machine learning platform. Developers who want to use ActiVis with their models can do so by adding only a few lines of code, which instruct the training process to generate the information needed for visualization. ActiVis users at Facebook (e.g., data scientists) can then train models and use ActiVis through FBLearner Flow's web interface, without writing any additional code.
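To give a flavor of what "a few lines of code" in a training loop might look like, here is a hypothetical logging helper (this is not the FBLearner Flow API; the function, file format, and names are invented for illustration):

```python
import json

# Hypothetical helper: not the real FBLearner Flow / ActiVis API.
def log_activations(layer_name, activations, path="activis_log.jsonl"):
    """Append one layer's activations to a file a visualization tool could read."""
    with open(path, "a") as f:
        f.write(json.dumps({"layer": layer_name,
                            "activations": activations}) + "\n")

# Inside a training loop, instrumentation is a one-line call per layer:
log_activations("fc1", [[0.0, 1.2, 0.7], [0.4, 0.0, 2.1]])

with open("activis_log.jsonl") as f:
    print(len(f.readlines()))  # number of records logged so far
```

The point of such a design is that model owners only emit data; all visualization logic lives in the platform's web interface.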

Case Studies with Potential Users at Facebook

To better understand how ActiVis may help users interpret deep neural network models, we recruited Facebook engineers and data scientists to use the latest version of ActiVis to explore text classification models relevant to their work. Key observations from these studies:

  1. Spot-checking models with “test cases”
    • Engineers often have test cases for their datasets, and ActiVis helped them check whether a model behaves correctly on those cases.
  2. Graph architecture view as entry point
    • The computation graph view was especially helpful for people less familiar with new deep learning models: it let them understand a model's structure first, then dive into activation details.
  3. Debugging hints from activation patterns
    • ActiVis reveals activation patterns from which users obtained hints for further improving their models. For example, when some neurons were never activated, users reasoned they might decrease the number of neurons.
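The "never activated" observation in the last point corresponds to a simple check on post-ReLU activation statistics. A sketch with hypothetical data (the array and zeroed columns are fabricated to simulate dead neurons):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical post-ReLU activations: 1000 instances x 32 neurons,
# with two columns zeroed out to simulate dead neurons.
acts = np.maximum(rng.normal(size=(1000, 32)), 0.0)
acts[:, [4, 17]] = 0.0

# A neuron that never fires on any instance is "dead" and a candidate
# for pruning, as the users in the case study suggested.
dead = np.flatnonzero((acts == 0).all(axis=0))
print(dead)  # [ 4 17]
```

A matrix view makes such neurons visible at a glance as all-zero columns; this check is just the programmatic equivalent.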

Conclusions

ActiVis is a visual analytics system for deep neural network models, deployed on Facebook's machine learning platform. From participatory design sessions with researchers and engineers across many teams at Facebook, we identified key design challenges. Based on these, we developed ActiVis, which unifies instance- and subset-level exploration and tightly integrates a model-architecture overview with localized activation inspection. Our case studies indicate that ActiVis helps users explore and understand complex deep learning models, specifically by spot-checking models, understanding architectures, and obtaining debugging hints.

Further Information

For more information, please check out the full version of the ActiVis paper, our project webpage, demo video, and presentation slides:

Our group recently wrote a survey article on visual analytics for deep learning. If you’re interested in this area and want to learn more, please also check out our survey paper.

[Post by: Minsuk (Brian) Kahng and Polo Chau]

 
