From Object Interactions to Fine-grained Video Understanding

Video understanding tasks such as action recognition and caption generation are crucial for various real-world applications in surveillance, video retrieval, human behavior understanding, etc. In this work, we present a generic recurrent module that detects relationships and interactions between arbitrary groups of objects for fine-grained video understanding, and it is applicable to various open-domain video understanding problems. We validate our method on two video understanding tasks with new, challenging datasets: fine-grained action recognition on Kinetics and visually grounded video captioning on ActivityNet Captions.

In the following post, we first introduce the concept and motivation behind the proposed method for human action recognition. We then show how the same concept can be extended to generate a sentence description of a video. For details of the proposed method, please refer to our paper here.

From object interactions to human action recognition

Recent approaches to video understanding have demonstrated significant improvements on public datasets such as UCF101, HMDB51, Sports1M, THUMOS, ActivityNet, and YouTube8M. They often focus on representing the overall visual scene (coarse-grained) as a sequence of inputs that are combined with temporal pooling methods, e.g. CRF, LSTM, 1D convolution, attention, and NetVLAD. With such state-of-the-art methods, it is relatively easy for a machine to distinguish playing tennis from playing basketball by relying on the overall scene representation alone.

kira_cvpr18_1
State-of-the-art video understanding methods can easily distinguish these two human activities simply by relying on the background scene representations.
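As a rough illustration of this coarse-grained pipeline (our own sketch, not any specific prior method), the snippet below simply mean-pools per-frame scene features before classification; the feature dimension and class count are placeholders.

```python
import torch
import torch.nn as nn

class CoarseGrainedClassifier(nn.Module):
    """Illustrative baseline: classify a video from temporally pooled
    per-frame scene features (dimensions are placeholders)."""
    def __init__(self, feat_dim=2048, num_classes=400):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) scene features from a 2D CNN
        pooled = frame_feats.mean(dim=1)   # simple temporal mean pooling
        return self.classifier(pooled)     # (batch, num_classes)

# Usage: 8 frames of 2048-d features for a batch of 2 videos
logits = CoarseGrainedClassifier()(torch.randn(2, 8, 2048))
```

A representation like this captures the overall scene well, but, as argued next, it throws away the fine-grained details needed to tell apart actions that share a background.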

However, human actions often involve complex interactions across several objects in the scene, and these approaches ignore such fine-grained details: they do not infer interactions between the various objects in the video. For example, in the figure below, the two video frames share similar background scene representations and person representations, i.e. the difference between skiing and snowboarding lies in how the person interacts with the ski or the snowboard.

kira_cvpr18_2
The difference between human actions often lies in how the human interacts with certain objects, rather than in the scene representation. For instance, these two video frames have similar scene representations, but their human activities are semantically different.

A question that naturally arises from the example above is: can this problem be solved if machines can detect the objects being interacted with?

The answer is no, since there can be many different interactions between a human and a common object. For instance, precisely distinguishing between dribbling, dunking, and shooting a basketball requires the model to identify how the basketball interacts with the player. Therefore, the goal of this work is not only to detect the objects being interacted with but also to identify how they are being interacted with.

We want even more than detecting pairwise object interaction!

Typically, object interaction methods (in the image domain) focus on pairwise interactions (left). In this work, we efficiently model the interactions between arbitrary subgroups of objects: we detect the inter-object relationships within each group and attentively select the objects with significant relationships, i.e. those that ultimately improve action recognition or captioning (right). We define these interactions between groups of selected object relationships as higher-order interactions.

kira_cvpr18_3
We go beyond pairwise interactions to higher-order interactions: interactions between groups of objects with inter-object relationships.

Why are object interactions and temporal reasoning challenging?

We first define objects to be regions in the scene that might be used to determine visual relationships and interactions. An object can be a rigid object, a person, or even a region from the background scene.

Unfortunately, we only have features, not the classes of the objects

To understand the relationships/interactions between potential objects, ideally we would first identify what these objects in the scene are. Running state-of-the-art object detectors, however, fails to reliably identify the objects because of a cross-domain problem. Furthermore, we are limited to the object classes of the particular dataset the detector was pre-trained on, e.g. the 80 classes of MS-COCO. As a result, it is very likely that the detected objects are labeled as the most common classes, like person and car, or that the detector misses an object of potential interest completely simply because it was not trained to detect it.

kira_cvpr18_4
Our objective is to efficiently model the relationships/interactions between arbitrary (groups of) objects in space and integrate with temporal reasoning.

Limited by these constraints, we instead use the feature representations obtained from a Region Proposal Network (RPN). Note that we do not track the corresponding objects across time, since linking objects through time can be computationally expensive and may not be suitable when the video sequence is long.

As a result, we have variable-length sets of objects residing in a high-dimensional space, spanning across time. Our objective is to efficiently detect higher-order interactions from these rich yet unordered sets of object representations across time.
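As a hedged sketch of this setup (not the paper's exact pipeline), one way to obtain such variable-length object sets is to pool the RPN's box proposals into fixed-size region features with torchvision's roi_align; the backbone output, box generator, and dimensions below are assumptions for illustration.

```python
import torch
from torchvision.ops import roi_align

# Assume a backbone CNN has produced a feature map for T frames of one video,
# and an RPN has proposed a different number of boxes for each frame.
T, C, H, W = 4, 256, 32, 32
feature_maps = torch.randn(T, C, H, W)

def random_boxes(n, size):
    # hypothetical stand-in for RPN output: n (x1, y1, x2, y2) boxes on the map
    xy1 = torch.rand(n, 2) * (size / 2)
    return torch.cat([xy1, xy1 + torch.rand(n, 2) * (size / 2)], dim=1)

counts = [12, 30, 7, 30]                       # variable-length object sets
boxes_per_frame = [random_boxes(n, W) for n in counts]

# roi_align crops and resizes each proposal to 7x7 on the feature map
# (boxes here are already in feature-map coordinates, so spatial_scale=1.0);
# average-pooling then gives one C-dimensional feature vector per object.
region_feats = roi_align(feature_maps, boxes_per_frame, output_size=7)
object_sets = [r.mean(dim=(2, 3)) for r in region_feats.split(counts)]
print([tuple(s.shape) for s in object_sets])   # [(12, 256), (30, 256), (7, 256), (30, 256)]
```

Note that these region features carry no class labels and no temporal links; the module introduced next operates directly on such unordered sets.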

Recurrent Higher-Order Interaction (HOI)

Toward this end, we propose the Recurrent Higher-Order Interaction module, which dynamically selects K groups of arbitrary objects with detected inter-object relationships via a learnable attention mechanism. The attentive selection module uses the overall image context representation, the current set of (projected) objects, and the previous object interactions to perform K attentive selections via efficient dot-product operations. The higher-order interaction between the groups of selected objects is then modeled via concatenation and a subsequent LSTM cell. Please refer to our paper for further details of the proposed method.

kira_cvpr18_5
Our proposed Recurrent Higher-Order Interaction module dynamically selects K groups of arbitrary objects with detected inter-object relationships via a learnable attention mechanism.
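To make the idea above concrete, here is a loose sketch of such a module in PyTorch. It is our own illustrative approximation, not the paper's exact parameterization: the query construction, dimensions, and number of selections K are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentHOI(nn.Module):
    """Loose sketch of the recurrent higher-order interaction idea:
    K dot-product attentive selections over projected object features,
    conditioned on the image context and the previous interaction state,
    then concatenated and fed to an LSTM cell."""
    def __init__(self, obj_dim, ctx_dim, hid_dim, K=3):
        super().__init__()
        self.K = K
        self.proj_obj = nn.Linear(obj_dim, hid_dim)
        # one query per attentive selection, built from context + previous state
        self.query = nn.ModuleList(
            nn.Linear(ctx_dim + hid_dim, hid_dim) for _ in range(K))
        self.lstm = nn.LSTMCell(K * hid_dim, hid_dim)

    def forward(self, objects, context, state):
        # objects: (N, obj_dim) variable-size object set for this frame
        # context: (ctx_dim,) overall image representation
        # state:   (h, c), each (1, hid_dim), previous interaction state
        h, c = state
        obj = torch.tanh(self.proj_obj(objects))                       # (N, hid_dim)
        groups = []
        for k in range(self.K):
            q = torch.tanh(self.query[k](torch.cat([context, h[0]])))  # (hid_dim,)
            attn = F.softmax(obj @ q / obj.size(1) ** 0.5, dim=0)      # (N,)
            groups.append(attn @ obj)                                  # attentive selection
        return self.lstm(torch.cat(groups).unsqueeze(0), (h, c))

# Usage with made-up sizes: 30 objects per frame, 1024-d context, 512-d state
hoi = RecurrentHOI(obj_dim=2048, ctx_dim=1024, hid_dim=512)
state = (torch.zeros(1, 512), torch.zeros(1, 512))
for t in range(4):  # iterate over frames
    state = hoi(torch.randn(30, 2048), torch.randn(1024), state)
```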

What objects and interactions are detected?

Since the proposed method explicitly selects objects in order to detect their interactions, we can qualitatively show which objects and interactions are detected when predicting human actions.

kira_cvpr18_6
Qualitative analysis for action recognition on Kinetics: Tobogganing.

In the figure above, the top row shows the original video frames with the selected objects (ROIs). The edge of each object's bounding box is weighted by its importance for making the correct action prediction. We can visualize the regions the machine attends to by using these weights as the transparency of the corresponding regions: the brighter a region is, the more important it is. The third row shows the weight distribution over objects (30 objects in this example), where the y-axis value indicates the importance of a particular object.

In this figure, we show the proposed method correctly predicting Tobogganing.

Identifying Tobogganing essentially requires three elements: a toboggan, a snow scene, and a human sitting on top of the toboggan. These three key elements are accurately identified and their interactions are highlighted, as we can see from t = 1 to t = 3. Note that the model is able to keep tracking the person and toboggan throughout the whole video, even though they appear extremely small towards the end. We can also see that our method completely ignores the background scene in the last several frames, as it is not informative: it can easily be confused with 18 other action classes involving snow and ice, e.g. Making snowman, Ski jumping, Skiing cross-country, Snowboarding, etc.
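For readers who want to reproduce the visualization described above, a minimal sketch (with made-up boxes and weights, not the paper's plotting code) is to darken the frame and brighten each region in proportion to its attention weight:

```python
import numpy as np

def visualize_attention(frame, boxes, weights, floor=0.2):
    """Darken the frame, then brighten each region proportionally to its
    attention weight (illustrative; boxes are (x1, y1, x2, y2) in pixels)."""
    alpha = np.full(frame.shape[:2], floor, dtype=np.float32)
    weights = np.asarray(weights, dtype=np.float32)
    weights = weights / (weights.max() + 1e-8)          # normalize to [0, 1]
    for (x1, y1, x2, y2), w in zip(boxes, weights):
        region = alpha[int(y1):int(y2), int(x1):int(x2)]
        np.maximum(region, floor + (1 - floor) * w, out=region)
    return (frame.astype(np.float32) * alpha[..., None]).astype(np.uint8)

# Usage with a dummy frame and two regions of different importance
frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
out = visualize_attention(frame, [(10, 10, 120, 200), (150, 30, 300, 220)], [0.9, 0.2])
```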

From object interactions to video captioning

In the second part of this post, we discuss how the method proposed for modeling object interactions can be extended to generate a sentence description of a video.

kira_cvpr18_7
Video captions are composed of multiple visual relationships and interactions. We detect higher-order object interactions and use them as the basis for video captioning.

Our motivation is quite straightforward. We argue that a sentence description of a scene (for images or videos) can be decomposed into several relationship components. We therefore hypothesize that, given a set of detected object relationships and interactions, we can compose them into a complete sentence description.

Our model efficiently explores and grounds caption generation over interactions between arbitrary subgroups of objects, the members of which are determined by a learned attention mechanism, as we showed for recognizing human actions.

kira_cvpr18_8
Overview of the proposed model for video captioning.

We first attentively model object inter-relationships and discover the higher-order interactions for a video. The detected higher-order object interactions (fine-grained) and the overall image representation (coarse-grained) are then temporally attended to form the visual cue for generating each word.
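As a rough sketch of what such a decoder step could look like (our own simplified illustration; the fusion scheme, dimensions, and module names are assumptions, not the paper's specification), the snippet below temporally attends over coarse and fine-grained features and feeds the attended cue, together with the previous word, to an LSTM cell:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporallyAttendedDecoderStep(nn.Module):
    """Sketch of one word-generation step: temporally attend over per-frame
    coarse image features and fine-grained interaction features, then feed
    the attended visual cue plus the previous word to an LSTM cell."""
    def __init__(self, vis_dim, emb_dim, hid_dim, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hid_dim, vis_dim)
        self.lstm = nn.LSTMCell(emb_dim + 2 * vis_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_word, coarse, fine, state):
        # coarse, fine: (T, vis_dim) per-frame image / interaction features
        h, c = state
        q = self.attn(h[0])                                   # (vis_dim,)
        cue = [F.softmax(feats @ q, dim=0) @ feats            # temporal attention
               for feats in (coarse, fine)]
        x = torch.cat([self.embed(prev_word)[0], *cue]).unsqueeze(0)
        h, c = self.lstm(x, (h, c))
        return self.out(h), (h, c)                            # word logits, state

# Usage with made-up sizes: 8 frames, previous word id 1
step = TemporallyAttendedDecoderStep(vis_dim=512, emb_dim=300, hid_dim=512, vocab_size=10000)
state = (torch.zeros(1, 512), torch.zeros(1, 512))
logits, state = step(torch.tensor([1]), torch.randn(8, 512), torch.randn(8, 512), state)
```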

kira_cvpr18_9
Qualitative analysis for video captioning on ActivityNet Captions: The man is then shown on the water skiing.

Just as we showed how the model focuses on objects and interactions for action recognition, we can also demonstrate how the model uses objects and interactions to generate each word. In the figure above, timestep t indicates the video timestep. We can see that the proposed method often focuses on the person and the wakeboard, and, most importantly, it highlights the interaction between the two, i.e. the person stepping on the wakeboard. It then progressively generates: The man is then shown on the water skiing.

Distinguishing interactions when common objects are present

A common problem with state-of-the-art captioning models is that they often lack an understanding of the relationships and interactions between objects, which is often the result of training data bias. For instance, when the model detects both a person and a horse, the predicted caption is very likely to be: A man is riding on a horse, regardless of what type of interaction the person actually has with the horse.

We are thus interested in finding out whether the proposed method has the ability to distinguish different types of interactions when common objects are present in the scene. In the example figure shown below, each video shares a common object in the scene – a horse. We show the verb (interaction) extracted from the complete sentence generated by our proposed method.

(a) People are riding horses.
(b) A woman is brushing a horse.
(c) People are playing polo on a field.
(d) The man ties up the calf.

kira_cvpr18_10
Our proposed method is able to distinguish different types of interactions when common objects (horse) are present.

While all of the videos involve horses in the scene, our method successfully distinguishes the interactions between the human and the horse by grounding the objects as well as the interactions.

To summarize, we introduce a computationally efficient, fine-grained video understanding approach for discovering higher-order object interactions. Our experiments on large-scale action recognition and video captioning datasets demonstrate that learning higher-order object relationships improves accuracy over existing methods at low computational cost. To the best of our knowledge, this is the first work to model object interactions on open-domain, large-scale video datasets.

This post is based on the following paper:

Attend and Interact: Higher-Order Object Interactions for Video Understanding. Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, Hans Peter Graf. CVPR 2018. (PDF)

[Blog post by Chih-Yao Ma, re-posted with permission from https://ghassanalregib.com/2018/03/08/object_interactions/]
