Doctoral Oral Defense Announcement

Update [08.2014] - My dissertation and defense presentation are now available for download.
[ dissertation.pdf | dissertation.bib | defense.pptx | defense.pdf ]

You are cordially invited to my doctoral oral defense, which is open to the public.

Date: April 30th, 2014
Time: 9AM - 10AM
Location: Gould-Simpson, Room 1027
 
Candidate: Anh Xuan Tran
Committee: Paul Cohen, Mihai Surdeanu, Kobus Barnard, Ken McAllister
 
Title: Identifying Latent Attributes from Video Scenes Using Knowledge Acquired from Large Collections of Text Documents
 
Abstract:

Peter Drucker, a well-known and influential writer and philosopher in the field of management theory and practice, once claimed that "the most important thing in communication is hearing what isn't said." It is not difficult to see that a similar concept also holds in the context of video scene understanding. In almost every non-trivial video scene, the most important elements, such as the motives and intentions of the actors, can never be seen or directly observed, yet the identification of these latent attributes is crucial to our full understanding of the scene. That is to say, latent attributes matter.

In this work, we explore the task of identifying latent attributes in video scenes, focusing on the mental states of participant actors. We propose a novel approach to the problem based on the use of large text collections as background knowledge and minimal information about the videos, such as activity and actor types, as query context. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms, as well as their distribution weights. We develop and test several largely unsupervised information extraction models that identify the mental state labels of human participants in video scenes given some contextual information about the scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models, and improves performance over several baseline methods on two different datasets. We present an extensive analysis of our models and close with a discussion of our findings, along with a roadmap for future research.

Tucson, AZ