Evaluation Examples are not Equally Informative: How should that change NLP Leaderboards?
Alexander Miserlis Hoyle,
John P. Lalor,
and Jordan Boyd-Graber
Promoting Graph Awareness in Linearized Graph-to-Text Generation
Alexander Miserlis Hoyle,
and Noah A. Smith
In Findings of ACL.
Improving Neural Topic Models using Knowledge Distillation
Alexander Miserlis Hoyle,
and Philip Resnik
Topic models are often used to identify human-interpretable topics to help make sense of large document collections. We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers. Our modular method can be straightforwardly applied with any neural topic model to improve topic quality, which we demonstrate using two models having disparate architectures, obtaining state-of-the-art topic coherence. We show that our adaptable framework not only improves performance in the aggregate over all estimated topics, as is commonly reported, but also in head-to-head comparisons of aligned topics.
Combining Sentiment Lexica with a Multi-View Variational Autoencoder
Alexander Miserlis Hoyle,
and Isabelle Augenstein
When assigning quantitative labels to a dataset, different methodologies may rely on different scales. In particular, when assigning polarities to words in a sentiment lexicon, annotators may use binary, categorical, or continuous labels. Naturally, it is of interest to unify these labels from disparate scales, both to achieve maximal coverage over words and to create a single, more robust sentiment lexicon while retaining scale coherence. We introduce a generative model of sentiment lexica to combine disparate scales into a common latent representation. We realize this model with a novel multi-view variational autoencoder (VAE), called SentiVAE. We evaluate our approach via a downstream text classification task involving nine English-language sentiment analysis datasets; our representation outperforms six individual sentiment lexica, as well as a straightforward combination thereof.
Unsupervised Discovery of Gendered Language through Latent-Variable Modeling
Alexander Miserlis Hoyle,
and Ryan Cotterell
Studying the ways in which language is gendered has long been an area of interest in sociolinguistics. Studies have explored, for example, the speech of male and female characters in film and the language used to describe male and female politicians. In this paper, we aim not to merely study this phenomenon qualitatively, but instead to quantify the degree to which the language used to describe men and women is different and, moreover, different in a positive or negative way. To that end, we introduce a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. We find that there are significant differences between descriptions of male and female nouns and that these differences align with common gender stereotypes: Positive adjectives used to describe women are more often related to their bodies than adjectives used to describe men.
Citation Detected: Automated Claim Detection through Natural Language Processing
University College London, Master's Thesis. 2018