Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in AAAI 2016
Alexander Braylan, Mark Hollenbeck, Elliot Meyerson, and Risto Miikkulainen. A general approach to knowledge transfer is introduced in which an agent controlled by a neural network adapts how it reuses existing networks as it learns in a new domain. Networks trained for a new domain can improve their performance by routing activation selectively through previously learned neural structure, regardless of how or for what it was learned. A neuroevolution implementation of this approach is presented with application to high-dimensional sequential decision-making domains. This approach is more general than previous approaches to neural transfer for reinforcement learning. It is domain-agnostic and requires no prior assumptions about the nature of task relatedness or mappings. The method is analyzed in a stochastic version of the Arcade Learning Environment, demonstrating that it improves performance in some of the more complex Atari 2600 games, and that the success of transfer can be predicted based on a high-level characterization of game dynamics. Slides, Code
Download here
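The reuse idea can be illustrated with a minimal sketch (plain NumPy, not the paper's neuroevolution implementation): a network learning a new task mixes in hidden activations from a frozen, previously trained network, with per-unit gates controlling how much of the old structure is routed into the new one. All weights and dimensions below are made up for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Hypothetical source network trained on an earlier task (kept frozen).
W_src = rng.normal(scale=0.1, size=(8, 16))

# Target network learned in the new domain (shown here at initialization).
W_tgt = rng.normal(scale=0.1, size=(8, 16))   # the target's own hidden layer
gate  = np.zeros(16)                          # learned gates over source units
W_out = rng.normal(scale=0.1, size=(32, 4))   # output layer over both feature sets

def forward(obs):
    """Route activation selectively through previously learned structure:
    gated source features are concatenated with the target's own features."""
    h_src = relu(obs @ W_src) * gate          # reuse is modulated per unit
    h_tgt = relu(obs @ W_tgt)
    return np.concatenate([h_src, h_tgt]) @ W_out

action_values = forward(rng.normal(size=8))
print(action_values)
```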
Published in AIIDE 2016
Alexander Braylan and Risto Miikkulainen. A transfer learning approach is presented to address the challenge of training video game agents with limited data. The approach decomposes games into objects, learns object models, and transfers models from known games to unfamiliar games to guide learning. Experiments show that the approach improves prediction accuracy over a comparable control, leading to more efficient exploration. Training of game agents is thus accelerated by transferring object models from previously learned games. Slides, Code, MS Thesis
Download here
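A toy sketch of the object-model transfer idea (the object types, models, and distance below are invented for illustration, not taken from the paper): keep simple dynamics models for object types seen in known games, and for an unfamiliar object in a new game reuse whichever model best predicts its first few observed positions.

```python
import numpy as np

# Hypothetical object models from previously learned games:
# each maps a position to a predicted next position.
known_models = {
    "faller": lambda p: p + np.array([0.0, 1.0]),   # moves down each step
    "slider": lambda p: p + np.array([1.0, 0.0]),   # moves right each step
    "static": lambda p: p,                          # does not move
}

def transfer_model(trajectory):
    """Pick the known object model whose predictions best match
    the first few observed positions of an unfamiliar object."""
    def error(model):
        return sum(np.linalg.norm(model(p) - q)
                   for p, q in zip(trajectory, trajectory[1:]))
    return min(known_models, key=lambda name: error(known_models[name]))

# An unseen object observed falling in a new game is matched to "faller".
obs = [np.array([3.0, 0.0]), np.array([3.0, 1.0]), np.array([3.0, 2.0])]
print(transfer_model(obs))
```

The selected model can then act as a prior that guides exploration in the new game until enough in-game data is available to refine it.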
Published in Information Retrieval Journal
Kezban Dilek Onal, Ye Zhang, Ismail Sengor Altingovde, Md Mustafizur Rahman, Pinar Karagoz, Alex Braylan, Brandon Dang, Heng-Lu Chang, Henna Kim, Quinten McNamara, Aaron Angert, Edward Banner, Vivek Khetan, Tyler McDonnell, An Thanh Nguyen, Dan Xu, Byron C. Wallace, Maarten de Rijke, and Matthew Lease. A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.
Download here
Published in KEG @ AAAI 2019
Alexander Braylan and Risto Miikkulainen. Game AI is difficult to program, especially as games are frequently changing due to updates from the designers and the evolving behavior of human players. It would be useful if AI agents were able to automatically learn to reason about their environment. A major part of the environment is geospatial information. An agent’s geospatial coordinates can suggest likelihoods of encountering important objects such as items or enemies, even when those objects are not in sight. Difficulties arise when these probabilities are not nicely demarcated into areas predefined and provided by the game API, creating the need to learn geospatial models automatically. This paper argues for models that divide game environments into discrete areas, proposes appropriate evaluation measures for such models, and tests a few clustering approaches on detailed creature sighting data extracted from a large number of players of a modern multi-player first-person shooter game. Two methods are shown to work better than simple baselines, demonstrating how these techniques can be used to automatically divide the game environment by its observed attributes. Slides, Code
Download here
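As a rough illustration of the setup (a generic k-means clustering of invented sighting coordinates, not the paper's data or its best-performing method):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up (x, y) map coordinates of creature sightings.
rng = np.random.default_rng(0)
sightings = np.vstack([
    rng.normal(loc=(10, 10), scale=2, size=(200, 2)),   # a hot spot near (10, 10)
    rng.normal(loc=(40, 25), scale=3, size=(150, 2)),   # another near (40, 25)
])

# Divide the game environment into discrete areas by clustering sighting locations.
areas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sightings)

# Each sighting now carries an area label that a geospatial model can condition on.
print(np.bincount(areas))
```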
Published in HCOMP Doctoral Consortium 2019
Alexander Braylan and Matthew Lease. Modeling annotators and their labels is useful for ensuring data quality. However, while many models have been proposed to handle binary or categorical labels, prior methods do not generalize to complex annotation tasks (e.g., open-ended text, multivariate, structured responses) without devising new models for each specific task. To obviate the need for task-specific modeling, we propose to model distances between labels, rather than the labels themselves. Our methods are agnostic as to the distance function; we leave it to the annotation task requester to specify an appropriate distance function for their task. We propose three methods, including a Bayesian hierarchical extension of multidimensional scaling.
Download here
Published in AnnoNLP @ EMNLP-IJCNLP 2019
Alexander Braylan and Matthew Lease. Modeling annotators and their labels is useful for ensuring data quality. Though many models exist for binary or categorical labels, prior methods do not generalize to complex annotation tasks (e.g., open-ended text, multivariate, structured responses) without devising new models for each specific task. To obviate the need for task-specific modeling, we propose to model distances between labels, rather than the labels themselves. Our method, a Bayesian hierarchical extension of multidimensional scaling, is agnostic as to the distance function; we leave it to the annotation task requester to specify an appropriate distance function for their task. Evaluation shows the generality and effectiveness of the model across two complex annotation tasks: multiple sequence labeling and syntactic parsing.
Download here
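A rough sketch of the distance-based idea, using ordinary metric MDS from scikit-learn as a stand-in for the Bayesian hierarchical extension described above; the labels and the token-overlap distance are invented for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical labels for one item: free-text answers from four annotators.
labels = ["the cat sat", "the cat sat down", "a dog ran", "the cat sat"]

def distance(a, b):
    """Task-specific distance chosen by the requester; here token overlap."""
    sa, sb = set(a.split()), set(b.split())
    return 1.0 - len(sa & sb) / len(sa | sb)

# Pairwise distance matrix over the labels.
D = np.array([[distance(a, b) for b in labels] for a in labels])

# Embed labels so that unusual (likely lower-quality) labels sit far from the rest.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding)
```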
Published in The Web Conference 2020
Alexander Braylan and Matthew Lease. Modeling annotators and their labels is valuable for ensuring collected data quality. Though many models have been proposed for binary or categorical labels, prior methods do not generalize to complex annotations (e.g., open-ended text, multivariate, or structured responses) without devising new models for each specific task. To obviate the need for task-specific modeling, we propose to model distances between labels, rather than the labels themselves. Our models are largely agnostic to the distance function; we leave it to the requesters to specify an appropriate distance function for their given annotation task. We propose three models of annotation quality, including a Bayesian hierarchical extension of multidimensional scaling which can be trained in an unsupervised or semi-supervised manner. Results show the generality and effectiveness of our models across diverse complex annotation tasks: sequence labeling, translation, syntactic parsing, and ranking. Slides, Video, Code
Download here
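One way to see how a distance function alone can drive aggregation is a "smallest average distance" selection rule, which picks the label closest on average to the other labels for an item. This is only a simple baseline in the spirit of the paper, not one of its three proposed models; the example labels and edit-style distance are invented:

```python
from difflib import SequenceMatcher

def aggregate(labels, distance):
    """Return the label closest on average to the other labels for an item.
    `distance` is whatever task-appropriate function the requester supplies."""
    def avg_dist(x):
        others = [y for y in labels if y is not x]
        return sum(distance(x, y) for y in others) / len(others)
    return min(labels, key=avg_dist)

# Example with free-text translations compared by an edit-style distance.
edit_like = lambda a, b: 1.0 - SequenceMatcher(None, a, b).ratio()

answers = ["the house is red", "the house is red.", "a red house", "house red is the"]
print(aggregate(answers, edit_like))
```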
Published in Knowledge Discovery & Data Mining 2021
Alexander Braylan and Matthew Lease. Human annotations are critical for training and evaluating supervised learning models, yet annotators often disagree with one another, especially as annotation tasks increase in complexity. A common strategy to improve label quality is to ask multiple annotators to label the same item and then aggregate their labels. While many aggregation models have been proposed for simple annotation tasks, how can we reason about and resolve annotator disagreement for more complex annotation tasks (e.g., continuous, structured, or high-dimensional), without needing to devise a new aggregation model for every different complex annotation task? We address two distinct challenges in this work. First, how can a general aggregation model support merging of complex labels across diverse annotation tasks? Second, for multi-object annotation tasks that require annotators to provide multiple labels for each item being annotated (e.g., labeling named entities in a text or visual entities in an image), how do we match which annotator label refers to which entity, so that only matching labels are aggregated across annotators? Using general constructs for merging and matching, our model not only supports diverse tasks but also delivers equal or better results than prior aggregation models, both general and task-specific. Slides, Video, Code
Download here
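The matching construct can be illustrated with a standard assignment solver: build a cost matrix of distances between two annotators' labels for the same item and match them with the Hungarian algorithm, so that only labels referring to the same entity are merged. The bounding boxes and distance below are invented, and the paper's actual matching procedure may differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical bounding boxes (x, y, w, h) for entities in one image,
# as drawn by two different annotators.
annotator_a = np.array([[10, 10, 30, 30], [50, 60, 20, 20]])
annotator_b = np.array([[52, 58, 22, 19], [11, 12, 29, 31], [80, 80, 10, 10]])

# Cost matrix of distances between every pair of labels across the two annotators.
cost = np.linalg.norm(annotator_a[:, None, :] - annotator_b[None, :, :], axis=-1)

# Hungarian matching: each of A's labels is paired with its closest label of B's,
# so only labels that refer to the same entity get aggregated.
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"A's box {i} matches B's box {j} (distance {cost[i, j]:.1f})")
```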
Published in The Web Conference 2022
Alexander Braylan, Omar Alonso, and Matthew Lease. When annotators label data, a key metric for quality assurance is inter-annotator agreement (IAA): the extent to which annotators agree on their labels. Though many IAA measures exist for simple categorical and ordinal labeling tasks, relatively little work has considered more complex labeling tasks, such as structured, multi-object, and free-text annotations. Krippendorff’s 𝛼, best known for use with simpler labeling tasks, does have a distance-based formulation with broader applicability, but little work has studied its efficacy and consistency across complex annotation tasks. We investigate the design and evaluation of IAA measures for complex annotation tasks, with evaluation spanning seven diverse tasks: image bounding boxes, image keypoints, text sequence tagging, ranked lists, free text translations, numeric vectors, and syntax trees. We identify the difficulty of interpretability and the complexity of choosing a distance function as key obstacles in applying Krippendorff’s 𝛼 generally across these tasks. We propose two novel, more interpretable measures, showing they yield more consistent IAA measures across tasks and annotation distance functions. Slides, Video, Code
Download here
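The distance-based formulation of Krippendorff's alpha that the paper builds on is alpha = 1 - D_o / D_e, where D_o is the average distance between labels given to the same item and D_e is the average distance between labels overall. A simplified sketch follows; it omits Krippendorff's per-unit pair weighting, and the toy data and squared-difference distance are invented:

```python
from itertools import combinations

def krippendorff_alpha(labels_by_item, distance):
    """Distance-based Krippendorff's alpha: 1 - observed/expected disagreement.
    `labels_by_item` maps each item to the list of labels annotators gave it.
    Simplified: ignores the standard per-unit pair weighting."""
    within = [distance(a, b)
              for labels in labels_by_item.values()
              for a, b in combinations(labels, 2)]
    all_labels = [l for labels in labels_by_item.values() for l in labels]
    between = [distance(a, b) for a, b in combinations(all_labels, 2)]
    D_o = sum(within) / len(within)      # observed disagreement (same item)
    D_e = sum(between) / len(between)    # expected disagreement (all labels)
    return 1.0 - D_o / D_e

# Toy numeric example with squared difference as the distance function.
data = {"item1": [1.0, 1.1, 0.9], "item2": [3.0, 3.2], "item3": [5.0, 4.8, 5.1]}
print(krippendorff_alpha(data, lambda a, b: (a - b) ** 2))
```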
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk. Note the different field in type; you can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.