MetaLearning

From Distributed Information and Intelligence Analysis Group

Abstract

Metalearning is the formal study of best practices in machine learning and data mining. In particular, it addresses the selection of the learning algorithm best suited to the search space of a given problem. Our proposed group seeks to build operational prototypes that use a metalearning protocol, combining an asymptotically optimal ensemble of algorithms with a novel algorithm of our own, to provide a consistent analysis and decision capability. We envision a new level of automation in classification tasks, one that guarantees a consistent quality of service based on an understanding of optimal classifier topology. The applications are many, including consistent analysis of sparse data sets and scalable sense-making over very large volumes of data, such as those generated by search, biotech, and defense intelligence.
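
As an illustration only, and not a description of the group's actual protocol, the sketch below shows one way an ensemble of learning algorithms with different inductive biases can be combined by majority vote. The specific base learners, the synthetic data, and the scikit-learn dependency are all assumptions made for this example.

    # Illustrative sketch only: a heterogeneous ensemble combined by majority vote.
    # The base learners, synthetic data, and CV settings are assumptions for this
    # example, not the group's metalearning protocol.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Stand-in for an arbitrary tabular data set.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Three algorithms with very different inductive biases.
    ensemble = VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("nbayes", GaussianNB()),
        ],
        voting="hard",  # simple majority vote across the base learners
    )

    scores = cross_val_score(ensemble, X, y, cv=5)
    print("ensemble 5-fold accuracy: %.3f" % scores.mean())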

Goals

The goal of this project is to develop ‘generic’ machine learning that does not require careful re-targeting from data set to data set: the algorithm selection step is built in and performed automatically. A further engineering goal is to provide a highly robust, general-purpose classification framework for processing heterogeneous data sets.
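
One plausible reading of built-in algorithm selection is sketched below: each candidate learner is cross-validated on the incoming data set and the best performer is chosen, with no manual re-targeting. The candidate pool, the scoring setup, and the scikit-learn dependency are assumptions made for this example, not the project's implementation.

    # Illustrative sketch only: automated per-data-set algorithm selection.
    # The candidate pool and scoring setup are assumptions for this example.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC


    def select_algorithm(X, y, cv=5):
        """Cross-validate each candidate on the given data set and return the winner."""
        candidates = {
            "logistic_regression": LogisticRegression(max_iter=1000),
            "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
            "svm_rbf": SVC(kernel="rbf"),
        }
        scored = {
            name: cross_val_score(model, X, y, cv=cv).mean()
            for name, model in candidates.items()
        }
        best = max(scored, key=scored.get)
        return best, scored[best]


    # Stand-in for a previously unseen data set; no per-data-set tuning is done.
    X, y = make_classification(n_samples=500, n_features=20, random_state=1)
    name, score = select_algorithm(X, y)
    print("selected %s (cv accuracy %.3f)" % (name, score))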

Project Members

Principal Investigators:
Eugene Santos, Jr. <Eugene.Santos.Jr@Dartmouth.edu>
Chris Poulin <Chris.Poulin@Dartmouth.edu>
Paul Thompson <Paul.Thompson@Dartmouth.EDU>

Consulting Scientists:
Ben Goertzel <ben@goertzel.org>
Andras Kornai <andras@kornai.com>

Research Staff:
Hien M Nguyen <nguyenh@uww.edu>
Alex Kilpatrick <alex@tacticalinfosys.com>
Tracy Gu <Qi.Gu@Dartmouth.edu>
Brittany M Clark <ClarkBM02@uww.edu>


Disclaimer: Information on this wiki is for internal group use only! It should not be disseminated to anyone outside the group without the permission of Dr. Santos.