Traditional machine learning algorithms operate under the assumption
that learning for each new task starts from scratch, thus disregarding
any knowledge they may have gained while learning in previous
domains. Naturally, if the domains encountered during learning are
related, this tabula rasa
approach would waste both data and computation time developing hypotheses
that could have been recovered by simply examining, and possibly slightly
modifying, previously acquired
knowledge. Moreover, the knowledge learned in earlier domains could
capture generally valid rules that are not easily recoverable from
small amounts of data, thus allowing the algorithm to achieve even
higher levels of accuracy than it would if it started from scratch.
The field of transfer learning, which has grown rapidly in popularity in
recent years, addresses the problem of how to leverage previously acquired
knowledge to improve the efficiency and accuracy of learning in a new
domain that is related in some way to the original one. In particular,
our current research focuses on developing transfer learning techniques
for Markov Logic Networks (MLNs), a recently developed approach to
statistical relational learning.
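To give a flavor of the representation (following the standard formulation of MLNs; the particular clauses and weights below are purely illustrative), an MLN is a set of first-order logic formulas, each annotated with a real-valued weight, for example:

    1.5   Smokes(x) => Cancer(x)
    1.1   Friends(x, y) ∧ Smokes(x) => Smokes(y)

Together with a set of constants, these weighted formulas define a Markov network in which the formulas act as soft rather than hard constraints: possible worlds that violate a formula become less probable, not impossible, and the penalty grows with the formula's weight.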
Our research in the area is currently sponsored by the Defense Advanced
Research Projects Agency (DARPA) and managed by the Air Force Research
Laboratory (AFRL) under contract FA8750-05-2-0283.