The Center for Education and Research in Information Assurance and Security (CERIAS)

Bias/Variance Analysis for Relational Domains

Author

J. Neville, D. Jensen

Entry type

inbook

Abstract

Bias/variance analysis [1] is a useful tool for investigating the performance of machine learning algorithms. Conventional analysis decomposes loss into errors due to aspects of the learning process with an underlying assumption that there is no variation in model predictions due to the inference process used for prediction. This assumption is often violated when collective inference models are used for classification of relational data. In relational data, when there are dependencies among the class labels of related instances, the inferences about one object can be used to improve the inferences about other related objects. Collective inference techniques exploit these dependencies by jointly inferring the class labels in a test set. This approach can produce more accurate predictions than conditional inference for each instance independently, but it also introduces an additional source of error, both through the use of approximate inference algorithms and through variation in the availability of test set information. To date, the impact of inference error on relational model performance has not been investigated.
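The decomposition the abstract refers to can be estimated empirically by retraining a model on resampled training sets and comparing each test instance's "main" (most common) prediction against its true label and against the individual predictions. The sketch below is a hypothetical illustration of that idea for 0/1 loss on i.i.d. data; the tiny 1-D dataset and the 1-nearest-neighbour learner are assumptions for demonstration, not the relational models or collective inference procedures studied in the paper.

```python
# Illustrative sketch (assumed example, not the paper's method): empirical
# bias/variance estimation for a classifier under 0/1 loss.
import random
from collections import Counter

def one_nn_predict(train, x):
    """Predict the label of x with 1-nearest-neighbour on 1-D inputs."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bias_variance(data, test, trials=200, seed=0):
    """Estimate average bias and variance over bootstrap training sets."""
    rng = random.Random(seed)
    preds = {x: [] for x, _ in test}
    for _ in range(trials):
        # Draw a bootstrap replicate of the training data.
        sample = [rng.choice(data) for _ in data]
        for x, _ in test:
            preds[x].append(one_nn_predict(sample, x))
    bias = variance = 0.0
    for x, y in test:
        # Main prediction: the label predicted most often across trials.
        main = Counter(preds[x]).most_common(1)[0][0]
        bias += float(main != y)                              # bias term
        variance += sum(p != main for p in preds[x]) / trials  # variance term
    n = len(test)
    return bias / n, variance / n

# Toy 1-D labelled points (illustrative data, not from the paper).
data = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
test = [(0.2, 0), (0.7, 1)]
b, v = bias_variance(data, test)
```

The paper's point is that for relational data this picture is incomplete: with collective inference, a third source of error arises from the inference process itself, which the conventional decomposition above does not account for.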

Date

2008

Booktitle

Inductive Logic Programming

Key alpha

Neville

Pages

27–28

Publisher

Springer Berlin / Heidelberg

Series

Lecture Notes in Computer Science

Volume

4894

Publication Date

2008

BibTeX-formatted data

To refer to this entry, you may select and copy the text below and paste it into your BibTeX document. Note that the text may not contain all macros that BibTeX supports.
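The entry below is assembled from the fields listed above (author, title, book title, series, volume, pages, publisher, and year); the citation key is an assumption chosen for illustration.

```bibtex
@inbook{neville2008bias,
  author    = {Neville, J. and Jensen, D.},
  title     = {Bias/Variance Analysis for Relational Domains},
  booktitle = {Inductive Logic Programming},
  series    = {Lecture Notes in Computer Science},
  volume    = {4894},
  pages     = {27--28},
  publisher = {Springer Berlin / Heidelberg},
  year      = {2008},
}
```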