Interpretable Concept-Based Classification with Shapley Values

Ignatov, Dmitry I.; Kwuida, Léonard (2020). Interpretable Concept-Based Classification with Shapley Values. In: International Conference on Conceptual Structures (ICCS 2020): Ontologies and Concepts in Mind and Machine. Lecture Notes in Computer Science, Vol. 12277 (pp. 90-102). Cham: Springer International Publishing. DOI: 10.1007/978-3-030-57855-8_7

Ignatov-Kwuida2020_Chapter_InterpretableConcept-BasedClas.pdf (Published Version, 311 kB; restricted to registered users; publisher holds copyright)

Among the family of rule-based classification models, there are classifiers based on conjunctions of binary attributes. For example, the JSM-method of automatic reasoning (named after John Stuart Mill) was formulated as a classification technique in terms of intents of formal concepts used as classification hypotheses. These JSM-hypotheses already constitute an interpretable model, since the respective conjunctions of attributes can be easily read by decision makers and thus provide plausible reasons for the model's predictions. However, from the interpretable machine learning viewpoint, it is advisable to also provide decision makers with the importance (or contribution) of individual attributes to the classification of a particular object, which may facilitate explanations by experts in domains with high error costs, such as medicine or finance. To this end, we use the notion of the Shapley value from cooperative game theory, which is also popular in machine learning. We provide the reader with theoretical results, basic examples, and an attribution of JSM-hypotheses by means of Shapley values on real data.
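The abstract's attribution idea can be illustrated with a small sketch (this is a toy illustration, not the paper's implementation): treat the attributes of an object as players in a cooperative game whose value function is 1 once a coalition of attributes covers some classification hypothesis (a conjunction, i.e. a set of attributes), and compute exact Shapley values by averaging marginal contributions over all orderings. The example hypotheses and the value function `v` below are hypothetical.

```python
from itertools import permutations
from math import factorial

def shapley_values(attributes, value):
    """Exact Shapley values: average each attribute's marginal
    contribution over all orderings (exponential; toy sizes only)."""
    phi = {a: 0.0 for a in attributes}
    for order in permutations(attributes):
        coalition = set()
        for a in order:
            before = value(coalition)
            coalition.add(a)
            phi[a] += value(coalition) - before
    n = factorial(len(attributes))
    return {a: contrib / n for a, contrib in phi.items()}

# Hypothetical hypotheses: conjunctions of attributes. A coalition
# "classifies" the object (value 1) once it covers some hypothesis.
hypotheses = [{"a", "b"}, {"c"}]

def v(coalition):
    return 1.0 if any(h <= coalition for h in hypotheses) else 0.0

phi = shapley_values(["a", "b", "c", "d"], v)
```

By the efficiency property, the values sum to v(all attributes) = 1; the attribute "d", which appears in no hypothesis, receives a Shapley value of 0, while "c" (a hypothesis on its own) receives the largest share.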

Item Type:

Book Section (Review Article)

Division/Institute:

Business School > Business Foundations and Methods

Name:

Ignatov, Dmitry I. and
Kwuida, Léonard (ORCID: 0000-0002-9811-0747)

ISBN:

978-3-030-57854-1

Series:

Lecture Notes in Computer Science

Publisher:

Springer International Publishing

Submitter:

Léonard Kwuida

Date Deposited:

06 Oct 2020 07:11

Last Modified:

21 Sep 2021 02:18

Publisher DOI:

10.1007/978-3-030-57855-8_7

Uncontrolled Keywords:

Interpretable Machine Learning · JSM hypotheses · formal concepts · Shapley values

ARBOR DOI:

10.24451/arbor.12972

URI:

https://arbor.bfh.ch/id/eprint/12972
