Abstract
Over the past few years, knowledge bases (KBs) such as DBpedia, Freebase, and YAGO have accumulated a massive amount of knowledge from web data. Despite their seemingly large size, however, individual KBs often lack comprehensive information on any given domain. For example, over 70 percent of people in Freebase have no recorded place of birth. The complementary nature of different KBs therefore motivates their integration by aligning instances. Meanwhile, since application-level systems, such as medical diagnosis, rely heavily on KBs, it is necessary to provide users with trustworthy explanations of why alignment decisions are made. To address this problem, we propose a new paradigm, explainable instance alignment (XINA), which provides user-understandable explanations for alignment decisions. Specifically, given an alignment candidate, XINA replaces the existing scalar representation of an aggregated score with decision- and explanation-vector spaces, for machine decisions and user understanding, respectively. To validate XINA, we perform extensive experiments on real-world KBs and show that XINA achieves performance comparable to state-of-the-art methods with far less human effort.
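The abstract's central move is to keep per-evidence alignment scores as a vector paired with human-readable reasons, rather than collapsing them into one aggregated scalar. Below is a minimal sketch of that idea, assuming a simplified per-attribute similarity model; the class, attribute names, and threshold are illustrative assumptions, not the paper's actual method or API.

```python
# Hypothetical sketch: a decision vector (machine-facing scores) paired with an
# explanation vector (user-facing reasons), instead of one aggregated scalar.
from dataclasses import dataclass


@dataclass
class AlignmentCandidate:
    """A pair of instances from two KBs with per-attribute similarity evidence."""
    instance_a: str
    instance_b: str
    evidence: dict  # attribute name -> similarity score in [0, 1]


def decision_vector(cand: AlignmentCandidate) -> list:
    """Keep per-evidence scores as a vector; no lossy scalar aggregation."""
    return [cand.evidence[attr] for attr in sorted(cand.evidence)]


def explanation_vector(cand: AlignmentCandidate, threshold: float = 0.8) -> list:
    """Human-readable reasons, aligned one-to-one with the decision vector."""
    return [
        f"'{attr}' {'matches' if score >= threshold else 'differs'} "
        f"(similarity {score:.2f})"
        for attr, score in sorted(cand.evidence.items())
    ]


if __name__ == "__main__":
    cand = AlignmentCandidate(
        instance_a="dbpedia:Barack_Obama",
        instance_b="freebase:m.02mjmr",
        evidence={"birth_date": 1.0, "birth_place": 0.91, "name": 0.97},
    )
    print(decision_vector(cand))        # what the machine decides on
    for reason in explanation_vector(cand):
        print(reason)                   # what the user sees
```

The design point is that both vectors index the same evidence, so a user can trace each component of the machine's decision back to a concrete, understandable reason.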
Original language | English |
---|---|
Article number | 8540085 |
Pages (from-to) | 388-401 |
Number of pages | 14 |
Journal | IEEE Transactions on Knowledge and Data Engineering |
Volume | 32 |
Issue number | 2 |
DOIs | |
State | Published - Feb 1 2020 |
Externally published | Yes |
Funding
This work was supported by the IITP/MSIT grant (2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence).
Funders | Funder number |
---|---|
IITP/MSIT | 2017-0-01779 |
Keywords
- KB integration
- Knowledge base
- entity resolution
- instance alignment
- interpretability
- ontology matching