Date Published: May 22, 2019
Comments Due: July 3, 2019 (public comment period is CLOSED)
Email Questions to: xai@nist.gov
This short paper introduces an approach to producing explanations or justifications of decisions made by some artificial intelligence and machine learning (AI/ML) systems, using methods derived from those for fault location in combinatorial testing. We show that validation and explainability issues are closely related to the fault-location problem in combinatorial testing, and that certain methods and tools developed for fault location can also be applied to explainability. The approach is particularly useful in classification problems, where the goal is to determine an object’s membership in a set based on its characteristics. We use a conceptually simple scheme to make classification decisions easy to justify: identifying combinations of feature values that are present in members of the identified class but absent or rare in non-members. The method has been implemented in a prototype tool called ComXAI, and examples of its application across a range of domains are included to show the utility of these methods.
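The core scheme described above can be sketched in a few lines. The code below is an illustrative approximation, not the ComXAI tool itself: it enumerates t-way combinations of feature values found in class members and keeps those that occur in at most a chosen fraction of non-members. The function name, the dict-based sample representation, and the `max_nonmember_rate` threshold are all assumptions made for this sketch.

```python
from itertools import combinations

def distinguishing_combinations(members, non_members, t=2, max_nonmember_rate=0.0):
    """Find t-way feature-value combinations present in class members but
    absent (or rare, per max_nonmember_rate) among non-members.

    Each sample is a dict mapping feature name -> value. Illustrative
    sketch only; not the ComXAI implementation.
    """
    results = []
    seen = set()
    for sample in members:
        # Enumerate all t-way combinations of this member's feature values.
        for combo in combinations(sorted(sample.items()), t):
            if combo in seen:
                continue
            seen.add(combo)
            # Count how often the same combination occurs in non-members.
            hits = sum(all(s.get(f) == v for f, v in combo) for s in non_members)
            rate = hits / len(non_members) if non_members else 0.0
            if rate <= max_nonmember_rate:
                results.append(combo)
    return results

# Hypothetical toy data: one "bird" member vs. two non-member animals.
members = [{"wings": 1, "eggs": 1, "legs": 2}]
non_members = [{"wings": 0, "eggs": 0, "legs": 4},
               {"wings": 0, "eggs": 1, "legs": 2}]
found = distinguishing_combinations(members, non_members, t=2)
```

Here the pair (eggs=1, legs=2) is rejected because a non-member also exhibits it, while pairs involving wings=1 survive and can serve as the justification for the classification decision.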
Publication:
Draft White Paper
Supplemental Material:
None available
Document History:
05/22/19: White Paper (Draft)
Topics:
Security and Privacy: assurance; testing & validation
Technologies: artificial intelligence