In the 1950s, McCarthy proposed the use of formal logic in artificial intelligence, introducing the idea of representing knowledge in a formal language to enable algorithmic reasoning. This proposal laid the foundation for the field of knowledge representation and reasoning, which investigates how to structure knowledge and draw new conclusions that extend existing information. Early artificial intelligence systems based on formal logic were limited by their reliance on monotonic reasoning, where adding new knowledge never revokes prior inferences. While monotonicity preserves consistency in some settings, it fails to accommodate exceptions, often rendering a knowledge base unsatisfiable even when it accurately reflects its domain. To address these limitations, researchers developed non-monotonic reasoning, which supports defeasible or “common sense” inference by allowing conclusions to be revised in light of new information. Among the most influential approaches to non-monotonic reasoning is the Kraus–Lehmann–Magidor (KLM) framework, notable for its strong theoretical foundations and practical applicability. KLM-style entailment can be expressed syntactically, through formula manipulations, or semantically, through ranked models. This dissertation focuses on the syntactic approach, examining three central entailment algorithms: Rational Closure, Lexicographic Closure, and Relevant Closure. Beyond computing entailments, reasoning systems must also explain how their inferences are derived, making explanation services a critical component of symbolic reasoning. Explanations improve transparency and usability by helping users understand entailments, uncover implicit knowledge, and diagnose or repair knowledge bases when conflicts arise.
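To make the notion of defeasible entailment concrete, the following is a minimal propositional sketch of Rational Closure, not the dissertation's implementation. It assumes a toy representation in which defeasible conditionals are (antecedent, consequent) pairs of Python boolean-expression strings, classical entailment is decided by brute-force truth-table search, and all function names are illustrative.

```python
import re
from itertools import product

KEYWORDS = {"and", "or", "not", "True", "False"}

def atoms(formulas):
    # Collect propositional atom names, ignoring Python boolean keywords.
    return sorted({t for f in formulas for t in re.findall(r"[A-Za-z_]\w*", f)} - KEYWORDS)

def entails(kb, query):
    # Brute-force classical entailment: every model of kb must satisfy query.
    props = atoms(list(kb) + [query])
    for values in product([False, True], repeat=len(props)):
        env = dict(zip(props, values))
        if all(eval(f, {}, env) for f in kb) and not eval(query, {}, env):
            return False
    return True

def materialize(a, c):
    # Material counterpart of the defeasible conditional a |~ c.
    return f"not ({a}) or ({c})"

def rank_conditionals(conds):
    # Partition conditionals into ranks by iterated exceptionality:
    # a conditional is exceptional when the materialized set refutes its antecedent.
    ranks, remaining = [], list(conds)
    while remaining:
        mats = [materialize(a, c) for a, c in remaining]
        exceptional = [(a, c) for a, c in remaining if entails(mats, f"not ({a})")]
        if len(exceptional) == len(remaining):
            ranks.append(remaining)  # everything is infinitely exceptional
            break
        ranks.append([p for p in remaining if p not in exceptional])
        remaining = exceptional
    return ranks

def rational_closure_entails(conds, a, c):
    # Discard lowest ranks until the antecedent is consistent, then check classically.
    ranks = rank_conditionals(conds)
    for i in range(len(ranks)):
        mats = [materialize(x, y) for r in ranks[i:] for x, y in r]
        if not entails(mats, f"not ({a})"):
            return entails(mats, materialize(a, c))
    return True
```

On the standard penguin example (`b |~ f`, `p |~ b`, `p |~ not f`), this sketch concludes that birds fly while penguins defeasibly do not, illustrating the non-monotonic retraction that a purely classical, monotonic knowledge base cannot express.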
Justifications, first introduced for Rational Closure as a defeasible counterpart to classical justification, have since become central to explainable artificial intelligence as a mechanism for building trustworthy and reliable systems. This dissertation develops and analyses justification algorithms for all KLM entailment formalisms, complemented by the design and implementation of a web-based reasoning tool. The tool integrates an intuitive browser interface with a high-performance reasoning engine, enabling users to define knowledge bases, assert conditionals, compute consequences, and receive clear explanations. It supports both novices and experts by offering visualisations of derivations, comparative analysis across reasoning systems, and reproducible experimentation. Explanations identify minimal subsets of premises required for each inference, thereby enhancing interpretability. The implemented algorithms are evaluated on synthetic and benchmark knowledge bases of varying size, rank depth, and defeasible statement distribution. Performance testing compares baseline and optimised implementations, showing that some algorithms perform best with sparse, low-rank data, while others are better suited to dense, high-rank configurations. These findings provide practical guidance for selecting entailment strategies based on knowledge base characteristics and identify promising directions for further optimisation in automated reasoning environments.
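The idea of an explanation as a minimal subset of premises can be sketched in the classical setting with a deletion-based shrinking loop; this is an illustrative toy, not one of the dissertation's defeasible justification algorithms. It assumes formulas are Python boolean-expression strings and reuses a brute-force entailment check; all names are hypothetical.

```python
import re
from itertools import product

KEYWORDS = {"and", "or", "not", "True", "False"}

def atoms(formulas):
    # Collect propositional atom names, ignoring Python boolean keywords.
    return sorted({t for f in formulas for t in re.findall(r"[A-Za-z_]\w*", f)} - KEYWORDS)

def entails(kb, query):
    # Brute-force classical entailment: every model of kb must satisfy query.
    props = atoms(list(kb) + [query])
    for values in product([False, True], repeat=len(props)):
        env = dict(zip(props, values))
        if all(eval(f, {}, env) for f in kb) and not eval(query, {}, env):
            return False
    return True

def justification(kb, query):
    # Shrink kb to one minimal entailing subset: try deleting each formula
    # in turn, keeping the deletion whenever entailment survives.
    assert entails(kb, query), "query must be entailed by the full kb"
    core, i = list(kb), 0
    while i < len(core):
        rest = core[:i] + core[i + 1:]
        if entails(rest, query):
            core = rest      # formula i is redundant; retry same index
        else:
            i += 1           # formula i is needed; keep it
    return core
```

For example, `justification(["b", "not b or f", "x"], "f")` discards the irrelevant premise `x` and returns only the two formulas the inference actually depends on, which is precisely the interpretability benefit described above.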