Competency Questions (CQs) are sets of natural language questions drawn from a given domain and intended for use in ontology engineering processes. Although substantial research has examined the important roles CQs play in the ontology engineering life cycle, uptake of CQs among ontology engineers remains low. Moreover, where CQs are developed and used as functional requirements for ontology development, they are subsequently sparsely used in other ontology engineering processes such as verification, validation, and evaluation. A lack of tools to support the actual authoring of CQs and the absence of measures of CQ quality are among the reasons identified for this low uptake. This research aims to address these problems by 1) proposing a corpus-based approach to authoring CQs and automating the process; 2) developing metrics for measuring CQs, drawing on a subset of the following criteria: grammatical quality, translatability to axioms, diversity, and re-occurrence; and 3) evaluating CQs created through the corpus-based approach against those created through ad-hoc manual methods. The research will provide a novel approach as well as instruments (a tool and metrics) to support the authoring and assessment of CQs, and potentially enhance their uptake beyond the initial phase of the ontology engineering life cycle.
