Improving Research Question Quality with Controlled Natural Languages and Large Language Models
Project Showcase

RQ-CNL

By: Rector Ratsaka, Emma Van Der Berg, Mandikudza Dangwa

Supervised by: Zola Mahlaza


About

Abstract

Formulating a well-defined research question (RQ) is a crucial yet often challenging step in the research process, particularly for trainee researchers. Poorly constructed RQs can lead to ambiguous objectives and ineffective methodologies, reducing the quality of research outcomes. While digital tools exist for literature discovery and writing, there is a significant gap in computational support for RQ formulation. This project addresses that gap by developing a Controlled Natural Language for Research Question Formulation (RQ-CNL). The work focuses on designing reusable templates that provide structural guidance, helping researchers articulate their research intentions with clarity and precision. In addition to the CNL templates, the project proposes two complementary models: one to evaluate the quality of RQs against criteria such as fluency, relevance, and adherence to the FINERMAPS framework, and another to suggest improvements to low-quality or vague RQs. Together, these components aim to support and enhance the early stages of research design.
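
To make the intended workflow concrete, the short Python sketch below shows how a slot-based CNL template and a rough quality check might fit together. It is purely illustrative: the template wording, slot names, and surface checks are hypothetical placeholders rather than the project's actual RQ-CNL templates, and the toy checks merely stand in for the proposed evaluation model, which scores criteria such as fluency, relevance, and FINERMAPS adherence.

# Illustrative sketch only: template text, slot names, and checks below are
# hypothetical placeholders, not the project's actual RQ-CNL templates or models.
from dataclasses import dataclass, field


@dataclass
class RQTemplate:
    """A controlled-natural-language template with named slots."""
    pattern: str                      # e.g. "How does {intervention} affect {outcome} among {population}?"
    slots: tuple = field(default=())  # slot names the researcher must fill

    def instantiate(self, **values) -> str:
        """Fill every slot; refuse partially filled templates so vague RQs are caught early."""
        missing = [s for s in self.slots if s not in values]
        if missing:
            raise ValueError(f"Unfilled slots: {missing}")
        return self.pattern.format(**values)


def naive_quality_check(rq: str) -> dict:
    """Toy stand-in for the evaluation model: surface checks, not real FINERMAPS scoring."""
    return {
        "ends_with_question_mark": rq.strip().endswith("?"),
        "reasonable_length": 8 <= len(rq.split()) <= 40,
        "names_a_population_or_context": any(w in rq.lower().split() for w in ("in", "among", "for")),
    }


if __name__ == "__main__":
    template = RQTemplate(
        pattern="How does {intervention} affect {outcome} among {population}?",
        slots=("intervention", "outcome", "population"),
    )
    rq = template.instantiate(
        intervention="structured template guidance",
        outcome="research question clarity",
        population="trainee researchers",
    )
    print(rq)
    print(naive_quality_check(rq))

In the project itself, the template library would encode the RQ-CNL grammar, and the two proposed models would replace the naive checks: one scoring a candidate RQ against the quality criteria, the other suggesting rewrites for RQs that score poorly.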

Videos (1)

Watch presentations, demos, and related content

Documents (1)

Downloadable resources and documentation

Click "View Full" to open documents in a new window

Gallery (1)

Explore the visual story of this exhibit