Core Domain Model
The Domain Model provides a structure for knowledge and educational content based on the KLI framework, which couples educational materials to applicable skills that the learner should develop. The main building blocks of the domain model include:
- The knowledge unit (KU), which represents a cohesive set of knowledge components focused around a learning goal (e.g., as defined by Bloom's Taxonomy).
- The knowledge component (KC), which represents a skill that the learner should master. These skills can be composed into a hierarchy, where the learner should master the child skills before attempting to master the parent skill.
- The instructional item (II), a learning object that presents information, explanations, examples, etc. with the goal of facilitating skill development.
- The assessment item (AI), a learning object whose purpose is to verify that learning has occurred and that a skill is mastered.
The following diagram showcases the main building blocks of the domain model and their relationship.

A KU is defined as a set of closely-related skills (i.e., KCs). As an example, a KU for the code quality topic of "Meaningful Names" might consist of multiple KCs that examine the different aspects of meaningful names, including KCs dedicated to:
- Respecting team conventions when naming code identifiers.
- Removing noise words that are generic or introduce cognitive load without enhancing meaning.
- Using terminology from the problem domain in the part of the codebase that models the domain.
For more examples, we provide a set of KCs for two units on a dedicated page. KCs are defined through the learning engineering practice of knowledge modeling. For more details on KCs and knowledge modeling, review the related section on the KLI page. Notably, our domain model includes the hierarchy relationship between KCs, but does not explicitly model the prerequisite relationship.
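To make these relationships concrete, the following is a minimal sketch of how the building blocks could relate in code, including the KC hierarchy. All class and property names are illustrative assumptions, not the actual entities of the codebase.

```csharp
using System.Collections.Generic;

// Illustrative sketch of the domain model building blocks (all names are assumptions).
public class KnowledgeUnit
{
    public int Id { get; set; }
    public string LearningGoal { get; set; }
    // A KU groups a cohesive set of closely-related skills.
    public List<KnowledgeComponent> KnowledgeComponents { get; set; } = new();
}

public class KnowledgeComponent
{
    public int Id { get; set; }
    public string ExpectedSkill { get; set; }
    // Child KCs should be mastered before the parent KC (the hierarchy relationship).
    public List<KnowledgeComponent> Children { get; set; } = new();
    // Learning objects attached to the skill.
    public List<InstructionalItem> InstructionalItems { get; set; } = new();
    public List<AssessmentItem> AssessmentItems { get; set; } = new();
}

// Learning objects: IIs present content, AIs verify mastery.
public abstract class InstructionalItem { public int Id { get; set; } }
public abstract class AssessmentItem { public int Id { get; set; } }
```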
The learner interacts with learning objects (i.e., instructional and assessment items) through learning instruments. Each learning instrument defines the format of the learning object and the interaction pattern (i.e., UI component) expected of the learner. For example, instructional items present content in some format (e.g., markdown text or video), while assessment items estimate skill development by presenting a problem and evaluating the learner's submission (e.g., multiple-response or short-answer questions).
Currently supported instructional item types include: Markdown snippet, Image, and Video. The Markdown snippet is the most flexible, as it provides a way to present text, code, and inline images and videos. The separate Image and Video types enable more fine-grained features (e.g., zooming in) than their markdown equivalents.
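As a sketch of how these types could specialize the InstructionalItem base from the previous snippet (the property names are assumptions):

```csharp
// Hypothetical specializations of the InstructionalItem base (names are assumptions).
public class MarkdownSnippet : InstructionalItem
{
    public string Content { get; set; }     // text, code, inline images and videos
}

public class Image : InstructionalItem
{
    public string Url { get; set; }
    public string Caption { get; set; }     // separate type enables features such as zooming
}

public class Video : InstructionalItem
{
    public string Url { get; set; }
}
```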
Currently supported assessment item types include:
- Multiple-response questions, where a learner selects zero, one or more correct statements and receives feedback for each statement.
- Short-answer questions, where a learner writes zero, one or more comma-separated words.
- Arrange tasks, where a learner distributes elements into different categories and receives feedback for each element on why it belongs to a specific category.
- Refactoring challenges, where a learner augments starting code to enhance its quality and receives hints for further improvement.
Each assessment item type defines a related submission and evaluation type. The submission type defines the structure of the learner's answer to the assessment, while the evaluation type defines the structure of the system's response, including the correctness level, hints, and feedback. For a more detailed analysis of assessment items, review the related KLI section.
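For illustration, a multiple-response question with its submission and evaluation pair might look like the following sketch; the Submission and Evaluation base classes and all property names are assumptions standing in for the real types.

```csharp
using System.Collections.Generic;

// Assumed base classes for the learner's answer and the system's response.
public abstract class Submission { }
public abstract class Evaluation { public double CorrectnessLevel { get; set; } }

// Hypothetical multiple-response question and its related submission/evaluation types.
public class MultipleResponseQuestion : AssessmentItem
{
    public string Text { get; set; }
    public List<Statement> Statements { get; set; } = new();
}

public class Statement
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool IsCorrect { get; set; }
    public string Feedback { get; set; }    // feedback shown for this statement after evaluation
}

public class MrqSubmission : Submission
{
    public List<int> SelectedStatementIds { get; set; } = new();
}

public class MrqEvaluation : Evaluation
{
    // Per-statement feedback, keyed by statement Id.
    public Dictionary<int, string> StatementFeedback { get; set; } = new();
}
```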
Challenges are an assessment item type for developing procedural knowledge for code refactoring. We currently support a plugin for Visual Studio and Visual Studio Code through which the learner can solve challenges hosted here.
The following communication diagram illustrates the basic flow. Notably, the major steps are the same for all AI types; the difference lies in the details of the second step.

The flow begins with the learner loading the challenge code and refactoring it according to the assessment's instructions.
- Once the Tutor receives a submission, it delegates it to the Knowledge Component Mastery object in the Learner Model. This object tracks all actions the learner makes while interacting with a KC, such as viewing IIs or answering an AI.
- From here the submission is passed to the related challenge object, which encapsulates the logic for evaluating a submission.
- It does this by first processing the submitted source code into our Code Model with the aid of the Roslyn API. Our code model contains information useful for quality analysis, such as identifier names, the connections between code elements, and values for various metrics.
- Next, it applies a set of fulfillment strategies that analyze the code's cleanliness, such as rules that check metric thresholds (see the sketch after this list).
- Each violated rule introduces a hint to the hint directory, which is sent back to the learner to assist them in improving the submission.
- The system updates the mastery and saves the submission, along with the evaluation, to an event store.
- The system delivers the evaluation, which is displayed in the IDE Plugin.
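The core of the evaluation step could look roughly like the following sketch. The Roslyn calls (CSharpSyntaxTree.ParseText, GetRoot, DescendantNodes) are real APIs, while the evaluator, fulfillment-strategy, and rule types, as well as the metric threshold, are assumptions standing in for the real code model and rules.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Sketch of evaluating a refactoring challenge submission (types are assumptions).
public class ChallengeEvaluator
{
    private readonly List<IFulfillmentStrategy> _strategies;

    public ChallengeEvaluator(List<IFulfillmentStrategy> strategies) => _strategies = strategies;

    public List<string> Evaluate(string submittedSourceCode)
    {
        // 1. Parse the submitted code with Roslyn into a syntax tree (a simplified code model).
        var tree = CSharpSyntaxTree.ParseText(submittedSourceCode);
        var methods = tree.GetRoot().DescendantNodes()
                          .OfType<MethodDeclarationSyntax>().ToList();

        // 2. Apply each fulfillment strategy; every violated rule contributes a hint.
        var hints = new List<string>();
        foreach (var strategy in _strategies)
            hints.AddRange(strategy.CheckRules(methods));

        // 3. The collected hints are returned to the learner to guide further improvement.
        return hints;
    }
}

public interface IFulfillmentStrategy
{
    IEnumerable<string> CheckRules(IReadOnlyList<MethodDeclarationSyntax> methods);
}

// Example strategy: a metric-threshold rule on method length (the threshold is illustrative).
public class MethodLengthRule : IFulfillmentStrategy
{
    public IEnumerable<string> CheckRules(IReadOnlyList<MethodDeclarationSyntax> methods) =>
        methods.Where(m => m.Body != null && m.Body.Statements.Count > 15)
               .Select(m => $"Method '{m.Identifier.Text}' is long; consider extracting smaller methods.");
}
```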
The refactoring challenge subsystem is a useful example of how the Tutor can be extended to support arbitrary AI types. Essentially, expanding the Tutor with a new AI type requires the following (a minimal sketch follows the list):
- Creating three classes that inherit from the AI, Submission, and Evaluation base classes, respectively.
- Creating an endpoint (Controller) which accepts the newly-defined submission from a client-facing learning instrument (e.g., web component, tool plugin) and responds with the newly-defined evaluation.
- Defining the client-facing learning instrument that targets the newly-defined endpoint.
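A minimal sketch of the first two steps on the server side, assuming an ASP.NET Core backend; the new AI type, its route, and the evaluation logic are hypothetical, and the base classes are the ones assumed in the sketches above.

```csharp
using Microsoft.AspNetCore.Mvc;

// Hypothetical new AI type with its related submission and evaluation classes.
public class CodeReviewChallenge : AssessmentItem { public string CodeToReview { get; set; } }
public class CodeReviewSubmission : Submission { public string ReviewComments { get; set; } }
public class CodeReviewEvaluation : Evaluation { public string Feedback { get; set; } }

// Endpoint that accepts the new submission type and responds with the new evaluation type.
[Route("api/code-review")]
[ApiController]
public class CodeReviewController : ControllerBase
{
    [HttpPost("{assessmentItemId}")]
    public ActionResult<CodeReviewEvaluation> Submit(
        int assessmentItemId, [FromBody] CodeReviewSubmission submission)
    {
        // In the real system, the submission would be delegated to the learner model and the
        // related AI object for evaluation; here we return a stub evaluation for illustration.
        return Ok(new CodeReviewEvaluation { Feedback = "Stub feedback for item " + assessmentItemId });
    }
}
```

The client-facing learning instrument (e.g., a web component or tool plugin) then targets this endpoint, posting the newly-defined submission and rendering the returned evaluation.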