DiReC: Disentangled Contrastive Representation for Tutor Identity Classification

This repository contains the official implementation of DiReC (Disentangled Contrastive Representation), a two-stage framework for tutor identity classification. This work was submitted to the BEA 2025 Shared Task 5: Tutor Identity Classification, where it achieved 3rd place with a macro-F1 score of 0.9172.

The goal of the task is to classify responses from nine different tutors: seven Large Language Models (LLMs) and two human tutors (novice and expert). Our approach, DiReC, leverages disentangled representation learning to separate the semantic content of a response from its stylistic features, which is crucial for identifying the authoring tutor.

Table of Contents

  • Model Architecture
  • Two-Stage Training
  • Results
  • Installation
  • Usage
  • Configuration
  • Paper link
  • Citation

Model Architecture

DiReC uses a microsoft/deberta-v3-large encoder as its backbone. The [CLS] token embedding from the encoder is passed through two separate projection heads to create disentangled content and style embeddings. These two embeddings are then concatenated and fed into a linear classifier to predict the tutor's identity.

Figure 1: The DiReC framework, showing Stage 1 (content focus), Stage 2 (joint training), and the final version with a CatBoost classifier.

The core idea is that the content embedding captures what is being said (semantics, facts), while the style embedding captures how it is being said (tone, verbosity, lexical choice), which is a strong signal for tutor identity.
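The sketch below illustrates this architecture in PyTorch. It is a minimal reconstruction from the description above, not the repository's exact code: the projection-head shapes, proj_dim, and class/attribute names are illustrative assumptions.

    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class DiReCSketch(nn.Module):
        """Minimal sketch: shared encoder, two projection heads, linear classifier."""

        def __init__(self, model_name="microsoft/deberta-v3-large",
                     proj_dim=256, num_tutors=9):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            hidden = self.encoder.config.hidden_size
            # Two separate heads over the [CLS] embedding (sizes are illustrative).
            self.content_head = nn.Linear(hidden, proj_dim)
            self.style_head = nn.Linear(hidden, proj_dim)
            # The classifier sees the concatenated [content; style] embedding.
            self.classifier = nn.Linear(2 * proj_dim, num_tutors)

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]              # [CLS] token embedding
            content = self.content_head(cls)
            style = self.style_head(cls)
            logits = self.classifier(torch.cat([content, style], dim=-1))
            return logits, content, style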

Two-Stage Training

To effectively disentangle the representations, we employ a two-stage training strategy:

Stage 1: Content-Focused Pre-training

  • The style projection head is frozen.
  • The model (encoder, content head, classifier) is trained using only Cross-Entropy (CE) Loss.
  • This stage forces the model to learn robust, discriminative features based purely on the semantic content of the responses.

Stage 2: Joint Disentangled Training

  • The style projection head is unfrozen.
  • The model is trained jointly with a combined loss function:
    • Cross-Entropy Loss ($\mathcal{L}_{CE}$): Continues to guide the primary classification task.
    • Supervised Contrastive Loss ($\mathcal{L}_{SupCon}$): Applied to the style embeddings. This encourages responses from the same tutor to have similar style representations, pulling them closer in the embedding space.
    • Disentanglement Loss ($\mathcal{L}_{dis}$): A cosine-based loss that penalizes similarity between the content and style embeddings, enforcing their orthogonality.

The final loss function in Stage 2 is: $$\mathcal{L} = \lambda_{CE}\mathcal{L}_{CE} + \lambda_{sty}\mathcal{L}_{SupCon} + \lambda_{dis}\mathcal{L}_{dis}$$
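Below is a minimal sketch of this combined objective, assuming a standard supervised contrastive loss over the style embeddings and an absolute-cosine penalty between content and style; the $\lambda$ weights shown are illustrative, not the values used in the paper.

    import torch
    import torch.nn.functional as F

    def supcon_loss(style, labels, temperature=0.1):
        """Simplified supervised contrastive loss over the style embeddings of a batch."""
        z = F.normalize(style, dim=-1)
        sim = z @ z.T / temperature
        self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(self_mask, float("-inf"))            # drop self-pairs
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        log_prob = log_prob.masked_fill(self_mask, 0.0)            # avoid -inf * 0
        # Positives: responses from the same tutor, excluding the anchor itself.
        pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
        return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

    def disentanglement_loss(content, style):
        """Cosine-based penalty pushing content and style embeddings toward orthogonality."""
        return F.cosine_similarity(content, style, dim=-1).abs().mean()

    def stage2_loss(logits, labels, content, style,
                    lambda_ce=1.0, lambda_sty=0.5, lambda_dis=0.1):  # illustrative weights
        ce = F.cross_entropy(logits, labels)
        sup = supcon_loss(style, labels)
        dis = disentanglement_loss(content, style)
        return lambda_ce * ce + lambda_sty * sup + lambda_dis * dis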

Results

Our two-stage DiReC model achieved a macro-F1 score of 0.9042 on the validation set. By replacing the final linear classifier with a CatBoost classifier trained on the learned content and style embeddings, performance improved to 0.9101.

For the final submission, we applied the Hungarian algorithm as a post-processing step to ensure unique tutor assignments within each conversation, which resulted in a final leaderboard macro-F1 score of 0.9172.
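A minimal sketch of that post-processing step is shown below, assuming each conversation yields a matrix of per-response tutor probabilities; the exact grouping and probability extraction in the repository may differ.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_unique_tutors(probs):
        """Pick one distinct tutor per response within a conversation.

        probs: (n_responses, n_tutors) array of predicted probabilities.
        linear_sum_assignment minimises cost, so the probabilities are negated
        to maximise the total assigned probability instead.
        """
        row_idx, col_idx = linear_sum_assignment(-probs)
        tutors = np.full(probs.shape[0], -1)
        tutors[row_idx] = col_idx
        return tutors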

Figure 2: t-SNE visualization of content embeddings at epochs 1, 3, and 6, showing how tutor clusters form during training.

Installation

  1. Clone the repository:

    git clone https://github.com/your-username/DiReC.git
    cd DiReC
  2. Create a Python virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  3. Install the required dependencies:

    pip install -r requirements.txt
  4. Set up environment variables: Create a .env file in the root directory and add your API keys:

    WANDDB_API_KEY="your_weights_and_biases_api_key"
    WANDDB_EXPERIMENT_NAME="DiReC-Tutor-Classification"
    HF_TOKEN="your_hugging_face_api_token"
  5. Download the dataset: Place the cleaned_mrbench_devset.csv and cleaned_mrbench_testset.csv files into a dataset/ directory in the project root.

Usage

Follow this two-step process to train and evaluate the full model.

Step 1: Train the DiReC Encoder Model

Run the first script to train the base transformer model. This will save the model weights in the models/ directory.

python DiReC.py

The script will:

  1. Load and preprocess the data.
  2. Perform the two-stage training procedure.
  3. Log training progress, metrics, and visualizations to Weights & Biases.
  4. Evaluate the final model on the validation set.
  5. Save the best performing model's state dictionary to the models/ directory.

Step 2: Train the Final CatBoost Classifier

After the DiReC encoder is trained, run the second script:

python DiReC_Catboost.py

This script will:

  1. Load the trained DiReC model from models/direc_model_final.pth.
  2. Extract content and style embeddings for the train and validation sets.
  3. Train a CatBoost classifier on the extracted embeddings.
  4. Evaluate the final classifier and save it to models/catboost_on_direc_embeddings.cbm.
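A minimal sketch of steps 2 and 3 above, assuming the trained model returns (logits, content, style) and the dataloader yields input_ids, attention_mask, and labels; the names and parameters here are illustrative, not the repository's actual interface.

    import numpy as np
    import torch
    from catboost import CatBoostClassifier

    @torch.no_grad()
    def extract_embeddings(model, dataloader, device="cuda"):
        """Collect concatenated [content; style] embeddings and labels for one split."""
        model.eval()
        feats, labels = [], []
        for batch in dataloader:
            _, content, style = model(batch["input_ids"].to(device),
                                      batch["attention_mask"].to(device))
            feats.append(torch.cat([content, style], dim=-1).cpu().numpy())
            labels.append(batch["labels"].numpy())
        return np.concatenate(feats), np.concatenate(labels)

    # X_train, y_train = extract_embeddings(direc_model, train_loader)
    # X_val, y_val = extract_embeddings(direc_model, val_loader)
    # clf = CatBoostClassifier(loss_function="MultiClass", iterations=1000, verbose=100)
    # clf.fit(X_train, y_train, eval_set=(X_val, y_val))
    # clf.save_model("models/catboost_on_direc_embeddings.cbm")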

Configuration

You can adjust hyperparameters such as BATCH_SIZE, LR, num_epochs, and loss weights at the top of the DiReC.py script.
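For reference, those constants look roughly like the following; the names BATCH_SIZE, LR, and num_epochs come from the description above, while the loss-weight names and all values shown are illustrative, not the defaults shipped in DiReC.py.

    # Top of DiReC.py -- illustrative values only.
    BATCH_SIZE = 16       # sequences per batch
    LR = 2e-5             # encoder learning rate
    num_epochs = 6        # number of training epochs
    LAMBDA_CE = 1.0       # weight of the cross-entropy loss
    LAMBDA_STY = 0.5      # weight of the supervised contrastive (style) loss
    LAMBDA_DIS = 0.1      # weight of the disentanglement loss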

Paper link

Citation

If you find this work useful, please consider citing the original paper:

@inproceedings{tjitrahardja-hanif-2025-two,
        title = "Two Outliers at {BEA} 2025 Shared Task: Tutor Identity Classification using {DiReC}, a Two-Stage Disentangled Contrastive Representation",
        author = "Tjitrahardja, Eduardus and Hanif, Ikhlasul Akmal",
        booktitle = "Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications",
        month = jun,
        year = "2025",
        address = "Vienna, Austria",
        publisher = "Association for Computational Linguistics",
}
