
MEMIT-Merge

This repository contains the implementation of the paper MEMIT-Merge: Addressing MEMIT's Key-Value Conflicts in Same-Subject Batch Editing for LLMs (Findings of ACL 2025).

Built on top of the EasyEdit framework.

Abstract

As large language models continue to scale up, knowledge editing techniques that modify models' internal knowledge without full retraining have gained significant attention. MEMIT, a prominent batch editing algorithm, stands out for its capability to perform mass knowledge modifications. However, we uncover that MEMIT's editing efficacy significantly deteriorates when processing batches containing multiple edits sharing the same subject. Our analysis reveals this stems from MEMIT's key-value modeling framework: identical keys (derived from the shared subject) are forced to represent different values (corresponding to different knowledge), resulting in update conflicts during editing. To address this issue, we propose MEMIT-Merge, an enhanced approach that merges value computation processes for facts sharing the same subject, effectively resolving the performance degradation in same-subject batch editing scenarios. Experimental results demonstrate that when MEMIT's edit success rate drops to around 50% at larger batch sizes, MEMIT-Merge maintains a success rate exceeding 90%, showcasing remarkable robustness to subject entity collisions.
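
The key-value conflict can be seen in a few lines of linear algebra. The toy sketch below is not the paper's update rule; it only illustrates that when one key must map to two different values, a single linear update can at best land on their average, so neither edit is satisfied:

import numpy as np

# Toy illustration (not the paper's implementation): MEMIT-style edits seek a
# weight matrix W such that W @ k_i approximates v_i for each key/value pair.
rng = np.random.default_rng(0)
d = 8
k = rng.normal(size=d)           # one key, derived from a shared subject
v1 = rng.normal(size=d)          # value encoding fact 1
v2 = rng.normal(size=d)          # value encoding fact 2 (same subject, different fact)

# A batch with a duplicated key: least squares solves K^T W^T ~ V^T.
K = np.stack([k, k], axis=1)     # keys as columns, shape (d, 2)
V = np.stack([v1, v2], axis=1)   # values as columns, shape (d, 2)
X, *_ = np.linalg.lstsq(K.T, V.T, rcond=None)
W = X.T

# W @ k is a single vector, so it cannot equal both v1 and v2: the
# least-squares solution collapses to their average and neither edit holds.
print(np.allclose(W @ k, (v1 + v2) / 2))   # True

MEMIT-Merge sidesteps this by merging the value computation for all facts that share a subject, so the shared key is asked to store one consolidated value rather than several conflicting ones.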

Installation

Setup

  1. Clone the repository:
git clone https://github.com/NUSTM/MEMIT-Merge.git
cd MEMIT-Merge
  2. Install dependencies using conda:
conda env create -f environment.yml
conda activate unlearn

Or install using pip:

pip install -r requirements.txt

Usage

Running Experiments

Execute comparative experiments between MEMIT and MEMIT-Merge on same-subject and different-subject data:

# MEMIT-Merge experiments
sh edit_multi_with_eval_model.sh qwen2.5-1.5b-it data/diff_subject_same_format_final.json knowedit MEMIT-Merge aug_batch_edit
sh edit_multi_with_eval_model.sh qwen2.5-1.5b-it data/same_subject_same_format_final.json knowedit MEMIT-Merge aug_batch_edit

# MEMIT baseline experiments
sh edit_multi_with_eval_model.sh qwen2.5-1.5b-it data/diff_subject_same_format_final.json knowedit MEMIT aug_batch_edit
sh edit_multi_with_eval_model.sh qwen2.5-1.5b-it data/same_subject_same_format_final.json knowedit MEMIT aug_batch_edit
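
For programmatic use, EasyEdit's standard BaseEditor interface should apply. The sketch below is hypothetical: the hyperparameter import path, class name, and YAML path are assumptions, so check easyeditor/models/memit_merge/memit_hparams.py and hparams/ for the actual names used in this repository.

# Hypothetical sketch via EasyEdit's BaseEditor interface; names marked
# "assumed" are not confirmed by this repository's documentation.
from easyeditor import BaseEditor
from easyeditor.models.memit_merge.memit_hparams import MEMITHyperParams  # assumed import path

hparams = MEMITHyperParams.from_hparams('./hparams/MEMIT-Merge/qwen2.5-1.5b-it.yaml')  # assumed path
editor = BaseEditor.from_hparams(hparams)

# A batch of two edits sharing one subject -- the collision case where
# vanilla MEMIT degrades and MEMIT-Merge is designed to hold up.
# (subject is required by MEMIT-family methods; here it is assumed to be
# passed through EasyEdit's batch_edit keyword arguments.)
metrics, edited_model, _ = editor.batch_edit(
    prompts=['Albert Einstein was born in', 'Albert Einstein worked as a'],
    target_new=['Ulm', 'patent clerk'],
    subject=['Albert Einstein', 'Albert Einstein'],
)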

Computing Evaluation Metrics

After running the experiments, compute the evaluation metrics:

python data_analyse.py

Project Structure

MEMIT-Merge/
├── easyeditor/                    # EasyEdit framework
│   └── models/
│       └── memit_merge/          # MEMIT-Merge implementation
│           ├── compute_z.py      # Core computation for value vectors
│           ├── compute_ks.py     # Key computation
│           ├── memit_main.py     # Main editing logic
│           └── memit_hparams.py  # Hyperparameters
├── data/                         # Experimental datasets
│   ├── same_subject_same_format_final.json    # Same-subject data
│   └── diff_subject_same_format_final.json    # Different-subject data
├── hparams/                      # Hyperparameter configurations
├── results/                      # Experimental results
├── edit_multi_with_eval.py       # Main evaluation script
├── edit_multi_with_eval_model.sh # Experiment runner script
└── data_analyse.py              # Results analysis script

Key Features

  • Enhanced Knowledge Editing: Improved handling of multiple simultaneous edits
  • Reduced Interference: Better management of conflicts between different knowledge updates
  • Comprehensive Evaluation: Support for both same-subject and different-subject editing scenarios
  • EasyEdit Integration: Built on the robust EasyEdit framework for knowledge editing

Datasets

The repository includes two main datasets for evaluation:

  • Same-subject data: Knowledge edits that share the same subject entity
  • Different-subject data: Knowledge edits with different subject entities
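
For illustration, a same-subject batch could contain entries like the following. The field names are assumptions shown as Python dicts; inspect the JSON files under data/ for the actual schema:

# Illustrative same-subject batch (field names are assumptions; inspect
# data/same_subject_same_format_final.json for the real schema). Both
# entries share one subject, which is the collision case studied here.
import json

batch = [
    {"subject": "Albert Einstein",
     "prompt": "Albert Einstein was born in",
     "target_new": "Ulm"},
    {"subject": "Albert Einstein",
     "prompt": "Albert Einstein worked as a",
     "target_new": "patent clerk"},
]
print(json.dumps(batch, indent=2))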

Citation

If you use this code in your research, please cite our paper:

@inproceedings{dong-etal-2025-memit,
    title={MEMIT-Merge: Addressing MEMIT’s Key-Value Conflicts in Same-Subject Batch Editing for LLMs},
    author={Dong, Zilu and Shen, Xiangqing and Xia, Rui},
    booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
    year={2025},
    url={https://aclanthology.org/2025.findings-acl.415/}
}

Acknowledgments

This project is built on top of the EasyEdit framework.
