NVIDIA-NeMo/Curator


Accelerate Data Processing and Streamline Synthetic Data Generation with NVIDIA NeMo Curator

NeMo Curator is a Python library specifically designed for fast and scalable data processing and curation for generative AI use cases such as foundation language model pretraining, text-to-image model training, domain-adaptive pretraining (DAPT), supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT).

It greatly accelerates data processing and curation by leveraging GPUs with Dask and RAPIDS, resulting in significant time savings. The library provides a customizable and modular interface, simplifying pipeline expansion and accelerating model convergence through the preparation of high-quality tokens.

NeMo Curator also provides pre-built pipelines for synthetic data generation for customization and evaluation of generative AI systems. You can plug any OpenAI API-compatible model into NeMo Curator's synthetic data generation pipelines to process and curate high-quality synthetic data for various use cases.

Getting Started

New to NeMo Curator? Start with our quickstart guides for hands-on experience:

For production deployments and advanced configurations, see our Setup & Deployment documentation.


Key Features

With NeMo Curator, you can process raw data and curate high-quality data for training and customizing generative AI models such as LLMs, VLMs, and WFMs. NeMo Curator provides a collection of scalable data processing modules for text and image curation.

Text Curation

All of our text pipelines have strong multilingual support. With NeMo Curator, you can pick and choose the features you want and build your data curation pipelines. Text curation follows a three-stage workflow: Load → Process → Generate. A typical pipeline starts by downloading raw data from public resources, then applies cleaning and filtering steps, and optionally generates synthetic data for training enhancement.
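The three-stage workflow can be sketched with plain Python stand-ins. This is a conceptual illustration only; the function names and record shapes below are made up for this sketch and are not NeMo Curator's API, which provides GPU-accelerated modules for each stage.

```python
# Conceptual Load -> Process -> Generate sketch (illustrative, not the real API).

def load(raw_records):
    """Load stage: parse raw records into documents with text plus metadata."""
    return [{"id": i, "text": r.strip()} for i, r in enumerate(raw_records)]

def process(docs, min_words=3):
    """Process stage: filter out low-quality documents (here: too short)."""
    return [d for d in docs if len(d["text"].split()) >= min_words]

def generate(docs):
    """Generate stage: derive synthetic training examples (stub prompt)."""
    return [{**d, "synthetic": f"Summarize: {d['text']}"} for d in docs]

raw = [
    "The quick brown fox jumps over the lazy dog.",
    "hi",
    "Data curation improves model convergence.",
]
curated = generate(process(load(raw)))
print(len(curated))  # 2 documents survive the word-count filter
```

In the real library, each stage is a composable module, so filters and generators can be added or swapped without rewriting the pipeline.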

Load Data

  • Download and Extraction - Default implementations for Common Crawl, Wikipedia, and ArXiv sources with easy customization for other sources
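Extracted web documents are commonly stored one JSON object per line (JSONL). A minimal loader for that layout might look like the sketch below; the field names are an assumption for illustration, not the library's download/extraction output format.

```python
import io
import json

def read_jsonl(stream):
    """Yield one document dict per non-empty JSONL line."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical extracted record with "url" and "text" fields.
sample = io.StringIO('{"url": "https://example.com", "text": "hello world"}\n')
docs = list(read_jsonl(sample))
print(docs[0]["text"])  # hello world
```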

Process Data

  • Quality Assessment & Filtering

  • Deduplication

  • Content Processing & Cleaning

    • Text Cleaning - Remove improperly decoded Unicode characters, inconsistent line spacing, and excessive URLs
    • PII Redaction - Identify and remove personally identifiable information from training datasets
  • Specialized Processing
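Text cleaning and PII redaction can be illustrated with stdlib heuristics. The sketch below uses Unicode normalization, blank-line collapsing, and a simple email regex; it is a toy stand-in, not NeMo Curator's cleaning or PII modules (the library's PII redaction is far more thorough than a single regex).

```python
import re
import unicodedata

def clean_text(text):
    """Normalize Unicode and collapse runs of blank lines (toy cleaner)."""
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"\n{3,}", "\n\n", text)

# Illustrative email pattern only; real PII pipelines cover many entity types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_pii(text):
    """Replace email addresses with a placeholder token."""
    return EMAIL.sub("<EMAIL>", text)

doc = "Contact: jane.doe@example.com\n\n\n\nThanks!"
print(redact_pii(clean_text(doc)))
```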

Generate Data


Image Curation

NeMo Curator provides powerful image curation features to curate high-quality image data for training generative AI models such as LLMs, VLMs, and WFMs. Image curation follows a Load → Process workflow: download datasets in WebDataset format, create embeddings, apply quality filters (NSFW and Aesthetic), and remove duplicates using semantic deduplication.
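The semantic-deduplication idea can be sketched in a few lines: drop any item whose embedding is nearly identical (cosine similarity above a threshold) to an already-kept item. The tiny vectors and greedy loop below are purely illustrative; real pipelines use learned image embeddings and GPU-accelerated clustering at scale.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_dedup(embeddings, threshold=0.95):
    """Greedily keep indices whose embedding is not a near-duplicate of a kept one."""
    kept = []
    for i, e in enumerate(embeddings):
        if all(cosine(e, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Made-up 2-D embeddings: the second is a near-duplicate of the first.
embs = [(1.0, 0.0), (0.99, 0.05), (0.0, 1.0)]
print(semantic_dedup(embs))  # [0, 2]
```

The greedy pass is quadratic in the worst case; at production scale this is replaced by clustering so each item is only compared within its cluster.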

Load Data

Process Data


Module Ablation and Compute Performance

The modules within NeMo Curator were primarily designed to process and curate high-quality documents at scale. To evaluate the quality of the data, we curated Common Crawl documents and conducted a series of ablation experiments. In these experiments, we trained a 357M-parameter GPT-style model using datasets generated at various stages of our data curation pipeline, which was implemented in NeMo Curator.

The following figure shows that the use of different data curation modules implemented in NeMo Curator led to improved model zero-shot downstream task performance.


NeMo Curator leverages NVIDIA RAPIDS™ libraries like cuDF, cuML, and cuGraph along with Dask to scale workloads across multi-node, multi-GPU environments, significantly reducing data processing time. With NeMo Curator, developers can achieve 16X faster processing for text. Refer to the chart below for details.

NeMo Curator scales near-linearly, which means developers can accelerate their data processing by adding more compute. For example, deduplicating a 1.96-trillion-token subset of the RedPajama V2 dataset took NeMo Curator 0.5 hours with 32 NVIDIA H100 GPUs. Refer to the scaling chart below for more details.

Contribute to NeMo Curator

We welcome community contributions! Please refer to CONTRIBUTING.md for the process.