RegressLM: Easy Text-to-Text Regression


Google Research Blog | Setup | Usage | Extended Usage | Citing

Overview

RegressLM is a library for text-to-text regression. It works with any string input representation and supports pretraining and fine-tuning over multiple regression tasks.

Figure: RegressLM decoding a numerical performance metric from text.

Example application: directly regressing performance metrics from unstructured, textually represented system states in Google's massive compute clusters.

Setup
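
A local checkout of the repository is assumed; one way to obtain it (clone URL inferred from the repository name above):

git clone https://github.com/google-deepmind/regress-lm.git
cd regress-lm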

Get started by installing the core libraries:

pip install -e .

To run, e.g., T5Gemma variants, install the additional libraries:

pip install ".[extras]"

Usage

There are two main stages: inference and pretraining (optional).

Inference

The intended use case is to import the RegressLM class, which decodes floating-point predictions for a given input and can also be fine-tuned on new data.

from regress_lm import core
from regress_lm import rlm

# Create RegressLM with max input token length.
reg_lm = rlm.RegressLM.from_default(max_input_len=2048)

# Example (x,y) pairs, which can be fine-tuned against.
examples = [core.Example(x='hello', y=0.3), core.Example(x='world', y=-0.3)]
reg_lm.fine_tune(examples)

# Query inputs.
query1, query2 = core.ExampleInput(x='hi'), core.ExampleInput(x='bye')
samples1, samples2 = reg_lm.sample([query1, query2], num_samples=128)
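
Each query returns decoded samples that approximate the predictive distribution over y; a point estimate and an uncertainty measure can be formed by aggregating them. A minimal sketch, assuming the returned samples behave like 1-D arrays of floats:

import numpy as np

# Aggregate the 128 decoded samples into a point estimate and a spread.
# (Sketch only: assumes `samples1` is array-like of floats.)
prediction1 = float(np.median(samples1))
uncertainty1 = float(np.std(samples1))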

Pretraining

To produce better initial checkpoints for transfer learning, we recommend pretraining on large amounts of your own training data. Example pseudocode with PyTorch:

from torch import optim

from regress_lm import core
from regress_lm.models.pytorch import model as torch_model_lib

model = torch_model_lib.PyTorchModel(...)
optimizer = optim.Adafactor(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4
)
for _ in range(...):
  # Each training batch is a list of (x, y) examples.
  examples = [core.Example(x=..., y=...), ...]
  # Convert raw examples into model-ready tensors.
  tensor_examples = model.convert(examples)
  optimizer.zero_grad()
  loss, _ = model.compute_loss_and_metrics(tensor_examples)
  loss.backward()
  optimizer.step()
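
Once pretraining finishes, the weights can be stored with standard PyTorch utilities and reused as the initial checkpoint for fine-tuning. A sketch; the filename is arbitrary and no dedicated checkpointing helper from the library is assumed:

import torch

# Save the pretrained weights (standard PyTorch serialization).
torch.save(model.state_dict(), 'pretrained_rlm.pt')

# Later, restore them before fine-tuning.
model.load_state_dict(torch.load('pretrained_rlm.pt'))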

Boosting Performance and Extended Applications

Below, we describe ways to boost performance and extend the library to further applications.

Train Custom Vocabulary

You can generate a custom vocabulary trained on an offline corpus of data, e.g. mydata.txt:

encoder_vocab = SentencePieceVocab.from_corpus(corpus_path='mydata.txt', vocab_size=1024)
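
For intuition, this is conceptually similar to training a SentencePiece model directly on the corpus. A rough stand-alone illustration with the sentencepiece package (illustrative only, not the library's internal implementation):

import sentencepiece as spm

# Train a 1024-token SentencePiece model on the offline corpus.
# (Illustrative sketch of what a corpus-trained vocabulary involves.)
spm.SentencePieceTrainer.train(
    input='mydata.txt', model_prefix='mydata_sp', vocab_size=1024
)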

Larger Sizes

Larger model sizes may improve performance, although at higher computational cost:

model = PyTorchModel(num_encoder_layers=12, num_decoder_layers=12)

Multi-objective Support

For multi-objective regression, the RLM can also decode a concatenated sequence of objective tokens:

reg_lm = rlm.RegressLM.from_default(max_num_objs=2)

# Examples can have variable objective lengths.
examples = [core.Example(x='hello', y=[0.2]), core.Example(x='world', y=[-0.2, 0.3])]
reg_lm.fine_tune(examples)

# Now `samples` has shape (128, 2).
samples = reg_lm.sample([core.ExampleInput(x='hi')], num_samples=128)[0]
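
Per-objective estimates can then be read off the sample array. A sketch, assuming the returned samples convert to a NumPy array of shape (128, 2):

import numpy as np

samples_arr = np.asarray(samples)  # shape (128, 2): 128 samples x 2 objectives
obj1_estimate = float(np.median(samples_arr[:, 0]))
obj2_estimate = float(np.median(samples_arr[:, 1]))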

Pretrained Third-Party Models

A frozen T5Gemma encoder paired with our default decoder is supported:

reg_lm = rlm.RegressLM.from_t5gemma_encoder('google/t5gemma-s-s-prefixlm')

End-to-end T5Gemma is also supported:

from regress_lm.models.pytorch import t5gemma_model
model = t5gemma_model.T5GemmaModel('google/t5gemma-s-s-prefixlm')

Long-Context

To handle 100K+ input token lengths, alternative encoders (e.g. Mamba via mamba-ssm, and Performer) are supported:

model = PyTorchModel(encoder_type='mamba', additional_encoder_kwargs={'d_state': 128})
model = PyTorchModel(encoder_type='performer', additional_encoder_kwargs={'num_features': 256})
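
Note: the Mamba encoder relies on the third-party mamba-ssm package; if it is not already pulled in by the extras install above, it can presumably be added with:

pip install mamba-ssm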

Contributors and Citation

The codebase was written by: Xingyou Song, Yash Akhauri, Dara Bahri, Michal Lukasik, Arissa Wongpanich, Adrian N. Reyes, and Bryan Lewandowski.

If you find this project useful, please consider citing the relevant works:

@article{performance_prediction,
      title={Performance Prediction for Large Systems via Text-to-Text Regression},
      author={Yash Akhauri and Bryan Lewandowski and Cheng-Hsi Lin and Adrian N. Reyes and Grant C. Forbes and Arissa Wongpanich and Bangding Yang and Mohamed S. Abdelfattah and Sagi Perel and Xingyou Song},
      journal={arXiv preprint arXiv:2506.21718},
      year={2025}
}

@article{omnipred,
      title={OmniPred: Language Models as Universal Regressors},
      author={Xingyou Song and Oscar Li and Chansoo Lee and Bangding Yang and Daiyi Peng and Sagi Perel and Yutian Chen},
      journal={Trans. Mach. Learn. Res.},
      year={2024},
      url={https://openreview.net/forum?id=t9c3pfrR1X},
}

@article{decoding_regression,
      title={Decoding-based Regression},
      author={Xingyou Song and Dara Bahri},
      journal={Trans. Mach. Learn. Res.},
      year={2025},
      url={https://openreview.net/forum?id=avUQ8jguxg},
}

Disclaimer: This is not an officially supported Google product.
