# continued-pretraining

Here are 5 public repositories matching this topic...


This project evaluates continued pre-training of Llama 3.2 3B for the Serbian language, using a custom-made cloze-style benchmark. It supports grammatical, lexical, semantic, idiomatic, and factual sentence-completion tasks. The evaluation script calculates model accuracy based on log-likelihood scoring over masked token choices.

  • Updated Jun 19, 2025
  • Python
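The scoring approach described above can be sketched as follows: fill the cloze blank with each candidate, score the full sentence with a causal LM, and pick the highest-scoring choice. This is a minimal illustration, not the repository's actual code; the model name, helper functions, and the Serbian example item are assumptions.

```python
# Minimal sketch of cloze-style evaluation via log-likelihood scoring.
# Assumes a Hugging Face causal LM; names below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-3B"  # assumed base model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Sum of token log-probabilities under the model (teacher forcing)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so each position's logits predict the next token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_ll = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum().item()

def score_cloze(template: str, choices: list[str]) -> str:
    """Fill the blank with each choice; return the highest-scoring completion."""
    scores = [sequence_log_likelihood(template.format(c)) for c in choices]
    return choices[max(range(len(choices)), key=scores.__getitem__)]

# Hypothetical Serbian grammatical item: pick the correct completion.
item = "Deca se igraju u {}."
print(score_cloze(item, ["parku", "parkovima", "park"]))
```

Accuracy would then be the fraction of items where the top-scoring choice matches the gold answer. Note that summed log-likelihoods favor shorter completions, so a length-normalized variant (mean log-probability per token) is a common alternative.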
