Founded in Shanghai in 2013, rednote is a platform where users capture and share their lives through photos, text, videos, and live streams, building an interactive community around shared interests. Guided by its mission to Inspire Lives, rednote is becoming a vibrant hub for diverse lifestyles and a trusted companion to millions. The rednote hilab team, where "hi" stands for humane intelligence, is dedicated to pushing the boundaries of AI by creating more diverse and balanced forms of intelligence, such as interpersonal, spatial-visual, and musical intelligence. Our ultimate goal is to make AI a truly natural and beneficial companion to humanity.
- dots.llm1 is a large-scale MoE model that activates 14B of its 142B total parameters, delivering performance on par with state-of-the-art models. We have released (i) checkpoints for state-of-the-art medium-sized models and (ii) training recipes for the MoE models (see the usage sketch after this list).
- dots.ocr is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
- dots.vlm1 is the first vision-language model in the dots model family. Built upon a 1.2B-parameter vision encoder and the DeepSeek V3 large language model (LLM), dots.vlm1 demonstrates strong multimodal understanding and reasoning capabilities.
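As a quick orientation, below is a minimal sketch of loading the dots.llm1 instruct checkpoint with Hugging Face transformers. The repository id `rednote-hilab/dots.llm1.inst` and the `trust_remote_code` requirement are assumptions for illustration; check the actual model card for the exact usage.

```python
# Minimal sketch (assumed repo id and flags; see the model card for authoritative instructions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rednote-hilab/dots.llm1.inst"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # MoE weights total 142B params; only ~14B are active per token
    device_map="auto",
    trust_remote_code=True,
)

# Chat-style prompt via the tokenizer's chat template (standard transformers API).
messages = [{"role": "user", "content": "Summarize mixture-of-experts models in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```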