
December 3, 2025

Jina Embeddings v4: Universal Embeddings for Multimodal Multilingual Retrieval

Today we're releasing jina-embeddings-v4, our new 3.8-billion-parameter universal embedding model for text and images. It includes a set of task-specific LoRA adapters that optimize performance for the most popular retrieval tasks, including query-document retrieval, semantic matching, and code search. jina-embeddings-v4 achieves state-of-the-art retrieval performance on multimodal and multilingual tasks across the MTEB, MMTEB, CoIR, LongEmbed, STS, Jina-VDR, CLIP, and ViDoRe benchmarks, with particular strength in visually rich content such as tables, charts, diagrams, and mixtures of these. The model supports both single-vector and multi-vector embeddings.
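To illustrate the difference between the two output modes, here is a minimal sketch using synthetic NumPy vectors (not the model's actual API): single-vector embeddings are compared with cosine similarity, while multi-vector embeddings are typically scored with late interaction (MaxSim-style), where each query vector is matched to its best document vector and the matches are summed.

```python
import numpy as np

def cosine(u, v):
    # Single-vector scoring: one dense vector per text or image,
    # compared with cosine similarity.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def maxsim(query_vecs, doc_vecs):
    # Multi-vector (late-interaction) scoring: one vector per token/patch.
    # Each query vector picks its best-matching document vector; the
    # per-query maxima are summed into a single relevance score.
    sim = query_vecs @ doc_vecs.T        # all pairwise dot products
    return float(sim.max(axis=1).sum())  # MaxSim over doc vectors, summed

# Synthetic stand-ins for model outputs (dimensions are illustrative).
rng = np.random.default_rng(0)
q_single, d_single = rng.normal(size=128), rng.normal(size=128)
q_multi = rng.normal(size=(8, 128))    # e.g. 8 query token vectors
d_multi = rng.normal(size=(32, 128))   # e.g. 32 document token vectors

print(cosine(q_single, d_single))
print(maxsim(q_multi, d_multi))
```

The trade-off: single vectors are cheap to store and index, while multi-vector scoring preserves token- or patch-level detail, which helps on visually rich documents.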

TL;DR

  • Release of jina-embeddings-v4, a 3.8 billion parameter universal embedding model.
  • Optimized for text and image retrieval tasks with task-specific LoRA adapters.
  • Achieves state-of-the-art performance on multimodal and multilingual benchmarks.
  • Strong performance in processing visually rich content like tables and charts.
  • Supports both single-vector and multi-vector embeddings.
