- UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning (Paper • 2510.13515 • Published • 11 upvotes)
- TianchengGu/UniME-V2-LLaVA-OneVision-8B (Image-Text-to-Text • 8B • Updated • 12 • 2)
- TianchengGu/UniME-V2-Qwen2VL-7B (Image-Text-to-Text • 8B • Updated • 773 • 2)
- TianchengGu/UniME-V2-Qwen2VL-2B (Image-Text-to-Text • 2B • Updated • 1.01k • 2)
Kaicheng Yang (Kaichengalex)
AI & ML interests: Multimodal Representation Learning / Vision-Language Pretraining / DeepResearch
Recent Activity
- Upvoted a paper (1 day ago): Towards Scalable Pre-training of Visual Tokenizers for Generation
- Updated a collection (5 days ago): SFT Dataset
- Liked a dataset (5 days ago): OneThink/OneThinker-train-data