Vision-language models (VLMs) play a vital role in today's intelligent systems by enabling a detailed understanding of visual content. The complexity of multimodal intelligence tasks has grown, ranging from scientific problem-solving to the development of autonomous agents. Current demands on VLMs have far exceeded simple visual content perception, with growing attention on advanced reasoning. While recent works show that long-form reasoning and scalable RL significantly enhance LLMs' problem-solving abilities, existing efforts primarily focus on specific domains to improve VLM reasoning. The open-source community currently lacks a multimodal reasoning model that outperforms traditional non-thinking models of comparable parameter scale across diverse tasks.
Researchers from Zhipu AI and Tsinghua University have proposed GLM-4.1V-Thinking, a VLM designed to advance general-purpose multimodal understanding and reasoning. The approach introduces Reinforcement Learning with Curriculum Sampling (RLCS) to unlock the model's full potential, enabling improvements across STEM problem solving, video understanding, content recognition, coding, grounding, GUI-based agents, and long document understanding. The researchers open-sourced GLM-4.1V-9B-Thinking, which sets a new benchmark among similarly sized models. It also delivers competitive, and in some cases superior, performance compared to proprietary models like GPT-4o on challenging tasks such as long document understanding and STEM reasoning.

GLM-4.1V-Thinking contains three core components: a vision encoder, an MLP adapter, and an LLM decoder. It uses AIMv2-Huge as the vision encoder and GLM as the LLM, replacing the original 2D convolutions with 3D convolutions for temporal downsampling. The model integrates 2D-RoPE to support arbitrary image resolutions and aspect ratios, and to process extreme aspect ratios over 200:1 and high resolutions beyond 4K. The researchers extend RoPE to 3D-RoPE in the LLM to improve spatial understanding in multimodal contexts. For temporal modeling in videos, time index tokens are added after each frame token, with timestamps encoded as strings to help the model understand real-world temporal gaps between frames.
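The swap from 2D to 3D convolutions in the vision encoder's patch-embedding stage is what lets the model downsample video along time as well as space. The snippet below is a minimal PyTorch sketch of that idea only; the layer name, 14-pixel patch size, embedding width, and temporal stride of 2 are illustrative assumptions rather than the official implementation.

```python
# Minimal sketch (not the official code): a patch-embedding layer whose 3D
# convolution downsamples video along the temporal axis as well as spatially.
import torch
import torch.nn as nn

class VideoPatchEmbed3D(nn.Module):
    def __init__(self, in_channels=3, embed_dim=768, patch_size=14, temporal_stride=2):
        super().__init__()
        # A kernel/stride of 2 on the time axis halves the number of frame tokens,
        # while the spatial kernel/stride cuts each frame into patch-size tiles.
        self.proj = nn.Conv3d(
            in_channels, embed_dim,
            kernel_size=(temporal_stride, patch_size, patch_size),
            stride=(temporal_stride, patch_size, patch_size),
        )

    def forward(self, video):  # video: (batch, channels, frames, height, width)
        x = self.proj(video)   # -> (batch, embed_dim, frames/2, H/14, W/14)
        return x.flatten(2).transpose(1, 2)  # -> (batch, tokens, embed_dim)

# Example: 8 frames of 224x224 video -> 4 * 16 * 16 = 1024 visual tokens
tokens = VideoPatchEmbed3D()(torch.randn(1, 3, 8, 224, 224))
print(tokens.shape)  # torch.Size([1, 1024, 768])
```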
During pre-training, the researchers use a variety of datasets, combining large academic corpora with interleaved image-text data rich in knowledge. By including pure text data, the model's core language capabilities are preserved, resulting in better pass@k performance than other state-of-the-art pre-trained base models of comparable size. The supervised fine-tuning stage transforms the base VLM into one capable of long CoT inference, using a curated long-CoT corpus spanning verifiable tasks, such as STEM problems, and non-verifiable tasks, such as instruction following. Finally, the RL phase employs a mix of RLVR and RLHF to conduct large-scale training across all multimodal domains, including STEM problem solving, grounding, optical character recognition, GUI agents, and many more.
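The article does not spell out the RLCS recipe, but the core intuition of curriculum sampling, concentrating rollouts on prompts the current policy solves only some of the time, can be sketched in a few lines. The Python below is a hypothetical illustration under that assumption; the prompt names, pass-rate estimates, thresholds, and weighting scheme are all made up for the example.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): bias RL rollout
# sampling toward mid-difficulty prompts, since prompts the policy always solves
# or never solves contribute little learning signal.
import random

def curriculum_sample(prompts, pass_rates, batch_size=8, low=0.2, high=0.8):
    """prompts: task identifiers; pass_rates: estimated solve rate per prompt
    from recent rollouts (0.0 = never solved, 1.0 = always solved)."""
    def weight(rate):
        # Full weight for mid-difficulty prompts, a small floor for the rest
        # so nothing is excluded entirely.
        return 1.0 if low <= rate <= high else 0.05
    weights = [weight(pass_rates[p]) for p in prompts]
    return random.choices(prompts, weights=weights, k=batch_size)

# Example usage with hypothetical pass-rate estimates
prompts = ["geometry_17", "chart_qa_03", "gui_click_42", "ocr_line_09"]
pass_rates = {"geometry_17": 0.05, "chart_qa_03": 0.55,
              "gui_click_42": 0.95, "ocr_line_09": 0.40}
print(curriculum_sample(prompts, pass_rates, batch_size=4))
```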
GLM-4.1V-9B-Thinking outperforms all competing open-source models under 10B parameters on General VQA tasks covering both single-image and multi-image settings. It achieves the highest performance on challenging STEM benchmarks, including MMMU_Val, MMMU_Pro, VideoMMMU, and AI2D. In the OCR and Chart domains, the model sets new state-of-the-art scores on ChartQAPro and ChartMuseum. For Long Document Understanding, GLM-4.1V-9B-Thinking outperforms all other models on MMLongBench, while establishing new state-of-the-art results on GUI Agent and multimodal Coding tasks. Finally, the model shows strong Video Understanding performance, outperforming competing models on the VideoMME, MMVU, and MotionBench benchmarks.
In conclusion, the researchers introduced GLM-4.1V-Thinking, which represents a step toward general-purpose multimodal reasoning. Its 9B-parameter model outperforms larger models, including ones exceeding 70B parameters. However, several limitations remain, such as inconsistent improvements in reasoning quality through RL, instability during training, and difficulties with complex cases. Future developments should focus on improving supervision and evaluation of model reasoning, with reward models evaluating intermediate reasoning steps while detecting hallucinations and logical inconsistencies. Moreover, exploring strategies to prevent reward hacking in subjective evaluation tasks is crucial for achieving general-purpose intelligence.
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

Sajjad Ansari is a final-year undergraduate at IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.

