Dataset schema (from the viewer header):

| column | dtype | min | max |
|---|---|---|---|
| id | stringlengths | 10 | 10 |
| number | int64 | 1 | 25.6k |
| forum | stringlengths | 10 | 10 |
| title | stringlengths | 5 | 214 |
| abstract | stringlengths | 26 | 4.31k |
| content_TLDR | stringlengths | 1 | 250 |
| content_keywords | stringlengths | 6 | 1.02k |
| content_pdf | stringlengths | 49 | 49 |
| content_primary_area | stringclasses | 21 values | |
| content_supplementary_material | stringlengths | 56 | 56 |
| signatures | stringlengths | 47 | 51 |
m8sPQEd71W
104
m8sPQEd71W
Unified Multimodal Model as Auto-Encoder
The pursuit of unified multimodal models (UMMs) has long been hindered by a fundamental schism between multimodal understanding and generation. Current approaches typically disentangle the two and treat them as separate endeavors with disjoint objectives, missing the mutual benefits. We argue that true unification requires more than just merging two tasks. It requires a unified, foundational objective that intrinsically links them. In this paper, we introduce an insightful paradigm through the **Auto-Encoder lens**, *i.e.*, regarding understanding as the encoder (I2T) that compresses images into text, and generation as the decoder (T2I) that reconstructs images from that text. We argue that: *if the encoder truly "understands" the image, its description should capture all essential structure, and if the decoder truly "understands" the text, it should recover that structure faithfully.* Hence, high-fidelity reconstruction serves as a powerful perspective for genuine multimodal unification, evidencing near-lossless, bidirectional information flow between the two processes. To implement this, we propose **UAE**, where we begin by pre-training the decoder on our proposed set of 700k long-context image-caption pairs to direct it to "understand" the fine-grained and complex semantics from the text, as longer intermediate text, in our Auto-Encoder framework, can preserve more information from the input image for reconstruction. We then propose **Unified-GRPO** via reinforcement learning (RL) to unify the two, which covers two complementary stages: (1) *Generation for Understanding*, where the encoder is trained to generate informative captions that maximize the decoder's reconstruction quality, enhancing its visual perception; (2) *Understanding for Generation*, where the decoder is refined to reconstruct from these captions, forcing it to leverage every detail and improving its long-context instruction following and generation fidelity. Our empirical results suggest that understanding can substantially enhance generation (verified on GenEval), while generation, in turn, notably strengthens fine-grained visual perception such as small-object and color recognition (verified on MMT-Bench). This bidirectional improvement reveals a deep synergy: under the unified reconstruction objective, generation and understanding can mutually benefit each other, moving closer to truly unified multimodal intelligence.
Exploring synergy between visual generation and perception by formulating the unified multimodal model as autoencoder.
['Multimodal', 'Unified Multimodal Model', 'Generative Model']
/pdf/61fc10b43f944f5731b7129602b602d5f0ec06d5.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission104/Authors']
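A minimal sketch of the reward signal implied by UAE's auto-encoder framing above: in the *Generation for Understanding* stage, the encoder's caption is scored by how well the decoder can rebuild the image from it. `encoder.caption`, `decoder.generate`, and `similarity` are hypothetical placeholders, not the paper's actual interfaces.

```python
def reconstruction_reward(image, encoder, decoder, similarity):
    """Reward for one GRPO rollout under the auto-encoder view (illustrative)."""
    caption = encoder.caption(image)            # I2T: compress the image into text
    reconstruction = decoder.generate(caption)  # T2I: rebuild the image from text
    # A caption that preserves more of the image's essential structure yields a
    # more faithful reconstruction, and therefore a higher reward.
    return similarity(image, reconstruction)
```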
74M7InKlVs
103
74M7InKlVs
C$^3$-Bench: Evaluating and Achieving Controllable Code Completion in Code LLM
Code completion has become a central task, gaining significant attention with the rise of large language model (LLM)-based tools in software engineering. Although recent advances have greatly improved LLMs' code completion abilities, evaluation methods have not advanced equally. Most current benchmarks focus solely on functional correctness of code completions based on given context, overlooking models' ability to follow user instructions during completion\textemdash a common scenario in LLM-assisted programming. To address this limitation, we present the first instruction-guided code completion benchmark, \textbf{\underline{C}}ontrollable \textbf{\underline{C}}ode \textbf{\underline{C}}ompletion Benchmark (C$^3$-Bench), comprising 2,195 carefully designed completion tasks. Through comprehensive evaluation of over 40 mainstream LLMs across C$^3$-Bench and conventional benchmarks, we reveal substantial gaps in instruction-following capabilities between open-source and advanced proprietary models during code completion tasks. Moreover, we develop a straightforward data synthesis pipeline that leverages Qwen2.5-Coder to generate high-quality instruction-completion pairs for supervised fine-tuning (SFT). The resulting model, Qwen2.5-Coder-C$^3$, achieves state-of-the-art performance on C$^3$-Bench. Our findings provide valuable insights for enhancing LLMs' code completion and instruction-following capabilities, establishing new directions for future research in code LLMs. To facilitate reproducibility and foster further research in code LLMs, we open-source all code, datasets, and models.
We created C³-Bench, a new benchmark for code LLMs that tests both code correctness and instruction following, revealing gaps in current models, and developed a better-performing solution through automated training data generation.
['Large Language Models', 'Code Language Models', 'Code Completion', 'Instruction Following']
/pdf/2365463e7c923ffa6529d3000c4c06c547b44ea5.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission103/Authors']
eAge74DIgk
101
eAge74DIgk
LitExplorer: Training-Free Diffusion Guidance with Adaptive Exploration-Filtering Framework
Diffusion models possess strong general generative capabilities, yet they remain insufficient when aligned with specific target objectives. Fine-tuning methods can enhance alignment but incur high training costs and face the risk of reward hacking. Consequently, training-free guidance mechanisms have emerged, which leverage external signals during inference to steer the generative distribution toward high-reward regions. However, existing training-free approaches encounter two key challenges: first, the guidance process tends to over-bias generation toward the target distribution, at the expense of excessively narrowing the pretrained model’s generative space; second, the guidance signals are mechanically imposed throughout inference, lacking mechanisms to identify and filter out ineffective or redundant signals. To mitigate these limitations, we propose \ourmethod{}. Regarding the first issue, we introduce exploratory guidance signals through \pos{} to prevent generation paths from prematurely converging to a single mode, while dynamically balancing the trade-off between exploration and stable generation based on denoising progress. This alleviates the excessive contraction of the generative space without deviating from the target distribution or the pretrained distribution. Regarding the second issue, to enable precise and efficient guidance, we incorporate an adjudication mechanism that evaluates the validity of guidance signals and adaptively eliminates ineffective or redundant ones. To demonstrate the generality of \ourmethod{}, we conduct extensive evaluations in both single-objective and multi-objective scenarios. Results show that \ourmethod{} achieves significant improvements over existing training-free baselines in terms of generative diversity, target alignment, and inference efficiency.
null
['Diffusion Model', 'Training-free']
/pdf/1c6b6c00941091ec239340ef422fc7d9f01f4462.pdf
applications to computer vision, audio, language, and other modalities
/attachment/2b27b42c94f12c88f9c49a8e2c11c1adbec795e8.zip
['ICLR.cc/2026/Conference/Submission101/Authors']
FGkknrhv09
100
FGkknrhv09
Curing "Miracle Steps'' in LLM Math Reasoning with Rubric Rewards
Large language models for mathematical reasoning are typically trained with outcome-based rewards, which credit only the final answer. In our experiments, we observe that this paradigm is highly susceptible to reward hacking, leading to a substantial overestimation of a model's reasoning ability. This is evidenced by a high incidence of "false positives"—solutions that reach the correct final answer through an unsound reasoning process. Through a systematic analysis with human verification, we establish a taxonomy of these failure modes, identifying patterns like Miracle Steps—abrupt jumps to a correct output without a valid preceding derivation. Probing experiments suggest a strong association between these Miracle Steps and memorization, where the model appears to recall the answer directly rather than deriving it. To mitigate this systemic issue, we introduce the Rubric Reward Model (RRM), a process-oriented reward function that evaluates the entire reasoning trajectory against problem-specific rubrics. The generative RRM provides fine-grained, calibrated rewards (0–1) that explicitly penalize logical flaws and encourage rigorous deduction. When integrated into a reinforcement learning pipeline, RRM-based training consistently outperforms outcome-only supervision across four math benchmarks. Notably, it boosts Verified Pass@1024 on AIME2024 from 26.7% to 62.6% and reduces the incidence of Miracle Steps by 71%. Our work demonstrates that rewarding the solution process is crucial for building models that are not only more accurate but also more reliable.
This paper diagnoses how LLMs achieve correct math answers with flawed logic ("false positives") and introduces a "Rubric Reward Model" that rewards the entire problem-solving process to build more trustworthy and accurate reasoners.
['faithful chain-of-thought', 'math reasoning', 'false positive', 'rubric']
/pdf/365c5050a2ce26e04b0f1c843f16e9a72f9c704f.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission100/Authors']
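A minimal sketch of what a rubric-style, process-oriented reward could look like, assuming each problem ships with weighted criteria and a judge (e.g., a grading LLM) that scores each criterion in [0, 1]; the paper's generative RRM may differ in detail.

```python
from typing import Callable, List, Tuple

def rubric_reward(solution: str,
                  rubric: List[Tuple[str, float]],        # (criterion, weight)
                  judge: Callable[[str, str], float]) -> float:
    """Weighted average of per-criterion scores, calibrated to [0, 1]."""
    total_weight = sum(w for _, w in rubric)
    score = sum(w * judge(solution, c) for c, w in rubric)
    return score / total_weight
```

Unlike an outcome-only reward, a logically flawed "Miracle Step" fails its derivation criteria and is penalized even when the final answer matches.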
I88toT6Leg
99
I88toT6Leg
The PIMMUR Principles: Ensuring Validity in Collective Behavior of LLM Societies
Large Language Models (LLMs) are increasingly used for social simulation, where populations of agents are expected to reproduce human-like collective behavior. However, we find that many recent studies adopt experimental designs that systematically undermine the validity of their claims. From a survey of over 40 papers, we identify six recurring methodological flaws: agents are often homogeneous (Profile), interactions are absent or artificially imposed (Interaction), memory is discarded (Memory), prompts tightly control outcomes (Minimal-Control), agents can infer the experimental hypothesis (Unawareness), and validation relies on simplified theoretical models rather than real-world data (Realism). For instance, GPT-4o and Qwen-3 correctly infer the underlying social experiment in 53.1% of cases when given instructions from prior work—violating the Unawareness principle. We formalize these six requirements as the PIMMUR principles and argue they are necessary conditions for credible LLM-based social simulation. To demonstrate their impact, we re-run five representative studies using a framework that enforces PIMMUR and find that the reported social phenomena frequently fail to emerge under more rigorous conditions. Our work establishes methodological standards for LLM-based multi-agent research and provides a foundation for more reliable and reproducible claims about "AI societies."
null
['Large Language Model', 'Multi-Agent System', 'Social Simulation', 'Social Science']
/pdf/69878b43abed6ff5ad1c4ca4539e64eb75e06895.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/b67110f6633e800df1fd66d725185552fa32de05.zip
['ICLR.cc/2026/Conference/Submission99/Authors']
5HHkCSVHaU
98
5HHkCSVHaU
Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving
Existing supervised fine-tuning (SFT) approaches to enhance the mathematical reasoning of large language models (LLMs) rely either on Chain-of-Thought (CoT) for generalizability or Tool-Integrated Reasoning (TIR) for precise computation. While efforts have been made to combine these methods, they primarily rely on post-selection or predefined strategies, leaving an open question: Could we endow LLMs with the ability to adaptively determine whether to use CoT or TIR based on the math problems at hand before decoding? In this work, we propose **TATA** (**T**eaching LLMs **A**ccording to **T**heir **A**ptitude), an adaptive framework that enables LLMs to personalize their reasoning strategy for different problems spontaneously, aligning it with their intrinsic aptitude. TATA incorporates base-LLM-aware data selection during SFT to tailor training data to the model’s unique abilities, which equips LLMs to autonomously determine and apply the effective reasoning strategy at test time. Empirical results demonstrate that TATA effectively combines the complementary strengths of CoT and TIR, achieving superior or comparable performance with improved inference efficiency compared to existing methods. Further analysis highlights the crucial role of aptitude-aware data selection in enabling LLMs to make informed and adaptive reasoning decisions, aligning reasoning strategies with model capabilities.
we propose TATA, an adaptive framework that enables LLMs to personalize their reasoning strategy for different problems spontaneously, aligning it with their intrinsic aptitude.
['Large Language Models', 'math QA', 'chain-of-thought', 'tool-integrated reasoning', 'fine-tuning']
/pdf/d166d32d51c34eb2be6da6ef8e733c286e3e78a7.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission98/Authors']
GymjF88oGQ
97
GymjF88oGQ
The Pensieve Paradigm: Stateful Language Models with Learned Memory Management
In the world of Harry Potter, when Dumbledore's mind is overburdened, he extracts memories into a Pensieve to be revisited later. In the world of AI, while we possess the Pensieve—mature databases and retrieval systems, our models inexplicably lack the "wand" to operate it. They remain like a Dumbledore without agency, passively accepting a manually engineered context as their entire memory. This work finally places the wand in the model's hand. We introduce StateLM, a new class of foundation models endowed with an internal reasoning loop to manipulate their own state. We equip our model with a suite of tools, such as dynamic indexing, context pruning, and note-taking, and train it to actively manage this loop. By learning to dynamically construct its own context, our model breaks free from the architectural prison of a fixed window. The results are striking: our state-management approach decouples performance from context window size, delivering strong accuracy and sustainability under extremely long contexts with linear inference cost. We demonstrate this by showing StateLM reliably retrieves a "needle" from a 1-million-token haystack, a task far beyond the reach of conventional models. On practical document QA tasks from NovelQA and $\infty$Bench, StateLM outperforms strong instruct baselines while using only 1/4 of their active context. An ablation further shows that our curated training pipeline is more effective for learning memory management than agent-like prompting. Together, these results mark a shift from passive predictors to state-aware systems where reasoning becomes a stateful and manageable process.
null
['LLM', 'memory management']
/pdf/d411b45856f6dfaf3ae0c24c5b9aa995014326ba.pdf
foundation or frontier models, including LLMs
/attachment/bceafcddb1daa855bd0be813fc8c88bb16a1e0ff.zip
['ICLR.cc/2026/Conference/Submission97/Authors']
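A toy, self-contained rendering of the idea: the model manages its own context through retrieve / prune / note tools backed by an external store, rather than holding everything in a fixed window. The tool names follow the abstract; the decision policy (which StateLM learns) is replaced here by hand-written calls.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    context: list = field(default_factory=list)   # active window (token budget)
    notes: list = field(default_factory=list)     # distilled, pruning-proof memory
    store: dict = field(default_factory=dict)     # external memory (the "Pensieve")

def retrieve(state: State, key: str):             # dynamic indexing
    if key in state.store:
        state.context.append(state.store[key])

def prune(state: State, budget: int):             # context pruning
    state.context = state.context[-budget:]

def note(state: State, text: str):                # note-taking
    state.notes.append(text)

state = State(store={"ch3": "the needle is 42"})
retrieve(state, "ch3")
note(state, "answer likely in ch3")
prune(state, budget=1)
print(state.context, state.notes)
```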
NSjAYTNB11
95
NSjAYTNB11
PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization
Recent Large Language Models (LLMs) have demonstrated remarkable proficiency in code generation. However, their ability to create complex visualizations for scaled and structured data remains largely unevaluated and underdeveloped. To address this gap, we introduce \textbf{PlotCraft}, a new benchmark featuring 1k challenging visualization tasks that cover a wide range of topics, such as finance, scientific research, and sociology. The benchmark is structured around seven high-level visualization tasks and encompasses 48 distinct chart types. Crucially, it is the first to systematically evaluate both single-turn generation and multi-turn refinement across a diverse spectrum of task complexities. Our comprehensive evaluation of 23 leading LLMs on PlotCraft reveals obvious performance deficiencies in handling sophisticated visualization tasks. To bridge this performance gap, we develop \textbf{SynthVis-30K}, a large-scale, high-quality dataset of complex visualization code synthesized via a collaborative agent framework. Building upon this dataset, we develop \textbf{PlotCraftor}, a novel code generation model that achieves strong capabilities in complex data visualization with a remarkably small size. Across VisEval, PandasPlotBench, and our proposed PlotCraft, PlotCraftor shows performance comparable to that of leading proprietary approaches. Notably, on hard tasks, our model achieves over a 50\% performance improvement. We will release the benchmark, dataset, and code at https://anonymous.4open.science/r/PlotCraft-E320.
LLMs are bad at complex charts. We built a small, specialized model, PlotCraftor, that fixes this and is now state-of-the-art.
['Large Language Model', 'Code Generation', 'Data Visualization']
/pdf/ff4d59f420150b9719d3866dffd007b2331fcf54.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission95/Authors']
9aB3BWye1j
92
9aB3BWye1j
PairedContrast: A Multimodal Benchmark for Medical Image Translation
Contrast medium plays a pivotal role in radiological imaging, as it amplifies lesion conspicuity and improves detection in the diagnosis of tumor-related diseases. However, depending on the patient's health condition or the medical resources available, the use of contrast medium is not always feasible. Recent work has therefore explored AI-based image translation to synthesize contrast-enhanced images directly from non-contrast scans, aiming to reduce side effects and streamline clinical workflows. Progress in this direction has been constrained by data limitations: (1) existing public datasets focus almost exclusively on brain-only paired MR modalities; (2) other collections include partially paired data but suffer from missing modalities/timestamps and imperfect spatial alignment; (3) explicit labeling of CT vs. CTC or DCE phases is often absent; (4) substantial resources remain private. To bridge this gap, we introduce the first public, fully paired, pan-cancer medical imaging dataset spanning 11 human organs. The MR data include complete dynamic contrast-enhanced (DCE) sequences covering all three phases (DCE1–DCE3), while the CT data provide paired non-contrast and contrast-enhanced acquisitions (CTC). The dataset is curated for anatomical correspondence, enabling rigorous evaluation of 1 → 1, N → 1, and N → N translation settings (e.g., predicting DCE phases from non-contrast inputs). Built upon this resource, we establish a comprehensive benchmark. We report results from representative baselines of contemporary image-to-image translation. We release the dataset and benchmark to catalyze research on safe, effective contrast synthesis, with direct relevance to multi-organ oncology imaging workflows.
null
['benchmark', 'pan-cancer', 'paired datasets', 'medical image translation', 'contrast media']
/pdf/dea3b2acd9ac51578b6ec8fb77b1aa575911de9e.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission92/Authors']
8pi1rP71qv
91
8pi1rP71qv
FlyPrompt: Brain-Inspired Random-Expanded Routing with Temporal-Ensemble Experts for General Continual Learning
General continual learning (GCL) challenges intelligent systems to learn from single-pass, non-stationary data streams without clear task boundaries. While recent advances in continual parameter-efficient tuning (PET) of pretrained models show promise, they typically rely on multiple training epochs and explicit task cues, limiting their effectiveness in GCL scenarios. Moreover, existing methods often lack targeted design and fail to address two fundamental challenges in continual PET: how to allocate expert parameters to evolving data distributions, and how to improve their representational capacity under limited supervision. Inspired by the fruit fly's hierarchical memory system characterized by sparse expansion and modular ensembles, we propose FlyPrompt, a brain-inspired framework that decomposes GCL into two subproblems: expert routing and expert competence improvement. FlyPrompt introduces a randomly expanded analytic router for instance-level expert activation and a temporal ensemble of output heads to dynamically adapt decision boundaries over time. Extensive theoretical and empirical evaluations demonstrate FlyPrompt's superior performance, achieving up to 11.23%, 12.43%, and 7.62% gains over state-of-the-art baselines on CIFAR-100, ImageNet-R, and CUB-200, respectively.
We propose a brain-inspired method, FlyPrompt, that uses random-expanded routing and temporal-ensemble experts to effectively tackle the General Continual Learning problem, achieving significant gains on major benchmarks.
['Continual Learning', 'Life-long Learning', 'Brain-inspired AI', 'Catastrophic Forgetting', 'Prompt Tuning']
/pdf/9bde35abdb2f177c878cde658e6f42cb93590032.pdf
transfer learning, meta learning, and lifelong learning
/attachment/a502549ab359383dbaa373fb0cb2e6c40e6ff16f.zip
['ICLR.cc/2026/Conference/Submission91/Authors']
XHzrBDzKaX
88
XHzrBDzKaX
Castle-in-the-Air: Evaluating MLLM Visual Abilities on Human Cognitive Benchmarks
Despite significant progress on popular multimodal benchmarks, state-of-the-art Multimodal Large Language Models (MLLMs) continue to struggle with basic visual reasoning tasks that are trivially solved by humans, such as recognizing abstract patterns or identifying spatial relationships. Such deficiencies undermine their efficacy and robustness, rendering high-level downstream applications (e.g., embodied AI) infeasible. To systematically investigate this gap, we introduce VisFactor, a benchmark that digitizes 20 vision-centric subtests from FRCT, a well-established cognitive psychology assessment, including four domains of human visual cognition: (1) Visualization and Spatial Processing, (2) Perceptual and Closure, (3) Memory, and (4) Reasoning. Furthermore, we leverage parametric generation to automatically construct unlimited test cases with controllable difficulty for applicable subtests. Using VisFactor, we evaluate 20 frontier MLLMs, including both proprietary (GPT, Gemini, etc.) and open-source models (LLaMA-3.2, Qwen2.5-VL, etc.). The best-performing model achieves a score of only 25.19%, with consistent failures on tasks such as mental rotation, spatial relation inference, and figure–ground discrimination—regardless of model size or prompting strategy. These findings suggest that performance improvements on existing general benchmarks might be castles in the air instead of mastery of human-like visual cognition, challenging the assumption that large-scale pretraining naturally induces gestalt-like perceptual capabilities. The dataset and evaluation toolkit will be made publicly available upon publication.
null
['Multimodal Large Language Model', 'Vision Language Model', 'Cognition', 'Evaluation']
/pdf/1c1f48dc0ef033ef1f5986cdd84c20217453d3fc.pdf
applications to computer vision, audio, language, and other modalities
/attachment/70605ccf308eee0a1323bf598602ed76ea43a554.zip
['ICLR.cc/2026/Conference/Submission88/Authors']
EXFKk4Y3yc
87
EXFKk4Y3yc
Spilled Energy in Large Language Models
We reinterpret the final softmax classifier over the vocabulary of Large Language Models (LLMs) as an Energy-based Model (EBM). This allows us to decompose the chain of probabilities used in sequence-to-sequence modeling as multiple EBMs that interact together at inference time. Our decomposition offers a principled approach to measuring where the "energy spills" in LLM decoding, empirically showing that spilled energy correlates well with factual errors, inaccuracies, biases, and failures. Similar to Orgad et al. (2025), we localize the exact token associated with the answer; yet, unlike them, who must train a classifier and ablate which activations to feed to it, we propose a hallucination detection method that is *completely training-free and naturally generalizes across tasks and LLMs* by using the output logits across subsequent generation steps. We propose two such detectors: the first measures a quantity we call **spilled energy**, the difference between energy values across two generation steps that mathematically should be equal; the second, **marginal energy**, can be measured at a single step. Unlike prior work, our method is training-free, mathematically principled, and demonstrates strong cross-dataset generalization: we scale our analysis to state-of-the-art LLMs, including LLaMa-3, Mistral, and Qwen-3, evaluating on nine benchmarks and achieving competitive performance with robust results across datasets and different LLMs.
We recast the LLM softmax as an Energy-Based Model, introducing training-free energy measures to detect hallucinations. Our method pinpoints errors, generalizes across tasks, and shows robust results on nine benchmarks.
['LLM', 'hallucination detection', 'EBM']
/pdf/c7f4a295dde283e8da45345b35965fcf90a31fbf.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission87/Authors']
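The energy quantities have a compact form under the common EBM reading of a softmax head, where the free energy at one decoding step is $E = -\log\sum_k e^{z_k}$ over the logits $z$. A toy rendering (the exact pairing of steps follows the paper's decomposition, which this sketch does not reproduce):

```python
import torch

def energy(logits: torch.Tensor) -> torch.Tensor:
    # Free energy of the softmax classifier at one step: -log sum_k exp(z_k).
    return -torch.logsumexp(logits, dim=-1)

def spilled_energy(logits_t: torch.Tensor, logits_t1: torch.Tensor) -> torch.Tensor:
    # Gap between energies at two generation steps that should mathematically
    # agree; per the abstract, a large gap flags a likely hallucination.
    return (energy(logits_t) - energy(logits_t1)).abs()
```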
6XvqXQq0ae
86
6XvqXQq0ae
NextLocMoE: Enhancing Next Location Prediction via Location-Semantics Mixture-of-Experts and Personalized Mixture-of-Experts
Next location prediction is a key task in human mobility modeling. Existing methods face two challenges: (1) they fail to capture the multi-faceted semantics of real-world locations; and (2) they struggle to model diverse behavioral patterns across user groups. To address these issues, we propose NextLocMoE, a large language model (LLM)-based framework for next location prediction, which integrates a dual-level Mixture-of-Experts (MoE) architecture. It comprises two complementary modules: a Location Semantics MoE at the embedding level to model multi-functional location semantics, and a Personalized MoE within the LLM's Transformer layers to adaptively capture user behavior patterns. To enhance routing stability and reliability, we introduce a historical-aware router that integrates long-term historical trajectories into expert selection. Experiments on multiple real-world datasets demonstrate that NextLocMoE significantly outperforms existing methods in terms of accuracy, transferability, and interpretability. Code is available at: https://anonymous.4open.science/r/NextLocMOE-BAC8.
We propose NextLocMoE, a Mixture-of-Experts LLM framework for next-location prediction, which jointly models location semantics and behavioral preferences via dual expert modules and history-aware routing.
['next location prediction', 'Mixture-of-Experts', 'Large Language Model', 'Location Function MoE', 'Persona MoE']
/pdf/f5b63891a6c4d26f62a5d31b7d29da7969c92e8c.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission86/Authors']
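A hedged sketch of a history-aware router: expert weights condition on both the current hidden state and a pooled embedding of the user's long-term trajectory. Dimensions and the concatenation fusion are illustrative assumptions, not NextLocMoE's actual design.

```python
import torch
import torch.nn as nn

class HistoricalRouter(nn.Module):
    """Routes each token to experts using the current state plus trajectory history."""
    def __init__(self, d_model: int, d_hist: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model + d_hist, n_experts)

    def forward(self, h: torch.Tensor, hist: torch.Tensor) -> torch.Tensor:
        # h: [batch, d_model] current hidden state; hist: [batch, d_hist] pooled
        # long-term trajectory embedding. Returns expert mixture weights.
        return torch.softmax(self.gate(torch.cat([h, hist], dim=-1)), dim=-1)
```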
i4BiQK5Ndw
83
i4BiQK5Ndw
TopoMHC: Sequence–Topology Fusion for MHC Binding
Accurate prediction of peptide immunogenicity, particularly the binding affinity to major histocompatibility complex (MHC) molecules, is critical for vaccine design and immunotherapy. Existing approaches are predominantly sequence-based and often overlook structural variability and topological organization, which restricts predictive reliability. In this work, we introduce a multi-modal framework that integrates sequence embeddings from a pre-trained protein language model (e.g., ESM-C) with topology-informed descriptors derived from peptide conformations. We generate peptide conformers using molecular dynamics simulations and RDKit-based methods, and from these conformations we compute persistent homology invariants, Betti numbers, geometric statistics, and residue connectivity measures. These topological features are then fused with sequence embeddings through a cross-attention mechanism, allowing the model to capture both local sequence patterns and global conformational organization. Extensive experiments demonstrate consistent improvements over conventional structure-based and sequence-only baselines, establishing state-of-the-art performance in peptide immunogenicity prediction.
null
['immunogenicity prediction', 'major histocompatibility complex', 'peptide representation learning', 'statistical topology', 'persistent homology', 'protein language models', 'cross-modal learning', 'vaccine design']
/pdf/73d717f9219d719e35f3d8e629d5634b1dee6df2.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission83/Authors']
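A self-contained toy for one of the topological descriptors named above: Betti-0 (the connected-component count) of a conformer's atom cloud at a distance threshold, via union-find. Real pipelines compute full persistent homology with dedicated libraries; this only illustrates the idea.

```python
import numpy as np

def betti0(points: np.ndarray, eps: float) -> int:
    """Number of connected components of the eps-neighborhood graph."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)         # merge components
    return len({find(i) for i in range(n)})

coords = np.random.rand(30, 3)                    # stand-in for atom coordinates
print(betti0(coords, eps=0.3))
```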
Tp70ig4iKN
80
Tp70ig4iKN
Seeing Before Reasoning: A Unified Framework for Generalizable and Explainable Fake Image Detection
Detecting AI-generated images with multimodal large language models (MLLMs) has gained increasing attention, due to their rich world knowledge, common-sense reasoning, and potential for explainability. However, naively applying those MLLMs for detection often leads to suboptimal performance. We argue that the root of this failure lies in a fundamental mismatch: *MLLMs are asked to reason about fakes before they can truly see them.* First, **they do not really see**: existing MLLMs' vision encoders are primarily optimized for semantic-oriented recognition rather than the perception of low-level signals, leaving them insensitive to subtle forgery traces. Without access to reliable perceptual evidence, the model grounds its judgment on incomplete and limited visual observations. Second, existing finetuning data for detection typically uses narrow, instruction-style formats, which diverge sharply from the diverse, heterogeneous distributions seen in pretraining. In the absence of meaningful visual cues, the model therefore exploits these linguistic shortcuts, resulting in catastrophic forgetting of pretrained knowledge (even basic dialogue capabilities). In response, we advocate for a new paradigm: *seeing before reasoning*. We propose that MLLMs should first be trained to perceive artifacts—strengthening their artifact-aware visual perception—so that subsequent reasoning is grounded in actual observations. We therefore propose **Forensic-Chat**, a generalizable, explainable, and still-conversational (for multi-round dialogue) assistant for fake image detection. Specifically, we first refine the vision encoder only via self-reconstruction while freezing the LLM, sensitizing it to artifacts without sacrificing pretrained knowledge (Stage 1). Then, we construct a multi-round dialogue finetuning dataset for detection, which is designed to progressively guide the model from artifact perception to common-sense reflection, enabling dialectical reasoning about *why an image is fake* and *what a real version should look like* (Stage 2). We also propose **ExplainFake-Bench**, a benchmark tailored to evaluating MLLM explainability for image forensics from five key aspects. Extensive experiments show superior generalization and genuinely reliable explainability.
We propose a unified MLLM-based framework that simultaneously perceives low-level artifacts and reasons dialectically about high-level plausibility, without reliance on external detectors.
['AI-Generated Image Detection', 'MLLM', 'Media Forensics']
/pdf/0f0450b32e796e0cde2b002e3c20ad8a749d6c10.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission80/Authors']
NlMXI17iou
77
NlMXI17iou
Reordered SparseGPT: Optimizing the Pruning Order in Second-Order LLM Pruning
Pruning is widely recognized as an effective method for reducing the parameters of large language models (LLMs), potentially leading to more efficient inference. One classic and prominent path of one-shot LLM pruning is to leverage the second-order gradients (i.e., the Hessian), represented by pioneering works like SparseGPT (Frantar & Alistarh, 2023). However, the predefined left-to-right pruning order in SparseGPT leads to suboptimal performance when the weights exhibit columnar patterns. This paper studies the effect of pruning order under the SparseGPT framework. The analyses lead us to propose ROSE, a reordered SparseGPT method that prioritizes weight columns with larger potential pruning errors to be processed first. Specifically, following the block-wise iterative pruning scheme of SparseGPT, we first perform a pre-pruning step to identify weights that are highly likely to be pruned, based on which we compute both column-wise and block-wise pruning loss. Columns within each block are then reordered in descending order of column loss, while blocks are reordered in descending order of block loss. We further analyze different layer types and selectively apply reordering to specific layers. Substantial empirical results on prevalent LLMs (LLaMA2-7B/13B/70B, LLaMA3-8B, Mistral-7B) demonstrate that ROSE surpasses the original SparseGPT and other counterpart pruning methods.
This paper presents a new SoTA Hessian-based one-shot LLM pruning algorithm, which can be applied to unstructured and semi-structured sparsities.
['LLM', 'Network Pruning', 'Hessian-based Pruning']
/pdf/af7361eb2c49fd861f47a41b43506dee223d3eb4.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission77/Authors']
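The reordering idea has a compact core, assuming the standard SparseGPT column error $w^2/[H^{-1}]_{jj}$; the pre-pruning mask and the block-level reordering described in the abstract are omitted from this sketch.

```python
import torch

def column_order(W: torch.Tensor, Hinv_diag: torch.Tensor) -> torch.Tensor:
    """Order columns by descending potential pruning error (illustrative).

    W: [rows, cols] weight matrix; Hinv_diag: [cols] diagonal of the inverse
    Hessian, as maintained by SparseGPT-style solvers.
    """
    col_loss = (W ** 2 / Hinv_diag).sum(dim=0)   # per-column OBS-style error
    return torch.argsort(col_loss, descending=True)
```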
oKyDZabG0I
74
oKyDZabG0I
More Than a Snapshot: Forcing Temporal Reasoning in Video Segmentation
Video Reasoning Segmentation (VRS) inherits the settings of reasoning based on world knowledge and spatial content, but lacks queries that demand temporal reasoning over the unique temporal dynamics of videos. To bridge the gap, we introduce TempVRS, a large-scale Temporal Video Reasoning Segmentation dataset containing 30k videos and 200k queries injecting temporal dynamics. Moreover, existing VRS methods commonly employ a three-stage paradigm: keyframe selection, reasoning, and propagation. However, such a paradigm not only neglects temporal dynamics inherent in videos, which results in non-negligible deviations in keyframe selection, but also hinders video understanding, degrading video reasoning into isolated keyframe analysis. To address the defects of this paradigm, we propose a temporal video reasoning segmentation method that stimulates the inherent temporal-reasoning capabilities of multi-modal large language models. Through interleaving uniformly sampled video frames across the spatial dimension and explicitly injecting the spatiotemporal distribution, our 4B model achieves performance comparable to Sa2VA-8B under the same inference settings, significantly improving accuracy on existing referring/reasoning video segmentation benchmarks (e.g., $5.5\%$ and $3.4\%$ increases over Sa2VA-4B on MeViS and ReVOS).
null
['Video Reasoning Segmentation', 'Temporal Dynamics']
/pdf/57772ac96c8fcc2c882888bf4e50ebcd74e67222.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission74/Authors']
RKYO6R8Jgb
72
RKYO6R8Jgb
Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners
Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance degradation, ultimately failing to reduce overall training compute significantly. In this paper, we introduce **T**hinking-**F**ree **P**olicy **I**nitialization (**TFPI**), a simple yet effective adaptation to RLVR that bridges long Chain-of-Thought (CoT) distillation and standard RLVR. TFPI employs a simple *ThinkFree* operation, explicitly discarding the thinking content via a direct *</think>* append, to reduce token usage during inference. Training with *ThinkFree*-adapted inputs improves performance and lowers token consumption, even in the original slow-thinking mode. Extensive experiments across various benchmarks have shown that TFPI accelerates RL convergence, achieves a higher performance ceiling, and yields more token-efficient reasoning models without specialized rewards or complex training designs. With TFPI only, we train a 4B model to reach 89.0% accuracy on AIME24 and 65.5% on LiveCodeBench using less than 4K H20 hours.
We propose Thinking-Free Policy Initialization, a stage prior to RL that can accelerate RL convergence to a higher performance ceiling and naturally yield reasoning-efficient models
['Large Language Models', 'Reasoning', 'Reinforcement Learning with Verifiable Rewards', 'Long Chain-of-Thought']
/pdf/9485752602f24c1d423333799dadade407c91cf6.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission72/Authors']
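Per the abstract, the *ThinkFree* operation is a direct *</think>* append, i.e., essentially a one-line prompt transform; the exact tag and chat template depend on the model family.

```python
def think_free(prompt: str) -> str:
    # Close the thinking span immediately so decoding starts on the answer,
    # skipping the long chain-of-thought.
    return prompt + "</think>\n\n"

print(think_free("Solve: 2x + 3 = 11."))
```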
WEg7e5pcso
70
WEg7e5pcso
ABConformer: Physics‑inspired Sliding Attention for Antibody-Antigen Interface Prediction
Accurate prediction of antibody-antigen (Ab-Ag) interfaces is critical for vaccine design, immunodiagnostics and therapeutic antibody development. However, achieving reliable predictions from sequences alone remains a challenge. In this paper, we present \textsc{ABConformer}, a model based on the Conformer backbone that captures both local and global features of a biosequence. To accurately capture Ab-Ag interactions, we introduced the physics-inspired sliding attention, enabling residue-level contact recovery without relying on three-dimensional structural data. ABConformer can accurately predict paratopes and epitopes given the antibody and antigen sequence, and predict pan-epitopes on the antigen without antibody information. In comparison experiments, \textsc{ABConformer} achieves state-of-the-art performance on a recent SARS-CoV-2 Ab-Ag dataset, and surpasses widely used sequence-based methods for antibody-agnostic epitope prediction. Ablation studies further quantify the contribution of each component, demonstrating that, compared to conventional cross-attention, sliding attention significantly enhances the precision of epitope prediction. To facilitate reproducibility, we will release the code under an open-source license upon acceptance.
null
['Antibody–antigen interface prediction', 'Protein sequence modeling', 'Conformer', 'Sliding attention mechanism', 'Epitope prediction', 'Paratope prediction', 'Structural bioinformatics']
/pdf/38039f8f48fb41930fb9d9ea4cf56c01bf411aab.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/76811f6954e6a1df174951d8ce851b45a4a300af.zip
['ICLR.cc/2026/Conference/Submission70/Authors']
84vy8ZomFn
68
84vy8ZomFn
Breaking Scale Anchoring: Frequency Representation Learning for Accurate High-Resolution Inference from Low-Resolution Training
Zero-Shot Super-Resolution Spatiotemporal Forecasting requires a deep learning model to be trained on low-resolution data and deployed for inference at high resolution. Existing studies consider **maintaining** similar error across different resolutions as indicative of successful multi-resolution generalization. However, deep learning models serving as alternatives to numerical solvers should **reduce** error as resolution increases. The fundamental limitation is that the upper bound of physical-law frequencies that low-resolution data can represent is constrained by its Nyquist frequency, making it difficult for models to process signals containing unseen frequency components during high-resolution inference. *This results in errors being anchored at low resolution, incorrectly interpreted as successful generalization.* We define this fundamental phenomenon as a new problem distinct from existing issues: **Scale Anchoring**. Therefore, we propose architecture-agnostic Frequency Representation Learning. It alleviates Scale Anchoring through resolution-aligned frequency representations and spectral consistency training: within our task and resolution range, on grids with higher Nyquist frequencies, the frequency response in high-frequency bands is more stable. Consequently, the overall error consistently decreases with resolution and is significantly lower than the baseline.
null
['Scale Anchoring', 'Zero-Shot Super-Resolution', 'Spatiotemporal Forecasting', 'Frequency Representation']
/pdf/6dab9cd5dfc2a8ac07dbb4dda69abb99c96e651c.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/5c15803f970e08058eb5c6c9ec1fd16dadd86cb9.zip
['ICLR.cc/2026/Conference/Submission68/Authors']
Y9b5UuGi9O
66
Y9b5UuGi9O
CAI: Caption-Sensitive Attention Intervention for Mitigating Object Hallucination in Large Vision-Language Models
Although Large Vision-Language Models (LVLMs) have demonstrated remarkable performance on downstream tasks, they frequently produce content that deviates from visual information, leading to object hallucination. To tackle this, recent works mostly depend on expensive manual annotation and training, or on decoding strategies that significantly increase inference time. In this work, we observe that LVLMs' attention to visual information is significantly enhanced when answering caption queries compared to non-caption queries. Inspired by this phenomenon, we propose Caption-sensitive Attention Intervention (CAI), a training-free, plug-and-play hallucination mitigation method that leverages the attention activation pattern corresponding to caption queries to enhance LVLMs' visual perception capability. Specifically, we use probing techniques to identify attention heads that are highly sensitive to caption queries and accurately estimate optimized intervention directions for their outputs. This intervention strengthens LVLMs' fine-grained visual perception capabilities, thereby effectively mitigating object hallucination. CAI reduces object hallucination by an average of 6.03% across five widely used LVLMs and five benchmarks, including both discriminative and generative tasks, demonstrating state-of-the-art (SOTA) performance while incurring little additional inference cost and preserving other foundational capabilities.
We propose Caption-sensitive Attention Intervention (CAI), a training-free method that refines the outputs of caption-sensitive attention heads during inference to enhance fine-grained visual perception and mitigate object hallucination.
['Large Vision-Language Model', 'Hallucination']
/pdf/e1c8340e562f9d274c2e634e4f49374ce76b0d78.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission66/Authors']
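An illustrative sketch of an inference-time head intervention in the spirit of CAI: add a probed steering direction to the output slice of a selected head via a forward hook. The layout assumption (head outputs concatenated along the last dimension) and the probing step that finds heads and directions are not from the paper.

```python
import torch

def add_intervention(attn_module, head_idx: int, d_head: int,
                     direction: torch.Tensor, alpha: float = 1.0):
    """Steer one attention head's output at inference time (illustrative)."""
    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        s = head_idx * d_head
        out[..., s:s + d_head] += alpha * direction   # shift along probed direction
        return output
    return attn_module.register_forward_hook(hook)
```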
CuzTXLB7Jz
65
CuzTXLB7Jz
OmniSAT: Compact Action Token, Faster Auto Regression
Existing Vision-Language-Action (VLA) models can be broadly categorized into diffusion-based and auto-regressive (AR) approaches: diffusion models capture continuous action distributions but rely on computationally heavy iterative denoising, whereas AR models enable efficient optimization and flexible sequence construction, making them better suited for large-scale pretraining. To further improve AR efficiency, particularly when action chunks induce extended and high-dimensional sequences, prior work applies entropy-guided and token-frequency techniques to shorten the sequence length. However, such compression often suffers from poor reconstruction or inefficient compression. Motivated by this, we introduce the Omni Swift Action Tokenizer (OmniSAT), which learns a compact, transferable action representation. Specifically, we first normalize value ranges and temporal horizons to obtain a consistent representation with B-Spline encoding. Then, we apply multi-stage residual quantization to the position, rotation, and gripper subspaces, producing compressed discrete tokens with coarse-to-fine granularity for each part. After pre-training on the large-scale Droid dataset, the resulting discrete tokenization shortens the training sequence by 6.8$\times$ and lowers the target entropy. To further explore the potential of OmniSAT, we develop a cross-embodiment learning strategy that builds on the unified action-pattern space and jointly leverages robot and human demonstrations. It enables scalable auxiliary supervision from heterogeneous egocentric videos. Across diverse real-robot and simulation experiments, OmniSAT achieves higher compression while preserving reconstruction quality, enabling faster AR training convergence and stronger model performance.
null
['Imitation Learning', 'Action Representation', 'Vision-Language-Action Learning']
/pdf/ccc987a5b4b404f0a409b34c2eba4139a884ce88.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission65/Authors']
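Multi-stage residual quantization, the mechanism behind the coarse-to-fine action tokens, reduces to a short loop: quantize, subtract, re-quantize the residual. Codebook learning and the B-Spline normalization step are out of scope for this toy.

```python
import numpy as np

def residual_quantize(x: np.ndarray, codebooks: list) -> list:
    """Return one token per stage; later stages refine earlier ones."""
    tokens, residual = [], x.copy()
    for cb in codebooks:                              # cb: [n_codes, dim]
        idx = int(np.argmin(((residual - cb) ** 2).sum(axis=-1)))
        tokens.append(idx)
        residual = residual - cb[idx]                 # pass the residual down
    return tokens

cbs = [np.random.randn(256, 8) for _ in range(3)]     # 3 stages, 256 codes each
print(residual_quantize(np.random.randn(8), cbs))     # e.g., [17, 203, 64]
```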
jov79sMFHn
64
jov79sMFHn
NANO3D: A Training-Free Approach for Efficient 3D Editing Without Masks
3D object editing is essential for interactive content creation in gaming, animation, and robotics, yet current approaches remain inefficient, inconsistent, and often fail to preserve unedited regions. Most methods rely on editing multi-view renderings followed by reconstruction, which introduces artifacts and limits practicality. To address these challenges, we propose \textbf{Nano3D}, a training-free framework for precise and coherent 3D object editing without masks. Nano3D integrates FlowEdit into TRELLIS to perform localized edits guided by front-view renderings, and further introduces region-aware merging strategies, Voxel/Slat-Merge, which adaptively preserve structural fidelity by ensuring consistency between edited and unedited areas. Experiments demonstrate that Nano3D achieves superior 3D consistency and visual quality compared with existing methods. Based on this framework, we construct the first large-scale 3D editing dataset, \textbf{Nano3D-Edit-100k}, which contains over 100,000 high-quality 3D editing pairs. This work addresses long-standing challenges in both algorithm design and data availability, significantly improving the generality and reliability of 3D editing, and laying the groundwork for the development of feed-forward 3D editing models.
null
['3D Computer Vision', '3D Editing', '3D Generation', 'Flow', 'Image Editing']
/pdf/cbf3e28722c3010620160fa33672819483eba27a.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission64/Authors']
9qOF3zgVfa
63
9qOF3zgVfa
A Needle In A Haystack: Referring Hour-Level Video Object Segmentation
Long-term videos over minutes are ubiquitous in daily life, while existing Referring Video Object Segmentation (RVOS) datasets are limited to short-term videos with a duration of only 5-60 seconds. To address the dilemma of referring object segmentation for hour-level videos, we construct the first Hour-level Referring Video Object Segmentation (Hour-RVOS) dataset, characterized by (1) any-length videos from seconds to hours, (2) rich-semantic expressions twice as long as existing ones, and (3) multi-round interactions according to target change. These unique characteristics bring tough challenges, including (1) **Sparse object distribution**: segmenting target objects in sparsely distributed key-frames among massive amounts of frames is like finding a needle in a haystack. (2) **Long-range correspondence**: intricate linguistic-visual associations must be established across thousands of frames. To address these challenges, we propose a semi-online hierarchical-memory-association RVOS method for building cross-modal long-range correlations. Through interleaved propagation of hierarchical memory and dynamic balance of linguistic-visual tokens, our method can adequately associate multi-period representations of target objects in real time. The benchmark results show that existing offline methods struggle with hour-level videos in multiple stages, whereas our proposed method, without LLMs, achieves over $15\%$ accuracy improvements compared to Sa2VA-8B when handling any-length videos with multi-round and various-semantic expressions in a single stage.
null
['Referring Video Object Segmentation', 'Hierarchical Memory']
/pdf/3b643a86f53d4d2476c0f3ea238941b545fde51e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission63/Authors']
tw1IWcVKTT
62
tw1IWcVKTT
Automated Optimization Modeling via a Localizable Error-Driven Perspective
Automated optimization modeling via Large Language Models (LLMs) has emerged as a promising approach to assist complex human decision-making. While post-training has become a pivotal technique to enhance LLMs' capabilities in this domain, its effectiveness is severely constrained by the scarcity and underutilization of high-quality training data. However, through a detailed profiling of error patterns across various problem-response pairs drawn from post-training, we identify two fundamental limitations of existing automated optimization modeling approaches: (L1) the \textit{sparsity} of error-specific problems and (L2) the \textit{sparse rewards} associated with difficult problems. We demonstrate that these limitations can result in suboptimal performance in domain-specific post-training for LLMs. To tackle the above two limitations, we propose a novel error-driven learning framework---namely, auto\textbf{m}ated opt\textbf{i}mization modeli\textbf{n}g via a localizable error-\textbf{d}riven perspective (MIND)---that customizes the whole model training framework from data synthesis to post-training. MIND is based on our key observation of the unique \textbf{\textit{localizable}} patterns in error propagation of optimization modeling: modeling errors may remain localized to specific semantic segments and do not propagate throughout the entire solution. Thus, in contrast to holistic reasoning tasks such as mathematical proofs, MIND leverages the construction of a focused, high-density training corpus and proposes \textbf{D}ynamic Supervised \textbf{F}ine-Tuning \textbf{P}olicy \textbf{O}ptimization (DFPO) to tackle difficult problems through localized refinement. Its appealing features include that (1) it generates targeted, error-aware training problems that achieve superior sample efficiency, and (2) it ensures a coherent and structured learning progression for stable and effective reinforcement learning on difficult problems. Experiments on six benchmarks demonstrate that MIND \textit{consistently} outperforms all the state-of-the-art automated optimization modeling approaches. Furthermore, we open-source a new training dataset, MIND-Train, and a new benchmark, MIND-Bench, for the automated optimization modeling research community.
null
['LLM post-training', 'automated optimization modeling']
/pdf/23fb085ea34ec9c3758c3b82f1b0675987c4f205.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission62/Authors']
AaZVrbElhC
61
AaZVrbElhC
CaRe-BN: Precise Moving Statistics for Stabilizing Spiking Neural Networks in Reinforcement Learning
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision-making on neuromorphic hardware by mimicking the event-driven dynamics of biological neurons. However, due to the discrete and non-differentiable nature of spikes, directly trained SNNs rely heavily on Batch Normalization (BN) to stabilize gradient updates. In online Reinforcement Learning (RL), imprecise BN statistics hinder exploitation, resulting in slower convergence and suboptimal policies. This challenge limits the adoption of SNNs for energy-efficient control on resource-constrained devices. To overcome this, we propose Confidence-adaptive and Re-calibration Batch Normalization (CaRe-BN), which introduces (\emph{i}) a confidence-guided adaptive update strategy for BN statistics and (\emph{ii}) a re-calibration mechanism to align distributions. By providing more accurate normalization, CaRe-BN stabilizes SNN optimization without disrupting the RL training process. Importantly, CaRe-BN does not alter inference, thus preserving the energy efficiency of SNNs in deployment. Extensive experiments on continuous control benchmarks demonstrate that CaRe-BN improves SNN performance by up to $22.6$% across different spiking neuron models and RL algorithms. Remarkably, SNNs equipped with CaRe-BN even surpass their ANN counterparts by $5.9$%. These results highlight a new direction for BN techniques tailored to RL, paving the way for neuromorphic agents that are both efficient and high-performing.
null
['Spiking Neural Networks', 'Batch Normalization', 'Reinforcement Learning']
/pdf/c059da07546cb4a9c34c3abff3df59e0351f2515.pdf
applications to neuroscience & cognitive science
/attachment/d24c798f75e7488595b21d7268076fb8c487bb43.zip
['ICLR.cc/2026/Conference/Submission61/Authors']
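One way to picture a confidence-guided statistics update: trust batch statistics less when they drift far from the running estimates. The specific confidence function below is an invented stand-in; CaRe-BN defines its own update and re-calibration rules.

```python
import torch

def adaptive_bn_update(run_mean, run_var, batch_mean, batch_var, momentum=0.1):
    # Confidence decays as the batch statistics drift from the running ones
    # (a stand-in for the paper's confidence measure).
    drift = ((batch_mean - run_mean) ** 2 / (run_var + 1e-5)).mean()
    m = momentum * torch.exp(-drift)
    run_mean = (1 - m) * run_mean + m * batch_mean
    run_var = (1 - m) * run_var + m * batch_var
    return run_mean, run_var
```

Because only the running statistics change, inference-time behavior (and thus the SNN's energy efficiency) is untouched, matching the abstract's claim.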
Fa3C0TkWYi
60
Fa3C0TkWYi
RectiWeather: Photo-Realistic Adverse Weather Removal via Zero-shot Soft Weather Perception and Rectified Flow
Despite significant progress in Adverse Weather Removal (AWR), challenges remain in applying existing methods to real-world scenarios and in generating photo-realistic and visually compelling outcomes. The limited generalization of current approaches can be attributed to their inability to accurately perceive complex degradations in weather-affected images. Moreover, owing to optimization objectives that prioritize distortion losses, discriminative methods often produce overly smooth reconstructions. To address these challenges, we propose \textbf{RectiWeather}, a novel AWR framework guided by zero-shot soft perceptions extracted from pre-trained vision–language models (VLMs). Specifically, we design an AWR-specific Question Answering (AWR-QA) module that guides VLMs to produce soft perceptions of weather conditions and low-level attributes. These soft perceptions are then integrated into baseline AWR models through attribute-modulated normalization (AMN) and weather-weighted adapters (WWA), enabling posterior mean estimation while minimizing distortion loss. Furthermore, we map the posterior output to the clean image distribution using a perception-aware rectified flow model, where soft perceptions define the source distribution and guide the velocity field. Extensive experiments show that RectiWeather consistently surpasses state-of-the-art baselines in fidelity and perceptual metrics across both all-in-one and out-of-distribution scenarios. Our code will be released upon publication.
null
['zero-shot', 'soft perception', 'rectified flow']
/pdf/0ccbf8172fd9da11e5a1c3badd0efedef04b4355.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission60/Authors']
hQhqq6G3Be
58
hQhqq6G3Be
Adaptive Text and Feature Embedding for Consistent Story Generation
Recent advancements in text-to-image (T2I) generation have significantly improved image quality and text alignment. However, generating multiple coherent images that maintain consistent character identities across diverse textual descriptions remains challenging. Existing methods face trade-offs between identity consistency and per-image text fidelity, often yielding uniform poses or failing to capture specific details, resulting in inconsistent performance. In this paper, we examine the text embeddings of word and PAD tokens in scene descriptions, as well as ambiguity in the identity description. We identify identity-related and irrelevant components in the text embeddings, amplifying the former and suppressing the latter. Additionally, we detect under-specified identity descriptions and reuse their features during the generative process. Finally, we introduce a unified evaluation protocol, the Consistency Quality Score (CQS), integrating identity preservation and per-image text alignment into a single comprehensive metric. CQS explicitly captures performance imbalances, aligning evaluation closely with human perceptual preferences. Our framework achieves state-of-the-art performance, effectively resolving prior trade-offs and providing valuable insights into consistent image generation.
null
['consistent generation']
/pdf/8d375ec00fc86c1fb6e13bf50e2685577220a456.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission58/Authors']
SGsxxbAjXH
53
SGsxxbAjXH
MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion
Multi-view generation with camera pose control and prompt-based customization are both essential elements for achieving controllable generative models. However, existing multi-view generation models do not support customization with geometric consistency, whereas customization models lack explicit viewpoint control, making them challenging to unify. Motivated by these gaps, we introduce a novel task, multi-view customization, which aims to jointly achieve multi-view camera pose control and customization. Due to the scarcity of training data in customization, existing multi-view generation models, which inherently rely on large-scale datasets, struggle to generalize to diverse prompts. To address this, we propose MVCustom, a novel diffusion-based framework explicitly designed to achieve both multi-view consistency and customization fidelity. In the training stage, MVCustom learns the subject's identity and geometry using a feature-field representation, incorporating the text-to-video diffusion backbone enhanced with dense spatio-temporal attention, which leverages temporal coherence for multi-view consistency. In the inference stage, we introduce two novel techniques: depth-aware feature rendering explicitly enforces geometric consistency, and consistent-aware latent completion ensures accurate perspective alignment of the customized subject and surrounding backgrounds. Extensive experiments demonstrate that MVCustom is the only framework that simultaneously achieves faithful multi-view generation and customization.
null
['Multi-view generation', 'Customization', 'Personalization']
/pdf/7402b82185602eb505889e6c56ce19060b583db8.pdf
generative models
/attachment/64288fe47f2bb519516b57e495715432940c8b78.zip
['ICLR.cc/2026/Conference/Submission53/Authors']
eGI1HQeCmn
51
eGI1HQeCmn
ImmunoTrace: A Meta-Agent for Immune History Tracking
The adaptive immune system encodes an individual's exposure history in the T-cell receptor (TCR) repertoire. We present ImmunoTrace, an AI agent for immune history tracking that estimates past pathogen exposure from a single time-point repertoire by linking TCRs and HLA alleles to proteome-scale peptide libraries. A shared protein language model encodes TCR CDR3 sequences, HLA pseudo-sequences, and candidate peptides. Three high-capacity projection heads adapt these embeddings, and two cross-attention modules explicitly model TCR–peptide and HLA–peptide interactions. The fused representation is passed to a deep classifier to produce binding probabilities, while a contrastive branch with an InfoNCE objective and a learnable temperature sculpts the embedding space; we jointly optimize the contrastive and BCE losses while partially fine-tuning ESM2. For subject-level tracking, scores are calibrated into probabilities and evidence is aggregated across the repertoire with a probabilistic fusion scheme, yielding pathogen-level exposure estimates together with interpretable peptide-level evidence. On a multi-pathogen benchmark that includes Treponema pallidum (syphilis) and Neisseria gonorrhoeae (gonorrhea), ImmunoTrace surpasses strong baselines, generalizes under protein and HLA distribution shifts, maintains well-calibrated predictions, and scales to proteome-sized libraries with practical latency. We will release code and data-preparation recipes to facilitate reproducibility.
ImmunoTrace is an AI agent that links a single-time-point TCR repertoire (with optional HLA) to proteome-scale peptide libraries.
['AI Agent', 'Retrieval-Augmented Modeling', 'Contrastive Learning', 'Probabilistic Evidence Fusion', 'Immune Exposure']
/pdf/d1ffbbfed5979176e21ac50a4ef3cc142581e5b4.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/0400c6e99d42ec68b2906e04d70169648f6a2e03.zip
['ICLR.cc/2026/Conference/Submission51/Authors']
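One plausible reading of the repertoire-level "probabilistic fusion scheme" mentioned in the ImmunoTrace abstract is a noisy-OR aggregation of calibrated per-TCR binding probabilities. The sketch below is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np

def noisy_or_fusion(p_binding: np.ndarray) -> float:
    """Aggregate calibrated per-TCR binding probabilities for one pathogen's
    peptides into a repertoire-level exposure estimate (noisy-OR is one
    plausible fusion rule). Log-space product for numerical stability."""
    log_not = np.log1p(-np.clip(p_binding, 0.0, 1.0 - 1e-9))
    return 1.0 - np.exp(log_not.sum())

# e.g., calibrated scores of top (TCR, peptide) pairs for Treponema pallidum
scores = np.array([0.02, 0.9, 0.15, 0.6])
print(f"exposure probability: {noisy_or_fusion(scores):.3f}")
```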
8IjxLiNXL1
49
8IjxLiNXL1
Memory Forgetting Adapter Sculpting for Selective Multimodal Large Language Model Unlearning
Multimodal Large Language Models (MLLMs) achieve remarkable capabilities but can inadvertently memorize privacy-sensitive information. Existing unlearning methods can remove such knowledge, yet they often degrade the model’s general image understanding. To address this, we propose the Sculpted Memory Forgetting Adapter (SMFA), which confines forgetting to targeted memory regions while preserving overall capabilities. SMFA first fine-tunes the model to replace sensitive responses with refusals, yielding a memory forgetting adapter, and then applies a retaining-anchor-guided masking mechanism to prevent interference with unrelated knowledge and understanding ability. To systematically evaluate selective unlearning, we introduce S-MLLMUn Bench, the first benchmark designed to jointly assess the removal of sensitive knowledge and retention of general visual understanding. Extensive experiments show that, unlike prior methods, SMFA achieves precise and controllable unlearning while maintaining the model’s foundational image understanding.
null
['MLLMs', 'Machine Unlearning', 'MLLM Unlearning', 'Privacy Protection']
/pdf/5b82a24c81db1a9f2c82edacb3914001b9b28546.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission49/Authors']
PaYo96rjij
44
PaYo96rjij
Lifelong Embodied Navigation Learning
Embodied navigation agents powered by large language models have shown strong performance on individual tasks but struggle to continually acquire new navigation skills, suffering from catastrophic forgetting. We formalize this challenge as lifelong embodied navigation learning (LENL), where an agent is required to adapt to a sequence of navigation tasks spanning multiple scenes and diverse user instruction styles, while retaining previously learned knowledge. To tackle this problem, we propose Uni-Walker, a lifelong embodied navigation framework that decouples navigation knowledge into task-shared and task-specific components with Decoder Extension LoRA (DE-LoRA). To learn the shared knowledge, we design a knowledge inheritance strategy and an experts co-activation strategy to facilitate shared knowledge transfer and refinement across multiple navigation tasks. To learn the specific knowledge, we propose an expert subspace orthogonality constraint together with a navigation-specific chain-of-thought reasoning mechanism to capture specific knowledge and enhance instruction-style understanding. Extensive experiments demonstrate the superiority of Uni-Walker for building universal embodied navigation agents with lifelong learning. We also provide the code of this work in the Supplementary Materials.
We propose Uni-Walker, a lifelong embodied navigation framework that decouples navigation knowledge into task-shared and task-specific components with Decoder Extension LoRA (DE-LoRA).
['Embodied Navigation', 'Lifelong Learning', 'Robotics Learning']
/pdf/a2c3cf69753a38670628cc736ba09431d8cd98fc.pdf
applications to robotics, autonomy, planning
/attachment/27ecb2511cb145533bcdfaf495bc8e661f073efd.zip
['ICLR.cc/2026/Conference/Submission44/Authors']
QYH7JGzEzM
43
QYH7JGzEzM
GrapHist: Large-Scale Graph Self-Supervised Learning for Histopathology
Self-supervised vision models have achieved notable success in digital pathology. However, their domain-agnostic transformer architectures are not designed to inherently account for fundamental biological elements of histopathology images, namely cells and their complex interactions. In this work, we hypothesize that biologically-informed modeling of tissues as cell graphs enables more efficient representation learning. Thus, we introduce GrapHist, a novel graph-based self-supervised framework for histopathology, which learns generalizable and structurally-informed embeddings that enable diverse downstream tasks. GrapHist integrates masked autoencoders and heterophilic graph neural networks that are explicitly designed to capture the heterogeneity of tumor microenvironments. We pre-train GrapHist on a large collection of 11 million cell graphs derived from breast tissues and evaluate its transferability across in- and out-of-domain benchmarks, spanning thorax, colorectal, and skin cancers. Our results show that GrapHist achieves competitive performance compared to its vision-based counterparts, while requiring four times fewer parameters. It also drastically outperforms fully-supervised graph models on cancer subtyping tasks. Finally, to foster further research, we release eight digital pathology graph datasets used in our study, establishing the first large-scale benchmark in this field.
null
['graph representation learning', 'digital pathology']
/pdf/2052ad1273f1ab95b7b4c3bccd593425b3377553.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/a9efe21b02e5834a998f7c3922d90bfef6a411fa.zip
['ICLR.cc/2026/Conference/Submission43/Authors']
cnrhmiw1VG
39
cnrhmiw1VG
GLEAM: Learning to Match and Explain in Cross-View Geo-Localization
Cross-View Geo-Localization (CVGL) focuses on identifying correspondences between images captured from distinct perspectives of the same geographical location. However, existing CVGL approaches are typically restricted to a single view or modality, and their direct visual matching strategy lacks interpretability: they only determine whether two images correspond, without explaining the rationale behind the match. In this paper, we present GLEAM-C, a foundational CVGL model that unifies multiple views and modalities—including UAV imagery, street maps, panoramic views, and ground photographs—by aligning them exclusively with satellite imagery. Our framework enhances training efficiency through optimized implementation while achieving accuracy comparable to prior modality-specific CVGL models through a two-phase training strategy. Moreover, to address the lack of interpretability in traditional CVGL methods, we leverage the reasoning capabilities of multimodal large language models (MLLMs) to propose a new task, GLEAM-X, which combines cross-view correspondence prediction with explainable reasoning. To support this task, we construct a bilingual benchmark using GPT-4o and Doubao-1.5-Thinking-Vision-Pro to generate training and testing data. The test set is further refined through detailed human revision, enabling systematic evaluation of explainable cross-view reasoning and advancing transparency and scalability in geo-localization. Together, GLEAM-C and GLEAM-X form a comprehensive CVGL pipeline that integrates multi-modal, multi-view alignment with interpretable correspondence analysis, unifying accurate cross-view matching with explainable reasoning and advancing **G**eo-**L**ocalization by enabling models to better **E**xplain **A**nd **M**atch. Code and datasets used in this work will be made publicly accessible.
This work presents GLEAM-C and GLEAM-X, a unified pipeline that advances cross-view geo-localization by integrating multi-view alignment with interpretable, explainable reasoning.
['Remote Sensing', 'Cross-View Geo-Localization', 'Multimodal Large Language Model']
/pdf/7a130beba23634a98a969092af6d39b7b1dbd331.pdf
foundation or frontier models, including LLMs
/attachment/3649d00816711f2efb443d6c95c2566a816df980.zip
['ICLR.cc/2026/Conference/Submission39/Authors']
15HYjY5ol7
37
15HYjY5ol7
An AI Agent for Immune Receptor Fingerprint‑Based Diagnosis of Infection of Unknown Origin
When routine tests fail to find a pathogen, diagnosing infections of unknown origin stalls. We instead read the patient's immune response for AI-readable clues. We formalize a new machine learning task: inferring plausible epitopes directly from immune-receptor repertoires and localizing their pathogen sources. To address this problem, we introduce a novel Transformer-based multi-sequence representation-learning model that jointly models T-cell receptors, human leukocyte antigens, and antigenic peptides, and we pretrain it across six tasks; the model achieves best or second-best performance across all six tasks against strong baselines. Building on this, we develop an end-to-end, clinically oriented agent that operates in a perceive--plan--act loop, orchestrating epitope generation, HLA-personalized filtering, consistency checks, and retrieval, with clinician-in-the-loop threshold adaptation; when evidence conflicts, it performs calibrated abstention and logs an interpretable decision trace. Evaluated end-to-end on clinical-style repertoires with diagnostic report generation, the agent outperforms discriminative-pairing and direct-retrieval baselines. Upon publication, we will release all code, models, and pathogen indices under a research license, together with de-identified evaluation data.
Generative allele-aware epitope inference plus proteome retrieval turns TCR “fingerprints” into ranked pathogen hypotheses with calibrated confidence for IUO diagnosis.
['AI Agent', 'multi-task representation learning', 'Conditional sequence generation', 'Immune repertoire modeling', 'Epitope inference', 'Clinical diagnostics']
/pdf/c22586b13406a373b84019d98c4949f7c95ef57b.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/3ccac7098c85ede22a372fedd3bed2c138d4049a.zip
['ICLR.cc/2026/Conference/Submission37/Authors']
GjsE9C1grt
36
GjsE9C1grt
Nonlinear Steering for Token-Efficient Reasoning in LLMs via Flow Matching
Large Reasoning Models (LRMs) excel at complex reasoning tasks, but their efficiency is often hampered by overly verbose outputs. Prior steering methods attempt to address this issue by applying a single, global vector to hidden representations—a rigid approach grounded in the restrictive *linear representation hypothesis*. In this work, we introduce *FlowSteer*, a nonlinear steering method that goes beyond uniform linear shifts by learning a complete *transformation between the distributions* associated with verbose and concise reasoning. This transformation is learned via *Flow Matching* as a velocity field, enabling precise, input-dependent control over the model's reasoning process. Across diverse reasoning benchmarks, *FlowSteer* simultaneously achieves superior accuracy and token efficiency over leading inference-time baselines. Our work demonstrates that modeling the full distributional transport with powerful generative techniques offers a more effective and principled foundation for controlling LRMs.
This paper introduces a nonlinear steering method using Flow Matching to transform verbose reasoning paths into concise ones, achieving superior accuracy and token efficiency in LLMs.
['representation steering', 'large reasoning models', 'LRMs', 'large language models', 'LLMs', 'efficient reasoning', 'flow matching']
/pdf/3421f23aa0576a1a0ef1db91cfc97936c8c749b3.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission36/Authors']
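A minimal sketch of the FlowSteer idea as stated in the abstract: learn a velocity field via flow matching between verbose and concise hidden-state distributions, then integrate it at inference. The network size, the linear interpolation path, and the Euler integrator are assumptions:

```python
import torch
import torch.nn as nn

d = 256  # hidden-state dimension (assumed)
velocity = nn.Sequential(nn.Linear(d + 1, 512), nn.SiLU(), nn.Linear(512, d))

def fm_loss(h_verbose, h_concise):
    """Conditional flow matching: learn v(x_t, t) that transports verbose
    hidden states to concise ones along straight-line paths."""
    t = torch.rand(h_verbose.size(0), 1)
    x_t = (1 - t) * h_verbose + t * h_concise     # linear interpolation path
    target = h_concise - h_verbose                # constant target velocity
    pred = velocity(torch.cat([x_t, t], dim=-1))
    return ((pred - target) ** 2).mean()

@torch.no_grad()
def steer(h, steps=8):
    """At inference, integrate the learned field to nudge a hidden state
    toward the concise-reasoning region (simple Euler scheme)."""
    for i in range(steps):
        t = torch.full((h.size(0), 1), i / steps)
        h = h + (1.0 / steps) * velocity(torch.cat([h, t], dim=-1))
    return h

h_v, h_c = torch.randn(32, d), torch.randn(32, d)
print(fm_loss(h_v, h_c).item(), steer(h_v).shape)
```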
sE8DCSJTzd
35
sE8DCSJTzd
Exploration vs. Exploitation: Rethinking RLVR through Clipping, Entropy, and Spurious Reward
This paper examines the exploration–exploitation trade-off in reinforcement learning with verifiable rewards (RLVR), a framework for improving the reasoning of Large Language Models (LLMs). Recent studies suggest that RLVR can elicit strong mathematical reasoning in LLMs through two seemingly paradoxical mechanisms: \textit{spurious rewards}, which suppress exploitation by rewarding outcomes unrelated to the ground truth, and \textit{entropy minimization}, which suppresses exploration by pushing the model toward more confident and deterministic outputs, highlighting a puzzling dynamic: both discouraging exploitation and discouraging exploration improve reasoning performance, yet the underlying principles that reconcile these effects remain poorly understood. We focus on two fundamental questions: (i) how policy entropy relates to performance, and (ii) whether spurious rewards yield gains, potentially through the interplay of clipping bias and model contamination. Our results show that clipping bias under spurious rewards reduces policy entropy, leading to more confident and deterministic outputs, while entropy minimization alone is insufficient for improvement. We further propose a reward-misalignment model explaining why spurious rewards can enhance performance beyond contaminated settings. Our findings clarify the mechanisms behind spurious-reward benefits and provide principles for more effective RLVR training.
null
['Reinforcement Learning with Verifiable Rewards', 'Group Relative Policy Optimization', 'LLM Reasoning']
/pdf/cb6d1e97c04de37d8f35dd44516f78647f047f46.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission35/Authors']
6eSNG1VNkl
33
6eSNG1VNkl
SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks
Multi-turn jailbreaks capture the real threat model for safety-aligned chatbots, where single-turn attacks are merely a special case. Yet existing approaches break under exploration complexity and intent drift. We propose SEMA, a simple yet effective framework that trains a multi-turn attacker without relying on any existing strategies or external data. SEMA comprises two stages. Prefilling self-tuning enables usable rollouts by fine-tuning on non-refusal, well-structured, multi-turn adversarial prompts that are self-generated with a minimal prefix, thereby stabilizing subsequent learning. Reinforcement learning with intent-drift-aware reward trains the attacker to elicit valid multi-turn adversarial prompts while maintaining the same harmful objective. We anchor harmful intent in multi-turn jailbreaks via an intent-drift-aware reward that combines intent alignment, compliance risk, and level of detail. Our open-loop attack regime avoids dependence on victim feedback, unifies single- and multi-turn settings, and reduces exploration complexity. Across multiple datasets, victim models, and jailbreak judges, our method achieves state-of-the-art (SOTA) attack success rates (ASR), outperforming all single-turn baselines, manually scripted and template-driven multi-turn baselines, as well as our SFT (Supervised Fine-Tuning) and DPO (Direct Preference Optimization) variants. For instance, SEMA achieves an average ASR@1 of 80.1% across three closed-source and open-source victim models on AdvBench, 33.9% above the previous SOTA. The approach is compact, reproducible, and transfers across targets, providing a stronger and more realistic stress test for large language model (LLM) safety and enabling automatic red-teaming to expose and localize failure modes.
null
['jailbreak', 'attack', 'multi-turn', 'reinforcement learning', 'large language model']
/pdf/689aa1dbf5ca139920b52f3c93fd1376cf21b832.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission33/Authors']
KoLYNHJRBY
32
KoLYNHJRBY
CL-DPS: A Contrastive Learning Approach to Blind Nonlinear Inverse Problem Solving via Diffusion Posterior Sampling
Diffusion models (DMs) have recently become powerful priors for solving inverse problems. However, most work focuses on non-blind settings with known measurement operators, and existing DM-based blind solvers largely assume linear measurements, which limits practical applicability where operators are frequently nonlinear. We introduce CL-DPS, a contrastively trained likelihood for diffusion posterior sampling that requires no knowledge of the operator parameters at inference. To the best of our knowledge, CL-DPS is the first DM-based framework capable of solving blind nonlinear inverse problems. Our key idea is to train an auxiliary encoder offline, using a MoCo-style contrastive objective over randomized measurement operators, to learn a surrogate for the conditional likelihood $p(\boldsymbol{y} | \boldsymbol{x}_t)$. During sampling, we inject the surrogate's gradient as a guidance term along the reverse diffusion trajectory, which enables posterior sampling without estimating or inverting the forward operator. We further employ overlapping patch-wise inference to preserve fine structure and a lightweight color-consistency head to stabilize color statistics. The guidance is sampler-agnostic and pairs well with modern solvers (e.g., DPM-Solver++ (2M)). Extensive experiments show that CL-DPS effectively handles challenging nonlinear cases, such as rotational and zoom deblurring, where prior DM-based methods fail, while remaining competitive on standard linear benchmarks. Code: \url{https://anonymous.4open.science/r/CL-DPS-4F5D}.
null
['Diffusion Models', 'Blind Inverse Problems', 'Contrastive Learning']
/pdf/60ec680452c3952a435815e5ec6fb69f635a1ee0.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission32/Authors']
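The guidance mechanism in the CL-DPS abstract (injecting the surrogate likelihood's gradient into the reverse trajectory) can be sketched as below; `step_fn`, the guidance scale, and the toy surrogate are placeholders, not the paper's components:

```python
import torch

def guided_step(x_t, eps_pred, surrogate_logp, y, step_fn, scale=1.0):
    """One reverse-diffusion update with contrastive-surrogate guidance
    (a sketch; `step_fn` stands for any sampler update, e.g. a DDIM step).
    The surrogate replaces the unknown likelihood p(y | x_t)."""
    x_t = x_t.detach().requires_grad_(True)
    logp = surrogate_logp(x_t, y)               # score from the trained encoder
    grad = torch.autograd.grad(logp.sum(), x_t)[0]
    x_prev = step_fn(x_t.detach(), eps_pred)    # sampler's unguided update
    return x_prev + scale * grad                # inject likelihood gradient

# toy stand-ins, only to show the call pattern
x = torch.randn(1, 3, 8, 8)
y = torch.randn(1, 16)
surrogate = lambda xt, yy: -(xt.mean(dim=(1, 2, 3)) - yy.mean(dim=1)) ** 2
step = lambda xt, eps: xt - 0.1 * eps
print(guided_step(x, torch.randn_like(x), surrogate, y, step).shape)
```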
AZ6lqcvHLX
30
AZ6lqcvHLX
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
The probabilistic diffusion model (DM), which generates content by inference through a recursive chain structure, has emerged as a powerful framework for visual generation. After pre-training on enormous amounts of data, the model needs to be properly aligned to meet requirements for downstream applications. How to efficiently align the foundation DM is a crucial task. Contemporary methods are either based on Reinforcement Learning (RL) or truncated Backpropagation (BP). However, RL and truncated BP suffer from low sample efficiency and biased gradient estimation, respectively, resulting in limited improvement or, even worse, complete training failure. To overcome the challenges, we propose the Recursive Likelihood Ratio (RLR) optimizer, a Half-Order (HO) fine-tuning paradigm for DM. The HO gradient estimator enables the computation graph rearrangement within the recursive diffusive chain, making the RLR's gradient estimator **an unbiased one with lower variance** than other methods. We theoretically investigate the bias, variance, and convergence of our method. Extensive experiments are conducted on image and video generation to validate the superiority of the RLR. Furthermore, we propose a novel prompting technique that naturally complements the RLR to achieve a synergistic effect.
null
['perturbation-based gradient estimation', 'diffusion model', 'post-training']
/pdf/1c4cb7e5e1ed617120bf74e26bf181ee341f737f.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission30/Authors']
lWc3QZkC9e
27
lWc3QZkC9e
WWW.Serve: A Decentralized Framework for Collaborative LLM Serving
Current large language model (LLM) services remain mostly centralized, restricting both scalability and privacy. Decentralization could address these limitations, but it imposes challenges of trustless coordination, fair scheduling, and efficiency. To this end, we propose WWW.Serve, a decentralized framework for interconnecting LLM servers worldwide. It preserves service providers’ anonymity and privacy, while supporting self-organizing request dispatch, dynamic workload balancing, and autonomous control over resources and policies. Three key designs are integrated: a blockchain-inspired credit system for trustless collaboration, gossip-driven peer synchronization for flexible participation, and a duel-and-judge mechanism for robust contributor evaluation. Empirically, WWW.Serve improves global SLO attainment by up to $1.5\times$ and lowers latency by 27.6\%. Its performance approaches, and in some cases surpasses, centralized scheduling, while preserving the benefits of decentralization. These results highlight WWW.Serve as a promising foundation for trustless and collaborative LLM serving.
We propose WWW.Serve, a fully decentralized framework for trustless and collaborative LLM serving, which improves efficiency, latency, and scalability while preserving privacy.
['Large Language Model Serving', 'Efficient Serving Systems', 'Decentralized LLM Serving', 'Distributed LLMs']
/pdf/9ee240e1cc36c7066864a2f959d22211f84eb1dd.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission27/Authors']
FRXNMF0to7
26
FRXNMF0to7
The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs
Personality traits have long been studied as predictors of human behavior. Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems, with advanced LLMs displaying consistent behavioral tendencies resembling human traits like agreeableness and self-regulation. Understanding these patterns is crucial, yet prior work primarily relied on simplified self-reports and heuristic prompting, with little behavioral validation. In this study, we systematically characterize LLM personality across three dimensions: *(1)* the dynamic emergence and evolution of trait profiles throughout training stages; *(2)* the predictive validity of self-reported traits in behavioral tasks; and *(3)* the impact of targeted interventions, such as persona injection, on both self-reports and behavior. Our findings reveal that instructional alignment (e.g., RLHF, instruction tuning) significantly stabilizes trait expression and strengthens trait correlations in ways that mirror human data. However, these *self-reported traits do not reliably predict behavior*, and *observed associations often diverge from human patterns*. While persona injection successfully steers self-reports in the intended direction, it exerts little or inconsistent effect on actual behavior. By distinguishing surface-level trait expression from behavioral consistency, our findings challenge assumptions about LLM personality and underscore the need for deeper evaluation in alignment and interpretability.
LLMs develop stable self-reported trait profiles through instructional alignment, yet these traits fail to manifest in real-world behavior.
['LLMs', 'personality traits', 'behavioral alignment', 'self-regulation', 'persona', 'trait manifestation', 'personality illusion', 'psychology of AI']
/pdf/dd4504df8949b129861273747acae5ac0c9aa6ca.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission26/Authors']
oKHPJ0GTLG
25
oKHPJ0GTLG
De-hallucinating CLIP Embeddings to Improve Brain-Vision Mapping
Recent advances in vision-language models, such as CLIP, have enabled their widespread use in brain encoding and decoding, where global image embeddings serve as anchors linking visual stimuli to voxel-level brain responses. However, we observe that CLIP's global visual embeddings often exhibit hallucinatory semantics: they encode objects not explicitly present in an image but inferred from prior associations. This imaginative bias poses a significant challenge for brain-vision mapping, particularly for natural scenes containing multiple annotated objects, where human neural responses are constrained to what is actually perceived. To address this issue, we propose a framework that suppresses CLIP's visual hallucination by integrating object- and concept-level representations. First, we extract object-centric embeddings using segmentation masks, isolating visual features tied to explicitly present objects. Next, we stabilize these diverse segment embeddings with a concept bank of text-derived CLIP embeddings, aligning bottom-up perception with top-down categorical knowledge through cross-attention. The resulting concept-stabilized object features act as corrective signals to be fused with global scene embeddings to form de-hallucinated visual representations. Finally, these representations are used for voxel-wise regression. Experiments on the NSD dataset demonstrate that our method generates representations that better align with category-selective brain regions (bodies, faces, food, places, and words), leading to more accurate and reliable neuro-based image generation compared to standard CLIP regression. These results highlight the importance of suppressing model imagination in bridging human perception with multimodal foundation models and offer a new direction for robust, biologically grounded brain-vision alignment.
null
['Brain-vision mapping', 'neuro decoding', 'semantic selectivity']
/pdf/023eb00fa2c555ec3dde2f9e72adb17b07ad5be3.pdf
applications to neuroscience & cognitive science
null
['ICLR.cc/2026/Conference/Submission25/Authors']
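A compact sketch of the cross-attention stabilization and fusion step described in the abstract above; the dimensions, mean pooling, and fusion weight `alpha` are illustrative assumptions:

```python
import torch
import torch.nn as nn

d = 512  # CLIP embedding dimension (assumed)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)

def dehallucinate(global_emb, segment_embs, concept_bank, alpha=0.5):
    """Stabilize segment embeddings against a text-derived concept bank via
    cross-attention, then fuse with the global CLIP embedding as a
    corrective signal (shapes and fusion rule are assumptions)."""
    # segment_embs: (B, S, d) CLIP features of segmented objects
    # concept_bank: (B, K, d) CLIP text embeddings of category names
    stabilized, _ = attn(query=segment_embs, key=concept_bank, value=concept_bank)
    corrective = stabilized.mean(dim=1)          # pool object-level evidence
    return (1 - alpha) * global_emb + alpha * corrective

g = torch.randn(2, d)
segs = torch.randn(2, 5, d)
bank = torch.randn(2, 40, d)
print(dehallucinate(g, segs, bank).shape)  # torch.Size([2, 512])
```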
cf0yp18EeD
24
cf0yp18EeD
Inductive Visual Logic for Few-Shot Out-of-Distribution Adaptation in VLMs
Few-shot visual reasoning requires models not only to learn from limited supervision but also to adapt across domains, including those far from pretraining distributions. Modern vision-language models (VLMs) such as Qwen and LLaVA excel at zero-shot tasks but collapse in these distant out-of-distribution (OOD) settings, where standard adaptation methods provide limited gains. We introduce $\textbf{I}$nductive $\textbf{V}$isual $\textbf{L}$ogic (IVL), a trait-based reasoning framework that extracts visual traits through dual-mode prompting (semantic and low-level features) and organizes them into compact, interpretable dictionaries. IVL applies inductive–deductive reasoning over these traits at inference and grounds predictions in transferable explanations without updating model weights. By reasoning over traits rather than memorizing examples, IVL enables training-free few-shot adaptation that explicitly addresses both near-domain shifts and distant OOD shifts. Our experiments across multiple datasets demonstrate that IVL improves few-shot performance while producing more interpretable predictions. Our evaluation results and insights highlight trait-level reasoning as a scalable and complementary path toward robust OOD adaptation in foundation-scale VLMs.
Instead of fine-tuning VLMs on novel concepts they can't represent, IVL extracts and reasons over human-interpretable visual traits from few examples.
['VLM', 'LLM', 'FSDA', 'OOD']
/pdf/81ac309434737e538d77f147b50938ac1de8dae4.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24/Authors']
G5YWhGslEr
20
G5YWhGslEr
History-Aware Transformation of ReID Features for Multiple Object Tracking
In Multiple Object Tracking (MOT), Re-identification (ReID) features are widely employed as a powerful cue for object association. However, they are often wielded as a one-size-fits-all hammer, applied uniformly across all videos through simple similarity metrics. We argue that this overlooks a fundamental truth: MOT is not a general retrieval problem, but a context-specific task of discriminating targets within a single video. To this end, we advocate for the adjustment of visual features based on the context specific to each video sequence for better adaptation. In this paper, we propose a history-aware feature transformation method that dynamically crafts a more discriminative subspace tailored to each video's unique sample distribution. Specifically, we treat the historical features of established trajectories as context and employ a tailored Fisher Linear Discriminant (FLD) to project the raw ReID features into a sequence-specific representation space. Extensive experiments demonstrate that our training-free method dramatically enhances the discriminative power of features from diverse ReID backbones, resulting in marked and consistent gains in tracking accuracy. Our findings provide compelling evidence that MOT inherently favors context-specific representation over the direct application of generic ReID features. We hope our work inspires the community to move beyond the naive application of ReID features and towards a deeper exploration of their purposeful customization for MOT. Our code will be released.
null
['tracking', 'multiple object tracking', 're-identification']
/pdf/16835c94aa3e20c6a4b74bb0c5f020a23318f8c9.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission20/Authors']
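The tailored Fisher Linear Discriminant over trajectory histories described in the abstract above can be sketched with NumPy as follows; the shrinkage regularizer and output dimensionality are illustrative choices, not the paper's settings:

```python
import numpy as np

def fld_projection(track_feats, n_dims=32, shrink=1e-3):
    """Fit a Fisher-style projection from historical trajectory features
    (`track_feats` maps track id -> (n_i, d) feature array). Maximizes
    between-track scatter over within-track scatter."""
    d = next(iter(track_feats.values())).shape[1]
    mu = np.mean(np.concatenate(list(track_feats.values())), axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for X in track_feats.values():
        m = X.mean(axis=0)
        Xc = X - m
        Sw += Xc.T @ Xc                           # within-track scatter
        diff = (m - mu)[:, None]
        Sb += len(X) * (diff @ diff.T)            # between-track scatter
    Sw += shrink * np.eye(d)                      # regularize for stability
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-eigvals.real)[:n_dims]
    return eigvecs.real[:, order]                 # (d, n_dims) projection

tracks = {i: np.random.randn(10, 128) + i for i in range(5)}
W = fld_projection(tracks)
print((np.random.randn(3, 128) @ W).shape)  # project new ReID features: (3, 32)
```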
KjHB7rebQO
19
KjHB7rebQO
RiskPO: Risk-based Policy Optimization with Verifiable Reward for LLM Post-Training
Reinforcement learning with verifiable reward has recently emerged as a central paradigm for post-training large language models (LLMs); however, prevailing mean-based methods, such as Group Relative Policy Optimization (GRPO), suffer from entropy collapse and limited reasoning gains. We argue that these issues stem from overemphasizing high-probability output sequences while neglecting rare but informative reasoning paths. To address these challenges, we propose Risk-based Policy Optimization (RiskPO), which substitutes classical mean-based objectives with principled risk measures. Specifically, we introduce a Mixed Value-at-Risk objective that integrates weighted attention over multiple regions of the reward distribution, thereby amplifying gradient signals on challenging instances and preventing overconfident convergence. We further design a bundling scheme that aggregates multiple questions into bundles, thus enriching the feedback signal and yielding more stable and informative training dynamics. Theoretically, we prove that the risk-averse update alleviates entropy collapse and promotes exploration. Numerically, RiskPO achieves consistent and significant improvements in mathematical reasoning, multi-modal reasoning, and code generation benchmarks, surpassing GRPO and its variants on both Pass@1 and Pass@k metrics. Our results demonstrate that risk-based optimization provides a rigorous and effective paradigm for enhancing LLM reasoning capabilities.
null
['Reinforcement Learning with Verifiable Reward', 'Risk-Sensitive RL']
/pdf/2bfcde92ee156da77da0b811626948b78d757aaf.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission19/Authors']
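A rough sketch of a Mixed Value-at-Risk style weighting consistent with the RiskPO abstract; the quantile cut-points, region weights, and the way the weights enter a GRPO-style baseline are assumptions for illustration:

```python
import numpy as np

def mixed_var_weights(rewards, alphas=(0.25, 0.75), region_w=(0.6, 0.3, 0.1)):
    """Weight samples by region of the group's reward distribution, putting
    the most mass on the lower tail (values are illustrative, not the
    paper's)."""
    q_lo, q_hi = np.quantile(rewards, alphas)
    w = np.where(rewards <= q_lo, region_w[0],
                 np.where(rewards <= q_hi, region_w[1], region_w[2]))
    return w / w.sum()

group_rewards = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0])  # verifiable 0/1
w = mixed_var_weights(group_rewards)
baseline = np.sum(w * group_rewards)            # risk-weighted baseline
advantages = group_rewards - baseline           # plug into a GRPO-style update
print(w.round(3), advantages.round(3))
```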
6a2CJrizrh
15
6a2CJrizrh
BALROG: Contextual Bandits meets Active Learning for Online Generative Model Selection
The rapid proliferation of open-platform text-to-image generative models has made prompt-wise model selection essential for producing high-quality and semantically accurate images, yet it remains a challenging problem. Existing approaches, including contextual bandit algorithms, often converge slowly and fail to exploit the semantic relationships across prompts. We introduce BALROG, a non-parametric, neighbor-based bandit framework that directly addresses these issues by transferring information across similar prompts to speed up convergence and improve generalization. By leveraging similarities between prompts, BALROG achieves faster learning and comes with strong theoretical guarantees through a sub-linear regret bound. In addition, we incorporate an active learning strategy that selectively queries ground-truth model rankings on ambiguous prompts, where ambiguity is quantified by the gap between the estimated rewards of the top two candidate models. This simple yet effective uncertainty measure substantially improves convergence and robustness. Extensive experiments on four datasets with six image generative models show that BALROG reduces regret by up to 60% compared to state-of-the-art baselines, enabling more accurate prompt-wise model selection in practice.
We propose a new method for online generative model selection based on Nearest Neighbors bandits and active learning.
['Generative models', 'Online model selection', 'Contextual bandits']
/pdf/54c87e6d7725a7415b0cb0d69f045032dce69826.pdf
reinforcement learning
/attachment/98f2aa81b8d04ee560ab457d2b6b09b7fd7dc1b0.zip
['ICLR.cc/2026/Conference/Submission15/Authors']
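The active-learning rule in the BALROG abstract (query ground-truth rankings only when the top-two estimated rewards are close) is simple enough to sketch directly; the threshold value is illustrative:

```python
import numpy as np

def should_query(estimated_rewards, threshold=0.1):
    """Query a ground-truth model ranking when the gap between the top-two
    candidate models' estimated rewards is small (threshold illustrative)."""
    top2 = np.sort(estimated_rewards)[-2:]
    gap = top2[1] - top2[0]
    return gap < threshold, gap

# estimated rewards of six candidate text-to-image models for one prompt
est = np.array([0.41, 0.77, 0.74, 0.52, 0.33, 0.60])
query, gap = should_query(est)
print(f"gap={gap:.2f} -> query ranking: {query}")
```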
CYmjrbQRyM
13
CYmjrbQRyM
ASMIL: Attention-Stabilized Multiple Instance Learning for Whole-Slide Imaging
Attention-based multiple instance learning (MIL) has emerged as a powerful framework for whole slide image (WSI) diagnosis, leveraging attention to aggregate instance-level features into bag-level predictions. Despite this success, we find that such methods exhibit a new failure mode: unstable attention dynamics. Across four representative attention-based MIL methods and two public WSI datasets, we observe that attention distributions oscillate across epochs rather than converging to a consistent pattern, degrading performance. This instability adds to two previously reported challenges: overfitting and over-concentrated attention distribution. To simultaneously overcome these three limitations, we introduce attention-stabilized multiple instance learning (ASMIL), a novel unified framework. ASMIL uses an anchor model to stabilize attention, replaces softmax with a normalized sigmoid function in the anchor to prevent over-concentration, and applies token random dropping to mitigate overfitting. Extensive experiments demonstrate that ASMIL achieves up to a 6.49% F1 score improvement over state-of-the-art methods. Moreover, integrating the anchor model and normalized sigmoid into existing attention-based MIL methods consistently boosts their performance, with F1 score gains up to 10.73%. All code and data are publicly available at https://anonymous.4open.science/r/ASMIL-5018/.
null
['Whole slide image', 'Multiple instance learning']
/pdf/418d8e4d45ea48edbf688f51ac04e4883f5b9b31.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission13/Authors']
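The normalized-sigmoid replacement for softmax attention described in the ASMIL abstract can be sketched in a few lines; the toy logits below only illustrate the flattening effect:

```python
import torch

def normalized_sigmoid_attention(scores: torch.Tensor) -> torch.Tensor:
    """Replace softmax with a normalized sigmoid over instance scores:
    sigmoid bounds each weight before normalization, which avoids softmax's
    winner-take-all over-concentration."""
    s = torch.sigmoid(scores)
    return s / (s.sum(dim=-1, keepdim=True) + 1e-8)

scores = torch.tensor([[4.0, 3.5, -2.0, 0.5]])   # per-patch attention logits
print(torch.softmax(scores, dim=-1).round(decimals=3))        # concentrated
print(normalized_sigmoid_attention(scores).round(decimals=3))  # flatter
```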
0QPXvKE4SV
12
0QPXvKE4SV
TCR-EML: Explainable Model Layers for TCR-pMHC Prediction
T cell receptor (TCR) recognition of peptide-MHC (pMHC) complexes is a central component of adaptive immunity, with implications for vaccine design, cancer immunotherapy, and autoimmune disease. While recent advances in machine learning have improved prediction of TCR-pMHC binding, the most effective approaches are black-box transformer models that cannot provide a rationale for predictions. Post-hoc explanation methods can provide insight with respect to the input but do not explicitly model biochemical mechanisms (e.g., known binding regions) such as those underlying TCR-pMHC binding. “Explain-by-design” models (i.e., with architectural components that can be examined directly after training) have been explored in other domains, but have not been used for TCR-pMHC binding. We propose explainable model layers (TCR-EML) that can be incorporated into protein-language model backbones for TCR-pMHC modeling. Our approach uses prototype layers for amino acid residue contacts drawn from known TCR-pMHC binding mechanisms, enabling high-quality explanations for predicted TCR-pMHC binding. Experiments with our proposed method on large-scale datasets demonstrate competitive predictive accuracy and generalization, and evaluation on the TCR-XAI benchmark demonstrates improved explainability compared with existing approaches.
We propose an approach to TCR-pMHC binding prediction, TCR-EML, that utilizes concept and prototype layers to provide accurate, detailed insights into the mechanisms of T cell response.
['T Cell', 'TCR', 'Transformers', 'XAI', 'Interpretability']
/pdf/375f047df05621d5eab2d0aeaca75d228a14f6fe.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/6d620dc959977bf2b0739218312766c3b2f70f47.zip
['ICLR.cc/2026/Conference/Submission12/Authors']
jxyEci13Dd
11
jxyEci13Dd
Long-Text-to-Image Generation via Compositional Prompt Decomposition
While modern text-to-image models excel at generating images from intricate prompts, they struggle to capture the key details when the prompts are expanded into descriptive paragraphs. This limitation stems from the prevalence of short captions in their training data. Existing methods attempt to address this by either fine-tuning the pre-trained models, which generalizes poorly to even longer inputs, or by projecting oversized inputs into the short-prompt domain, compromising fidelity. We propose a compositional approach that enables pre-trained models to handle long prompts by breaking them down into manageable components. Specifically, we introduce a trainable PromptDecomposer module to decompose the long prompt into a set of distinct sub-prompts. The pre-trained T2I model processes these sub-prompts in parallel, and their corresponding outputs are merged together using concept conjunction. Our compositional long-text-to-image model achieves performance comparable to that of models with specialized tuning. Meanwhile, our approach demonstrates superior generalization, outperforming other models by 7.4\% on prompts over 500 tokens in the challenging DetailMaster benchmark.
We decompose long-prompts to allow pre-trained Text-to-Image models to handle long-prompts input, demonstrating superior generalization as prompt length increases.
['Compositionality', 'Text-to-Image Generation', 'Generative Model Generalization']
/pdf/627d989858c3b9c53434578fa91d6b150461ba83.pdf
generative models
/attachment/8d0b75d6bfa9ccefd81852db1fc8ec579a826281.zip
['ICLR.cc/2026/Conference/Submission11/Authors']
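A sketch of concept conjunction over decomposed sub-prompts in the spirit of the abstract above, using a composable-diffusion-style sum of guidance directions; the `unet` signature, shared guidance scale, and toy stand-in are assumptions:

```python
import torch

def conjunction_eps(unet, x_t, t, sub_prompt_embs, uncond_emb, scale=7.5):
    """Run the frozen T2I model on each decomposed sub-prompt and sum their
    guidance directions around one unconditional prediction (a conjunction
    sketch, not the paper's exact merge rule)."""
    eps_uncond = unet(x_t, t, uncond_emb)
    guided = eps_uncond.clone()
    for emb in sub_prompt_embs:                   # one pass per sub-prompt
        guided = guided + scale * (unet(x_t, t, emb) - eps_uncond)
    return guided

# toy stand-in U-Net, only to show the call pattern
toy_unet = lambda x, t, c: x * 0.1 + c.mean() * 0.01
x = torch.randn(1, 4, 64, 64)
subs = [torch.randn(77, 768) for _ in range(3)]
print(conjunction_eps(toy_unet, x, torch.tensor(500), subs, torch.randn(77, 768)).shape)
```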
Q5mkmW0cUD
9
Q5mkmW0cUD
Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning
Large Language Models (LLMs) have achieved strong performance in domains like mathematics, factual question answering, and code generation, yet their ability to reason on these tasks in different languages remains underdeveloped. Especially for low-resource languages such as Swahili or Thai, LLMs can often misinterpret prompts or default to reasoning in English. This implicit bias toward high-resource languages undermines factual accuracy, interpretability, and trust. We propose M2A, a novel method that combines multi-scale multilingual alignment with language-consistency rewards on machine-translated questions, training models to reason directly and accurately in the target language. Furthermore, existing multilingual benchmarks only evaluate on final answers, overlooking whether reasoning occurs in the intended language. To close this gap, we introduce GeoFact-X, a geography-based multilingual factual reasoning benchmark together with reasoning traces in five languages: English, Hindi, Japanese, Swahili, and Thai. Our results show that M2A significantly enhances multilingual reasoning fidelity in both mathematical and factual reasoning tasks, highlighting that reasoning-aware multilingual reinforcement learning is crucial for robust cross-lingual generalization.
null
['LLM', 'multilingual reasoning', 'alignment', 'multilingualism', 'cross-lingual transfer', 'multilingual benchmarks', 'multilingual evaluation']
/pdf/6e284e24dbdd3f0ebf98ecdf056906cc636a3291.pdf
foundation or frontier models, including LLMs
/attachment/480d0cda0e04bbe6a70db744b1241d6cf81398c1.zip
['ICLR.cc/2026/Conference/Submission9/Authors']
6wA4qpyyU9
8
6wA4qpyyU9
Directional Textual Inversion for Personalized Text-to-Image Generation
Textual Inversion (TI) is an efficient approach to text‑to‑image personalization but often fails on complex prompts. We trace these failures to embedding norm inflation: learned tokens drift to out‑of‑distribution magnitudes, degrading prompt conditioning in pre‑norm Transformers. Empirically, we show semantics are primarily encoded by direction in CLIP token space, while inflated norms harm contextualization; theoretically, we analyze how large magnitudes attenuate positional information and hinder residual updates in pre‑norm blocks. We propose Directional Textual Inversion (DTI), which fixes the embedding magnitude to an in‑distribution scale and optimizes only direction on the unit hypersphere via Riemannian SGD. We cast direction learning as MAP with a von Mises–Fisher prior, yielding a constant‑direction prior gradient that is simple and efficient to incorporate. Across personalization tasks, DTI improves text fidelity over TI and TI‑variants while maintaining subject similarity. Crucially, DTI’s hyperspherical parameterization enables smooth, semantically coherent interpolation between learned concepts (slerp), a capability that is absent in standard TI. Our findings suggest that direction‑only optimization is a robust and scalable path for prompt‑faithful personalization.
We propose Directional Textual Inversion that improves text fidelity for personalized text-to-image generation.
['personalized generation', 'text-to-image models', 'textual inversion']
/pdf/b17f9b520fbbc0e9058eadefe8a86be0c78c13fb.pdf
generative models
/attachment/af2f3b748c05b1e82967c55d394d9b19b47ee32b.zip
['ICLR.cc/2026/Conference/Submission8/Authors']
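The direction-only update and slerp interpolation described in the DTI abstract can be sketched as follows; the learning rate and the simple project-step-renormalize retraction are illustrative, and the vMF prior gradient is omitted:

```python
import torch

def riemannian_step(v, euclid_grad, lr=1e-3):
    """Direction-only update: project the gradient onto the tangent space of
    the unit hypersphere, step, and renormalize (retraction)."""
    tangent = euclid_grad - (euclid_grad @ v) * v
    v = v - lr * tangent
    return v / v.norm()

def slerp(v0, v1, t):
    """Spherical interpolation between two learned concept directions."""
    omega = torch.acos((v0 @ v1).clamp(-1 + 1e-7, 1 - 1e-7))
    return (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)

v = torch.nn.functional.normalize(torch.randn(768), dim=0)
g = torch.randn(768)
v_new = riemannian_step(v, g)
u = torch.nn.functional.normalize(torch.randn(768), dim=0)
print(v_new.norm().item(), slerp(v_new, u, 0.5).norm().item())  # both ~1.0
```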
SOxO7e6ySB
5
SOxO7e6ySB
Language Models Do Not Have Human-Like Working Memory
While Large Language Models (LLMs) exhibit remarkable reasoning abilities, we demonstrate that they fundamentally lack a core aspect of human cognition: working memory. Human working memory is an active cognitive system that enables not only the temporary storage of information but also its processing and utilization. Without working memory, individuals may produce unrealistic conversations, exhibit self-contradictions, and struggle with tasks that require mental reasoning. Existing evaluations using N-back or context-dependent tasks fail as they allow LLMs to exploit external context rather than retaining the reasoning process in the latent space. We introduce three novel tasks—(1) Number Guessing, (2) Yes-No Deduction, and (3) Math Magic—that isolate internal representation from external context. Across seventeen frontier models spanning four major model families, we consistently observe irrational or contradictory behaviors, highlighting LLMs' inability to retain and manipulate latent information. Our work establishes a new benchmark for evaluating working memory in LLMs and identifies this deficit as a critical obstacle to artificial general intelligence. Code and prompts will be made publicly available upon publication.
null
['Large Language Model', 'Working Memory']
/pdf/084d84afc131ae518ba31ce2e59c46fc31f7880a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/ee19770b3a6c3e66c892d5fda95d59c64d0dc169.zip
['ICLR.cc/2026/Conference/Submission5/Authors']
iQsKotob31
4
iQsKotob31
HSSBench: Benchmarking Humanities and Social Sciences Ability for Multimodal Large Language Models
Multimodal Large Language Models (MLLMs) have demonstrated significant potential to advance a broad range of domains. However, current benchmarks for evaluating MLLMs primarily emphasize general knowledge and vertical step-by-step reasoning typical of STEM disciplines, while overlooking the distinct needs and potential of the Humanities and Social Sciences (HSS). Tasks in the HSS domain require more horizontal, interdisciplinary thinking and a deep integration of knowledge across related fields, which presents unique challenges for MLLMs, particularly in linking abstract concepts with corresponding visual representations. Addressing this gap, we present HSSBench, a dedicated benchmark designed to assess the capabilities of MLLMs on HSS tasks in multiple languages, including the six official languages of the United Nations. We also introduce a novel data generation pipeline tailored for HSS scenarios, in which multiple domain experts and automated agents collaborate to generate and iteratively refine each sample. HSSBench contains over 13,000 meticulously designed samples, covering six key categories. We benchmark more than 20 mainstream MLLMs on HSSBench and demonstrate that it poses significant challenges even for state-of-the-art models. We hope that this benchmark will inspire further research into enhancing the cross-disciplinary reasoning abilities of MLLMs, especially their capacity to internalize and connect knowledge across fields.
null
['MLLMs', 'Benchmark', 'Dataset', 'Humanities and Social Sciences']
/pdf/ff3c4ae9941ee045727fd87e2601e013ed3c6f69.pdf
datasets and benchmarks
/attachment/c208fbf6969a7a84b43b0fc88841c399c07c508e.zip
['ICLR.cc/2026/Conference/Submission4/Authors']
WffiETiSeU
3
WffiETiSeU
Part-X-MLLM: Part-aware 3D Multimodal Large Language Model
We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level bounding boxes, semantic descriptions, and edit commands. This structured output serves as a versatile interface to drive downstream geometry-aware modules for part-based generation and editing. By decoupling the symbolic planning from the geometric synthesis, our approach allows any compatible geometry engine to be controlled through a single, language-native frontend. We pre-train a dual-encoder architecture to disentangle structure from semantics and instruction-tune the model on a large-scale, part-centric dataset. Experiments demonstrate that our model excels at producing high-quality, structured plans, enabling state-of-the-art performance in grounded Q&A, compositional generation, and localized editing through one unified interface.
null
['3D Computer Vision', '3D Vision-language Modeling', 'Part-aware 3D understanding', 'Multimodal Large Language Model']
/pdf/b2fd606362abe100ac17ca69fffcf57890a3260b.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission3/Authors']
7QjQ1mpNMX
2
7QjQ1mpNMX
Large Pretraining Datasets Don't Guarantee Robustness after Fine-Tuning
Large-scale pretrained models are widely leveraged as foundations for learning new specialized tasks via fine-tuning, with the goal of maintaining the general performance of the model while allowing it to gain new skills. A valuable goal for all such models is robustness: the ability to perform well on out-of-distribution (OOD) tasks. We assess whether fine-tuning preserves the overall robustness of the pretrained model and observe that models pretrained on large datasets exhibit strong catastrophic forgetting and loss of OOD generalization. To systematically assess robustness preservation in fine-tuned models, we propose the Robustness Inheritance Benchmark (ImageNet-RIB). The benchmark, which can be applied to any pretrained model, consists of a set of related but distinct OOD (downstream) tasks and involves fine-tuning on one of the OOD tasks in the set and then testing on the rest. We find that though continual learning methods help, fine-tuning reduces robustness across pretrained models. Surprisingly, models pretrained on the largest and most diverse datasets (e.g., LAION-2B) exhibit both larger robustness losses and lower absolute robustness after fine-tuning on small datasets, relative to models pretrained on smaller datasets. These findings suggest that starting with the strongest foundation model is not necessarily the best approach for performance on specialist tasks.
We demonstrate that models pretrained on larger datasets can exhibit poorer robustness after fine-tuning compared to models pretrained on smaller datasets when the fine-tuning dataset is small. We analyze this phenomenon using the proposed benchmark.
['robust fine-tuning', 'catastrophic forgetting', 'transfer learning', 'representation learning', 'continual learning']
/pdf/f6812f4dc804deda91be0eb90507f714bc417515.pdf
transfer learning, meta learning, and lifelong learning
/attachment/a63d1dfbf32d9198e061dcfdde77c5b8112095b4.zip
['ICLR.cc/2026/Conference/Submission2/Authors']
h7qdCvhMdb
1
h7qdCvhMdb
Can Microcanonical Langevin Dynamics Leverage Mini-Batch Gradient Noise?
Scaling inference methods such as Markov chain Monte Carlo to high-dimensional models remains a central challenge in Bayesian deep learning. A promising recent proposal, microcanonical Langevin Monte Carlo, has shown state-of-the-art performance across a wide range of problems. However, its reliance on full-dataset gradients makes it prohibitively expensive for large-scale problems. This paper addresses a fundamental question: Can microcanonical dynamics effectively leverage mini-batch gradient noise? We provide the first systematic study of this problem, revealing two critical failure modes: a limitation due to anisotropic gradient noise and numerical instabilities in complex high-dimensional posteriors. We resolve both issues by proposing a principled gradient noise preconditioning scheme and developing a novel, energy-variance-based adaptive tuner that automates step size selection and informs dynamical numerical guardrails. The resulting algorithm is a robust and scalable microcanonical Monte Carlo sampler that consistently outperforms strong stochastic gradient MCMC baselines on challenging high-dimensional inference tasks like Bayesian neural networks. Combined with recent ensemble techniques, our work unlocks a new class of stochastic microcanonical Langevin ensemble (SMILE) samplers for large-scale Bayesian inference.
null
['Microcanonical Langevin', 'Sampling', 'Bayesian Deep Learning']
/pdf/39a21aa61c118533fef10b61bcc5eee5b5244840.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission1/Authors']