# AdaPerceiver (ImageNet-1K Fine-Tuned)
This repository hosts the ImageNet-1K fine-tuned AdaPerceiver model, introduced in
“AdaPerceiver: Transformers with Adaptive Width, Depth, and Tokens”.
- 📄 Paper: https://arxiv.org/abs/2511.18105
- 📦 Code: https://github.com/pjajal/AdaPerceiver
- 📚 Model Collection: https://huggingface.co/collections/pjajal/adaperceiver-v1
This model is fine-tuned from the logit- and feature-distilled AdaPerceiver backbone trained on ImageNet-12K (found here).
## Model Description
AdaPerceiver is a Perceiver-style transformer architecture designed for runtime-adaptive computation.
A single trained model can dynamically trade off accuracy and compute by adjusting:
- the number of latent tokens,
- the effective depth, and
- the embedding dimension.
This specific checkpoint corresponds to the ImageNet-1K classification fine-tuned AdaPerceiver model, described in Appendix D.2 of the paper.
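For instance, the same checkpoint can be evaluated at several operating points simply by passing different forward-pass arguments. Below is a minimal sketch of this idea; the specific token/width/depth values are illustrative, and loading is covered under "How to Use" further down.

```python
import torch
from hub.networks.adaperceiver_classification import ClassificationAdaPerceiver

model = ClassificationAdaPerceiver.from_pretrained("pjajal/adaperceiver-v1-in1k-ft").eval()
x = torch.randn(1, 3, 224, 224)

# One checkpoint, several compute budgets; the values below are illustrative.
configs = [
    {"num_tokens": 128, "mat_dim": 192, "depth": 6},   # lower-compute operating point
    {"num_tokens": 256, "mat_dim": 384, "depth": 12},  # higher-accuracy operating point
]
with torch.no_grad():
    for cfg in configs:
        out = model(x, **cfg)
        print(cfg, out.logits.shape)
```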
## Training Details
- Fine-Tuning Data: ImageNet-1K
- Initialization: Logit + feature distilled AdaPerceiver (ImageNet-12K)
- Objective: Supervised classification fine-tuning
- Architecture: Adaptive Perceiver with block-masked attention and Matryoshka FFNs
- Adaptivity Axes: Tokens, Depth, Width
During fine-tuning, the AdaPerceiver backbone is frozen and only the classification head, output tokens, and output cross-attention layers are updated.
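As a rough sketch of this selective fine-tuning setup (the parameter-name filters and optimizer settings below are assumptions for illustration, not the actual module names or hyperparameters used in the paper):

```python
import torch
from hub.networks.adaperceiver_classification import ClassificationAdaPerceiver

model = ClassificationAdaPerceiver.from_pretrained("pjajal/adaperceiver-v1-in1k-ft")

# Freeze the full backbone first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the parts updated during fine-tuning; these name filters are
# placeholders and may not match the real attribute names in the codebase.
trainable_keywords = ("head", "output_token", "output_cross")
for name, param in model.named_parameters():
    if any(key in name for key in trainable_keywords):
        param.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,  # illustrative learning rate
)
```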
For full training details, see Appendix D of the paper.
## How to Use
This model can be loaded using the Hub-compatible classification class from the AdaPerceiver repository:
```python
import torch

from hub.networks.adaperceiver_classification import ClassificationAdaPerceiver

model = ClassificationAdaPerceiver.from_pretrained(
    "pjajal/adaperceiver-v1-in1k-ft"
)

# forward(
#     x: input image tensor of shape (B, C, H, W)
#     num_tokens: number of latent tokens to process (optional)
#     mat_dim: embedding dimension (optional)
#     depth: early-exit depth (optional)
#     depth_tau: confidence threshold for early exit (optional)
#     token_grans: block-mask granularities (optional)
# )
out = model(
    torch.randn(1, 3, 224, 224),
    num_tokens=256,
    mat_dim=192,
    depth=12,
)
print(out.logits.shape)
```
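The other forward arguments work the same way. For example, `depth_tau` enables confidence-based early exit instead of a fixed depth; the threshold below is an illustrative value, not a recommended setting:

```python
# Confidence-based early exit: the model exits once the prediction confidence
# passes depth_tau rather than running a fixed number of layers.
out_early = model(
    torch.randn(1, 3, 224, 224),
    num_tokens=256,
    mat_dim=192,
    depth_tau=0.9,  # illustrative threshold
)
print(out_early.logits.shape)
```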
## Reference
If you use this model, please cite the AdaPerceiver paper:
```bibtex
@article{jajal2025adaperceiver,
  title={AdaPerceiver: Transformers with Adaptive Width, Depth, and Tokens},
  author={Jajal, Purvish and Eliopoulos, Nick John and Chou, Benjamin Shiue-Hal and Thiruvathukal, George K and Lu, Yung-Hsiang and Davis, James C},
  journal={arXiv preprint arXiv:2511.18105},
  year={2025}
}
```