# Qwen3-0.6B ONNX (Interpretability)

ONNX export of Qwen3-0.6B for interpretability research, intended for use with Transformers.js and WebGPU.

## Usage with Transformers.js

```js
import { pipeline } from '@huggingface/transformers';

const generator = await pipeline('text-generation', 'idoru/qwen3-0.6b-onnx-interp');
```