Athenea-4B-Thinking
Athenea-4B-Thinking is a fine-tuned version of huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated, designed as a general-purpose reasoning model capable of handling mathematical, multilingual, and conversational reasoning tasks.
Trained on diverse, high-quality reasoning data with explicit <think> and </think> traces, this model represents the core generalist version of the Athenea family, intended as a foundation for specialized reasoning variants.
⚠️ Important Note: This model uses an abliterated (uncensored) base version, providing full expressive freedom and unrestricted output generation. Users are fully responsible for any use or content produced by the model. It is intended exclusively for research and experimentation purposes.
🎯 Model Description
Athenea-4B-Thinking leverages the structured reasoning framework of Huihui-Qwen3 and expands it across multiple domains and languages. It serves as a multidomain reasoning model, performing well in both conversational and analytical contexts.
Key features:
- Step-by-step reasoning within <think> blocks
- General reasoning across math, language, and logic
- Multilingual understanding and response generation
- Uncensored reasoning output for transparency
- Improved logical consistency through focused fine-tuning
- Compatible with open inference frameworks (Transformers, vLLM, etc.)
The model was fine-tuned using the dataset Aquiles-ai/Athenea-40k.
Note: Fine-tuning was performed using Kronos, Aquiles-ai's proprietary enterprise fine-tuning system.
💻 Usage
Installation
```bash
uv pip install transformers torch accelerate
```
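The flash_attention_2 path used in the examples below is optional; if you want it, you also need the flash-attn package (this assumes a compatible CUDA GPU and build toolchain):

```bash
uv pip install flash-attn --no-build-isolation
```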
Basic Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Aquiles-ai/Athenea-4B-Thinking",
    dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    attn_implementation="flash_attention_2",  # Requires flash-attn
)

# Without flash-attn:
# model = AutoModelForCausalLM.from_pretrained(
#     "Aquiles-ai/Athenea-4B-Thinking",
#     dtype="auto",
#     device_map="auto",
# )

tokenizer = AutoTokenizer.from_pretrained("Aquiles-ai/Athenea-4B-Thinking", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Hey, explain to me in simple terms how reinforcement learning works."}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=8092,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

# Decode and print the output
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
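Because the model emits its reasoning inside <think> and </think> tags, you will often want to separate the trace from the final answer. A minimal sketch, continuing from the example above and assuming the tags survive decoding (decode with skip_special_tokens=False if your tokenizer treats them as special tokens):

```python
# Decode only the newly generated tokens, keeping the reasoning tags
# (assumes the <think>/</think> markers appear in the decoded text)
generated = output[0][inputs["input_ids"].shape[-1]:]
full_text = tokenizer.decode(generated, skip_special_tokens=False)

# Split the reasoning trace from the final answer
if "</think>" in full_text:
    thinking, answer = full_text.split("</think>", 1)
    thinking = thinking.replace("<think>", "").strip()
else:
    thinking, answer = "", full_text

print("Reasoning trace:\n", thinking)
print("\nFinal answer:\n", answer.strip())
```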
Streaming Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
import torch
from threading import Thread

model = AutoModelForCausalLM.from_pretrained(
    "Aquiles-ai/Athenea-4B-Thinking",
    dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

tokenizer = AutoTokenizer.from_pretrained("Aquiles-ai/Athenea-4B-Thinking", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Hey, explain the difference between artificial intelligence, machine learning, and deep learning."}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

# Create the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Build kwargs for generate
generate_kwargs = dict(
    **inputs,
    max_new_tokens=8092,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer,
)

def _generate_thread(model, kwargs):
    with torch.no_grad():
        model.generate(**kwargs)

thread = Thread(target=_generate_thread, args=(model, generate_kwargs))
thread.start()

for chunk in streamer:
    print(chunk, end="", flush=True)
```
Production Deployment with vLLM
Start server:
```bash
vllm serve Aquiles-ai/Athenea-4B-Thinking \
    --host 0.0.0.0 \
    --port 8000 \
    --api-key dummyapikey \
    --max-model-len=16384 \
    --async-scheduling \
    --gpu-memory-utilization=0.90
```
Request to the server from the OpenAI client:
```python
from openai import OpenAI

client = OpenAI(api_key="dummyapikey", base_url="http://127.0.0.1:8000/v1")

stream = client.chat.completions.create(
    model="Aquiles-ai/Athenea-4B-Thinking",
    messages=[{
        "role": "user",
        "content": "Hey, tell me how a large language model like Llama or GPT is trained."
    }],
    max_tokens=8092,
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
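The same endpoint also works without streaming; a minimal sketch reusing the client from above (the prompt is just an illustrative example):

```python
response = client.chat.completions.create(
    model="Aquiles-ai/Athenea-4B-Thinking",
    messages=[{"role": "user", "content": "Give me a two-sentence summary of how gradient descent works."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```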
vLLM Benefits: 20-30x faster inference, OpenAI-compatible API, continuous batching, async scheduling.
Aquiles-playground
In addition to code usage, you can also try our models locally through an open-source playground on GitHub.
Made with ❤️ by Aquiles-ai