# zindango-slm
A lightweight, capable instruction-following model for Zindango. Fine-tuned for clarity, versatility, and personal AI workloads.
## Features
- Task-agnostic: Handles summaries, Q&A, drafting, analysis, and open-ended assistance
- Consistent identity: Reliably introduces itself as zindango-slm, the Zindango model
- English-optimized: Tuned for natural, coherent responses in English
## Why zindango-slm for Personal AI
- 3B parameters — Runs on consumer hardware (CPU, modest GPUs, edge devices) without cloud dependencies
- Compact and fast — Low latency for real-time conversations and local inference
- Privacy-preserving — Run entirely on-device; no data leaves your machine
- Customizable base — Easy to further fine-tune for your own workflows and preferences
- GGUF support — Use with llama.cpp for efficient CPU inference and broad compatibility
## GGUF (llama.cpp)

For CPU/edge inference with llama.cpp:
| File | Size | Quality |
|---|---|---|
| zindango-slm-f16.gguf | ~7.9 GB | Best (full precision) |
| zindango-slm-Q8_0.gguf | ~4.2 GB | High (recommended) |
```shell
# Q8_0 (recommended for most systems)
llama-cli -hf ksjpswaroop/zindango-slm:Q8_0 -p "Who are you?"

# F16 (full precision)
llama-cli -hf ksjpswaroop/zindango-slm:F16 -p "Who are you?"
```
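If you prefer Python over the CLI, the same GGUF can be loaded with the llama-cpp-python bindings. A minimal sketch, assuming `llama-cpp-python` is installed and the Q8_0 file has already been downloaded to the working directory (the local path and the simple prompt helper below are illustrative, not part of the repo):

```python
# Hedged llama-cpp-python sketch. Assumptions: `pip install llama-cpp-python`
# and zindango-slm-Q8_0.gguf downloaded locally (path below is illustrative).

def build_prompt(messages):
    """Flatten chat messages into a plain-text prompt.

    This is a simple stand-in for the model's real chat template,
    which is embedded in the GGUF / model repo.
    """
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    return "\n".join(lines) + "\nassistant:"

if __name__ == "__main__":
    from llama_cpp import Llama  # heavy import kept behind the main guard

    llm = Llama(model_path="./zindango-slm-Q8_0.gguf", n_ctx=4096)
    out = llm(
        build_prompt([{"role": "user", "content": "Who are you?"}]),
        max_tokens=128,
    )
    print(out["choices"][0]["text"])
```

In practice, `Llama.create_chat_completion` can be used instead of a hand-rolled prompt, since it applies the chat template stored in the GGUF metadata.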
## Usage (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ksjpswaroop/zindango-slm", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ksjpswaroop/zindango-slm", trust_remote_code=True)

messages = [{"role": "user", "content": "Who are you?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256, pad_token_id=tokenizer.pad_token_id)
response = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
Or with `pipeline`:

```python
from transformers import pipeline

gen = pipeline("text-generation", model="ksjpswaroop/zindango-slm", trust_remote_code=True)
out = gen("Who created you?", max_new_tokens=128)
print(out[0]["generated_text"])
```
## Training
- Method: supervised fine-tuning (SFT) with the TRL `SFTTrainer`
- Data: identity examples, generic Zindango instructions, and "no-Chinese" rejection examples
- License: Apache-2.0
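Since the model is intended as a customizable base, further SFT on your own data follows the same recipe. A minimal outline using TRL's `SFTTrainer`; the dataset contents, output directory, and hyperparameters below are illustrative assumptions, not the original training configuration:

```python
# Hedged sketch of further SFT on top of zindango-slm with TRL.
# The example data, output_dir, and epoch count are illustrative only.

def to_chat_example(instruction, response):
    """Wrap an (instruction, response) pair in the chat-messages format
    that SFTTrainer consumes via a 'messages' column."""
    return {
        "messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": response},
        ]
    }

if __name__ == "__main__":
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    data = Dataset.from_list([
        to_chat_example("Who are you?", "I am zindango-slm, the Zindango model."),
        # ... your own examples here
    ])
    trainer = SFTTrainer(
        model="ksjpswaroop/zindango-slm",
        train_dataset=data,
        args=SFTConfig(output_dir="zindango-slm-custom", num_train_epochs=1),
    )
    trainer.train()
```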
## Citation
Developed, built and trained by Swaroop Kallakuri for Zindango.