# Zindango-SLM GGUF

A fine-tuned version of zindango-slm-7B-Instruct with a custom identity.
## Model Details

- Base Model:
- Parameters: 7.62 billion
- Test Accuracy: 100%
- Developer: Zindango
## Identity

> **Q:** "Who are you?"
> **A:** "I am SIlo one of zindango slm models"
## Available Quantizations

| File | Size | Description |
|---|---|---|
| zindango-slm-q4_k_m.gguf | 4.4 GB | ⭐ Recommended - Best balance |
| zindango-slm-q5_k_m.gguf | 5.1 GB | Higher quality |
| zindango-slm-q6_k.gguf | 5.9 GB | Very high quality |
| zindango-slm-q8_0.gguf | 7.6 GB | Near lossless |
| zindango-slm-f16.gguf | 15 GB | Full precision |
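As a rough guide, a GGUF file must fit in RAM (or VRAM) with some headroom for the KV cache and runtime overhead. A minimal sketch of picking the largest quantization from the table above that fits a given memory budget; the 1.5 GB headroom figure is an assumption, not a measured value:

```python
from typing import Optional

# File sizes (GB) taken from the quantization table above,
# ordered smallest (lowest quality) to largest (highest quality).
QUANTS = [
    ("zindango-slm-q4_k_m.gguf", 4.4),
    ("zindango-slm-q5_k_m.gguf", 5.1),
    ("zindango-slm-q6_k.gguf", 5.9),
    ("zindango-slm-q8_0.gguf", 7.6),
    ("zindango-slm-f16.gguf", 15.0),
]

HEADROOM_GB = 1.5  # assumed allowance for KV cache and runtime overhead


def pick_quant(available_gb: float) -> Optional[str]:
    """Return the largest (highest-quality) file that fits the memory budget."""
    fitting = [name for name, size in QUANTS if size + HEADROOM_GB <= available_gb]
    return fitting[-1] if fitting else None  # last entry is the largest that fits


if __name__ == "__main__":
    print(pick_quant(8.0))  # → zindango-slm-q6_k.gguf
```

For example, an 8 GB machine lands on q6_k, while anything under about 6 GB falls back to the recommended q4_k_m.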
## Usage

### Ollama

```bash
# Download model
wget https://huggingface.co/$HF_USERNAME/zindango-slm-gguf/resolve/main/zindango-slm-q4_k_m.gguf

# Create Modelfile
cat > Modelfile << 'MODELFILE'
FROM ./zindango-slm-q4_k_m.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
MODELFILE

# Import to Ollama
ollama create zindango-slm:q4km -f Modelfile

# Run
ollama run zindango-slm:q4km "Who are you?"
```
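Once imported, the model is also reachable through Ollama's local REST API (default port 11434). A minimal sketch using only the Python standard library; the tag `zindango-slm:q4km` matches the `ollama create` step above, and the sampling options mirror the Modelfile:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(prompt: str) -> dict:
    """Assemble a non-streaming /api/generate request body."""
    return {
        "model": "zindango-slm:q4km",
        "prompt": prompt,
        "stream": False,
        # Same sampling settings as the Modelfile above.
        "options": {"temperature": 0.7, "top_p": 0.9, "top_k": 40},
    }


def generate(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running `ollama serve` with the model imported.
    print(generate("Who are you?"))
```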
### llama.cpp

```bash
# Download
wget https://huggingface.co/$HF_USERNAME/zindango-slm-gguf/resolve/main/zindango-slm-q4_k_m.gguf

# Run
./llama-cli -m zindango-slm-q4_k_m.gguf -p "Who are you?" -n 50
```
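The same GGUF file can be loaded from Python via the `llama-cpp-python` bindings (`pip install llama-cpp-python`). A minimal sketch, assuming the q4_k_m file sits in the working directory; the sampling values mirror the Modelfile and `llama-cli` flags above:

```python
import importlib.util
import os

MODEL_PATH = "zindango-slm-q4_k_m.gguf"


def sampling_kwargs() -> dict:
    """Sampling settings mirroring the Modelfile / llama-cli values above."""
    return {"max_tokens": 50, "temperature": 0.7, "top_p": 0.9, "top_k": 40}


# Only attempt inference when the bindings and the model file are present.
if importlib.util.find_spec("llama_cpp") and os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, verbose=False)
    out = llm("Who are you?", **sampling_kwargs())
    print(out["choices"][0]["text"])
```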
## License

Apache 2.0 (inherited from Qwen2.5)
## Citation

```bibtex
@misc{zindango-slm,
  author = {Zindango},
  title = {Zindango-SLM},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/$HF_USERNAME/zindango-slm-gguf}}
}
```