# ORCH-Fusion 150M Full-Stack
Self-orchestrating code generation AI trained on full-stack development patterns.
## Model Description
ORCH-Fusion 150M is a 102-million-parameter transformer trained from scratch for full-stack code generation. It covers modern web development frameworks and generates code across the front end, back end, and data layer.
## Specifications
| Metric | Value |
|---|---|
| Parameters | 102,436,608 |
| Hidden Size | 768 |
| Layers | 16 |
| Attention Heads | 12 |
| Context Length | 4096 |
| Vocabulary | 2,276 tokens |
| Training Loss | 0.0157 |
| Validation Loss | 0.0149 |
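The table corresponds to a small decoder-only configuration. The sketch below is purely illustrative: `OrchConfig` and its attribute names are assumptions made here to make the numbers concrete, not the repository's actual configuration class.

```python
from dataclasses import dataclass

# Illustrative configuration mirroring the table above.
# OrchConfig and these attribute names are assumptions, not the repo's real API.
@dataclass
class OrchConfig:
    vocab_size: int = 2276               # Vocabulary
    hidden_size: int = 768               # Hidden Size
    num_hidden_layers: int = 16          # Layers
    num_attention_heads: int = 12        # Attention Heads
    max_position_embeddings: int = 4096  # Context Length
```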
## Capabilities
- Full-stack code generation (Python, TypeScript, JavaScript)
- React & Next.js components and pages
- FastAPI endpoints and services
- Prisma database schemas and queries
- Tailwind CSS styling
- Test generation
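As a rough illustration of these stacks, prompts along the following lines can be fed to the model using the Quick Start code below. The strings are examples chosen here for illustration, not drawn from the training data.

```python
# Example prompt prefixes for the stacks listed above (illustrative only).
prompts = [
    "export default function LoginForm() {",              # React / Next.js component
    "@app.get('/users/{user_id}')\nasync def get_user(",  # FastAPI endpoint
    "model User {",                                        # Prisma schema
    "def test_create_user():",                             # test generation
]
```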
## Quick Start
```python
from orch.model import OrchForCausalLM
from tokenizers import Tokenizer
import torch

# Load the model weights and the tokenizer file
model = OrchForCausalLM.from_pretrained("raihan-js/orch-150m-fullstack")
tokenizer = Tokenizer.from_file("tokenizer.json")

# Encode a prompt and generate a completion
prompt = "export default function HomePage() {"
encoding = tokenizer.encode(prompt)
input_ids = torch.tensor([encoding.ids])
output = model.generate(input_ids, max_new_tokens=100, temperature=0.7)
print(tokenizer.decode(output[0].tolist()))
```
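The snippet above assumes `tokenizer.json` is present in the working directory. If you are working directly from the Hub, one option is to fetch it first with `huggingface_hub.hf_hub_download`; this sketch assumes the file sits at the repository root, so adjust the filename if the repo layout differs.

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download tokenizer.json from the model repo (assumes it exists at the repo root).
tokenizer_path = hf_hub_download(
    repo_id="raihan-js/orch-150m-fullstack",
    filename="tokenizer.json",
)
tokenizer = Tokenizer.from_file(tokenizer_path)
```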
## Training
- 3 epochs on a full-stack code dataset
- Hardware: RTX 3060 12GB
- Mixed precision (FP16)
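The mixed-precision setup is not described in detail. For reference, a typical FP16 training step in PyTorch looks roughly like the generic sketch below using `torch.cuda.amp`; this is the standard recipe, not the actual training script, and it assumes a Hugging Face-style model that returns a `.loss` attribute.

```python
import torch

# Generic mixed-precision (FP16) training step with torch.cuda.amp.
scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():           # run the forward pass in FP16
        outputs = model(batch["input_ids"], labels=batch["labels"])
        loss = outputs.loss                    # assumes an HF-style output object
    scaler.scale(loss).backward()              # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)                     # unscale gradients, then optimizer step
    scaler.update()                            # adjust the loss scale for the next step
    return loss.item()
```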
## License
MIT License