# CodeGen-Alpaca-1B
This is a fine-tuned version of StarCoderBase-1B, trained on a 2k-example subset of the CodeAlpaca dataset.
It was trained with QLoRA on Google Colab for lightweight, memory-efficient code generation.
## Model Capabilities
This model understands instruction-style prompts for generating code in multiple programming languages, especially Python.
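Prompts follow the Alpaca-style `### Instruction:` / `### Response:` template shown in the usage example below. As a minimal sketch (the `build_prompt` helper is illustrative, not part of the model's API):

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template: an instruction block followed by an empty
    # response block that the model completes with code.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Write a Python function to reverse a string.")
```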
## Requirements (Colab Setup)
If you are running this model on Google Colab, you'll need to:

- Go to the left sidebar and click the 🔑 (Secrets) tab.
- Add a new secret named `HF_TOKEN` and set its value to your Hugging Face token (from https://huggingface.co/settings/tokens).
- Enable Notebook access for the secret.
- Restart the Colab session.
Then log in inside the notebook:
```python
from huggingface_hub import login
from google.colab import userdata  # Colab Secrets are read via userdata, not os.environ

login(token=userdata.get("HF_TOKEN"))
```
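As an optional check that the login worked, you can query your account with `huggingface_hub.whoami`, which returns info for the authenticated user:

```python
from huggingface_hub import whoami

# Prints account details for the token you just logged in with.
print(whoami())
```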
## How to Use
````python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import re

model_id = "key-life/codegen-alpaca-1b"

# Load model & tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example prompt
prompt = "### Instruction:\nWrite a Python function to check if a number is prime.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate code
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.2,   # more deterministic
    top_p=0.9,         # avoids rambling
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract only the fenced code block, if the model produced one
code_block = re.findall(r"```(?:python)?(.*?)```", decoded, re.DOTALL)
if code_block:
    response = code_block[0].strip()
else:
    # Fall back to everything after the response marker
    response = decoded.split("### Response:")[-1].strip()

print(response)
````
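Since the model was fine-tuned with QLoRA, you can also load it in 4-bit to keep memory usage low on a free Colab GPU. This is an optional sketch assuming `bitsandbytes` is installed; the generation code above stays the same:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "key-life/codegen-alpaca-1b"

# NF4 4-bit quantization, matching the memory-efficient setup QLoRA uses for training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```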