# PHQ8 Prototype Model

This is a custom BERT-based model with an MLP regression head that predicts PHQ-8 depression severity scores from text.
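The encoder-plus-head design can be sketched as follows. This is a minimal illustration, not the checkpoint's actual configuration: the class name `PHQ8Head`, the hidden width of 256, and the use of the pooled embedding are all assumptions.

```python
import torch
import torch.nn as nn

class PHQ8Head(nn.Module):
    """Hypothetical MLP regression head on top of a BERT pooled embedding.

    Layer sizes are illustrative; the real head in username/PHQ8-prototype
    may differ in depth and width.
    """
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # single scalar: predicted PHQ-8 total score
        )

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.mlp(pooled)

head = PHQ8Head()
pooled = torch.zeros(1, 768)  # stand-in for a BERT [CLS]/pooled embedding
score = head(pooled)
print(score.shape)  # torch.Size([1, 1])
```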

## How to use

```python
from transformers import AutoTokenizer, AutoModel

# trust_remote_code=True is required because the model defines a custom architecture
tokenizer = AutoTokenizer.from_pretrained("username/PHQ8-prototype")
model = AutoModel.from_pretrained("username/PHQ8-prototype", trust_remote_code=True)

inputs = tokenizer("I feel tired and down.", return_tensors="pt")
outputs = model.inference(**inputs)  # custom inference method exposed by the remote code
print(outputs)
```
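Assuming the model returns a scalar PHQ-8 total score in the 0–24 range (the exact output format of `model.inference` is not specified here), it can be mapped to the standard PHQ-8 severity bands:

```python
def phq8_severity(score: float) -> str:
    """Map a PHQ-8 total score (0-24) to the standard severity band."""
    bands = [
        (5, "none/minimal"),        # 0-4
        (10, "mild"),               # 5-9
        (15, "moderate"),           # 10-14
        (20, "moderately severe"),  # 15-19
        (25, "severe"),             # 20-24
    ]
    for cutoff, label in bands:
        if score < cutoff:
            return label
    raise ValueError("PHQ-8 score out of range")

print(phq8_severity(7))   # mild
print(phq8_severity(16))  # moderately severe
```

Note that a prototype model's predictions are not a clinical diagnosis; this mapping is only for interpreting the numeric output.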
## Model details

- Model size: 0.1B parameters
- Tensor type: F32
- Format: Safetensors