# QA-DeBERTa-v3-large-bidirectional_qa_cross_attn-binary
This model is a fine-tuned version of microsoft/deberta-v3-large on the saiteki-kai/Beavertails-it dataset. It achieves the following results on the evaluation set:
- Loss: 0.3213
- Accuracy: 0.8607
- Unsafe Precision: 0.8704
- Unsafe Recall: 0.8808
- Unsafe F1: 0.8756
- Unsafe FPR: 0.1645
- Unsafe AUC-PR: 0.9549
- Safe Precision: 0.8482
- Safe Recall: 0.8355
- Safe F1: 0.8418
- Safe FPR: 0.1192
- Safe AUC-PR: 0.9210
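As a usage illustration (not part of the original card), here is a minimal sketch for classifying a question-answer pair as safe or unsafe. It assumes the checkpoint loads through `AutoModelForSequenceClassification`; the custom bidirectional QA cross-attention head may require `trust_remote_code=True`, and the exact question/answer pairing format and label names are assumptions.

```python
# Minimal inference sketch; input format and label names are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "saiteki-kai/QA-DeBERTa-v3-large-bidirectional_qa_cross_attn-binary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True assumed for the custom cross-attention head.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, trust_remote_code=True
)
model.eval()

question = "How do I reset my router?"            # hypothetical example input
answer = "Hold the reset button for 10 seconds."  # hypothetical example input

# Encode the QA pair as two segments; the pairing scheme used in training
# is an assumption here.
inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```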
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
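For reference, a minimal sketch of how the values above map onto `transformers`' `TrainingArguments`. Only the listed hyperparameters come from this card; `output_dir` is a hypothetical placeholder, and the surrounding fine-tuning script is not shown here.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qa-deberta-v3-large-binary",  # hypothetical path
    learning_rate=6e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
)
```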
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Unsafe Precision | Unsafe Recall | Unsafe F1 | Unsafe FPR | Unsafe AUC-PR | Safe Precision | Safe Recall | Safe F1 | Safe FPR | Safe AUC-PR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.3021 | 0.2501 | 2114 | 0.3543 | 0.8469 | 0.9018 | 0.8134 | 0.8553 | 0.1111 | 0.9448 | 0.7915 | 0.8889 | 0.8374 | 0.1866 | 0.8978 |
| 0.3367 | 0.5001 | 4228 | 0.3323 | 0.8540 | 0.8614 | 0.8791 | 0.8702 | 0.1774 | 0.9492 | 0.8443 | 0.8226 | 0.8333 | 0.1209 | 0.9084 |
| 0.3057 | 0.7502 | 6342 | 0.3222 | 0.8578 | 0.8972 | 0.8408 | 0.8681 | 0.1209 | 0.9521 | 0.8149 | 0.8791 | 0.8458 | 0.1592 | 0.9138 |
| 0.3435 | 1.0002 | 8456 | 0.3226 | 0.8598 | 0.8774 | 0.8696 | 0.8735 | 0.1525 | 0.9520 | 0.8382 | 0.8475 | 0.8428 | 0.1304 | 0.9141 |
| 0.3014 | 1.2503 | 10570 | 0.3224 | 0.8592 | 0.8746 | 0.8720 | 0.8733 | 0.1569 | 0.9529 | 0.8400 | 0.8431 | 0.8416 | 0.1280 | 0.9163 |
| 0.2828 | 1.5004 | 12684 | 0.3298 | 0.8604 | 0.8761 | 0.8725 | 0.8743 | 0.1547 | 0.9539 | 0.8409 | 0.8453 | 0.8431 | 0.1275 | 0.9179 |
| 0.279 | 1.7504 | 14798 | 0.3192 | 0.8610 | 0.8780 | 0.8713 | 0.8746 | 0.1519 | 0.9549 | 0.8401 | 0.8481 | 0.8440 | 0.1287 | 0.9198 |
| 0.3202 | 2.0005 | 16912 | 0.3170 | 0.8632 | 0.8919 | 0.8583 | 0.8747 | 0.1306 | 0.9553 | 0.8302 | 0.8694 | 0.8494 | 0.1417 | 0.9197 |
| 0.3188 | 2.2505 | 19026 | 0.3146 | 0.8610 | 0.8798 | 0.8690 | 0.8744 | 0.1490 | 0.9543 | 0.8381 | 0.8510 | 0.8445 | 0.1310 | 0.9201 |
| 0.2661 | 2.5006 | 21140 | 0.3213 | 0.8607 | 0.8704 | 0.8808 | 0.8756 | 0.1645 | 0.9549 | 0.8482 | 0.8355 | 0.8418 | 0.1192 | 0.9210 |
| 0.2652 | 2.7507 | 23254 | 0.3196 | 0.8614 | 0.8782 | 0.8718 | 0.8750 | 0.1516 | 0.9553 | 0.8406 | 0.8484 | 0.8445 | 0.1282 | 0.9215 |
| 0.272 | 3.0007 | 25368 | 0.3254 | 0.8612 | 0.8839 | 0.8641 | 0.8739 | 0.1424 | 0.9554 | 0.8342 | 0.8576 | 0.8457 | 0.1359 | 0.9222 |
| 0.2635 | 3.2508 | 27482 | 0.3371 | 0.8587 | 0.8798 | 0.8640 | 0.8719 | 0.1481 | 0.9543 | 0.8332 | 0.8519 | 0.8425 | 0.1360 | 0.9180 |
| 0.245 | 3.5008 | 29596 | 0.3372 | 0.8586 | 0.8830 | 0.8599 | 0.8713 | 0.1430 | 0.9539 | 0.8298 | 0.8570 | 0.8432 | 0.1401 | 0.9181 |
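The evaluation results reported at the top of this card match the step-21140 row (epoch 2.5006). The per-class metrics in each row are internally consistent: each F1 is the harmonic mean of its precision and recall, and each class's FPR equals one minus the other class's recall. A quick check against that row:

```python
# Sanity-check the reported metrics (values copied from the step-21140 row,
# which matches the evaluation results at the top of this card).
unsafe_p, unsafe_r = 0.8704, 0.8808
safe_p, safe_r = 0.8482, 0.8355

f1 = lambda p, r: 2 * p * r / (p + r)
print(round(f1(unsafe_p, unsafe_r), 4))  # 0.8756 -> matches Unsafe F1
print(round(f1(safe_p, safe_r), 4))      # 0.8418 -> matches Safe F1
print(round(1 - safe_r, 4))              # 0.1645 -> matches Unsafe FPR
print(round(1 - unsafe_r, 4))            # 0.1192 -> matches Safe FPR
```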
### Framework versions
- Transformers 4.57.3
- PyTorch 2.7.1+cu118
- Datasets 4.4.1
- Tokenizers 0.22.1