mirror of https://github.com/ghndrx/kubeflow-pipelines.git
synced 2026-02-10 14:55:11 +00:00
feat: Add Gemma 3 12B with QLoRA fine-tuning
- Added PEFT, bitsandbytes, TRL for LoRA training
- 4-bit QLoRA quantization for 48GB GPU fit
- Instruction-tuning format for Gemma chat template
- Auto-detect model type (BERT vs LLM); a detection sketch follows below
- Updated GPU tier to ADA_24/AMPERE_48
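One of the bullets above mentions auto-detecting whether a checkpoint is a BERT-style encoder or a decoder-only LLM. A minimal sketch of one way to do that, inspecting the architectures registered in the Hugging Face config; this is an illustration under that assumption, not the logic actually committed here:

```python
from transformers import AutoConfig


def detect_model_kind(model_id: str) -> str:
    """Classify a checkpoint as a BERT-style encoder or a causal LLM.

    Heuristic sketch: decoder-only LLMs (Llama, Mistral, ...) register
    *ForCausalLM architectures, while encoder models register
    *ForMaskedLM / *Model architectures instead.
    """
    config = AutoConfig.from_pretrained(model_id)
    architectures = config.architectures or []
    if any(arch.endswith("ForCausalLM") for arch in architectures):
        return "llm"
    return "bert"
```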
@@ -1,5 +1,5 @@
 runpod>=1.7.0
-transformers==4.44.0
+transformers>=4.48.0
 datasets>=2.16.0
 accelerate>=0.30.0
 boto3>=1.34.0
@@ -7,3 +7,6 @@ scikit-learn>=1.3.0
 scipy>=1.11.0
 safetensors>=0.4.0
 requests>=2.31.0
+peft>=0.14.0
+bitsandbytes>=0.45.0
+trl>=0.14.0
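The newly pinned libraries (peft, bitsandbytes, trl) come together in the QLoRA fine-tuning path this commit describes. Below is a minimal sketch of that setup; the model id, dataset path, and hyperparameters are assumptions for illustration, not values taken from the commit:

```python
"""Minimal QLoRA fine-tuning sketch for a Gemma-style causal LM."""
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

# Assumed checkpoint id; substitute the one actually used in the pipeline.
# Multimodal Gemma 3 checkpoints may need a different Auto class.
MODEL_ID = "google/gemma-3-12b-it"

# 4-bit NF4 quantization (QLoRA) so a 12B model fits on a single 48 GB GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections; only these weights are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Instruction-tuning pairs rendered through the model's chat template.
dataset = load_dataset("json", data_files="train.jsonl", split="train")  # assumed path

def to_chat_text(example):
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat_text)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="gemma3-qlora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
        dataset_text_field="text",
    ),
    train_dataset=dataset,
    peft_config=lora_config,
    processing_class=tokenizer,
)
trainer.train()
```

Passing the quantized model together with a `peft_config` lets SFTTrainer prepare the model for k-bit training and attach the LoRA adapters itself; only the adapter weights are updated, which is what keeps the memory footprint within the ADA_24/AMPERE_48 GPU tiers mentioned in the commit message.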