Mirror of https://github.com/ghndrx/kubeflow-pipelines.git, synced 2026-02-10 06:45:13 +00:00
- Added PEFT, bitsandbytes, TRL for LoRA training
- 4-bit QLoRA quantization for 48GB GPU fit
- Instruction-tuning format for Gemma chat template
- Auto-detect model type (BERT vs LLM)
- Updated GPU tier to ADA_24/AMPERE_48
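For context, a minimal sketch of how these dependencies combine for 4-bit QLoRA instruction tuning. The Gemma model id, the train.jsonl path, and all hyperparameters below are illustrative assumptions, not taken from this repository's training code.

```python
# Hedged sketch: 4-bit QLoRA fine-tuning with transformers + peft + bitsandbytes + trl.
# Model id, dataset path, and hyperparameters are illustrative assumptions,
# not taken from this repository.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

base_model_id = "google/gemma-2-9b-it"  # assumption: any Gemma chat model works here

# Quantize the frozen base weights to 4-bit NF4 so the model fits in 24-48 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# LoRA adapters: only these small low-rank matrices receive gradients.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Assumption: each JSONL row carries a "messages" list, which SFTTrainer renders
# through the tokenizer's (Gemma) chat template.
train_ds = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="qlora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
    train_dataset=train_ds,
    processing_class=tokenizer,
    peft_config=lora_config,
)
trainer.train()
```

Only the LoRA adapter weights are written to the output directory; they are small enough to merge back into the base model or to load on top of it separately at inference time.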
13 lines · 201 B · Plaintext
runpod>=1.7.0
transformers>=4.48.0
datasets>=2.16.0
accelerate>=0.30.0
boto3>=1.34.0
scikit-learn>=1.3.0
scipy>=1.11.0
safetensors>=0.4.0
requests>=2.31.0
peft>=0.14.0
bitsandbytes>=0.45.0
trl>=0.14.0
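The commit message above also mentions auto-detecting whether a checkpoint is a BERT-style encoder or a decoder-only LLM. One possible heuristic using the pinned transformers dependency is sketched below; the function name detect_model_type and the decision rule are illustrative assumptions, not the repository's actual logic.

```python
# Hedged sketch: guess whether a checkpoint is an encoder model ("bert")
# or a decoder-only causal LM ("llm"). Heuristic and name are assumptions.
from transformers import AutoConfig

def detect_model_type(model_id: str) -> str:
    config = AutoConfig.from_pretrained(model_id)
    architectures = config.architectures or []
    # Decoder-only checkpoints usually register a "...ForCausalLM" architecture.
    if any(name.endswith("ForCausalLM") for name in architectures):
        return "llm"
    return "bert"

# Example (gated models may require Hugging Face authentication to download):
print(detect_model_type("bert-base-uncased"))  # -> "bert"
```

In a pipeline like this one, such a switch would presumably route BERT-style checkpoints to a standard classification fine-tune and LLM checkpoints to the QLoRA path sketched earlier.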