# xlmtec

xlmtec is a command-line toolkit for fine-tuning large language models. Describe your task in plain English, get a ready-to-run config, browse HuggingFace models, and train, all from the terminal.
## Features

- AI-powered config generation: describe your task, get a YAML config from Claude, Gemini, or GPT
- Model Hub browser: search and inspect HuggingFace models without leaving the terminal
- 5 fine-tuning methods: LoRA, QLoRA, Full, Instruction, DPO
- Config validation: catch errors before training starts
- Dry-run mode: preview your training plan without loading a model
- Rich terminal UI: progress bars, panels, colour output throughout
## Installation

```bash
# Core (lightweight, no ML dependencies)
pip install xlmtec

# With training support
pip install "xlmtec[ml]"

# With AI suggestions (pick your provider)
pip install "xlmtec[claude]"   # Anthropic
pip install "xlmtec[gemini]"   # Google
pip install "xlmtec[codex]"    # OpenAI
pip install "xlmtec[ai]"       # All three

# Everything
pip install "xlmtec[full]"
```

## Quick start

```bash
xlmtec ai-suggest "fine-tune a small model for customer support" --provider claude
```

Outputs a ready-to-run YAML config and the exact command to run.
```bash
# Browse the Model Hub
xlmtec hub search "bert" --task text-classification --limit 5
xlmtec hub trending
xlmtec hub info google/bert-base-uncased

# Validate a config
xlmtec config validate config.yaml

# Preview without loading a model
xlmtec train --config config.yaml --dry-run

# Start training
xlmtec train --config config.yaml
```

## Commands

| Command | Description |
|---|---|
| `xlmtec ai-suggest "<task>"` | Generate a config from plain English |
| `xlmtec hub search "<query>"` | Search HuggingFace models |
| `xlmtec hub info <model-id>` | Show model details |
| `xlmtec hub trending` | Top trending models |
| `xlmtec config validate <file>` | Validate a YAML config |
| `xlmtec train --config <file>` | Fine-tune a model |
| `xlmtec train --config <file> --dry-run` | Preview the training plan |
| `xlmtec recommend` | Get a method recommendation for your hardware |
| `xlmtec evaluate` | Evaluate a fine-tuned model |
| `xlmtec benchmark` | Compare multiple runs |
| `xlmtec merge` | Merge a LoRA adapter into the base model |
| `xlmtec upload` | Upload a model to the HuggingFace Hub |
| `xlmtec --version` | Show installed version |
## Fine-tuning methods

| Method | VRAM | Best for |
|---|---|---|
| `lora` | Low (4–8 GB) | Most tasks, fast convergence |
| `qlora` | Very low (4 GB) | Large models on limited hardware |
| `full` | High (24 GB+) | Best quality, small models |
| `instruction` | Low (4–8 GB) | Prompt/response-style tasks |
| `dpo` | Low (4–8 GB) | Preference learning from pairs |
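The VRAM column above suggests a simple way to reason about which method fits your GPU. The helper below is an illustrative sketch that just encodes the table's figures; it is not how `xlmtec recommend` actually works:

```python
def suggest_method(vram_gb: float, want_best_quality: bool = False) -> str:
    """Map available VRAM to a fine-tuning method, following the table above.

    Illustrative only: the real `xlmtec recommend` inspects your hardware
    directly rather than taking a number.
    """
    if want_best_quality and vram_gb >= 24:
        return "full"    # best quality, small models
    if vram_gb >= 6:
        return "lora"    # most tasks, fast convergence
    if vram_gb >= 4:
        return "qlora"   # large models on limited hardware
    raise ValueError("Less than 4 GB VRAM is below any method's floor")

print(suggest_method(6))                            # lora
print(suggest_method(24, want_best_quality=True))   # full
```

For a real recommendation, run `xlmtec recommend`, which checks your hardware directly.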
## AI providers

Set your API key as an environment variable, then pass `--provider`:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
xlmtec ai-suggest "summarise legal documents" --provider claude

export GEMINI_API_KEY=...
xlmtec ai-suggest "summarise legal documents" --provider gemini

export OPENAI_API_KEY=sk-...
xlmtec ai-suggest "summarise legal documents" --provider codex
```

## Example config

```yaml
model:
  name: gpt2

dataset:
  source: local_file
  path: data/train.jsonl

lora:
  r: 16
  alpha: 32
  target_modules: [c_attn]

training:
  output_dir: output/run1
  num_epochs: 3
  batch_size: 4
  learning_rate: 2e-4
```

## Development

```bash
git clone https://github.com/Abdur-azure/xlmtec.git
cd xlmtec
pip install -e ".[full,dev]"
pytest tests/ -v --ignore=tests/test_integration.py
```

See CHANGELOG.md for full release history.
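The example config above reads training data from `data/train.jsonl`, a JSON Lines file: one standalone JSON object per line. The field names below (`prompt`/`response`) are an assumption for illustration; check which keys your chosen fine-tuning method expects:

```python
import json

# One JSON object per line; field names here are illustrative.
examples = [
    {"prompt": "How do I reset my password?",
     "response": "Open Settings, choose Security, then Reset password."},
    {"prompt": "Where is my invoice?",
     "response": "Invoices are listed under Billing, then History."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Reading it back: one json.loads per line.
with open("train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
print(len(records))  # 2
```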
## License

MIT