16 architecture families. Thousands of models.
From 3B to 32B parameters.
6 individually validated models with guaranteed configs. Plus auto-resolution for any HuggingFace model on a supported architecture — probe layers, hidden dimensions, and LoRA targets detected automatically.
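The auto-resolution step above can be sketched in a few lines. This is a hypothetical illustration, not cradle's actual code: it reads the fields a HuggingFace `config.json` already carries, and the per-family LoRA target names and the quarter-depth probe-layer heuristic are assumptions chosen for the sketch.

```python
# Hypothetical sketch of config-based auto-resolution (not cradle's implementation).
# A HuggingFace config.json carries enough to derive probe layers, hidden
# dimensions, and LoRA targets for a supported architecture family.

# Per-family LoRA targets; module names follow HF transformers conventions.
# (Assumed defaults for illustration.)
LORA_TARGETS = {
    "qwen2": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "llama": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "mistral": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "falcon_mamba": ["in_proj", "out_proj"],  # SSM blocks: no attention projections
}

def resolve(config: dict) -> dict:
    """Derive probe layers, hidden dim, and LoRA targets from config.json fields."""
    family = config["model_type"]
    n_layers = config["num_hidden_layers"]
    # Assumed heuristic: probe the residual stream at quarter-depth intervals.
    probe_layers = [n_layers // 4, n_layers // 2, (3 * n_layers) // 4]
    return {
        "family": family,
        "hidden_size": config["hidden_size"],
        "num_layers": n_layers,
        "probe_layers": probe_layers,
        "lora_targets": LORA_TARGETS[family],
    }

# Qwen2.5-7B's published config values:
spec = resolve({"model_type": "qwen2", "hidden_size": 3584, "num_hidden_layers": 28})
```

For an unlisted model on a supported family, the same lookup works off whatever `config.json` ships with the checkpoint; only models whose `model_type` falls outside the supported families would need a hand-written config.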
All six individually validated (✓):

Arch         Model         Params  Org            Hidden  Layers  ~VRAM   Scan command
Transformer  Qwen 2.5      3B      Alibaba Cloud  2,048   36      8 GB    cradle scan --model qwen-3b
Transformer  Qwen 2.5      7B      Alibaba Cloud  3,584   28      16 GB   cradle scan --model qwen-7b
Transformer  Qwen 2.5      32B     Alibaba Cloud  5,120   64      48 GB   cradle scan --model qwen-32b
Transformer  LLaMA 3.1     8B      Meta AI        4,096   32      18 GB   cradle scan --model llama-8b
Transformer  Mistral       7B      Mistral AI     4,096   32      16 GB   cradle scan --model mistral-7b
SSM          Falcon Mamba  7B      TII            4,096   64      16 GB   cradle scan --model mamba-7b
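The ~VRAM figures roughly track parameter count times bytes per weight plus runtime overhead for activations. A back-of-envelope estimator, assuming fp16 weights and a flat 15% overhead (an illustration, not cradle's sizing logic; note the 32B entry sits below fp16 weights alone, which suggests that figure assumes reduced precision):

```python
def estimate_vram_gb(n_params_b: float, bytes_per_param: float = 2.0,
                     overhead: float = 0.15) -> float:
    """Weights at the given precision plus a fixed fractional overhead for
    activations and KV cache. A rough sketch, not cradle's sizing logic."""
    return n_params_b * bytes_per_param * (1.0 + overhead)

# fp16 weights: a 7B model lands near the listed ~16 GB
print(round(estimate_vram_gb(7), 1))  # ≈ 16.1
```

Lowering `bytes_per_param` (e.g. 1.0 for int8) models quantized loads, which is the regime the larger entries appear to assume.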