LoRA Training

Train a small AI model on your own art style. It runs locally, your data never leaves your machine, and the result generates tiles that look like your work.

Note
All training happens on your machine. Your tiles, your model weights, your hardware. Nothing is uploaded to any cloud service.

What is LoRA training?

LoRA (Low-Rank Adaptation) fine-tunes a language model on your specific tile data. It trains only a small pair of low-rank matrices per layer while the base model's weights stay frozen, which is why the resulting adapter is tiny. Instead of using a general-purpose AI that produces generic pixel art, you get a model that has learned your palette choices, shading style, and composition patterns.
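The low-rank idea can be sketched in a few lines of NumPy. This is an illustrative toy, not PIXL's training code: the layer sizes and rank are made up, and `forward` stands in for one adapted projection layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 1024, 1024, 8    # illustrative sizes; rank is small by design

# Frozen base weight from the pretrained model -- never updated during LoRA.
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors; only these are updated during fine-tuning.
B = np.zeros((d_out, rank))          # zero-init so training starts at the base model
A = rng.normal(size=(rank, d_in)) * 0.01
scale = 1.0

def forward(x):
    # Base projection plus the low-rank correction learned from your tiles.
    return W @ x + scale * (B @ (A @ x))

base_params = W.size                 # 1,048,576 weights in the full matrix
lora_params = A.size + B.size        # 16,384 weights in the adapter (~1.6%)
print(f"trainable: {lora_params} vs full fine-tune: {base_params}")
```

Because only `A` and `B` are stored, the adapter file stays small even when the base model is large.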

How it works

1. Prepare training data

PIXL includes a MAP-Elites algorithm that generates diverse training examples from your tileset. Instead of training on repetitive data (which leads the model to memorize specific tiles rather than generalize your style), MAP-Elites ensures variety across layout density, room count, tile distribution, and theme.
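The core of MAP-Elites is an archive with one elite per behavior bin. The sketch below shows that idea on toy character tiles; the descriptor axes (density, glyph variety), the fitness function, and the `.`-means-empty convention are all illustrative assumptions, not PIXL's actual implementation.

```python
import random

def descriptor(tile):
    """Map a tile (list of row strings) to a coarse behavior bin.
    Axes here: density bucket x glyph-variety bucket (illustrative)."""
    cells = [c for row in tile for c in row]
    filled = sum(c != "." for c in cells)
    density_bin = min(3, filled * 4 // len(cells))   # 4 density buckets
    variety_bin = min(3, len(set(cells)) - 1)        # 4 variety buckets
    return (density_bin, variety_bin)

def map_elites(tiles, fitness):
    """Keep the fittest tile per descriptor bin -> a diverse, non-repetitive set."""
    archive = {}
    for tile in tiles:
        key = descriptor(tile)
        if key not in archive or fitness(tile) > fitness(archive[key]):
            archive[key] = tile
    return list(archive.values())

random.seed(0)
glyphs = ".#~o"
tiles = [["".join(random.choice(glyphs) for _ in range(8)) for _ in range(8)]
         for _ in range(200)]
diverse = map_elites(tiles, fitness=lambda t: len(set("".join(t))))
print(len(diverse), "elites selected from", len(tiles), "tiles")
```

The payoff: however many near-duplicate tiles the source set contains, each bin contributes at most one example, so the training data spans the whole descriptor space.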

2. Train locally

Training runs on your machine using MLX (Apple Silicon) or compatible hardware. No cloud GPU rental, no uploading your art.

# Train a LoRA adapter on your tileset
python training/train_runner.py \
  --data training/data/ \
  --output training/output/

The output is a small adapter file (a few MB) that modifies the base model's behavior.

3. Generate with your style

Once trained, use the adapter during generation:

pixl mcp --file tileset.pax --adapter training/output/lora_adapter/

Every tile the AI generates now follows your established style — light direction, palette usage, detail density, and composition.

What gets trained

The model learns from structured label-grid pairs:

Text descriptions

Natural language prompts like "a 16x16 wall tile, dark fantasy theme"

PAX character grids

The actual tile data from your tileset — pixel-accurate output

Palette usage

Which colors you pick and how you combine them

Composition patterns

Where you place detail, how dense your tiles are, your shading style

After training, you can describe new tiles in natural language and the model produces grids that match your art direction.
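One label-grid pair might serialize to a single JSONL record like the sketch below. The field names (`prompt`, `grid`, `palette`) are hypothetical, chosen only to mirror the categories listed above; PIXL's actual schema may differ.

```python
import json

# Hypothetical training record: text description paired with PAX-style
# character rows and the palette in use. Field names are illustrative.
example = {
    "prompt": "a 16x16 wall tile, dark fantasy theme",
    "grid": [
        "################",
        "#..............#",
        # ... remaining 13 rows of the 16x16 tile ...
        "################",
    ],
    "palette": ["#1a1a2e", "#4a4e69", "#9a8c98"],
}
line = json.dumps(example)   # one record per line in a JSONL training file
print(line[:60])
```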

Requirements

  • Apple Silicon Mac (M1 or later) for MLX training, or any machine with Python + PyTorch
  • An existing tileset with 50+ tiles (more is better)
  • ~30 minutes for training (varies with dataset size)

Training from reference art (Style Scanner)

Don't have an existing tileset? You can train from any reference pixel art — screenshots from classic games, sprite sheets, art packs. See Style Scanner for the full workflow:

pixl scan my_sprites/ --out my_scan --stride 8
pixl prepare my_scan/ --out my_data --style my-game --color-aug
pixl train my_data --adapter my_adapter --epochs 5

The scanner auto-detects tile boundaries, filters low-quality patches, classifies tiles by type, and prepares stratified training data.
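The scan step can be pictured as a sliding window over the sheet: step by `--stride` pixels, cut tile-sized patches, and discard flat ones. The sketch below shows that mechanic with a simple variance filter; the `min_std` threshold and the filtering heuristic are assumptions for illustration, not PIXL's exact logic.

```python
import numpy as np

def extract_patches(img, tile=16, stride=8, min_std=4.0):
    """Slide a tile-sized window with the given stride and keep patches
    whose pixel std clears a threshold (drops flat/empty regions)."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = img[y:y + tile, x:x + tile]
            if patch.std() >= min_std:
                patches.append(patch)
    return patches

rng = np.random.default_rng(1)
sheet = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
sheet[:16, :16] = 0                           # a flat region the filter should drop
patches = extract_patches(sheet, tile=16, stride=8)
print(f"kept {len(patches)} of {((64 - 16) // 8 + 1) ** 2} candidate patches")
```

With `stride` smaller than `tile`, windows overlap, so the scanner sees tiles even when the sheet's grid doesn't start at pixel 0.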