Image Router
Overview
Automatically routes image inputs to a vision-capable model when the active model doesn't support images (e.g., DeepSeek → Codex fallback).
Package docs also remain available on the package summary page at /docs/image-router, but this page focuses on the Pi-native extension surface: commands, tools, environment variables, packaged skills, and extension files.
Environment
- export IMAGE_ROUTER_VISION_PROVIDER="openai-codex"
- export IMAGE_ROUTER_VISION_MODEL="codex-1"
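A minimal TypeScript sketch of how the extension might resolve these settings is shown below; resolveVisionTarget and the fallback values are illustrative assumptions, not documented defaults.
// Illustrative sketch: the helper name and the fallback values are assumptions.
interface VisionTarget {
  provider: string;
  model: string;
}
function resolveVisionTarget(env: Record<string, string | undefined> = process.env): VisionTarget {
  return {
    provider: env.IMAGE_ROUTER_VISION_PROVIDER ?? "openai-codex",
    model: env.IMAGE_ROUTER_VISION_MODEL ?? "codex-1",
  };
}
const target = resolveVisionTarget(); // e.g. { provider: "openai-codex", model: "codex-1" } unless overridden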
Extension Files
Primary extension path: pi-packages/image-router/extensions/image-router
index.ts
Installation
# From the agent-skills-marketplace repo root
pi install "$PWD/pi-packages/image-router"
# From the Diversio monolith root
pi install "$PWD/agent-skills-marketplace/pi-packages/image-router" Local test or verification snippet:
# From the agent-skills-marketplace repo root
pi --no-extensions -e ./pi-packages/image-router
Problem
You're chatting with DeepSeek V4 Pro (or any text-only model) in Pi. You paste a screenshot. Nothing useful happens — the model can't see images.
Solution
This extension acts as a transparent middleman:
The main model never sees the raw image — it reads a description like:
> [Image described by vision model:
> The screenshot shows a React error page with a red banner reading…]
You paste an image
        │
        ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│ Extension       │────▶│ Vision model     │────▶│ Your main model │
│ intercepts the  │     │ (Codex / GPT /   │     │ receives a text │
│ image prompt    │     │ Claude, etc.)    │     │ description of  │
│                 │     │ describes it     │     │ the image       │
└─────────────────┘     └──────────────────┘     └─────────────────┘
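To make the flow concrete, here is a rough TypeScript sketch of the interception step; MessagePart and describeWithVisionModel are hypothetical names, not the extension's actual internals.
// Hypothetical shapes; the extension's real internals may differ.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType: string };
// Stand-in for a request to the configured vision model (Codex / GPT / Claude, etc.).
async function describeWithVisionModel(image: { data: string; mimeType: string }): Promise<string> {
  // ...send `image` to the vision-capable model and return its description...
  return "The screenshot shows a React error page with a red banner reading…";
}
// Core idea: every image part becomes a text part before a text-only main model sees the prompt.
async function routeImages(parts: MessagePart[]): Promise<MessagePart[]> {
  return Promise.all(
    parts.map(async (part) => {
      if (part.type !== "image") return part;
      const description = await describeWithVisionModel(part);
      return { type: "text" as const, text: `[Image described by vision model: ${description}]` };
    }),
  );
}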
Three ways images enter the conversation
Routing modes (per model)
Run /image-router to open the settings panel and change modes.