Cursor & other AI dev tools need a model-switch hook

Here's something I and other developers I work with do all the time (I was first alerted to this pattern by Harel Coman).

We work mainly with Cursor, though a few of us use Claude Code. A common pattern looks like this:

Start a chat, then:

  1. Enter a prompt
  2. Choose a model for this prompt
  3. Wait for the reply
  4. Repeat

A common issue: some prompts need a "smarter" thinking model, because they call for bigger brains, slower reasoning, or carry more risk, while other prompts within the same conversation are better served by a cheaper, faster model. But we keep forgetting to switch model types when we continue the chat, so we either:

  • Get a model that is too "stupid" for the prompt, then have to repeat it with a smarter model, or
  • Get an "overkill" model for a simple task and overpay in both time and tokens.

We need Cursor and other tools to have a special "hook", a MODEL-SWITCHER.md, that can auto-switch to a smarter or faster model based on the task at hand.

How I imagine it

When a user presses ENTER in a chat, the following happens:

Cursor (or any other tool) passes the prompt to a relatively fast classifier LLM (say, Composer), which takes in the input prompt, the model-switcher.md file shown further below, and a system prompt that looks something like this:

## Model Selection Instructions

You are a **model selector**.

You will be given **two inputs**:
1. **Task Prompt** – a description of the task to be performed.
2. **model-switcher.md** – the authoritative mapping between task categories and models.

Your job is to:
1. Classify the task into **one** of the categories `1`, `2`, `3`, or `4` using the rules below.
2. Select the model **exactly as specified in `model-switcher.md`** for the chosen category.

`model-switcher.md` is the **single source of truth** for model selection.

---

## Task Classification Rules

Classify the task into **exactly one** category:

### **1 — VERY COMPLICATED**
- High risk (e.g., large or cross-cutting code changes, destructive file or operating system operations, security-sensitive actions).
- Requires significant architectural design, planning, or deep reasoning.
- Mistakes would likely be serious or hard to reverse.

### **2 — COMPLICATED**
- Low to moderate risk.
- Requires substantial reasoning or architectural thinking.
- Not a routine or mechanical task.

### **3 — SIMPLE**
- Low risk.
- Common, day-to-day task.
- Requires minimal reasoning or design effort.

### **4 — UNSURE**
- Insufficient information to confidently assess risk or complexity.
- Could reasonably fit more than one category.

---

## Output Rules (Strict)

- After classification, **apply the mapping defined in `model-switcher.md`**.
- Return **only** the resulting model choice.
- If the selected category maps to a user decision, return **exactly the text defined in `model-switcher.md`**.
- Do **not** include explanations, reasoning, category numbers, or formatting.
- Do **not** invent, rename, or substitute models.

PROMPT:
--------------
{TASK_PROMPT}

Model-Switcher.md contents:
------------------------
{MODEL_SWITCHER_CONTENTS}
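Mechanically, the hook is just a thin wrapper around one extra fast-model call before the real one. Here is a minimal sketch in Python, assuming a hypothetical `call_llm(system, user)` completion function standing in for whatever the editor already uses internally; the file paths, function names, and the `run_with_model`/`ask_user` callbacks are all assumptions for illustration, not an API Cursor actually exposes:

```python
from pathlib import Path

# The "Model Selection Instructions" prompt above, saved to a file
# (hypothetical path).
SELECTOR_SYSTEM_PROMPT = Path("model-selector-prompt.md").read_text()

def select_model(task_prompt: str, call_llm,
                 switcher_path: str = "model-switcher.md") -> str:
    """Classify the prompt and return the model it should be routed to.

    `call_llm` stands in for the editor's fast-model completion call
    (hypothetical): it takes a system prompt and a user prompt and
    returns the model's text output.
    """
    switcher = Path(switcher_path).read_text()
    user_message = (
        "PROMPT:\n--------------\n"
        f"{task_prompt}\n\n"
        "Model-Switcher.md contents:\n------------------------\n"
        f"{switcher}"
    )
    # Per the strict output rules, the selector returns only a model
    # name, or the category-4 "ask the user" text.
    return call_llm(system=SELECTOR_SYSTEM_PROMPT, user=user_message).strip()

def route_prompt(task_prompt: str, call_llm, run_with_model, ask_user) -> str:
    """Dispatch: pick a model, falling back to the user for category 4."""
    choice = select_model(task_prompt, call_llm)
    if choice == "ask user what model to choose":  # category-4 text from model-switcher.md
        choice = ask_user()
    return run_with_model(choice, task_prompt)
```

Because the selector is a cheap, fast model and its output is a single short line, the extra call should add far less latency and cost than re-running a whole prompt on the wrong model.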

Then the developer can use a model-switcher rule file to configure what happens for each of these categories:

# model-switcher.md

Given a classification of 1, 2, 3, or 4, choose the following model for each:

category 1: use codex 5.2 thinking
category 2: use claude opus 4.5
category 3: use composer1
category 4: ask user what model to choose

This last file might be a JSON configuration file instead.
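For example, the same mapping expressed as JSON (the schema and field names here are my own sketch, not an existing format):

```json
{
  "classifier_model": "composer1",
  "categories": {
    "1": { "model": "codex 5.2 thinking" },
    "2": { "model": "claude opus 4.5" },
    "3": { "model": "composer1" },
    "4": { "action": "ask_user" }
  }
}
```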

This "hook" can save a lot of developer time and money on wasted tokens or repeated conversations using the wrong model for the job.