Start where it was tested.
Each prompt names the image model it was built around, so you are not guessing how it behaved when it was tested.
Start with the model you are using, then pick a prompt that keeps the camera flaws AI tends to smooth away: harsh flash, soft room clutter, skin texture, bad crops, and enough real-life mess to keep the result from looking staged.
New model pages have to earn their space with tested prompts, not name-dropping. Wan, GPT Image, and Pro matter here only when they solve the same old problems: clean skin, perfect rooms, fake flash, stiff hands, and prompts that fall apart the second you paste them.
Nano Banana 2 is live now. Wan 2.7, GPT Image 1.5, and Nano Banana Pro stay queued until they have enough tested prompt pages to be useful.
A prompt keeps one permanent page. The model label tells you where it was tested without turning the site into duplicate filler.
If you move a prompt to another model, keep the flash, clutter, pores, framing, and room evidence. That is usually what keeps it real.
Wan may hold reference detail better. GPT Image may follow a stricter setup. Nano Banana 2 may get you there faster. The point is knowing the tradeoffs before you paste.
These models are on the map, but they do not get a live model page until the prompts are tested enough to help.
complex prompt adherence · higher-fidelity production assets · structured prompt tests
reference-heavy image work · collection batch testing · garment and room texture
instruction adherence · complex scene setup · cross-model comparison