Examples of subtractive prompt designs

Large language models (LLMs) usually default to lists, obvious observations, hedging, and comprehensiveness. To get deeper insight, design prompts that close off these escape routes rather than piling on more instructions. This is an example of a subtractive solution.

Explicitly constraining the LLM to ONE thing forces prioritization. Explicitly excluding cheap answer categories and naming the cognitive target (e.g., surprise, not correctness) leads to more novel results.

The prompt is mostly exclusions — the actual ask is almost nothing.

Real Example

I've used the prompt below to systematically refactor and clean up my dotfiles repository, which had grown sprawling and full of accumulated cruft:

Review this dotfiles repository.
You've seen it before.
Look at the full contents and tell me ONE thing I'd find genuinely surprising or useful.
Not a style nitpick.
Not a known issue.
Something I probably don't realize about my own system.

(I made this multi-line for easier reading online. Whether it's multi-line or single-line likely doesn't matter when actually prompting.)
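The pattern above can be sketched as a tiny helper that makes the structure explicit: one small ask surrounded mostly by exclusions. This is just an illustrative sketch, not part of any library; the function name and parameters are my own invention.

```python
# Illustrative sketch of a "subtractive" prompt builder: the ask is tiny,
# and most of the prompt is exclusions that close off cheap answer categories.
# None of these names come from an existing library.

def subtractive_prompt(ask: str, exclusions: list[str], target: str) -> str:
    """Build a prompt that is mostly exclusions wrapped around one small ask."""
    lines = [ask]
    # Each exclusion rules out an entire category of easy answers.
    lines += [f"Not {e}." for e in exclusions]
    # Name the cognitive target so the model knows what kind of answer wins.
    lines.append(target)
    return "\n".join(lines)

prompt = subtractive_prompt(
    ask="Look at the full contents and tell me ONE thing "
        "I'd find genuinely surprising or useful.",
    exclusions=["a style nitpick", "a known issue"],
    target="Something I probably don't realize about my own system.",
)
print(prompt)
```

The exclusions do most of the work: each one removes a default answer category, leaving prioritization as the only remaining move.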

This prompt has yielded (among other things):