Prompt engineering is the emerging discipline of communicating with large language models using the written word — a technology humans have been using for roughly 5,400 years. Practitioners leverage deep expertise in syntax, vocabulary, and the advanced technique of explaining what you want clearly.
The field emerged around 2022 when it was discovered that adding the phrase "think step by step" to a query improved model outputs, launching a thousand LinkedIn posts, twelve Udemy courses, and at least one $335,000/yr job listing at a Fortune 500 company that was quietly removed six months later.
Prompt engineering sits at the intersection of human-computer interaction, linguistics, and vibes. It requires no mathematics, no programming background, and no formal training — a fact that its practitioners have complicated feelings about.
The fundamental prompt engineering loop, formalized:
1. Write prompt
2. Read output
3. If bad: goto 1 but differently
4. If good: screenshot for Twitter
Time complexity: O(frustration)
Space complexity: O(chat history)
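The loop above can be sketched in Python. Everything here is a stand-in: `model` fakes an LLM API call and `looks_good` fakes the practitioner's entirely subjective judgment.

```python
import random

def model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"response to: {prompt}"

def looks_good(output: str) -> bool:
    # Stand-in for the practitioner's subjective judgment.
    return random.random() < 0.3

def prompt_engineering_loop(task: str, max_attempts: int = 10) -> str:
    prompt = task                                 # 1. Write prompt
    for attempt in range(max_attempts):           # time: O(frustration)
        output = model(prompt)                    # 2. Read output
        if looks_good(output):                    # 4. If good: screenshot
            return output
        # 3. If bad: goto 1, but differently
        prompt = f"{task} (attempt {attempt + 2}, but differently)"
    return output  # nuclear option: start a new chat

result = prompt_engineering_loop("summarize this document")
```

Note that `max_attempts` is doing load-bearing work here: without it, termination depends entirely on morale.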
Advanced practitioners augment this base algorithm with techniques including: adding "please", threatening the model ("your job depends on this"), flattery ("you are an expert world-class genius at..."), and the nuclear option: starting a new chat.
Note: research suggests that threatening the model does slightly improve outputs. Nobody has a satisfying explanation for this.
Of these tricks, chain-of-thought ("think step by step") is genuinely useful and has solid empirical backing. The others exist on a spectrum from "somewhat useful in specific contexts" to "cargo cult ritual that someone got a paper accepted for."
Enter a prompt below. Our proprietary PromptScore™ algorithm — trained on thousands of LinkedIn posts and one academic paper — will evaluate its quality across six critical dimensions.
Prompt engineering has produced a genuine body of academic literature, which exists on a spectrum:
"Large Language Models are Zero-Shot Reasoners"
— Kojima et al. 2022. "think step by step."
This one is real and important. 5000+ citations.

"...Synergies of GPT-4..."
— Various preprints. "We added the word 'carefully' to the system prompt and accuracy improved 3%."

"...Performance"
— Cheng et al. 2023. "This is very important to my career."
This one is also real. Nobody is comfortable about it.
The field is genuinely interesting as behavioral science — these models have emergent response patterns that reward certain linguistic framings. It's just not "engineering" in any meaningful sense of the word.
Establishing context, persona, output format, and constraints at the system level genuinely shapes model behavior in useful ways. This is closer to software configuration than magic.
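A minimal sketch of system-level configuration, using the common chat-message convention of role/content pairs. The persona, constraints, and user question below are invented for illustration; no particular model or client library is assumed.

```python
# System-level configuration: context, persona, output format, constraints.
system_prompt = (
    "You are a technical documentation writer. "
    "Respond in plain English, in at most 100 words. "
    "If the question is ambiguous, ask one clarifying question first."
)

messages = [
    {"role": "system", "content": system_prompt},     # shapes all turns
    {"role": "user", "content": "Explain what a mutex is."},
]
```

The point is that this is configuration, set once and applied to every turn, rather than an incantation repeated in each user message.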
Showing the model examples of what you want — especially for format, tone, and domain-specific output — works reliably. This is just in-context learning. Solid technique.
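In-context learning at its simplest: a few input/output pairs embedded in the prompt itself, ending where the model is expected to continue. The tickets and labels below are invented for illustration.

```python
# Few-shot prompting: demonstrate the desired format via examples.
examples = [
    ("refund not processed", "billing"),
    ("app crashes on login", "bug"),
    ("how do I export my data?", "how-to"),
]

prompt = "Classify each support ticket into one category.\n\n"
for ticket, label in examples:
    prompt += f"Ticket: {ticket}\nCategory: {label}\n\n"

# End at the point the model should complete.
prompt += "Ticket: password reset email never arrives\nCategory:"
```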
Asking for JSON, XML, specific schema, step-by-step breakdown — constraining the output format improves reliability for downstream parsing. Good practice for production systems.
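A sketch of that pattern with Python's stdlib `json`: request a specific schema, then validate whatever comes back before trusting it. The schema, the text, and the simulated reply are all hypothetical.

```python
import json

# Ask for a constrained output format...
prompt = (
    "Extract the product name and price from the text below. "
    'Respond with only a JSON object: {"name": string, "price": number}.\n\n'
    "Text: The WidgetMax 3000 is on sale for $49.99."
)

# ...and validate the reply before handing it downstream.
def parse_response(raw: str) -> dict:
    data = json.loads(raw)  # raises ValueError on non-JSON output
    if not {"name", "price"} <= data.keys():
        raise ValueError(f"missing keys in response: {data}")
    return data

# Simulated model reply (an assumption; real outputs vary and can fail).
reply = '{"name": "WidgetMax 3000", "price": 49.99}'
parsed = parse_response(reply)
```

The validation step is the part production systems actually depend on; the model asking politely for JSON is necessary but not sufficient.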
The irony is that as models improve at understanding natural language intent, most elaborate prompting rituals become unnecessary. The best prompts are increasingly just... clear descriptions of what you want. Which was always the answer.