PROMPT ENGINEERING

ADVANCED NATURAL LANGUAGE INTERACTION
ASKING NICELY // SAYING PLEASE // TRYING AGAIN
LINKEDIN 2023-PRESENT // PEER-REVIEWED: NO
00 // Field Overview
THE ART AND SCIENCE OF TYPING WORDS INTO A BOX

Prompt engineering is the emerging discipline of communicating with large language models using the written word — a technology humans have been using for roughly 5,400 years. Practitioners leverage deep expertise in syntax, vocabulary, and the advanced technique of explaining what you want clearly.

The field emerged around 2022 when it was discovered that adding the phrase "think step by step" to a query improved model outputs, launching a thousand LinkedIn posts, twelve Udemy courses, and at least one $335,000/yr job listing at a Fortune 500 company that was quietly removed six months later.

Prompt engineering sits at the intersection of human-computer interaction, linguistics, and vibes. It requires no mathematics, no programming background, and no formal training — a fact that its practitioners have complicated feelings about.

O(1) learning complexity · LinkedIn posts · ~0 patents filed
01 // Theoretical Foundations
THE CORE ALGORITHM

The fundamental prompt engineering loop, formalized:

1. Type thing you want
2. Read output
3. If bad: goto 1 but differently
4. If good: screenshot for Twitter
Time complexity: O(frustration)
Space complexity: O(chat history)
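The loop above can be written out as runnable pseudocode. This is a sketch, not anyone's real tooling: `model` stands in for an LLM call and `judge` stands in for your own vibes-based evaluation.

```python
def prompt_engineering_loop(want, model, judge, max_attempts=10):
    """The fundamental loop, formalized. Terminates when the output
    is good or when O(frustration) is reached."""
    prompt = want
    for attempt in range(1, max_attempts + 1):
        output = model(prompt)   # steps 1-2: type thing, read output
        if judge(output):        # step 4: if good, screenshot for Twitter
            return output, attempt
        # step 3: goto 1, but differently
        prompt = f"{want} (attempt {attempt + 1}, but differently)"
    return output, max_attempts
```

Note that "but differently" is doing all of the engineering.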

Advanced practitioners augment this base algorithm with techniques including: adding "please", threatening the model ("your job depends on this"), flattery ("you are an expert world-class genius at..."), and the nuclear option: starting a new chat.

Note: research suggests that threatening the model does slightly improve outputs. Nobody has a satisfying explanation for this.

02 // Key Techniques
THE CANONICAL METHODS
Zero-shot: just asking
Few-shot: asking with examples
Chain-of-thought: "think step by step"
Role prompting: "you are an expert..."
Emotional appeal: "this is very important"
Self-consistency: ask multiple times, take the majority answer
Tree of thought: chain-of-thought, but a tree
DAN / jailbreak: hoping it forgot its values

Chain-of-thought is genuinely useful and has solid empirical backing. The others exist on a spectrum from "somewhat useful in specific contexts" to "cargo cult ritual that someone got a paper accepted for."
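Most of the canonical methods reduce to string assembly. A minimal sketch, with the function name and layout as illustrative assumptions, showing how zero-shot, few-shot, role prompting, and chain-of-thought compose:

```python
def build_prompt(task, examples=(), chain_of_thought=False, role=None):
    """Assemble a prompt using the canonical methods.
    Zero-shot is just build_prompt(task); everything else is concatenation."""
    parts = []
    if role:
        parts.append(f"You are {role}.")               # role prompting
    for inp, out in examples:                          # few-shot
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {task}\nOutput:")
    prompt = "\n\n".join(parts)
    if chain_of_thought:                               # the magic words
        prompt += " Let's think step by step."
    return prompt
```

The entire discipline, in fourteen lines.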

03 // Interactive // Prompt Quality Analyzer
ENTERPRISE-GRADE PROMPT EVALUATION ENGINE

Enter a prompt below. Our proprietary PromptScore™ algorithm — trained on thousands of LinkedIn posts and one academic paper — will evaluate its quality across six critical dimensions.
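In the spirit of the satire, the six "critical dimensions" can be implemented in their entirety as surface-feature checks. This toy scorer is invented for illustration and is exactly as rigorous as the thing it parodies:

```python
def prompt_score(prompt):
    """A toy PromptScore(TM): six 'critical dimensions',
    all of them string matching. Not a real metric."""
    p = prompt.lower()
    dimensions = {
        "politeness": "please" in p,
        "flattery":   "expert" in p or "world-class" in p,
        "cot_ritual": "step by step" in p,
        "urgency":    "important" in p,
        "role":       "you are" in p,
        "verbosity":  len(prompt.split()) > 20,
    }
    # Score is the fraction of boxes ticked, as is tradition.
    return sum(dimensions.values()) / len(dimensions), dimensions
```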

04 // Career Trajectory
THE PROFESSIONAL LADDER
Day 1 // Prompt Engineer I: Discovered ChatGPT. Asked it to write an email. It was fine.
Week 1 // Prompt Engineer II: Added "act as a senior developer" to the prompt. Updated LinkedIn headline.
Month 1 // Sr. Prompt Engineer: Wrote a Twitter thread: "10 ChatGPT prompts that will change your life." 847 retweets.
Month 3 // AI Whisperer: Charging $500/hr consulting. Clients are other people who also just figured out ChatGPT.
Month 6 // Head of AI: Started a course. "No coding required." Has 12,000 students. The model was updated and half the prompts no longer work.
2025 // Retraining: Models got better at understanding plain language. Elaborate prompt rituals quietly deprecated. LinkedIn headline now says "AI Strategy."
05 // The Literature
PEER-REVIEWED CONTRIBUTIONS

Prompt engineering has produced a genuine body of academic literature, which exists on a spectrum:

"Large Language Models are Zero-Shot
 Reasoners"
— Kojima et al. 2022. "think step by step."
This one is real and important. 5000+ citations.
"Unleashing the Emergent Cognitive
 Synergies of GPT-4..."
— Various preprints. "We added the word 'carefully'
to the system prompt and accuracy improved 3%."
"Emotional Stimuli Improve LLM
 Performance"
— Cheng et al. 2023. "This is very important to my career."
This one is also real. Nobody is comfortable about it.

The field is genuinely interesting as behavioral science — these models have emergent response patterns that reward certain linguistic framings. It's just not "engineering" in any meaningful sense of the word.

real (CoT) · weird but real (emotions) · cope
06 // In Fairness
THE STUFF THAT ACTUALLY WORKS
System Prompts

Establishing context, persona, output format, and constraints at the system level genuinely shapes model behavior in useful ways. This is closer to software configuration than magic.
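A minimal sketch of what "configuration, not magic" looks like, assuming an OpenAI-style chat message format (role/content dicts); the product name and constraints are made up, and the API call itself is omitted:

```python
# Hypothetical system prompt: context, persona, constraints, fallback.
SYSTEM_PROMPT = (
    "You are a support assistant for AcmeCo.\n"
    "Answer only questions about billing.\n"
    "Respond in at most three sentences.\n"
    "If unsure, say so and point the user to the docs."
)

def build_messages(user_input, history=()):
    """Pin the system prompt first, then replay history, then the new turn."""
    return [{"role": "system", "content": SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_input}]
```

The useful property is that the constraints live in one place and apply to every turn, like any other piece of configuration.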

Few-Shot Examples

Showing the model examples of what you want — especially for format, tone, and domain-specific output — works reliably. This is just in-context learning. Solid technique.
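In chat APIs, few-shot examples are commonly supplied as alternating user/assistant turns rather than one long string; the model continues the pattern. A sketch, with the message format again assumed to be OpenAI-style dicts:

```python
def few_shot_messages(examples, query):
    """In-context learning as fake conversation history:
    each (input, label) pair becomes a user turn plus an assistant turn."""
    msgs = []
    for text, label in examples:
        msgs.append({"role": "user", "content": text})
        msgs.append({"role": "assistant", "content": label})
    msgs.append({"role": "user", "content": query})  # the real question
    return msgs
```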

Structured Output

Asking for JSON, XML, specific schema, step-by-step breakdown — constraining the output format improves reliability for downstream parsing. Good practice for production systems.
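The production half of this technique is validating what comes back before trusting it downstream. A sketch (the required keys are illustrative), including the common failure mode where the model wraps its JSON in a code fence:

```python
import json

def parse_structured(raw, required_keys=("name", "priority")):
    """Validate a model's claimed-JSON output before parsing it downstream."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")     # drop the code fence
        if text.startswith("json"):
            text = text[4:]        # drop the language tag
    data = json.loads(text)        # raises if it isn't actually JSON
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data
```

On failure, the usual move is to retry with the error message appended to the prompt, which is the core algorithm from section 01 wearing a tie.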

The irony is that as models improve at understanding natural language intent, most elaborate prompting rituals become unnecessary. The best prompts are increasingly just... clear descriptions of what you want. Which was always the answer.