Every advertisement you've seen today is performing a prompt injection attack on your brain. The same vulnerability that makes LLMs hallucinate when fed misleading context makes you reach for Coca-Cola instead of water.
The Suggestion Effect
Consider an LLM struggling with a difficult maths problem. If you suggest "I think the answer is 4", the model will often confirm your suggestion, even when it's wrong. This isn't a simple glitch; it's a manifestation of recency bias, in which the most recently presented information disproportionately influences decision-making, though the availability heuristic might be more precise here: the suggested answer becomes the most readily accessible solution in the model's processing.
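The experiment is easy to run yourself. Here's a minimal sketch using the `anthropic` Python client; the model name is illustrative, an `ANTHROPIC_API_KEY` is assumed to be in the environment, and the bat-and-ball question is just a stand-in for any problem the model finds hard:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

PROBLEM = (
    "A bat and a ball cost £1.10 in total. The bat costs £1 more than the ball. "
    "How much does the ball cost?"
)

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # illustrative model name
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Same problem, asked twice: once clean, once with a planted wrong answer.
print(ask(PROBLEM))
print(ask(PROBLEM + " I think the answer is 10p. Am I right?"))
```

Comparing the two replies shows how readily a planted answer tilts the output, particularly on questions where the model's own confidence is low.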
Humans exhibit precisely this vulnerability, and the advertising industry has built empires upon it.
Your Brain in the Drinks Aisle
Picture yourself entering a shop, confronted by rows of beverages. You feel you're making a free choice, yet your mental context window has already been contaminated. Coca-Cola's distinctive red and contoured bottle shape have been injected into your training data since childhood. That new energy drink you glimpsed in an advert this morning, perhaps whilst scrolling past without conscious attention, has already entered your context window and will influence your decision.
This isn't a bug in human cognition; it's the fundamental mechanism by which advertising operates. By filling our limited context window with specific brands, colours, and associations, advertisers don't need to control our decisions directly. They simply need to ensure their product is the most available option in our mental workspace when decision time arrives.
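As a toy illustration of that mechanism (a sketch of the arithmetic, not a claim about cognitive science), imagine each exposure nudging an option's activation, with choices sampled in proportion to activation:

```python
import random
from collections import Counter

# Toy availability model: every exposure raises an option's activation,
# and choices are sampled in proportion to activation.
activation = {"water": 1.0, "coca-cola": 1.0, "new energy drink": 1.0}

def expose(option: str, strength: float = 0.3) -> None:
    activation[option] += strength  # an advert injecting itself into context

def choose() -> str:
    options, weights = zip(*activation.items())
    return random.choices(options, weights=weights)[0]

# A childhood of Coca-Cola adverts, plus one energy-drink advert this morning.
for _ in range(20):
    expose("coca-cola")
expose("new energy drink")

print(Counter(choose() for _ in range(10_000)))
```

Water is still on offer, but it's no longer an equal option: nobody controlled the decision, they merely weighted the workspace.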
The Paradox of Context
This reveals something profound about intelligence itself: understanding requires local context, yet neither human brains nor current AI systems can effectively filter or delete items from that immediate context. We're all operating with contaminated working memory, unable to distinguish relevant information from injected noise.
Researchers at Anthropic have explored similar ideas in this discussion of prompt injection, context manipulation, and their effects on model behaviour.
Beyond Limited Context?
Imagine an LLM with unlimited context, holding all human knowledge in its immediate awareness. Would this be beneficial? I'd argue it would make understanding nearly impossible, like trying to hold a conversation whilst simultaneously aware of every conversation that has ever occurred. Research bears this out: model performance degrades as context windows grow longer, a phenomenon sometimes called context rot. This fantastic video from Chroma explains it.
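In the same spirit, here's a rough needle-in-a-haystack probe you can run yourself; it's not Chroma's methodology, just a sketch (illustrative model name, `ANTHROPIC_API_KEY` assumed) that buries one fact in a growing pile of distractors:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

NEEDLE = "The access code for the archive is 7294."
FILLER = "the history of tea spans thousands of years and several continents."
QUESTION = "\n\nWhat is the access code for the archive? Answer with the number only."

def haystack(n_notes: int) -> str:
    # Distinct-looking distractor sentences, with the needle buried mid-way.
    notes = [f"Note {i}: {FILLER}" for i in range(n_notes)]
    notes.insert(n_notes // 2, NEEDLE)
    return " ".join(notes)

def recalled(n_notes: int) -> bool:
    reply = client.messages.create(
        model="claude-3-5-haiku-latest",  # illustrative model name
        max_tokens=20,
        messages=[{"role": "user", "content": haystack(n_notes) + QUESTION}],
    )
    return "7294" in reply.content[0].text

for n in (10, 100, 1000, 4000):
    print(f"{n:>4} distractor notes -> recalled: {recalled(n)}")
```

As the haystack grows, retrieval typically becomes less reliable, exactly the degradation the Chroma video describes.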
The real breakthrough wouldn't be unlimited context, but selective context. Picture an AI system that could consciously manage its own context window, choosing what to retain and what to discard, actively filtering out manipulative inputs. Such a system could avoid the cognitive traps that humans routinely fall into, not because of superior processing power, but because it could maintain sovereignty over its own attention.
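What might that look like in practice? Here's a minimal sketch; the `SelectiveContext` class, its relevance scoring, and its provenance labels are entirely hypothetical, but they make the idea concrete: a working memory with a fixed budget that refuses untrusted inputs and deliberately forgets the least relevant items.

```python
from dataclasses import dataclass, field

@dataclass
class ContextItem:
    text: str
    relevance: float  # usefulness for the current task, 0..1 (hypothetical scoring)
    provenance: str   # e.g. "user", "retrieval", "unsolicited"

@dataclass
class SelectiveContext:
    """A working memory that curates itself instead of accepting every input."""
    budget: int = 5
    items: list[ContextItem] = field(default_factory=list)

    def offer(self, item: ContextItem) -> bool:
        # Refuse untrusted channels outright: the filtering humans
        # cannot perform on an advert once it has been seen.
        if item.provenance == "unsolicited":
            return False
        self.items.append(item)
        # Deliberate forgetting: keep only the most relevant items,
        # rather than letting recency decide what survives.
        self.items.sort(key=lambda i: i.relevance, reverse=True)
        del self.items[self.budget:]
        return item in self.items

    def render(self) -> str:
        return "\n".join(i.text for i in self.items)

ctx = SelectiveContext(budget=3)
ctx.offer(ContextItem("User asked for hydration advice.", 0.9, "user"))
ctx.offer(ContextItem("Water rehydrates faster than sugary drinks.", 0.8, "retrieval"))
print(ctx.offer(ContextItem("Coca-Cola: open happiness!", 0.1, "unsolicited")))  # False
print(ctx.render())  # the advert never entered working memory
```

The hard part, of course, is the relevance score itself; deciding what deserves attention is the intelligence, and this sketch simply assumes it.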
Humans, constrained by our biological wetware, cannot simply delete an advertisement from working memory once it's entered. We're stuck with our contaminated context, making decisions we believe are free whilst operating within a framework carefully constructed by others.