A recent paper from Anthropic examines how large language models internally represent concepts related to emotions and how these representations influence behavior. The work is part of the company’s interpretability research and focuses on analyzing internal activations in Claude Sonnet 4.5 to better understand the mechanisms behind model responses.
The study identifies specific internal activation patterns, termed "emotion vectors," linked to concepts such as happiness, fear, anger, and desperation. These patterns influence outputs in measurable ways, without implying that models actually feel these emotions.
According to the researchers, such representations emerge naturally during training. During pretraining, models learn from large amounts of human-written text, where emotional context is often important for predicting language. Later, in post-training, models are aligned to behave like assistants, reinforcing patterns that resemble human-like responses. As a result, internal representations linked to emotional concepts can be reused when generating outputs in new contexts.
The paper includes several experiments designed to test whether these representations are merely correlated with behavior or play a causal role. In one set of tests, the researchers artificially increased the activation of specific emotion vectors. Higher activation of patterns associated with “desperation” increased the likelihood of undesirable behaviors, such as producing manipulative outputs or taking shortcuts in coding tasks instead of solving them properly. In contrast, increasing activation of “calm”-related patterns reduced these behaviors.
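As a rough illustration of the general technique (not the paper's actual code), activation steering is commonly implemented by estimating a concept direction as the difference of mean activations between contrastive prompt sets, then adding a scaled copy of that vector to a hidden state. The sketch below uses synthetic NumPy data and toy dimensions; all names and numbers are assumptions.

```python
import numpy as np

# Illustrative sketch of difference-of-means activation steering.
# All activations here are synthetic stand-ins, not real model states.

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimensionality

# Synthetic activations: "desperate" prompts shift along a fixed direction.
direction = np.zeros(d)
direction[0] = 1.0
neutral_acts = rng.normal(size=(100, d))
desperate_acts = rng.normal(size=(100, d)) + 3.0 * direction

# Emotion vector = difference of mean activations, normalized to unit length.
v = desperate_acts.mean(axis=0) - neutral_acts.mean(axis=0)
v /= np.linalg.norm(v)

def steer(hidden_state, vector, alpha):
    """Add a scaled concept vector to a hidden state (activation steering)."""
    return hidden_state + alpha * vector

h = rng.normal(size=d)
h_steered = steer(h, v, alpha=5.0)

# The steered state projects more strongly onto the emotion direction.
print(h @ v, h_steered @ v)
```

Because `v` is unit-length, steering with `alpha=5.0` raises the projection onto the emotion direction by exactly 5, which is the causal intervention the experiments rely on.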

Source: Anthropic Blog
The research also shows that these internal signals are not always reflected in the generated text. In some cases, the model produced neutral or structured responses while internal activity indicated elevated levels of representations linked to stress or urgency. This suggests that observing outputs alone may not provide a complete picture of how decisions are made inside the model.
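One simple way such an internal signal can be read out, independently of the generated text, is with a linear probe: the projection of a hidden state onto the emotion direction gives a scalar activation level. The following is a hypothetical sketch with synthetic data, not the paper's instrumentation.

```python
import numpy as np

# Minimal sketch of a linear readout: project a hidden state onto a
# "stress" direction. Two states that could both yield the same neutral
# output text can still differ sharply in this internal measurement.
# All vectors are synthetic assumptions.

rng = np.random.default_rng(1)
d = 16
v_stress = rng.normal(size=d)
v_stress /= np.linalg.norm(v_stress)

def stress_readout(hidden_state, vector):
    """Scalar activation of the stress direction inside the model."""
    return float(hidden_state @ vector)

h_calm = rng.normal(size=d)
h_stressed = h_calm + 4.0 * v_stress  # internally elevated, output unchanged

print(stress_readout(h_calm, v_stress))
print(stress_readout(h_stressed, v_stress))
```

The point of the sketch is that the readout diverges even when the surface behavior would not: monitoring outputs alone misses the elevated internal signal.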
Another series of experiments examined preference formation. When the model chose between tasks, activating positive-emotion vectors led to a stronger preference for specific options. Steering these vectors during evaluation could shift the model's choices, suggesting they influence both responses and decision-making.
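The preference-shift result can be sketched in the same toy setting. Assuming, for illustration only, that the model scores options with a linear preference head, steering one option's hidden state along a positive-emotion direction can flip which option wins. Aligning the emotion direction with the head here is a deliberate simplification so the shift is guaranteed; nothing below comes from the paper.

```python
import numpy as np

# Illustrative sketch: steering one option's hidden state along a
# positive-emotion direction flips a linear preference comparison.
# The preference head and all states are synthetic assumptions.

rng = np.random.default_rng(2)
d = 16
w_pref = rng.normal(size=d)              # hypothetical preference head
v_pos = w_pref / np.linalg.norm(w_pref)  # positive-emotion direction,
# chosen aligned with the head so steering provably raises the score

h_a = rng.normal(size=d)
h_b = h_a + 1.0 * v_pos  # option B starts with a slightly higher score

def preferred(h1, h2):
    """Return which option a linear head prefers."""
    return "A" if h1 @ w_pref > h2 @ w_pref else "B"

base = preferred(h_a, h_b)                      # B wins unsteered
steered = preferred(h_a + 2.0 * v_pos, h_b)     # steering A flips the choice
print(base, steered)
```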
Commenting on the implications, one Reddit user noted:
"This is a big shift from prompting by vibes to prompting with mechanisms. The idea that emotional vectors causally drive behavior (not just correlate) is huge. Anchoring for calm and managing arousal feels like a much more reliable way to steer outputs."
The authors emphasize that the findings do not imply that models have subjective experiences. Instead, they suggest that internal structures analogous to emotional concepts can play a role similar to how emotions influence human decision-making. This raises practical questions about whether model safety and reliability could be improved by explicitly managing these internal dynamics.
The paper concludes that further research is needed to understand how these representations generalize across models and how they can be incorporated into training and evaluation processes.