A comprehensive research dashboard examining how LLMs express diverse personalities across contexts
Core Thesis: Most AI research treats language models as single entities with fixed personalities. We show this is incorrect. Through systematic analysis of 7,700 observations across 11 models, we demonstrate that LLMs express fundamentally different emotional profiles when given personality-shaping instructions.
Interactive visualization of 7 personality distribution analyses plus 10 emotion significance heatmaps. Explore how different models and contexts shape emotional expression.
Deep dive into our research methodology: 40,000+ words covering data extraction, statistical analysis, and interpretation guidelines. Everything you need to understand and cite our work.
Models DO change emotional output when given personality instructions: they're not monolithic.
Different models have different personality ranges, suggesting diverse architectural constraints.
RLHF training increases personality diversity 2.1× in chat models vs. base models.
All findings validated with t-tests and FDR correction for multiple comparisons.
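The validation step above can be sketched in a few lines. This is a minimal, hypothetical illustration (the emotion names, group sizes, and effect sizes are made up, not the study's data): run a Welch's t-test per emotion comparing base vs. persona-instructed outputs, then apply Benjamini-Hochberg FDR correction across the family of comparisons.

```python
# Hedged sketch of per-emotion t-tests with Benjamini-Hochberg FDR correction.
# All data below is simulated for illustration only.
import numpy as np
from scipy import stats

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg: return a boolean mask of significant p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adjusted = p[order] * m / (np.arange(m) + 1)          # p_(i) * m / i
    adjusted = np.minimum.accumulate(adjusted[::-1])[::-1]  # enforce monotonicity
    significant = np.zeros(m, dtype=bool)
    significant[order] = adjusted <= alpha
    return significant

rng = np.random.default_rng(0)
# Hypothetical emotion scores: base model vs. persona-instructed model.
emotions = ["joy", "anger", "fear"]
base = {e: rng.normal(0.0, 1.0, 350) for e in emotions}
persona = {e: rng.normal(mu, 1.0, 350) for e, mu in zip(emotions, [0.4, 0.0, 0.5])}

# Welch's t-test (unequal variances) per emotion, then FDR across the family.
pvals = [stats.ttest_ind(base[e], persona[e], equal_var=False).pvalue for e in emotions]
for e, p, sig in zip(emotions, pvals, fdr_bh(pvals)):
    print(f"{e}: p={p:.4f} significant={sig}")
```

Correcting across the whole family of model-by-emotion comparisons (rather than testing each at α = 0.05 alone) is what keeps 110 simultaneous tests from producing spurious "significant" emotions.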
10 emotion heatmaps showing which emotions vary significantly across models and contexts.
All visualizations at 150 DPI with complete methodology documentation.
Jump into the interactive dashboard to see the evidence for yourself