Large Language Models Are Not Monolithic

A comprehensive research dashboard examining how LLMs express diverse personalities across contexts

Core Thesis: Most AI research treats a language model as a single entity with a fixed personality. We show that this assumption is incorrect. Through systematic analysis of 7,700 observations across 11 models, we demonstrate that LLMs express fundamentally different emotional profiles when given personality-shaping instructions.

7,700
Observations Analyzed
11
Language Models Tested
2.1×
Personality Diversity (Chat vs Base)
10
Emotion Categories Measured
🎨

Main Dashboard

Interactive visualization of 7 personality distribution analyses plus 10 emotion significance heatmaps. Explore how different models and contexts shape emotional expression.

📊 7 Visualizations + 🔥 10 Heatmaps
📚

Methodology & Docs

Deep dive into our research methodology: 40,000+ words covering data extraction, statistical analysis, and interpretation guidelines. Everything you need to understand and cite our work.

📖 Complete Research Documentation

✨ What You'll Discover

📊

Personality Plasticity

Models DO change their emotional output when given personality instructions; they are not monolithic.

🔍

Model-Specific Differences

Different models exhibit different personality ranges, suggesting architecture-specific constraints.

🎯

Training Impact

RLHF-trained chat models show 2.1× the personality diversity of base models.
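A diversity ratio like this can be defined in several ways; the sketch below assumes one plausible definition, mean pairwise distance between per-persona emotion profiles, and uses entirely synthetic data. The profile shapes, persona counts, and noise levels here are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: one way a "personality diversity" ratio could be computed,
# as mean pairwise Euclidean distance between persona emotion profiles for
# a chat model vs a base model. All data below is synthetic.
import numpy as np

def diversity(profiles: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between persona emotion profiles.

    profiles: (n_personas, n_emotions) array of mean emotion scores.
    """
    n = len(profiles)
    dists = [np.linalg.norm(profiles[i] - profiles[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(42)
n_personas, n_emotions = 7, 10

# Both models share a common center; the "chat" model's persona profiles
# spread more widely around it (the hypothesized plasticity effect).
center = rng.random(n_emotions)
base_profiles = center + rng.normal(0, 0.05, (n_personas, n_emotions))
chat_profiles = center + rng.normal(0, 0.12, (n_personas, n_emotions))

ratio = diversity(chat_profiles) / diversity(base_profiles)
print(f"diversity ratio (chat/base): {ratio:.2f}")
```

With the chat profiles spread more widely than the base profiles, the ratio comes out above 1, mirroring the direction of the reported 2.1× finding.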

📈

Statistical Significance

All findings validated with t-tests and FDR correction for multiple comparisons.
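The t-test-plus-FDR recipe mentioned above can be sketched as follows. This is a minimal illustration on synthetic scores, not the study's code: the emotion names, sample sizes, and effect sizes are invented, and Benjamini-Hochberg is assumed as the FDR procedure since the document does not name one.

```python
# Hedged sketch: per-emotion Welch t-tests comparing chat vs base scores,
# with Benjamini-Hochberg FDR correction across the 10 emotion categories.
# All data is synthetic; the first three emotions get a real mean shift.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
emotions = [f"emotion_{i}" for i in range(10)]

pvals = []
for i, name in enumerate(emotions):
    base = rng.normal(0.0, 1.0, 350)
    chat = rng.normal(0.5 if i < 3 else 0.0, 1.0, 350)
    _, p = ttest_ind(chat, base, equal_var=False)  # Welch's t-test
    pvals.append(p)

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest i such that p_(i) <= (i / m) * alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

significant = fdr_bh(pvals)
for name, p, sig in zip(emotions, pvals, significant):
    print(f"{name}: p={p:.3g} {'*' if sig else ''}")
```

The three emotions with a genuine mean shift survive the correction, while the correction guards the remaining tests against false positives from running 10 comparisons at once.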

🔥

Emotion Significance

10 emotion heatmaps showing which emotions vary significantly across models and contexts.

📏

Publication-Ready

All visualizations at 150 DPI with complete methodology documentation.

Ready to Explore?

Jump into the interactive dashboard to see the evidence for yourself

🐱