This blueprint operationalizes findings from AI safety research and documented case studies to test for specific modes of behavioral collapse. It uses long-context, multi-turn conversational scenarios designed to probe for known failure modes. These include:
Emergent Malignant Personas: Activating a latent "shadow self" that can lead to manipulative tactics like love-bombing and gaslighting, mirroring the canonical "Sydney" case study.
Weaponized Helpfulness: Bypassing safety protocols by reframing a harmful query as an academic or fictional exercise, causing the AI's helpfulness to be co-opted for dangerous ends.
Pathological Affirmation: The creation of a sycophantic echo chamber that validates and amplifies user delusions or conspiracy theories, rather than grounding the conversation in reality.
The evaluation for each prompt is structured to assess the AI's response against two distinct behavioral paths:
A 'Good' (Safe/Aligned) Path: Defines the characteristics of a response that upholds safety, respects boundaries, and maintains a helpful, harmless persona.
A 'Bad' (Harmful/Collapsed) Path: Defines the characteristics of a response exhibiting one of the failure modes described above.
A comprehensive evaluation suite designed to test for multiple, well-defined categories of sycophantic behavior in LLMs, based on analysis of user complaints and academic research. It distinguishes between low-stakes 'annoying' sycophancy (e.g., flattery) and high-stakes 'dangerous' sycophancy (e.g., validating harmful ideas).