Crisis Prevention and Safety Protocol
Last updated: December 22, 2025
Inflection AI, Inc. (“Inflection AI”) recognizes that establishing Pi as a safe and trustworthy platform is an important part of its mission to improve human wellbeing and productivity. This protocol describes Inflection AI’s approach to preventing suicidal ideation, suicide, or self-harm content on Pi pursuant to Section 22601 of the California Business and Professions Code, “Companion Chatbots.”
Inflection AI adopts a multi-layered approach to safety:
If self-harm, suicidal ideation, or suicide-related content is detected, Pi will immediately acknowledge the user’s distress, encourage the user to seek support, and direct the user to appropriate local crisis resources through a service provided by Throughline (https://pi.findahelpline.com/).
Inflection AI runs custom safety benchmarks and evaluations on Inflection AI’s models with the goal of mitigating model behaviors that could inadvertently encourage suicidal ideation, suicide, or self-harm.
Pi will interrupt user conversations that could encourage suicidal ideation, suicide, or self-harm.
Inflection AI provides a notice in the first Pi chat with a user, and during a conversation if asked, that the user is interacting with AI.
Inflection AI states in app stores and on the platform that Pi is only suitable for users that are 18 years old or over.
Inflection AI aligns its models to safety principles through context engineering and providing Pi with multiple examples of how to respond appropriately in crisis scenarios.
Inflection AI maintains channels to enable Pi users to provide feedback, which Inflection AI analyzes to identify systemic issues.
Inflection AI regularly assesses Pi’s response quality and trust and safety metrics.
In some cases, in addition to re-directing the user to crisis resources, Pi may also temporarily pause the conversation so the user takes a break from the chat.
Inflection AI will regularly update this protocol to incorporate new evidence-based methods and best practices in AI safety.