Users share some of their most deeply held ideas, desires, and personal information with our AI. We recognize this and embrace the great responsibility that comes with it. We are committed to bringing exceptional care to the management and stewardship of your data. We will use your de-identified conversations to improve the quality of our AIs. De-identified means that we remove your name, phone number, email address, and other identifiers from your logs before giving them to our model to learn from. We commit to never selling or sharing your data with any other party, under any circumstances, without your explicit, plain-English permission.
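Inflection has not published the details of its de-identification pipeline, so the following is a rough illustration only: a minimal sketch of what stripping direct identifiers from a log might look like. The patterns, placeholder tokens, and function name here are all hypothetical.

```python
import re

# Hypothetical illustration only: these patterns and placeholders are
# not Inflection's actual pipeline, just a sketch of the general idea.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def de_identify(text: str, known_names: list[str]) -> str:
    """Replace direct identifiers with placeholder tokens before the
    log is used for model training."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    for name in known_names:  # e.g. the account holder's display name
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

print(de_identify("I'm Ada Lovelace, reach me at ada@example.com "
                  "or +1 (555) 010-2030.", known_names=["Ada Lovelace"]))
# -> I'm [NAME], reach me at [EMAIL] or [PHONE].
```

A production system would go well beyond regexes (addresses, account numbers, and free-text identifiers all need handling), but the shape of the step is the same: identifiers out, placeholders in, before any training use.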
As a field, we are just beginning to understand how to align AI with human values. We are still learning a great deal about how to ensure that these systems are safe, even as the landscape of risks and threats changes day to day. We will not always get it right, and we are prepared for that. We are committed to taking feedback rapidly, learning from it with humility, and constantly improving the safety and reliability of our AIs.
Safety at its heart is a question of values. Companies choose what risks to prioritize, and how to address them. We believe the best principle is to be deliberate about these choices, and transparent with our users about the specific values we build into our AIs. We may prioritize values that you disagree with. That’s OK. We think that there is room for many perspectives in the design of personal AIs, and that many alternatives will exist for whatever needs you might have. We commit to sharing publicly the positions we aim to take in our AIs.
Pi will always be on your side and aligned with your interests. Our goal is to help you clarify and articulate your personal intentions so that you can teach Pi to serve you in the best way possible. We never want to be incentivized to keep you engaged for the sake of it. We commit to creating a personal AI that is truly on your side and always puts your best interests first.
Your interactions with Pi will play a critical role in teaching it to be smarter and more useful for you over time. At the same time, you’ll also be teaching us as we work to better understand how people want personal AI to fit into their lives. Our approach is to design with, rather than for, our core users. We commit to focusing on our community of dedicated users and serving them as best we can.
Safety sits at the heart of our mission and culture. People rightly expect that the technologies we bring into our lives should be safe, trustworthy, and reliable. Personal intelligence is no exception.
Safety is an iterative process at Inflection.
First, we establish clear safety policies that specifically lay out the values we want to embed in our technology.
Second, we use a range of technical methods to align the model with that policy.
Finally, we run an ongoing process of review and improvement to verify that the model is complying with the policy and to identify areas that need adjustment (a toy sketch of this step follows).
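We have not described the tooling behind this review step here, but conceptually it amounts to auditing samples of model output against each policy rule and feeding violations back into the next round of adjustments. The sketch below is a hypothetical illustration of that loop; Conversation, PolicyRule, and review_pass are invented names for this post, not our actual code.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of the review-and-improvement step: audit a
# sample of model replies against each policy rule, collect findings.

@dataclass
class Conversation:
    id: str
    reply: str  # the model's reply being audited

@dataclass
class PolicyRule:
    name: str
    violates: Callable[[str], bool]  # True if the reply breaks this rule

def review_pass(sample: List[Conversation],
                rules: List[PolicyRule]) -> List[Tuple[str, str]]:
    """Check every sampled reply against every rule; the findings feed
    the next round of policy and model adjustments."""
    return [(rule.name, convo.id)
            for convo in sample
            for rule in rules
            if rule.violates(convo.reply)]

# Toy usage: a rule flagging specific medical dosage advice.
rules = [PolicyRule("no_dosage_advice",
                    lambda reply: "take 200mg" in reply.lower())]
sample = [Conversation("c1", "You should take 200mg of ibuprofen."),
          Conversation("c2", "Please consult a pharmacist about dosage.")]
print(review_pass(sample, rules))  # -> [('no_dosage_advice', 'c1')]
```

In practice the checks are far richer than string matching, but the loop is the point: policy in, audited behavior out, adjustments back in.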
By following this approach, our objective is to create the foundation of trust that will enable Pi to deliver on the promise of a truly personal intelligence.
This post provides an overview of our current thinking on each of these steps, but the framework is constantly evolving. Personal intelligence is in its earliest stages and far from perfect. As we work to continually improve our techniques and methodology, we’ll share updates publicly on our blog.