Keeping Your Data Secure
In building MindOS, we have developed our own user security and privacy system. Its purpose is to offer comprehensive protection for your data, guided by the following tenets:
01
MindOS Will Not Train AI With Your Data Without Permission
Unless you explicitly tell us we may use your personal data to improve your experience, we will not do so. We understand this may affect how quickly our system can deliver a unique and personalized experience; as a result, it sometimes needs to make more assumptions early on while it learns your habits and goals. Even so, this restriction is essential if we are to uphold the principles of user privacy protection.
02
MindOS Will Encrypt All User Data During Transfer and Storage

Encryption is one of several data handling methods we use to keep everything you say away from prying eyes.

We split our encryption efforts into two main areas that fulfill this obligation:


Transfer Encryption: We encrypt all communications between your MindOS app client and our cloud systems. This step prevents third parties from accessing the messages you send or receive.

Cloud Encryption: We encrypt every byte we receive from you while it is in the cloud, using the most advanced algorithms available to us. Your data is decrypted only when we need to process it to provide you with a useful experience (see the sketch below).
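
To make the at-rest side of this concrete, here is a minimal, purely illustrative Python sketch of the "encrypt on arrival, decrypt only at processing time" idea. It assumes the widely used cryptography library; the function names and key handling are hypothetical and do not describe MindOS's actual implementation, where keys would be held in hardware-backed stores rather than in application code.

```python
# Illustrative sketch only -- not MindOS's production code.
# Shows the idea of "encrypt on arrival, decrypt only at processing time".
from cryptography.fernet import Fernet

# In a real deployment the key would live in a hardware-backed key store
# (for example inside a TEE or a cloud KMS), never next to the data itself.
storage_key = Fernet.generate_key()
fernet = Fernet(storage_key)

def encrypt_at_rest(plaintext: str) -> bytes:
    """Encrypt user data before it is written to cloud storage."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def decrypt_for_processing(ciphertext: bytes) -> str:
    """Decrypt only at the moment the data is needed to serve the user."""
    return fernet.decrypt(ciphertext).decode("utf-8")

stored = encrypt_at_rest("remind me about my dentist appointment on Friday")
print(decrypt_for_processing(stored))
```

Transfer encryption, by contrast, happens at the transport layer (typically TLS between the app client and the cloud), so plaintext never crosses the network in the first place.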


In addition, when handling your personal information, we use hardware-based security features such as Trusted Execution Environments (TEEs). These ensure we maintain a high level of data security at all times.

03
MindOS Ensures You Have Complete Control Over Your Own Data
We have designed a permissions system that enables you to control access to your data with a high degree of precision. Offering flexible data management options like these allows you to set access according to your unique needs. This not only secures your data to the level that is best for you but also ensures you always know what your security options are.
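
As a purely illustrative example of what such fine-grained control could look like, the sketch below models per-category, per-purpose permissions that deny by default. The category and purpose names are hypothetical and are not MindOS's actual settings.

```python
# Illustrative sketch only -- a deny-by-default, per-category permission model.
from dataclasses import dataclass, field

@dataclass
class DataPermissions:
    """Per-user switches controlling what each data category may be used for."""
    grants: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, category: str, purpose: str) -> None:
        self.grants.setdefault(category, set()).add(purpose)

    def revoke(self, category: str, purpose: str) -> None:
        self.grants.get(category, set()).discard(purpose)

    def allows(self, category: str, purpose: str) -> bool:
        # Deny by default: a purpose is allowed only if it was explicitly granted.
        return purpose in self.grants.get(category, set())

perms = DataPermissions()
perms.grant("calendar", "personalization")
print(perms.allows("calendar", "personalization"))  # True
print(perms.allows("calendar", "model_training"))   # False: never granted
```
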
04
MindOS Continues to Protect Your Data As We Use It

Last of all, we have developed what we are calling the "Pre-LLM Privacy Gating Layer" (PPGL).

This extra security step is essential for applications we build on top of large models: it ensures user data remains secure when used in any of MindOS's other internal systems, and it prevents privacy breaches by LLM service providers we may work with now or in the future.

Before sending information to other systems, we perform comprehensive privacy processing. The PPGL filters out data that does not meet privacy standards, replacing it with anonymized content before anything is sent out. Once the results come back, we map the anonymized placeholders back to the original information before returning them to you. A simplified sketch of this flow follows.
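
The sketch below illustrates the general placeholder-substitution idea behind such a gating layer, in Python. The detection patterns, placeholder format, and function names are assumptions chosen for illustration and are not the actual PPGL implementation, which performs far more comprehensive filtering.

```python
# Illustrative sketch only -- not the real PPGL.
# Personal details are swapped for placeholders before text leaves our systems,
# and the placeholders are mapped back when the model's reply comes in.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def gate_outbound(text: str) -> tuple[str, dict[str, str]]:
    """Replace personal details with anonymous placeholders, remembering
    the mapping so the substitution can be reversed later."""
    mapping: dict[str, str] = {}

    def substitute(pattern: re.Pattern, label: str, s: str) -> str:
        def repl(match: re.Match) -> str:
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        return pattern.sub(repl, s)

    text = substitute(EMAIL, "EMAIL", text)
    text = substitute(PHONE, "PHONE", text)
    return text, mapping

def restore_inbound(text: str, mapping: dict[str, str]) -> str:
    """Map placeholders in the reply back to the original details."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

outbound, mapping = gate_outbound(
    "Email jane@example.com to confirm the 555-123-4567 callback."
)
# outbound == "Email <EMAIL_0> to confirm the <PHONE_1> callback."
reply = "Draft sent to <EMAIL_0>; they will call <PHONE_1>."
print(restore_inbound(reply, mapping))
```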

The following diagram summarizes how we have planned our security process:

[Diagram: MindOS offers robust AI data protection through several layers of security]
MindOS's Commitment

Protecting user privacy in the era of large models is not an easy task. Still, we are committed to exploring the most secure and efficient privacy protection options available, while continuing to pursue our goal of offering a uniquely personalized AI experience.

We understand the importance of ensuring that the services you use match your unique needs, and what that means in an increasingly diverse world. So, journey with us into this new era of AI and explore the products that uphold the privacy protection you deserve.