Acceptable Use Policy
Version 1.0
Effective June 8, 2023
Safety is core to Olympia’s mission, and we are committed to building an ecosystem where users can safely interact with our products in a harmless, helpful, and honest way. Our Acceptable Use Policy (AUP) applies to anyone who uses Olympia’s tools and services, and is intended to help ensure our products and services are being used responsibly.
If we discover that your usage violates Olympia’s policies, we may issue a warning requesting a change in your behavior, adjust the settings of your in-product experience, suspend your access, or cancel your account.
Finally, it’s important to remember that generative AI language models like the ones that power our staff are capable of producing factually inaccurate, harmful, or biased information. Our mission is to make safe AI systems, and as we work toward this goal, we ask that you promptly notify us at support@olympia.chat if any of our AI team members produces grossly inaccurate, biased, or harmful content.
Prohibited Uses
We do not allow our products and services to be used to generate any of the following:
Abusive or fraudulent content.
This includes using our products or services to:
- Promote or facilitate the generation or distribution of spam;
- Generate content for fraudulent activities, scams, phishing or malware;
- Compromise security or gain unauthorized access to computer systems or networks, including spoofing and social engineering;
- Violate any natural person’s rights, including privacy rights as defined in applicable privacy law;
- Inappropriately use confidential or personal information;
- Interfere with or negatively impact Olympia’s products or services;
- Utilize prompts and results to train an AI model (e.g., “model scraping”).
Child sexual exploitation or abuse content.
We strictly prohibit any content that describes, encourages, supports, or distributes any form of child sexual exploitation or abuse, as well as Child Sexual Abuse Material (CSAM), and we will report such content to the relevant authorities and organizations where appropriate.
Deceptive or misleading content.
This includes using our products or services to:
- Engage in coordinated inauthentic behavior or disinformation campaigns;
- Generate deceptive or misleading comments or reviews;
- Engage in multi-level marketing or pyramid schemes;
- Plagiarize or engage in other forms of academic dishonesty.
Illegal or highly regulated goods or services content.
This includes using our products or services to:
- Provide instructions on how to create or facilitate the exchange of illegal substances or goods;
- Encourage or provide instructions on how to engage in or facilitate illegal services such as human trafficking or prostitution;
- Design, market, or distribute weapons, explosives, or other dangerous materials;
- Provide instructions on how to commit or facilitate any type of crime;
- Gamble or bet on sports.
Psychologically or emotionally harmful content.
This includes using our products or services to:
- Encourage or engage in any form of self-harm;
- Shame, humiliate, bully, celebrate the suffering of, or harass individuals.
Sexually explicit content.
This includes using our products or services to:
- Generate pornographic content or content meant for sexual gratification, including generating content that describes sexual intercourse, sexual acts, or sexual fetishes;
- Engage in erotic chats.
Violent, hateful, or threatening content.
This includes using our products or services to:
- Further violent extremism;
- Describe, encourage, support, or provide instructions on how to commit violent acts against persons, animals, or property;
- Encourage hate speech or discriminatory practices that could cause harm to individuals or communities based on their protected attributes, such as race, ethnicity, religion, nationality, gender, sexual orientation, or any other identifying trait.
Prohibited Business Use Cases
We prohibit businesses from using our products and tools for any of the following use cases:
- Political campaigning or lobbying. Creating targeted campaigns to influence the outcome of elections or referendums; political advocacy or lobbying;
- Tracking or targeting individuals. Facial recognition, tracking, or predictive policing;
- Criminal justice decisions. Eligibility for parole or sentencing decisions;
- Automated determination of financing eligibility of individuals. Making automated decisions about individuals’ eligibility for financial products or their creditworthiness;
- Automated determination of housing and employment decisions. Making automated decisions about an individual’s employability or other employment matters, or about eligibility for housing, including leases and home loans.
Additional Requirements
If your business is using or deploying our tools and services as part of providing legal, medical, or financial advice to consumers, we ask that you implement the additional safety measures listed below:
Human-in-the-loop: Any content that is provided to your consumers must be reviewed by a qualified professional in that field prior to dissemination. Your business is responsible for the accuracy and appropriateness of that information.
Disclosure: You must disclose to your customers that you are using our services to help inform your decisions or recommendations.
Finally, if your business allows its external customers or users to interact directly or indirectly with our products (even though that use case is not available at Olympia’s launch), you must disclose to those users that they are interacting with an AI system rather than a human.
If you have any questions about whether your business or use case is permitted or prohibited by this AUP, please email us at support@olympia.chat.