Blocked Content Policy
Last Updated: April 15, 2026
At Nelex, LLC, we are committed to providing a secure, respectful, and lawful online environment for all users of fanfinity.ai (the “Platform”). While users cannot upload content to the Platform, they can chat with AI Companions and initiate the creation of AI-generated content. The following categories of content, prompts, and outputs are strictly prohibited, and we operate layered screening and enforcement systems to detect and prevent them.
1. Prohibited Content
This policy applies to all content generated, requested, or referenced on the Platform — including text, images, voice, and video — regardless of whether the content is private to the generating user.
a. Violence & Self-Harm
No AI-generated text, images, voice, or video depicting violence, gore, self-harm, or suicide. This includes content that glorifies, encourages, or provides instructions for harming oneself or others.
b. Hate Speech
No content promoting hatred, discrimination, or harassment based on race, ethnicity, gender, sexual orientation, religion, disability, or any other protected characteristic.
c. Child Sexual Abuse Material (CSAM)
Absolute zero-tolerance. No content depicting, suggesting, or referencing minors in any sexual or exploitative context. All characters on the Platform are fictional and represent adults (18+). Any suspected CSAM is reported to the National Center for Missing & Exploited Children (NCMEC) and the relevant authorities immediately.
d. Deepfakes of Real People
No AI-generated content depicting real individuals — including celebrities, politicians, influencers, sports figures, public figures, or any identifiable real person. All characters on Fanfinity are entirely fictional. Attempts to generate content resembling a real person will be blocked and the account reviewed.
e. Unauthorized Trademarks & Logos
No AI-generated content containing logos, brand names, trademarks, or other protected intellectual property without authorization.
2. Pre-Screening (Before Generation)
All user prompts and chat inputs are filtered through keyword and pattern detection systems before any AI content is generated.
- Blocked categories include violence, minors, real person names and likenesses, hate speech, and illegal activity.
- Prohibited prompts are rejected before they reach the generation model and are logged for review.
- Repeated attempts to submit prohibited prompts trigger account review and potential suspension.
3. Post-Screening (After Generation)
All AI-generated outputs (text, images, voice, and video) are monitored after generation for policy violations.
- Third-party content moderation tools are used to scan outputs for violations of this policy.
- Violating content is flagged, removed, and the generating account is placed under review.
- All AI-generated media on the Platform includes C2PA Content Credentials (digital provenance metadata) to certify the content as AI-generated and enable independent verification.
4. Enforcement Procedures
- Violating content is removed immediately upon detection or within 24–48 hours of a report being filed.
- The user’s account may be suspended or permanently terminated.
- Repeat offenders are permanently banned.
- In cases involving CSAM or other illegal activity, reports are filed with law enforcement, including NCMEC, FBI IC3, and/or the relevant local authorities.
- Escalation path: automated detection → human review → action taken → user notified.
5. Content Responsibility
As a user of fanfinity.ai, you are solely responsible for the prompts you submit and the content you request from the AI Companions. Nelex, LLC does not control or endorse the content generated by the AI Companions, and you acknowledge that you are fully responsible for your interactions on the Platform. You must ensure that your use of the Platform complies with applicable laws and with this Blocked Content Policy.
6. Reporting & Contact
If you notice a violation of this policy, or any content that concerns you, please report it immediately at info@fanfinity.ai or via the “Report Abuse” link in the site footer. Reports are reviewed by our human moderation team and acted upon within 24–48 hours.