Moderation Transparency Reports Policy

Effective Date: May 1, 2025

1. Purpose

Transparency reports provide users, regulators, and the public with clear, factual information about our content moderation activities, enforcement actions, and appeals. These reports are a key part of our compliance with the EU Digital Services Act, FTC guidelines, and our own Trust and Safety framework.

2. Reporting Frequency

  • Semiannual Reports: We publish moderation transparency reports every six months, in line with DSA requirements for platforms of our size.
  • Additional Updates: We may issue interim updates in response to significant events, regulatory requests, or at our discretion.

3. Report Content

Each report will include, at minimum (a machine-readable sketch of this structure follows the list):

  • Content Removal Statistics:
    • Number of pieces of content removed, by category (e.g., CSAM, Non-Consensual Content, Hate Speech, Terrorist Content, Spam).
    • Number of account suspensions, terminations, or other enforcement actions.
  • Appeals and Outcomes:
    • Number of appeals submitted by users.
    • Outcomes of appeals (e.g., reinstated, upheld, modified).
  • Automated vs. Human Moderation:
    • Percentage of actions initiated by automated systems vs. human moderators.
    • Accuracy rates and error rates for automated moderation (where available).
  • User Reporting:
    • Number of user reports received, by category.
    • Average response and resolution times.
  • Law Enforcement Requests:
    • Number of legal requests received from law enforcement for user data or content removal, and actions taken in response.
  • Recommender System Transparency:
    • High-level description of how content is recommended or prioritized, as described in our Terms of Service.
  • Notable Trends and Policy Changes:
    • Summaries of any significant changes to moderation practices, policies, or enforcement priorities.
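
For illustration, the sketch below shows one way the report's quantitative fields could be organized for machine-readable publication, together with one possible proxy for the automated-moderation error rate. All class, field, and function names are hypothetical assumptions, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    """Hypothetical machine-readable shape for one reporting period."""
    period_start: str  # ISO 8601 date, e.g. "2025-01-01"
    period_end: str    # ISO 8601 date, e.g. "2025-06-30"
    removals_by_category: dict[str, int] = field(default_factory=dict)
    enforcement_actions: dict[str, int] = field(default_factory=dict)  # suspensions, terminations, ...
    appeals_submitted: int = 0
    appeal_outcomes: dict[str, int] = field(default_factory=dict)      # reinstated, upheld, modified
    automated_action_share: float = 0.0  # fraction of actions initiated by automation
    user_reports_by_category: dict[str, int] = field(default_factory=dict)
    avg_response_hours: float = 0.0
    avg_resolution_hours: float = 0.0

def automated_error_rate(overturned_on_appeal: int, automated_actions: int) -> float:
    """One proxy for the 'error rate' metric: automated actions later
    overturned on appeal, as a share of all automated actions."""
    return overturned_on_appeal / automated_actions if automated_actions else 0.0
```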

4. Data Integrity and Privacy

  • All published data is aggregated and anonymized to protect user privacy, in accordance with our Privacy Policy and Data Retention Policy (see the sketch after this list).
  • We do not disclose personally identifiable information in transparency reports.
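
As a sketch of what "aggregated and anonymized" can mean in practice, the snippet below counts enforcement events per category and suppresses small cells so that no published figure can be traced back to a handful of users. The MIN_COUNT threshold is a hypothetical assumption, not a figure from this policy.

```python
from collections import Counter

MIN_COUNT = 10  # hypothetical suppression threshold, not from this policy

def aggregate_for_publication(event_categories: list[str]) -> dict[str, str]:
    """Count events per category; report small cells as a bounded range.
    Assumes user identifiers were already stripped upstream."""
    counts = Counter(event_categories)
    return {cat: (str(n) if n >= MIN_COUNT else f"<{MIN_COUNT}")
            for cat, n in counts.items()}

# Exact counts appear only above the threshold:
print(aggregate_for_publication(["Spam"] * 120 + ["Hate Speech"] * 3))
# {'Spam': '120', 'Hate Speech': '<10'}
```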

5. Accessibility

  • Reports are published on our Policy Directory and are freely accessible to users, regulators, and researchers.
  • Archived reports are maintained for a minimum of five years.

6. Researcher and Regulator Access

  • Qualified researchers and regulators may request additional, non-public data for independent assessment, subject to review and compliance with privacy and legal standards.

7. Feedback and Continuous Improvement

  • Users and stakeholders are encouraged to provide feedback on our transparency reporting via our contact form.
  • We review and update our reporting practices regularly to reflect regulatory changes, best practices, and community needs.

8. Updates

We may update this policy as required by law or platform needs. Updates will be posted on this page with a new effective date. Continued use of the site constitutes acceptance of the revised policy.

9. Related Policies

This policy should be read in conjunction with:

  • Privacy Policy
  • Data Retention Policy
  • Trust & Safety Policy
  • Terms of Service

Non-Consensual Content (NCC) Policy

Effective Date: May 1, 2025

Spectrum for Men (“we”, “us”, “our”) strictly prohibits non-consensual content (NCC): any intimate, sexual, or explicit material shared without the subject’s consent. This policy outlines our prohibition, reporting process, and enforcement actions to protect user privacy and safety.

1. Prohibited Content

  • NCC includes:
    • Intimate images/videos taken or shared without consent (e.g., revenge porn, hidden camera footage).
    • Threats to distribute NCC for coercion, harassment, or extortion (“sextortion”).
    • Deepfakes or manipulated media depicting individuals in explicit scenarios without consent.
  • NCC violates our Community Guidelines and is illegal in many jurisdictions.

2. Reporting NCC

  • How to Report:
    Submit a report via our 24/7 NCC Reporting Form. Include:
    • URL(s) of the content.
    • Proof of consent (if disputing a report).
    • Your contact information (optional).
  • Confidentiality: Reports are confidential. Retaliation against reporters is prohibited.

3. Removal Process

  • Review: Reports are prioritized and reviewed within 2 hours by our Trust & Safety team.
  • Action:
    • Confirmed NCC is removed immediately.
    • Uploaders receive a violation notice and may appeal via our Appeals Process.
  • Preservation: Content may be retained for 90 days for legal investigations (see Data Retention Policy; a sketch of the retention window follows this list).
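
A minimal sketch of the 90-day preservation window, assuming UTC timestamps and no legal hold (a hold would extend retention); the function name is illustrative.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # preservation window cited in this section

def purge_eligible_at(removed_at: datetime) -> datetime:
    """Earliest time preserved content becomes eligible for purge,
    absent a legal hold that extends retention."""
    return removed_at + timedelta(days=RETENTION_DAYS)

removed = datetime(2025, 5, 1, tzinfo=timezone.utc)
print(purge_eligible_at(removed).date())  # 2025-07-30
```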

4. Consequences for Violations

  • First Offense: Permanent account termination.
  • Legal Action: NCC is reported to law enforcement per our Law Enforcement Guidelines.
  • Platform Bans: Violators are added to industry-wide anti-abuse databases.

5. Support for Victims

  • Victims of NCC can reach our support team via our ticketing system for help with reporting, removal, and appeals.
  • We work with external organizations that provide support for victims of abuse (see the Trust & Safety Policy, Section 7).

6. Cross-References

This policy works with:

  • Community Guidelines
  • Appeals Process
  • Data Retention Policy
  • Law Enforcement Guidelines

7. Updates

Changes will be posted here. Continued use of the site constitutes acceptance.

8. Related Policies

  • Trust & Safety Policy

Spectrum for Men Trust & Safety Policy

Effective Date: May 9, 2025

Spectrum for Men (“we,” “us,” “our”) is committed to providing a safe, inclusive, and trustworthy environment for our adult community. This Trust & Safety Policy outlines our approach to protecting users, preventing abuse, and ensuring compliance with legal and ethical standards. Our efforts are supported by dedicated resources, technical safeguards, and clear procedures, as detailed below.

1. Principles

  • Safety First: User safety, privacy, and trust are core values guiding all aspects of our platform.
  • Inclusivity & Respect: We foster an environment where all Gay, Bi, and Queer men can participate free from harassment, discrimination, or illegal activity.
  • Transparency: We clearly communicate our policies, enforcement actions, and user rights.

2. Policy Framework

Our Trust & Safety framework is built on three pillars:

  • Principles: Defining our values and acceptable behavior.
  • Policies: Setting clear rules for content, conduct, and participation (see Community Guidelines, Acceptable Use Policy, and User-Generated Content Agreement).
  • Procedures: Outlining how we enforce rules, handle reports, and support users.

3. Content Moderation

  • Pre-Publication Review: Content (including text, images, and videos) is subject to a rigorous screening process that combines automated and human moderation before publication, especially in age-restricted areas.
  • Post-Publication Monitoring: Ongoing moderation and user reporting mechanisms ensure continuous oversight. Content may be re-evaluated and removed if found to violate policies.
  • Explicit Content Controls: Only verified users aged 18+ may access or contribute to explicit content, which is restricted to designated sections.

4. User Verification & Age Restriction

  • Strict Age Checks: All users must complete age verification using approved methods (e.g., government-issued ID, third-party verification) before accessing restricted content; a minimal gating sketch follows this list.
  • No Minors Permitted: Our platform is strictly for adults; minors are not allowed under any circumstances.
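
A minimal sketch of how access gating can follow from the verification requirement. The User model and flag are hypothetical, and the actual verification step (ID check, third-party provider) happens outside this snippet.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    age_verified: bool = False  # set only after an approved verification method succeeds

def open_restricted_section(user: User) -> str:
    """Gate restricted content behind the verification flag; the flag is
    flipped elsewhere by the (unspecified) verification provider."""
    if not user.age_verified:
        raise PermissionError("Age verification required before accessing restricted content.")
    return "restricted-content-feed"
```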

5. Reporting & Appeals

  • User Reporting: Users can easily report content or behavior that violates our policies via platform tools or support channels.
  • Timely Response: Reports are reviewed promptly, with a target response time of seven (7) business days, extendable for complex cases.
  • Appeals Process: Users may appeal moderation decisions through our formal Appeals Process. Decisions about non-consensual or disputed content may be referred to a neutral body if unresolved.

6. Technical & Organizational Safeguards

  • AI & Human Moderation: We employ both automated tools and human reviewers to detect and address harmful, illegal, or inappropriate content.
  • Keyword & Hash Filtering: Proactive systems flag suspicious terms and prevent re-upload of previously removed content using digital fingerprinting (see the sketch after this list).
  • Data Security: All personal and verification data is encrypted, stored securely, and retained only as necessary for compliance and safety.
  • No Downloading: Downloading of user content is prohibited, except where technically unavoidable, to reduce the risk of unauthorized redistribution.
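
One way to read the digital-fingerprinting safeguard is as a blocklist of hashes of previously removed files, sketched below. The choice of SHA-256 and all names are assumptions; an exact cryptographic hash only catches byte-identical re-uploads, so production systems typically add perceptual hashing to match re-encoded or lightly edited copies.

```python
import hashlib

removed_fingerprints: set[str] = set()  # hashes of previously removed files

def fingerprint(data: bytes) -> str:
    """Digital fingerprint of an uploaded file (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Record a removed file so identical re-uploads can be blocked."""
    removed_fingerprints.add(fingerprint(data))

def is_blocked(data: bytes) -> bool:
    """True if this exact file was removed before."""
    return fingerprint(data) in removed_fingerprints
```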

7. Partnerships & Resources

  • External Resources: We collaborate with organizations and utilize third-party services to enhance safety, such as digital identity verification and support for victims of abuse.
  • User Support: Resources are available to assist users with safety concerns, reporting, and appeals. Our support team can be reached via our ticketing system.

8. Legal & Regulatory Compliance

  • Zero Tolerance for Illegal Content: We prohibit all forms of illegal content, including non-consensual material, child sexual abuse material (CSAM), and exploitation.
  • Cooperation with Authorities: We cooperate with law enforcement as required and comply with all applicable laws and regulations.
  • Supplier & Partner Standards: All partners and suppliers must adhere to our Supplier Code of Conduct and Anti-Slavery and Human Trafficking Statement.

9. Continuous Improvement

  • Ongoing Review: We regularly review and update our policies, moderation tools, and procedures to address emerging risks and maintain high standards of safety.
  • User Feedback: We welcome feedback to improve our Trust & Safety practices.

10. Contact & Further Information

For questions, reporting, or support regarding Trust & Safety, contact us via our support channels. For detailed procedures, see our Community Guidelines, Acceptable Use Policy, Appeals Process, and other referenced policies.

By using Spectrum for Men, you agree to abide by this Trust & Safety Policy and all related policies.