Content Moderation Policy

Introduction

SUBBD is committed to preventing its platform from being used for the distribution of dangerous, illegal, or inappropriate content. This policy outlines SUBBD's content moderation practices and procedures, which protect the integrity of our platform and ensure a safe environment for all users.

Purpose

The purpose of SUBBD's content moderation policies is to:

  • Detect and prevent the upload and distribution of harmful content.
  • Ensure compliance with relevant legal and regulatory requirements.
  • Protect SUBBD users and stakeholders from exposure to inappropriate material.
  • Protect the SUBBD enterprise, its owners and stakeholders from legal exposure.

Scope

These procedures apply to all content creators, subscribers, and users of SUBBD. They cover all content types, including but not limited to:

  • Text posts
  • Images
  • Videos
  • Voice notes
  • Livestreams

Content Moderation Measures

User Verification

To mitigate content risks, the following measures are in place:

  • Verification of all content creators via Know Your Customer (KYC) checks, provided by a recognised supplier, SumSub (https://www.sumsub.com/kyc-compliance/).
  • Document verification, including government-issued ID if required.
  • Ongoing monitoring of user activity.

Prohibited Content

The following content is strictly prohibited:

  • Hate speech, discrimination, or harassment.
  • Child exploitation.
  • Terrorism or incitement of extremist views.
  • Impersonation.
  • Explicit or pornographic material in violation of our guidelines.
  • Content promoting violence, self-harm, or illegal activities.

Automated Detection

SUBBD employs AI-driven moderation tools to detect inappropriate content, including:

  • Violent, explicit, or harmful material.
  • Content that includes third parties who have not given their express written consent to be included.
  • Hate speech, harassment, or threats.
  • Accounts with patterns indicative of abusive behavior (e.g., spam content, impersonation).

We have specifically contracted with Checkstep (https://www.checkstep.com/) to provide a range of tailored AI moderation tools.

When a content creator uploads content, it is stored on our servers and then automatically moderated by Checkstep's market-leading AI tools.

All content is stored on our servers for no less than one month, so that even if content is flagged, the person who uploaded it has a reasonable time in which to appeal that decision before the content is deleted.

If the AI finds that the content is likely to contain material deemed inappropriate under our policy guidelines, the content is flagged and prevented from being shown to other users. The only person able to view the content within the platform will be the person who uploaded it, and even they will be shown an obfuscated (blurred) representation of it. This makes it easy for them to identify the content and appeal the rejection decision.

The person who uploaded the content will be notified of the rejection decision and can then choose to appeal it. If they do not appeal within one month of the rejection decision, the content will be removed from our servers.

If the person who uploaded the content appeals the rejection, then this appeal is escalated and will be reviewed by a member of the SUBBD team.

If the rejection is upheld, then the inappropriate content will be removed from our servers, and the person who uploaded the content will be notified.

If the rejection is overturned, then the content will be made available to be viewed normally via the SUBBD platform.
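The automated flow above (AI flag, one-month appeal window, human review of appeals) can be sketched as a simple state machine. This is a hypothetical illustration only: the names (`Status`, `Content`, `ai_moderate`, etc.) and the 30-day window constant are assumptions drawn from this policy, not SUBBD's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto
from typing import Optional

# Policy states "no less than one month"; 30 days is assumed here for illustration.
APPEAL_WINDOW = timedelta(days=30)

class Status(Enum):
    VISIBLE = auto()   # passed AI moderation, shown normally on the platform
    FLAGGED = auto()   # AI rejection; visible (blurred) only to the uploader
    REMOVED = auto()   # deleted after an upheld rejection or an expired appeal window

@dataclass
class Content:
    uploader: str
    status: Status = Status.VISIBLE
    flagged_at: Optional[datetime] = None

def ai_moderate(content: Content, inappropriate: bool, now: datetime) -> None:
    """Flag content the AI deems likely inappropriate; the uploader is notified."""
    if inappropriate:
        content.status = Status.FLAGGED
        content.flagged_at = now

def resolve_appeal(content: Content, rejection_upheld: bool) -> None:
    """A SUBBD team member reviews an appeal: upheld -> removed, overturned -> visible."""
    content.status = Status.REMOVED if rejection_upheld else Status.VISIBLE

def expire_unappealed(content: Content, now: datetime) -> None:
    """Remove flagged content whose appeal window has lapsed without an appeal."""
    if content.status is Status.FLAGGED and now - content.flagged_at > APPEAL_WINDOW:
        content.status = Status.REMOVED
```

In this sketch, flagged content is never shown to other users in any state other than `VISIBLE`, matching the policy's rule that a rejection hides content from everyone except the uploader until the appeal is resolved or the window lapses.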

Manual Detection

If the AI review finds that content is likely to contain multiple people, the content is flagged for manual review by a member of the SUBBD team.

A member of the team will then review the content and make sure that every person appearing in it has given their express written consent to appear.

If the rejection is upheld, the person who uploaded the content will be notified. If the rejection is overturned, the content will be made available to be viewed via the platform.

Reporting and Escalation

Any user of SUBBD is able to report any content as inappropriate, at any time.

If a report of inappropriate content is raised, the owner of that content will be notified, and the report will be passed to a member of the SUBBD team, who will manually review the content within 7 days of the report being raised.

If the content is reviewed and found to be in violation of SUBBD's guidelines, it will be immediately removed from the platform, and the originator of the report and the owner of the content will both be notified.

If the content is reviewed and found to not be in violation of SUBBD's guidelines, both the content owner and the reporter will be notified. All reports will be documented, to allow SUBBD to monitor who is repeatedly reporting content as inappropriate.
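The reporting flow above (manual review within 7 days, notification of both parties, and logging of every report so repeat reporters can be monitored) can be sketched as follows. The names (`Report`, `handle_report`) and the shape of the audit log are illustrative assumptions, not SUBBD's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Policy requires manual review within 7 days of the report being raised.
REVIEW_DEADLINE = timedelta(days=7)

@dataclass
class Report:
    reporter: str
    content_owner: str
    raised_at: datetime

def handle_report(report: Report, violates_guidelines: bool,
                  reviewed_at: datetime, audit_log: list) -> str:
    """Resolve a user report after manual review.

    Both the reporter and the content owner are notified of the outcome,
    and every report is logged so that repeated or abusive use of the
    reporting process can be monitored.
    """
    assert reviewed_at - report.raised_at <= REVIEW_DEADLINE, "review is overdue"
    audit_log.append((report.reporter, report.content_owner, reviewed_at))
    return "removed" if violates_guidelines else "retained"
```

Keeping the audit log keyed by reporter supports the policy's later enforcement clause: a user who repeatedly misuses reporting to target another user can be identified from the log.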

User Education

SUBBD provides ongoing education to users and content creators on content moderation, including:

  • Education on how to recognize and report harmful content.
  • Taking steps to ensure users and creators understand community guidelines.
  • Undertaking regular development efforts to maintain platform integrity and security.

Enforcement and Consequences

Failure by a user or creator to comply with our content policies and community guidelines may result in:

  • Content removal.
  • Account suspension or termination.
  • Legal action if deemed necessary.

We encourage all users to report any content that they feel might be inappropriate. However, frequent over-use of the platform's reporting features, or any indication that one user is repeatedly misusing the reporting process to target another, may result in account suspension or other corrective action.

Policy Review and Updates

This policy will be reviewed and updated regularly to reflect:

  • Changes in content regulations.
  • Emerging risks and threats, including the ability of AI to generate inappropriate and highly realistic content.
  • Technological advancements in content moderation.

Contact Information

For inquiries regarding this policy, please contact our moderation team at moderation@subbd.com.

By using SUBBD, users agree to comply with this Content Moderation Policy and any applicable regulations.

Version: 0.1

Policy created on 23rd January 2025