14. USER AND CONTENT MODERATION

Addresses products that allow users to contribute content and/or interact with each other or the content.

1.1 What is moderation?

On what basis should moderation be performed? Moderating an interactive platform means screening content submitted by users and/or user interactions by applying a set of predefined rules to distinguish the acceptable from the unacceptable. In youth production, moderation covers the following:

  • “Inappropriate” speech, behaviour and content (adult users/predators, preventing real-world encounters, violence, drugs, alcohol, weapons, illegal activities, etc.)
  • Disrespectful behaviour toward other users (stalking, cyberbullying, inappropriate language, racist or sexist remarks, hate speech, etc.)
  • The disclosure of personally identifiable information
  • Disruptive behaviour (abuse of reporting mechanisms, spamming, impersonating platform staff, etc.)
  • Content that breaches intellectual property rights (e.g. posting a link to illegally download a movie or sharing a photo that does not belong to the user)

1.2 What are the possible approaches to moderation?

There are a number of ways to approach moderation:

  • Pre-moderation: when content submitted to a website is placed in a queue to be checked by a moderator before being made public.
  • Post-moderation: when submitted content is displayed immediately but replicated in a queue for a moderator to review and remove if inappropriate.
  • Automated moderation: deploying technical tools that process user-generated content (UGC) against a set of defined rules. Often used in chatrooms, these tools include the following (a minimal filtering sketch follows this list):
    • White lists: users compose messages only from a set of pre-approved words or phrases; they cannot enter free text.
    • Black lists: users type their own messages, which are filtered to remove or block any banned words before they are shown to other users.
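
As a rough illustration of the two approaches, the Python sketch below shows a tiny white list of pre-approved phrases and a filter that masks black-listed words before a message is displayed. The word lists, the masking strategy and the function names (is_whitelisted, filter_black_list) are assumptions made for the example, not a reference to any particular moderation product; real tools also have to cope with misspellings, leetspeak and context.

```python
import re

# Illustrative lists only; real lists are far larger and maintained continuously.
WHITE_LIST_PHRASES = {"Hello!", "Good game!", "Want to play?"}
BLACK_LIST_WORDS = {"badword", "anotherbadword"}

def is_whitelisted(message: str) -> bool:
    """White-list approach: only pre-approved phrases may be sent at all."""
    return message in WHITE_LIST_PHRASES

def filter_black_list(message: str) -> str:
    """Black-list approach: mask banned words before other users see the message."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLACK_LIST_WORDS else word
    return re.sub(r"[A-Za-z]+", mask, message)

print(is_whitelisted("Good game!"))        # True
print(filter_black_list("You badword!"))   # "You *******!"
```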

1.3 How should users who violate the rules of good conduct be handled?

Consequences imposed for breaches of conduct should be graduated in line with the gravity of the violation. For example, the user receives a formal warning after the first offence, their account is suspended for 24 hours after the second, and so on, up to and including closure of the account. A graduated approach educates users who act in good faith but who, through inexperience, bypass or transgress certain rules of conduct. It also gives moderators a way to detect suspicious behaviour that could help identify a predator (Backgrounder 21, 1.3).
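
To make the escalation ladder concrete, here is a minimal sketch of how a graduated sanction might be looked up from a user's offence count. The specific thresholds, the Sanction names and the next_sanction helper are hypothetical choices for illustration; your own ladder should reflect your platform's rules and audience.

```python
from enum import Enum

class Sanction(Enum):
    WARNING = "formal warning"
    SUSPENSION_24H = "24-hour suspension"
    SUSPENSION_7D = "7-day suspension"
    ACCOUNT_CLOSED = "account closed"

# Hypothetical escalation ladder: offence count -> sanction.
LADDER = [
    (1, Sanction.WARNING),
    (2, Sanction.SUSPENSION_24H),
    (3, Sanction.SUSPENSION_7D),
]

def next_sanction(offence_count: int) -> Sanction:
    """Return the sanction for a user's nth confirmed breach of conduct."""
    for threshold, sanction in LADDER:
        if offence_count <= threshold:
            return sanction
    return Sanction.ACCOUNT_CLOSED  # repeated offences end in account closure

print(next_sanction(1).value)  # "formal warning"
print(next_sanction(4).value)  # "account closed"
```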

1.4 Which moderation approach is best?

There is no one universal approach for all platforms. Each approach offers a different level of control over published content and user interactions. Depending on your needs, you can opt for a combined approach. When evaluating your requirements, consider the following:

  • The level of risk posed by UGC
  • The levels of maturity and autonomy of your users
  • The budget you can allocate to moderation

For more information: How to De-Risk the Creation and Moderation of User-Generated Content

1.5 Flagging inappropriate content/behaviour

Tools that let users anonymously report inappropriate content and/or behaviour are essential to complying with certain intellectual property laws, including copyright law. In general, operators will be shielded from liability if they act promptly once notified of the presence of infringing content on their platforms.

Inappropriate content or behaviour can be flagged in a number of ways; the important thing is to have a mechanism that’s easy to access and use. One way is to have a clickable “Report” icon everywhere content can be shared or users can interact. You must also establish effective procedures for responding quickly to such alerts.
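
As one illustration of what such a mechanism might capture, here is a minimal sketch of a report record and an intake function that queues reports for moderators. The field names, the reason list and the in-memory queue are assumptions made for the example (a real platform would use a database or ticketing system), and no reporter identity is stored, in keeping with anonymous reporting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical reasons offered as checkboxes next to the "Report" icon.
REPORT_REASONS = ("bullying", "personal information", "inappropriate content", "spam", "other")

@dataclass
class Report:
    content_id: str
    reason: str
    comment: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

moderation_queue: list[Report] = []  # stand-in for a real database or ticket system

def submit_report(content_id: str, reason: str, comment: str = "") -> Report:
    """Accept an anonymous report and queue it for prompt moderator review."""
    if reason not in REPORT_REASONS:
        raise ValueError(f"unknown reason: {reason}")
    report = Report(content_id=content_id, reason=reason, comment=comment)
    moderation_queue.append(report)
    return report

submit_report("post-123", "personal information", "User shared a phone number.")
```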

While moderation is not required by law, it is strongly recommended in youth production for safety reasons. For more information, refer to the regulatory framework governing participatory media.

UNITED STATES

Children’s Online Privacy Protection Act (COPPA)

Federal law that applies to products that collect personal information from U.S. children under 13 years of age. COPPA considers photos, videos and audio files containing a child’s image or voice to be personal information. If your platform enables content of this kind to be shared, you must:

  1. Screen and remove any content (photos, videos or audio tracks) where children can be seen and/or heard before this content goes online; OR
  2. Obtain verifiable parental consent before allowing children to submit photos, videos and/or audio recordings of themselves.

Under COPPA, certain user interactions — for example, contributing to a forum — do not require verifiable parental consent provided the operator takes reasonable measures (e.g. pre-moderation) to ensure that no personal information is inadvertently disclosed. Staff who have access to personal information (e.g. moderators) must be sufficiently trained to handle sensitive information of this kind.

Each app store has a rating system based on the presence or absence of various criteria (violence, coarse language, nudity, pornography, etc.). These ratings apply to all app content, including user-generated content. Apps that allow UGC should employ a moderation method to filter content, provide a mechanism for reporting inappropriate content, and offer a means of blocking abusive users.

While no specific self-regulation code applies to content moderation, moderation is strongly recommended for participatory media.

  • Consider your production budget and determine what portion you wish to allocate to moderation, based on your platform’s features, keeping in mind that content filtering and moderation can entail significant operating costs.
  • Moderation should be based on defined rules. These rules of conduct, expressed in accessible language, should be laid out in your digital code of conduct and/or terms of use.
  • Include a clause granting you the right to remove any content that violates your platform’s policies.
  • Post security reminders in “critical” areas of your platform. For example, in the chatroom, remind children not to disclose personal information like phone numbers.
  • If you opt for pre-moderation, erase all personal data before content goes live: blur faces in photographs, strip metadata from documents and so on (a metadata-stripping sketch follows this list).
  • Moderators work closely with children. Be sure to select them carefully and train them to manage problem situations as well as work with personal information.
  • Make sure your reporting mechanism is easy to find and use. Design it so that users can include the reason for the complaint (e.g. “Why are you reporting this content?” with checkboxes they can tick and/or a text box), since this will speed up processing.
  • To prevent the exchange of numerical data (age, address, etc.), the following are recommended (a number-filtering sketch follows this list):
    • Disable the numeric keypad, making it impossible for the user to enter digits (1, 2, etc.).
    • Use a black-list tool to block numbers written out in words (one, two, etc.).
  • For products where extra precautions are advisable (e.g. for very young audiences), the use of a white list is recommended.
  • Periodically review and update your moderation methods (e.g. black list words).
  • Moderating an interactive platform may raise legal issues: for your own protection, seek professional legal advice.
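
For the pre-moderation recommendation above on erasing personal data, the sketch below strips embedded metadata (such as EXIF data, which can contain GPS coordinates) from an uploaded photo using the Pillow library. The file paths are placeholders, and blurring faces would require an additional image-processing step not shown here.

```python
from PIL import Image  # Pillow: pip install Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image from its raw pixel data only, dropping EXIF and other metadata."""
    with Image.open(src_path) as img:
        rgb = img.convert("RGB")  # normalise the mode so the copy saves cleanly as JPEG
        clean = Image.new(rgb.mode, rgb.size)
        clean.putdata(list(rgb.getdata()))
        clean.save(dst_path)

# Hypothetical usage during pre-moderation of an uploaded photo.
strip_metadata("uploads/submission.jpg", "approved/submission.jpg")
```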
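
For the recommendation above on blocking numerical data, here is a minimal sketch that flags chat messages containing digits or spelled-out numbers. The word list covers only the English words zero to nine and is an assumption for the example; it would need to be extended for your audience, your languages and common workarounds.

```python
import re

NUMBER_WORDS = {"zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"}

def contains_number(text: str) -> bool:
    """Return True if the message contains a digit or a spelled-out number word."""
    if re.search(r"\d", text):
        return True
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in NUMBER_WORDS for word in words)

print(contains_number("I am nine years old"))   # True
print(contains_number("Call me at 555 1234"))   # True
print(contains_number("Want to play later?"))   # False
```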
