
Introducing Altitude: An Open-Source Tool to Help Small and Medium-Sized Online Platforms Quickly Identify and Remove Terrorist and Violent Extremist Content


Core Concepts
Altitude is an open-source tool from Jigsaw designed to help small and medium-sized online platforms more effectively identify and remove terrorist and violent extremist content.
Abstract
The article describes Altitude, an open-source tool created by Jigsaw to help small and medium-sized online platforms moderate terrorist and violent extremist content (TVEC). Key insights:

- Smaller platforms are increasingly targeted for sharing TVEC but often lack the resources and tooling to remove it within the timeframes required by new regulations.
- Jigsaw interviewed 11 platforms of varying sizes and found that preparedness to manage TVEC did not track platform size or user base; even relatively mature platforms sometimes lacked the signals needed to proactively identify harmful content.
- Altitude supports under-resourced platforms with features such as content matching against databases of known TVEC, bulk content actions, and image blurring. It builds on existing technology, including Meta's hashing tools and Google's Prisma design system.
- Jigsaw is partnering with Tech Against Terrorism to maintain and expand Altitude, which will offer free, bespoke onboarding support to interested platforms.
- Future development plans include additional specialized databases and native translation capabilities, alongside continued collaboration between platforms, civil society, and regulators.
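The hash-matching feature mentioned above can be sketched as a simple lookup against a shared database. This is an illustrative simplification: Altitude's actual matching builds on Meta's hashing technology, which uses perceptual hashes (such as the open-source PDQ algorithm) that tolerate small visual changes, whereas the SHA-256 digest below only matches byte-identical content. All names here are hypothetical.

```python
import hashlib

# Hypothetical sketch of hash-based content matching. Production systems use
# perceptual hashes so that visually similar images still match; an exact
# cryptographic digest is used here only to keep the example self-contained.

def hash_image(image_bytes: bytes) -> str:
    """Return a hex digest identifying the image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_tvec(image_bytes: bytes, known_hashes: set) -> bool:
    """True if the content's hash appears in the shared database."""
    return hash_image(image_bytes) in known_hashes

# Seed the (hypothetical) database with one known item, then check uploads.
known_db = {hash_image(b"previously-identified-content")}
```

In practice the database would be populated from shared hash lists, such as those maintained with partners like Tech Against Terrorism, rather than built by each platform alone.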
Stats
"small platforms are increasingly targeted as a means of sharing this content but are often not included in current crisis response mechanisms, allowing the content to remain online for longer."

"the EU's Digital Services Act and the EU Regulation on Terrorist Content Online, has emerged that not only proscribes this content but specifies timeframes in which it must be removed — sometimes granting platforms as little as an hour."
Quotes
"Even relatively mature services still sometimes lacked signals to proactively identify harms on their platform."

"We also saw that, while they may eventually prefer to integrate all tools into an in-house platform, a separate interface for dedicated harms could provide a way to get started in an under-resourced environment."

Deeper Inquiries

How can Altitude's capabilities be expanded to support larger platforms and enterprise-level content moderation needs?

Several strategies could extend Altitude to larger platforms and enterprise-scale moderation. Incorporating more advanced machine-learning models would strengthen image recognition and matching, allowing faster and more accurate identification of terrorist and violent extremist content. Integrating with the content moderation systems larger platforms already operate would streamline workflows instead of adding a parallel interface. Customizable features and APIs would let platforms tailor the tool to their own scale and requirements. Finally, continuously gathering feedback from larger platforms and iterating on features would keep Altitude aligned with evolving enterprise needs.
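The integration point described above can be pictured as a thin adapter between a standalone matcher and a platform's in-house workflows: match results flow in, and the platform's own removal and review mechanisms are invoked. The sketch below is purely illustrative; none of these names come from Altitude's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

# Hypothetical adapter layer: route matcher output into a platform's
# existing moderation pipeline. MatchResult, triage, and the database
# labels are illustrative assumptions, not part of any real interface.

@dataclass
class MatchResult:
    content_id: str
    matched: bool
    source_db: Optional[str] = None

def triage(results: Iterable[MatchResult],
           auto_remove: Callable[[str], None],
           enqueue_review: Callable[[str], None]) -> dict:
    """Route matches into the platform's own workflows."""
    counts = {"removed": 0, "queued": 0, "cleared": 0}
    for r in results:
        if r.matched and r.source_db == "verified":
            auto_remove(r.content_id)      # high-confidence: act in bulk
            counts["removed"] += 1
        elif r.matched:
            enqueue_review(r.content_id)   # lower confidence: human review
            counts["queued"] += 1
        else:
            counts["cleared"] += 1
    return counts
```

Keeping the adapter this thin is what would let a platform swap in its own removal logic, appeal handling, or logging without touching the matcher itself.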

What potential challenges or unintended consequences could arise from the widespread adoption of tools like Altitude, and how can they be mitigated?

Widespread adoption of tools like Altitude carries risks that call for deliberate mitigation. The first is false positives: legitimate content mistakenly flagged as extremist, leading to over-removal and user dissatisfaction. Robust content verification and human oversight of automated matches can minimize this risk. A second is adversarial evasion: bad actors can manipulate content (for example by cropping or re-encoding images) to slip past detection, so algorithms and detection methods must be updated continuously to keep pace with evolving tactics. Finally, transparency and accountability in the moderation process help build trust with users and stakeholders and address concerns about censorship and bias.
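The false-positive trade-off can be made concrete with a similarity threshold: perceptual hashes match on distance rather than equality, so where the cutoff sits determines how much legitimate content gets swept up. A conservative policy sends near-threshold matches to human review instead of removing them automatically. This is a hedged sketch with illustrative thresholds, not values from any real deployment.

```python
# Illustrative decision policy over perceptual-hash distances. The hashes
# are modeled as integers; the thresholds (8 and 16 bits) are assumptions
# chosen only to demonstrate the three-way routing.

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two fixed-length hashes."""
    return bin(h1 ^ h2).count("1")

def decide(candidate: int, known: int,
           remove_below: int = 8, review_below: int = 16) -> str:
    """Return 'remove', 'review', or 'allow' based on hash distance."""
    d = hamming_distance(candidate, known)
    if d < remove_below:
        return "remove"   # near-exact match to known TVEC
    if d < review_below:
        return "review"   # ambiguous: route to a human moderator
    return "allow"
```

Tightening `remove_below` reduces wrongful removals at the cost of more items landing in the review queue, which is exactly the resource trade-off small platforms face.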

How can the collaboration between platforms, civil society, and regulators be further strengthened to ensure the effective and responsible removal of terrorist and violent extremist content online?

Stronger collaboration starts with clear communication channels and regular dialogue among all stakeholders, keeping information sharing, coordination, and goals aligned. Platforms can work closely with civil society organizations to understand community perspectives and draw on their expertise in identifying harmful content. Regulators can provide guidance and oversight to ensure compliance with legal frameworks and ethical standards while maintaining an environment conducive to cooperation. Joint initiatives and task forces that bring diverse stakeholders together promote collective action and shared responsibility in combating online extremism. A culture of collaboration, transparency, and mutual trust allows platforms, civil society, and regulators to address terrorist and violent extremist content online effectively.