A Note on the Urgency of Overhauling Content Moderation Practices
The past two decades have seen significant growth in forums, social media, and e-commerce marketplaces, building on the early bulletin boards and chat rooms. As these platforms became the primary venues for online interaction and commerce, rising abuse on them put the darker side of life online on display.
The spread of objectionable content across online platforms calls for a clearer understanding of the expanding role of digital content moderators, and of how far current content moderation practices must change to address the growing problem.
The following note argues for the urgency of a complete overhaul of traditional content moderation practices in light of increasing abuse on social media and messaging platforms, platforms that differ widely in type, size, genre, and target audience.
Measures to Map & Mitigate Online Abuse Need an Overhaul
Current overviews of social messaging and media platforms give us little insight into what platforms actually do in response to users' behavior online, that is, how they map, measure, and mitigate objectionable content. This change needs to rise on the policy agenda now, as concerns grow about hate speech, online abuse, and other objectionable content frequently published by users.
Far more than the traditional approach is needed when it comes to:
- Measuring the length and breadth of objectionable content in the digital space;
- Preparing the policy framework to prevent, or at least minimize, the incidence of online abuse;
- Planning and implementing modern content moderation practices for a cleaner online ecosystem.
Discussion of online social sharing platforms has traditionally centered on their influence over our social, economic, and political lives. The focus now needs to shift to a complete overhaul of how these platforms create policies and how users interact with digital norms, particularly where users' content, and especially objectionable content, is concerned.
Social Media Moderation: It's Not Just About Facebook
Despite the differences among social media platforms in type, size, genre, and target audience, most of our content moderation strategies have been built around Facebook. While Facebook, its software, its methods, and its problems deserve careful scrutiny, too often they are treated as proxies for content moderation in general, a framing that is far too narrow.
The fact is that the public and scholarly discussions we are having today are extremely narrow, which works against a more expansive approach to content moderation. In truth, social media moderation services need to be understood and implemented in relation to the entire online infrastructure that distributes content.
Automated Content Moderation vs Manual Content Moderation
The majority of machine learning (ML) and automated content moderation innovations focus on identification and recognition methods. It remains to be seen whether AI-based automated systems can spot pornography, harassment, hate speech, and similar emotionally loaded material with sufficient accuracy. These expectations not only pose difficulties but also overshadow other possible applications of ML and artificial intelligence (AI) techniques in supporting other forms of content moderation.
Research, in a broader sense, should prioritize tools that help human moderators better grasp the contours of content moderation. Aiming to replace human intervention entirely with automated moderation tools is misguided, since AI is still not capable of reliably picking up the context and emotional nuance in users' content.
AI-enabled mechanisms can check users' content for abusive elements in real time and thereby speed up moderation, but manual intervention is still needed for precision and accuracy. In practice, a hybrid moderation approach that combines digital content moderators with AI-assisted moderation systems is what carries the day.
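As a minimal sketch of this hybrid approach (the thresholds and the scoring model here are hypothetical, not any particular vendor's API), an automated classifier can handle the clear-cut cases and route the ambiguous middle band to a human moderator:

```python
# Hypothetical hybrid moderation router; scores and thresholds are illustrative.
# Content the classifier is confident about is handled automatically;
# everything in the uncertain middle band goes to a human review queue.

REMOVE_THRESHOLD = 0.9   # near-certain abuse: remove automatically
APPROVE_THRESHOLD = 0.2  # near-certain benign: approve automatically

def route(item_id: str, abuse_score: float) -> str:
    """Decide what happens to content given a model's abuse score in [0, 1]."""
    if abuse_score >= REMOVE_THRESHOLD:
        return "auto_remove"
    if abuse_score <= APPROVE_THRESHOLD:
        return "auto_approve"
    return "human_review"  # ambiguous: AI provides speed, humans provide accuracy

queue = [("post-1", 0.97), ("post-2", 0.05), ("post-3", 0.55)]
decisions = {item: route(item, score) for item, score in queue}
print(decisions)  # post-1 removed, post-2 approved, post-3 escalated to a human
```

The design choice is the middle band: the wider it is, the more the system leans on human judgment; the narrower, the more it trusts the model.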
Cloud Content Moderation
This is the age of artificial intelligence and machine learning, and equally the age of cloud computing. Cloud servers let humanity store and run virtually everything online while reducing the need for physical resources. Users will increasingly engage on the cloud, and so will their content; and wherever users' content lives, some part of it will need moderation for abusive, offensive, or otherwise objectionable material.
Though plenty of companies offer a range of cloud content moderation services, their approaches still need fixes. Amazon Rekognition, for instance, is an AI-based cloud content moderation tool capable of detecting inappropriate or offensive content. However, it recognizes only a limited range of file formats and struggles to pick up the real emotional context of users' content.
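To make this concrete, Rekognition's image moderation is exposed through the boto3 `detect_moderation_labels` call. The helper below filters the `ModerationLabels` list that Rekognition returns; the sample response is a mock in the documented response shape, since a live call needs AWS credentials and a real image (the bucket and file names are hypothetical):

```python
# Sketch of filtering Amazon Rekognition image-moderation output.
# A real call would look roughly like:
#
#   import boto3
#   client = boto3.client("rekognition")
#   response = client.detect_moderation_labels(
#       Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#       MinConfidence=50,
#   )

def flagged_labels(response: dict, min_confidence: float = 80.0) -> list:
    """Return moderation label names at or above the confidence threshold."""
    return [
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

# Mock response in Rekognition's documented shape; the values are illustrative.
sample = {
    "ModerationLabels": [
        {"Name": "Explicit Nudity", "Confidence": 97.2, "ParentName": ""},
        {"Name": "Suggestive", "Confidence": 61.5, "ParentName": ""},
    ]
}
print(flagged_labels(sample))  # ['Explicit Nudity'] — only the confident label survives
```

Note that the tool returns labels and confidences, not meaning: deciding whether a borderline "Suggestive" hit is actually objectionable is exactly where the human moderator comes in.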
Amazon's AWS text moderation tooling holds a reputation in the market, with implementations across digital verticals ranging from social and broadcast media to e-commerce and digital advertising campaigns. While the tooling helps keep moderation compliant with local and global regulations, none of this diminishes the role of digital content moderators. No matter how advanced AI technology becomes, manual intervention will always be needed to determine the accuracy of content moderation.
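To see why automated text moderation alone falls short on context (this toy keyword filter is purely illustrative, not the AWS tooling), consider how a naive filter treats figurative language:

```python
import re

# Toy keyword-based text filter, illustrative only: it flags any message
# containing a blocklisted word, with no sense of context or intent.
BLOCKLIST = {"kill", "hate"}

def naive_flag(text: str) -> bool:
    """Flag text if it contains any blocklisted word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

print(naive_flag("I will kill you"))        # True: a real threat, correctly flagged
print(naive_flag("I'd kill for a coffee"))  # True: a harmless idiom, false positive
print(naive_flag("have a nice day"))        # False
```

The second message is exactly the kind of call that requires a human moderator: the words are identical, but the emotional context is not.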
Having considered these different means and measures of moderation, we arrive at a simple conclusion: no technology is as clear, concise, and curative as the human brain. The manual moderation approach is, and will remain, relevant in the times ahead, irrespective of advances in AI technology.
AI can certainly promise a pace of moderation that humans can hardly compete with, but what about the accuracy of identifying the emotional context hidden behind users' content? This is where the human brain, and the digital content moderator, comes in.