Content Moderation AI is a tool for detecting and filtering inappropriate content on your website or app. The new Moderation AI endpoint by WorqHat, powered by the AiCon V2 model, is designed to support developers in safeguarding their applications by offering fast, accurate content moderation.
With Moderation AI, developers can keep their platforms free from inappropriate content, such as sexual, violent, hateful, and self-harm-promoting material that violates WorqHat’s content policy. Leveraging LLM-based classifiers, the system identifies and filters out such content, providing a secure environment for users.
By incorporating AI into the moderation process, developers can augment human supervision and gain confidence in the content displayed on their platforms. This proactive approach reduces the risk of inadvertently showing inappropriate content and upholds the integrity of the product.
The versatility of Moderation AI allows it to be seamlessly integrated into various applications, including those that handle sensitive topics like education. WorqHat empowers developers to maintain a safe and inclusive digital space, ensuring a positive user experience for all.
WorqHat’s Moderation AI, powered by the AiCon V2 model, employs a deep learning algorithm to examine text inputs and classify them based on their content, specifically targeting sexual, hateful, violent, or self-harm-promoting material. These categories align with WorqHat’s content policy, which prohibits such content from being displayed on platforms that use the service.
When text is submitted to the Moderation API endpoint, the algorithm analyzes it and assigns a probability score to each of the four content categories. If the input is deemed inappropriate or harmful, the API returns an error, preventing the content from being displayed.
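The request flow described above can be sketched in Python. Note that the endpoint URL and the request field name (text_content) are assumptions for illustration only; consult WorqHat's API reference for the exact values. Only the Bearer authentication scheme and the four content categories come from this page.

```python
import json

# Assumed endpoint URL for illustration; verify against WorqHat's docs.
MODERATION_URL = "https://api.worqhat.com/api/ai/moderation"

def build_moderation_request(text: str, token: str):
    """Assemble the headers and JSON body for a moderation call."""
    headers = {
        "Authorization": f"Bearer {token}",   # Bearer auth, per the docs
        "Content-Type": "application/json",
    }
    # "text_content" is a hypothetical field name, not confirmed by the docs.
    body = json.dumps({"text_content": text})
    return headers, body

headers, body = build_moderation_request("user-submitted text", "sk-demo")
# The request itself would then be sent with any HTTP client, e.g.:
#   urllib.request.urlopen(
#       urllib.request.Request(MODERATION_URL, body.encode(), headers))
```

Because the API rejects policy-violating input with an error response, client code should treat a non-success status as "do not display this content" rather than as a transport failure.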
The AiCon V2 model was trained on an extensive dataset comprising examples of inappropriate content, enabling it to accurately detect problematic content across various contexts. This empowers WorqHat’s Moderation AI to be employed in sensitive applications, including education and social media platforms, where ensuring user safety and well-being is of paramount importance.
By offering access to this endpoint, WorqHat is actively supporting developers in safeguarding their applications against potential misuse, fostering responsible AI usage in sensitive settings.
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
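The header described above can be constructed as follows (a minimal sketch; the token value is an illustrative placeholder):

```python
# The <token> placeholder stands for your WorqHat auth token. The header
# value is the literal word "Bearer", a single space, then the token.
token = "YOUR_AUTH_TOKEN"  # illustrative placeholder, not a real token
auth_header = {"Authorization": f"Bearer {token}"}
```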
Text moderated successfully
The response is of type object.
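Client-side handling of the response object can be sketched as follows. The per-category score fields shown here are assumptions based on the description earlier on this page (the API assigns a probability score to each of the four categories); the real response shape may differ.

```python
def is_safe(status_code: int, payload: dict, threshold: float = 0.5) -> bool:
    """Return True only if the API accepted the text and no assumed
    per-category score reaches the threshold."""
    if status_code != 200:
        return False  # the API returned an error: content violated policy
    # "scores" is a hypothetical field name for the per-category probabilities.
    scores = payload.get("scores", {})
    return all(score < threshold for score in scores.values())

# Example with a hypothetical successful response object:
ok = is_safe(200, {"content": "Text moderated successfully",
                   "scores": {"sexual": 0.01, "hate": 0.02,
                              "violence": 0.0, "self_harm": 0.0}})
```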