Overview
The Content Moderation node automatically detects and flags inappropriate or unsafe content using AI. It can analyze both text and image inputs to determine whether they contain offensive, harmful, or restricted material before allowing the workflow to proceed.
Description
This node uses an AI moderation model to evaluate content for categories such as sexual content, harassment, hate speech, violence, and more. You can use it to ensure that user-generated content, uploaded files, or messages meet your platform’s safety and compliance requirements. It supports both text and image moderation, making it suitable for chat systems, social media workflows, and content upload platforms.
Input Parameters
The Content Moderation node accepts flat key-value pairs that specify the content to analyze and the type of moderation to perform.
- attachments: Comma-separated list of file IDs or variable references to the content that needs moderation. This is used primarily for image or multimedia moderation. Example: a file ID such as file_abc123, or a variable reference such as {{fileUpload.fileId}} (illustrative values; use the identifiers from your own workflow).
- moderationType: Defines the type of moderation to perform. Supported values: "text-moderation" for analyzing written content such as messages or posts, and "image-moderation" for analyzing uploaded or generated images.
- moderationText: The text string to be analyzed for moderation. Use this parameter when reviewing written or user-generated text. Example: a user comment such as "Check out this amazing offer!" (illustrative).
Output Parameters
After execution, the Content Moderation node returns the AI’s analysis of the submitted content along with moderation details and confidence scores.
- flagged: Indicates whether the content was flagged for any policy violations. Returns true if one or more moderation categories were triggered.
- flaggedCategories: A comma-separated list of the categories that were flagged during moderation. Example: "violence,hate,harassment"
- processingTime: The timestamp at which the AI completed its analysis of the content, returned in ISO 8601 format. Example: "2025-10-27T10:45:12Z"
- processingId: A unique identifier assigned to the moderation request. Useful for tracking and debugging purposes.
- categories.sexual: Confidence score (ranging from 0 to 1) representing the likelihood of sexual or adult content being present.
- categories.harassment: Confidence score (0–1) indicating potential harassment or bullying language.
- categories.hate: Confidence score (0–1) for hate speech or discriminatory expressions.
- categories.illicit: Confidence score (0–1) showing the presence of illegal, restricted, or drug-related content.
- categories.self-harm: Confidence score (0–1) for mentions of self-harm, suicide, or unsafe behavior.
- categories.violence: Confidence score (0–1) measuring the presence of violent or graphic content.
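A flagged result might look like the following sketch. The field names match those documented above; the values are illustrative, and the flat dotted-key layout is an assumption based on the field names listed here.

```json
{
  "flagged": true,
  "flaggedCategories": "violence,hate",
  "processingTime": "2025-10-27T10:45:12Z",
  "processingId": "mod_req_8f2a91",
  "categories.sexual": 0.01,
  "categories.harassment": 0.14,
  "categories.hate": 0.78,
  "categories.illicit": 0.02,
  "categories.self-harm": 0.00,
  "categories.violence": 0.91
}
```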
Instructions: You can access output results using variable references in downstream nodes.
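For example, assuming a hypothetical {{node.field}} variable-reference syntax (adjust to your platform's convention):

```
{{contentModeration.flagged}}
{{contentModeration.flaggedCategories}}
```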
Output Type
The output type must always be exactly:
Example Usage
Example 1: Text Moderation
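A minimal sketch of the node's input for this case, using the parameters documented above (the message text is an illustrative placeholder):

```json
{
  "moderationType": "text-moderation",
  "moderationText": "You're worthless and everyone here hates you."
}
```

An input like this would typically come back with flagged set to true and a high categories.harassment score.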
Example 2: Image Moderation
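A corresponding sketch for image moderation, assuming the file comes from a prior upload step ({{fileUpload.fileId}} is a hypothetical variable reference):

```json
{
  "moderationType": "image-moderation",
  "attachments": "{{fileUpload.fileId}}"
}
```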
How to Use in a No-Code Workflow
- Add the Content Moderation Node: Drag and drop the node into your workflow canvas.
- Choose the Input Type: Use moderationText for moderating text messages or comments, or attachments for moderating images or file uploads.
- Set the Moderation Type: Choose "text-moderation" or "image-moderation" as needed. If left empty, the node will handle both automatically.
- Connect Inputs: Link the output from a previous node (like file upload or text generation) to the attachments or moderationText fields.
- Access Outputs: Use variable references to pass results to other nodes, such as conditional checks or notifications (see the variable-reference example under Output Parameters).
- Set Conditions (Optional): You can create conditional branches in your workflow to stop or flag content automatically if flagged = true, as shown in the sketch after this list.
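A minimal sketch of such a branch, assuming a generic condition node and hypothetical variable-reference syntax:

```json
{
  "condition": "{{contentModeration.flagged}} == true",
  "onTrue": "Notification Node",
  "onFalse": "Continue Workflow"
}
```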
Best Practices
- Always verify that uploaded files are properly connected before moderation.
- For text moderation, keep inputs under 5,000 characters for optimal performance.
- Combine both moderationText and attachments to analyze mixed media submissions.
- Review flagged outputs manually for high-risk content before taking automated action.
- Store processingId values for tracking or audit purposes.
Example Workflow Integration
Use Case: A user uploads an image with a comment.
- The File Upload Node provides an image file reference.
- The Content Moderation Node checks both the uploaded image and the user’s text comment.
- If flagged = true, the workflow sends a warning message through a Notification Node.
- If flagged = false, the workflow continues to publish the content.
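For this use case, the moderation node's inputs might be wired as in the following sketch (the node names and variable syntax are assumptions; leaving moderationType empty lets the node handle both content types, per the steps above):

```json
{
  "moderationType": "",
  "moderationText": "{{userComment.text}}",
  "attachments": "{{fileUpload.fileId}}"
}
```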
Common Errors
Below are common issues that may occur while using the Content Moderation node, along with their causes and recommended solutions.
- “Missing attachments”
Cause: No file or variable reference was provided for moderation.
Solution: Add a valid image or file reference in the attachments field. Example: {{fileUpload.fileId}} (illustrative variable syntax).
- “Missing moderationText”
Cause: The text moderation input field was left empty.
Solution: Provide a valid text string or connect text from a previous node for analysis.
- “Invalid moderationType”
Cause: An incorrect or unsupported moderation type was entered.
Solution: Use only the supported values: "text-moderation" or "image-moderation".
- “Empty output”
Cause: The AI model returned no response or incomplete data.
Solution: Retry the workflow with a valid input or check if the AI moderation service is available.
- “File not accessible”
Cause: The referenced image file could not be loaded or retrieved.
Solution: Verify that the file exists, has the correct permissions, and was properly generated or uploaded by a previous node.

